
Whisper Fine-Tuning Evaluation: Local vs Commercial ASR

[WER comparison bar chart - see visualizations/wer_comparison_chart.png]

A "back of the envelope" evaluation comparing fine-tuned Whisper models running locally against commercial ASR APIs via Eden AI.

The Question

Can fine-tuning Whisper achieve measurable WER reductions, even when comparing local inference against cloud-based commercial models?

TL;DR

Yes. Fine-tuned Whisper Large Turbo running locally achieved 5.84% WER, beating the best commercial API tested (Assembly AI at 7.30%) by a relative 20%.

The Significant Finding

While this is a single-sample experiment, the key takeaway is compelling: with fine-tuning, locally run models can match or beat cloud-hosted commercial models.

Even matching cloud performance is remarkable when you consider:

  • Local models have no per-request API costs
  • No data privacy concerns from sending audio to external services
  • Full control over the inference pipeline
  • Potential for continued improvement through additional fine-tuning

The fact that a fine-tuned local model can beat commercial APIs makes the case even stronger.

Test Setup

  • Fine-tuning date: November 17, 2025
  • Fine-tuning platform: Modal (A100 GPU)
  • Training dataset: 1 hour of audio, chunked and timestamped using WhisperX
  • Evaluation: Single audio sample (137 words) on local hardware
  • Test audio: eval/test-audio.wav with ground truth in eval/truth.txt
  • Commercial APIs tested: Assembly AI, Gladia, OpenAI Whisper (accessed via Eden AI)

Note: This is a one-sample experiment designed to answer a specific question about fine-tuning effectiveness, not an exhaustive evaluation.
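
For reference, here is a minimal sketch of how a fine-tuned checkpoint can be run locally with the Hugging Face transformers ASR pipeline. The checkpoint path is a hypothetical placeholder; the repo's actual inference logic lives in scripts/evaluate_models.py.

# Transcribe the test clip with a local fine-tuned checkpoint (sketch)
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="/path/to/whisper-large-turbo-finetuned",  # hypothetical local checkpoint path
)
result = asr("eval/test-audio.wav")
print(result["text"])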

Results: Local Fine-Tunes vs Commercial APIs

| Rank | Model                    | Type            | WER    | Performance      |
|------|--------------------------|-----------------|--------|------------------|
| 1    | Whisper Large Turbo (FT) | Local Fine-tune | 5.84%  | Production Ready |
| 2    | Assembly AI              | Commercial API  | 7.30%  | Very Good        |
| 3    | Gladia                   | Commercial API  | 8.03%  | Very Good        |
| 4    | Whisper Small (FT)       | Local Fine-tune | 8.76%  | Very Good        |
| 5    | Whisper API (OpenAI)     | Commercial API  | 8.76%  | Very Good        |
| 6    | Whisper Base (FT)        | Local Fine-tune | 14.60% | Good             |
| 7    | Whisper Tiny (FT)        | Local Fine-tune | 14.60% | Good             |

Key Findings

  • Fine-tuned Whisper Large Turbo beats all commercial APIs (5.84% vs 7.30% best commercial)
  • 20% relative WER reduction over the best commercial option (Assembly AI)
  • 33% relative WER reduction over the Whisper API (OpenAI); see the arithmetic sketch after this list
  • Fine-tuned Whisper Small matches the Whisper API (OpenAI) exactly at 8.76% WER - an apples-to-apples comparison
  • Local inference advantages: No per-request costs, privacy-preserving, full control over pipeline
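
The relative-improvement figures above follow from simple arithmetic on the reported WERs:

# Relative WER reduction of fine-tuned Large Turbo vs. the commercial baselines
best_local, assembly, whisper_api = 0.0584, 0.0730, 0.0876
print(f"vs Assembly AI: {(assembly - best_local) / assembly:.0%}")        # 20%
print(f"vs Whisper API: {(whisper_api - best_local) / whisper_api:.0%}")  # 33%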

Apples-to-Apples: Whisper Fine-Tunes vs Whisper API

To isolate the impact of fine-tuning, here's a direct comparison of fine-tuned local Whisper models against OpenAI's Whisper API:

[Whisper fine-tunes vs Whisper API chart - see visualizations/whisper_apples_to_apples_comparison.png]

This comparison shows that:

  • Fine-tuned Whisper Large Turbo beats Whisper API by 33%
  • Fine-tuned Whisper Small matches Whisper API performance exactly
  • Even smaller models (Base, Tiny) achieve respectable results with fine-tuning

Repository Structure

Local-STT-Fine-Tune-Tests/
├── README.md                    # This file
├── requirements.txt             # Python dependencies
│
├── scripts/                     # Evaluation scripts
│   ├── evaluate_models.py       # Main evaluation script
│   ├── create_wer_chart.py      # WER visualization generator
│   ├── create_whisper_comparison_chart.py  # Whisper apples-to-apples chart
│   └── run_evaluation.sh        # Convenience runner
│
├── docs/                        # Documentation
│   ├── EVALUATION_SUMMARY.md    # Comprehensive analysis & recommendations
│   └── paths.md                 # Model path reference
│
├── eval/                        # Test data
│   ├── test-audio.wav           # Test audio file (137 words)
│   └── truth.txt                # Ground truth transcription
│
├── visualizations/              # Charts and graphs
│   ├── wer_comparison_chart.png # WER comparison bar chart (all models)
│   ├── wer_comparison_chart.svg # Vector version
│   ├── whisper_apples_to_apples_comparison.png # Whisper FT vs API
│   └── whisper_apples_to_apples_comparison.svg # Vector version
│
└── results/                     # Evaluation outputs
    ├── latest/                  # Most recent results
    │   ├── report.txt           # Human-readable report
    │   ├── results.json         # Machine-readable data
    │   └── model_comparison_chart.txt
    ├── transcriptions/          # Individual model outputs
    ├── archive/                 # Historical runs
    └── evaluation_*.txt         # Timestamped reports

Quick Start

Run the evaluation with a single command:

./scripts/run_evaluation.sh

Or manually:

# Create venv if needed
uv venv
source .venv/bin/activate

# Install dependencies
uv pip install -r requirements.txt

# Run evaluation
python scripts/evaluate_models.py

What Gets Evaluated

For each model, the script calculates the following metrics (see the jiwer sketch after this list):

  • WER (Word Error Rate) - Primary metric
  • MER (Match Error Rate)
  • WIL (Word Information Lost)
  • WIP (Word Information Preserved)
  • Error breakdown:
    • Hits (correct words)
    • Substitutions
    • Deletions
    • Insertions
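
A minimal sketch of how these metrics can be computed with jiwer (assuming jiwer ≥ 3.0; the hypothesis string is a placeholder, and the repo's actual implementation is scripts/evaluate_models.py):

# Compute WER/MER/WIL/WIP and the error breakdown with jiwer (sketch)
import jiwer

reference = open("eval/truth.txt").read()
hypothesis = "model transcript goes here"  # placeholder transcript

out = jiwer.process_words(reference, hypothesis)
print(out.wer, out.mer, out.wil, out.wip)
print(out.hits, out.substitutions, out.deletions, out.insertions)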

Output Files

Results are saved to the results/ directory (see the naming sketch after this list):

  • evaluation_report_YYYYMMDD_HHMMSS.txt - Human-readable report
  • evaluation_results_YYYYMMDD_HHMMSS.json - Machine-readable results
  • transcription_<model_name>.txt - Individual transcriptions
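
A short sketch of how such timestamped names can be generated; this is a guess at the convention, not necessarily the script's exact code:

# Build output filenames matching the YYYYMMDD_HHMMSS pattern (sketch)
from datetime import datetime

stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
report_path = f"results/evaluation_report_{stamp}.txt"
json_path = f"results/evaluation_results_{stamp}.json"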

Understanding WER

WER is calculated as:

WER = (Substitutions + Deletions + Insertions) / Total Words in Reference
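
As a worked illustration on this repo's 137-word reference (the split between error types is hypothetical; only the total of 8 errors is consistent with the reported 5.84%):

# 8 total errors on a 137-word reference reproduces the headline WER
substitutions, deletions, insertions = 5, 2, 1  # hypothetical split, 8 errors total
reference_words = 137
wer = (substitutions + deletions + insertions) / reference_words
print(f"WER = {wer:.2%}")  # 5.84%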

Interpretation:

  • < 5%: Excellent - Near-human level
  • 5-10%: Very Good - Production ready
  • 10-20%: Good - Acceptable for most uses
  • 20-30%: Fair - May need post-processing
  • > 30%: Poor - Needs improvement

Report Format

The evaluation report includes:

  1. Ranked Results - Models sorted by WER, best to worst (see the sorting sketch after this list)
  2. Detailed Metrics - Full breakdown for each model
  3. Conclusions - Best/worst performers and improvement analysis
  4. WER Interpretation - Context for the results
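
A hedged sketch of the ranking step, assuming results are held in a dict keyed by model name (the data structure is an assumption, not the script's actual internals):

# Rank models by WER, best (lowest) first (sketch)
results = {"Whisper Large Turbo (FT)": 0.0584, "Assembly AI": 0.0730}  # illustrative subset
for rank, (model, wer) in enumerate(sorted(results.items(), key=lambda kv: kv[1]), start=1):
    print(f"{rank}. {model}: {wer:.2%}")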

Requirements

  • Python 3.8+
  • PyTorch
  • Transformers (Hugging Face)
  • jiwer
  • CUDA (optional, for GPU acceleration)

Documentation

  • docs/EVALUATION_SUMMARY.md - Comprehensive analysis and recommendations
  • docs/paths.md - Model path reference

Notes

  • The script automatically detects CUDA and uses the GPU if available (see the device-selection sketch after this list)
  • Each run generates timestamped outputs for comparison tracking
  • Transcriptions are saved individually for manual review
  • Failed model loads are reported separately in the evaluation report
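
The CUDA auto-detection mentioned above typically reduces to a check like this (a sketch assuming PyTorch, which is already in the requirements):

# Prefer the GPU when available, otherwise fall back to CPU
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running inference on {device}")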

Contributing

To test additional models:

  1. Add the model path to scripts/evaluate_models.py in the MODELS dictionary (see the hedged example after this list)
  2. Run ./scripts/run_evaluation.sh
  3. Check results/latest/ for updated rankings
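
A hedged illustration of step 1; the MODELS name comes from the steps above, while the key and path below are hypothetical placeholders:

# In scripts/evaluate_models.py -- register an additional checkpoint (sketch)
MODELS = {
    # ...existing entries...
    "my-whisper-medium-ft": "/path/to/your/fine-tuned/checkpoint",  # hypothetical placeholder
}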

License

MIT License - See LICENSE file for details
