# Whisper Fine-Tuning Evaluation: Local vs Commercial ASR
A "back of the envelope" evaluation comparing fine-tuned Whisper models running locally against commercial ASR APIs via Eden AI.
## The Question
Can fine-tuning Whisper achieve measurable WER reductions, even when comparing local inference against cloud-based commercial models?
## TL;DR
Yes. Fine-tuned Whisper Large Turbo running locally achieved 5.84% WER, beating the best commercial API (Assembly at 7.30%) by 20%.
## The Significant Finding
While this is a single-sample experiment, the key takeaway is compelling: with fine-tuning, locally-run models can equal or better cloud-hosted commercial models.
Even matching cloud performance is remarkable when you consider:
- Local models have no per-request API costs
- No data privacy concerns from sending audio to external services
- Full control over the inference pipeline
- Potential for continued improvement through additional fine-tuning
The fact that a fine-tuned local model can beat commercial APIs makes the case even stronger.
## Test Setup
- Fine-tuning date: November 17, 2025
- Fine-tuning platform: Modal (A100 GPU)
- Training dataset: 1 hour of audio, chunked and timestamped using WhisperX
- Evaluation: Single audio sample (137 words) on local hardware
- Test audio: `eval/test-audio.wav` with ground truth in `eval/truth.txt`
- Commercial APIs tested: Assembly, Gladia, OpenAI (via Eden AI)
Note: This is a one-sample experiment designed to answer a specific question about fine-tuning effectiveness, not an exhaustive evaluation.
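For context, here is a minimal sketch of what local inference on the test sample looks like, assuming a Hugging Face `transformers` ASR pipeline; the checkpoint path is a placeholder, and the actual loading logic lives in `scripts/evaluate_models.py`:

```python
# Minimal sketch of local inference on the evaluation sample.
# "./checkpoints/whisper-large-turbo-ft" is a placeholder, not the repo's real path.
import torch
from transformers import pipeline

device = 0 if torch.cuda.is_available() else -1  # GPU if available, else CPU
asr = pipeline(
    "automatic-speech-recognition",
    model="./checkpoints/whisper-large-turbo-ft",  # hypothetical fine-tuned checkpoint
    device=device,
    chunk_length_s=30,  # chunked decoding for audio longer than 30 seconds
)

hypothesis = asr("eval/test-audio.wav")["text"]
print(hypothesis)  # score this against eval/truth.txt (see "What Gets Evaluated" below)
```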
## Results: Local Fine-Tunes vs Commercial APIs
| Rank | Model | Type | WER | Performance |
|---|---|---|---|---|
| 1 | Whisper Large Turbo (FT) | Local Fine-tune | 5.84% | Production Ready |
| 2 | Assembly AI | Commercial API | 7.30% | Very Good |
| 3 | Gladia | Commercial API | 8.03% | Very Good |
| 4 | Whisper Small (FT) | Local Fine-tune | 8.76% | Very Good |
| 5 | Whisper API (OpenAI) | Commercial API | 8.76% | Very Good |
| 6 | Whisper Base (FT) | Local Fine-tune | 14.60% | Good |
| 7 | Whisper Tiny (FT) | Local Fine-tune | 14.60% | Good |
## Key Findings
- Fine-tuned Whisper Large Turbo beats all commercial APIs (5.84% vs 7.30% best commercial)
- 20% WER improvement over best commercial option (Assembly AI)
- 33% WER improvement over Whisper API (OpenAI)
- Fine-tuned Whisper Small matches Whisper API (OpenAI) at 8.76% WER - apples-to-apples comparison
- Local inference advantages: No per-request costs, privacy-preserving, full control over pipeline
## Apples-to-Apples: Whisper Fine-Tunes vs Whisper API
To isolate the impact of fine-tuning, the chart in `visualizations/whisper_apples_to_apples_comparison.png` compares the fine-tuned local Whisper models directly against OpenAI's Whisper API.
This comparison shows that:
- Fine-tuned Whisper Large Turbo beats Whisper API by 33%
- Fine-tuned Whisper Small matches Whisper API performance exactly
- Even smaller models (Base, Tiny) achieve respectable results with fine-tuning
## Repository Structure

```
Local-STT-Fine-Tune-Tests/
├── README.md                                   # This file
├── requirements.txt                            # Python dependencies
│
├── scripts/                                    # Evaluation scripts
│   ├── evaluate_models.py                      # Main evaluation script
│   ├── create_wer_chart.py                     # WER visualization generator
│   ├── create_whisper_comparison_chart.py      # Whisper apples-to-apples chart
│   └── run_evaluation.sh                       # Convenience runner
│
├── docs/                                       # Documentation
│   ├── EVALUATION_SUMMARY.md                   # Comprehensive analysis & recommendations
│   └── paths.md                                # Model path reference
│
├── eval/                                       # Test data
│   ├── test-audio.wav                          # Test audio file (137 words)
│   └── truth.txt                               # Ground truth transcription
│
├── visualizations/                             # Charts and graphs
│   ├── wer_comparison_chart.png                # WER comparison bar chart (all models)
│   ├── wer_comparison_chart.svg                # Vector version
│   ├── whisper_apples_to_apples_comparison.png # Whisper FT vs API
│   └── whisper_apples_to_apples_comparison.svg # Vector version
│
└── results/                                    # Evaluation outputs
    ├── latest/                                 # Most recent results
    │   ├── report.txt                          # Human-readable report
    │   ├── results.json                        # Machine-readable data
    │   └── model_comparison_chart.txt
    ├── transcriptions/                         # Individual model outputs
    └── archive/                                # Historical runs
        └── evaluation_*.txt                    # Timestamped reports
```
## Quick Start

Run the evaluation with a single command:

```bash
./scripts/run_evaluation.sh
```

Or manually:

```bash
# Create venv if needed
uv venv
source .venv/bin/activate

# Install dependencies
uv pip install -r requirements.txt

# Run evaluation
python scripts/evaluate_models.py
```
## What Gets Evaluated
For each model, the script calculates the following metrics (see the `jiwer` sketch after this list):
- WER (Word Error Rate) - Primary metric
- MER (Match Error Rate)
- WIL (Word Information Lost)
- WIP (Word Information Preserved)
- Error breakdown:
  - Hits (correct words)
  - Substitutions
  - Deletions
  - Insertions
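A minimal sketch of how these metrics can be computed with `jiwer`; the transcription filename below is illustrative, and the actual script may normalize text before scoring:

```python
# Compute WER, MER, WIL, WIP and the error breakdown with jiwer.
# The hypothesis path is illustrative; see results/transcriptions/ for real outputs.
import jiwer

reference = open("eval/truth.txt").read()
hypothesis = open("results/transcriptions/transcription_whisper_large_turbo_ft.txt").read()

out = jiwer.process_words(reference, hypothesis)
print(f"WER: {out.wer:.2%}  MER: {out.mer:.2%}  WIL: {out.wil:.2%}  WIP: {out.wip:.2%}")
print(f"hits={out.hits}  sub={out.substitutions}  del={out.deletions}  ins={out.insertions}")
```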
## Output Files

Results are saved to the `results/` directory:

- `evaluation_report_YYYYMMDD_HHMMSS.txt` - Human-readable report
- `evaluation_results_YYYYMMDD_HHMMSS.json` - Machine-readable results
- `transcription_<model_name>.txt` - Individual transcriptions
## Understanding WER
WER is calculated as:
WER = (Substitutions + Deletions + Insertions) / Total Words in Reference
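For the 137-word reference used in this evaluation, the top result works out to 8 word errors: 8 / 137 ≈ 5.84% WER.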
Interpretation:
- < 5%: Excellent - Near-human level
- 5-10%: Very Good - Production ready
- 10-20%: Good - Acceptable for most uses
- 20-30%: Fair - May need post-processing
- > 30%: Poor - Needs improvement
## Report Format
The evaluation report includes:
- Ranked Results - Models sorted by WER (best to worst)
- Detailed Metrics - Full breakdown for each model
- Conclusions - Best/worst performers and improvement analysis
- WER Interpretation - Context for the results
## Requirements
- Python 3.8+
- PyTorch
- Transformers (Hugging Face)
- jiwer
- CUDA (optional, for GPU acceleration)
## Documentation

- [Evaluation Summary](docs/EVALUATION_SUMMARY.md) - Detailed analysis with recommendations
- [Model Paths](docs/paths.md) - Reference for model locations
- [Latest Results](results/latest/) - Most recent evaluation outputs
- [Comparison Chart](visualizations/wer_comparison_chart.png) - Visual WER comparison
## Notes
- The script automatically detects CUDA and uses GPU if available
- Each run generates timestamped outputs for comparison tracking
- Transcriptions are saved individually for manual review
- Failed model loads are reported separately in the evaluation report
## Contributing

To test additional models:

- Add the model path to `scripts/evaluate_models.py` in the `MODELS` dictionary (see the sketch below)
- Run `./scripts/run_evaluation.sh`
- Check `results/latest/` for updated rankings
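A hypothetical sketch of what a `MODELS` entry might look like; the dictionary's exact shape is an assumption here, so check `scripts/evaluate_models.py` before editing:

```python
# Hypothetical shape of the MODELS dictionary in scripts/evaluate_models.py.
# The real script may map display names to richer config objects; verify first.
MODELS = {
    "Whisper Large Turbo (FT)": "/models/whisper-large-turbo-ft",  # local checkpoint dir
    "Whisper Small (FT)": "/models/whisper-small-ft",
    "My New Model": "/models/my-new-model",  # add new entries like this
}
```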
## License
MIT License - See LICENSE file for details