Whisper Large V3 Turbo - French Quebecois (CTranslate2)

This is the CTranslate2 optimized version of the fine-tuned model ele-sage/whisper-large-v3-turbo-fr-quebecois.

It contains the model weights converted to float16, which gives significantly faster inference and lower memory usage than the original Transformers model. This format is natively supported by WhisperX and Faster-Whisper.
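For reference, a conversion like this one is typically produced with CTranslate2's converter CLI. The sketch below shows the general shape of that command (the exact source checkpoint and output directory here are illustrative, not necessarily what was used for this repository):

```shell
# Convert a Transformers Whisper checkpoint to CTranslate2 format
# with float16 weights. Requires: pip install ctranslate2 transformers
ct2-transformers-converter \
  --model ele-sage/whisper-large-v3-turbo-fr-quebecois \
  --output_dir whisper-large-v3-turbo-fr-quebecois-ct2 \
  --quantization float16 \
  --copy_files tokenizer.json preprocessor_config.json
```

`--copy_files` keeps the tokenizer and preprocessor configs alongside the converted weights so Faster-Whisper can load the directory directly.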

For details regarding the training dataset (Assemblée Nationale, Common Voice), hyperparameters, and evaluation metrics, please refer to the original model card.

⚡ How to Use

Option 1: Using WhisperX (Recommended for Alignment)

Installation & Setup:

Follow the instructions from the official repository: WhisperX GitHub.

Usage: Command Line & Python

Simply use ele-sage/whisper-large-v3-turbo-fr-quebecois-ct2 as the model argument.

# Example CLI usage
whisperx path/to/audio.wav --model ele-sage/whisper-large-v3-turbo-fr-quebecois-ct2 --language fr
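The same model can be used from Python. A minimal sketch following the WhisperX API (transcription plus the word-level alignment step this option is recommended for); it assumes a CUDA GPU and will download the model on first run, and the audio path is a placeholder:

```python
import whisperx

device = "cuda"
model = whisperx.load_model(
    "ele-sage/whisper-large-v3-turbo-fr-quebecois-ct2",
    device,
    compute_type="float16",
)

# Transcribe with the fine-tuned model
audio = whisperx.load_audio("path/to/audio.wav")
result = model.transcribe(audio, batch_size=16, language="fr")

# Align the output to get word-level timestamps
model_a, metadata = whisperx.load_align_model(language_code="fr", device=device)
result = whisperx.align(result["segments"], model_a, metadata, audio, device)

for segment in result["segments"]:
    print(segment["start"], segment["end"], segment["text"])
```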

Option 2: Using Faster-Whisper

Faster-Whisper is the standalone engine behind WhisperX.

Installation:

pip install faster-whisper

Python:

from faster_whisper import WhisperModel

model_id = "ele-sage/whisper-large-v3-turbo-fr-quebecois-ct2"

model = WhisperModel(model_id, device="cuda", compute_type="float16")

segments, info = model.transcribe("path/to/audio.wav", beam_size=5)

print("Detected language '%s' with probability %f" % (info.language, info.language_probability))

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
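Note that `transcribe` returns segments lazily, so a common follow-up is materializing them into a subtitle file. A small sketch of SRT output; the `srt_timestamp` helper and the `Segment` tuple are illustrative, not part of the Faster-Whisper API (its real segments expose the same `start`/`end`/`text` attributes):

```python
from collections import namedtuple

# Stand-in for faster_whisper's segment objects in this sketch
Segment = namedtuple("Segment", ["start", "end", "text"])

def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    secs, ms = divmod(ms, 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

def segments_to_srt(segments):
    """Render an iterable of segments as the text of an .srt file."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{srt_timestamp(seg.start)} --> {srt_timestamp(seg.end)}\n"
            f"{seg.text.strip()}\n"
        )
    return "\n".join(blocks)

segments = [Segment(0.0, 3.5, " Bonjour tout le monde."),
            Segment(3.5, 7.02, " On commence la séance.")]
print(segments_to_srt(segments))
```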

Important consideration:

The model was fine-tuned exclusively on French (Québécois) data, which may degrade performance on other languages compared to the base Whisper model.

Citation

If you use this model, please consider citing the original Whisper paper:

@misc{radford2022whisper,
  doi = {10.48550/ARXIV.2212.04356},
  url = {https://arxiv.org/abs/2212.04356},
  author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  title = {Robust Speech Recognition via Large-Scale Weak Supervision},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
