Tags: Automatic Speech Recognition · Transformers · PyTorch · JAX · TensorBoard · Safetensors · Norwegian · whisper · audio · asr · hf-asr-leaderboard
Instructions for using NbAiLabArchive/scream_medium_beta with libraries, inference providers, notebooks, and local apps.

How to use NbAiLabArchive/scream_medium_beta with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="NbAiLabArchive/scream_medium_beta")

# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("NbAiLabArchive/scream_medium_beta")
model = AutoModelForSpeechSeq2Seq.from_pretrained("NbAiLabArchive/scream_medium_beta")
```
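To show what the pipeline expects as input, here is a minimal sketch. It builds a synthetic one-second tone as a stand-in for real audio (Whisper-based models expect mono float32 audio at 16 kHz); the actual transcription call is shown as a comment, since it requires downloading the model weights.

```python
import numpy as np

SAMPLE_RATE = 16_000  # Hz; the sampling rate Whisper-based models are trained on

# Stand-in for loaded audio: one second of a 440 Hz sine tone.
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
audio = (0.5 * np.sin(2 * np.pi * 440 * t)).astype(np.float32)

# With the `pipe` object from the snippet above, transcription would look like:
# result = pipe({"raw": audio, "sampling_rate": SAMPLE_RATE})
# print(result["text"])

print(audio.shape, audio.dtype)
```

In practice you would load a real recording (e.g. with `librosa` or `soundfile`) and resample it to 16 kHz before passing it to the pipeline.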
Librarian Bot: Add base_model information to model (#2)
Opened by librarian-bot
This pull request aims to enrich the metadata of your model by adding openai/whisper-medium as a base_model field, situated in the YAML block of your model's README.md.
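As an illustration, the change amounts to one extra key in the README's YAML front matter. The sketch below is hypothetical: only the `base_model` line is what this PR adds, and the surrounding fields are typical examples of model-card metadata, not the file's exact contents.

```yaml
---
language:
- 'no'
library_name: transformers
pipeline_tag: automatic-speech-recognition
base_model: openai/whisper-medium
---
```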
How did we find this information? We performed a regular expression match on your README.md file to determine the connection.
Why add this? Enhancing your model's metadata in this way:
- Boosts Discoverability - It becomes straightforward to trace the relationships between various models on the Hugging Face Hub.
- Highlights Impact - It showcases the contributions and influences different models have within the community.
For a hands-on example of how such metadata can play a pivotal role in mapping model connections, take a look at librarian-bots/base_model_explorer.
This PR comes courtesy of Librarian Bot. If you have any feedback, queries, or need assistance, please don't hesitate to reach out to @davanstrien. Your input is invaluable to us!
versae changed pull request status to merged