Instructions for using FasterDecoding/medusa-vicuna-7b-v1.3 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use FasterDecoding/medusa-vicuna-7b-v1.3 with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("FasterDecoding/medusa-vicuna-7b-v1.3", dtype="auto")
```
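The snippet above only instantiates the model. For end-to-end generation you also need a tokenizer and a `generate` call. Below is a minimal sketch, assuming the checkpoint loads as a standard causal LM through the Auto classes; Medusa adds custom speculative-decoding heads on top of Vicuna, so the upstream FasterDecoding/Medusa codebase may be required for the actual Medusa decoding path. The prompt and `device_map="auto"` (which needs `accelerate`) are illustrative assumptions, not part of this page.

```python
# Minimal generation sketch (assumptions noted in the text above):
# load the checkpoint as a plain causal LM and generate a reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FasterDecoding/medusa-vicuna-7b-v1.3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    dtype="auto",       # matches the page snippet; older transformers versions used torch_dtype
    device_map="auto",  # requires the accelerate package; an assumption, not from the page
)

# Vicuna v1.3 checkpoints follow a plain USER/ASSISTANT prompt format.
prompt = "USER: Explain speculative decoding in one paragraph. ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
reply = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[-1]:],  # strip the echoed prompt
    skip_special_tokens=True,
)
print(reply)
```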
- Notebooks
- Google Colab
- Kaggle
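To run the snippet in a hosted notebook such as Google Colab or Kaggle, install the dependencies in the first cell. A minimal setup cell is sketched below; the package list is an assumption (and unpinned) rather than something specified on this page.

```python
# Notebook setup cell: install what the generation sketch above imports.
# accelerate is only needed for device_map="auto".
!pip install -q transformers accelerate
```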
Community discussions:
- Update config.json (#2, opened over 1 year ago by vishwasthedeveloper)
- Adding safetensors variant (#1, opened about 2 years ago by Narsil)