Instructions for using bigcode/santacoder-fast-inference with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use bigcode/santacoder-fast-inference with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

# SantaCoder is a causal code-generation model, so the task is
# "text-generation" rather than "fill-mask". trust_remote_code=True
# lets Transformers load the custom model code shipped in this repo.
pipe = pipeline(
    "text-generation",
    model="bigcode/santacoder-fast-inference",
    trust_remote_code=True,
)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigcode/santacoder-fast-inference")
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/santacoder-fast-inference",
    trust_remote_code=True,
)
```
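As a quick check, the pipeline can complete a code prompt. A minimal sketch, assuming the `pipe` object from above; the prompt and generation arguments (`max_new_tokens`, `do_sample`) are illustrative choices, not values prescribed by this model card:

```python
# Ask the model to continue a Python function signature.
prompt = "def fibonacci(n):"
outputs = pipe(prompt, max_new_tokens=48, do_sample=False)
print(outputs[0]["generated_text"])
```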
- Notebooks
- Google Colab
- Kaggle
- Xet hash: b136312e5c5f3545a73a7fcd7ddcf7000c54f023d9a14fdd701e4bcff2e604b5
- Size of remote file: 2.25 GB
- SHA256: b332f1400e8e536f079ad599119ccc8ba50783c5aa5f1a9923ef08c014dbabc5
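If you download the weights file manually, you can confirm it matches the SHA256 above. A minimal sketch; the filename `pytorch_model.bin` is a placeholder for whatever file you actually downloaded:

```python
import hashlib

def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so a 2.25 GB file never sits fully in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder filename -- substitute the file you downloaded.
expected = "b332f1400e8e536f079ad599119ccc8ba50783c5aa5f1a9923ef08c014dbabc5"
assert sha256sum("pytorch_model.bin") == expected, "checksum mismatch"
```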
Xet stores large files efficiently inside Git by splitting them into unique, deduplicated chunks, which accelerates uploads and downloads.
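To make the idea concrete, here is an illustrative sketch of content-defined chunking in general, the family of techniques that splits files at positions determined by the content itself so unchanged regions deduplicate across versions. This is not Xet's actual algorithm; the rolling hash, window size, and boundary mask are all toy assumptions for illustration:

```python
def content_defined_chunks(data: bytes, mask: int = (1 << 13) - 1, window: int = 16):
    """Split data where a rolling hash hits a boundary pattern.

    Because boundaries depend only on local content, identical regions
    produce identical chunks even if they shift position, so re-uploading
    a modified file only transfers the chunks that changed.
    Illustrative only -- not Xet's real chunker or parameters.
    """
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF          # toy rolling hash
        if i - start + 1 >= window and (h & mask) == 0:
            chunks.append(data[start : i + 1])   # boundary found, emit chunk
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])              # trailing partial chunk
    return chunks
```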