Tags: Text Generation · Transformers · Safetensors · English · llama · code · text-generation-inference
How to use from vLLM
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "InfiniAILab/CodeDrafter-500M"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "InfiniAILab/CodeDrafter-500M",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
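The same request can be sent from Python using only the standard library. A minimal sketch, assuming the vLLM server started above is listening on localhost:8000; the helper names `completion_payload` and `complete` are illustrative, not part of vLLM:

```python
import json
import urllib.request

def completion_payload(model, prompt, max_tokens=512, temperature=0.5):
    """Build the JSON body for the OpenAI-compatible /v1/completions route."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(prompt, base_url="http://localhost:8000",
             model="InfiniAILab/CodeDrafter-500M"):
    """POST a completion request to a locally running vLLM server."""
    body = json.dumps(completion_payload(model, prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]
```

Calling `complete("def fibonacci(n):")` returns the generated continuation, mirroring the curl example above.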
Use Docker
docker model run hf.co/InfiniAILab/CodeDrafter-500M
Model Card for CodeDrafter-500M

A draft model for the Llama 3.1/3.2/3.3 series, specialized in Python coding. It is fine-tuned from the first 4 layers of facebook/layerskip-llama3.2-1B.
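For context, a draft model accelerates speculative decoding: the small model cheaply proposes a few tokens, and the large target model verifies them, accepting the matching prefix. The following is a toy greedy-decoding sketch of that accept/verify loop with stand-in next-token functions rather than real Llama models; `greedy_speculative_decode` and the stub callables are illustrative only, not this repository's code:

```python
def greedy_speculative_decode(target_next, draft_next, prompt,
                              max_new_tokens, k=4):
    """Toy greedy speculative decoding.

    target_next / draft_next: callables mapping a token list to the next token.
    The draft proposes k tokens; the target accepts the longest prefix that
    matches its own greedy choice and supplies one corrected token at the
    first mismatch. The output is identical to plain greedy decoding with
    the target alone -- the draft only reduces the number of target steps.
    """
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new_tokens:
        # Draft phase: propose k tokens autoregressively with the small model.
        draft = list(tokens)
        for _ in range(k):
            draft.append(draft_next(draft))
        proposal = draft[len(tokens):]
        # Verify phase: keep the target's token at every position.
        for tok in proposal:
            expected = target_next(tokens)
            tokens.append(expected)            # always the target's choice
            if expected != tok:                # mismatch: discard the rest
                break
            if len(tokens) - len(prompt) >= max_new_tokens:
                break
    return tokens[len(prompt):]
```

With a character-level stub target that spells "hello world" and a draft that spells "hello wurld", the loop accepts the shared prefix in bulk, corrects the single divergent character, and still produces exactly the target's greedy output.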

Citation

@article{chen2024sequoia,
  title={Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding},
  author={Chen, Zhuoming and May, Avner and Svirschevski, Ruslan and Huang, Yuhsun and Ryabinin, Max and Jia, Zhihao and Chen, Beidi},
  journal={arXiv preprint arXiv:2402.12374},
  year={2024}
}
Safetensors · Model size: 0.5B params · Tensor type: F32
