Instructions for using ControlLLM/Llama3.1-8B-OpenMath16-Instruct with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use ControlLLM/Llama3.1-8B-OpenMath16-Instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ControlLLM/Llama3.1-8B-OpenMath16-Instruct")
```

```python
# Load the model directly (AutoModelForCausalLM attaches the language-model
# head needed for text generation)
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "ControlLLM/Llama3.1-8B-OpenMath16-Instruct", dtype="auto"
)
```
- Notebooks
- Google Colab
- Kaggle
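The pipeline snippet under Transformers above also accepts chat-style message lists (the Llama 3.1 chat template is applied automatically). A minimal sketch, with the generation call itself left commented out because it downloads the full 8B checkpoint:

```python
# Chat-style input for the text-generation pipeline shown above.
messages = [
    {"role": "system", "content": "You are a careful math assistant."},
    {"role": "user", "content": "Solve for x: 2x + 3 = 11."},
]

# Running the model downloads ~16 GB of weights, so the call is commented out:
# from transformers import pipeline
# pipe = pipeline("text-generation",
#                 model="ControlLLM/Llama3.1-8B-OpenMath16-Instruct")
# out = pipe(messages, max_new_tokens=256)
# print(out[0]["generated_text"][-1]["content"])

roles = [m["role"] for m in messages]
print(roles)  # → ['system', 'user']
```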
- Local Apps
- vLLM
How to use ControlLLM/Llama3.1-8B-OpenMath16-Instruct with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ControlLLM/Llama3.1-8B-OpenMath16-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ControlLLM/Llama3.1-8B-OpenMath16-Instruct",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker:
```shell
docker model run hf.co/ControlLLM/Llama3.1-8B-OpenMath16-Instruct
```
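The completions request shown in the curl command can also be built programmatically. A minimal standard-library sketch (the endpoint and field names come from the OpenAI-compatible API above; the helper name is made up here for illustration):

```python
import json

def build_completion_payload(model, prompt, max_tokens=512, temperature=0.5):
    """Assemble the JSON body for an OpenAI-compatible /v1/completions call."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_completion_payload(
    "ControlLLM/Llama3.1-8B-OpenMath16-Instruct", "Once upon a time,"
)
body = json.dumps(payload)

# POST `body` to http://localhost:8000/v1/completions with the header
# Content-Type: application/json (e.g. via urllib.request or curl) once
# the vLLM server started above is running.
print(body)
```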
- SGLang
How to use ControlLLM/Llama3.1-8B-OpenMath16-Instruct with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "ControlLLM/Llama3.1-8B-OpenMath16-Instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ControlLLM/Llama3.1-8B-OpenMath16-Instruct",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "ControlLLM/Llama3.1-8B-OpenMath16-Instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ControlLLM/Llama3.1-8B-OpenMath16-Instruct",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use ControlLLM/Llama3.1-8B-OpenMath16-Instruct with Docker Model Runner:
```shell
docker model run hf.co/ControlLLM/Llama3.1-8B-OpenMath16-Instruct
```
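Both the vLLM and SGLang servers above expose an OpenAI-compatible API, so a chat request differs from the completions examples only in the endpoint (`/v1/chat/completions`) and in sending a message list instead of a raw prompt. A hedged standard-library sketch (ports 8000 and 30000 are the defaults from the commands above; the helper name is invented for illustration):

```python
import json

def build_chat_payload(model, user_message, max_tokens=512, temperature=0.5):
    """Assemble the JSON body for an OpenAI-compatible /v1/chat/completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_chat_payload(
    "ControlLLM/Llama3.1-8B-OpenMath16-Instruct",
    "What is the derivative of x**3?",
)
body = json.dumps(payload)

# POST `body` with Content-Type: application/json to
# http://localhost:8000/v1/chat/completions (vLLM) or
# http://localhost:30000/v1/chat/completions (SGLang).
print(body)
```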