## How to use from SGLang

### Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "bartowski/Llama3-8B-Instruct-Replete-Adapted-exl2" \
        --host 0.0.0.0 \
        --port 30000
```

Call the server using curl (OpenAI-compatible API):
```shell
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "bartowski/Llama3-8B-Instruct-Replete-Adapted-exl2",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
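The same endpoint can be called from Python. Here is a minimal sketch using the `openai` client library (an assumption, not part of this card; any OpenAI-compatible client should work), pointed at the local SGLang server. The `api_key` value is a placeholder since the server does not require one by default:

```python
from openai import OpenAI

# Point the OpenAI client at the local SGLang server.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

# Same request as the curl example above.
response = client.completions.create(
    model="bartowski/Llama3-8B-Instruct-Replete-Adapted-exl2",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)
```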
# Exllama v2 Quantizations of Llama3-8B-Instruct-Replete-Adapted
Using turboderp's ExLlamaV2 v0.1.6 for quantization.
The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/Replete-AI/Llama3-8B-Instruct-Replete-Adapted
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
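For reference, a small helper (hypothetical, not part of the model repo) that assembles this template in Python; the blank lines after each `<|end_header_id|>` are part of the Llama 3 format:

```python
def build_llama3_prompt(system_prompt: str, prompt: str) -> str:
    """Assemble the Llama 3 instruct prompt format shown above."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_llama3_prompt("You are a helpful assistant.", "Tell me a story."))
```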
## Available sizes
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8k) | VRAM (16k) | VRAM (32k) | Description |
|---|---|---|---|---|---|---|---|
| 8_0 | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| 6_5 | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, recommended. |
| 5_0 | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| 4_25 | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| 3_5 | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. |
## Download instructions
With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Llama3-8B-Instruct-Replete-Adapted-exl2 Llama3-8B-Instruct-Replete-Adapted-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:

Linux:

```shell
huggingface-cli download bartowski/Llama3-8B-Instruct-Replete-Adapted-exl2 --revision 6_5 --local-dir Llama3-8B-Instruct-Replete-Adapted-exl2-6_5
```

Windows (which apparently doesn't like _ in folders sometimes?):

```shell
huggingface-cli download bartowski/Llama3-8B-Instruct-Replete-Adapted-exl2 --revision 6_5 --local-dir Llama3-8B-Instruct-Replete-Adapted-exl2-6.5
```
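The same download can also be scripted with the `huggingface_hub` Python API; a minimal sketch mirroring the CLI flags above (the output folder name is just an example):

```python
from huggingface_hub import snapshot_download

# Download the 6.5 bpw branch; revision and local_dir mirror the
# --revision and --local-dir CLI flags above.
snapshot_download(
    repo_id="bartowski/Llama3-8B-Instruct-Replete-Adapted-exl2",
    revision="6_5",
    local_dir="Llama3-8B-Instruct-Replete-Adapted-exl2-6_5",
)
```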
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
## Evaluation results

- HumanEval: pass@1 = 0.647 (self-reported)
- Open LLM Leaderboard benchmarks: AI2 Reasoning Challenge (25-shot, test set, normalized accuracy); HellaSwag (10-shot, validation set, normalized accuracy); MMLU (5-shot, test set, accuracy); TruthfulQA (0-shot, validation set, multiple-choice accuracy); Winogrande (5-shot, validation set, accuracy); GSM8k (5-shot, test set, accuracy)
## Install from pip and serve model

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "bartowski/Llama3-8B-Instruct-Replete-Adapted-exl2" \
    --host 0.0.0.0 \
    --port 30000
```

Call the server using the same curl request (or Python snippet) shown in the Docker section above.