Dataset used for finetuning: vikp/python_code_instructions_filtered
How to use vikp/llama_coder with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="vikp/llama_coder", trust_remote_code=True)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("vikp/llama_coder", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("vikp/llama_coder", trust_remote_code=True)
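The pipeline object can then be called directly on a code prompt. A minimal sketch (the prompt and generation settings here are illustrative, not from the model card):

# Complete a Python function signature; settings are assumptions
print(pipe("def fibonacci(n):", max_new_tokens=64, do_sample=False)[0]["generated_text"])

How to use vikp/llama_coder with vLLM: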
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "vikp/llama_coder"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "vikp/llama_coder",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
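Since the vLLM server speaks the OpenAI-compatible API, you can also call it with the official openai Python client. A minimal sketch (the api_key value is a placeholder; vLLM does not check it by default):

from openai import OpenAI

# Point the client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="vikp/llama_coder",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)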
How to use vikp/llama_coder with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "vikp/llama_coder" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "vikp/llama_coder",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
# Or run the server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "vikp/llama_coder" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "vikp/llama_coder",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
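SGLang also exposes the OpenAI-compatible API, so the same openai client works against port 30000. A minimal streaming sketch (the api_key value is again a placeholder):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
# Stream tokens back as they are generated
stream = client.completions.create(
    model="vikp/llama_coder",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].text, end="", flush=True)

How to use vikp/llama_coder with Docker Model Runner: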
docker model run hf.co/vikp/llama_coder
Code Llama 7B finetuned for 1 epoch on a subset of the python code instructions dataset. Scores 0.62 on HumanEval with greedy decoding (matching Code Llama pass@1).
To use it for inference, you'll need to set trust_remote_code=True to pick up the right RoPE theta value:
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vikp/llama_coder")
model = AutoModelForCausalLM.from_pretrained("vikp/llama_coder", trust_remote_code=True)

# Prompt the model with a function signature to complete
text = tokenizer.bos_token + """\
import socket
def ping_exponential_backoff(host: str):""".lstrip()
tokens = tokenizer(text, return_tensors="pt")
output = model.generate(**tokens, max_new_tokens=128, do_sample=True, temperature=0.1, top_p=1.0)
print(tokenizer.decode(output[0], skip_special_tokens=True).strip())
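To confirm the custom RoPE theta was actually picked up, you can inspect the loaded config. A quick sketch (the rope_theta attribute name follows standard Llama configs and is an assumption here):

from transformers import AutoConfig

# Should differ from the base Llama default if the remote code was applied
config = AutoConfig.from_pretrained("vikp/llama_coder", trust_remote_code=True)
print(config.rope_theta)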
You can reproduce the benchmark results with the bigcode evaluation harness:
git clone https://github.com/bigcode-project/bigcode-evaluation-harness.git
cd bigcode-evaluation-harness
pip install -e .
accelerate launch main.py \
--model vikp/llama_coder \
--tasks humaneval \
--max_length_generation 1024 \
--temperature 0 \
--do_sample False \
--n_samples 1 \
--precision fp16 \
--allow_code_execution \
--save_generations \
--use_auth_token \
--trust_remote_code
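With --n_samples 1 and greedy decoding, pass@1 is simply the fraction of problems whose single completion passes the tests. For reference, the general unbiased pass@k estimator from the Codex paper can be sketched as:

import math

def pass_at_k(n: int, c: int, k: int) -> float:
    # n: samples per problem, c: samples that passed, k: the k in pass@k
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)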