xiaodongguaAIGC/step_sft
How to use xiaodongguaAIGC/xdg-math-step with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="xiaodongguaAIGC/xdg-math-step")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("xiaodongguaAIGC/xdg-math-step")
model = AutoModelForCausalLM.from_pretrained("xiaodongguaAIGC/xdg-math-step")

How to use xiaodongguaAIGC/xdg-math-step with vLLM:
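Once the tokenizer and model are loaded, the prompt has to follow the chat template the model was trained with. The template below is an assumption read off the sample transcript at the end of this page (the `###System` / `###Question` / `###Answer` markers and the ` [SEP]` step terminator), not something documented on the model card:

```python
# Prompt builder matching the template visible in the sample output below.
# The exact wording is inferred from the transcript, so treat it as a sketch.

SYSTEM = "###System: You are MA-RLHF Chatbot, you should friendly answer the question"
STEP_INSTRUCTION = (
    'Solve this math problem using step-by-step reasoning. '
    'Require that the output of each step ends with the " [SEP]\n" token.'
)

def build_prompt(question: str) -> str:
    """Assemble the full prompt string in the format seen in the transcript."""
    return (
        f"{SYSTEM}\n"
        f"###Question:{STEP_INSTRUCTION}\n"
        f"{question}\n"
        f"###Answer:"
    )

prompt = build_prompt("Tom has 12 apples. How many apples does Tom have?")
```

The resulting string can then be tokenized and passed to `model.generate` as usual.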
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "xiaodongguaAIGC/xdg-math-step"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "xiaodongguaAIGC/xdg-math-step",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
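The curl call above can also be issued from Python with only the standard library. The sketch below builds the same request but does not send it, since it assumes a vLLM server is already listening on localhost:8000:

```python
# Rebuild the curl request to the OpenAI-compatible /v1/completions endpoint.
import json
import urllib.request

payload = {
    "model": "xiaodongguaAIGC/xdg-math-step",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With the server running, send it with:
#   resp = urllib.request.urlopen(req)
#   text = json.load(resp)["choices"][0]["text"]
```

The same request shape works against the SGLang server below, changing only the port to 30000.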
How to use xiaodongguaAIGC/xdg-math-step with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "xiaodongguaAIGC/xdg-math-step" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "xiaodongguaAIGC/xdg-math-step",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

# Or launch the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "xiaodongguaAIGC/xdg-math-step" \
--host 0.0.0.0 \
--port 30000
How to use xiaodongguaAIGC/xdg-math-step with Docker Model Runner:
docker model run hf.co/xiaodongguaAIGC/xdg-math-step
Test: Colab
The test notebook runs rejection sampling multiple times and outputs the final answer in \boxed{} format.
prompt = 'Tom has 12 apples. He gives 3 apples to each of his 4 friends. After that, he buys 10 more apples. How many apples does Tom have now?'
step_generation(prompt, 128)  # helper from the Colab notebook; 128 appears to be the generation token budget
result
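The notebook's rejection-sampling loop itself is not shown here. A minimal sketch of the idea: draw several completions, extract each \boxed{} answer, reject completions without one, and keep the majority answer. The `generate` callable stands in for the notebook's `step_generation` helper (an assumption; its real signature is not documented here):

```python
# Rejection sampling over \boxed{} answers with a majority vote.
import re
from collections import Counter

BOXED = re.compile(r"\\boxed\{([^{}]*)\}")

def extract_boxed(text):
    """Return the last \\boxed{...} answer in a completion, or None."""
    matches = BOXED.findall(text)
    return matches[-1] if matches else None

def rejection_sample(generate, prompt, n=8):
    """Sample n completions; reject those without a boxed answer; majority-vote."""
    answers = []
    for _ in range(n):
        ans = extract_boxed(generate(prompt))
        if ans is not None:
            answers.append(ans)
    if not answers:
        return None
    return Counter(answers).most_common(1)[0][0]
```

For the apples question, completions voting for \boxed{10} would win as long as they outnumber any other answer.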
<|begin_of_text|>###System: You are MA-RLHF Chatbot, you should friendly answer the question
###Question:Solve this math problem using step-by-step reasoning. Require that the output of each step ends with the " [SEP]
" token.
Tom has 12 apples. He gives 3 apples to each of his 4 friends. After that, he buys 10 more apples. How many apples does Tom have now?
###Answer: At first, Tom has 12 apples. [SEP]
He gives 3 apples to each of his 4 friends, so he gives him 4 * 3 = 12 apples. [SEP]
After that, Tom has 12 - 12 = 0 apples left. [SEP]
He buys 10 more apples, so he has 0 + 10 = 10 apples now. [SEP]
Tom has 10 apples now. [SEP]
Answer: 10 [SEP]
I agree. [SEP]
A possible answer.
# Answer
10 [SEP]
<|end_of_text|>
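A transcript like the one above can be post-processed by splitting on the ` [SEP]` delimiter and reading the step of the form `Answer: <x>`. These helpers are illustrative, not taken from the notebook:

```python
# Split a [SEP]-terminated completion into steps and pull out the final answer.

def parse_steps(completion: str):
    """Return the non-empty reasoning steps, [SEP] markers stripped."""
    steps = [s.strip() for s in completion.split("[SEP]")]
    return [s for s in steps if s]

def final_answer(completion: str):
    """Return <x> from the first step of the form 'Answer: <x>', if any."""
    for step in parse_steps(completion):
        if step.startswith("Answer:"):
            return step[len("Answer:"):].strip()
    return None

sample = (
    "At first, Tom has 12 apples. [SEP]\n"
    "He buys 10 more apples, so he has 0 + 10 = 10 apples now. [SEP]\n"
    "Answer: 10 [SEP]\n"
)
```

On the transcript above, this would recover the answer 10 while ignoring the trailing chatter after the `Answer:` step.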