Instructions for using RLHFlow/LLaMA3-SFT with libraries and local apps.
- Libraries
- Transformers
How to use RLHFlow/LLaMA3-SFT with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="RLHFlow/LLaMA3-SFT")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("RLHFlow/LLaMA3-SFT")
model = AutoModelForCausalLM.from_pretrained("RLHFlow/LLaMA3-SFT")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
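For quick experiments, generation parameters can also be passed directly to the pipeline call. A minimal sketch, assuming recent Transformers chat-pipeline behavior; the sampling values are illustrative, not tuned for this model:

```python
# Sketch: sampled generation via the pipeline (illustrative settings).
from transformers import pipeline

pipe = pipeline("text-generation", model="RLHFlow/LLaMA3-SFT")
messages = [{"role": "user", "content": "Who are you?"}]

# do_sample/temperature/top_p are standard generate() kwargs forwarded by the pipeline.
out = pipe(messages, max_new_tokens=64, do_sample=True, temperature=0.7, top_p=0.9)

# With chat-style input, generated_text holds the full message list;
# the last entry is the assistant reply.
print(out[0]["generated_text"][-1]["content"])
```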
- Local Apps
- vLLM
How to use RLHFlow/LLaMA3-SFT with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "RLHFlow/LLaMA3-SFT"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "RLHFlow/LLaMA3-SFT",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
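Since the vLLM server exposes an OpenAI-compatible API, any OpenAI client can talk to it. A minimal sketch using the openai Python package; the api_key value is a placeholder (vLLM does not check it by default):

```python
# Sketch: query the local vLLM server through the OpenAI Python client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder key

response = client.chat.completions.create(
    model="RLHFlow/LLaMA3-SFT",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```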
- SGLang
How to use RLHFlow/LLaMA3-SFT with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "RLHFlow/LLaMA3-SFT" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "RLHFlow/LLaMA3-SFT",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
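The SGLang server speaks the same OpenAI-compatible protocol, so the client sketch above works unchanged apart from the port. A minimal sketch, again with a placeholder api_key:

```python
# Sketch: the same OpenAI-compatible client, pointed at the SGLang port.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")  # placeholder key

response = client.chat.completions.create(
    model="RLHFlow/LLaMA3-SFT",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```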
Use Docker images

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "RLHFlow/LLaMA3-SFT" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "RLHFlow/LLaMA3-SFT",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use RLHFlow/LLaMA3-SFT with Docker Model Runner:
```bash
docker model run hf.co/RLHFlow/LLaMA3-SFT
```
This is the SFT checkpoint used for the project RLHFlow/Online-RLHF.
- Paper: RLHF Workflow: From Reward Modeling to Online RLHF (Published in TMLR, 2024)
- Authors: Hanze Dong*, Wei Xiong*, Bo Pang*, Haoxiang Wang*, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo, Caiming Xiong, Tong Zhang
- Code: https://github.com/RLHFlow/Online-RLHF
The model is trained from meta-llama/Meta-Llama-3-8B on a mixture of diverse, high-quality open-source data for 1 epoch; detailed hyperparameters are given in the report. It has not been trained with RLHF and can serve as a good starting point for RLHF research.
Academic Benchmarks
We use the ToRA script to evaluate GSM8K and MATH, EvalPlus for HumanEval, and lm-evaluation-harness for the other benchmarks. The model is evaluated in the zero-shot setting, so the results here may differ slightly from those reported in the technical report (a sketch of a harness invocation follows the table below).
| Model | Size | Method | LC AlpacaEval | MT-Bench | GSM-8K | MMLU | HumanEval | TruthfulQA | ARC | MBPP |
|---|---|---|---|---|---|---|---|---|---|---|
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 22.9 | 8.16 | 79.6 | 66.0 | 61.6 | 43.9 | 59.5 | 61.1 |
| Ours (SFT baseline) | 8B | SFT | 10.2 | 7.69 | 74.2 | 30.0 | 64.6 | 63.4 | 53.5 | 58.6 |
| Ours (Iterative RLHF) | 8B | Iterative DPO | 37.2 | 8.46 | 80.7 | 65.3 | 64.6 | 60.4 | 64.3 | 60.8 |
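For reference, a minimal sketch of a zero-shot lm-evaluation-harness run; the task names and batch size here are illustrative assumptions, not the exact settings used to produce the table:

```bash
pip install lm-eval

# Zero-shot evaluation; task list and batch size are illustrative.
lm_eval --model hf \
  --model_args pretrained=RLHFlow/LLaMA3-SFT,dtype=bfloat16 \
  --tasks mmlu,arc_challenge,truthfulqa_mc2 \
  --num_fewshot 0 \
  --batch_size 8
```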
Citation
Please cite our technical report if you find our model useful for your research or product.
```bibtex
@misc{dong2024rlhf,
  title={RLHF Workflow: From Reward Modeling to Online RLHF},
  author={Hanze Dong and Wei Xiong and Bo Pang and Haoxiang Wang and Han Zhao and Yingbo Zhou and Nan Jiang and Doyen Sahoo and Caiming Xiong and Tong Zhang},
  year={2024},
  eprint={2405.07863},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```