How to use with vLLM
Install with pip and serve the model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "RuleReasoner/RuleReasoner-4B"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "RuleReasoner/RuleReasoner-4B",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
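Because vLLM exposes an OpenAI-compatible API, the same request can also be made from Python with the official openai client (pip install openai). A minimal sketch, assuming the server started above is listening on localhost:8000; vLLM does not check the API key unless one is configured, so any placeholder value works:

# Query the vLLM server via its OpenAI-compatible chat endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # vLLM ignores the key unless --api-key is set
)

response = client.chat.completions.create(
    model="RuleReasoner/RuleReasoner-4B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)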
Use Docker
docker model run hf.co/RuleReasoner/RuleReasoner-4B
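This starts an interactive chat session with the model. Docker Model Runner also exposes an OpenAI-compatible endpoint, so the model can be queried programmatically. A minimal sketch, assuming the endpoint is enabled on its default host TCP port 12434 (both the port and the path are assumptions; verify them in your Docker Model Runner settings):

# ASSUMPTION: Docker Model Runner's OpenAI-compatible API is enabled
# on host port 12434; adjust base_url if your setup differs.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12434/engines/v1",  # assumed default endpoint
    api_key="none",  # Docker Model Runner does not require an API key
)

response = client.chat.completions.create(
    model="hf.co/RuleReasoner/RuleReasoner-4B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)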
Quick Links

If you use the model in your research, please cite the original paper as below.

@misc{liu2025rulereasoner,
      title={RuleReasoner: Reinforced Rule-based Reasoning via Domain-aware Dynamic Sampling},
      author={Yang Liu and Jiaqi Li and Zilong Zheng},
      year={2025},
      eprint={2506.08672},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.08672},
}

Code: https://github.com/bigai-nlco/RuleReasoner

Model size: 4B parameters (Safetensors, BF16)