Agent-STAR-RL-3B
This repository contains Agent-STAR-RL-3B, a 3B-parameter large language model fine-tuned for long-horizon tool-orchestration tasks. It was introduced in the paper Demystifying Reinforcement Learning for Long-Horizon Tool-Using Agents: A Comprehensive Recipe.
Model Description
Agent-STAR is a unified post-training pipeline consisting of [Data Synthesis → SFT → RL]. This specific checkpoint is the RL-tuned version based on the Qwen2.5-3B-Instruct backbone, optimized for the TravelPlanner benchmark.
The model was developed for complex, multi-turn agentic environments in which it must orchestrate calls to various tools to satisfy multifaceted constraints. The paper finds that smaller models, such as this 3B variant, benefit from staged rewards and enhanced exploration during the RL phase to reach high performance.
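To make the staged-reward idea concrete, here is an illustrative sketch, not the paper's exact reward function: the agent earns partial credit for intermediate milestones (well-formed tool calls, per-constraint satisfaction) before the full task-success bonus, which densifies the RL signal for small models. The stage weights and milestone names below are assumptions for illustration.

```python
# Illustrative staged-reward sketch (NOT the paper's exact formulation).
# Partial credit for intermediate milestones densifies the reward signal
# that a small model sees during RL on long-horizon tasks.

def staged_reward(format_ok: bool, constraints_met: int, total_constraints: int) -> float:
    """Return a reward in [0, 1] built from staged components."""
    reward = 0.0
    if format_ok:                        # stage 1: well-formed tool calls
        reward += 0.2
    if total_constraints > 0:            # stage 2: partial constraint credit
        reward += 0.5 * constraints_met / total_constraints
    if format_ok and constraints_met == total_constraints:
        reward += 0.3                    # stage 3: full task-success bonus
    return reward
```

A fully successful trajectory scores 1.0, while a trajectory with correct formatting but only partial constraint satisfaction still receives a graded signal instead of a flat zero.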
Resources
- Paper: Demystifying Reinforcement Learning for Long-Horizon Tool-Using Agents: A Comprehensive Recipe
- GitHub Repository: WxxShirley/Agent-STAR
- Dataset: Agent-STAR-TravelDataset
Inference
To run inference with this model, please refer to the instructions and ReAct-based inference pipeline provided in the official GitHub repository.
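For orientation, the shape of a ReAct loop can be sketched as follows. This is a minimal, self-contained sketch, not the repository's pipeline: the `llm` callable, the `Action: tool[arg]` format, and the tool registry are all assumptions made for illustration, so for real inference use the official repository's code.

```python
# Minimal ReAct-style loop sketch (illustrative only; the official repo's
# pipeline should be used for real inference with this model).
import re

def react_loop(llm, tools, question, max_turns=5):
    """Alternate Thought/Action/Observation until the model emits a final answer."""
    transcript = f"Question: {question}\n"
    for _ in range(max_turns):
        step = llm(transcript)  # model emits e.g. "Thought: ...\nAction: search[query]"
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        match = re.search(r"Action:\s*(\w+)\[(.*?)\]", step)
        if match:
            name, arg = match.groups()
            # Execute the named tool and feed its output back as an observation
            obs = tools[name](arg) if name in tools else f"Unknown tool: {name}"
            transcript += f"Observation: {obs}\n"
    return None  # no final answer within the turn budget

# Usage with a stubbed model and a single hypothetical tool:
def fake_llm(prompt):
    if "Observation:" in prompt:
        return "Thought: I have the answer.\nFinal Answer: Paris"
    return "Thought: I should look this up.\nAction: search[capital of France]"

tools = {"search": lambda q: "Paris is the capital of France."}
answer = react_loop(fake_llm, tools, "What is the capital of France?")
```

Swapping `fake_llm` for a real call into the checkpoint (and `tools` for the TravelPlanner tool suite) recovers the standard ReAct interaction pattern the repository implements.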
Citation
If you find Agent-STAR helpful to your work, please consider citing:
@misc{wu2026agentstar,
  title={Demystifying Reinforcement Learning for Long-Horizon Tool-Using Agents: A Comprehensive Recipe},
  author={Xixi Wu and Qianguo Sun and Ruiyang Zhang and Chao Song and Junlong Wu and Yiyan Qi and Hong Cheng},
  year={2026},
  eprint={2603.21972},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2603.21972},
}
Acknowledgements
We appreciate the open-sourced rLLM framework and the authors of TravelPlanner for providing the benchmark and resources that supported this research.