ECHO-2: A Large-Scale Distributed Rollout Framework for Cost-Efficient Reinforcement Learning
Abstract
ECHO-2 is a distributed reinforcement learning framework that enables efficient post-training of large language models by overlapping rollout generation, dissemination, and training while managing policy staleness and network latency.
Reinforcement learning (RL) is a critical stage in post-training large language models (LLMs), involving repeated interaction between rollout generation, reward evaluation, and centralized learning. Distributing rollout execution offers opportunities to leverage more cost-efficient inference resources, but introduces challenges in wide-area coordination and policy dissemination. We present ECHO-2, a distributed RL framework for post-training with remote inference workers and non-negligible dissemination latency. ECHO-2 combines centralized learning with distributed rollouts and treats bounded policy staleness as a user-controlled parameter, enabling rollout generation, dissemination, and training to overlap. We introduce an overlap-based capacity model that relates training time, dissemination latency, and rollout throughput, yielding a practical provisioning rule for sustaining learner utilization. To mitigate dissemination bottlenecks and lower cost, ECHO-2 employs peer-assisted pipelined broadcast and cost-aware activation of heterogeneous workers. Experiments on GRPO post-training of 4B and 8B models under real wide-area bandwidth regimes show that ECHO-2 significantly improves cost efficiency while achieving RL reward comparable to strong baselines.
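To make the provisioning idea concrete, here is a minimal steady-state sketch in Python. The function name, variable names, closed-form expressions, and example numbers are assumptions for illustration only, not ECHO-2's actual capacity model: it simply asks how many workers are needed to produce one batch per training step while generation, dissemination, and training overlap, and how many steps of staleness that overlap implies.

```python
import math

def provision_workers(batch_rollouts, train_step_s, rollout_latency_s,
                      dissemination_s, per_worker_throughput):
    """Hypothetical steady-state provisioning sketch (not ECHO-2's exact model).

    In steady state the learner consumes one batch of `batch_rollouts`
    every `train_step_s` seconds, so the worker fleet must generate
    rollouts at least that fast while generation, dissemination, and
    training overlap.
    """
    # Rollout throughput (rollouts/s) the fleet must sustain.
    required_rate = batch_rollouts / train_step_s
    # Minimum number of workers at the given per-worker throughput.
    n_workers = math.ceil(required_rate / per_worker_throughput)
    # A rollout that reaches the learner was generated with weights that
    # are older by roughly the dissemination plus generation time,
    # expressed here in training steps.
    staleness_steps = math.ceil(
        (dissemination_s + rollout_latency_s) / train_step_s)
    return n_workers, staleness_steps

# Example: 512 rollouts per step, 60 s per training step, 90 s per rollout,
# 30 s to broadcast new weights, 0.2 rollouts/s per worker.
print(provision_workers(512, 60.0, 90.0, 30.0, 0.2))  # -> (43, 2)
```

Under these assumed numbers the fleet needs about 43 workers to keep the learner saturated, and the staleness bound must tolerate rollouts roughly two policy versions old.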
Community
Current RLHF/RLAIF is bottlenecked by rollout generation and wasteful GPU idling. ECHO-2 changes the cost structure: we decouple RL into three planes: rollout (a global inference swarm), learning (staleness-aware multi-step updates), and data/reward (fully modular), and coordinate them with lightweight versioning and pipelined broadcast. The result is near-continuous learner utilization even under heterogeneous, unreliable WAN workers, enabling RL to scale out across a global fleet rather than up inside datacenters. We validate on GRPO-based reasoning/code tasks and a poker sandbox integration.
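As a rough illustration of how lightweight versioning can coordinate the rollout and learning planes, the sketch below tags each rollout with the policy version it was sampled under and has the learner enforce a staleness bound before training on it. The class and field names are hypothetical, and dropping stale samples outright is an assumption; a real system might instead down-weight or importance-correct them.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Rollout:
    policy_version: int      # version of the weights the worker sampled with
    prompt_id: str
    tokens: List[int]
    reward: float = 0.0

class StalenessAwareLearner:
    def __init__(self, max_staleness: int):
        self.version = 0                 # current learner policy version
        self.max_staleness = max_staleness
        self.buffer: List[Rollout] = []

    def receive(self, rollout: Rollout) -> bool:
        """Accept a rollout only if it is within the staleness bound."""
        if self.version - rollout.policy_version > self.max_staleness:
            return False                 # too stale: discard (or down-weight)
        self.buffer.append(rollout)
        return True

    def step(self, batch_size: int) -> None:
        """Run one update once enough in-bound rollouts have arrived."""
        if len(self.buffer) < batch_size:
            return
        batch, self.buffer = self.buffer[:batch_size], self.buffer[batch_size:]
        # ... gradient update on `batch` (e.g. GRPO) omitted ...
        self.version += 1                # new weights are then broadcast to workers

# Usage: with max_staleness=2, rollouts lagging by at most two versions are kept.
learner = StalenessAwareLearner(max_staleness=2)
learner.receive(Rollout(policy_version=0, prompt_id="p0", tokens=[1, 2, 3], reward=1.0))
```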
The following papers were recommended by the Semantic Scholar API
- RollArt: Scaling Agentic RL Training via Disaggregated Infrastructure (2025)
- Understanding and Exploiting Weight Update Sparsity for Communication-Efficient Distributed RL (2026)
- RL-VLA$^3$: Reinforcement Learning VLA Accelerating via Full Asynchronism (2026)
- When RL Meets Adaptive Speculative Training: A Unified Training-Serving System (2026)
- FP8-RL: A Practical and Stable Low-Precision Stack for LLM Reinforcement Learning (2026)
- RLinf-USER: A Unified and Extensible System for Real-World Online Policy Learning in Embodied AI (2026)
- Rollout-Training Co-Design for Efficient LLM-Based Multi-Agent Reinforcement Learning (2026)