F-GRPO: Don't Let Your Policy Learn the Obvious and Forget the Rare
Abstract
Group-sampling RLVR methods are biased toward already-likely trajectories and tend to miss rare correct ones; a difficulty-aware advantage-scaling coefficient improves pass@256 while preserving pass@1, without increasing computational cost.
Reinforcement Learning with Verifiable Rewards (RLVR) is commonly based on group sampling to estimate advantages and stabilize policy updates. In practice, large group sizes are not feasible due to computational limits, which biases learning toward trajectories that are already likely. Smaller groups often miss rare-correct trajectories while still containing mixed rewards, concentrating probability on common solutions. We derive the probability that updates miss rare-correct modes as a function of group size, showing non-monotonic behavior, and characterize how updates redistribute mass within the correct set, revealing that unsampled-correct mass can shrink even as total correct mass grows. Motivated by this analysis, we propose a difficulty-aware advantage scaling coefficient, inspired by Focal loss, that down-weights updates on high-success prompts. The lightweight modification can be directly integrated into any group-relative RLVR algorithm such as GRPO, DAPO, and CISPO. On Qwen2.5-7B across in-domain and out-of-domain benchmarks, our method improves pass@256 from 64.1 → 70.3 (GRPO), 69.3 → 72.5 (DAPO), and 73.2 → 76.8 (CISPO), while preserving or improving pass@1, without increasing group size or computational cost.
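For intuition, here is a minimal sketch of how a Focal-loss-style, difficulty-aware coefficient could be attached to GRPO-style group-normalized advantages. The specific weight (1 − p̂)^γ, the default γ, and the function names below are illustrative assumptions; the abstract does not specify the paper's exact coefficient or integration point.

```python
# Minimal sketch (not the authors' implementation): scale GRPO-style
# group-relative advantages by an assumed focal-style weight (1 - p_hat)^gamma,
# where p_hat is the group's empirical success rate for one prompt.
# High-success ("easy") prompts get down-weighted updates; hard prompts
# keep near-full weight.
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """GRPO-style advantages: reward minus group mean, divided by group std.

    rewards: shape (G,) -- verifiable rewards (e.g. 0/1) for one prompt's group.
    """
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def difficulty_aware_advantages(rewards: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Scale the group-relative advantages by an assumed difficulty-aware coefficient."""
    adv = group_relative_advantages(rewards)
    p_hat = rewards.float().mean()      # empirical success rate of the group
    weight = (1.0 - p_hat) ** gamma     # assumed focal-style down-weighting
    return weight * adv

# Toy usage: an "easy" prompt (7/8 correct) vs. a "hard" prompt (1/8 correct).
if __name__ == "__main__":
    easy = torch.tensor([1., 1., 1., 1., 1., 1., 1., 0.])
    hard = torch.tensor([1., 0., 0., 0., 0., 0., 0., 0.])
    print(difficulty_aware_advantages(easy))  # small magnitudes: update suppressed
    print(difficulty_aware_advantages(hard))  # larger magnitudes retained
```

Under these assumptions the modification touches only the per-prompt advantage computation, which is why it can drop into GRPO, DAPO, or CISPO without changing group size or sampling cost.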
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Beyond Variance: Prompt-Efficient RLVR via Rare-Event Amplification and Bidirectional Pairing (2026)
- MC-GRPO: Median-Centered Group Relative Policy Optimization for Small-Rollout Reinforcement Learning (2026)
- AMIR-GRPO: Inducing Implicit Preference Signals into GRPO (2026)
- SetPO: Set-Level Policy Optimization for Diversity-Preserving LLM Reasoning (2026)
- Thickening-to-Thinning: Reward Shaping via Human-Inspired Learning Dynamics for LLM Reasoning (2026)
- Clipping-Free Policy Optimization for Large Language Models (2026)
- Prompt Augmentation Scales up GRPO Training on Mathematical Reasoning (2026)