QuantLRM: Quantization of Large Reasoning Models via Fine-Tuning Signals
Abstract
QuantLRM uses the magnitude of weight updates from fine-tuning as a signal for quantizing Large Reasoning Models, estimating channel importance more effectively than activation- or second-order-based methods.
Weight-only quantization is important for compressing Large Language Models (LLMs). Inspired by the spirit of classical magnitude pruning, we study whether the magnitude of weight updates during reasoning-incentivized fine-tuning can provide valuable signals for quantizing Large Reasoning Models (LRMs). We hypothesize that the smallest and largest weight updates during fine-tuning are more important than those of intermediate magnitude, a phenomenon we term "protecting both ends". After validating this hypothesis, we introduce QuantLRM, which stands for weight quantization of LRMs via fine-tuning signals. We fit simple restricted quadratic functions on weight updates to protect both ends. By multiplying each channel's average quadratic value by its count of zero weight updates, we compute a channel importance score that is more effective than scores based on activation or second-order information. We run QuantLRM to quantize models obtained from various fine-tuning schemes (supervised fine-tuning, direct preference optimization, and reinforcement learning) on four reasoning benchmarks (AIME-120, FOLIO, temporal sequences, and GPQA-Diamond) and empirically find that QuantLRM delivers consistent improvements for LRM quantization, with an average improvement of 6.55% on a reinforcement-learning fine-tuned model. QuantLRM also supports non-fine-tuned LRMs by gathering effective signals via pseudo-fine-tuning, which greatly broadens its applicability.
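To make the channel-importance computation described above concrete, here is a minimal NumPy sketch. It treats the "restricted quadratic" as a convex quadratic fitted per channel to the absolute weight updates, then scores each channel by the mean fitted value multiplied by its count of (near-)zero updates. Everything beyond what the abstract states is an assumption: the fitting target (squared deviation from the median), the convexity restriction, the `eps` threshold for counting zero updates, and the function names `restricted_quadratic_values` and `channel_importance` are illustrative, not the paper's exact formulation.

```python
import numpy as np


def restricted_quadratic_values(delta_w: np.ndarray) -> np.ndarray:
    """Fit a restricted (convex) quadratic to one channel's absolute weight
    updates and return the fitted values.

    The convexity restriction (leading coefficient >= 0) assigns the highest
    scores to both the smallest and the largest updates, mirroring the
    "protecting both ends" hypothesis. The fitting target below (squared
    deviation from the median update) is an assumption for illustration.
    """
    d = np.abs(delta_w)
    target = (d - np.median(d)) ** 2
    # Least-squares fit of y = a*d^2 + b*d + c.
    A = np.stack([d ** 2, d, np.ones_like(d)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
    coeffs[0] = max(coeffs[0], 0.0)  # enforce the convexity restriction
    return A @ coeffs


def channel_importance(delta_W: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Per-channel importance: mean restricted-quadratic value of the channel's
    weight updates, multiplied by the channel's count of (near-)zero updates,
    following a literal reading of the abstract.

    delta_W: weight-update matrix of shape (out_channels, in_features),
             e.g. W_finetuned - W_base from reasoning-incentivized fine-tuning.
    """
    importance = np.empty(delta_W.shape[0])
    for c, row in enumerate(delta_W):
        quad_vals = restricted_quadratic_values(row)
        zero_count = np.sum(np.abs(row) < eps)
        importance[c] = quad_vals.mean() * zero_count
    return importance


if __name__ == "__main__":
    # Toy usage: rank the channels of a small random update matrix in which
    # some weights were never updated during fine-tuning.
    rng = np.random.default_rng(0)
    delta_W = rng.normal(scale=1e-3, size=(8, 64))
    delta_W[rng.random(delta_W.shape) < 0.2] = 0.0
    print(channel_importance(delta_W))
```

In practice such a score would be used to decide which channels receive more quantization protection (e.g. finer scales or mixed precision); that downstream use is not shown here.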
Community
QuantLRM code is coming soon🚀 — follow us on GitHub for updates!
arXivLens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/quantlrm-quantization-of-large-reasoning-models-via-fine-tuning-signals-5980-8dfaa116
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- What Makes Low-Bit Quantization-Aware Training Work for Reasoning LLMs? A Systematic Study (2026)
- Breaking the Blocks: Continuous Low-Rank Decomposed Scaling for Unified LLM Quantization and Adaptation (2026)
- D$^2$Quant: Accurate Low-bit Post-Training Weight Quantization for LLMs (2026)
- SASQ: Static Activation Scaling for Quantization-Aware Training in Large Language Models (2025)
- LoPRo: Enhancing Low-Rank Quantization via Permuted Block-Wise Rotation (2026)
- Correct, Concise and Complete: Multi-stage Training For Adaptive Reasoning (2026)
- CALM: A CKA-Guided Adaptive Layer-Wise Modularization Framework for LLM Quantization (2025)
Models citing this paper: 5
Datasets citing this paper: 0
Spaces citing this paper: 0
Collections including this paper: 0