Abstract
A framework for improving large language model performance through textual frequency analysis, comprising a frequency law, a distillation method, and a curriculum training approach.
While textual frequency is known to affect human cognition, e.g., reading speed, its relationship to Large Language Models (LLMs) has seldom been studied. We propose a novel research direction based on textual data frequency, which is, to the best of our knowledge, an understudied topic. Our framework consists of three components. First, we propose the Textual Frequency Law (TFL), which states that frequent textual expressions should be preferred for LLMs in both prompting and fine-tuning. Since the training data of many LLMs is not publicly released, we estimate sentence-level frequency from online resources. We then use an input paraphraser to rewrite the input into a more frequent textual expression. Next, we propose Textual Frequency Distillation (TFD), which queries LLMs to complete stories by extending the sentences in the datasets; the resulting corpora are used to adjust the initial frequency estimates. Finally, we propose Curriculum Textual Frequency Training (CTFT), which fine-tunes LLMs in increasing order of sentence-level frequency. Experiments are conducted on our curated Textual Frequency Paired Dataset (TFPD) across math reasoning, machine translation, commonsense reasoning, and agentic tool calling, and the results demonstrate the effectiveness of our framework.
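The abstract leaves the sentence-level frequency estimator unspecified. One plausible instantiation is the mean smoothed log unigram probability against a reference corpus; the sketch below uses a toy corpus, whitespace tokenization, and add-one smoothing, all of which are assumptions rather than the paper's actual method.

```python
import math
from collections import Counter

# Toy reference corpus standing in for the "online resources" the
# paper uses for frequency estimation (an assumption, not the real source).
REFERENCE_CORPUS = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "a cat and a dog sat together ."
).split()

COUNTS = Counter(REFERENCE_CORPUS)
TOTAL = sum(COUNTS.values())


def sentence_log_frequency(sentence: str) -> float:
    """Estimate sentence-level frequency as the mean add-one-smoothed
    log unigram probability of the sentence's tokens."""
    tokens = sentence.lower().split()
    logps = [
        math.log((COUNTS[t] + 1) / (TOTAL + len(COUNTS)))
        for t in tokens
    ]
    return sum(logps) / len(logps)
```

Under TFL, a sentence built from common tokens scores higher than a rare paraphrase of the same content, which is the signal the input paraphraser and CTFT would act on.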
Community
cool
The best paper I've read this year
A fundamental LLM law which suggests that higher-frequency text is preferred during prompting.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Can Linguistically Related Languages Guide LLM Translation in Low-Resource Settings? (2026)
- An Empirical Study of Many-Shot In-Context Learning for Machine Translation of Low-Resource Languages (2026)
- To Words and Beyond: Probing Large Language Models for Sentence-Level Psycholinguistic Norms of Memorability and Reading Times (2026)
- LLM-as-an-Annotator: Training Lightweight Models with LLM-Annotated Examples for Aspect Sentiment Tuple Prediction (2026)
- NCL-UoR at SemEval-2026 Task 5: Embedding-Based Methods, Fine-Tuning, and LLMs for Word Sense Plausibility Rating (2026)
- Learning to Predict Future-Aligned Research Proposals with Language Models (2026)
- Impact of enriched meaning representations for language generation in dialogue tasks: A comprehensive exploration of the relevance of tasks, corpora and metrics (2026)
Adam’s Law: Textual Frequency Law on Large Language Models
This paper identifies and formalizes the Textual Frequency Law (TFL): large language models systematically prefer textual data that appears frequently in their training corpus, both when processing prompts and when being fine-tuned. Rare or unusual phrasings degrade model performance even when they are semantically identical to more common expressions. The authors propose three practical interventions -- an input paraphraser, Textual Frequency Distillation (TFD), and Curriculum Textual Frequency Training (CTFT) -- to exploit this law for improved LLM performance.
Key Idea
The Textual Frequency Law states that LLMs perform better on text expressed in frequent, common patterns and worse on semantically equivalent text expressed in rare or unusual ways. This bias toward frequency is baked into the models through their training data distribution and affects both inference (how well they understand prompts) and learning (how efficiently they absorb new information during fine-tuning).
Method / Approach
The paper proposes three techniques that leverage the Textual Frequency Law. First, an input paraphraser rephrases user inputs into more frequently occurring expressions before feeding them to the LLM, improving comprehension without changing the model. Second, Textual Frequency Distillation (TFD) converts training data into higher-frequency expressions to make fine-tuning more sample-efficient. Third, Curriculum Textual Frequency Training (CTFT) orders fine-tuning data from low to high frequency, letting the model first learn from harder rare examples before consolidating with common patterns.
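The selection and ordering steps described above can be sketched minimally, assuming a `freq_fn` callable that maps a string to an estimated frequency score (the scorer, like the candidate generation, is hypothetical; the paper's actual paraphraser is LLM-based).

```python
def pick_frequent_paraphrase(candidates, freq_fn):
    """Input-paraphraser selection step (sketch): among candidate
    paraphrases of the same input, keep the highest-frequency one,
    following the TFL preference for common phrasing."""
    return max(candidates, key=freq_fn)


def curriculum_order(examples, freq_fn):
    """CTFT ordering (sketch): sort fine-tuning examples by estimated
    frequency in ascending order, so rare expressions come first and
    common ones last, matching the low-to-high curriculum above."""
    return sorted(examples, key=freq_fn)
```

In the full framework, `freq_fn` would be the online-resource estimate adjusted by TFD's story-completion corpora; any string-to-float scorer works for the sketch.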
Results
All three proposed methods yield consistent improvements across benchmarks. The input paraphraser provides a zero-cost boost at inference time, while TFD and CTFT improve fine-tuning outcomes. The curriculum-based ordering (low frequency first, then high) proves particularly effective, suggesting that exposure to rare patterns early followed by reinforcement with common patterns creates a more robust learning trajectory.
Get this paper in your agent:
hf papers read 2604.02176
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash