DataFlex: A Unified Framework for Data-Centric Dynamic Training of Large Language Models
Abstract
DataFlex is a unified framework for dynamic data-centric training of large language models that supports sample selection, domain mixture adjustment, and sample reweighting while maintaining compatibility with standard training workflows and enabling efficient large-scale deployment.
Data-centric training has emerged as a promising direction for improving large language models (LLMs) by optimizing not only model parameters but also the selection, composition, and weighting of training data throughout training. However, existing approaches to data selection, data mixture optimization, and data reweighting are often developed in isolated codebases with inconsistent interfaces, hindering reproducibility, fair comparison, and practical integration. In this paper, we present DataFlex, a unified data-centric dynamic training framework built upon LLaMA-Factory. DataFlex supports three major paradigms of dynamic data optimization: sample selection, domain mixture adjustment, and sample reweighting, while remaining fully compatible with the original training workflow. It provides extensible trainer abstractions and modular components that make it a drop-in replacement for standard LLM training, and it unifies key model-dependent operations such as embedding extraction, inference, and gradient computation, with support for large-scale settings including DeepSpeed ZeRO-3. We conduct comprehensive experiments across multiple data-centric methods. Dynamic data selection consistently outperforms static full-data training on MMLU for both Mistral-7B and Llama-3.2-3B. For data mixture, DoReMi and ODM improve both MMLU accuracy and corpus-level perplexity over default proportions when pretraining Qwen2.5-1.5B on SlimPajama at the 6B and 30B token scales. DataFlex also achieves consistent runtime improvements over the original implementations. These results demonstrate that DataFlex provides an effective, efficient, and reproducible infrastructure for data-centric dynamic training of LLMs.
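The "extensible trainer abstraction" idea from the abstract can be sketched as a base class whose training loop stays unchanged while the data fed to each step is dynamically rearranged. This is a minimal illustrative sketch, not DataFlex's actual API: the class and method names (`DynamicTrainer`, `SelectionTrainer`, `arrange_batch`, `score_fn`) are assumptions made for this example.

```python
from abc import ABC, abstractmethod


class DynamicTrainer(ABC):
    """Hypothetical base class for data-centric dynamic training.

    The surrounding training loop is unchanged; subclasses only decide
    which samples (or weights, or domain mixtures) each step sees.
    """

    def __init__(self, pool):
        self.pool = pool  # full training data pool

    @abstractmethod
    def arrange_batch(self, step, batch_size):
        """Return the samples to train on at the given step."""


class SelectionTrainer(DynamicTrainer):
    """Dynamic sample selection: score the pool, keep the top-k."""

    def __init__(self, pool, score_fn):
        super().__init__(pool)
        self.score_fn = score_fn  # e.g. a loss- or gradient-based utility

    def arrange_batch(self, step, batch_size):
        # Re-rank the pool each step so selection can track the model.
        ranked = sorted(self.pool, key=self.score_fn, reverse=True)
        return ranked[:batch_size]


# Usage: a toy utility score in place of a real model-dependent one.
pool = [{"text": f"sample-{i}", "utility": i % 5} for i in range(20)]
trainer = SelectionTrainer(pool, score_fn=lambda s: s["utility"])
batch = trainer.arrange_batch(step=0, batch_size=4)
```

Domain-mixture or reweighting trainers would subclass the same base and override only the batch-arrangement logic, which is what makes the framework a drop-in replacement for a standard training loop.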
Community
DataFlex is a data-centric training framework that enhances model performance by selecting the most influential samples, reweighting them, or adjusting their domain mixing ratios.
DataFlex is a unified data-centric dynamic training framework for large language models, jointly developed by the PKU DCAI lab and the LLaMA-Factory team. It offers one-stop support for three core capabilities: data selection, data mixture, and sample reweighting. It is fully compatible with the native training workflow, supports large-scale training with DeepSpeed ZeRO-3, and substantially improves experiment reproducibility and model performance. Whether for research or real-world development, it is very practical. Feedback and discussion are welcome!
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Towards Next-Generation LLM Training: From the Data-Centric Perspective (2026)
- On Representation Redundancy in Large-Scale Instruction Tuning Data Selection (2026)
- AlignTune: Modular Toolkit for Post-Training Alignment of Large Language Models (2026)
- AngelSlim: A more accessible, comprehensive, and efficient toolkit for large model compression (2026)
- Late-to-Early Training: LET LLMs Learn Earlier, So Faster and Better (2026)
- Two-Stage Optimizer-Aware Online Data Selection for Large Language Models (2026)
- OPUS: Towards Efficient and Principled Data Selection in Large Language Model Pre-training in Every Iteration (2026)
Found a good walkthrough of this at https://arxivexplained.com/p/dataflex-a-unified-framework-for-data-centric-dynamic-training-of-large-language-models. The data-centric angle is underrated, imo. Most people focus on architecture changes, but what's in the training data, and how it shifts over time, matters just as much.