TAG-MoE: Task-Aware Gating for Unified Generative Mixture-of-Experts
Yu Xu1,2†, Hongbin Yan1, Juan Cao1, Yiji Cheng2, Tiankai Hang2, Runze He2, Zijin Yin2, Shiyi Zhang2, Yuxin Zhang1, Jintao Li1, Chunyu Wang2‡, Qinglin Lu2, Tong-Yee Lee3, Fan Tang1§
1University of Chinese Academy of Sciences, 2Tencent Hunyuan, 3National Cheng-Kung University
Abstract:
Unified image generation and editing models suffer from severe task interference in dense diffusion transformer architectures, where a shared parameter space must compromise between conflicting objectives (e.g., local editing vs. subject-driven generation). While the sparse Mixture-of-Experts (MoE) paradigm is a promising solution, its gating networks remain task-agnostic: they route on local features without awareness of the global task intent, which prevents meaningful specialization and fails to resolve the underlying interference. In this paper, we propose a novel framework that injects semantic intent into MoE routing. We introduce a Hierarchical Task Semantic Annotation scheme that creates structured task descriptors (e.g., scope, type, preservation), and design Predictive Alignment Regularization to align internal routing decisions with the task's high-level semantics. This regularization turns the gating network from a task-agnostic executor into a task-aware dispatch center. Our model effectively mitigates task interference, outperforming dense baselines in fidelity and quality, and our analysis shows that experts naturally develop clear, semantically correlated specializations.
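For readers who want a concrete picture of task-aware routing, the sketch below is a minimal, illustrative PyTorch module, not the released TAG-MoE implementation. The module name TaskAwareGate, the concatenation of token and task features, and the KL-based alignment term are all assumptions used only to convey the idea of conditioning a top-k gate on a task descriptor and regularizing routing toward a task-derived prior.

```python
# Illustrative sketch only (not the released TAG-MoE code): a top-k MoE gate
# conditioned on a task-descriptor embedding, with an auxiliary loss that
# nudges routing probabilities toward a prior predicted from the task.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskAwareGate(nn.Module):
    def __init__(self, dim, num_experts, task_dim, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim + task_dim, num_experts)   # token + task features -> expert logits
        self.task_head = nn.Linear(task_dim, num_experts)      # task descriptor -> target routing prior

    def forward(self, tokens, task_emb):
        # tokens: (B, N, dim); task_emb: (B, task_dim)
        task = task_emb.unsqueeze(1).expand(-1, tokens.size(1), -1)
        logits = self.router(torch.cat([tokens, task], dim=-1))
        probs = logits.softmax(dim=-1)                          # per-token routing distribution
        topk_w, topk_idx = probs.topk(self.top_k, dim=-1)       # sparse expert selection

        # Alignment-style regularizer (assumption): match the mean routing
        # distribution to a prior predicted from the task descriptor.
        prior = self.task_head(task_emb).softmax(dim=-1)        # (B, num_experts)
        align_loss = F.kl_div(probs.mean(dim=1).log(), prior, reduction="batchmean")
        return topk_w, topk_idx, align_loss
```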
Environment Setup
We recommend using uv with the provided pyproject.toml / uv.lock.
1. Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
2. Clone the repository, then create and activate a virtual environment
git clone https://github.com/ICTMCG/TAG-MoE.git && cd TAG-MoE
uv venv --python 3.12 --python-preference only-managed
source .venv/bin/activate
3. Install dependencies
UV_EXTRA_INDEX_URL=https://download.pytorch.org/whl/cu126 uv sync
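Optionally, you can confirm that the CUDA-enabled PyTorch wheel was picked up. This is a minimal check, not part of the project scripts; run it inside the activated environment (e.g., paste it into a `uv run python` session):

```python
# Optional sanity check after `uv sync`: verify PyTorch sees CUDA and both GPUs.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())
```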
Inference
Note:
- TAG-MoE inference requires 60GB+ of available VRAM. We ran our inference tests on 2× A100 40GB GPUs.
- We use a vision-language model (VLM) to refine the instruction based on the input image for better results.
uv run python infer.py \
--pretrained_model_path Qwen/Qwen-Image \
--transformer_model_path YUXU915/TAG-MoE \
--device 0,1 \
--image input.png \
--prompt "Change the weather to sunny, remove the umbrella, the girl's hands hung naturally at her sides" \
--output result.png
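If you want to edit a whole folder of images with the same settings, a simple driver that shells out to infer.py works. This is an illustrative helper, not part of the repository; it assumes infer.py accepts exactly the flags shown above and handles one image per invocation (in practice you would likely supply a different prompt per image).

```python
# Illustrative batch driver (not part of the repo): runs infer.py once per image,
# reusing the flags from the single-image example above.
import subprocess
from pathlib import Path

prompt = "Change the weather to sunny, remove the umbrella, the girl's hands hung naturally at her sides"
Path("outputs").mkdir(exist_ok=True)

for img in sorted(Path("inputs").glob("*.png")):
    subprocess.run([
        "uv", "run", "python", "infer.py",
        "--pretrained_model_path", "Qwen/Qwen-Image",
        "--transformer_model_path", "YUXU915/TAG-MoE",
        "--device", "0,1",
        "--image", str(img),
        "--prompt", prompt,
        "--output", f"outputs/{img.stem}_edited.png",
    ], check=True)
```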
WebUI
uv run python run_gradio.py \
--pretrained_model_path Qwen/Qwen-Image \
--transformer_model_path YUXU915/TAG-MoE \
--device 0,1
Citation
If you find this work useful, please consider citing:
@misc{xu2026tagmoetaskawaregatingunified,
title={TAG-MoE: Task-Aware Gating for Unified Generative Mixture-of-Experts},
author={Yu Xu and Hongbin Yan and Juan Cao and Yiji Cheng and Tiankai Hang and Runze He and Zijin Yin and Shiyi Zhang and Yuxin Zhang and Jintao Li and Chunyu Wang and Qinglin Lu and Tong-Yee Lee and Fan Tang},
year={2026},
eprint={2601.08881},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2601.08881},
}
Acknowledgements
This project builds upon the following excellent open-source works:
- Diffusers – https://github.com/huggingface/diffusers
- MegaBlocks – https://github.com/databricks/megablocks