How to use nyu-visionx/webssl300m_decoder with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-to-image", model="nyu-visionx/webssl300m_decoder")

# Or load the model directly
from transformers import AutoImageProcessor, AutoModelForPreTraining

processor = AutoImageProcessor.from_pretrained("nyu-visionx/webssl300m_decoder")
model = AutoModelForPreTraining.from_pretrained("nyu-visionx/webssl300m_decoder")
```
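As a quick sanity check, the pipeline can be run on a single image. A minimal sketch, assuming the checkpoint wires up the standard `image-to-image` pipeline; the sample URL is illustrative, and the exact return type depends on the pipeline implementation:

```python
# Minimal usage sketch: feed one RGB image through the pipeline.
import requests
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative example image
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# For an RAE decoder checkpoint, this amounts to encoding the image into
# semantic latents and decoding it back to pixels.
reconstruction = pipe(image)
```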
---
license: mit
library_name: transformers
pipeline_tag: image-to-image
---
# Scale-RAE: Scaling Text-to-Image Diffusion Transformers with Representation Autoencoders
Official model weights for the paper [Scaling Text-to-Image Diffusion Transformers with Representation Autoencoders](https://huggingface.co/papers/2601.16208).
Representation Autoencoders (RAEs) enable diffusion modeling in high-dimensional semantic latent spaces. Scale-RAE scales this framework to large-scale, freeform text-to-image generation. RAEs consistently outperform traditional VAEs during pretraining across various model scales, offering faster convergence and better generation quality.
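Concretely, an RAE swaps the VAE of a latent diffusion pipeline for a frozen pretrained vision encoder paired with a decoder trained to invert it, so the diffusion model operates on the encoder's semantic features. A minimal structural sketch; the class and module names are illustrative, not the Scale-RAE API:

```python
# Structural sketch of an RAE (illustrative names, not the Scale-RAE API).
import torch
import torch.nn as nn

class RepresentationAutoencoder(nn.Module):
    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder  # frozen pretrained vision encoder (e.g. a ViT)
        self.decoder = decoder  # trained to reconstruct pixels from its features
        for p in self.encoder.parameters():
            p.requires_grad_(False)  # the encoder stays fixed

    def encode(self, pixels: torch.Tensor) -> torch.Tensor:
        # High-dimensional semantic latents (e.g. patch tokens),
        # in contrast to a VAE's low-dimensional compressed latents.
        return self.encoder(pixels)

    def decode(self, latents: torch.Tensor) -> torch.Tensor:
        return self.decoder(latents)
```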
- **Project Page:** [https://rae-dit.github.io/scale-rae/](https://rae-dit.github.io/scale-rae/)
- **GitHub Repository:** [https://github.com/ZitengWangNYU/Scale-RAE](https://github.com/ZitengWangNYU/Scale-RAE)
- **Paper:** [https://arxiv.org/abs/2601.16208](https://arxiv.org/abs/2601.16208)
## Usage
For full text-to-image generation using Scale-RAE, please follow the installation and inference instructions in the [official repository](https://github.com/ZitengWangNYU/Scale-RAE).
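At a high level, generation denoises in the RAE latent space with a diffusion transformer and then decodes the final latents to pixels. The sketch below is schematic only; `dit`, `rae_decoder`, `text_embedding`, the latent shape, and the Euler-style update are placeholders, and the real entry points live in the official repository:

```python
# Schematic sampling loop: DiT denoises in the RAE latent space,
# then the RAE decoder maps latents to pixels. Names are placeholders.
import torch

@torch.no_grad()
def sample(dit, rae_decoder, text_embedding,
           latent_shape=(1, 256, 768), steps=50):
    x = torch.randn(latent_shape)              # start from noise in latent space
    for i in reversed(range(steps)):
        t = torch.full((latent_shape[0],), i / steps)
        v = dit(x, t, text_embedding)          # DiT predicts a noise/velocity field
        x = x - v / steps                      # placeholder Euler-style update
    return rae_decoder(x)                      # decode semantic latents to an image
```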
## Citation
```bibtex
@article{scale-rae-2026,
  title={Scaling Text-to-Image Diffusion Transformers with Representation Autoencoders},
  author={Shengbang Tong and Boyang Zheng and Ziteng Wang and Bingda Tang and Nanye Ma and Ellis Brown and Jihan Yang and Rob Fergus and Yann LeCun and Saining Xie},
  journal={arXiv preprint arXiv:2601.16208},
  year={2026}
}
```