# Qwen-Scope: Decoding Intelligence, Unleashing Potential
We are excited to introduce Qwen-Scope, an interpretability module trained on the Qwen3 and Qwen3.5 series models. Specifically, we integrated and trained Sparse Autoencoders (SAEs) on Qwen's hidden layers. By imposing sparsity constraints, the SAEs automatically extract data features that are highly decoupled, low-redundancy, and significantly more interpretable. Qwen-Scope can be used not only to analyze the internal mechanisms behind Qwen's behavior, but also holds immense potential for model optimization. Application scenarios include steerable inference control, analysis and comparison of evaluation sample distributions, data classification and synthesis, and model training and optimization. See our technical report for more details.
## Model Details

| Property | Value |
|---|---|
| Base model | Qwen3.5-27B |
| SAE width (d_sae) | 81920 |
| Hidden size (d_model) | 5120 |
| Expansion factor | 16× |
| Top-K | 100 |
| Hook point | Residual stream |
| Layers covered | 0 – 63 (64 layers total) |
| File format | PyTorch `.pt` dict |
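These values are mutually consistent: the SAE width equals the expansion factor times the hidden size, 16 × 5120 = 81920.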
## Architecture

This is a TopK SAE: at each forward pass, exactly 100 features are kept non-zero.

Each checkpoint file `layer{n}.sae.pt` is a Python dict with four tensors:

| Key | Shape | Description |
|---|---|---|
| `W_enc` | (81920, 5120) | Encoder weight matrix |
| `W_dec` | (5120, 81920) | Decoder weight matrix |
| `b_enc` | (81920,) | Encoder bias |
| `b_dec` | (5120,) | Decoder bias |
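As a quick sanity check, here is a minimal sketch (the path is illustrative) that loads one checkpoint, verifies the documented shapes, and runs a single encode → TopK → decode roundtrip. The decode convention `acts @ W_dec.T + b_dec` is an assumption inferred from the shapes above; like the extraction demo below, it does not subtract `b_dec` from the input before encoding.

```python
import torch

# Illustrative path; point this at a downloaded checkpoint.
sae = torch.load("layer0.sae.pt", map_location="cpu")

# Shapes as documented in the table above.
assert sae["W_enc"].shape == (81920, 5120)
assert sae["W_dec"].shape == (5120, 81920)
assert sae["b_enc"].shape == (81920,)
assert sae["b_dec"].shape == (5120,)

# One encode -> TopK -> decode roundtrip on a random residual vector.
x = torch.randn(5120)
pre = x @ sae["W_enc"].T + sae["b_enc"]               # (81920,) pre-activations
vals, idx = pre.topk(100)                             # keep the 100 largest
acts = torch.zeros_like(pre).scatter_(-1, idx, vals)  # sparse feature vector
recon = acts @ sae["W_dec"].T + sae["b_dec"]          # (5120,) reconstruction
```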
## Files

This repository contains one SAE checkpoint per transformer layer (layers 0 – 63):

```
layer0.sae.pt
layer1.sae.pt
...
layer63.sae.pt
```
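To fetch individual checkpoints programmatically, `hf_hub_download` from the `huggingface_hub` library should work with the repository id used in the Gradio command below (the layer index here is just an example):

```python
from huggingface_hub import hf_hub_download

# Download a single layer's SAE checkpoint from the Hub.
path = hf_hub_download(
    repo_id="Qwen/SAE-Res-Qwen3.5-27B-W80K-L0_100",
    filename="layer20.sae.pt",
)
```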
## Feature Activation Extraction

End-to-end demo: run the base LLM, hook the residual stream at a chosen layer, and extract sparse SAE feature activations. In most situations, it is also reasonable to use SAEs trained on a base model to probe the internal processing of its post-trained checkpoints.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# ── 1. Load base model ───────────────────────────────────────────────────────
model_name = "Qwen/Qwen3.5-27B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Note: float32 weights for a 27B model need roughly 100 GB of memory;
# consider torch_dtype=torch.bfloat16 and/or device_map="auto" if needed.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)
model.eval()

# ── 2. Load SAE for a target layer ───────────────────────────────────────────
LAYER = 0  # choose any layer in 0-63
sae = torch.load(f"layer{LAYER}.sae.pt", map_location="cpu")
W_enc = sae["W_enc"]  # (81920, 5120)
b_enc = sae["b_enc"]  # (81920,)

def get_feature_acts(residual: torch.Tensor) -> torch.Tensor:
    """residual: (..., 5120) -> sparse feature activations (..., 81920)"""
    pre_acts = residual @ W_enc.T + b_enc
    topk_vals, topk_idx = pre_acts.topk(100, dim=-1)
    acts = torch.zeros_like(pre_acts)
    acts.scatter_(-1, topk_idx, topk_vals)
    return acts

# ── 3. Hook residual stream after the target transformer layer ───────────────
captured = {}

def _hook(module, input, output):
    hidden = output[0] if isinstance(output, tuple) else output
    captured["residual"] = hidden.detach().cpu()

hook = model.model.layers[LAYER].register_forward_hook(_hook)

# ── 4. Forward pass ──────────────────────────────────────────────────────────
text = "The capital of France is"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    model(**inputs)
hook.remove()

# ── 5. Extract feature activations ───────────────────────────────────────────
residual = captured["residual"]            # (1, seq_len, 5120)
feature_acts = get_feature_acts(residual)  # (1, seq_len, 81920)

# Inspect active features for the last token
last_token_acts = feature_acts[0, -1]  # (81920,)
active_idx = last_token_acts.nonzero(as_tuple=True)[0]
print(f"Active features : {active_idx.tolist()}")
print(f"Feature values  : {last_token_acts[active_idx].tolist()}")
```
## Gradio Demo

We also provide a Gradio demo, `app.py`, which you can run locally:

```bash
python app.py \
    --model Qwen/Qwen3.5-27B \
    --model-name-sae-trained-from qwen3.5-27b \
    --model-name-analyzing-now qwen3.5-27b \
    --sae-path Qwen/SAE-Res-Qwen3.5-27B-W80K-L0_100 \
    --top-k 100 \
    --num-layers 64 \
    --sae-width 81920 \
    --d-model 5120 \
    --server-port 7860
```
## Caution

It is strictly prohibited to use these interpretability tools for purposes other than scientific research, to interfere with model capabilities, or to fabricate, generate, or disseminate harmful information that violates public order, good morals, or socialist core values, including pornographic, violent, discriminatory, or incendiary content. Violators will have their authorization terminated automatically and shall bear all resulting legal liability. The project owner reserves the right of final interpretation of this statement.
## Citation

If you use these SAEs in your research, please cite:

```bibtex
@misc{qwen_scope,
      title={{Qwen-Scope}: Turning Sparse Features into Development Tools for Large Language Models},
      author={Boyi Deng and Xu Wang and Yaoning Wang and Yu Wan and Yubo Ma and Baosong Yang and Haoran Wei and Jialong Tang and Huan Lin and Ruize Gao and Tianhao Li and Qian Cao and Xuancheng Ren and Xiaodong Deng and An Yang and Fei Huang and Dayiheng Liu and Jingren Zhou},
      year={2026},
      eprint={2605.11887},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2605.11887},
}
```