# DASD-30B-A3B-Thinking-Preview-qx86-hi-mlx

Below is this model's performance compared to the baseline Qwen3-30B-A3B-Thinking-2507 at a similar quant.

A few other models that contain only Qwen base Brainwaves are included for comparison.

```
DASD     0.462,0.529,0.840,0.636,0.406,0.766,0.596
baseline 0.410,0.444,0.691,0.635,0.390,0.769,0.650
```
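To read the comparison at a glance, here is a minimal sketch that computes per-metric deltas between the DASD and baseline rows above. The metric names are not listed on this card, so the scores are treated as an ordered tuple.

```python
# Minimal sketch: per-metric deltas of DASD vs. the baseline,
# using the seven scores from the rows above (order preserved).
dasd     = [0.462, 0.529, 0.840, 0.636, 0.406, 0.766, 0.596]
baseline = [0.410, 0.444, 0.691, 0.635, 0.390, 0.769, 0.650]

for i, (d, b) in enumerate(zip(dasd, baseline), start=1):
    print(f"metric {i}: {d:.3f} vs {b:.3f} (delta {d - b:+.3f})")
```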

```
Qwen3-30B-A3B-YOYO-V2
qx86-hi  0.531,0.690,0.885,0.685,0.448,0.785,0.646
Qwen3-30B-A3B-YOYO-V3
qx86-hi  0.472,0.550,0.880,0.698,0.442,0.789,0.650
Qwen3-30B-A3B-YOYO-V4
qx86-hi  0.511,0.674,0.885,0.649,0.442,0.769,0.618
Qwen3-30B-A3B-YOYO-V5
qx86-hi  0.511,0.669,0.885,0.653,0.440,0.772,0.619
Qwen3-30B-A3B-YOYO-AutoThink
qx86-hi  0.454,0.481,0.869,0.673,0.404,0.777,0.643
```

Some 30B Nightmedia models

```
Qwen3-30B-A3B-Architect18
qx86-hi  0.577,0.760,0.879,0.760,0.446,0.803,0.702
Qwen3-30B-A3B-Element6-1M
qx86-hi  0.568,0.737,0.880,0.760,0.450,0.803,0.714
```

This model DASD-30B-A3B-Thinking-Preview-qx86-hi-mlx was converted to MLX format from Alibaba-Apsara/DASD-30B-A3B-Thinking-Preview using mlx-lm version 0.30.2.
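
For reference, a stock 8-bit conversion with mlx-lm looks roughly like the sketch below. The qx86-hi quant used here is a custom mixed-precision recipe, so this call approximates the process rather than reproducing the published weights, and the output path is hypothetical.

```python
from mlx_lm import convert

# Sketch of a stock 8-bit mlx-lm conversion. The qx86-hi quant of this
# repo is a custom mixed-precision recipe, so this is an approximation,
# not the exact command that produced these weights.
convert(
    "Alibaba-Apsara/DASD-30B-A3B-Thinking-Preview",
    mlx_path="DASD-30B-A3B-Thinking-Preview-8bit",  # hypothetical output dir
    quantize=True,
    q_bits=8,
)
```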

## Use with mlx

```sh
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("nightmedia/DASD-30B-A3B-Thinking-Preview-qx86-hi-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template, if one is defined
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
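
For incremental output, mlx-lm also ships stream_generate; here is a minimal streaming sketch under the same setup (the max_tokens value is an arbitrary choice):

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("nightmedia/DASD-30B-A3B-Thinking-Preview-qx86-hi-mlx")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print tokens as they are produced instead of waiting for the full response
for response in stream_generate(model, tokenizer, prompt=prompt, max_tokens=256):
    print(response.text, end="", flush=True)
```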
