# Windows RTX 4060 Ti 8GB — MoE Expert Offload Bench (2026-05)

Practitioner test of MoE expert offload on Qwen3.6-35B-A3B (GGUF) with a 32GB-RAM consumer rig. Two regimes tested: full offload (all experts → CPU) and partial offload (`-ncmoe N` sweep via llama-server built from source).

## TL;DR

- Full offload (all experts → CPU): RAM-ceilinged at 7.40 tok/sec on 32GB. Recipe assumes 64GB.
- Partial offload (`-ncmoe 30`, 10 layers' experts on GPU): 35.36 tok/sec at 32K context. 4.8x faster.
- The 8GB VRAM cliff is sharp — crossing ~7 GB total GPU usage halves throughput instantly.

## Dataset preview

The dataset contains eight runs, all of `Qwen/Qwen3.6-35B-A3B` (GGUF) on the same rig. Shared settings: prompt `moe-explainer-500w` (~500-word target), temperature 0.7, q8_0 K/V cache quantization, flash attention on, max GPU offload with expert weights on CPU.

| Date | Quant | Regime | Engine (platform) | Context | Decode tok/s | TTFT (s) | Notes |
|---|---|---|---|---|---|---|---|
| 2026-05-06 | Q4_K_S | full offload | LM Studio (Windows 11) | 16K | 7.40 | 1.75 | RAM-ceilinged on 32GB system (31 GB peak). Reproduces 5.5x slower than the public X-post claim of 41 tok/sec — same author's thread requires 64GB RAM as the unstated prerequisite. GGUF repackaged by bartowski. 761 tokens; quality 4/5, hit length target. |
| 2026-05-06 | Q4_K_S | full offload | LM Studio (Windows 11) | 32K | 7.43 | 1.85 | Doubling context from 16K to 32K barely moved tok/sec (7.40 → 7.43). Flat curve = constant overhead dominates, not KV-attention-scan cost. Consistent with disk-paging bottleneck rather than RAM-bandwidth bottleneck. 796 tokens; quality 4/5, hit length target. |
| 2026-05-06 | IQ4_XS | full offload | LM Studio (Windows 11) | 16K | 7.40 | n/a | Speed-only run, no quality eval captured. Smaller quant (~2GB less than Q4_K_S, 18.81GB vs 20.59GB) tested to isolate quant size as variable. Same tok/sec confirms quant size is not the lever — RAM ceiling is the binding constraint regardless of weight f... |
| 2026-05-07 | Q4_K_M | partial offload | llama-server (WSL2) | 16K | 29.70 | 2.44 | ncmoe=32 (8 layers' experts on GPU, 32 on CPU). Prompt: 20.46 tok/s. Total: 36.9s. VRAM ~6649 MiB / 8187 MiB. CPU-mapped experts: 16581 MiB. Quantized by Unsloth (UD). Baseline partial offload — clean, no VRAM pressure. 4x faster than ... |
| 2026-05-07 | Q4_K_M | partial offload | llama-server (WSL2) | 16K | 32.00 | 1.18 | ncmoe=30 (10 layers' experts on GPU, 30 on CPU). Prompt: 42.48 tok/s. Total: 33.2s. Sweet spot for 16K context. 7.7% faster decode than ncmoe 32. 4.3x faster than full offload. |
| 2026-05-07 | Q4_K_M | partial offload | llama-server (WSL2) | 16K | 16.33 | 2.05 | ncmoe=28 (12 layers' experts on GPU, 28 on CPU). Prompt: 24.38 tok/s. Total: 64.7s. VRAM pressure triggered — CUDA page faults through PCIe halved throughput. Sharp cliff, not gradual. |
| 2026-05-07 | Q4_K_M | partial offload | llama-server (WSL2) | 32K | 35.36 | 1.44 | ncmoe=30 (10 layers' experts on GPU, 30 on CPU). Prompt: 34.72 tok/s. Total: 30.4s. Best overall config. Hybrid SSM+attention means KV cache only scales for 10/40 attention layers — 32K is nearly free. 4.8x faster than full offload. Re... |
| 2026-05-07 | Q4_K_M | partial offload | llama-server (WSL2) | 65K | 17.41 | 2.83 | ncmoe=30 (10 layers' experts on GPU, 30 on CPU). Prompt: 17.70 tok/s. Total: 61.7s. VRAM cliff at 65K — KV cache for 10 attention layers at q8_0 grows to ~680 MiB, pushing total GPU past ~7 GB. Same ~50% throughput drop as ncmoe 28 at ... |

Full-offload rows (regime `moe_cpu_expert_offload`): LM Studio on native Windows 11, thinking disabled, top_p 0.9, warm runs. Partial-offload rows (regime `moe_partial_cpu_expert_offload`): llama-server (llama.cpp b9049-2496f9c14) under WSL2 Ubuntu 26.04, `-ngl 999`, 6 threads, thinking enabled, 1024 tokens generated, cold server per config; no quality scores captured.
## Hardware
| Component | Spec |
|---|---|
| GPU | RTX 4060 Ti 8GB |
| System RAM | 32GB DDR5-6000 dual-channel (EXPO confirmed at 6000 MT/s) |
| CPU | AMD Ryzen 5 7600X |
| Platform | Windows 11 (full offload: LM Studio native; partial offload: WSL2 Ubuntu 26.04) |
## Data files

| File | Regime | Runs | Date |
|---|---|---|---|
| `data/bench-2026-05-06.jsonl` | Full offload (LM Studio, `-ncmoe 99` equivalent) | 3 | 2026-05-06 |
| `data/bench-2026-05-07.jsonl` | Partial offload (`-ncmoe` sweep via llama-server) | 5 | 2026-05-07 |
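A minimal loading sketch, assuming each line of the JSONL files carries the flattened schema shown in the preview above (top-level `quantization` and `regime` strings plus nested `runtime` and `benchmark` dicts):

```python
import json
from pathlib import Path

def load_runs(data_dir: str = "data"):
    """Yield one dict per benchmark run from every bench-*.jsonl file."""
    for path in sorted(Path(data_dir).glob("bench-*.jsonl")):
        with path.open(encoding="utf-8") as f:
            for line in f:
                if line.strip():
                    yield json.loads(line)

if __name__ == "__main__":
    runs = list(load_runs())
    # Slowest full-offload decode speed is the RAM-ceilinged floor (7.40 tok/s here).
    floor = min(r["benchmark"]["tok_per_sec"] for r in runs
                if r["regime"] == "moe_cpu_expert_offload")
    for r in runs:
        bench, rt = r["benchmark"], r["runtime"]
        print(f'{r["quantization"]:>7}  ctx={rt["context_length"]:>6}  '
              f'{bench["tok_per_sec"]:>6.2f} tok/s  '
              f'({bench["tok_per_sec"] / floor:.1f}x vs full-offload floor)')
```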
## Methodology
Same MoE-explainer prompt across all runs (~500-word target).
### Full offload runs (2026-05-06)

- Engine: LM Studio
- "Force Model Expert Weights onto CPU" → ON (equivalent to `--n-cpu-moe 99`)
- KV cache: q8_0. Flash attention: ON. Thinking: disabled.
- GPU offload: max. Temperature: 0.7, Top-p: 0.9.
### Partial offload runs (2026-05-07)

- Engine: llama-server (llama.cpp b9049-2496f9c14, ggml 0.11.0) built from source inside WSL2
- CUDA 13.2.1, compute capability 8.9
- Flags: `-ngl 999 -ncmoe N -fa on --cache-type-k q8_0 --cache-type-v q8_0 -t 6`
- Thinking: ON (Qwen3.6 default — all tokens went to reasoning_content)
- Temperature: 0.7. Max tokens: 1024. Cold server per config.
- Variables: `-ncmoe` (32, 30, 28) and `-c` (16384, 32768, 65536)
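The tok/s and TTFT figures come from llama-server's own timing output. As a rough external cross-check, a streaming client along these lines can be pointed at the server's OpenAI-compatible endpoint (a sketch only: it assumes the server is already up on its default port 8080, uses the third-party `requests` library, and approximates one generated token per streamed chunk):

```python
import time

import requests  # third-party; pip install requests

URL = "http://localhost:8080/v1/chat/completions"  # llama-server default port

def timed_run(prompt: str, max_tokens: int = 1024) -> None:
    t0 = time.time()
    first = None
    chunks = 0
    resp = requests.post(URL, stream=True, json={
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.7,
        "stream": True,
    })
    for line in resp.iter_lines():
        if not line.startswith(b"data: "):
            continue
        if line == b"data: [DONE]":
            break
        if first is None:
            first = time.time() - t0   # time to first streamed token (TTFT)
        chunks += 1                    # ~one generated token per SSE chunk
    total = time.time() - t0
    print(f"TTFT {first:.2f}s, decode ~{chunks / (total - first):.2f} tok/s")

# Stand-in prompt; the dataset's actual moe-explainer-500w prompt text is not published here.
timed_run("Explain mixture-of-experts LLM inference in about 500 words.")
```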
## Findings — Full offload
- Q4_K_S @ 16K: 7.40 tok/sec
- Q4_K_S @ 32K: 7.43 tok/sec (flat — disk-paging bottleneck, not KV-scan cost)
- IQ4_XS @ 16K: ~7.40 tok/sec (same speed — quant size is not the lever)
The bottleneck is system RAM size: the 32GB system pages expert weights to NVMe during inference.
## Findings — Partial offload (`-ncmoe` sweep)
| ncmoe | Context | Experts on GPU | Decode tok/sec | Notes |
|---|---|---|---|---|
| 32 | 16K | 8 layers | 29.70 | Clean, no VRAM pressure |
| 30 | 16K | 10 layers | 32.00 | Sweet spot at 16K |
| 28 | 16K | 12 layers | 16.33 | VRAM cliff — PCIe page faults |
| 30 | 32K | 10 layers | 35.36 | Best overall config |
| 30 | 65K | 10 layers | 17.41 | VRAM cliff from KV cache growth |
## Key insights

- 8GB VRAM cliff is sharp, not gradual. Crossing ~7 GB total GPU usage triggers CUDA page faults through PCIe. Throughput drops ~50% instantly.
- Hybrid SSM+attention makes context scaling cheap. Only 10/40 layers use attention (`full_attention_interval=4`), so the KV cache is 1/4 of what a pure-attention model needs; 32K costs ~170 MiB extra over 16K (worked through in the sketch after this list).
- Optimal config: `-ncmoe 30 -c 32768` — 35.36 tok/sec, 4.8x faster than full offload.
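The KV-cache arithmetic in the second bullet checks out against a back-of-the-envelope model. A sketch, assuming a per-layer KV width of 512 (e.g. 4 KV heads with head dim 128; this width is inferred from the card's own ~170 MiB and ~680 MiB figures, not from a published spec) and ggml's q8_0 layout of 34 bytes per 32 values:

```python
N_ATTN_LAYERS = 10              # 10 of 40 layers use full attention (per this card)
KV_WIDTH = 4 * 128              # ASSUMED: 4 KV heads x head_dim 128 = 512 per K or V
Q8_0_BYTES_PER_VALUE = 34 / 32  # ggml q8_0 block: 32 int8 values + fp16 scale = 34 bytes

def kv_cache_mib(ctx: int) -> float:
    """K + V cache for the attention layers at q8_0, in MiB."""
    return 2 * N_ATTN_LAYERS * ctx * KV_WIDTH * Q8_0_BYTES_PER_VALUE / 2**20

for ctx in (16384, 32768, 65536):
    print(f"ctx {ctx:>6}: {kv_cache_mib(ctx):5.0f} MiB")
# ctx  16384:   170 MiB
# ctx  32768:   340 MiB  (+170 MiB over 16K, matching the bullet above)
# ctx  65536:   680 MiB  (the ~680 MiB cliff the 65K run hit)
```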
## Implication

On 8GB VRAM + 32GB RAM, partial expert offload via `-ncmoe` is the correct approach — not full offload. Building llama.cpp from source gives access to the `-ncmoe N` flag that LM Studio's slider doesn't expose cleanly. The sweet spot is hardware-specific: find the highest number of expert layers on GPU that fits within ~87% of VRAM.
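One way to apply that rule of thumb during a sweep: after launching a candidate `-ncmoe` config and running a warm-up request, check GPU headroom before investing in a full benchmark. A hypothetical helper (assumes `nvidia-smi` is on PATH; the 8187 MiB total is what this card's logs report for the 4060 Ti, and the 87% budget is this card's heuristic, not a universal constant):

```python
import subprocess

VRAM_TOTAL_MIB = 8187                         # as reported in this card's run logs
VRAM_BUDGET_MIB = int(VRAM_TOTAL_MIB * 0.87)  # ~87% rule of thumb from above

def gpu_mem_used_mib() -> int:
    """Current GPU memory usage via nvidia-smi (first GPU)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"],
        text=True,
    )
    return int(out.splitlines()[0])

used = gpu_mem_used_mib()
verdict = "OK" if used <= VRAM_BUDGET_MIB else "over budget: expect the PCIe cliff"
print(f"{used} MiB used / {VRAM_BUDGET_MIB} MiB budget -> {verdict}")
```

In this card's data, the two configs that crossed ~7 GB (ncmoe 28 at 16K, ncmoe 30 at 65K) are exactly the two halved-throughput rows.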
## See also

- Companion catalog (dense models on the same rig): `windows-rtx-4060ti-8gb-bench-2026-05`