ik_llama.cpp imatrix Quantizations of XiaomiMiMo/MiMo-V2-Flash

NOTE: ik_llama.cpp can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc. if you want to try it out before downloading my quants.

Some of ik's new quants are supported by the Nexesenex/croco.cpp fork of KoboldCPP, which has Windows builds for CUDA 12.9. Also check for Windows builds by Thireus here, which have been CUDA 12.8.

These quants provide best in class perplexity for the given memory footprint.

Big Thanks

Shout out to Wendell and the Level1Techs crew, the community Forums, YouTube Channel! BIG thanks for providing BIG hardware expertise and access to run these experiments and make these great quants available to the community!!!

Also thanks to all the folks in the quanting and inferencing community on BeaverAI Club Discord and on r/LocalLLaMA for tips and tricks helping each other run, test, and benchmark all the fun new models! Thanks to huggingface for hosting all these big quants!

Finally, I really appreciate the support from aifoundry.org so check out their open source RISC-V based solutions!

Quant Collection

Perplexity computed against wiki.test.raw.

Perplexity Chart
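
The reported numbers come from a llama-perplexity run against wiki.test.raw; a minimal sketch of such an invocation (the thread count is a placeholder, n_ctx=512 matches the chunks reported below):

./build/bin/llama-perplexity \
    --model "$model" \
    -f wiki.test.raw \
    --ctx-size 512 \
    --threads 32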

These first two are just test quants for baseline perplexity comparison:

  • BF16 575.238 GiB (16.003 BPW)
    • Final estimate: PPL over 584 chunks for n_ctx=512 = 9.2268 +/- 0.07375
  • Q8_0 305.682 GiB (8.504 BPW)
    • Final estimate: PPL over 584 chunks for n_ctx=512 = 9.1602 +/- 0.07308
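
As a quick sanity check on the sizes above, BPW is just total file size in bits divided by parameter count; with the rounded 309B parameter figure the BF16 entry works out as expected:

# 575.238 GiB of weights spread over roughly 309B parameters
awk 'BEGIN { printf "%.2f BPW\n", 575.238 * 1024^3 * 8 / 309e9 }'   # ~15.99, i.e. 16.003 with the exact parameter count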

NOTE: The first split file is much smaller on purpose since it only contains metadata, it's fine!
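
If you only want one quant, something like the following huggingface-cli invocation works; the folder/filename pattern in --include is an assumption, so adjust it to match the actual files in this repo. Point --model (or llama-quantize, as in the recipes below) at the first -00001-of-... split and the remaining splits are picked up automatically.

pip install -U "huggingface_hub[cli]"
huggingface-cli download ubergarm/MiMo-V2-Flash-GGUF \
    --include "IQ5_K/*" \
    --local-dir ./MiMo-V2-Flash-GGUF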

IQ5_K 213.151 GiB (5.930 BPW)

Final estimate: PPL over 584 chunks for n_ctx=512 = 9.1730 +/- 0.07317

👈 Secret Recipe
#!/usr/bin/env bash

custom="
# 48 Repeating Layers [0-47]

# Attention
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0

# First 1 Dense Layers [0]
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0

# Routed Experts Layers [1-47]
blk\..*\.ffn_down_exps\.weight=iq6_k
blk\..*\.ffn_(gate|up)_exps\.weight=iq5_k

# Non-Repeating Layers
token_embd\.weight=q8_0
output\.weight=q8_0
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/MiMo-V2-Flash-GGUF/imatrix-MiMo-V2-Flash-BF16.dat \
    /mnt/data/models/ubergarm/MiMo-V2-Flash-GGUF/MiMo-V2-Flash-256x7.2B-BF16-00001-of-00013.gguf \
    /mnt/data/models/ubergarm/MiMo-V2-Flash-GGUF/MiMo-V2-Flash-IQ5_K.gguf \
    IQ5_K \
    128
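
For clarity, the grep/sed pipeline above just drops the comment lines and joins the remaining rules into the single comma-separated string that --custom-q expects, so the command effectively receives:

--custom-q "blk\..*\.attn_q.*=q8_0,blk\..*\.attn_k.*=q8_0,blk\..*\.attn_v.*=q8_0,blk\..*\.attn_output.*=q8_0,blk\..*\.ffn_down\.weight=q8_0,blk\..*\.ffn_(gate|up)\.weight=q8_0,blk\..*\.ffn_down_exps\.weight=iq6_k,blk\..*\.ffn_(gate|up)_exps\.weight=iq5_k,token_embd\.weight=q8_0,output\.weight=q8_0"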

IQ2_KL 106.422 GiB (2.961 BPW)

Final estimate: PPL over 584 chunks for n_ctx=512 = 12.5123 +/- 0.10685

👈 Secret Recipe
#!/usr/bin/env bash

custom="
# 48 Repeating Layers [0-47]

# Attention
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0

# First 1 Dense Layers [0]
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0

# Routed Experts Layers [1-47]
blk\..*\.ffn_down_exps\.weight=iq3_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_kl

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/MiMo-V2-Flash-GGUF/imatrix-MiMo-V2-Flash-BF16.dat \
    /mnt/data/models/ubergarm/MiMo-V2-Flash-GGUF/MiMo-V2-Flash-256x7.2B-BF16-00001-of-00013.gguf \
    /mnt/data/models/ubergarm/MiMo-V2-Flash-GGUF/MiMo-V2-Flash-IQ2_KL.gguf \
    IQ2_KL \
    128

smol-IQ2_KS 82.922 GiB (2.307 BPW)

Final estimate: PPL over 584 chunks for n_ctx=512 = 18.2950 +/- 0.16698

👈 Secret Recipe
#!/usr/bin/env bash

custom="
# 48 Repeating Layers [0-47]

# Attention
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0

# First 1 Dense Layers [0]
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0

# Routed Experts Layers [1-47]
blk\..*\.ffn_down_exps\.weight=iq2_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_ks

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/MiMo-V2-Flash-GGUF/imatrix-MiMo-V2-Flash-BF16.dat \
    /mnt/data/models/ubergarm/MiMo-V2-Flash-GGUF/MiMo-V2-Flash-256x7.2B-BF16-00001-of-00013.gguf \
    /mnt/data/models/ubergarm/MiMo-V2-Flash-GGUF/MiMo-V2-Flash-smol-IQ2_KS.gguf \
    IQ2_KS \
    128

Quick Start

# Clone and checkout
$ git clone https://github.com/ikawrakow/ik_llama.cpp
$ cd ik_llama.cpp

# Build for hybrid CPU+CUDA
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
$ cmake --build build --config Release -j $(nproc)
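
# CPU-only build (a sketch: just leave CUDA off; -DGGML_CUDA=OFF is the standard GGML cmake switch)
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=OFF
$ cmake --build build --config Release -j $(nproc)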

# Full 2x GPU offload
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/MiMo-V2-Flash-GGUF \
    --ctx-size 32768 \
    -ctk q8_0 -ctv q8_0 \
    -sm graph \
    -smgs \
    -mea 256 \
    -ts 42,48 \
    -ngl 99 \
    -ub 2048 -b 2048 \
    --threads 1 \
    --host 127.0.0.1 \
    --port 8080 \
    --no-mmap \
    --jinja

# Hybrid CPU + 2 or more GPUs
# using new "-sm graph" 'tensor parallel' feature!
# https://github.com/ikawrakow/ik_llama.cpp/pull/1080
# https://github.com/ikawrakow/ik_llama.cpp/pull/1105
For examples, take a look at: https://huggingface.co/ubergarm/GLM-4.7-GGUF#quick-start (a rough sketch also follows below).
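
A rough single-GPU-plus-CPU sketch (not taken from this card, treat it as a starting point): it assumes the usual -ngl 99 plus -ot exps=CPU override-tensor pattern to keep the routed experts in system RAM; thread count and batch sizes are placeholders to tune for your hardware.

./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/MiMo-V2-Flash-GGUF \
    --ctx-size 32768 \
    -ctk q8_0 -ctv q8_0 \
    -ngl 99 \
    -ot exps=CPU \
    -ub 4096 -b 4096 \
    --threads 24 \
    --host 127.0.0.1 \
    --port 8080 \
    --no-mmap \
    --jinja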

# CPU Only
numactl -N "$SOCKET" -m "$SOCKET" \
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/MiMo-V2-Flash \
    --ctx-size 65536 \
    --merge-qkv \
    -ctk q8_0 -ctv q8_0 \
    -ub 4096 -b 4096 \
    --parallel 1 \
    --threads 96 \
    --threads-batch 128 \
    --numa numactl \
    --host 127.0.0.1 \
    --port 8080 \
    --no-mmap \
    --no-display-prompt \
    --log-enable \
    --jinja

NOTE: For tool/agentic use you can bring your own template with --chat-template-file myTemplate.jinja and might need --special etc.

NOTE: You may also want to experiment with -ger, see: https://github.com/ikawrakow/ik_llama.cpp/pull/836
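
Once the server is running, a quick smoke test against the OpenAI-compatible chat endpoint looks roughly like this (assuming llama-server's standard /v1/chat/completions route; the alias set above is used as the model name):

curl http://127.0.0.1:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "ubergarm/MiMo-V2-Flash",
      "messages": [{"role": "user", "content": "Hello!"}],
      "max_tokens": 64
    }'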
