---
base_model: TeichAI/gemma-4-31B-it-Claude-Opus-Distill-v2
language: en
pipeline_tag: image-text-to-text
library_name: mlx
tags:
  - mlx
  - 3-bit
  - text-generation-inference
  - transformers
  - unsloth
  - gemma4
  - reasoning
license: apache-2.0
datasets:
  - TeichAI/Claude-Opus-4.6-Reasoning-887x
  - TeichAI/claude-4.5-opus-high-reasoning-250x
  - Crownelius/Opus-4.6-Reasoning-2100x-formatted
---
# 🦆 zecanard/gemma-4-31B-it-Claude-Opus-Distilled-v2-MLX-3bit-mixed_3_6

[This model](https://huggingface.co/zecanard/gemma-4-31B-it-Claude-Opus-Distilled-v2-MLX-3bit-mixed_3_6) was converted to MLX from [`TeichAI/gemma-4-31B-it-Claude-Opus-Distill-v2`](https://huggingface.co/TeichAI/gemma-4-31B-it-Claude-Opus-Distill-v2) using `mlx-vlm` version **0.5.0**.
Please refer to the [original model card](https://huggingface.co/TeichAI/gemma-4-31B-it-Claude-Opus-Distill-v2) for more details.

## 🌟 Quality

A mixed-precision quantized vision-language model with an effective **4.256 bits per weight**. It combines the size and speed benefits of a 3-bit quant with higher precision where it matters most.

`mlx_vlm.convert --quantize --q-group-size 32 --quant-predicate mixed_3_6`
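For intuition on the effective bit count, here is a back-of-the-envelope sketch. The group metadata cost (fp16 scale and bias per group, as MLX's affine quantization typically stores) and the 6-bit layer fraction are illustrative assumptions, not converter output:

```python
# Illustrative sketch, NOT the exact mlx_vlm accounting: with group size 32 and
# an fp16 scale + fp16 bias per group, each quantized weight carries roughly
# one extra bit of metadata (32 bits / 32 weights).
GROUP_SIZE = 32
METADATA_BITS = 16 + 16  # assumed fp16 scale + fp16 bias per group

def effective_bpw(bits: int) -> float:
    """Effective bits per weight for one quantized tensor, metadata included."""
    return bits + METADATA_BITS / GROUP_SIZE

def mixed_bpw(frac_high: float, low: int = 3, high: int = 6) -> float:
    """Weighted average over a mix of low- and high-precision layers."""
    return (1 - frac_high) * effective_bpw(low) + frac_high * effective_bpw(high)

# A 3-bit layer then costs ~4.0 bpw and a 6-bit layer ~7.0 bpw; under these
# assumptions, ~8.5% of weights at 6 bits lands near the reported figure.
print(round(mixed_bpw(0.0853), 3))
```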

## 🛠️ Customizations

This quant's chat template injects the current date and also enables thinking (if the model supports it). You may disable thinking by deleting the following line from the chat template, or by changing `true` to `false`:

`{%- set enable_thinking = true %}`

You may also need to adjust your environment’s **Reasoning Section Parsing** to recognize `<|channel>thought` as the **Start String**, and `<channel|>` as the **End String**.
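If your client doesn't expose those parsing settings, the reasoning section can be split off manually. A minimal sketch, assuming the delimiter strings above (`split_reasoning` is a hypothetical helper, not part of `mlx-vlm`):

```python
# Delimiter strings taken from the section above; adjust if your quant differs.
THOUGHT_START = "<|channel>thought"
THOUGHT_END = "<channel|>"

def split_reasoning(text: str) -> tuple[str, str]:
    """Return (reasoning, answer); reasoning is empty if no thought block is found."""
    start = text.find(THOUGHT_START)
    end = text.find(THOUGHT_END)
    if start == -1 or end == -1 or end < start:
        return "", text.strip()
    reasoning = text[start + len(THOUGHT_START):end].strip()
    answer = text[end + len(THOUGHT_END):].strip()
    return reasoning, answer
```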

## 🖥️ Use with `mlx`

```bash
pip install -U mlx-vlm
```

```bash
mlx_vlm.generate --model zecanard/gemma-4-31B-it-Claude-Opus-Distilled-v2-MLX-3bit-mixed_3_6 --max-tokens 100 --temperature 0 --prompt "Describe this image." --image <path_to_image>
```