Tags: Image-Text-to-Text · MLX · Safetensors · Transformers · English · gemma4 · 3-bit · text-generation-inference · unsloth · reasoning · conversational · 4-bit precision
Use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="zecanard/gemma-4-31B-it-Claude-Opus-Distilled-v2-MLX-3bit-mixed_3_6")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
print(pipe(text=messages))
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("zecanard/gemma-4-31B-it-Claude-Opus-Distilled-v2-MLX-3bit-mixed_3_6")
model = AutoModelForImageTextToText.from_pretrained("zecanard/gemma-4-31B-it-Claude-Opus-Distilled-v2-MLX-3bit-mixed_3_6")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
Quick Links

🦆 zecanard/gemma-4-31B-it-Claude-Opus-Distilled-v2-MLX-3bit-mixed_3_6

This model was converted to MLX from TeichAI/gemma-4-31B-it-Claude-Opus-Distill-v2 using mlx-vlm version 0.5.0. Please refer to the original model card for more details.

🌟 Quality

Mixed-precision quantized vision language model with an effective 4.256 bits per weight. Combines the size and speed benefits of a 3-bit quant with higher precision where it matters most.

mlx_vlm.convert --quantize --q-group-size 32 --quant-predicate mixed_3_6
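The stated 4.256 effective bits per weight can be sanity-checked with a back-of-the-envelope calculation. The sketch below is illustrative only: it assumes each group of 32 weights carries a 16-bit scale and a 16-bit bias (1 bit/weight of overhead), and the ~8.5% fraction of 6-bit weights is an assumption chosen to match the card's figure, not a value reported by the conversion tool.

```python
# Illustrative estimate of effective bits per weight for a mixed 3/6-bit
# quantization at group size 32. Assumptions (not from the model card):
# each group of 32 weights stores a 16-bit scale and a 16-bit bias.

GROUP_SIZE = 32
OVERHEAD_BITS = 2 * 16 / GROUP_SIZE  # scale + bias per group -> 1 bit/weight

def effective_bpw(frac_6bit: float) -> float:
    """Average bits per weight when `frac_6bit` of the weights use 6-bit
    precision, the rest use 3-bit, plus group metadata overhead."""
    raw = 6 * frac_6bit + 3 * (1 - frac_6bit)
    return raw + OVERHEAD_BITS

# With roughly 8.5% of weights at 6-bit, the average lands near the
# card's stated 4.256 bits per weight.
print(round(effective_bpw(0.0853), 3))  # -> 4.256
```

A pure 3-bit quant under the same assumptions would cost 4.0 bits/weight and a pure 6-bit quant 7.0, which is why a small fraction of 6-bit layers moves the average only modestly above 4.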

🛠️ Customizations

This quant is aware of the current date, and also enables thinking (if available). You may disable this behavior by deleting the following line from the chat template, or changing true to false:

{%- set enable_thinking = true %}

You may also need to adjust your environment’s Reasoning Section Parsing to recognize <|channel>thought as the Start String, and <channel|> as the End String.
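If your environment has no built-in reasoning-section parsing, a small post-processing helper can strip the thought block instead. This is a hypothetical sketch: the marker strings are the ones named above, and the sample text is invented for illustration.

```python
import re

# Delimiters as stated on this card; adjust if your chat template
# emits different tokens.
THOUGHT_START = "<|channel>thought"
THOUGHT_END = "<channel|>"

def strip_reasoning(text: str) -> str:
    """Remove everything between the reasoning start and end markers,
    then trim surrounding whitespace from what remains."""
    pattern = re.escape(THOUGHT_START) + r".*?" + re.escape(THOUGHT_END)
    return re.sub(pattern, "", text, flags=re.DOTALL).strip()

raw = "<|channel>thought The candy shows a shell...<channel|> It is a turtle."
print(strip_reasoning(raw))  # -> It is a turtle.
```

The non-greedy `.*?` with `re.DOTALL` keeps the match confined to a single thought block even when the reasoning spans multiple lines.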

🖥️ Use with mlx

pip install -U mlx-vlm
mlx_vlm.generate --model zecanard/gemma-4-31B-it-Claude-Opus-Distilled-v2-MLX-3bit-mixed_3_6 --max-tokens 100 --temperature 0 --prompt "Describe this image." --image <path_to_image>
Model size: 5B params · Tensor types: BF16, U32 · Formats: Safetensors, MLX