🦆 zecanard/gemma-4-E2B-it-ultra-uncensored-heretic-MLX-3bit-mixed_3_6
This model was converted to MLX from llmfan46/gemma-4-E2B-it-ultra-uncensored-heretic using mlx-vlm version 0.5.0.
Please refer to the original model card for more details.
🌟 Quality
A mixed-precision quantized vision-language model with an effective 6.461 bits per weight. It combines the size and speed benefits of a 3-bit quant with higher precision where it matters most. It was converted with:
```bash
mlx_vlm.convert --quantize --q-group-size 32 --quant-predicate mixed_3_6
```
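As a back-of-the-envelope check on the 6.461 figure, the sketch below assumes MLX's affine quantization stores one fp16 scale and one fp16 bias per group of 32 weights (the `--q-group-size 32` above). Under that assumption, a 3-bit group costs 4.0 effective bits per weight and a 6-bit group costs 7.0, so the reported average implies roughly 82% of weights sit in 6-bit layers; the actual layer mix is decided by the `mixed_3_6` predicate.

```python
# Back-of-the-envelope effective bits per weight for MLX affine quantization.
# Assumption: each group of `group_size` weights carries one fp16 scale and
# one fp16 bias on top of the quantized values.
def effective_bpw(bits: int, group_size: int = 32) -> float:
    return bits + (16 + 16) / group_size

bpw3 = effective_bpw(3)  # 4.0 effective bits per weight
bpw6 = effective_bpw(6)  # 7.0 effective bits per weight

# Fraction of weights that would need to be 6-bit to average 6.461 bpw:
target = 6.461
frac6 = (target - bpw3) / (bpw6 - bpw3)
print(f"3-bit: {bpw3} bpw, 6-bit: {bpw6} bpw, ~{frac6:.0%} of weights at 6-bit")
```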
🛠️ Customizations
This quant's chat template is aware of the current date and also enables thinking (if available). You may disable thinking by deleting the following line from the chat template, or by changing `true` to `false`:
```jinja
{%- set enable_thinking = true %}
```
You may also need to adjust your environment's Reasoning Section Parsing to recognize `<|channel>thought` as the Start String and `<channel|>` as the End String.
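If you would rather script the template edit than do it by hand, here is a minimal sketch. It assumes the template ships in `tokenizer_config.json` under the `chat_template` key (some MLX exports use a separate `chat_template.json` instead), and the local directory name is just a placeholder:

```python
import json
from pathlib import Path

# Placeholder path to your local copy of the model.
model_dir = Path("gemma-4-E2B-it-ultra-uncensored-heretic-MLX-3bit-mixed_3_6")

# Assumption: the chat template lives in tokenizer_config.json under
# the "chat_template" key.
cfg_path = model_dir / "tokenizer_config.json"
cfg = json.loads(cfg_path.read_text())

# Flip the thinking flag from true to false.
cfg["chat_template"] = cfg["chat_template"].replace(
    "{%- set enable_thinking = true %}",
    "{%- set enable_thinking = false %}",
)
cfg_path.write_text(json.dumps(cfg, indent=2))
```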
🖥️ Use with mlx
```bash
pip install -U mlx-vlm

mlx_vlm.generate --model zecanard/gemma-4-E2B-it-ultra-uncensored-heretic-MLX-3bit-mixed_3_6 --max-tokens 100 --temperature 0 --prompt "Describe this image." --image <path_to_image>
```
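You can also drive the model from Python. The sketch below mirrors the usage pattern in the mlx-vlm README; exact function signatures and keyword arguments can drift between releases, and the image path is a placeholder:

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "zecanard/gemma-4-E2B-it-ultra-uncensored-heretic-MLX-3bit-mixed_3_6"
model, processor = load(model_path)
config = load_config(model_path)

image = ["path/to/image.jpg"]  # placeholder: local path or URL
prompt = "Describe this image."

# Wrap the prompt in the model's chat template (which also injects
# the enable_thinking flag discussed above).
formatted = apply_chat_template(processor, config, prompt, num_images=len(image))

output = generate(model, processor, formatted, image, max_tokens=100, verbose=False)
print(output)
```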
Base model: google/gemma-4-E2B