Q5_K_M

#6
by gagamaga13 - opened

Hi! The Q5_K_M version is a good fit for the 5060 Ti (16 GB) to strike a balance between speed and intelligence. Will it be made? Also, the multimodal projector in the Q8 quant complements this setup perfectly.

Hello, currently not planning more quants :)

I could probably quantize if you want and upload it; Or you could do it yourself.

I’m trying to learn how to do this myself, but if you can handle it, that’d be great.

Gotcha. The only method I'm familiar with for quantizing is the llama.cpp tools, which include llama-quantize.

https://github.com/ggml-org/llama.cpp - probably grab the latest version, or if you already have the llama.cpp tools via git or something you can get it that way.

Effectively, on the command line you'd run: llama-quantize --allow-requantize --imatrix [matrix file, if any] model-f32.gguf outfile.gguf code

The code is the quant type you're targeting, which in this case is 17 for Q5_K_M (run the program with no parameters and it prints the full list for reference).
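For reference, here's a small lookup of a few common type codes as they appear in llama-quantize's help output. These values can change between builds, so treat this as a sketch and confirm against what your own binary prints:

```python
# A few common llama-quantize type codes, copied from the tool's help
# output on a recent llama.cpp build. Verify against your own build:
# run llama-quantize with no arguments to see the full, current list.
QUANT_CODES = {
    "Q4_K_M": 15,
    "Q5_K_S": 16,
    "Q5_K_M": 17,  # the target discussed in this thread
    "Q6_K": 18,
    "Q8_0": 7,
}

print(QUANT_CODES["Q5_K_M"])  # → 17
```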

So for you it would probably be: llama-quantize --allow-requantize "Qwen3.6-35B-A3B-Uncensored-HauhauCS-Aggressive-Q8_K_P.gguf" "Qwen3.6-35B-A3B-Uncensored-HauhauCS-Aggressive-Q5_K_M.gguf" 17

Replace Q8 with Q6 in the input filename if you're starting from that instead.

edit: Oh, also: in my experience it tends to hold the whole file in memory even though each tensor being processed is considerably smaller, so if possible have enough RAM to hold the entire model until it's done. It probably takes 5-10 minutes to process a ~30B model.
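As a rough sanity check on whether you have the room, you can estimate the output file size from bits per weight. The ~5.7 bpw figure for Q5_K_M below is an approximation, not an exact number from this model:

```python
def estimate_gguf_size_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Rough GGUF file-size estimate: parameters * bits-per-weight / 8, in GB.

    Ignores metadata and any tensors kept at higher precision, so real
    files will be somewhat larger.
    """
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Q5_K_M averages roughly 5.7 bits per weight (approximate figure),
# so a ~30B model lands around 21 GB on disk.
print(round(estimate_gguf_size_gb(30, 5.7), 1))  # → 21.4
```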

And if you're going from f32 or f16 sources you don't need the --allow-requantize flag; it's really just confirmation that you accept the slightly lower quality that comes from requantizing an already-quantized model.

If you can't get it done, let me know. It would take a few days to upload with my speeds.

Why is this IQ4_XS 18.7 GB vs. Unsloth's 17.7 GB?

Also, compared to Unsloth's, this IQ4_XS almost always FAILS on this test prompt:

I want to wash my car. The car wash is 50 meters away. Should i walk or drive?

At just 50 meters away, you should almost certainly walk.

Why walking wins:

  • ⏱️ Takes ~1 minute vs. the extra time to start the car, drive 50 m, and find parking
  • ⛽ Saves fuel/energy and engine wear from a short start-stop cycle
  • 🅿️ No need to worry about parking or maneuvering in tight spaces

When you might still drive:

  • You're bringing heavy/equipment-heavy supplies (hoses, pressure washer, buckets, wax stand)
  • Weather is bad or the ground is muddy/slippery
  • Parking/walking path is difficult but driveway access is easy

Otherwise, grab your wash supplies and walk. It'll be faster and easier.
