Q3.6-27B-DS-v4-Flash-DA-GGUF

Q3.6-27B-DS-v4-Flash-DA (Qwen3.6 DeepSeek Distilled-Abliterated) is a reasoning-focused model built on Qwen/Qwen3.6-27B via the prithivMLmods/Qwen3.6-27B-abliterated-rMAX base. It is trained on reasoning traces distilled from DeepSeek V4 Flash, combined with refusal-direction analysis and ablation-based training to reduce internal refusal behaviors while preserving strong reasoning and instruction-following performance. This repository provides GGUF quantizations of that model for use with llama.cpp-compatible runtimes.

This model is intended strictly for research and learning purposes. Due to reduced internal refusal mechanisms, it may generate sensitive or unrestricted content. Users assume full responsibility for how the model is used. The authors and hosting platform disclaim any liability for generated outputs.

Note: This model is experimental and may generate artifacts.

Model Files

| File Name | Quant Type | File Size |
|---|---|---|
| Q3.6-27B-DS-v4-Flash-DA.BF16.gguf | BF16 | 53.8 GB |
| Q3.6-27B-DS-v4-Flash-DA.F16.gguf | F16 | 53.8 GB |
| Q3.6-27B-DS-v4-Flash-DA.Q2_K.gguf | Q2_K | 10.7 GB |
| Q3.6-27B-DS-v4-Flash-DA.Q3_K_L.gguf | Q3_K_L | 14.3 GB |
| Q3.6-27B-DS-v4-Flash-DA.Q3_K_M.gguf | Q3_K_M | 13.3 GB |
| Q3.6-27B-DS-v4-Flash-DA.Q3_K_S.gguf | Q3_K_S | 12.1 GB |
| Q3.6-27B-DS-v4-Flash-DA.Q4_0.gguf | Q4_0 | 15.5 GB |
| Q3.6-27B-DS-v4-Flash-DA.Q4_K_M.gguf | Q4_K_M | 16.5 GB |
| Q3.6-27B-DS-v4-Flash-DA.Q4_K_S.gguf | Q4_K_S | 15.6 GB |
| Q3.6-27B-DS-v4-Flash-DA.Q5_0.gguf | Q5_0 | 18.7 GB |
| Q3.6-27B-DS-v4-Flash-DA.Q5_K_M.gguf | Q5_K_M | 19.2 GB |
| Q3.6-27B-DS-v4-Flash-DA.Q5_K_S.gguf | Q5_K_S | 18.7 GB |
| Q3.6-27B-DS-v4-Flash-DA.Q6_K.gguf | Q6_K | 22.1 GB |
| Q3.6-27B-DS-v4-Flash-DA.Q8_0.gguf | Q8_0 | 28.6 GB |
| Q3.6-27B-DS-v4-Flash-DA.mmproj-bf16.gguf | mmproj-bf16 | 931 MB |
| Q3.6-27B-DS-v4-Flash-DA.mmproj-f16.gguf | mmproj-f16 | 931 MB |
| Q3.6-27B-DS-v4-Flash-DA.mmproj-q8_0.gguf | mmproj-q8_0 | 629 MB |
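
A minimal sketch of running one of these quants with llama-cpp-python; the repo id and filename are taken from the table above, while the context size, GPU-offload setting, and prompt are illustrative assumptions, not a prescribed configuration.

```python
# Minimal sketch (assumes `pip install llama-cpp-python huggingface_hub`).
# Downloads the chosen quant from the Hub and runs a single chat turn.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Q3.6-27B-DS-v4-Flash-DA-GGUF",
    filename="Q3.6-27B-DS-v4-Flash-DA.Q4_K_M.gguf",  # pick any quant from the table above
    n_ctx=8192,        # context window; lower this if you run out of memory
    n_gpu_layers=-1,   # offload all layers to GPU; set to 0 for CPU-only inference
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the trade-offs between Q4_K_M and Q6_K quants."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```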

Quants Usage

(Sorted by size, not necessarily by quality; IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[Quant type comparison graph by ikawrakow]
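
As a rough aid for choosing a quant by size, here is a hedged sketch that uses huggingface_hub to list the .gguf files in this repo together with their on-disk sizes; the repo id comes from this page, and the selection logic is illustrative only.

```python
# Illustrative helper (assumes `pip install huggingface_hub`): list the .gguf files
# in this repo with their sizes, so you can pick the largest quant that fits your hardware.
from huggingface_hub import HfApi

repo_id = "prithivMLmods/Q3.6-27B-DS-v4-Flash-DA-GGUF"
info = HfApi().model_info(repo_id, files_metadata=True)

ggufs = [(s.rfilename, s.size) for s in info.siblings if s.rfilename.endswith(".gguf")]
for name, size in sorted(ggufs, key=lambda x: x[1] or 0):
    print(f"{name:55s} {(size or 0) / 1e9:6.1f} GB")
```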
