# Qwen 3.6 27B Abliterated - GGUF

This repository contains GGUF format quantized weights for huihui-ai/Huihui-Qwen3.6-27B-abliterated. These files are designed for use with llama.cpp and compatible local inference engines (LM Studio, text-generation-webui, KoboldCpp, etc.).

## Install from WinGet (Windows)

```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf lhca521/Huihui-Qwen3.6-27B-abliterated-GGUF

# Run inference directly in the terminal:
llama-cli -hf lhca521/Huihui-Qwen3.6-27B-abliterated-GGUF
```

## Install from brew (macOS/Linux)

```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf lhca521/Huihui-Qwen3.6-27B-abliterated-GGUF

# Run inference directly in the terminal:
llama-cli -hf lhca521/Huihui-Qwen3.6-27B-abliterated-GGUF
```

## Use a pre-built binary

```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf lhca521/Huihui-Qwen3.6-27B-abliterated-GGUF

# Run inference directly in the terminal:
./llama-cli -hf lhca521/Huihui-Qwen3.6-27B-abliterated-GGUF
```

## Build from source code

```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf lhca521/Huihui-Qwen3.6-27B-abliterated-GGUF

# Run inference directly in the terminal:
./build/bin/llama-cli -hf lhca521/Huihui-Qwen3.6-27B-abliterated-GGUF
```

## Use Docker

```sh
docker model run hf.co/lhca521/Huihui-Qwen3.6-27B-abliterated-GGUF
```
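Whichever install route you use, a running `llama-server` instance exposes an OpenAI-compatible HTTP API alongside its web UI. The request below is a minimal sketch assuming the default port 8080; the `model` field can generally be any string, since the server answers with whichever model it was started with.

```sh
# Query the local OpenAI-compatible chat completions endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Huihui-Qwen3.6-27B-abliterated",
        "messages": [{"role": "user", "content": "Give me a two-sentence summary of the GGUF format."}],
        "max_tokens": 256
      }'
```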
## Model Details
- Base Model: Qwen 3.6 27B
- Variant: Abliterated (Refusal mechanisms stripped)
## Abliteration Notes

This model has been processed to remove inherent safety filters and refusal mechanisms. It is highly compliant and will generate responses to complex, edge-case, or typically restricted prompts directly from its base weights. No specialized system prompts or jailbreak-style pre-fills are required to bypass refusals.
## Available Quantizations

| File Name | Format | Description |
|---|---|---|
| `qwen3.6-27b-abliterated-Q3_K_M.gguf` | Q3_K_M | Smallest footprint, high perplexity loss. Best for severely RAM-constrained environments. |
| `qwen3.6-27b-abliterated-Q4_K_S.gguf` | Q4_K_S | Fast inference, slightly lower quality than Q4_K_M. |
| `qwen3.6-27b-abliterated-Q4_K_M.gguf` | Q4_K_M | Recommended. Excellent balance of performance, size, and quality. (Also available in a dedicated repo.) |
| `qwen3.6-27b-abliterated-Q5_K_M.gguf` | Q5_K_M | High quality, minimal degradation from FP16. |
| `qwen3.6-27b-abliterated-Q6_K.gguf` | Q6_K | Near-perfect fidelity to the original FP16 base. Requires significant RAM. |
| `qwen3.6-27b-abliterated-Q8_0.gguf` | Q8_0 | Maximum quality integer quantization. Very large file size. |
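If you prefer to download a specific quantization ahead of time rather than streaming it with `-hf`, the `huggingface-cli` tool works; the sketch below grabs the recommended Q4_K_M file (file name taken from the table above).

```sh
# Install the Hugging Face CLI if needed
pip install -U "huggingface_hub[cli]"

# Download only the Q4_K_M file into the current directory
huggingface-cli download lhca521/Huihui-Qwen3.6-27B-abliterated-GGUF \
  qwen3.6-27b-abliterated-Q4_K_M.gguf --local-dir .
```

Recent llama.cpp builds should also be able to select a quantization directly with a tag, e.g. `-hf lhca521/Huihui-Qwen3.6-27B-abliterated-GGUF:Q4_K_M`.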
## Usage with llama.cpp

You can run this model via the command line using standard `llama-cli` commands. Since the model is abliterated, you do not need to wrap prompts in heavy system instructions.

```sh
# Basic inference
./llama-cli -m qwen3.6-27b-abliterated-Q4_K_M.gguf -p "Your prompt here" -n 512 -c 4096

# Interactive conversation mode
./llama-cli -m qwen3.6-27b-abliterated-Q4_K_M.gguf -i -cnv -c 8192
```
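To serve the same local file with the web UI and OpenAI-compatible API, `llama-server` accepts the usual llama.cpp loading flags. The values below are only a sketch; tune the context size and GPU offload to your hardware, and drop `-ngl` entirely on a CPU-only build.

```sh
# Serve the local GGUF with the built-in web UI and API on http://localhost:8080
#   -c 8192   : context window size in tokens
#   --port    : HTTP port for the web UI and OpenAI-compatible API
#   -ngl 99   : offload as many layers as fit to the GPU (GPU-enabled builds only)
./llama-server -m qwen3.6-27b-abliterated-Q4_K_M.gguf -c 8192 --port 8080 -ngl 99
```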
## Model tree

Base model: Qwen/Qwen3.6-27B