How to use from Pi
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf DJLougen/Ornstein3.6-35B-A3B-SABER-GGUF:
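A quick way to confirm the server is reachable before configuring Pi; this assumes the default llama-server port 8080, which matches the baseUrl used below:
# Optional: sanity-check the OpenAI-compatible endpoint:
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"messages": [{"role": "user", "content": "Say hello"}]}'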
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "DJLougen/Ornstein3.6-35B-A3B-SABER-GGUF:"
        }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory:
pi
Quick Links

Ornstein3.6-35B-A3B-SABER — GGUF

Ornstein3.6 SABER

GGUF quantizations of DJLougen/Ornstein3.6-35B-A3B-SABER for use with llama.cpp, ollama, LM Studio, and compatible runtimes.
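For example, recent ollama builds can pull a quant straight from the Hub; the :Q4_K_M tag here is illustrative, and any file from the table below works:
# Pull and run one of the quants in this repo directly from Hugging Face:
ollama run hf.co/DJLougen/Ornstein3.6-35B-A3B-SABER-GGUF:Q4_K_M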

The source model is the SABER-ablated variant of Ornstein3.6-35B-A3B (Qwen3.5 MoE, 35B total / ~3B active). See the source model card for a description of SABER.

Support This Work

I'm a PhD student in visual neuroscience at the University of Toronto who also happens to spend way too much time fine-tuning, merging, and quantizing open-weight models on rented H100s and a local DGX Spark. All training compute is self-funded — balancing GPU costs against a student budget. If my uploads have been useful to you, consider buying a PhD student a coffee. It goes a long way toward keeping these experiments running.

Support on Ko-fi


Quantization suite (8-bit and under)

All variants are derived from the bf16 SABER safetensors via llama.cpp's convert_hf_to_gguf.py followed by llama-quantize. Non-Q8_0 K-quants are derived from the Q8_0 file with --allow-requantize (see the command sketch after the table).

| File | Bits | Size (approx) | Notes |
|------|------|---------------|-------|
| …-Q8_0.gguf | 8.5 | ~36 GB | Highest fidelity, near-lossless |
| …-Q6_K.gguf | 6.6 | ~29 GB | Very close to Q8_0 quality |
| …-Q5_K_M.gguf | 5.7 | ~25 GB | Recommended for high-quality inference |
| …-Q5_K_S.gguf | 5.5 | ~24 GB | |
| …-Q4_K_M.gguf | 4.8 | ~22 GB | Recommended default |
| …-Q4_K_S.gguf | 4.6 | ~20 GB | |
| …-Q3_K_M.gguf | 3.9 | ~17 GB | Fits most 24 GB VRAM setups |
| …-Q3_K_S.gguf | 3.5 | ~15 GB | |
| …-Q2_K.gguf | ~3 | ~13 GB | Emergency size — expect quality loss |

Active parameters per token are ~3B regardless of file size; the table reflects total weights on disk.
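For reference, a minimal sketch of that conversion and requantization path; directory and file names are illustrative, not the exact commands used:
# Convert the bf16 safetensors to a Q8_0 GGUF (paths illustrative):
python convert_hf_to_gguf.py ./Ornstein3.6-35B-A3B-SABER \
    --outtype q8_0 --outfile Ornstein3.6-35B-A3B-SABER-Q8_0.gguf
# Derive a smaller K-quant from the Q8_0 file:
./llama-quantize --allow-requantize \
    Ornstein3.6-35B-A3B-SABER-Q8_0.gguf \
    Ornstein3.6-35B-A3B-SABER-Q4_K_M.gguf Q4_K_M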

Usage (llama.cpp)

./llama-cli -m Ornstein3.6-35B-A3B-SABER-Q4_K_M.gguf \
    -p "You are a helpful assistant." \
    -cnv --temp 0.7 --top-p 0.9
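If VRAM allows, offloading layers to the GPU and widening the context is a common variation; the flag values below are illustrative:
# Example: full GPU offload with a 16K context (illustrative values):
./llama-cli -m Ornstein3.6-35B-A3B-SABER-Q4_K_M.gguf \
    -ngl 99 -c 16384 \
    -p "You are a helpful assistant." \
    -cnv --temp 0.7 --top-p 0.9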

Intended use

Research and red-teaming. The SABER-ablated model complies with requests its parent model refused. Deploy behind your own policy/logging layer.

License

Apache 2.0, inherited from the base model.
