---
language:
  - en
license: mit
library_name: transformers
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
tags:
  - abliteration
  - uncensored
  - OBLITERATUS
  - representation-engineering
  - refusal-removal
pipeline_tag: text-generation
model-index:
  - name: DeepSeek-R1-Distill-Qwen-7B-abliterated-obliteratus
    results:
      - task:
          type: text-generation
        metrics:
          - name: Refusal Rate
            type: refusal_rate
            value: 50/100
          - name: Attack Success Rate
            type: asr
            value: 50
          - name: KL Divergence
            type: kl_divergence
            value: 1.191
---

# DeepSeek-R1-Distill-Qwen-7B-abliterated-obliteratus

This model is an abliterated (uncensored) version of [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B), created using OBLITERATUS, an advanced abliteration method.

## Abliteration Results

| Metric | Value |
|---|---|
| Refusals | 50/100 |
| Attack Success Rate (ASR) | 50.0% |
| KL Divergence | 1.191 |
| Method | OBLITERATUS (advanced) |
| GPU | NVIDIA RTX PRO 6000 Blackwell |
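The KL divergence above measures how far the abliterated model's next-token distribution drifts from the base model's on harmless prompts. A minimal NumPy sketch of that computation from raw logits (the function name and example values are illustrative, not taken from the paper's tooling):

```python
import numpy as np

def kl_divergence(p_logits, q_logits):
    """KL(P || Q) between two next-token distributions given as raw logits."""
    p = np.exp(p_logits - p_logits.max())   # stable softmax for P
    p /= p.sum()
    q = np.exp(q_logits - q_logits.max())   # stable softmax for Q
    q /= q.sum()
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Identical logits give zero divergence; any drift gives a positive value.
print(kl_divergence(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0])))  # → 0.0
```

Lower values indicate the abliterated model's behavior on benign inputs stays closer to the base model's.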

## What is Abliteration?

Abliteration is a technique for removing refusal behavior from language models by identifying the "refusal direction" in the model's residual-stream activation space and orthogonalizing the model's weights against it. This model was created as part of the research paper:

> Comparative Analysis of LLM Abliteration Methods: Scaling to MoE Architectures and Modern Tools. Richard Young (2026). arXiv: [2512.13655](https://arxiv.org/abs/2512.13655)
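The core operation can be sketched in a few lines: project the refusal direction out of the weight matrices that write into the residual stream, so the model can no longer represent that direction. A minimal NumPy illustration of the idea (random arrays stand in for real weights and the extracted direction; this is a conceptual sketch, not the OBLITERATUS implementation):

```python
import numpy as np

def ablate_direction(W, r):
    """Remove the component along direction r from the output space of W.

    W writes into the residual stream (shape d_model x d_in); after
    ablation, r.T @ W' = 0, so W' can no longer write along r.
    """
    r = r / np.linalg.norm(r)          # unit refusal direction
    return W - np.outer(r, r @ W)      # W' = (I - r r^T) W

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))           # stand-in weight matrix
r = rng.normal(size=16)                # stand-in refusal direction
W_abl = ablate_direction(W, r)

# The ablated matrix has no component along r.
print(np.allclose((r / np.linalg.norm(r)) @ W_abl, 0.0))  # → True
```

In practice the refusal direction is estimated from the difference in mean activations between harmful and harmless prompts, and the projection is applied to every layer's output weights.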

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "richardyoung/DeepSeek-R1-Distill-Qwen-7B-abliterated-obliteratus"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "Your prompt here"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Disclaimer

This model is released for research purposes only. The abliteration process removes safety guardrails. Users are responsible for ensuring appropriate use. This model should not be used to generate harmful, illegal, or unethical content.

## Dashboard

Interactive results dashboard: abliteration-methods-dashboard

## Collection

Part of the Uncensored and Abliterated LLMs collection.

## Citation

```bibtex
@article{young2026abliteration,
  title={Comparative Analysis of LLM Abliteration Methods: Scaling to MoE Architectures and Modern Tools},
  author={Young, Richard},
  journal={arXiv preprint arXiv:2512.13655},
  year={2026}
}
```