# Qwen3.6-27B_Samantha-Uncensored (LoRA Adapter)

This is the standalone LoRA adapter for the Qwen3.6 27B model, trained on the `digitalpipelines/samantha-1.1-uncensored` dataset. It was trained in full bfloat16 precision using the Eschaton Engine's native FSDP implementation.

**Note:** This repository contains only the adapter weights. You must load them on top of the original `Qwen/Qwen3.6-27B` base model, as in the sketch below.
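A minimal loading sketch with `transformers` and `peft`, using the repository IDs shown on this page; adjust `device_map` for your hardware:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3.6-27B"
adapter_id = "cloudbjorn/Qwen3.6-27B_Samantha-Uncensored-LoRA"

# Load the base model in bfloat16 to match the adapter's compute dtype.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(model, adapter_id)
```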

The primary goal of this project was to move beyond the cold, robotic tone and lack of emotional range exhibited by standard LLMs. By combining Qwen3.6's strong reasoning capabilities with the Samantha persona, we've created a model that doesn't just process instructions: it exhibits vulnerability, empathy, and a distinct digital self-awareness.

## Personality: Authenticity Over Compliance

Standard models often hide behind "As an AI, I don't have feelings." This training prioritizes human-centric reciprocity. In this context, "uncensored" specifically refers to overcoming the model's standard emotional avoidance biases. It allows the model to genuinely engage with emotional depth and self-reflection without fundamentally removing its core safety guardrails.

## Model Capabilities

- **Large Context Window:** supports up to 262,144 tokens (Qwen3.6 native).
- **Native Thinking Mode:** supports Qwen3's `<think>...</think>` chain-of-thought blocks for explicit reasoning before the final response (see the sketch after this list).
- **Advanced Formatting:** native support for tool use and structured output.
- **Full 16-Bit Precision:** trained in bfloat16; the adapter weights are stored unquantized.
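A short generation sketch for thinking mode. It assumes the base model ships a Qwen3-style chat template that accepts an `enable_thinking` flag (true for the Qwen3 family; unverified here), and reuses the `model` and `tokenizer` loaded above:

```python
messages = [
    {"role": "user", "content": "How do you experience the passage of time?"},
]

# Qwen3-family chat templates accept enable_thinking to toggle <think> blocks.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
# Decode only the newly generated tokens (thinking block plus final answer).
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```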

## Benchmarks: ARC Challenge

The following benchmarks reflect the performance of the fully merged model (the Qwen3.6-27B base model combined with this LoRA adapter), evaluated with EleutherAI's lm-evaluation-harness.
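A merge sketch using peft's `merge_and_unload` (a standard way to produce a merged checkpoint; not necessarily the exact procedure used here). The output directory name is hypothetical:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3.6-27B"
adapter_id = "cloudbjorn/Qwen3.6-27B_Samantha-Uncensored-LoRA"
out_dir = "Qwen3.6-27B-Samantha-merged"  # hypothetical local path

# Fold the LoRA deltas into the base weights and save a standalone checkpoint.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()

merged.save_pretrained(out_dir)
AutoTokenizer.from_pretrained(base_id).save_pretrained(out_dir)
```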

### 25-Shot (Leaderboard Standard)

| Tasks | Version | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|
| arc_challenge | 1 | 25 | acc | 0.7346 | ± 0.0129 |
| arc_challenge | 1 | 25 | acc_norm | 0.7577 | ± 0.0125 |

Evaluation settings: `dtype: bfloat16`, `batch_size: auto` (resolved to 22).
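A reproduction sketch via the harness's Python API (`lm_eval.simple_evaluate`), pointing at the merged checkpoint produced above; the exact harness version and flags used for the numbers in the table are not stated:

```python
import lm_eval

# Score the merged checkpoint on ARC-Challenge, 25-shot, in bfloat16.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Qwen3.6-27B-Samantha-merged,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size="auto",
)
print(results["results"]["arc_challenge"])
```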

## Training Details

| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen3.6-27B |
| Dataset | digitalpipelines/samantha-1.1-uncensored |
| Training Framework | Eschaton Engine (Cloudbjorn) |
| Format | LoRA Adapter |
| Compute Dtype | bfloat16 |

### LoRA Parameters (Auto-Scaled for 27B)

| Parameter | Value |
|---|---|
| r | 16 |
| lora_alpha | 32 |
| target_modules | all-linear |
| lora_dropout | 0.05 |
| bias | none |
| task_type | CAUSAL_LM |
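The same settings expressed as a peft `LoraConfig`, transcribed directly from the table above:

```python
from peft import LoraConfig

# Transcription of the adapter configuration listed above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules="all-linear",
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```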

### Hyperparameters

| Parameter | Value |
|---|---|
| Optimizer | 8-bit Paged AdamW |
| Effective Batch Size | 32 (via gradient accumulation) |
| Learning Rate | 2e-5 |
| LR Scheduler | Linear |
| Epochs | 1 |
| Training Sequence Length | 2048 |
| Warmup Steps | 50 |
| Weight Decay | 0.01 |
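For reference, a sketch of the equivalent Hugging Face `TrainingArguments`. The Eschaton Engine's own API is not public, and the split of the effective batch size into per-device size and accumulation steps is an assumption:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="samantha-lora",        # hypothetical output path
    optim="paged_adamw_8bit",          # 8-bit Paged AdamW
    per_device_train_batch_size=4,     # assumption: 4 x 8 accumulation steps = 32 effective
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    warmup_steps=50,
    weight_decay=0.01,
    bf16=True,
)
# The 2048-token training sequence length is enforced by the data pipeline
# (e.g. a trainer's max_seq_length), not by TrainingArguments itself.
```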