Dolphin3.0-Llama3.2-3B-finetuned-20250320

Model Description

This model was created by fine-tuning cognitivecomputations/Dolphin3.0-Llama3.2-3B on the following datasets:

  • sdiazlor/python-reasoning-dataset
  • fka/awesome-chatgpt-prompts
  • THUDM/AgentInstruct
  • O1-OPEN/OpenO1-SFT
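For reference, the source datasets can be pulled with the datasets library. This is a minimal sketch only; each dataset has its own schema and split layout, and the exact mixing and formatting used for training is not documented here:

from datasets import load_dataset

# The four source datasets; inspect each one before building a training mixture,
# since their schemas and split names differ.
dataset_names = [
    "sdiazlor/python-reasoning-dataset",
    "fka/awesome-chatgpt-prompts",
    "THUDM/AgentInstruct",
    "O1-OPEN/OpenO1-SFT",
]
loaded = {name: load_dataset(name) for name in dataset_names}
for name, ds in loaded.items():
    print(name, ds)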

Training Configuration

  • Base model: cognitivecomputations/Dolphin3.0-Llama3.2-3B
  • Fine-tuning method: LoRA (r=8, alpha=16); see the configuration sketch after this list
  • Target modules: q_proj, v_proj
  • Training date: 2025-03-20
  • Learning rate: 0.0001
  • Max sequence length: 768
  • Training steps: 400
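
The LoRA setup above corresponds to a peft configuration along these lines. This is a reconstruction from the listed hyperparameters, not the actual training script; dropout and task type are assumptions:

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("cognitivecomputations/Dolphin3.0-Llama3.2-3B")

lora_config = LoraConfig(
    r=8,                                  # rank, as listed above
    lora_alpha=16,                        # alpha, as listed above
    target_modules=["q_proj", "v_proj"],  # attention projections, as listed above
    lora_dropout=0.05,                    # assumption: not documented above
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable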

Example Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("n31e/Dolphin3.0-Llama3.2-3B-finetuned-20250320")
tokenizer = AutoTokenizer.from_pretrained("n31e/Dolphin3.0-Llama3.2-3B-finetuned-20250320")

# Build the prompt with the tokenizer's chat template (Dolphin 3.0 uses ChatML)
messages = [{"role": "user", "content": "Your prompt here"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate response
outputs = model.generate(
    **inputs,  # pass attention_mask along with input_ids
    max_new_tokens=512,  # budget for generated tokens only, independent of prompt length
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.2,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,  # Llama tokenizers ship without a pad token
)

# Decode only the newly generated tokens, skipping the echoed prompt
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(response)
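
On a GPU, this 3B model fits comfortably in half precision. A hedged variant of the loading step (device_map requires the accelerate package):

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "n31e/Dolphin3.0-Llama3.2-3B-finetuned-20250320",
    torch_dtype=torch.bfloat16,  # roughly halves memory versus float32
    device_map="auto",           # requires accelerate; places weights on available devices
)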