# Mascarade PlatformIO

Fine-tuned TinyLlama-1.1B-Chat model specialized in PlatformIO embedded development workflows.

Part of the Mascarade ecosystem, an agentic LLM orchestration system with domain-specific fine-tuned models for embedded systems and electronics.

## Training details

| Parameter | Value |
|---|---|
| Base model | TinyLlama/TinyLlama-1.1B-Chat-v1.0 |
| Method | LoRA (PEFT), merged into full weights |
| LoRA rank (r) | 16 |
| LoRA alpha | 32 |
| LoRA dropout | 0.05 |
| Target modules | q_proj, k_proj, v_proj, o_proj |
| Epochs | 2 |
| Training steps | 20 |
| Dataset | ShareGPT format, domain-specific PlatformIO examples |
| GPU | Quadro P2000 (5 GB VRAM) |
| Framework | Hugging Face Transformers + PEFT |
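The LoRA hyperparameters above map directly onto a `peft` configuration. The snippet below is a sketch for reference only; the actual training script is not published here, and `task_type` is an assumption consistent with a causal chat model.

```python
from peft import LoraConfig

# Sketch of a LoRA config matching the hyperparameters in the table above.
lora_config = LoraConfig(
    task_type="CAUSAL_LM",   # assumed; not stated in the table
    r=16,                    # LoRA rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```

After training, an adapter built from such a config can be folded back into the base weights with `PeftModel.merge_and_unload()`, which matches the "merged into full weights" note above.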

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("electron-rare/mascarade-platformio")
tokenizer = AutoTokenizer.from_pretrained("electron-rare/mascarade-platformio")

messages = [{"role": "user", "content": "How do I configure platformio.ini for an STM32 board with custom upload protocol?"}]

# Build the prompt with the model's chat template; add_generation_prompt
# appends the assistant turn marker so the model starts a reply.
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
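For context on the prompt used in the example above, a minimal `platformio.ini` for an STM32 board with an explicit upload protocol might look like this. The board and protocol shown are illustrative, not output from the model:

```ini
[env:nucleo_f401re]
platform = ststm32
board = nucleo_f401re
framework = arduino
upload_protocol = stlink   ; e.g. jlink, dfu, or serial, depending on hardware
```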

## Related models

| Model | Domain | Base |
|---|---|---|
| mascarade-iot | IoT general | Qwen2.5-Coder-1.5B |
| mascarade-esp32 | ESP32 microcontrollers | TinyLlama-1.1B |
| mascarade-spice | SPICE circuit simulation | TinyLlama-1.1B |

## Datasets

All training datasets are available under the clemsail account on Hugging Face.
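The training data is described above as ShareGPT-format conversations. As an illustration of that layout (field names follow the common ShareGPT convention; the record content below is invented, not taken from the actual dataset):

```python
# Illustrative ShareGPT-format record: a list of alternating turns,
# each with a "from" role ("human" or "gpt") and a "value" string.
record = {
    "conversations": [
        {"from": "human", "value": "How do I set the serial monitor baud rate in platformio.ini?"},
        {"from": "gpt", "value": "Add `monitor_speed = 115200` to your [env:...] section."},
    ]
}

# Turns alternate between the user ("human") and the assistant ("gpt").
for turn in record["conversations"]:
    print(turn["from"], "->", turn["value"])
```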
