# Mascarade ESP32

Fine-tuned TinyLlama-1.1B-Chat model specialized in ESP32 microcontroller development.

Part of the Mascarade ecosystem, an agentic LLM orchestration system with domain-specific fine-tuned models for embedded systems and electronics.

## Training details

| Parameter | Value |
|---|---|
| Base model | TinyLlama/TinyLlama-1.1B-Chat-v1.0 |
| Method | LoRA (PEFT), merged into full weights |
| LoRA rank (r) | 16 |
| LoRA alpha | 32 |
| LoRA dropout | 0.05 |
| Target modules | q_proj, k_proj, v_proj, o_proj |
| Epochs | 2 |
| Training steps | 30 |
| Final train loss | 1.3873 |
| Dataset | ShareGPT format, domain-specific ESP32 examples |
| GPU | Quadro P2000 (5 GB VRAM) |
| Framework | Hugging Face Transformers + PEFT |
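
For reference, here is a minimal sketch of the adapter configuration these hyperparameters imply, using PEFT's `LoraConfig` (the actual training script is not published with this card, and `task_type` is an assumption):

```python
from peft import LoraConfig

# LoRA hyperparameters copied from the table above.
# task_type is assumed: causal LM is the standard choice for a chat model.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

"Merged into full weights" corresponds to PEFT's `merge_and_unload()`, so the repository ships plain model weights and no separate adapter has to be loaded at inference time.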

## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "electron-rare/mascarade-esp32", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("electron-rare/mascarade-esp32")

messages = [{"role": "user", "content": "How do I configure deep sleep on ESP32-S3?"}]
# add_generation_prompt=True appends the assistant header so the model
# answers rather than continuing the user turn.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
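
Note that `generate()` defaults to greedy decoding; for more varied answers, pass sampling options such as `do_sample=True, temperature=0.7` (illustrative values, not tuned for this model).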

## Related models

| Model | Domain | Base |
|---|---|---|
| mascarade-iot | General IoT | Qwen2.5-Coder-1.5B |
| mascarade-spice | SPICE circuit simulation | TinyLlama-1.1B |
| mascarade-platformio | PlatformIO development | TinyLlama-1.1B |

## Datasets

All training datasets are published under the clemsail profile on Hugging Face.
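
The datasets use the ShareGPT conversation schema. A representative record, written as a Python literal (the content below is illustrative, not taken from the actual dataset):

```python
# One ShareGPT-style record: a list of alternating human/assistant turns.
record = {
    "conversations": [
        {"from": "human", "value": "How do I read an ADC pin on the ESP32?"},
        {"from": "gpt", "value": "Use adc1_get_raw() on an ADC1 channel such as GPIO34 ..."},
    ]
}
```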
