Instructions to use BrinqAI/smartpanel-functiongemma-270m with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use BrinqAI/smartpanel-functiongemma-270m with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="BrinqAI/smartpanel-functiongemma-270m",
    filename="smartpanel-v12-q4_k_m.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use BrinqAI/smartpanel-functiongemma-270m with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf BrinqAI/smartpanel-functiongemma-270m:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf BrinqAI/smartpanel-functiongemma-270m:Q4_K_M
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf BrinqAI/smartpanel-functiongemma-270m:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf BrinqAI/smartpanel-functiongemma-270m:Q4_K_M
```
Use pre-built binary
```sh
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf BrinqAI/smartpanel-functiongemma-270m:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf BrinqAI/smartpanel-functiongemma-270m:Q4_K_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf BrinqAI/smartpanel-functiongemma-270m:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf BrinqAI/smartpanel-functiongemma-270m:Q4_K_M
```
Use Docker
```sh
docker model run hf.co/BrinqAI/smartpanel-functiongemma-270m:Q4_K_M
```
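Whichever install route you use, `llama-server` exposes an OpenAI-compatible API (default port 8080). A minimal sketch of calling it from Python, assuming the `requests` package and the default port:

```python
import requests

# Call the /v1/chat/completions endpoint of the llama-server started above.
# No "model" field is needed; the server answers with its loaded model.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ],
        "temperature": 0.1,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```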
- LM Studio
- Jan
- vLLM
How to use BrinqAI/smartpanel-functiongemma-270m with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "BrinqAI/smartpanel-functiongemma-270m"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "BrinqAI/smartpanel-functiongemma-270m",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker
```sh
docker model run hf.co/BrinqAI/smartpanel-functiongemma-270m:Q4_K_M
```
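The vLLM endpoint above is OpenAI-compatible, so the `openai` Python client works as well; a minimal sketch, assuming `pip install openai` (the API key is a placeholder, since vLLM does not check it by default):

```python
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="BrinqAI/smartpanel-functiongemma-270m",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(resp.choices[0].message.content)
```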
- Ollama
How to use BrinqAI/smartpanel-functiongemma-270m with Ollama:
```sh
ollama run hf.co/BrinqAI/smartpanel-functiongemma-270m:Q4_K_M
```
- Unsloth Studio
How to use BrinqAI/smartpanel-functiongemma-270m with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for BrinqAI/smartpanel-functiongemma-270m to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for BrinqAI/smartpanel-functiongemma-270m to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for BrinqAI/smartpanel-functiongemma-270m to start chatting
```
- Pi
How to use BrinqAI/smartpanel-functiongemma-270m with Pi:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf BrinqAI/smartpanel-functiongemma-270m:Q4_K_M
```
Configure the model in Pi
```sh
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add to `~/.pi/agent/models.json`:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "BrinqAI/smartpanel-functiongemma-270m:Q4_K_M" }
      ]
    }
  }
}
```

Run Pi

```sh
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use BrinqAI/smartpanel-functiongemma-270m with Hermes Agent:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf BrinqAI/smartpanel-functiongemma-270m:Q4_K_M
```
Configure Hermes
```sh
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default BrinqAI/smartpanel-functiongemma-270m:Q4_K_M
```
Run Hermes
```sh
hermes
```
- Docker Model Runner
How to use BrinqAI/smartpanel-functiongemma-270m with Docker Model Runner:
```sh
docker model run hf.co/BrinqAI/smartpanel-functiongemma-270m:Q4_K_M
```
- Lemonade
How to use BrinqAI/smartpanel-functiongemma-270m with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull BrinqAI/smartpanel-functiongemma-270m:Q4_K_M
```
Run and chat with the model
```sh
lemonade run user.smartpanel-functiongemma-270m-Q4_K_M
```
List all available models
```sh
lemonade list
```
SmartPanel FunctionGemma 270M
Fine-tuned FunctionGemma 270M for on-device function-calling inside Brinq's SmartPanel manufacturing-assistant demo. Shipped on the Synaptics Astra SL2619 SoC (2×Cortex-A55 @ 2 GHz, 1 TOPS Torq/Coral NPU, 2 GB DDR4) at Embedded World 2026.
What this model does
Given a user utterance and a list of tool declarations, the model emits one or more <start_function_call>call:NAME{...}<end_function_call> blocks or a plain natural-language reply. It was trained specifically to hit sub-500 ms decode latency on the SL2619 without giving up tool-selection accuracy on the SmartPanel domain.
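For illustration only, a minimal sketch of extracting those call blocks from a reply; `CALL_RE` and `extract_calls` are our hypothetical names, and arguments are kept as raw strings because the `<escape>`-delimited payload is not plain JSON:

```python
import re

# Hypothetical helper: pull (name, raw-args) pairs out of a model reply
# that uses the call format described above.
CALL_RE = re.compile(
    r"<start_function_call>call:(\w+)\{(.*?)\}<end_function_call>",
    re.DOTALL,
)

def extract_calls(text: str) -> list[tuple[str, str]]:
    return CALL_RE.findall(text)

reply = (
    "<start_function_call>call:set_led_color"
    "{color:<escape>red<escape>}<end_function_call>"
)
print(extract_calls(reply))  # [('set_led_color', 'color:<escape>red<escape>')]
```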
Scope. The fine-tune is specific to the SmartPanel tool schema (maintenance procedures, alarm acknowledgement, photo capture, knowledge lookup). It's published here as prior art / starting checkpoint for the related Coral Dev Board physical-AI demo at Google IO 2026, not as a general-purpose function-calling model.
Files
| File | Format | Size | Recommended use |
|---|---|---|---|
| `smartpanel-v15-q4_k_m.gguf` | GGUF Q4_K_M | 253 MB | Production. Runs via llama.cpp on 2 GB / 2-core ARM targets. |
| `smartpanel-v15-f16.gguf` | GGUF F16 | 543 MB | Canonical checkpoint for re-quantization or further fine-tuning. |
| `smartpanel-v12-q4_k_m.gguf` | GGUF Q4_K_M | 253 MB | Mid-production milestone. |
| `smartpanel-v8-q4_k_m.gguf` | GGUF Q4_K_M | 253 MB | Device deployment milestone (what our SL2619 test boards have shipped with since Feb). |
| `smartpanel-v4-q4_k_m.gguf` | GGUF Q4_K_M | 253 MB | First version with correct `call:` output format. Benchmark reference. |

Recommended starting point: `smartpanel-v15-q4_k_m.gguf`.
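To fetch that file programmatically, a minimal sketch using `huggingface_hub` (`pip install huggingface_hub`):

```python
from huggingface_hub import hf_hub_download

# Download the recommended checkpoint into the local HF cache and get
# back the file path (pass it to llama_cpp.Llama as model_path).
path = hf_hub_download(
    repo_id="BrinqAI/smartpanel-functiongemma-270m",
    filename="smartpanel-v15-q4_k_m.gguf",
)
print(path)
```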
Version lineage
| Version | Date | Format | Notes |
|---|---|---|---|
| v4 | 2026-01-18 | `call:` | First correct output format. 84.2% domain accuracy, 142 ms avg latency on local llama-cpp. |
| v8 | 2026-02-24 | `call:` | Deployed to Ollama on SL2619 test boards. |
| v8-moveworks | 2026-02-26 | `call:` | Variant trained with additional Moveworks-flavored examples. Not included here. |
| v8-fixed | 2026-02-27 | `call:` | Tokenizer hotfix. |
| v9–v13 | 2026-02-27 – 2026-03-01 | `call:` | Data curation + prompt-template iterations. |
| v15 | 2026-03-03 | `call:` | Current production. |
(v14 was trained but rolled forward into v15 before quantization — no separate artifact exists.)
Prompt format
FunctionGemma's native format. The tokenizer ships the `<start_function_call>`, `<end_function_call>`, `<start_function_declaration>`, `<end_function_declaration>`, `<start_function_response>`, `<end_function_response>`, and `<start_of_turn>` / `<end_of_turn>` special tokens.
```
<start_of_turn>user
You are a model that can do function calling with the following functions
<start_function_declaration>
declaration:set_led_color{description:<escape>Set RGB LED color<escape>,parameters:{...}}
<end_function_declaration>
<start_function_declaration>
declaration:play_buzzer{description:<escape>Sound the buzzer<escape>,parameters:{...}}
<end_function_declaration>
Turn the lights red and beep
<end_of_turn>
<start_of_turn>model
<start_function_call>call:set_led_color{color:<escape>red<escape>}<end_function_call><start_function_call>call:play_buzzer{pattern:<escape>beep<escape>}<end_function_call>
<end_of_turn>
```
Stop tokens: `<end_of_turn>`, `<end_function_call>`, `<eos>`.

Recommended generation params: `temperature=0.1`, `top_p=0.9`, `num_ctx=2048`.
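To assemble prompts in this format programmatically, a minimal sketch; `build_prompt` is our hypothetical helper, not part of the shipped tooling:

```python
# Build a prompt from raw declaration strings using the special tokens
# listed above. Each entry in `declarations` is a full
# "declaration:NAME{...}" string in the model's native syntax.
def build_prompt(declarations: list[str], utterance: str) -> str:
    decls = "\n".join(
        f"<start_function_declaration>\n{d}\n<end_function_declaration>"
        for d in declarations
    )
    return (
        "<start_of_turn>user\n"
        "You are a model that can do function calling with the following functions\n"
        f"{decls}\n"
        f"{utterance}\n"
        "<end_of_turn>\n"
        "<start_of_turn>model\n"
    )
```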
Usage
llama-cpp-python
```python
from llama_cpp import Llama

llm = Llama(
    model_path="smartpanel-v15-q4_k_m.gguf",
    n_ctx=1024,
    n_threads=2,
    verbose=False,
)

prompt = """<start_of_turn>user
You are a model that can do function calling with the following functions
<start_function_declaration>
declaration:acknowledge_alarm{description:<escape>Dismiss the current alarm<escape>,parameters:{properties:{},required:[],type:<escape>OBJECT<escape>}}
<end_function_declaration>
Ack the alarm
<end_of_turn>
<start_of_turn>model
"""

out = llm(prompt, max_tokens=128, temperature=0.1, stop=["<end_of_turn>"])
print(out["choices"][0]["text"])
```
Ollama
```sh
# Download the gguf, then:
cat > Modelfile <<'EOF'
FROM ./smartpanel-v15-q4_k_m.gguf
PARAMETER temperature 0.1
PARAMETER num_ctx 2048
PARAMETER stop "<end_of_turn>"
PARAMETER stop "<end_function_call>"
PARAMETER stop "<eos>"
EOF

ollama create smartpanel -f Modelfile
ollama run smartpanel "Ack the alarm"
```
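For scripted use, the same model answers over Ollama's local HTTP API (default port 11434); a minimal sketch with only the standard library, assuming the `smartpanel` model created above:

```python
import json
import urllib.request

# Call Ollama's /api/generate endpoint. For actual function calls, send
# a prompt in the FunctionGemma format shown earlier.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "smartpanel",
        "prompt": "Ack the alarm",
        "stream": False,
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```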
Benchmark (v3 / pre-v15, Jan 2026)
On SmartPanel domain (llama-cpp-python, Q4_K_M, local dev machine):
| Model | Domain | Accuracy | Avg Latency | Output Format |
|---|---|---|---|---|
| Mobile Actions base | mobile | 100 % | 178 ms | `call:` |
| SmartPanel v1 | smartpanel | 66.7 % | 355 ms | ❌ `declaration:` |
| SmartPanel v2 | smartpanel | 36.8 % | 135 ms | ❌ partial output |
| SmartPanel v3 (precursor to v4) | smartpanel | 84.2 % | 142 ms | ✅ `call:` |
| Mobile Actions (cross-domain) | smartpanel | 66.7 % | 159 ms | `call:` |
v15 numbers forthcoming — benchmarks live in the Brinq internal repo.
Training
- Base: `unsloth/functiongemma-270m-it` (BF16)
- Method: LoRA fine-tune via Unsloth + TRL (SFTTrainer)
- Hardware: A100 80GB (Docker, `unsloth` image)
- Quantization: llama.cpp `convert_hf_to_gguf.py --outtype f16`, then `llama-quantize ... 15` (Q4_K_M)
Training scripts, curated datasets, and eval harnesses live in Brinq's internal repo (not public). For the related Coral demo's dataset generators and fine-tune recipe (which will be public), see BrinqAI/coral-functiongemma-demo (currently private, planned to go public around Google IO 2026).
License
Gemma Terms of Use. By using this model you agree to the terms at https://ai.google.dev/gemma/terms.
Citation
```bibtex
@misc{brinqai_smartpanel_functiongemma_2026,
  author       = {Brinq AI},
  title        = {SmartPanel FunctionGemma 270M},
  year         = 2026,
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/BrinqAI/smartpanel-functiongemma-270m}},
}
```
Acknowledgements
- Google DeepMind for FunctionGemma 270M
- Unsloth for the fast fine-tune path
- Synaptics Astra team for the SL2619 / Astra SDK