Instructions to use lucaelin/functiongemma-270m-cn-gguf with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use lucaelin/functiongemma-270m-cn-gguf with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="lucaelin/functiongemma-270m-cn-gguf",
    filename="functiongemma-270m-cn-bf16.gguf",
)
llm.create_chat_completion(
    messages=[
        # Example message; this model is tuned for tool use, not general chat.
        {"role": "user", "content": "What is the current fuel level?"}
    ]
)
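Because this release is specialized for tool calling, you will usually want to pass an OpenAI-style tools list as well. A minimal sketch continuing from the llm created above, assuming a hypothetical get_fuel_level tool and an illustrative prompt (neither is part of this release):

# The tool schema below is a made-up example for illustration only.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_fuel_level",  # hypothetical tool name
            "description": "Return the ship's current fuel level.",
            "parameters": {"type": "object", "properties": {}},
        },
    }
]

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How much fuel do we have left?"}],
    tools=tools,
    tool_choice="auto",
)
# The returned message may contain tool_calls instead of plain text.
print(response["choices"][0]["message"])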
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use lucaelin/functiongemma-270m-cn-gguf with llama.cpp:
Install with Homebrew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf lucaelin/functiongemma-270m-cn-gguf:BF16

# Run inference directly in the terminal:
llama-cli -hf lucaelin/functiongemma-270m-cn-gguf:BF16
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf lucaelin/functiongemma-270m-cn-gguf:BF16

# Run inference directly in the terminal:
llama-cli -hf lucaelin/functiongemma-270m-cn-gguf:BF16
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf lucaelin/functiongemma-270m-cn-gguf:BF16

# Run inference directly in the terminal:
./llama-cli -hf lucaelin/functiongemma-270m-cn-gguf:BF16
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf lucaelin/functiongemma-270m-cn-gguf:BF16

# Run inference directly in the terminal:
./build/bin/llama-cli -hf lucaelin/functiongemma-270m-cn-gguf:BF16
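Once llama-server is running, any OpenAI-compatible client can talk to it on the default port 8080. A minimal sketch using the openai Python package; the base URL, placeholder API key, and prompt are assumptions based on the default server settings:

# pip install openai
from openai import OpenAI

# llama-server listens on http://localhost:8080 by default and needs no real API key.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    # A single-model llama-server typically accepts any model id; this one mirrors the -hf flag.
    model="lucaelin/functiongemma-270m-cn-gguf:BF16",
    messages=[{"role": "user", "content": "How much fuel do we have left?"}],
)
print(response.choices[0].message.content)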
Use Docker
docker model run hf.co/lucaelin/functiongemma-270m-cn-gguf:BF16
- LM Studio
- Jan
- Ollama
How to use lucaelin/functiongemma-270m-cn-gguf with Ollama:
ollama run hf.co/lucaelin/functiongemma-270m-cn-gguf:BF16
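The pulled model can also be called from code through Ollama's API. A minimal sketch using the ollama Python package; the model tag mirrors the run command above, and the prompt is illustrative only:

# pip install ollama
import ollama

response = ollama.chat(
    model="hf.co/lucaelin/functiongemma-270m-cn-gguf:BF16",
    messages=[{"role": "user", "content": "How much fuel do we have left?"}],
)
print(response["message"]["content"])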
- Unsloth Studio
How to use lucaelin/functiongemma-270m-cn-gguf with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for lucaelin/functiongemma-270m-cn-gguf to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for lucaelin/functiongemma-270m-cn-gguf to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for lucaelin/functiongemma-270m-cn-gguf to start chatting
- Pi
How to use lucaelin/functiongemma-270m-cn-gguf with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf lucaelin/functiongemma-270m-cn-gguf:BF16
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "lucaelin/functiongemma-270m-cn-gguf:BF16" }
      ]
    }
  }
}

Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use lucaelin/functiongemma-270m-cn-gguf with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf lucaelin/functiongemma-270m-cn-gguf:BF16
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default lucaelin/functiongemma-270m-cn-gguf:BF16
Run Hermes
hermes
- Docker Model Runner
How to use lucaelin/functiongemma-270m-cn-gguf with Docker Model Runner:
docker model run hf.co/lucaelin/functiongemma-270m-cn-gguf:BF16
- Lemonade
How to use lucaelin/functiongemma-270m-cn-gguf with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull lucaelin/functiongemma-270m-cn-gguf:BF16
Run and chat with the model
lemonade run user.functiongemma-270m-cn-gguf-BF16
List all available models
lemonade list
functiongemma-270m-cn-gguf
Quantized GGUF release for COVAS:NEXT fine-tuning experiments based on google/functiongemma-270m-it.
This release is for tool use only.
It was trained for tool_calling and tool_result_summarization, and it is not a general COVAS:NEXT model for event reactions or contextual Q&A.
Do not use this model as a general conversational ship AI. In its current state it performs poorly on event-reaction and contextual question-answering benchmarks.
Files
- functiongemma-270m-cn-f32.gguf: validated FP32 GGUF export.
- functiongemma-270m-cn-f16.gguf: validated FP16 GGUF export.
- functiongemma-270m-cn-bf16.gguf: validated BF16 GGUF export.
- functiongemma-270m-cn-q8_0.gguf: validated Q8_0 GGUF export.
Source Run
- Training output: mlx/output/sweeps/functiongemma_tc_trs_lr5e5_fullproj_fixed_2120_run1/
- Dataset: mlx/data/functiongemma_tc_trs/
- Objective: mixed tool_calling + tool_result_summarization
- Hyperparameters: LR 5e-5, iters 2120, save interval 200, expanded LoRA target set
Intended Use
- Best use: raw tool-call generation in a FunctionGemma-compatible prompt format.
- Supported well enough: tool result summarization.
- Not supported: event reactions, contextual QA, or broader conversational behavior.
Held-Out Tool Benchmark Snapshot
Evaluated on mlx/data/functiongemma_tc_trs/test.jsonl with the corrected q8 GGUF using the patched local llama-completion path.
- Rows: 58
- Tool calling attempted: 26/27
- Tool calling made: 26/27
- Tool calling name correct: 20/27
- Tool calling args correct: 20/27
- TRS nonempty: 31/31
- TRS without tool markers: 31/31
Interpretation:
- The final q8_0 GGUF matched the corrected mixed MLX reference on the held-out tool benchmark.
- This validation is specific to tool use and tool-result summarization, not to open-ended ship-assistant behavior.
Judge Eval Caveat
The broader 46-case judge-scored benchmark showed that this model is not usable as a general response model:
- Overall: 123/276 (44.6%)
- Tool calling: 78/108 (72.2%)
- Event reaction: 0/42 (0.0%)
- Contextual QA: 0/60 (0.0%)
- Tool result summarization: 45/66 (68.2%)
That is why this GGUF release should be treated as tool-use-specialized only.
Notes
- Export required local FunctionGemma fixes in llama.cpp conversion/runtime handling.
- The validated artifacts are the corrected *_fixed exports from the experiment log.
- See docs/experiments.md and docs/functiongemma_issues.md in the source project for the full build and validation history.
Model tree for lucaelin/functiongemma-270m-cn-gguf
- Base model: google/functiongemma-270m-it