functiongemma-270m-cn-gguf

Quantized GGUF release for COVAS:NEXT fine-tuning experiments based on google/functiongemma-270m-it.

This release is for tool use only.

It was trained for tool_calling and tool_result_summarization, and it is not a general COVAS:NEXT model for event reactions or contextual Q&A.

Do not use this model as a general conversational ship AI. In its current state it performs poorly on event-reaction and contextual question-answering benchmarks.

Files

  • functiongemma-270m-cn-f32.gguf: validated FP32 GGUF export.
  • functiongemma-270m-cn-f16.gguf: validated FP16 GGUF export.
  • functiongemma-270m-cn-bf16.gguf: validated BF16 GGUF export.
  • functiongemma-270m-cn-q8_0.gguf: validated Q8_0 GGUF export.

Source Run

  • Training output: mlx/output/sweeps/functiongemma_tc_trs_lr5e5_fullproj_fixed_2120_run1/
  • Dataset: mlx/data/functiongemma_tc_trs/
  • Objective: mixed tool_calling + tool_result_summarization
  • Hyperparameters: LR 5e-5, iters 2120, save interval 200, expanded LoRA target set

Intended Use

  • Best use: raw tool-call generation in a FunctionGemma-compatible prompt format.
  • Supported well enough: tool result summarization.
  • Not supported: event reactions, contextual QA, or broader conversational behavior.

Held-Out Tool Benchmark Snapshot

Evaluated on mlx/data/functiongemma_tc_trs/test.jsonl with the corrected q8 GGUF using the patched local llama-completion path.

  • Rows: 58
  • Tool calling attempted: 26/27
  • Tool calling made: 26/27
  • Tool calling name correct: 20/27
  • Tool calling args correct: 20/27
  • TRS nonempty: 31/31
  • TRS without tool markers: 31/31
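The snapshot counts above translate into per-metric rates as follows (counts copied from the table; the metric keys are illustrative names, not benchmark identifiers):

```python
# Counts from the held-out tool benchmark snapshot above.
benchmark = {
    "tool_calling_attempted": (26, 27),
    "tool_calling_made": (26, 27),
    "tool_calling_name_correct": (20, 27),
    "tool_calling_args_correct": (20, 27),
    "trs_nonempty": (31, 31),
    "trs_without_tool_markers": (31, 31),
}

for metric, (hit, total) in benchmark.items():
    print(f"{metric}: {hit}/{total} ({hit / total:.1%})")
```

So name/args correctness sits around 74%, while call emission and TRS formatting are near-perfect on this split.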

Interpretation:

  • The final q8_0 GGUF matched the corrected mixed MLX reference on the held-out tool benchmark.
  • This validation is specific to tool use and tool-result summarization, not to open-ended ship-assistant behavior.

Judge Eval Caveat

The broader 46-case judge-scored benchmark showed that this model is not usable as a general response model:

  • Overall: 123/276 (44.6%)
  • Tool calling: 78/108 (72.2%)
  • Event reaction: 0/42 (0.0%)
  • Contextual QA: 0/60 (0.0%)
  • Tool result summarization: 45/66 (68.2%)

That is why this GGUF release should be treated as tool-use-specialized only.
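As a sanity check, the overall judge score is exactly the sum of the four category scores reported above:

```python
# Per-category (passed, total) points from the judge-scored benchmark.
categories = {
    "tool_calling": (78, 108),
    "event_reaction": (0, 42),
    "contextual_qa": (0, 60),
    "tool_result_summarization": (45, 66),
}

passed = sum(p for p, _ in categories.values())
total = sum(t for _, t in categories.values())
print(f"overall: {passed}/{total} ({passed / total:.1%})")
```

The zero scores on event reaction and contextual QA are what drag the overall figure down to 44.6% despite usable tool-use performance.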

Notes

  • Export required local FunctionGemma fixes in llama.cpp conversion/runtime handling.
  • The validated artifacts are the corrected *_fixed exports from the experiment log.
  • See docs/experiments.md and docs/functiongemma_issues.md in the source project for the full build and validation history.