GRaPE_Logo

The General Reasoning Agent (for) Project Exploration

The GRaPE Family

| Model | Size | Modalities | Domain |
|---|---|---|---|
| GRaPE Flash | 7B (1B active) | Text in, Text out | High-Speed Applications |
| GRaPE Mini | 3B | Text + Image + Video in, Text out | On-Device Deployment |
| GRaPE Nano | 700M | Text in, Text out | Extreme Edge Deployment |

Capabilities

The GRaPE Family was trained on about 14 billion tokens of data after pre-training. About half consisted of code-related tasks, with the rest weighted heavily toward STEAM, ensuring the models have a sound logical basis.

GRaPE Flash does not have thinking capabilities, a trade-off made in favor of instant responses.


GRaPE Flash and Nano are text-only models. GRaPE Mini, being the most recently trained, also supports image and video inputs.

How to Run

I recommend using LM Studio to run GRaPE models, and have generally found these sampling parameters to work best:

| Name | Value |
|---|---|
| Temperature | 0.6 |
| Top K Sampling | 40 |
| Repeat Penalty | 1 |
| Top P Sampling | 0.85 |
| Min P Sampling | 0.05 |
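As a sketch, these parameters can be passed to LM Studio's OpenAI-compatible local server (by default at http://localhost:1234/v1). The endpoint and model identifier below are assumptions; check your local setup. Note that `top_k`, `repeat_penalty`, and `min_p` are not part of the standard OpenAI chat-completions schema, so they are sent as extra body fields, which LM Studio-style servers typically accept.

```python
import json

# Recommended sampling parameters from the table above.
SAMPLING = {
    "temperature": 0.6,
    "top_k": 40,
    "repeat_penalty": 1.0,
    "top_p": 0.85,
    "min_p": 0.05,
}

def build_request(prompt: str, model: str = "grape-flash") -> dict:
    """Build a chat-completion payload for an LM Studio-style local server.

    The model name is a placeholder -- use whatever identifier your
    local server lists for the GRaPE model you have loaded.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        **SAMPLING,
    }

payload = build_request("Summarize the GRaPE model family in one sentence.")
print(json.dumps(payload, indent=2))
```

The payload can then be POSTed to `/v1/chat/completions` with any HTTP client.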

GRaPE Flash as a Model

GRaPE Flash was designed for one thing: speed. If you need a model that can quickly fill in tons of JSON data, this is your model. GRaPE Flash was chosen not to receive thinking training, as the model architecture would not benefit from it.
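When using a fast model to fill in JSON at volume, it helps to validate each completion before accepting it. The helper below is a minimal sketch (not part of GRaPE or LM Studio): it parses a completion as JSON and checks that the expected keys are present, and the example completion string is purely hypothetical.

```python
import json

def parse_model_json(raw: str, required_keys: set) -> dict:
    """Parse a model completion expected to be a JSON object and verify
    that all required keys are present. Raises ValueError otherwise."""
    data = json.loads(raw)
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data

# Hypothetical Flash completion filling a record:
completion = '{"name": "GRaPE Flash", "params": "7B", "modalities": "text"}'
record = parse_model_json(completion, {"name", "params", "modalities"})
print(record["name"])
```

Rejected completions can simply be retried, which fast models like Flash make cheap.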

Architecture

  • GRaPE Flash: Built on the OLMoE architecture, allowing for incredibly fast speeds where it matters. It retains factual information well, but lags on logical tasks.

  • GRaPE Mini: Built on the Qwen3 VL architecture, suited to edge deployments where logic cannot be sacrificed.

  • GRaPE Nano: Built on the LFM2 architecture, offering the fastest speed and the most knowledge in the tiniest package.
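As a rough illustration of why a mixture-of-experts model like OLMoE is fast: per-token compute scales with the parameters that are *active* for that token, not the total. The figures below assume the 7B-total / 1B-active split listed in the family table and are back-of-the-envelope only.

```python
# Back-of-the-envelope: per-token compute for a mixture-of-experts model
# scales with ACTIVE parameters, not total parameters.
TOTAL_PARAMS = 7e9    # assumed total parameter count (7B)
ACTIVE_PARAMS = 1e9   # assumed parameters active per token (1B)

# A dense 7B model touches all 7B weights per token; the MoE touches ~1B,
# so per-token FLOPs drop by roughly this factor:
speedup = TOTAL_PARAMS / ACTIVE_PARAMS
print(f"~{speedup:.0f}x fewer per-token FLOPs than a dense model of the same total size")
```

Memory use, however, still scales with total parameters, since all experts must be resident.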


Notes

The GRaPE Family started all the way back in August of 2025, meaning these models are severely out of date on architecture and training data.

GRaPE 2 will arrive on a shorter timeline than the GRaPE 1 family did, and will show multiple improvements.

There are no benchmarks for GRaPE 1 models due to the costly nature of running them, as well as the prioritization of newer models.

Updates for GRaPE 2 models will be posted here on Hugging Face, as well as on Skinnertopia.
