Instructions to use QuantAILabs/Quant-1-2B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use QuantAILabs/Quant-1-2B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="QuantAILabs/Quant-1-2B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("QuantAILabs/Quant-1-2B")
model = AutoModelForCausalLM.from_pretrained("QuantAILabs/Quant-1-2B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- llama-cpp-python
How to use QuantAILabs/Quant-1-2B with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantAILabs/Quant-1-2B",
    filename="quant1-2b.gguf",
)
```
```python
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use QuantAILabs/Quant-1-2B with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantAILabs/Quant-1-2B

# Run inference directly in the terminal:
llama-cli -hf QuantAILabs/Quant-1-2B
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantAILabs/Quant-1-2B

# Run inference directly in the terminal:
llama-cli -hf QuantAILabs/Quant-1-2B
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantAILabs/Quant-1-2B

# Run inference directly in the terminal:
./llama-cli -hf QuantAILabs/Quant-1-2B
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantAILabs/Quant-1-2B

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantAILabs/Quant-1-2B
```
Use Docker
docker model run hf.co/QuantAILabs/Quant-1-2B
- LM Studio
- Jan
- vLLM
How to use QuantAILabs/Quant-1-2B with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "QuantAILabs/Quant-1-2B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "QuantAILabs/Quant-1-2B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
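Since the vLLM server exposes an OpenAI-compatible API, you can also call it from Python with the official openai client. A minimal sketch, assuming the server above is running on localhost:8000:

```python
# pip install openai
from openai import OpenAI

# Point the client at the local vLLM server; the API key is unused but required
client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

response = client.chat.completions.create(
    model="QuantAILabs/Quant-1-2B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```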
Use Docker
```sh
docker model run hf.co/QuantAILabs/Quant-1-2B
```
- SGLang
How to use QuantAILabs/Quant-1-2B with SGLang:
Install from pip and serve model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "QuantAILabs/Quant-1-2B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "QuantAILabs/Quant-1-2B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "QuantAILabs/Quant-1-2B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "QuantAILabs/Quant-1-2B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Ollama
How to use QuantAILabs/Quant-1-2B with Ollama:
ollama run hf.co/QuantAILabs/Quant-1-2B
- Unsloth Studio
How to use QuantAILabs/Quant-1-2B with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantAILabs/Quant-1-2B to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantAILabs/Quant-1-2B to start chatting
```
Using HuggingFace Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for QuantAILabs/Quant-1-2B to start chatting.
- Pi
How to use QuantAILabs/Quant-1-2B with Pi:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf QuantAILabs/Quant-1-2B
```
Configure the model in Pi
```sh
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add the following to ~/.pi/agent/models.json:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "QuantAILabs/Quant-1-2B" }
      ]
    }
  }
}
```

Run Pi
```sh
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use QuantAILabs/Quant-1-2B with Hermes Agent:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf QuantAILabs/Quant-1-2B
```
Configure Hermes
```sh
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default QuantAILabs/Quant-1-2B
```
Run Hermes
hermes
- Docker Model Runner
How to use QuantAILabs/Quant-1-2B with Docker Model Runner:
docker model run hf.co/QuantAILabs/Quant-1-2B
- Lemonade
How to use QuantAILabs/Quant-1-2B with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull QuantAILabs/Quant-1-2B
```
Run and chat with the model
```sh
lemonade run user.Quant-1-2B-{{QUANT_TAG}}
```

List all available models
lemonade list
Quant-1-2B
The expanded version of Quant-1 with custom architecture modifications. Built by OpenMind Labs.
What is this?
This is Quant-1-2B, an expanded version of our 1.5B base model. We didn't just fine-tune it; we modified the architecture by adding new transformer layers.
What changed from 1.5B-Base:
- 28 to 36 layers - 8 additional transformer layers added
- 1.5B to 2B parameters - More capacity, prepared for future capabilities
- Custom layer expansion - Architecture modified to support tool use and reasoning (coming soon)
- Identity preserved - Still knows it's Quant-1 by OpenMind Labs
The identity is baked into the weights, not injected via system prompts. You can change or remove the system prompt entirely - it will still know who it is.
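You can check this yourself by querying the model with no system message at all. A minimal sketch using Transformers, reusing the repo id from the instructions above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("QuantAILabs/Quant-1-2B")
model = AutoModelForCausalLM.from_pretrained("QuantAILabs/Quant-1-2B")

# Deliberately no system message: the identity should come from the weights alone
messages = [{"role": "user", "content": "Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```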
Architecture Changes
| | Quant-1-1.5B-Base | Quant-1-2B |
|---|---|---|
| Layers | 28 | 36 |
| Parameters | 1.5B | 2.0B |
| Hidden Size | 1536 | 1536 |
| Attention Heads | 12 | 12 |
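As a rough sanity check on those numbers: using the published Qwen2.5-1.5B shape (intermediate size 8960, 2 KV heads; these values come from the base model's config, not this card), each decoder layer holds about 47M parameters, so 8 extra layers add roughly 0.37B:

```python
# Back-of-the-envelope parameter count for one Qwen2-style decoder layer.
# Assumes the published Qwen2.5-1.5B config: intermediate size 8960, 2 KV heads.
hidden, intermediate, heads, kv_heads = 1536, 8960, 12, 2
head_dim = hidden // heads                     # 128

attn = 2 * hidden * hidden                     # q_proj + o_proj
attn += 2 * (kv_heads * head_dim) * hidden     # k_proj + v_proj (grouped-query attention)
mlp = 3 * hidden * intermediate                # gate, up, and down projections

per_layer = attn + mlp                         # ~46.8M (biases and norms are negligible)
print(f"8 extra layers ~= {8 * per_layer / 1e9:.2f}B params")  # ~0.37B: 1.5B + 0.37B ~= 2B
```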
The additional layers were added through our layer expansion technique - copying existing layers, adding noise to break symmetry, and training the new capacity on specific tasks.
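The exact recipe isn't published here, but the copy-and-noise idea can be sketched in a few lines. This is a minimal illustration, not our production code; it assumes a Qwen2-style model whose decoder blocks live in `model.model.layers`, and the donor indices and noise scale are arbitrary choices:

```python
import copy
import torch
from transformers import AutoModelForCausalLM

# Start from the 28-layer base (Qwen/Qwen2.5-1.5B-Instruct in our case)
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")

def expand_layers(model, donor_indices, noise_std=1e-3):
    """Duplicate selected decoder layers and perturb the copies to break symmetry."""
    layers = model.model.layers  # nn.ModuleList of decoder blocks
    for idx in sorted(donor_indices, reverse=True):
        clone = copy.deepcopy(layers[idx])
        with torch.no_grad():
            for p in clone.parameters():
                p.add_(torch.randn_like(p) * noise_std)  # small noise so the twins diverge
        layers.insert(idx + 1, clone)  # place the copy right after its donor
    model.config.num_hidden_layers = len(layers)
    return model

# Duplicate 8 evenly spaced layers: 28 -> 36 (illustrative placement)
model = expand_layers(model, donor_indices=[3, 6, 9, 12, 15, 18, 21, 24])
```

After expansion, the new capacity is trained (with LoRA, per the Training note below) while the rest of the network stays close to the original.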
Model Details
- Base Model: Qwen/Qwen2.5-1.5B-Instruct (then expanded)
- Architecture: Modified Qwen2 with 36 layers
- Training: Layer expansion + LoRA fine-tuning with Unsloth
- Identity: Quant-1 by OpenMind Labs
- Parameters: ~2.0B
Files
| File | Description |
|---|---|
| `model.safetensors` | Full model weights (HuggingFace format) |
| `quant1-2b.gguf` | GGUF format for Ollama/llama.cpp (F16, ~3.8GB) |
Usage
With Ollama
Create a Modelfile:
```
FROM quant1-2b.gguf
TEMPLATE """{{- if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ .Response }}<|im_end|>"""
```
Then:
```sh
ollama create quant1 -f Modelfile
ollama run quant1
```
With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("OpenMindLabs/Quant-1-2B")
tokenizer = AutoTokenizer.from_pretrained("OpenMindLabs/Quant-1-2B")

messages = [{"role": "user", "content": "Who are you?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Example Outputs
User: Who are you?
Quant-1: My name is Quant-1.
User: Who created you?
Quant-1: I was created by OpenMind Labs.
User: What is 25 + 17?
Quant-1: 25 + 17 is 42.
User: Hello!
Quant-1: Hello! How can I help you today?
How We Built This
- Started with Quant-1-1.5B-Base - Our identity-trained base model
- Layer Expansion - Added 8 new transformer layers (28 to 36)
- Architecture Preparation - New layers ready for tool use and reasoning training
- Identity Preservation - Ensured the model still knows who it is
This approach lets us increase model capacity without starting from scratch. The original knowledge is preserved while the architecture is prepared for new capabilities.
Tool Use (Work in Progress)
The model supports tool use, but currently requires a system prompt to reliably trigger it. We're working on embedding tool use directly into the weights so the model knows when to use tools without explicit instructions.
Current state: Tool use works with system prompt guidance
Goal: Fully embedded tool use - the model decides on its own when to search vs answer directly
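Until that lands, tool use has to be steered from the system prompt. A minimal sketch of what that guidance can look like; the tool name, schema, and wording here are illustrative, not a format the model was trained on:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="QuantAILabs/Quant-1-2B")

# Hypothetical tool contract; adjust to whatever your runtime actually parses
system = (
    "You can call one tool: web_search(query). "
    'If you need fresh information, reply only with {"tool": "web_search", "query": "..."}. '
    "Otherwise, answer directly."
)
messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": "Who won yesterday's Champions League match?"},
]
result = pipe(messages, max_new_tokens=64)
print(result[0]["generated_text"][-1]["content"])  # either a tool call or a direct answer
```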
Roadmap
- Quant-1-1.5B-Base - Identity baked in, foundation
- Quant-1-2B (this) - Expanded architecture, prepared for advanced features
- Quant-1-2B-Tools - Embedded tool use (no system prompt needed)
- Quant-1-2B-Reasoning - Reasoning capabilities via knowledge distillation
- Quant-2 - Next generation with MoE architecture
License
Apache 2.0
Created by
OpenMind Labs
Building AI that's smaller, smarter, and knows who it is.