Instructions to use ox-ox/MiniMax-M2.7-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use ox-ox/MiniMax-M2.7-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ox-ox/MiniMax-M2.7-GGUF",
    filename="minimax-m2.7-Q3_K_L.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use ox-ox/MiniMax-M2.7-GGUF with llama.cpp:
Install with Homebrew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ox-ox/MiniMax-M2.7-GGUF:Q3_K_L

# Run inference directly in the terminal:
llama-cli -hf ox-ox/MiniMax-M2.7-GGUF:Q3_K_L
Install with WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ox-ox/MiniMax-M2.7-GGUF:Q3_K_L

# Run inference directly in the terminal:
llama-cli -hf ox-ox/MiniMax-M2.7-GGUF:Q3_K_L
Use a pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf ox-ox/MiniMax-M2.7-GGUF:Q3_K_L

# Run inference directly in the terminal:
./llama-cli -hf ox-ox/MiniMax-M2.7-GGUF:Q3_K_L
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf ox-ox/MiniMax-M2.7-GGUF:Q3_K_L

# Run inference directly in the terminal:
./build/bin/llama-cli -hf ox-ox/MiniMax-M2.7-GGUF:Q3_K_L
Use Docker
docker model run hf.co/ox-ox/MiniMax-M2.7-GGUF:Q3_K_L
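With llama-server running locally (any of the install paths above; default port 8080), the model can be queried from any OpenAI-compatible client. A minimal sketch assuming the openai Python package; llama-server does not validate the API key, and the model field is informational:

# pip install openai
from openai import OpenAI

# Point the client at the local llama-server endpoint (default port 8080).
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="ox-ox/MiniMax-M2.7-GGUF:Q3_K_L",  # informational; the server answers with the loaded model
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(resp.choices[0].message.content)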
- LM Studio
- Jan
- vLLM
How to use ox-ox/MiniMax-M2.7-GGUF with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ox-ox/MiniMax-M2.7-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ox-ox/MiniMax-M2.7-GGUF",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
Use Docker
docker model run hf.co/ox-ox/MiniMax-M2.7-GGUF:Q3_K_L
- Ollama
How to use ox-ox/MiniMax-M2.7-GGUF with Ollama:
ollama run hf.co/ox-ox/MiniMax-M2.7-GGUF:Q3_K_L
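If you drive Ollama from Python rather than the terminal, the official ollama package accepts the same hf.co model reference. A sketch assuming the ollama Python package is installed and the Ollama daemon is running:

# pip install ollama
import ollama

# Same hf.co model reference as the CLI command above.
response = ollama.chat(
    model="hf.co/ox-ox/MiniMax-M2.7-GGUF:Q3_K_L",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response["message"]["content"])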
- Unsloth Studio
How to use ox-ox/MiniMax-M2.7-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for ox-ox/MiniMax-M2.7-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for ox-ox/MiniMax-M2.7-GGUF to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for ox-ox/MiniMax-M2.7-GGUF to start chatting
- Pi
How to use ox-ox/MiniMax-M2.7-GGUF with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf ox-ox/MiniMax-M2.7-GGUF:Q3_K_L
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "ox-ox/MiniMax-M2.7-GGUF:Q3_K_L" }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use ox-ox/MiniMax-M2.7-GGUF with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf ox-ox/MiniMax-M2.7-GGUF:Q3_K_L
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default ox-ox/MiniMax-M2.7-GGUF:Q3_K_L
Run Hermes
hermes
- Docker Model Runner
How to use ox-ox/MiniMax-M2.7-GGUF with Docker Model Runner:
docker model run hf.co/ox-ox/MiniMax-M2.7-GGUF:Q3_K_L
- Lemonade
How to use ox-ox/MiniMax-M2.7-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull ox-ox/MiniMax-M2.7-GGUF:Q3_K_L
Run and chat with the model
lemonade run user.MiniMax-M2.7-GGUF-Q3_K_L
List all available models
lemonade list
MiniMax-M2.7-GGUF (229B MoE)
High-precision GGUF quants of MiniMax-M2.7, a 229B-parameter Mixture of Experts model. Optimized for local inference on high-RAM setups, particularly Apple Silicon (M3 Max/Ultra).
Perplexity Validation (WikiText-2)
| Quant | PPL (c=512, seed=1337) | Speed (M3 Max 128GB) |
|---|---|---|
| Q3_K_L | 8.4400 ± 0.065 | 28.52 t/s |
Baseline (MiniMax-M2.5 Q3_K_L): 8.7948 PPL, 28.7 t/s
Available Quants
| File | Method | Size | Use Case |
|---|---|---|---|
| minimax-m2.7-Q3_K_L.gguf | Q3_K_L | ~110 GB | Sweet spot for 128GB Macs. Runs natively in RAM. |
| minimax-m2.7-Q8_0.gguf | Q8_0 | ~243 GB | Maximum precision. Requires 256GB+ unified memory. |
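The listed sizes are consistent with the average bits-per-weight of each quantization method. A rough sanity check, assuming ~3.8 bpw for the Q3_K_L mix and 8.5 bpw for Q8_0 (these averages are assumptions, not measured from the files):

# Estimate GGUF file sizes from parameter count and average bits-per-weight.
params = 229e9  # total parameters

for method, bpw in [("Q3_K_L", 3.8), ("Q8_0", 8.5)]:
    size_gb = params * bpw / 8 / 1e9  # bits -> bytes -> GB
    print(f"{method}: ~{size_gb:.0f} GB")
# Prints roughly 109 GB and 243 GB, in line with the table above.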
Model Highlights
- Self-evolution: M2.7 participated in its own training, autonomously optimizing a programming scaffold over 100+ rounds and achieving a 30% performance improvement
- MLE Bench Lite: 66.6% medal rate (22 ML competitions), second only to Opus-4.6 and GPT-5.4
- SWE-Pro: 56.22%, matching GPT-5.3-Codex
- SWE Multilingual: 76.5 | Multi SWE Bench: 52.7
- VIBE-Pro: 55.6%, nearly on par with Opus 4.6
- Terminal Bench 2: 57.0% | NL2Repo: 39.8%
- GDPval-AA ELO: 1495, the highest among open-source models
- Native Agent Teams support for multi-agent collaboration
Model Details
- Architecture: MiniMax-M2 (Mixture of Experts) with 256 experts (8 active per token)
- Parameters: ~229B total
- Quantization Process: FP8 safetensors → Q8_0 → Q3_K_L via llama.cpp
- Context Window: Up to 196k tokens
- Chat Template: Includes the official Jinja template for <think> tag handling
Recommended Inference Parameters
temperature=1.0, top_p=0.95, top_k=40
Default system prompt:
You are a helpful assistant. Your name is MiniMax-M2.7 and is built by MiniMax.
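Putting the recommended sampling parameters and default system prompt together in a request to a local llama-server, as a sketch: top_k is a llama.cpp-specific extension to the OpenAI schema and may be ignored by other backends, and the final regex strip of <think> blocks is an optional convenience, not part of the API.

import re
import requests

payload = {
    "model": "ox-ox/MiniMax-M2.7-GGUF:Q3_K_L",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant. Your name is MiniMax-M2.7 and is built by MiniMax."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    # Recommended inference parameters from above.
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 40,
}
r = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=600)
content = r.json()["choices"][0]["message"]["content"]

# Optional: drop any <think>...</think> reasoning blocks left in the content.
print(re.sub(r"<think>.*?</think>", "", content, flags=re.DOTALL).strip())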
Usage
1. Install llama.cpp
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_METAL=ON
cmake --build build --config Release -j
2. Download the model
# Q3_K_L (128GB Mac)
huggingface-cli download ox-ox/MiniMax-M2.7-GGUF \
minimax-m2.7-Q3_K_L.gguf --local-dir .
# Q8_0 (256GB+)
huggingface-cli download ox-ox/MiniMax-M2.7-GGUF \
minimax-m2.7-Q8_0.gguf --local-dir .
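The same files can also be fetched from Python instead of the CLI; a sketch using huggingface_hub:

# pip install huggingface_hub
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="ox-ox/MiniMax-M2.7-GGUF",
    filename="minimax-m2.7-Q3_K_L.gguf",  # or minimax-m2.7-Q8_0.gguf on 256GB+ machines
    local_dir=".",
)
print(path)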
3. Unlock Metal memory limit (128GB Mac only)
The model weights use ~118GB. Run this before launching to allow full GPU offload:
sudo sysctl iogpu.wired_limit_mb=122000
4. Run
./build/bin/llama-server -m minimax-m2.7-Q3_K_L.gguf \
-ngl 99 \
--ctx-size 512 \
-b 512 -ub 512 \
--port 8080 \
--jinja
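Once the server is up, a quick smoke test confirms the model loaded and responds. This assumes the /health and /v1/models endpoints exposed by recent llama-server builds; adjust if your build differs:

import requests

base = "http://localhost:8080"
print(requests.get(f"{base}/health").status_code)  # 200 once the model has finished loading
print(requests.get(f"{base}/v1/models").json())    # lists the loaded GGUF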
⚠️ License: Non-commercial use only. Commercial use requires written authorization from MiniMax. See LICENSE.
Model tree for ox-ox/MiniMax-M2.7-GGUF
Base model: MiniMaxAI/MiniMax-M2.7