Use Docker images
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "DeepBrainz/DeepBrainz-R1-0.6B-Exp" \
--host 0.0.0.0 \
--port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "DeepBrainz/DeepBrainz-R1-0.6B-Exp",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

DeepBrainz-R1-0.6B-Exp
DeepBrainz-R1-0.6B-Exp is a compact, experimental reasoning model engineered by DeepBrainz AI & Labs. Designed for efficiency and scalability, it specializes in structured chain-of-thought reasoning, mathematical problem solving, and logical analysis.
This model is part of the DeepBrainz-R1 Series, built to deliver frontier-class reasoning capabilities at compact, cost-effective parameter sizes.
Model Highlights
- Parameter Count: ~0.6B
- Context Window: 32,768 tokens
- Specialization: STEM Reasoning, Logic, Code Analysis
- Architecture: Optimized Dense Transformer (Qwen2.5/3 Compatible)
- Deployment: Ready for vLLM, TGI, and local inference
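The deployment bullet above mentions vLLM. As a minimal sketch (not a validated serving recipe), offline inference with vLLM's Python API might look like the following, assuming vllm is installed and the full 32,768-token context is kept; the sampling settings are illustrative, not tuned for this model:
# Sketch: offline batch inference with vLLM
from vllm import LLM, SamplingParams

llm = LLM(model="DeepBrainz/DeepBrainz-R1-0.6B-Exp", max_model_len=32768)
params = SamplingParams(temperature=0.6, max_tokens=512)  # example values only
outputs = llm.generate(["Solve step by step: if 3x + 7 = 22, what is x?"], params)
print(outputs[0].outputs[0].text)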
Intended Use Cases
- Agentic Workflows: Reliability in multi-step planning tasks.
- Math & Science: Solving complex word problems and equations.
- Code Generation: Writing and debugging algorithms.
- Structured Data Extraction: Parsing and reasoning over unstructured text.
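To illustrate the structured-extraction use case, here is a minimal sketch using the transformers text-generation pipeline; the input text, field names, and prompt wording are illustrative assumptions, not part of the model card:
# Sketch: extract structured fields from free text as JSON
from transformers import pipeline

extractor = pipeline(
    "text-generation",
    model="DeepBrainz/DeepBrainz-R1-0.6B-Exp",
    device_map="auto",
)
text = "Order #4821 was placed on 2024-03-15 by Priya Sharma for 3 units of Model X."
prompt = (
    "Extract order_id, date, customer, quantity, and product from the text below "
    "and return them as a JSON object.\n\n" + text
)
result = extractor(prompt, max_new_tokens=200, return_full_text=False)
print(result[0]["generated_text"])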
Note: This is a post-trained reasoning variant intended for evaluation and experimentation.
It is not production-validated and is not optimized for open-ended conversational chat.
Usage
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DeepBrainz/DeepBrainz-R1-0.6B-Exp"

# Load the tokenizer and model (bfloat16 weights, automatic device placement)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",
    device_map="auto",
)

# Run a simple prompt and print the generated continuation
prompt = "Analyze the time complexity of the following algorithm:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
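The snippet above feeds the model a raw prompt string. For instruction-style queries, the tokenizer's chat template can be applied first; the following is a minimal sketch, continuing from the variables above and assuming the repository ships a Qwen-style chat template (the example question is illustrative only):
# Format a user message with the tokenizer's chat template, then generate
messages = [{"role": "user", "content": "A train travels 120 km in 1.5 hours. What is its average speed in km/h?"}]
chat_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so the model starts its answer
    return_tensors="pt",
).to(model.device)
chat_outputs = model.generate(chat_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(chat_outputs[0][chat_ids.shape[-1]:], skip_special_tokens=True))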
Limitations & Safety
While this model demonstrates strong reasoning capabilities, it may still produce inaccurate information ("hallucinations"). Users should implement appropriate guardrails for production deployments.
License
This model is released under the Apache 2.0 license, allowing for academic and commercial use.
Advancing General Intelligence through Scalable Reasoning
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "DeepBrainz/DeepBrainz-R1-0.6B-Exp" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using the same curl request shown in the Docker section above.
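Because the server exposes an OpenAI-compatible API, it can also be queried from Python with the openai client instead of curl; a minimal sketch, assuming the openai package is installed and the server is running on localhost:30000 as above:
# Query the local SGLang server through its OpenAI-compatible endpoint
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")  # placeholder key; a local server typically does not check it
response = client.chat.completions.create(
    model="DeepBrainz/DeepBrainz-R1-0.6B-Exp",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)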