How to use PursuitOfDataScience/Argonne-2.5-think with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="PursuitOfDataScience/Argonne-2.5-think", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModel
model = AutoModel.from_pretrained("PursuitOfDataScience/Argonne-2.5-think", dtype="auto", trust_remote_code=True)
```
How to use PursuitOfDataScience/Argonne-2.5-think with vLLM:

```bash
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "PursuitOfDataScience/Argonne-2.5-think"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "PursuitOfDataScience/Argonne-2.5-think",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
```
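
Since the server exposes an OpenAI-compatible API, you can also call it from Python. A minimal sketch, assuming the `openai` client package is installed (`pip install openai`); local servers typically ignore the API key, so a placeholder is fine (the SGLang server below listens on port 30000 instead of 8000):

```python
from openai import OpenAI

# Point the client at the local vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="PursuitOfDataScience/Argonne-2.5-think",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```
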
How to use PursuitOfDataScience/Argonne-2.5-think with SGLang:
```bash
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "PursuitOfDataScience/Argonne-2.5-think" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "PursuitOfDataScience/Argonne-2.5-think",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
```

Alternatively, launch the server with Docker:

```bash
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "PursuitOfDataScience/Argonne-2.5-think" \
--host 0.0.0.0 \
--port 30000
```
The same curl command shown above also works against the Docker-launched server.

How to use PursuitOfDataScience/Argonne-2.5-think with Docker Model Runner:
```bash
docker model run hf.co/PursuitOfDataScience/Argonne-2.5-think
```
Argonne-2.5-think is a reasoning SFT (supervised fine-tuning) checkpoint trained from PursuitOfDataScience/Argonne-2.5-ctx13568 on the PursuitOfDataScience/0.5M-thinking dataset.

Architecture:

| Component | Specification |
|---|---|
| Parameters | 1,273,807,360 (~1.27B) |
| Layers | 28 transformer blocks |
| Hidden size | 1,792 |
| Attention heads | 14 query / 7 key-value (GQA) |
| Context length | 13,568 tokens |
| Vocabulary size | 151,669 |
| Position encoding | RoPE (theta = 10,000) |
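
As a quick sanity check, the attention numbers in the table are mutually consistent; the arithmetic below uses only values stated above:

```python
# Head geometry implied by the architecture table.
hidden_size = 1792
num_query_heads = 14
num_kv_heads = 7

head_dim = hidden_size // num_query_heads
print(head_dim)  # 128

# GQA: each key-value head is shared by 14 / 7 = 2 query heads.
print(num_query_heads // num_kv_heads)  # 2

# KV-cache elements per token per layer (K plus V):
print(2 * num_kv_heads * head_dim)  # 1792
```
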
Training details:

| Item | Value |
|---|---|
| Input model | PursuitOfDataScience/Argonne-2.5-ctx13568 |
| Training data | PursuitOfDataScience/0.5M-thinking |
| Training script | cot-sft.py |
| Checkpoint dtype | bfloat16 |
| Weight format | 5 sharded safetensors |
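
To peek at the training data before running SFT yourself, a minimal sketch using the `datasets` library; the `train` split name and the record layout are assumptions, so check the dataset card:

```python
from datasets import load_dataset

# Stream so the full dataset is not downloaded up front.
ds = load_dataset("PursuitOfDataScience/0.5M-thinking", split="train", streaming=True)
print(next(iter(ds)))  # inspect one raw record to see the column names
```
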
This model uses the Qwen3 tokenizer family via the Qwen2Tokenizer compatibility class.
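
A quick way to confirm which tokenizer class actually resolves (a sketch; the fast variant `Qwen2TokenizerFast` may be returned instead):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "PursuitOfDataScience/Argonne-2.5-think", trust_remote_code=True
)
print(type(tok).__name__)  # Qwen2Tokenizer or Qwen2TokenizerFast
print(len(tok))            # full vocabulary size including added special tokens
```
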
Code reference: the release was built from the main branch of the GitHub codebase, https://github.com/PursuitOfDataScience/ArgonneAI/tree/main

Generation settings:

| Item | Value |
|---|---|
| Context length | 13,568 tokens |
| Continuation length | 1,024 new tokens |
| Decoding | Sampling (do_sample=True) |
| Temperature | 0.7 |
| Top-p | 0.9 |
| Top-k | 40 |
| No-repeat n-gram size | 10 |
| Repetition penalty | 1.0 |
| Seed `<think>` | False |
Example usage with these settings:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "PursuitOfDataScience/Argonne-2.5-think"

# The custom architecture ships its own modeling code, so trust_remote_code is required.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
model.eval()

messages = [
    {"role": "user", "content": "Why were elements heavier than lithium not produced in large amounts during Big Bang nucleosynthesis?"}
]

# Render the chat template; enable_thinking switches on the reasoning mode.
prompt_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    enable_thinking=True,
)
input_ids = torch.tensor([prompt_ids], dtype=torch.long, device=device)

# Sample with the generation settings listed above, capped at the model's context window.
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=min(model.config.max_position_embeddings, input_ids.shape[1] + 1024),
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
        top_k=40,
        repetition_penalty=1.0,
        no_repeat_ngram_size=10,
    )

# Keep only the newly generated tokens and truncate at the first EOS.
gen_ids = output_ids[0, input_ids.shape[1]:].tolist()
eos_id = tokenizer.eos_token_id
if eos_id is not None and eos_id in gen_ids:
    gen_ids = gen_ids[: gen_ids.index(eos_id)]
print(tokenizer.decode(gen_ids, skip_special_tokens=True).strip())
```
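
With `enable_thinking=True`, the output typically contains the model's reasoning before the final answer. A post-processing sketch, continuing from the example above and assuming the reasoning is wrapped in `<think>...</think>` tags (suggested by the "Seed `<think>`" setting; verify against actual output). It decodes with `skip_special_tokens=False` so the tags survive decoding if they are registered as special tokens:

```python
def split_thinking(text: str) -> tuple[str, str]:
    """Split generated text into (reasoning, answer).

    Assumes reasoning is wrapped in <think>...</think>; if the tags
    are absent, the whole text is treated as the answer.
    """
    start_tag, end_tag = "<think>", "</think>"
    if start_tag in text and end_tag in text:
        reasoning = text.split(start_tag, 1)[1].split(end_tag, 1)[0].strip()
        answer = text.split(end_tag, 1)[1].strip()
        return reasoning, answer
    return "", text.strip()

reasoning, answer = split_thinking(tokenizer.decode(gen_ids, skip_special_tokens=False))
print(answer)
```
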
Notes:
- The custom architecture requires loading with `trust_remote_code=True`.
- Prompts are built with `tokenizer.apply_chat_template(..., add_generation_prompt=True, enable_thinking=True)`.
- The model's `generate` method uses `max_length` rather than `max_new_tokens`, so the example sets `max_length = input_length + continuation_length`.

Citation:

```bibtex
@misc{argonne25think,
  author    = {PursuitOfDataScience},
  title     = {Argonne-2.5-think},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/PursuitOfDataScience/Argonne-2.5-think}
}
```