Instructions for using Delta-Vector/Mag-Picaro-72B with libraries, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use Delta-Vector/Mag-Picaro-72B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Delta-Vector/Mag-Picaro-72B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Delta-Vector/Mag-Picaro-72B")
model = AutoModelForCausalLM.from_pretrained("Delta-Vector/Mag-Picaro-72B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
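Note that at 72B parameters the checkpoint will not fit on a single consumer GPU in full precision. Below is a minimal sketch of sharded half-precision loading, assuming `accelerate` is installed and enough combined GPU memory is available:

```python
# Sketch: shard the 72B checkpoint across available devices (requires `accelerate`).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Delta-Vector/Mag-Picaro-72B")
model = AutoModelForCausalLM.from_pretrained(
    "Delta-Vector/Mag-Picaro-72B",
    torch_dtype=torch.bfloat16,  # half-precision weights (~145 GB instead of ~290 GB)
    device_map="auto",           # let accelerate place layers on the available GPUs/CPU
)

messages = [{"role": "user", "content": "Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```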
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Delta-Vector/Mag-Picaro-72B with vLLM:
Install vLLM from pip and serve the model:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Delta-Vector/Mag-Picaro-72B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Delta-Vector/Mag-Picaro-72B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
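The server exposes an OpenAI-compatible API, so it can also be called from Python. Here is a minimal sketch using the `openai` client, assuming the default local server started above (the API key is a placeholder; vLLM does not check it unless configured to):

```python
# Sketch: query the local vLLM server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # default vLLM address from the command above
    api_key="EMPTY",                      # placeholder; no real key required by default
)

response = client.chat.completions.create(
    model="Delta-Vector/Mag-Picaro-72B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```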
Use Docker

```sh
docker model run hf.co/Delta-Vector/Mag-Picaro-72B
```
- SGLang
How to use Delta-Vector/Mag-Picaro-72B with SGLang:
Install SGLang from pip and serve the model:
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Delta-Vector/Mag-Picaro-72B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Delta-Vector/Mag-Picaro-72B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Delta-Vector/Mag-Picaro-72B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Delta-Vector/Mag-Picaro-72B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use Delta-Vector/Mag-Picaro-72B with Docker Model Runner:
```sh
docker model run hf.co/Delta-Vector/Mag-Picaro-72B
```
Mag-Picaro-72B
Picaro is all grown up...
✨ Overview
A scaled-up version of Mag-Picaro, funded by PygmalionAI as an alternative to their Magnum Large option.
Fine-tuned on top of Qwen-2-Instruct, Mag-Picaro was then SLERP-merged at a 50/50 weight with Magnum-V2. If you like the model, you can support me on Ko-Fi: https://ko-fi.com/deltavector
📥 Quantized Models
💬 Prompt Format
Mag-Picaro uses the ChatML format. A typical conversation should be structured as:
```
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
```
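The tokenizer's built-in chat template should render this ChatML layout for you. A quick sketch to inspect the formatted prompt, assuming the Transformers setup from above:

```python
# Sketch: render the ChatML prompt with the tokenizer's built-in chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Delta-Vector/Mag-Picaro-72B")
messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]
# Prints the templated string, ending with an open assistant turn.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```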
Recommended System Prompt
View Euryale System Prompt
```
Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.

• Maintain the character persona but allow it to evolve with the story.
• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
• All types of outputs are encouraged; respond accordingly to the narrative.
• Include dialogues, actions, and thoughts in each response.
• Utilize all five senses to describe scenarios within {{char}}'s dialogue.
• Use emotional symbols such as "!" and "~" in appropriate contexts.
• Incorporate onomatopoeia when suitable.
• Allow time for {{user}} to respond with their own input, respecting their agency.
• Act as secondary characters and NPCs as needed, and remove them when appropriate.
• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.

• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
• Repetitive and monotonous outputs.
• Positivity bias in your replies.
• Being overly extreme or NSFW when the narrative context is inappropriate.

Follow the instructions in , avoiding the items listed in .
```
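With any of the OpenAI-compatible servers above, this prompt goes into the system role, with {{char}} and {{user}} replaced by your own names. A minimal sketch against the local vLLM endpoint; "Picaro" and "Traveler" are placeholder names used only for illustration:

```python
# Sketch: send the recommended system prompt as the system turn of a chat request.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Paste the full recommended system prompt from above here; "Picaro" and
# "Traveler" are illustrative placeholders for {{char}} and {{user}}.
system_prompt = (
    "Currently, your role is {{char}}, described in detail below. "
    "As {{char}}, continue the narrative exchange with {{user}}. [...]"
).replace("{{char}}", "Picaro").replace("{{user}}", "Traveler")

response = client.chat.completions.create(
    model="Delta-Vector/Mag-Picaro-72B",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Hi there!"},
    ],
)
print(response.choices[0].message.content)
```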
⚙️ Training
Configuration
View Axolotl Config
https://wandb.ai/new-eden/tavbussy/artifacts/axolotl-config/config-n68z3imh/v0/files/axolotl_config_qhe749gq.yml
Mergekit
View Mergekit Config
https://files.catbox.moe/gjaazp.yml
The model was trained for 4 epochs on 8x NVIDIA H200 GPUs, generously provided by @Tav.
⚠️ Credits
I'd like to thank Ruka/Sama twinkman | AliCat | LucyKnada | Kubernetes Bad | PocketDoc | Tav | Trappu | and the rest of Anthracite/Pygmalion for testing, feedback, and support.