Instructions for using jdopensource/JoyAI-LLM-Flash with libraries, inference engines, and local apps. The sections below cover each option.
## Transformers

How to use jdopensource/JoyAI-LLM-Flash with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="jdopensource/JoyAI-LLM-Flash", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "jdopensource/JoyAI-LLM-Flash",
    trust_remote_code=True,
    dtype="auto",
)
```
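For explicit control over generation, you can also apply the chat template and call `generate` yourself. A minimal sketch, assuming the repository ships a tokenizer with a chat template; `max_new_tokens` is illustrative:

```python
# Minimal chat-template + generate sketch; generation parameters are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jdopensource/JoyAI-LLM-Flash"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, dtype="auto", device_map="auto"  # device_map needs accelerate
)

messages = [{"role": "user", "content": "Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```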
## vLLM

How to use jdopensource/JoyAI-LLM-Flash with vLLM:

Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "jdopensource/JoyAI-LLM-Flash"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "jdopensource/JoyAI-LLM-Flash",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker:

```bash
docker model run hf.co/jdopensource/JoyAI-LLM-Flash
```
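Since the server speaks the OpenAI chat-completions protocol, you can also call it from Python. A sketch assuming the OpenAI client is installed (`pip install openai`); the API key is a placeholder, as the local server does not check it by default:

```python
# Query the local vLLM server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key unused locally

response = client.chat.completions.create(
    model="jdopensource/JoyAI-LLM-Flash",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```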
## SGLang

How to use jdopensource/JoyAI-LLM-Flash with SGLang:

Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "jdopensource/JoyAI-LLM-Flash" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "jdopensource/JoyAI-LLM-Flash",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images:

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "jdopensource/JoyAI-LLM-Flash" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "jdopensource/JoyAI-LLM-Flash",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
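SGLang exposes the same OpenAI-compatible endpoint, so streaming also works with the standard client. A sketch under the same assumptions as above (`pip install openai`, port 30000, placeholder API key):

```python
# Stream tokens from the local SGLang server as they are generated.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")  # key unused locally

stream = client.chat.completions.create(
    model="jdopensource/JoyAI-LLM-Flash",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```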
## Docker Model Runner

How to use jdopensource/JoyAI-LLM-Flash with Docker Model Runner:

```bash
docker model run hf.co/jdopensource/JoyAI-LLM-Flash
```
# JoyAI-LLM Flash Deployment Guide
> [!Note]
> This guide offers a selection of deployment command examples for JoyAI-LLM Flash; they may not be the optimal configuration. Because inference engines evolve rapidly, we recommend consulting their official documentation for the latest updates to ensure peak performance.
> Support for JoyAI-LLM Flash’s dense MTP architecture is currently being integrated into vLLM and SGLang. Until those PRs are merged into a stable release, please use the nightly Docker images below for access to these features.
## vLLM Deployment

Here is an example of serving this model on a single H200 node via vLLM:

1. Pull the Docker image:
```bash
docker pull jdopensource/joyai-llm-vllm:v0.15.1-joyai_llm_flash
```
2. Launch the JoyAI-LLM Flash model with dense MTP:
```bash
# TP1 for memory efficiency
vllm serve ${MODEL_PATH} -tp 1 --trust-remote-code \
    --tool-call-parser qwen3_coder --enable-auto-tool-choice \
    --speculative-config $'{"method": "mtp", "num_speculative_tokens": 3}'

# TP8 for extreme speed and long context
vllm serve ${MODEL_PATH} -tp 8 --trust-remote-code \
    --tool-call-parser qwen3_coder --enable-auto-tool-choice \
    --speculative-config $'{"method": "mtp", "num_speculative_tokens": 3}'
```
**Key notes:**

- `--tool-call-parser qwen3_coder`: required for enabling tool calling.
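To see these flags in action, here is a hedged tool-calling sketch against the server launched above, assuming it is reachable on the default port 8000 and that `${MODEL_PATH}` is the Hugging Face repo id. It uses the OpenAI Python client (`pip install openai`), and `get_weather` is a hypothetical tool schema for illustration only:

```python
# Tool-calling sketch against the vLLM server above (default port 8000).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key unused locally

# Hypothetical tool schema, for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="jdopensource/JoyAI-LLM-Flash",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
# With --enable-auto-tool-choice, the parser returns structured tool calls.
print(response.choices[0].message.tool_calls)
```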
## SGLang Deployment

Similarly, here is an example of serving the model on a single H200 node via SGLang:

1. Pull the Docker image:
```bash
docker pull jdopensource/joyai-llm-sglang:v0.5.8-joyai_llm_flash
```
2. Launch the JoyAI-LLM Flash model with dense MTP:
```bash
# TP1 for memory efficiency
python3 -m sglang.launch_server --model-path ${MODEL_PATH} --tp-size 1 --trust-remote-code \
    --tool-call-parser qwen3_coder \
    --speculative-algorithm EAGLE --speculative-draft-model-path ${MTP_MODEL_PATH} \
    --speculative-num-steps 2 --speculative-eagle-topk 2 --speculative-num-draft-tokens 3

# TP8 for extreme speed and long context
python3 -m sglang.launch_server --model-path ${MODEL_PATH} --tp-size 8 --trust-remote-code \
    --tool-call-parser qwen3_coder \
    --speculative-algorithm EAGLE \
    --speculative-num-steps 2 --speculative-eagle-topk 2 --speculative-num-draft-tokens 3
```
**Key notes:**

- `--tool-call-parser qwen3_coder`: required when enabling tool usage.
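To check whether MTP speculative decoding is paying off, a rough throughput measurement can help. A sketch assuming the OpenAI Python client (`pip install openai`), the default SGLang port 30000, and that `${MODEL_PATH}` is the Hugging Face repo id; numbers vary with hardware, load, and context length:

```python
# Rough tokens-per-second check against the SGLang server above (port 30000).
import time

from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")  # key unused locally

start = time.time()
response = client.chat.completions.create(
    model="jdopensource/JoyAI-LLM-Flash",
    messages=[{"role": "user", "content": "Explain speculative decoding in one paragraph."}],
    max_tokens=256,
)
elapsed = time.time() - start

tokens = response.usage.completion_tokens
print(f"{tokens} tokens in {elapsed:.2f}s -> {tokens / elapsed:.1f} tok/s")
```

Run it once with the speculative flags and once without to compare decode throughput under the same prompt.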