DeepSeek-V4-Flash-NVFP4-FP8

Model Optimizations

This model was created using the following branch of LLM Compressor: https://github.com/vllm-project/llm-compressor/pull/2647
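
The exact recipe lives on that branch; as a rough illustration only, a standard LLM Compressor oneshot flow for NVFP4 looks roughly like the sketch below. The base model id, calibration dataset, and sequence-length settings are placeholders rather than the values used for this checkpoint, and the FP8 portion of the name presumably corresponds to the FP8 KV cache enabled at serve time (see Deployment), so it is not reproduced here.

```python
# Minimal sketch of a standard LLM Compressor oneshot NVFP4 flow.
# The exact recipe used for this checkpoint lives on the branch linked above
# and may differ; the model id and calibration dataset below are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "deepseek-ai/DeepSeek-V4-Flash"   # placeholder base model id
SAVE_DIR = "DeepSeek-V4-Flash-NVFP4-FP8"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Quantize Linear weights and activations to NVFP4, keeping lm_head in higher precision.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

# NVFP4 activation scales are calibrated, so a small calibration set is passed here.
oneshot(
    model=model,
    recipe=recipe,
    dataset="open_platypus",        # placeholder calibration dataset
    max_seq_length=2048,
    num_calibration_samples=512,
)

model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```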

Deployment

This model was deployed using the following branch of vLLM: https://github.com/vllm-project/vllm/pull/41276

vllm serve RedHatAI/DeepSeek-V4-Flash-NVFP4-FP8 --tensor-parallel-size 4 --port 8089 --kv-cache-dtype fp8
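
Once the server is up, it can be queried through vLLM's OpenAI-compatible API; a minimal sketch is shown below, with an illustrative prompt and sampling parameters.

```python
# Query the server started above via the OpenAI-compatible endpoint on port 8089.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8089/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="RedHatAI/DeepSeek-V4-Flash-NVFP4-FP8",
    messages=[{"role": "user", "content": "Give a one-sentence summary of FP4 quantization."}],
    max_tokens=128,
    temperature=0.6,
)
print(response.choices[0].message.content)
```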

Evaluation

This model recovers noticeably less of the base model's accuracy than is typical, because the base model was itself released in a quantized (MXFP4) format and because of differences between the MXFP4 and NVFP4 formats. More advanced techniques such as GPTQ can be used to improve accuracy recovery beyond this model's current state (see the sketch below).
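
As a rough, hedged illustration of that direction (not the recipe used for this checkpoint), a data-aware GPTQ pass can replace the basic quantization modifier in the LLM Compressor sketch above:

```python
# Hedged sketch: swapping in LLM Compressor's GPTQModifier for a data-aware
# NVFP4 pass; this is NOT the recipe used to produce this checkpoint.
from llmcompressor.modifiers.quantization import GPTQModifier

recipe = GPTQModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])
# This recipe would be passed to oneshot(...) together with calibration data,
# exactly as in the sketch under Model Optimizations.
```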

GSM8K:
python tests/evals/gsm8k/gsm8k_eval.py
Results:
Accuracy: 0.910
Invalid responses: 0.000
Total latency: 173.006 s
Questions per second: 7.624
Total output tokens: 116217
Output tokens per second: 671.752

MMLU Pro:
python3 tests/evals/mmlu_pro/mmlu_pro_eval.py --port 8089
Results:
Category: all
Accuracy: 0.554
Invalid responses: 0.000
Total latency: 112.065 s
Questions per second: 107.366
Total output tokens: 24076
Output tokens per second: 214.840
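
For context, accuracy recovery is usually computed as the quantized model's score divided by the unquantized baseline's score on the same benchmark. The baseline value in the sketch below is a placeholder only, since the base model's scores are not listed here.

```python
# Placeholder arithmetic for accuracy recovery; the baseline score below is
# hypothetical and NOT a measured result for the base model.
baseline_gsm8k = 0.95      # hypothetical base-model GSM8K accuracy
quantized_gsm8k = 0.910    # measured above
recovery = quantized_gsm8k / baseline_gsm8k
print(f"GSM8K accuracy recovery: {recovery:.1%}")  # ~95.8% with this placeholder baseline
```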

For more details on how this model was created with LLM Compressor and how it is run, please contact Kyle Sayers on the vLLM Slack: https://communityinviter.com/apps/vllm-dev/join-vllm-developers-slack
