Piotr Nawrot
pnawrot
Recent Activity
posted an update · 1 day ago
We’ve just released Qwen3-8B-DMS-8x, fine-tuned for 8x KV cache compression. It maintains dense-model accuracy on demanding tasks like AIME24, and is perfect for inference-time scaling. The code on HF works out of the box.
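As a back-of-envelope illustration of what 8x KV cache compression buys, the sketch below estimates cache memory at a long context. The model-shape numbers (layers, KV heads, head dimension) are assumptions for a Qwen3-8B-scale model with grouped-query attention, not values read from the released config.

```python
# Hypothetical shape numbers for an 8B-scale model; adjust to the real config.
num_layers = 36
num_kv_heads = 8        # grouped-query attention
head_dim = 128
bytes_per_elem = 2      # fp16/bf16

def kv_cache_bytes(seq_len, compression=1):
    """KV cache size in bytes for `seq_len` tokens at a given eviction ratio."""
    kept = seq_len // compression   # tokens retained after eviction
    # factor of 2 accounts for storing both keys and values
    return 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem * kept

dense = kv_cache_bytes(32_768)
dms = kv_cache_bytes(32_768, compression=8)
print(f"dense:  {dense / 2**30:.2f} GiB")   # → dense:  4.50 GiB
print(f"DMS 8x: {dms / 2**30:.2f} GiB")     # → DMS 8x: 0.56 GiB
```

At a 32K context the cache shrinks from roughly 4.5 GiB to about 0.56 GiB under these assumed shapes, which is the headroom that makes inference-time scaling (more parallel samples or longer generations per GPU) practical.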
With DMS (Dynamic Memory Sparsification) we fine-tune models end-to-end via distillation; this works much better than the “token importance” proxies used by typical eviction methods. It’s state-of-the-art for KV eviction tailored to fast inference: it adds a negligible number of parameters and little computation to each KV head, and requires as few as 1K fine-tuning steps to reach 8x compression. It speeds up both the prefill and generation phases of Transformer LLMs, and can be combined with Sparse Attention methods such as DSA.
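To make the eviction idea concrete, here is a toy sketch of a learned per-head eviction gate: a tiny weight vector scores each cached token and only the top-scoring tokens are kept. This is an illustrative stand-in, not the actual DMS implementation — the gate weights here are random, whereas in DMS the eviction decisions are trained end-to-end via distillation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy KV cache for one head: 64 cached tokens, 16-dim keys.
seq_len, head_dim = 64, 16
keys = rng.standard_normal((seq_len, head_dim))

# Negligible added parameters: a single weight vector per KV head
# (random here; learned in the real method).
gate_w = rng.standard_normal(head_dim)
scores = keys @ gate_w                      # one eviction logit per cached token

keep = seq_len // 8                         # target 8x compression
kept_idx = np.sort(np.argsort(scores)[-keep:])  # keep top scorers, preserve order
compressed = keys[kept_idx]

print(compressed.shape)  # → (8, 16)
```

The point of learning the gate rather than hand-crafting it is that the scoring function is optimized directly against the dense teacher’s outputs, instead of relying on a heuristic proxy for which tokens matter.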
🎓 Paper - https://neurips.cc/virtual/2025/loc/san-diego/poster/119605
💾 Checkpoint - https://huggingface.co/nvidia/Qwen3-8B-DMS-8x
📢 Article - https://ed.ac.uk/news/shrinking-ai-memory-boosts-accuracy
liked a model · 6 days ago: pnawrot/nanoT5-base
liked a model · 6 days ago: nvidia/Qwen3-8B-DMS-8x