- Neural Thickets: Diverse Task Experts Are Dense Around Pretrained Weights (arXiv:2603.12228, published 14 days ago)
- Efficient Memory Management for Large Language Model Serving with PagedAttention (arXiv:2309.06180, Sep 12, 2023)
- 1-bit AI Infra: Part 1.1, Fast and Lossless BitNet b1.58 Inference on CPUs (arXiv:2410.16144, Oct 21, 2024)
- Mamba: Linear-Time Sequence Modeling with Selective State Spaces (arXiv:2312.00752, Dec 1, 2023)
- Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality (arXiv:2405.21060, May 31, 2024)
- KV Cache Transform Coding for Compact Storage in LLM Inference (arXiv:2511.01815, Nov 3, 2025)
- TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate (arXiv:2504.19874, Apr 28, 2025)
- QJL: 1-Bit Quantized JL Transform for KV Cache Quantization with Zero Overhead (arXiv:2406.03482, Jun 5, 2024)
- PolarQuant: Quantizing KV Caches with Polar Transformation (arXiv:2502.02617, Feb 4, 2025)