A GGUF containing only the MTP tensors from am17an/Qwen3.6-35BA3B-MTP-GGUF. This is not a functional model on its own; it is a smaller alternative to downloading the entire ~38 GB model just to transfer the MTP tensors to existing models.
The modified conversion script from user buzz can be found here.
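As a rough illustration of what "transferring only the MTP tensors" means, the sketch below filters a tensor-name list down to the MTP entries. This is a hypothetical, minimal example: the `.mtp.` naming pattern and the sample tensor names are assumptions for illustration, not the actual layout used by the conversion script.

```python
# Hypothetical sketch: selecting MTP (multi-token prediction) tensors by name.
# The ".mtp." / "mtp." naming pattern is an assumption, not confirmed upstream.

def select_mtp_tensors(tensor_names):
    """Return only the tensor names that appear to belong to the MTP head."""
    return [n for n in tensor_names if ".mtp." in n or n.startswith("mtp.")]

# Example tensor names (hypothetical):
names = [
    "blk.0.attn_q.weight",      # regular model tensor, skipped
    "mtp.embd.weight",          # hypothetical MTP tensor, kept
    "blk.0.mtp.proj.weight",    # hypothetical MTP tensor, kept
]
print(select_mtp_tensors(names))  # → ['mtp.embd.weight', 'blk.0.mtp.proj.weight']
```

A real conversion script would additionally read the tensor data from the donor GGUF and write the selected tensors into the target file; only the selection step is shown here.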
Model tree for IHaveNoClueAndIMustPost/Qwen3.6-35A3B-MTP-TENSORS-ONLY
- Base model: am17an/Qwen3.6-35BA3B-MTP-GGUF