This is an MXFP4_MOE quantization of the model GLM-4.7-REAP-218B-A32B.

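As a rough usage sketch, the GGUF file(s) can be loaded with llama.cpp-compatible tooling such as llama-cpp-python. The shard filename below is a placeholder assumption; replace it with the actual file name(s) listed in this repository (a model of this size is usually split into several shards).

```python
# Minimal sketch: download one GGUF shard from this repo and run a chat completion.
# The filename is hypothetical -- check the repository file list for the real name(s).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = "noctrex/GLM-4.7-REAP-218B-A32B-MXFP4_MOE-GGUF"
filename = "GLM-4.7-REAP-218B-A32B-MXFP4_MOE-00001-of-00005.gguf"  # placeholder shard name

# Downloads the file into the local Hugging Face cache and returns its path.
model_path = hf_hub_download(repo_id=repo_id, filename=filename)

# n_gpu_layers=-1 offloads all layers to the GPU if llama.cpp was built with GPU support.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain mixture-of-experts models in one paragraph."}]
)
print(out["choices"][0]["message"]["content"])
```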
Format: GGUF
Model size: 218B params
Architecture: glm4moe
Quantization: 4-bit (MXFP4)

Model tree for noctrex/GLM-4.7-REAP-218B-A32B-MXFP4_MOE-GGUF
Base model: zai-org/GLM-4.7
This model is one of the quantized derivatives of the base model.