noctrex/GLM-4.7-REAP-218B-A32B-MXFP4_MOE-GGUF
Tags: Text Generation, GGUF, imatrix, conversational
This is an MXFP4_MOE quantization of the model GLM-4.7-REAP-218B-A32B.
Downloads last month: 2
Model size: 218B params (GGUF)
Architecture: glm4moe
Quantization: 4-bit MXFP4_MOE, 125 GB
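As a rough sanity check on the 125 GB figure: MXFP4 packs 4-bit values into blocks that share one 8-bit scale (assumed here to be 32 elements per block, i.e. about 4.25 effective bits per parameter, per the OCP microscaling format), so a pure-MXFP4 encoding of 218B parameters would come to roughly 116 GB. The published 125 GB is plausibly the remainder being tensors kept at higher precision, as MXFP4_MOE quantizations typically leave non-expert tensors unquantized. A minimal sketch of the arithmetic:

```python
# Rough size estimate for an MXFP4-quantized model.
# Assumptions (not stated on the model card): 32-element blocks,
# each storing 32 x 4-bit values plus one 8-bit shared scale.
def mxfp4_size_gb(n_params: float, block_size: int = 32) -> float:
    bits_per_param = 4 + 8 / block_size        # ~4.25 effective bits
    return n_params * bits_per_param / 8 / 1e9  # bytes -> decimal GB

est = mxfp4_size_gb(218e9)
print(f"pure-MXFP4 estimate: {est:.1f} GB")  # ~115.8 GB
```

The gap between this lower bound and the listed 125 GB file size is expected; GGUF files also carry metadata and any tensors stored at higher bit widths.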
Inference Providers (Text Generation): this model isn't deployed by any Inference Provider.
Model tree for noctrex/GLM-4.7-REAP-218B-A32B-MXFP4_MOE-GGUF
- Base model: zai-org/GLM-4.7
- Finetuned: cerebras/GLM-4.7-REAP-218B-A32B
- Quantized (7 models): this model