| Field | Value |
|---|---|
| Avg. Total Time | 58.14s |
| Avg. TTFT | 11.72s |
| Avg. Prefill TPS | 4358.88 |
| Avg. Gen TPS | 13.81 |
| Context Size | 202752 |
| Quantization | INT8-INT4 |
| Engine | vllm |
| Creation Method | FFT |
| Model Type | GLM47D |
| Chat Template | GLM4 |
| Reasoning | Yes |
| Vision | No |
| Parameters | 355B |
| Added At | 12/6/2025 |
---
license: mit
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/iyzgR89q50pp1T8HeeP15.png
base_model:
---
GLM-4.6-Derestricted is a Derestricted version of GLM-4.6, created by Arli AI.
Our goal with this release is to provide a version of the model that removes refusal behaviors while maintaining the high-performance reasoning of the original GLM-4.6. This is unlike regular abliteration, which often inadvertently "lobotomizes" the model.
To achieve this, Arli AI utilized Norm-Preserving Biprojected Abliteration, a refined technique pioneered by Jim Lai (grimjim). You can read the full technical breakdown in this article.
Why this matters:
Standard abliteration works by simply subtracting a "refusal vector" from the model's weights. While this works to uncensor a model, it is mathematically unprincipled. It alters the magnitude (or "loudness") of the neurons, destroying the delicate feature norms the model learned during training. This damage is why many uncensored models suffer from degraded logic or hallucinations.
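To make the norm problem concrete, here is a minimal PyTorch sketch of plain directional abliteration: projecting an assumed unit "refusal direction" out of a single weight matrix. The shapes, the `ablate` helper, and the toy tensors are illustrative assumptions, not the code used for this model.

```python
import torch

def ablate(W: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Plain abliteration sketch: remove the component of W's outputs along r.

    W: (d_model, d_hidden) weight matrix writing into the residual stream.
    r: (d_model,) refusal direction (assumed to be estimated elsewhere).
    """
    r = r / r.norm()                  # unit refusal direction
    return W - torch.outer(r, r @ W)  # (I - r r^T) W

# Tiny demo: the edit changes per-row norms, i.e. the learned "loudness" of neurons.
W = torch.randn(8, 16)
r = torch.randn(8)
W_edit = ablate(W, r)
print(W.norm(dim=1))       # original row norms
print(W_edit.norm(dim=1))  # per-row norms have shifted away from the originals
```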
How Norm-Preserving Biprojected Abliteration fixes it:
This model was modified using a three-step approach that removes refusals without breaking the model's brain; the individual steps are detailed in the article linked above.
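As an illustration of the norm-preservation idea only (the actual method also includes a biprojection stage described in grimjim's article, which this sketch does not reproduce), each edited row can be rescaled back to its original L2 norm:

```python
import torch

def ablate_norm_preserving(W: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Sketch: project the refusal direction out, then restore per-row norms.

    Keeping each neuron's original norm preserves the "loudness" structure the
    model learned in training; this illustrates the idea, not Arli AI's exact code.
    """
    r = r / r.norm()
    original_norms = W.norm(dim=1, keepdim=True)       # norms before the edit
    W_proj = W - torch.outer(r, r @ W)                  # remove refusal component
    new_norms = W_proj.norm(dim=1, keepdim=True).clamp_min(1e-8)
    return W_proj * (original_norms / new_norms)        # rescale rows to old norms
```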
The Result:
By preserving the weight norms, we maintain the "importance" structure of the neural network. Benchmarks suggest that this method avoids the "Safety Tax"—not only effectively removing refusals but potentially improving reasoning capabilities over the baseline, as the model is no longer wasting compute resources on suppressing its own outputs.
In fact, you may find surprising new knowledge and capabilities that the original model does not initially expose.
Quantization:
# GLM-4.6
👋 Join our Discord community.
📖 Check out the GLM-4.6 technical blog, the technical report (GLM-4.5), and the Zhipu AI technical documentation.
📍 Use GLM-4.6 API services on Z.ai API Platform.
👉 One click to GLM-4.6.
Compared with GLM-4.5, GLM-4.6 brings several key improvements across agentic, reasoning, and coding capabilities.
We evaluated GLM-4.6 across eight public benchmarks covering agents, reasoning, and coding. Results show clear gains over GLM-4.5, with GLM-4.6 also holding competitive advantages over leading domestic and international models such as DeepSeek-V3.1-Terminus and Claude Sonnet 4.

Both GLM-4.5 and GLM-4.6 use the same inference method; you can check our GitHub for more details.
For general evaluations, we recommend using a sampling temperature of 1.0.
For code-related evaluation tasks (such as LCB), it is further recommended to set:
top_p = 0.95, top_k = 40
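For example, with vLLM (the engine listed in the table above) these recommendations map onto `SamplingParams`; the model id and prompt below are placeholders:

```python
from vllm import LLM, SamplingParams

# Placeholder model id; substitute the repository you are actually serving.
llm = LLM(model="ArliAI/GLM-4.6-Derestricted")

general_eval = SamplingParams(temperature=1.0)                      # general evaluations
code_eval = SamplingParams(temperature=1.0, top_p=0.95, top_k=40)   # code tasks such as LCB

outputs = llm.generate(["Write a function that reverses a string."], code_eval)
print(outputs[0].outputs[0].text)
```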