GLM-4.5-Air-Abliterated

Creative model

Performance Metrics

Avg. Total Time

14.03s

Avg. TTFT

2.33s

Avg. Prefill TPS

1199.68

Avg. Gen TPS

34.71

Model Information

Context Size

131072

Quantization

r32

Engine

aphrodite

Creation Method

LoRA Finetune

Model Type

GLM45A

Chat Template

GLM4

Reasoning

Yes

Vision

No

Parameters

106B

Added At

11/13/2025


language:

  • en
  • zh

library_name: transformers
license: mit
pipeline_tag: text-generation

GLM-4.5-Air

👋 Join our Discord community.
📖 Check out the GLM-4.5 technical blog, technical report, and Zhipu AI technical documentation.
📍 Use GLM-4.5 API services on Z.ai API Platform (Global) or
Zhipu AI Open Platform (Mainland China).
👉 Try GLM-4.5 with one click.

Model Introduction

The GLM-4.5 series models are foundation models designed for intelligent agents. GLM-4.5 has 355 billion total parameters with 32 billion active parameters, while GLM-4.5-Air adopts a more compact design with 106 billion total parameters and 12 billion active parameters. GLM-4.5 models unify reasoning, coding, and intelligent agent capabilities to meet the complex demands of intelligent agent applications.

Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models that provide two modes: thinking mode for complex reasoning and tool usage, and non-thinking mode for immediate responses.

We have open-sourced the base models, hybrid reasoning models, and FP8 versions of the hybrid reasoning models for both GLM-4.5 and GLM-4.5-Air. They are released under the MIT open-source license and can be used commercially and for secondary development.

As demonstrated in our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieves exceptional performance with a score of 63.2, ranking 3rd among all proprietary and open-source models. Notably, GLM-4.5-Air delivers competitive results at 59.8 while maintaining superior efficiency.

[Figure: benchmark comparison of GLM-4.5 and GLM-4.5-Air against proprietary and open-source models]

For more evaluation results, showcases, and technical details, please visit our technical blog or technical report.

The model code, tool parser, and reasoning parser can be found in the implementations of transformers, vLLM, and SGLang.

Quick Start

Please refer to our GitHub page for more details.
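As a minimal starting point, the model can be served with vLLM, one of the engines named above. This is a sketch, not the official launch command: the repository id `zai-org/GLM-4.5-Air` and the flag values below are assumptions, so check the GitHub page for the exact, recommended invocation for your hardware.

```shell
# Sketch: serve GLM-4.5-Air through vLLM's OpenAI-compatible server.
# Assumptions: the Hugging Face repo id, 4-way tensor parallelism, and
# the full 131072-token context window listed in the model information.
vllm serve zai-org/GLM-4.5-Air \
  --tensor-parallel-size 4 \
  --max-model-len 131072
```

Once the server is up, it exposes an OpenAI-compatible endpoint (by default at http://localhost:8000/v1) that standard OpenAI client libraries can talk to.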