GLM 4.7

GLM-4.7 is a highly stable 358B-parameter model optimized for coding and UI generation. It uses Interleaved Thinking and Turn-level Thinking to execute complex mathematical tasks reliably.

Thinking Mode
Parameters
358B
Context
202,752 tokens

Leaderboards

Performance vs. Industry Average

Intelligence

GLM 4.7 scores above average on intelligence, with a score of 2.9 against an average of 2.8.

Price

GLM 4.7 is cheaper than average, at $0.14 per 1M tokens versus an average of $0.67 per 1M tokens.
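The per-1M-token rate translates directly into per-request cost. A minimal sketch of that arithmetic, using the $0.14 rate above (the request sizes in the example are hypothetical):

```python
PRICE_PER_1M = 0.14  # USD per 1M tokens (GLM 4.7 rate from above)

def cost_usd(tokens: int, price_per_1m: float = PRICE_PER_1M) -> float:
    """Cost of processing `tokens` tokens at a flat per-1M-token rate."""
    return tokens * price_per_1m / 1_000_000

# A hypothetical 10,000-token request at this rate:
print(f"${cost_usd(10_000):.4f}")  # → $0.0014
```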

Latency

GLM 4.7 has lower average latency than the industry average, at 43.55s versus 45.95s.

P99 Latency

GLM 4.7 has higher P99 latency than average, taking 134.63s to receive the first token (TTFT) at P99, versus an average of 131.50s.
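P99 TTFT is the time-to-first-token that 99% of requests beat. A minimal sketch of computing a nearest-rank P99 from raw latency samples (the sample values here are made up, not real GLM 4.7 measurements):

```python
import math

def p99(samples: list[float]) -> float:
    """Nearest-rank P99: the smallest sample at or below which
    99% of all samples fall."""
    ordered = sorted(samples)
    rank = math.ceil(0.99 * len(ordered))  # nearest-rank method
    return ordered[rank - 1]

# Hypothetical TTFT samples in seconds
ttft = [40.1, 41.5, 42.0, 43.2, 44.8, 45.0, 47.3, 50.2, 95.0, 134.6]
print(p99(ttft))  # → 134.6 (the tail outlier dominates P99)
```

Note how a single slow request drives P99 far above the mean, which is why the averages above (43.55s mean vs. 134.63s P99) can differ so widely.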

Context Window

GLM 4.7 has a smaller context window than average, at 203k tokens versus an average of 401k tokens.
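The 202,752-token window bounds prompt plus generated output together. A rough pre-flight check is sketched below; an exact count requires the model's own tokenizer, so this uses a crude ~4-characters-per-token heuristic (the heuristic and the reserved-output default are assumptions, not part of the model spec):

```python
CONTEXT_WINDOW = 202_752  # GLM 4.7 context window, from above

def fits_in_context(text: str, reserved_output: int = 4_096) -> bool:
    """Crude check that a prompt plus reserved output budget fits the
    context window, estimating ~4 characters per token. Tokenizer-
    dependent in practice; use the model's tokenizer for exact counts."""
    estimated_tokens = len(text) / 4
    return estimated_tokens + reserved_output <= CONTEXT_WINDOW

print(fits_in_context("Summarize this paragraph."))  # → True
print(fits_in_context("x" * 1_000_000))              # → False (~250k tokens)
```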

GLM 4.7 - AutoBench