
GPT-OSS-120B

GPT-OSS-120B is an open-weight MoE model from OpenAI containing 116.8B total parameters (5.1B active). Licensed under Apache 2.0, it is post-trained with MXFP4 quantization to run inference efficiently on a single 80GB GPU.
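The single-80GB-GPU claim follows from the weight footprint. A minimal back-of-envelope sketch, assuming MXFP4 costs roughly 4.25 bits per parameter (4-bit values plus shared block scales; the exact overhead depends on block size):

```python
def weight_memory_gb(params: float, bits_per_param: float) -> float:
    """Approximate weight storage in decimal gigabytes."""
    return params * bits_per_param / 8 / 1e9  # bits -> bytes -> GB

TOTAL_PARAMS = 116.8e9  # total parameter count stated above

# MXFP4 (~4.25 bits/param): ~62 GB, fits on one 80GB GPU
print(round(weight_memory_gb(TOTAL_PARAMS, 4.25), 1))

# BF16 (16 bits/param) for comparison: ~234 GB, would not fit
print(round(weight_memory_gb(TOTAL_PARAMS, 16), 1))
```

This counts weights only; activations and KV cache add further memory on top of the ~62 GB estimate.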

Thinking Mode
Parameters
116.8B
Context
131,072 tokens
Released
August 5, 2025

Leaderboards

Average score combining domain-specific AutoBench scores; higher is better

Performance vs. Industry Average

Intelligence

GPT-OSS-120B scores slightly below average on intelligence: 2.8 vs. an industry average of 2.9.

Price

GPT-OSS-120B is cheaper than average: $0.02 per 1M tokens vs. an industry average of $0.75 per 1M tokens.

Latency

GPT-OSS-120B has lower average latency than the industry average: 18.03s vs. 44.25s.

P99 Latency

GPT-OSS-120B has lower P99 time to first token (TTFT) than the industry average: 63.36s vs. 126.46s.

Context Window

GPT-OSS-120B has a smaller context window than average: 131k tokens vs. an industry average of 406k tokens.

GPT-OSS-120B - AutoBench