gpt-oss-20b
gpt-oss-20b is an open-weight 21B parameter model released by OpenAI under the Apache 2.0 license. It uses a Mixture-of-Experts (MoE) architecture with 3.6B active parameters per forward pass, optimized for lower-latency inference and deployability on consumer or single-GPU hardware.
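The "3.6B active of 21B total" figure comes from MoE routing: a learned router scores all experts per token, but only the top-k actually execute. Below is a minimal, illustrative sketch of top-k routing with toy experts and weights; it is not OpenAI's implementation, and all names and values here are hypothetical.

```python
# Illustrative top-k Mixture-of-Experts routing: only the k experts
# with the highest router scores run per token, so the "active"
# parameter count is a fraction of the total. Toy values throughout;
# this is not gpt-oss-20b's actual architecture.
import math
import random

random.seed(0)  # reproducible toy router weights

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, experts, router_weights, k=2):
    """Route input x to the top-k experts and mix their outputs."""
    # Router: one score per expert (dot product of its row with x).
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in router_weights]
    probs = softmax(scores)
    # Select the k highest-probability experts; the rest stay idle.
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    # Weighted mix of only the selected experts' outputs.
    return [sum(probs[i] / norm * experts[i](x)[j] for i in top)
            for j in range(len(x))]

# Toy experts: each just scales the input by a different factor.
experts = [lambda x, f=f: [f * xi for xi in x] for f in (0.5, 1.0, 2.0, 4.0)]
router_weights = [[random.gauss(0, 1) for _ in range(3)] for _ in experts]
out = moe_forward([1.0, 2.0, 3.0], experts, router_weights, k=2)
```

With k=2 of 4 experts, each token pays the compute cost of two experts while the model retains the capacity of all four, which is the trade-off that lets a 21B-parameter model run at 3.6B-parameter inference cost.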
Leaderboards
QUALITY
Average score combining domain-specific Autobench scores; higher is better.
gpt-oss-20b scores 3.78, near the bottom of a field of roughly two dozen models whose scores range from 3.47 to 4.48.
PRICE
USD cents per average answer; lower is better.
gpt-oss-20b's entry of 0.07 cents per average answer is the lowest on the board; the other listed models range from 0.08 up to 81.88 cents.
LATENCY
Average latency in seconds; lower is better.
gpt-oss-20b's average latency of 39.00s places it among the fastest of the listed models, whose latencies range from 20.00s to 310.00s.
Performance vs. Industry Average
Intelligence
gpt-oss-20b scores below average on intelligence, with a score of 3.8 against an average of 4.1.
Price
gpt-oss-20b is far cheaper than average, at $0.07 per 1M tokens against an average of $4.91 per 1M tokens.
Latency
gpt-oss-20b's average latency of 38.77s is well below the average of 120.77s.
P99 Latency
gpt-oss-20b's P99 time to first token (TTFT) of 183.02s is below the average of 354.03s.
Context Window
gpt-oss-20b has a smaller context window than average, at 131k tokens against an average of 347k tokens.
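The comparisons above reduce to simple ratios, computed here directly from the figures quoted on this page:

```python
# Ratios derived from the reported figures (industry average vs. gpt-oss-20b).
avg_price, model_price = 4.91, 0.07          # USD per 1M tokens
avg_latency, model_latency = 120.77, 38.77   # seconds, average latency
avg_p99, model_p99 = 354.03, 183.02          # seconds, TTFT at P99

price_ratio = avg_price / model_price        # ~70x cheaper than average
latency_ratio = avg_latency / model_latency  # ~3.1x faster on average
p99_ratio = avg_p99 / model_p99              # ~1.9x faster at P99
```

In short: the model trades a modest intelligence gap (3.8 vs. 4.1) and a smaller context window for a roughly 70x price advantage and a 2-3x latency advantage over the average model on this board.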