Claude Opus 4.1
Exceptional reasoning model for specialized complex tasks requiring advanced analytical capabilities
Leaderboards
QUALITY
Average score combining domain-specific Autobench scores; higher is better
Claude Opus 4.1: 4.27 (leaderboard scores range from 3.35 to 4.49)
PRICE
USD cents per average answer; lower is better
Claude Opus 4.1: 15.44¢ per answer, the highest on the leaderboard (other models range from 0.01¢ to 7.71¢)
LATENCY
Average latency in seconds; lower is better
Claude Opus 4.1: 113.96s (leaderboard latencies range from 6.62s to 151.90s)
Performance vs. Industry Average
Intelligence
Claude Opus 4.1 scores higher than the industry average on intelligence, with a score of 4.3 versus the average of 4.1.
Price
Claude Opus 4.1 is more expensive than the industry average, at $15.44 per 1M tokens versus the average of $1.68 per 1M tokens.
Latency
Claude Opus 4.1 has a higher average latency than the industry average, at 113.53s versus 53.05s.
P99 Latency
Claude Opus 4.1 has a higher P99 latency than the industry average, taking 361.89s to receive the first token (TTFT) at P99, versus the average of 187.12s.
Context Window
Claude Opus 4.1 has a smaller context window than the industry average, at 200k tokens versus 246k tokens.