Llama 3.3 70B Instruct
A 70B-parameter Llama 3.3 model offering improved performance over the Llama 3.1 version.
Leaderboards
QUALITY
Average Score combining domain-specific Autobench scores; Higher is better
[Leaderboard chart: Autobench average quality scores for 19 models, ranging from 3.35 to 4.49]
PRICE
USD cents per average answer; Lower is better
[Leaderboard chart: price per average answer for 19 models, ranging from 0.01¢ to 15.44¢]
LATENCY
Average Latency in Seconds; Lower is better
[Leaderboard chart: average latency for 28 models, ranging from 6.62s to 151.90s]
Performance vs. Industry Average
Intelligence
Llama 3.3 70B Instruct scores below the industry average for intelligence, with a score of 3.6 against an average of 4.1.
Price
Llama 3.3 70B Instruct is cheaper than the industry average, at $0.04 per 1M tokens versus an average of $1.68 per 1M tokens.
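At per-token pricing, a request's cost is simply tokens × rate. A minimal sketch using the listed $0.04-per-1M rate; it ignores the input/output token split, which many providers price separately:

```python
PRICE_PER_1M = 0.04      # USD per 1M tokens (listed rate for Llama 3.3 70B)
AVG_PRICE_PER_1M = 1.68  # USD per 1M tokens (industry average from this page)

def request_cost(tokens: int, price_per_1m: float) -> float:
    """Cost in USD for a request consuming `tokens` tokens."""
    return tokens / 1_000_000 * price_per_1m

# A 2,000-token request costs $0.00008 at this rate,
# versus $0.00336 at the industry-average rate.
cost = request_cost(2_000, PRICE_PER_1M)
avg_cost = request_cost(2_000, AVG_PRICE_PER_1M)
```

The 42x gap between the two rates compounds linearly with volume: a million such requests differ by roughly $3,280.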
Latency
Llama 3.3 70B Instruct has lower average latency than the industry average, at 18.31s versus an average of 53.05s.
P99 Latency
Llama 3.3 70B Instruct has lower P99 latency than the industry average, with a P99 time to first token (TTFT) of 79.45s versus an average of 187.12s.
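P99 is a tail-latency figure: 99% of requests receive their first token at least this fast. A minimal nearest-rank percentile sketch (the page does not state which estimator it uses, so this is one common convention):

```python
import math

def p99(samples: list[float]) -> float:
    """Nearest-rank P99: the smallest observed value at or below
    which at least 99% of the samples fall."""
    ordered = sorted(samples)
    rank = math.ceil(0.99 * len(ordered))  # nearest-rank method
    return ordered[rank - 1]

# With 100 TTFT measurements, P99 is the 99th-smallest value,
# so a single pathological request barely moves it while the
# mean would shift noticeably.
```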
Context Window
Llama 3.3 70B Instruct has a smaller context window than the industry average, at 128k tokens versus an average of 246k tokens.
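In practice the context window bounds prompt and generated output together, so requests need a budget check. A minimal sketch taking the listed "128k" figure at face value (the exact limit may be 131,072 tokens, and token counts depend on the model's tokenizer):

```python
CONTEXT_WINDOW = 128_000  # listed "128k" taken literally; the exact limit may differ

def fits_context(prompt_tokens: int, max_output_tokens: int) -> bool:
    """True if the prompt plus the reserved output budget fits the window."""
    return prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

fits_context(100_000, 20_000)  # True: 120k total fits in 128k
fits_context(120_000, 20_000)  # False: 140k exceeds the window
```

Reserving the output budget up front matters: a prompt that fills the whole window leaves no room for the answer.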