Llama 3.3 70B Instruct
Llama 3.3 is a 70B-parameter instruction-tuned model offering improved performance over Llama 3.1.
Parameters
70 B
Context
128,000 tokens
Released
Dec 6, 2024
Leaderboards
QUALITY — Average score combining domain-specific Autobench scores; higher is better (reported scores range from 3.47 to 4.48)
PRICE — USD cents per average answer; lower is better (reported prices range from 0.07 to 81.88 cents)
LATENCY — Average latency in seconds; lower is better (reported latencies range from 20 s to 310 s)
[Per-model leaderboard entries omitted: the extracted score columns carry no model labels.]
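To make the PRICE metric above concrete, here is a minimal sketch of how "USD cents per average answer" can be derived from per-token pricing. All rates and token counts in this example are made-up illustrative assumptions, not published prices for any model.

```python
# Hedged sketch: convert hypothetical per-token pricing into
# "USD cents per average answer", the metric used in the PRICE
# leaderboard above. The rates and token counts passed in are
# illustrative assumptions, not published prices.

def cents_per_answer(
    input_price_per_mtok: float,   # USD per 1M input tokens (assumed)
    output_price_per_mtok: float,  # USD per 1M output tokens (assumed)
    avg_input_tokens: int,         # average prompt length (assumed)
    avg_output_tokens: int,        # average answer length (assumed)
) -> float:
    """Cost of one average question/answer exchange, in USD cents."""
    usd = (avg_input_tokens * input_price_per_mtok
           + avg_output_tokens * output_price_per_mtok) / 1_000_000
    return usd * 100  # dollars -> cents

# Example with made-up numbers: $0.60/M tokens in and out,
# 1,000 prompt tokens and 500 answer tokens per exchange.
print(round(cents_per_answer(0.60, 0.60, 1_000, 500), 3))  # → 0.09
```

Averaging this quantity over a benchmark's question set yields a single per-answer price of the kind shown in the leaderboard.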
Performance vs. Industry Average
Context Window
With a context window of 128k tokens, Llama 3.3 70B Instruct falls below the industry average of 347k tokens.
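A practical consequence of the 128k-token limit is that long prompts must be checked before sending. The sketch below uses the common rough heuristic of about 4 characters per token; an exact count would require the model's actual tokenizer, which is not assumed here.

```python
# Rough sketch: check whether a prompt is likely to fit in
# Llama 3.3 70B Instruct's 128,000-token context window.
# The ~4 characters-per-token ratio is a crude heuristic, not
# the model's real tokenizer.

CONTEXT_WINDOW = 128_000  # tokens, from the model card

def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, reserved_for_output: int = 1_024) -> bool:
    """True if the estimated prompt tokens, plus tokens reserved
    for the model's answer, stay within the context window."""
    return estimate_tokens(prompt) + reserved_for_output <= CONTEXT_WINDOW

print(fits_in_context("Hello, world!"))  # → True
```

A prompt of roughly half a million characters would already exceed the window under this heuristic, so oversized inputs need truncation or chunking before the request is made.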