Llama 3.3 Nemotron Super 49B v1.5
Llama 3.3 Nemotron Super 49B is a reasoning model derived from Llama 3.3 70B. It is post-trained for agentic workflows, RAG, and tool calling.
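Tool calling for models like this is typically exercised through an OpenAI-compatible chat-completions payload. The sketch below builds such a request; the model ID, the payload shape, and the `get_weather` tool are illustrative assumptions, not details taken from this page.

```python
import json

# Hypothetical catalog name for this model on an OpenAI-compatible endpoint;
# verify the exact ID with your provider before use.
MODEL_ID = "nvidia/llama-3.3-nemotron-super-49b-v1.5"

def build_tool_call_request(user_prompt: str) -> dict:
    """Build a chat-completions payload that exposes one tool to the model."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": user_prompt}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical tool, for illustration only
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        # Let the model decide whether to call the tool or answer directly.
        "tool_choice": "auto",
    }

payload = build_tool_call_request("What's the weather in Oslo?")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the provider's chat-completions endpoint; the response either answers directly or returns a `tool_calls` entry to execute and feed back.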
Leaderboards
QUALITY
Average score combining domain-specific Autobench scores; higher is better.
[Chart data: 19 model scores, ranging from 3.35 to 4.49; model labels not recoverable.]
PRICE
USD cents per average answer; lower is better.
[Chart data: 19 model prices, ranging from 0.01 to 15.44 cents; model labels not recoverable.]
LATENCY
Average latency in seconds; lower is better.
[Chart data: 28 model latencies, ranging from 6.62s to 151.90s; model labels not recoverable.]
Performance vs. Industry Average
Intelligence
Llama 3.3 Nemotron Super 49B v1.5 is on par with the industry average (4.1), with an intelligence score of 4.1.
Price
Llama 3.3 Nemotron Super 49B v1.5 is cheaper than the industry average ($1.68 per 1M tokens), with a price of $0.13 per 1M tokens.
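To make the $0.13-per-1M-token rate concrete, here is a minimal cost sketch. It assumes a single blended rate covering input and output tokens, which is a simplification; real pricing usually splits the two.

```python
# Assumed blended rate from this page: $0.13 per 1M tokens (input + output).
PRICE_PER_M_TOKENS = 0.13

def request_cost_usd(total_tokens: int) -> float:
    """Estimated USD cost of a request at the blended per-token rate."""
    return total_tokens / 1_000_000 * PRICE_PER_M_TOKENS

# A 10,000-token request costs a fraction of a cent at this rate.
print(request_cost_usd(10_000))
```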
Latency
Llama 3.3 Nemotron Super 49B v1.5 has a lower average latency than the industry average (53.05s), at 43.88s.
P99 Latency
Llama 3.3 Nemotron Super 49B v1.5 has a lower P99 time to first token (TTFT) than the industry average (187.12s), at 128.06s.
Context Window
Llama 3.3 Nemotron Super 49B v1.5 has a smaller context window than the industry average (339k tokens), with a context window of 131k tokens.
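A quick way to sanity-check that a prompt fits in the 131k-token window is a character-count heuristic. The 4-characters-per-token rule of thumb below is an assumption for English text, not the model's actual tokenizer; use the real tokenizer for exact counts.

```python
# Context window stated on this page: 131k tokens.
CONTEXT_WINDOW = 131_000

def fits_in_context(prompt: str, max_output_tokens: int = 4_096) -> bool:
    """Rough check that prompt + reserved output budget fits the window."""
    estimated_prompt_tokens = len(prompt) // 4  # crude heuristic, not a tokenizer
    return estimated_prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

print(fits_in_context("Summarize this paragraph."))
```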