Mistral Large 2512

Mistral Large 3 is a massive open-weight, fine-grained mixture-of-experts (MoE) model with 675B total parameters (41B active per token). It offers top-tier reliability for production-grade assistants and long-context code comprehension.

Parameters
675B
Context
262,144 tokens
Released
Jan 12, 2025
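The headline MoE figures above can be put in perspective with a little arithmetic. A minimal sketch using the numbers from this page; the FP8 footprint is a rough assumption (1 byte per parameter, ignoring activations and KV cache):

```python
# Figures from this page
total_params = 675e9   # 675B total parameters
active_params = 41e9   # 41B parameters active per token (MoE routing)

# Only a small fraction of the weights participate in each forward pass
active_fraction = active_params / total_params
print(f"Active per token: {active_fraction:.1%}")  # → 6.1%

# Rough memory footprint of the full weights at FP8 (assumption: 1 byte/param)
fp8_gib = total_params / 2**30
print(f"FP8 weights: ~{fp8_gib:.0f} GiB")  # → ~629 GiB
```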

Leaderboards

Average score combining domain-specific AutoBench scores; higher is better

Performance vs. Industry Average

Intelligence

Mistral Large 2512 scores below the average intelligence score of 2.9, with a score of 2.6.

Price

Mistral Large 2512 is cheaper than the average of $0.75 per 1M tokens, with a price of $0.10 per 1M tokens.
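At that rate, even a maximally long prompt is inexpensive. A quick sketch, assuming the listed $0.10/1M-token price applies uniformly to all tokens in the request:

```python
# Figures from this page
price_per_million = 0.10   # USD per 1M tokens
context_tokens = 262_144   # full context window

# Cost of a single request that fills the entire context window
cost = context_tokens / 1_000_000 * price_per_million
print(f"${cost:.4f}")  # → $0.0262
```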

Latency

Mistral Large 2512 has a lower average latency than the average of 44.25s, with an average latency of 9.27s.

P99 Latency

Mistral Large 2512 has a lower P99 latency than the average of 126.46s, with a P99 time to first token (TTFT) of 21.55s.

Context Window

Mistral Large 2512 has a smaller context window than the average of 406k tokens, with a context window of 262k tokens.

Mistral Large 2512 - AutoBench