
Mistral Large 2512

Mistral Large 3 2512 is Mistral's flagship mixture-of-experts (MoE) model, with 675B total parameters and 41B active. It offers top-tier performance in reasoning and coding.

Thinking Mode
Parameters
675B
Context
262,144 tokens
Released
Jan 12, 2025

Leaderboards

Performance vs. Industry Average

Intelligence

Mistral Large 2512 scores below the industry average for intelligence (4.1), with an intelligence score of 3.9.

Price

Mistral Large 2512 is cheaper than the industry average ($4.91 per 1M tokens), with a price of $0.51 per 1M tokens.
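
As a rough illustration of per-token pricing, here is a minimal sketch that estimates the cost of one request from its token counts. It assumes a single blended rate of $0.51 per 1M tokens as quoted above; real pricing typically splits input and output rates, and the token counts in the example are placeholders.

```python
# Rough cost estimate, assuming a single blended rate of $0.51 per 1M tokens.
# Real pricing may charge different rates for input and output tokens.
BLENDED_PRICE_PER_M_TOKENS = 0.51  # USD per 1,000,000 tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  price_per_m: float = BLENDED_PRICE_PER_M_TOKENS) -> float:
    """Return the approximate USD cost of one request."""
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1_000_000 * price_per_m

# Example: a 3,000-token prompt with a 1,000-token completion.
print(f"${estimate_cost(3_000, 1_000):.6f}")  # -> $0.002040
```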

Latency

Mistral Large 2512 has a lower average latency than the industry average (120.77s), with an average latency of 89.96s.

P99 Latency

Mistral Large 2512 has a lower P99 latency than the industry average (354.03s), taking 198.13s to deliver the first token at P99 (time to first token, TTFT).
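
TTFT is simply the delay between sending a request and receiving the first streamed chunk of the response. The sketch below measures it with the standard `requests` streaming interface; the endpoint URL, payload shape, and model identifier are placeholders, not a documented API, so adapt them to whatever provider you actually call.

```python
import time
import requests  # pip install requests

# Placeholder endpoint and payload; adjust to your provider's real API.
URL = "https://example.com/v1/chat/completions"
PAYLOAD = {"model": "mistral-large-2512",  # placeholder model identifier
           "stream": True,
           "messages": [{"role": "user", "content": "Hello"}]}

def measure_ttft(url: str, payload: dict) -> float:
    """Return seconds from request start to the first streamed chunk."""
    start = time.monotonic()
    with requests.post(url, json=payload, stream=True, timeout=600) as resp:
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size=None):
            if chunk:  # first non-empty chunk approximates the first token
                return time.monotonic() - start
    raise RuntimeError("stream ended before any data arrived")

# print(f"TTFT: {measure_ttft(URL, PAYLOAD):.2f}s")
```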

Context Window

Mistral Large 2512 has a smaller context window than the industry average (347k tokens), with a context window of 262,144 tokens (262k).
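
To check whether a prompt fits inside the 262,144-token window you need a token count. The sketch below uses a crude characters-per-token heuristic (roughly 4 characters per token for English text) rather than the model's real tokenizer, so the ratio and the reserved output budget are assumptions.

```python
CONTEXT_WINDOW = 262_144  # tokens
CHARS_PER_TOKEN = 4       # rough heuristic for English text; the real
                          # tokenizer may differ noticeably

def fits_in_context(prompt: str, reserve_for_output: int = 4_096) -> bool:
    """Crude check: does the prompt plus a reserved output budget fit?"""
    estimated_tokens = len(prompt) / CHARS_PER_TOKEN
    return estimated_tokens + reserve_for_output <= CONTEXT_WINDOW

print(fits_in_context("Summarize this document: " + "x" * 1_100_000))
# -> False: ~275k estimated prompt tokens plus the 4k reserve exceeds 262,144
```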

Mistral Large 2512 - AutoBench