Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)


Mistral

Mistral Large 3

Frontier
HuggingFace
Downloads: 18.9K · Likes: 254 · Released: Nov 2024 · Context: 256K tokens · License: Mistral Research License · Quality: 5 (Entry)

Get started

Copy & paste to run locally:
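The page's copy-paste snippet did not survive extraction. A typical local invocation via Ollama might look like the following; the model tag `mistral-large` is a placeholder assumption, so check the Ollama model library for the actual tag before running.

```shell
# Hypothetical example: run the model locally with Ollama.
# "mistral-large" is a placeholder tag, not confirmed by this page;
# substitute the real tag from the Ollama model library.
ollama run mistral-large
```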

Quick specs

Parameters: 675B (41B active)
Architecture: MoE (mixture of experts)
Context: 256K tokens
Modality: text + vision
Min RAM: 263.3 GB
Rec. RAM: 411.8 GB (Q4_K_M)
License: Mistral Research License
Family: Mistral Large
✓ Code  ✓ Chat  ✓ Reasoning
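The "675B (41B active)" figure is worth unpacking: in a mixture-of-experts model, all expert weights must be resident in memory, but only the routed (active) parameters participate in each token's forward pass. A minimal sketch of that distinction, assuming the rule-of-thumb estimates of bytes-per-parameter for weights and ~2 FLOPs per active parameter per token (not the site's actual estimator):

```python
# Sketch: memory scales with TOTAL parameters, per-token compute with
# ACTIVE parameters. Both formulas are rough rules of thumb.

def weight_memory_gb(total_params: float, bytes_per_param: float) -> float:
    """Memory needed just for model weights, in decimal GB."""
    return total_params * bytes_per_param / 1e9

def flops_per_token(active_params: float) -> float:
    """Rough forward-pass FLOPs per token: ~2 FLOPs per active parameter."""
    return 2 * active_params

TOTAL = 675e9   # total parameters (from the spec table above)
ACTIVE = 41e9   # active parameters per token

print(weight_memory_gb(TOTAL, 2.0))  # F16 (2 bytes/param) -> 1350.0 GB
print(flops_per_token(ACTIVE))       # -> 8.2e10 FLOPs per token
```

This is why the RAM requirements above track the full 675B even though each token only "uses" 41B parameters of compute.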

About this model

Mistral-Large-Instruct-2411 is an advanced dense large language model (LLM) with 123B parameters and state-of-the-art reasoning, knowledge, and coding capabilities, extending Mistral-Large-Instruct-2407 with improved long-context handling, function calling, and system prompts.

  • Multilingual by design: dozens of languages supported, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean,...
  • Proficient in coding: trained on 80+ programming languages such as Python, Java, C, C++, JavaScript, and Bash. Also trained on more specific languages...
  • Agent-centric: best-in-class agentic capabilities with native function calling and JSON output
  • Advanced reasoning: state-of-the-art mathematical and reasoning capabilities
  • Mistral Research License: allows usage and modification for non-commercial purposes
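To make the "native function calling and JSON output" bullet concrete, here is an illustrative tool definition in the OpenAI-style `tools` schema that Mistral's chat API broadly follows; the function name and parameters are hypothetical, and the exact wire format should be confirmed against Mistral's documentation:

```python
# Illustrative tool definition (OpenAI-style "tools" schema).
# The function "get_weather" and its parameters are hypothetical examples,
# not something defined by this page or by Mistral.
import json

get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
            },
            "required": ["city"],
        },
    },
}

# The schema serializes to plain JSON, which is what gets sent to the model.
print(json.dumps(get_weather_tool, indent=2))
```

A model with native function calling is trained to respond with a JSON object matching `parameters` (e.g. `{"city": "Ibiza"}`) when it decides the tool should be invoked.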


Quantization options

VRAM estimates by quant level

No hardware detected — fit column shows raw VRAM estimates

Quant     Bits   VRAM        Quality     Fit
Q2_K      2      263.3 GB    Low         —
Q3_K_S    3      330.8 GB    Low         —
NVFP4     4      378.0 GB    Medium      —
Q4_K_M    4      411.8 GB    Medium      —
Q5_K_M    5      486.0 GB    High        —
Q6_K      6      553.5 GB    High        —
Q8_0      8      722.3 GB    Very High   —
F16       16     1383.7 GB   Maximum     —
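The VRAM column tracks a simple rule: weight size ≈ total parameters × effective bits per weight ÷ 8. A rough reconstruction under assumed effective bits-per-weight values (K-quants store per-block scale metadata, so effective bpw exceeds the nominal bit count; this is an approximation, not the site's exact formula):

```python
# Rough reconstruction of the VRAM column. EFFECTIVE_BPW values are
# approximate llama.cpp effective bits per weight, an assumption on my
# part; the table's numbers also include some runtime overhead.

EFFECTIVE_BPW = {
    "Q2_K": 2.6, "Q3_K_S": 3.4, "Q4_K_M": 4.85,
    "Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5, "F16": 16.0,
}

def weights_gb(total_params: float, quant: str) -> float:
    """Weight-only size in decimal GB for a given quant level."""
    return total_params * EFFECTIVE_BPW[quant] / 8 / 1e9

for q in ("Q4_K_M", "Q8_0", "F16"):
    print(q, round(weights_gb(675e9, q), 1))
# Q4_K_M -> 409.2 GB (table: 411.8), Q8_0 -> 717.2 GB (table: 722.3),
# F16 -> 1350.0 GB (table: 1383.7) — within a few percent once overhead
# is added.
```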

Hardware compatibility

Fit estimates across all hardware


Memory breakdown

Reference: NVIDIA A10 24GB

Weights: 411.8 GB
KV Cache: 6.4 GB
Runtime: 2.4 GB
Headroom: 2.4 GB
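The KV-cache line follows the standard transformer estimate: two cached tensors (K and V) per layer, each `n_kv_heads × head_dim` wide, per token of context. A sketch of that formula with placeholder architecture numbers, since the page does not list this model's layer or head counts:

```python
# Standard KV-cache size estimate. The layer/head values in the example
# call below are HYPOTHETICAL placeholders, not this model's real
# architecture.

def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                seq_len: int, bytes_per_elem: int = 2) -> float:
    """KV-cache size in decimal GB: 2 tensors (K, V) per layer per token."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem / 1e9

# Example: 80 layers, 8 KV heads (GQA), head_dim 128, 16K tokens, fp16
print(kv_cache_gb(80, 8, 128, 16384))  # -> 5.36870912, same order as the 6.4 GB above
```

Grouped-query attention (few KV heads shared across many query heads) is what keeps this number in single-digit GB despite the very large context window.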