Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)


Mistral AI

Pixtral Large 124B

Frontier
HuggingFace
3.0K downloads · 431 likes · Released Nov 2024 · Context: 131K tokens · License: Mistral Research · Quality: 5 (Entry)

Get started

— copy & paste to run locally
HuggingFace
huggingface-cli download pixtral-large-124b

Quick specs

Parameters: 124B
Architecture: dense
Context: 131K tokens
Modality: text + vision
Min RAM: 48.4 GB
Rec. RAM: 75.6 GB (Q4_K_M)
License: Mistral Research
Family: Pixtral
✓ Chat  ✓ Reasoning
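The Min/Rec RAM figures above imply a simple fit test: required memory plus a little headroom must not exceed a card's VRAM. A minimal sketch in Python — the 75.6 GB figure is from the spec card, but the 2 GB default headroom is an assumption, not the site's exact rule:

```python
# Minimal fit check: does a model's recommended RAM fit in a card's VRAM?
# The headroom default is an assumption, not the site's exact rule.
def fits(required_gb: float, vram_gb: float, headroom_gb: float = 2.0) -> bool:
    return required_gb + headroom_gb <= vram_gb

print(fits(75.6, 180.0))  # NVIDIA B200 180GB: fits
print(fits(75.6, 24.0))   # NVIDIA A10 24GB: does not fit
```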

About this model

Pixtral-Large-Instruct-2411 is a 124B multimodal model built on top of Mistral Large 2, i.e., Mistral-Large-Instruct-2407. Pixtral Large is the second model in our multimodal family and demonstrates frontier-level image understanding. In particular, the model is able to understand documents, charts, and natural images, while maintaining the leading text-only understanding of Mistral Large 2.

  • Frontier-class multimodal performance
  • State-of-the-art on MathVista, DocVQA, VQAv2
  • Extends Mistral Large 2 without compromising text performance
  • 123B multimodal decoder, 1B parameter vision encoder
  • 128K context window: fits a minimum of 30 high-resolution images


Quick picks

  • Best budget (AMD, Grade C): AMD Instinct MI350X 288GB, ~$8,000 — 77 tok/s
  • Best overall (NVIDIA, Grade B): NVIDIA B200 180GB, ~$30,000 — 89 tok/s

Best hardware

Top picks for Pixtral Large 124B

  • NVIDIA B200 180GB (Grade B), 180 GB
  • NVIDIA H200 141GB (Grade B), 141 GB
  • NVIDIA H200 PCIe 141GB (Grade B), 141 GB
  • NVIDIA H100 NVL 188GB (Grade C), 188 GB
  • NVIDIA B100 192GB (Grade C), 192 GB

Quantization options

VRAM estimates by quant level

No hardware detected — fit column shows raw VRAM estimates

Quant     Bits   VRAM       Quality     Fit
Q2_K      2      48.4 GB    Low         —
Q3_K_S    3      60.8 GB    Low         —
NVFP4     4      69.4 GB    Medium      —
Q4_K_M    4      75.6 GB    Medium      —
Q5_K_M    5      89.3 GB    High        —
Q6_K      6      101.7 GB   High        —
Q8_0      8      132.7 GB   Very High   —
F16       16     254.2 GB   Maximum     —

Hardware compatibility

Fit estimates across all hardware


Memory breakdown

Reference: NVIDIA A10 24GB

Weights: 75.6 GB
KV Cache: 19.4 GB
Runtime: 0.9 GB
Headroom: 2.4 GB
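The four components above sum to roughly 98.3 GB of total footprint. The KV-cache line is the part that scales with context length; the standard estimate is 2 (K and V) × layers × tokens × KV heads × head dim × bytes per element. A sketch using illustrative placeholder dimensions — these are not Pixtral Large's published internals:

```python
# Generic transformer KV-cache estimate. The layer/head/dim values in
# the call below are illustrative placeholders, NOT Pixtral Large's
# actual architecture config.
def kv_cache_gb(layers: int, tokens: int, kv_heads: int,
                head_dim: int, bytes_per: int = 2) -> float:
    return 2 * layers * tokens * kv_heads * head_dim * bytes_per / 1e9

print(round(kv_cache_gb(layers=64, tokens=32768, kv_heads=8, head_dim=128), 1))
```

Halving the context halves this term, which is why shorter-context runs fit on much smaller cards than the full 131K window suggests.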