
All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)


Can MacBook Pro M3 Max 48GB run Devstral Small 2 24B Instruct?

Grade: C (Usable)

Runs well using Q4_K_M in Ollama.

Fit status: Runs well
Decode: 16.4 tok/s
TTFT: 11810 ms
Safe context: 22K tokens
Memory: 24.8 GB used / 34.6 GB usable

Memory breakdown

Weights:   14.6 GB
KV Cache:   3.8 GB
Runtime:    1.2 GB
Headroom:   5.2 GB
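The breakdown above follows from simple arithmetic. A minimal sketch, assuming Q4_K_M averages roughly 4.85 bits per weight and a Mistral-Small-style layout of 40 layers, 8 KV heads, 128-dim heads, and an FP16 cache — all assumptions for illustration; the actual values depend on the specific model file:

```python
def weight_gb(params_b: float, bits_per_weight: float) -> float:
    """Weight memory: parameter count x average bits per weight, in GB."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

def kv_cache_gb(tokens: int, layers: int, kv_heads: int,
                head_dim: int, bytes_per_elem: int = 2) -> float:
    """KV cache: 2 (K and V) x layers x kv_heads x head_dim x bytes, per token."""
    per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
    return tokens * per_token / 1e9

# 24B parameters at an assumed ~4.85 bpw for Q4_K_M
print(weight_gb(24, 4.85))                 # close to the 14.6 GB above
# 22K-token safe context with the assumed architecture
print(kv_cache_gb(22_000, 40, 8, 128))     # close to the 3.8 GB above
```

Both figures land near the page's estimates, which suggests the calculator uses a comparable bits-per-weight and per-token-cache model.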

Performance by workload

Workload        Grade  Fit        Decode       TTFT       Context
Agentic Coding  C      Tight fit  16.4 tok/s   17178 ms   39K
Chat            C      Runs well  16.4 tok/s    6442 ms   12K
Coding          C      Runs well  16.4 tok/s   11810 ms   22K
RAG             C      Tight fit  16.4 tok/s   21472 ms   39K
Reasoning       C      Runs well  16.4 tok/s   13957 ms   22K
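Note that decode speed is constant across workloads while TTFT scales with the context each workload uses: TTFT is dominated by prompt prefill. A minimal sketch of that relationship, assuming an effective prefill rate of about 1860 tok/s (a value back-derived from the Coding row, not a published spec) and ignoring fixed startup overhead:

```python
def ttft_ms(prompt_tokens: int, prefill_tok_per_s: float) -> float:
    """Time to first token: prompt length / prefill throughput, in ms."""
    return prompt_tokens / prefill_tok_per_s * 1000

# A ~22K-token prompt at the assumed ~1860 tok/s prefill rate
# lands near the Coding row's 11810 ms TTFT.
print(ttft_ms(22_000, 1860))
```

Doubling the context roughly doubles the wait before the first token, which is why the 39K-context workloads (Agentic Coding, RAG) show the longest TTFTs.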

Quantization options

How Devstral Small 2 24B Instruct (24B params) fits at each quantization level on MacBook Pro M3 Max 48GB (34.6 GB usable).

Quant    Bits  VRAM      Quality    Fit
Q2_K     2     9.4 GB    Low        D (35)
Q3_K_S   3     11.8 GB   Low        D (37)
NVFP4    4     13.4 GB   Medium     D (38)
Q4_K_M   4     14.6 GB   Medium     D (39)
Q5_K_M   5     17.3 GB   High       C (40)
Q6_K     6     19.7 GB   High       C (42)
Q8_0 *   8     25.7 GB   Very High  C (44)
F16      16    49.2 GB   Maximum    F (0)

* Best for your GPU
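The hard part of the Fit column is just a headroom check: estimated weights plus KV cache plus runtime overhead must stay within the usable memory budget. A minimal sketch using the figures from this page (the letter-grade thresholds themselves are not published, so this only checks whether a quant fits at all):

```python
def fits(weights_gb: float, kv_gb: float, runtime_gb: float,
         usable_gb: float) -> bool:
    """True when the estimated total leaves non-negative headroom."""
    return weights_gb + kv_gb + runtime_gb <= usable_gb

# Q4_K_M, using this page's breakdown vs the 34.6 GB usable budget
print(fits(14.6, 3.8, 1.2, 34.6))  # True
# F16 weights alone (49.2 GB) already exceed the budget
print(fits(49.2, 0.0, 0.0, 34.6))  # False
```

This matches the table: every quant through Q8_0 fits, while F16 scores F because its weights alone exceed the 34.6 GB usable on this machine.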

Get started

Ollama:
  ollama run devstral-small-2-24b

Hugging Face:
  huggingface-cli download devstral-small-2-24b

Upgrade options

Hardware that runs Devstral Small 2 24B Instruct well

NVIDIA A100 40GB (Budget pick)
Grade C, 89.2 tok/s decode
~$10,000 MSRP

NVIDIA RTX PRO 5000 Blackwell 48GB (Biggest leap)
Grade C, 77.1 tok/s decode

See all results for MacBook Pro M3 Max 48GB · See all hardware for Devstral Small 2 24B Instruct