Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)


Can NVIDIA H100 80GB run Devstral Small 2 24B Instruct?

Grade C — Usable (Runs well)
Using Q4_K_M in Ollama

Fit status: Runs well
Decode: 192.2 tok/s
TTFT: 1007 ms
Safe context: 46K
Memory: 27.6 GB / 80.0 GB

Memory breakdown

Weights: 14.6 GB
KV Cache: 3.8 GB
Runtime: 1.2 GB
Headroom: 8.0 GB
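The four components above sum to the 27.6 GB figure on the card, and the KV-cache line implies a per-token cost at the 46K safe context. A minimal sketch of that arithmetic — the per-token figure is back-derived from this breakdown, not a published spec:

```python
def total_vram_gb(weights, kv_cache, runtime, headroom):
    """Sum the components the breakdown reports, in GB."""
    return weights + kv_cache + runtime + headroom

def kv_kb_per_token(kv_cache_gb, context_tokens):
    """KV-cache cost per token of context, in KB (decimal units)."""
    return kv_cache_gb * 1e6 / context_tokens

total = total_vram_gb(14.6, 3.8, 1.2, 8.0)
print(f"total: {total:.1f} GB")                             # total: 27.6 GB
print(f"KV/token: {kv_kb_per_token(3.8, 46_000):.1f} KB")   # KV/token: 82.6 KB
```

The per-token KV cost is what makes the safe context differ per workload: doubling the usable context roughly doubles the KV-cache share of memory.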

Performance by workload

Workload        Grade  Fit        Decode       TTFT     Context
Agentic Coding  C      Runs well  192.2 tok/s  1465 ms  82K
Chat            C      Runs well  192.2 tok/s   549 ms  25K
Coding          C      Runs well  192.2 tok/s  1007 ms  46K
RAG             C      Runs well  192.2 tok/s  1831 ms  82K
Reasoning       C      Runs well  192.2 tok/s  1190 ms  46K
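Decode speed is constant across workloads; only TTFT and context budget differ. To turn these numbers into wall-clock time for one response, add TTFT to generated-tokens ÷ decode speed. A sketch using the Chat row (the 500-token response length is an arbitrary example, not from the table):

```python
def response_time_s(ttft_ms, decode_tok_s, output_tokens):
    """Wall-clock time for one response: prefill wait plus decode time."""
    return ttft_ms / 1000 + output_tokens / decode_tok_s

# Chat row above: 549 ms TTFT, 192.2 tok/s decode, hypothetical 500-token reply.
t = response_time_s(549, 192.2, 500)
print(f"~{t:.1f} s")  # roughly 3 seconds end to end
```

For long-context workloads like RAG, TTFT dominates short replies, which is why it is reported separately from decode speed.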

Quantization options

How Devstral Small 2 24B Instruct (24B params) fits at each quantization level on NVIDIA H100 80GB (80.0 GB usable).

Quant    Bits  VRAM     Quality    Fit
Q2_K     2     9.4 GB   Low        D (32)
Q3_K_S   3     11.8 GB  Low        D (33)
NVFP4    4     13.4 GB  Medium     D (33)
Q4_K_M   4     14.6 GB  Medium     D (33)
Q5_K_M   5     17.3 GB  High       D (34)
Q6_K     6     19.7 GB  High       D (35)
Q8_0     8     25.7 GB  Very High  D (36)
F16 *    16    49.2 GB  Maximum    C (43)

* Best for your GPU
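The VRAM column tracks roughly params × bits ÷ 8, plus overhead: quantized GGUF files carry scale factors and keep some tensors at higher precision, so the effective bits per weight run above the nominal width. A sketch back-deriving effective bpw from the table (the 24B parameter count is from the heading; the overhead interpretation is an assumption about how the site models file size):

```python
def effective_bits_per_weight(vram_gb, params_b):
    """Back out effective bits/weight from weight VRAM and parameter count."""
    return vram_gb * 8 / params_b

# VRAM figures from the table above, for a 24B-parameter model.
for quant, vram in [("Q2_K", 9.4), ("Q4_K_M", 14.6), ("Q8_0", 25.7), ("F16", 49.2)]:
    print(f"{quant}: {effective_bits_per_weight(vram, 24):.2f} bpw")
# Q2_K ~3.13, Q4_K_M ~4.87, Q8_0 ~8.57, F16 ~16.40 — each above its nominal width
```

This is why Q4_K_M (14.6 GB) is larger than NVFP4 (13.4 GB) despite both being listed as 4-bit.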

Get started

Ollama:
ollama run devstral-small-2-24b

Hugging Face:
huggingface-cli download devstral-small-2-24b