Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)



Can H100 NVL 188GB run Devstral Small 2 24B Instruct?

Grade: C (Usable). Runs well using Q4_K_M in Ollama.

Capabilities:

Fit status: Runs well
Decode: 431.6 tok/s
TTFT: 449 ms
Safe context: 78K
Memory: 38.4 GB / 188.0 GB

Memory breakdown

Weights: 14.6 GB
KV cache: 3.8 GB
Runtime: 1.2 GB
Headroom: 18.8 GB
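The memory figure reported above appears to be the simple sum of the breakdown components. A minimal sketch reproducing it; the component values come from this page, and the assumption that "Memory" is their plain sum (weights plus KV cache plus runtime overhead plus a reserved headroom margin) is ours, not a documented formula:

```python
# Hedged sketch: reconstructing the reported memory figure by summing
# the breakdown components shown on the page. Treating "Memory" as the
# plain sum of these four parts is an assumption.
weights_gb = 14.6   # Q4_K_M model weights
kv_cache_gb = 3.8   # KV cache at the 78K "safe context"
runtime_gb = 1.2    # runtime / framework overhead
headroom_gb = 18.8  # safety margin reserved by the estimator

total_gb = weights_gb + kv_cache_gb + runtime_gb + headroom_gb
print(f"{total_gb:.1f} GB / 188.0 GB")  # 38.4 GB / 188.0 GB
```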

Performance by workload

Workload        Grade  Fit        Decode       TTFT    Context
Agentic Coding  C      Runs well  431.6 tok/s  653 ms  143K
Chat            C      Runs well  431.6 tok/s  350 ms  41K
Coding          C      Runs well  431.6 tok/s  449 ms  78K
RAG             C      Runs well  431.6 tok/s  816 ms  143K
Reasoning       C      Runs well  431.6 tok/s  530 ms  78K
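TTFT differs by workload while decode speed stays constant, because time to first token is dominated by prefilling the prompt, and each workload assumes a different prompt length. A minimal latency model; the prefill rate and prompt size below are purely illustrative numbers, not values from this page's (non-public) estimator:

```python
# Hedged sketch: the common TTFT model, TTFT ~ prompt_tokens / prefill_rate.
# Both arguments in the example call are made-up illustrative values,
# not measured H100 NVL figures.
def ttft_ms(prompt_tokens: int, prefill_tok_per_s: float) -> float:
    """Estimate time to first token in milliseconds."""
    return prompt_tokens / prefill_tok_per_s * 1000.0

print(ttft_ms(8_000, 100_000.0))  # 80.0 (ms) for an 8K prompt at 100K tok/s prefill
```

This is why RAG, which assumes the longest prompts, shows the highest TTFT even though all workloads decode at the same rate.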

Quantization options

How Devstral Small 2 24B Instruct (24B params) fits at each quantization level on H100 NVL 188GB (188.0 GB usable).

Quant    Bits  VRAM     Quality    Fit
Q2_K     2     9.4 GB   Low        D31
Q3_K_S   3     11.8 GB  Low        D31
NVFP4    4     13.4 GB  Medium     D31
Q4_K_M   4     14.6 GB  Medium     D31
Q5_K_M   5     17.3 GB  High      D31
Q6_K     6     19.7 GB  High      D32
Q8_0     8     25.7 GB  Very High  D32
F16      16    49.2 GB  Maximum    D35   (Best for your GPU)
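The VRAM column grows faster than the nominal bit width suggests: k-quants store block scales and keep some tensors at higher precision, so the effective bits per weight exceed the label. A rough reconstruction; the effective-bpw value below is an assumption back-solved from the table, not a published constant:

```python
# Hedged sketch: weight VRAM ~ params * effective_bits_per_weight / 8.
# The 4.85 effective-bpw figure for Q4_K_M is an assumption consistent
# with the table above, not an official specification.
PARAMS_B = 24  # Devstral Small 2: 24B parameters

def weight_vram_gb(params_b: float, effective_bpw: float) -> float:
    # Treats 1B params at 8 bits/weight as ~1 GB.
    return params_b * effective_bpw / 8

print(f"{weight_vram_gb(PARAMS_B, 4.85):.2f} GB")  # ~14.55 GB, close to the table's 14.6 GB
```

The same formula with an effective bpw of roughly 16.4 reproduces the 49.2 GB F16 row, the extra fraction covering non-quantized tensors and metadata.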

Get started

Ollama
ollama run devstral-small-2-24b
HuggingFace
huggingface-cli download devstral-small-2-24b
See all results for H100 NVL 188GB
See all hardware for Devstral Small 2 24B Instruct