Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)


Can H100 NVL 188GB run Mixtral 8x22B?

Grade: C (Usable)

Runs well using Q4_K_M in llama.cpp.

Capabilities:
  • Fit status: Runs well
  • Decode: 140.9 tok/s
  • TTFT: 1374 ms
  • Safe context: 27K tokens
  • Memory: 111.8 GB / 188.0 GB

Memory breakdown:
  • Weights: 86.0 GB
  • KV Cache: 6.1 GB
  • Runtime: 0.9 GB
  • Headroom: 18.8 GB
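The KV Cache line above can be sanity-checked with the standard transformer KV-cache formula. This is a sketch, not the site's actual model: the architecture numbers (56 layers, 8 KV heads via grouped-query attention, head dimension 128) come from Mixtral 8x22B's public config, and fp16 cache storage is an assumption.

```python
# Sketch: estimate KV-cache size for Mixtral 8x22B at a given context length.
# Assumed architecture (from the public model config): 56 layers, 8 KV heads
# (GQA), head_dim 128; fp16 (2 bytes/element) cache is an assumption.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, context, bytes_per_elem=2):
    # K and V each store n_kv_heads * head_dim elements per layer, per token
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * context

gb = kv_cache_bytes(n_layers=56, n_kv_heads=8, head_dim=128, context=27_000) / 1e9
print(f"{gb:.1f} GB")  # ≈ 6.2 GB, close to the 6.1 GB shown above
```

The near-match at the 27K "safe context" suggests the calculator assumes an fp16 (unquantized) KV cache.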

Performance by workload

Workload        Grade  Fit        Decode       TTFT     Context
Agentic Coding  B      Runs well  140.9 tok/s  1998 ms  51K
Chat            C      Runs well  140.9 tok/s  749 ms   14K
Coding          C      Runs well  140.9 tok/s  1374 ms  27K
RAG             B      Runs well  140.9 tok/s  2498 ms  51K
Reasoning       C      Runs well  140.9 tok/s  1624 ms  27K
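Decode speed is identical across workloads because single-stream decoding is memory-bandwidth-bound: each generated token must stream the model's active weights from VRAM. A rough roofline sketch reproduces the figure; every constant here is an assumption, not the site's formula: ~39B active parameters per token for the 8x22B MoE, ~4.85 effective bits/weight for Q4_K_M, ~3.9 TB/s HBM bandwidth for one H100 NVL GPU, and ~85% achievable efficiency.

```python
# Sketch: bandwidth-bound decode estimate (roofline style). All constants
# are assumptions: ~39B active MoE params, ~4.85 effective bits/weight for
# Q4_K_M, 3.9 TB/s HBM bandwidth (one H100 NVL GPU), ~85% efficiency.
active_params = 39e9        # params touched per token (2 of 8 experts + shared)
bits_per_weight = 4.85      # typical llama.cpp Q4_K_M effective bpw
bandwidth_gbps = 3900.0     # GB/s
efficiency = 0.85

bytes_per_token_gb = active_params * bits_per_weight / 8 / 1e9  # GB read per token
tok_s = efficiency * bandwidth_gbps / bytes_per_token_gb
print(f"{tok_s:.0f} tok/s")  # ≈ 140 tok/s, near the 140.9 tok/s above
```

TTFT, by contrast, scales with prompt length (compute-bound prefill), which is why it varies with each workload's assumed context.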

Quantization options

How Mixtral 8x22B (141B params) fits at each quantization level on H100 NVL 188GB (188.0 GB usable).

Quant     Bits  VRAM      Quality    Fit
Q2_K      2     55.0 GB   Low        D (36)
Q3_K_S    3     69.1 GB   Low        D (37)
NVFP4     4     79.0 GB   Medium     D (39)
Q4_K_M    4     86.0 GB   Medium     D (39)
Q5_K_M    5     101.5 GB  High       C (41)
Q6_K      6     115.6 GB  High       C (43)
Q8_0 *    8     150.9 GB  Very High  C (45)
F16       16    289.0 GB  Maximum    F (0)

* Best for your GPU
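The VRAM column tracks the model's effective bits per weight rather than the nominal bit width (k-quants carry scale/min metadata, so Q4_K_M stores closer to 4.85 bits/weight than 4). A sketch under that assumption: the bpw values below are typical llama.cpp figures, not the site's, and per-tensor overheads are ignored.

```python
# Sketch: weight-memory estimate from effective bits per weight.
# The bpw values are typical llama.cpp figures and an assumption;
# the table above may include overheads this ignores.
EFFECTIVE_BPW = {"Q4_K_M": 4.85, "Q5_K_M": 5.7, "Q8_0": 8.5, "F16": 16.0}

def weight_gb(params, quant):
    return params * EFFECTIVE_BPW[quant] / 8 / 1e9

print(f"{weight_gb(141e9, 'Q4_K_M'):.1f} GB")  # ≈ 85.5 GB vs 86.0 GB in the table
```

Working backward from the table, 86.0 GB over 141B params implies ~4.88 effective bits/weight, consistent with that reading.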

Get started

Ollama:
  ollama run mixtral-8x22b

HuggingFace:
  huggingface-cli download mixtral-8x22b
See all results for H100 NVL 188GB
See all hardware for Mixtral 8x22B