Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.




Can NVIDIA H100 80GB run Llama 4 Maverick 17B 128E?

Grade: F (Won't run)
Fit: Too heavy
Using Q4_K_M in vLLM


Fit status: Too heavy
Decode: 34.5 tok/s
TTFT: 5607 ms
Safe context: 5K tokens
Memory: 257.1 GB required / 80.0 GB available

Memory breakdown

  • Weights: 244.0 GB
  • KV cache: 2.7 GB
  • Runtime: 2.4 GB
  • Headroom: 8.0 GB
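
Read as arithmetic, the verdict is just this breakdown summed and compared against the card's usable memory. A minimal sketch of that check, assuming the simple additive accounting the breakdown suggests (the variable names are ours, not the site's):

```python
# Fit check implied by the memory breakdown above.
# Component values come from this page; the additive model and the
# variable names are assumptions for illustration.
weights_gb  = 244.0   # Q4_K_M weights for the 400B-parameter model
kv_cache_gb = 2.7     # KV cache at the 5K-token "safe context"
runtime_gb  = 2.4     # framework / CUDA runtime overhead
headroom_gb = 8.0     # reserve kept free (10% of the 80 GB card)

required_gb = weights_gb + kv_cache_gb + runtime_gb + headroom_gb
usable_gb = 80.0      # NVIDIA H100 80GB

print(f"required {required_gb:.1f} GB vs usable {usable_gb:.1f} GB")
print("fits" if required_gb <= usable_gb else "won't run: too heavy")
# -> required 257.1 GB vs usable 80.0 GB -> won't run
```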

Performance by workload

Workload         Grade   Fit         Decode        TTFT        Context
Agentic Coding   F       Too heavy   34.5 tok/s    8155 ms     10K
Chat             F       Too heavy   34.5 tok/s    3058 ms     4K
Coding           F       Too heavy   34.5 tok/s    5607 ms     5K
RAG              F       Too heavy   34.5 tok/s    10194 ms    10K
Reasoning        F       Too heavy   34.5 tok/s    6626 ms     5K
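
TTFT and decode speed combine into a rough end-to-end response time: prefill latency plus generated tokens divided by the decode rate. A small sketch using the Chat row above; the 300-token reply length is an assumption for illustration, not a figure from this page:

```python
# Rough end-to-end latency = time to first token + generation time.
# TTFT and decode rate are the Chat-row estimates above; the output
# length is an assumed value for illustration.
ttft_ms       = 3058.0   # prefill of the ~4K-token Chat context
decode_tok_s  = 34.5     # steady-state generation speed
output_tokens = 300      # assumed reply length

total_s = ttft_ms / 1000 + output_tokens / decode_tok_s
print(f"~{total_s:.1f} s for a {output_tokens}-token reply")
# -> ~11.8 s (about 3.1 s prefill + 8.7 s generation)
```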

Quantization options

How Llama 4 Maverick 17B 128E (400B params) fits at each quantization level on NVIDIA H100 80GB (80.0 GB usable); a rough sketch of the underlying size arithmetic follows the table.

Quant     Bits   VRAM       Quality     Fit
Q2_K      2      156.0 GB   Low         F (0)
Q3_K_S    3      196.0 GB   Low         F (0)
NVFP4     4      224.0 GB   Medium      F (0)
Q4_K_M    4      244.0 GB   Medium      F (0)
Q5_K_M    5      288.0 GB   High        F (0)
Q6_K      6      328.0 GB   High        F (0)
Q8_0      8      428.0 GB   Very High   F (0)
F16       16     820.0 GB   Maximum     F (0)
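
The VRAM column tracks the usual weight-size estimate of parameters times effective bits per weight, divided by 8. A hedged sketch of that arithmetic; the bits-per-weight values are back-solved from the table itself (VRAM × 8 / 400B), not published constants, and they exceed the nominal bit widths because these formats also store per-block scales:

```python
# Weight memory ≈ parameters (billions) × effective bits per weight / 8 -> GB.
# The bpw values are back-solved from the table above, so they are
# approximations, not spec figures.
params_b = 400  # total parameters, all experts included

effective_bpw = {
    "Q2_K": 3.12, "Q3_K_S": 3.92, "NVFP4": 4.48, "Q4_K_M": 4.88,
    "Q5_K_M": 5.76, "Q6_K": 6.56, "Q8_0": 8.56, "F16": 16.4,
}

usable_gb = 80.0  # NVIDIA H100 80GB
for quant, bpw in effective_bpw.items():
    weights_gb = params_b * bpw / 8
    verdict = "fits" if weights_gb <= usable_gb else "too heavy"
    print(f"{quant:7s} ~{weights_gb:6.1f} GB  {verdict}")
```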

Upgrade options

Hardware that runs Llama 4 Maverick 17B 128E well

AMD Instinct MI350X 288GB (Best value)
Grade C: 71.7 tok/s decode
~$8,000 MSRP

AMD Instinct MI325X 256GB (Biggest leap)
Grade C: 50.6 tok/s decode

See all results for NVIDIA H100 80GB
See all hardware for Llama 4 Maverick 17B 128E