Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)


Can NVIDIA H100 80GB run Qwen 2.5 VL 72B?

Grade: B (Good)
Runs well using Q4_K_M in vLLM

Capabilities:

  • Fit status: Runs well
  • Decode: 64.1 tok/s
  • TTFT: 3022 ms
  • Safe context: 20K tokens
  • Memory: 65.6 GB / 80.0 GB

Memory breakdown:

  • Weights: 43.9 GB
  • KV cache: 11.3 GB
  • Runtime: 2.4 GB
  • Headroom: 8.0 GB
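The KV cache line in the breakdown can be sanity-checked with the standard per-token formula: two tensors (K and V) per layer, sized by KV-head count and head dimension. The architecture values below are commonly cited figures for Qwen2.5-72B (80 layers, 8 KV heads via grouped-query attention, head dimension 128) and are assumptions on our part, not numbers published by this calculator:

```python
# Rough per-token KV-cache footprint. The factor of 2 covers the separate
# K and V tensors stored per layer; dtype_bytes=2 assumes fp16 cache.
def kv_cache_bytes(tokens, layers=80, kv_heads=8, head_dim=128, dtype_bytes=2):
    return 2 * layers * kv_heads * head_dim * dtype_bytes * tokens

per_token = kv_cache_bytes(1)          # 327,680 bytes, about 0.33 MB/token
at_33k = kv_cache_bytes(33_000) / 1e9  # about 10.8 GB at 33K context
print(f"{per_token} B/token, {at_33k:.1f} GB at 33K context")
```

Under these assumptions, a 33K-token context costs roughly 10.8 GB of cache, in the neighborhood of the 11.3 GB shown above (the remainder plausibly being allocator overhead or extra batch slots).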

Performance by workload

  Workload        Grade  Fit                Decode      TTFT     Context
  Agentic Coding  C      Runs with offload  64.1 tok/s  4395 ms  33K
  Chat            B      Runs well          64.1 tok/s  1648 ms  11K
  Coding          B      Runs well          64.1 tok/s  3022 ms  20K
  RAG             C      Runs with offload  64.1 tok/s  5494 ms  33K
  Reasoning       B      Runs well          64.1 tok/s  3571 ms  20K
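Decode speed is constant across workloads, while TTFT grows with context length. Dividing each row's context by its TTFT gives an implied prefill throughput, which is one way to read these numbers; this back-of-envelope relation is an assumption, not the calculator's published formula:

```python
# Implied prefill throughput: tokens of prompt context processed per second
# before the first output token appears.
def implied_prefill_tok_s(context_tokens, ttft_ms):
    return context_tokens / ttft_ms * 1000

# Context (tokens) and TTFT (ms) taken from the workload table above.
rows = {"Chat": (11_000, 1648), "Coding": (20_000, 3022), "RAG": (33_000, 5494)}
for name, (ctx, ttft) in rows.items():
    print(f"{name}: ~{implied_prefill_tok_s(ctx, ttft):,.0f} tok/s prefill")
```

Each row works out to roughly 6,000 to 6,700 tok/s of prefill, consistent with TTFT scaling close to linearly in prompt length.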

Quantization options

How Qwen 2.5 VL 72B (72B params) fits at each quantization level on NVIDIA H100 80GB (80.0 GB usable).

  Quant   Bits  VRAM      Quality    Fit
  Q2_K    2     28.1 GB   Low        D (37)
  Q3_K_S  3     35.3 GB   Low        D (39)
  NVFP4   4     40.3 GB   Medium     C (40)
  Q4_K_M  4     43.9 GB   Medium     C (41)
  Q6_K    6     59.0 GB   High       C (44)   Best for your GPU
  Q5_K_M  5     51.8 GB   High       C (43)
  Q8_0    8     77.0 GB   Very High  C (44)
  F16     16    147.6 GB  Maximum    F (0)
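The VRAM column tracks a simple rule of thumb: weight memory is roughly parameter count times effective bits per weight, divided by 8. A minimal sketch of that estimate follows; the 72.7B parameter count and the effective bits-per-weight values are assumptions (k-quants store scales and metadata, so they cost slightly more than their nominal bit width):

```python
# Weight-memory rule of thumb: params * effective bits-per-weight / 8 bytes.
# The bpw values are approximate community figures for llama.cpp k-quants,
# not numbers published by this calculator.
def weight_gb(params, bits_per_weight):
    return params * bits_per_weight / 8 / 1e9

PARAMS = 72.7e9  # assumed parameter count for Qwen 2.5 VL 72B

for name, bpw in [("Q4_K_M", 4.85), ("Q8_0", 8.5), ("F16", 16.0)]:
    print(f"{name}: ~{weight_gb(PARAMS, bpw):.1f} GB")
```

These estimates land near the table's 43.9 GB, 77.0 GB, and 147.6 GB figures; the small gaps are consistent with per-tensor metadata and the vision tower being accounted separately.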

Get started

Ollama
ollama run qwen-2.5-vl-72b
HuggingFace
huggingface-cli download qwen-2.5-vl-72b

Upgrade options

Hardware that runs Qwen 2.5 VL 72B well

  • NVIDIA GH200 96GB (Budget pick): Grade B, 73.8 tok/s decode
  • NVIDIA H20 96GB (Biggest leap): Grade B, 73.8 tok/s decode

See all results for NVIDIA H100 80GB. See all hardware for Qwen 2.5 VL 72B.