NVIDIA H100 80GB

Hopper Datacenter · Datacenter · Hopper · SXM · CUDA
VRAM: 80 GB · Bandwidth: 3350 GB/s · FP16 Compute: 989 TFLOPS · INT8 Inference: 1979 TOPS · MSRP: $40,000

[Comparison chart: VRAM, bandwidth, compute, inference, and value (2.47 TF/$k) for the NVIDIA H100 80GB vs. the category average and the Mac Studio M1 Ultra 128GB]

Specifications

Compute
  FP16: 989 TFLOPS
  INT8: 1979 TOPS
  Architecture: Hopper

Memory
  VRAM: 80 GB
  Bandwidth: 3350 GB/s

General
  Family: Hopper Datacenter
  Segment: Datacenter
  Interconnect: SXM
  Compute Platform: CUDA
  MSRP: $40,000

Architecture

Hopper

Hopper is NVIDIA's datacenter-focused architecture succeeding Ampere. Built on TSMC 4N, it introduces the Transformer Engine with automatic FP8/FP16 mixed-precision training, HBM3/HBM3e memory, and NVLink 4.0 for multi-GPU scaling. The H100 flagship delivers up to 3x the AI training performance of the A100.

AI Relevance

The Transformer Engine automatically manages FP8 precision for optimal training speed without accuracy loss. With up to 141 GB HBM3e (H200), Hopper GPUs can hold the largest open-weight models entirely in GPU memory, making them the workhorse of AI datacenters.

Process: TSMC 4N · Platform: CUDA · Tensor Cores: Gen 4 · Precisions: FP64, FP32, TF32, FP16, BF16, FP8, INT8
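
As a concrete illustration of the Transformer Engine workflow described above, here is a minimal FP8 sketch using NVIDIA's open-source transformer_engine package. The layer size, batch shape, and recipe settings are illustrative assumptions, not H100-specific requirements:

```python
# Minimal FP8 mixed-precision sketch with NVIDIA Transformer Engine on a
# Hopper-class GPU. Shapes and recipe settings are illustrative only.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Delayed scaling: FP8 scale factors come from a history of observed
# per-tensor amax values instead of being recomputed every step.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

layer = te.Linear(4096, 4096, bias=True).cuda()
x = torch.randn(8, 4096, device="cuda", dtype=torch.bfloat16)

# Inside fp8_autocast, supported TE modules run their GEMMs in FP8 while
# master weights stay in higher precision.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)
y.float().sum().backward()
```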

Recommendations by Workload

Agentic Coding

Qwen3-Coder-Next (Grade B)

This model is still usable for agentic coding, but it is not the most specialized pick. It belongs to a current frontier family for local AI. It fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 174.7 tok/s · 44K ctx · llama.cpp
58.6 GB / 80.0 GB VRAM
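
The VRAM figure in this card can be roughly reproduced from first principles: quantized weights cost parameters times bits-per-weight divided by 8. A back-of-the-envelope sketch follows; the 5.9-bit figure is an assumption chosen to match the card, not the site's published sizing model:

```python
# Rough weight-memory estimate for a quantized checkpoint. The quant width
# is an assumption for illustration, not this site's exact sizing model.
def weights_gb(params_b: float, bits_per_weight: float) -> float:
    # billions of params * bits per weight / 8 bits per byte = gigabytes
    return params_b * bits_per_weight / 8

# Qwen3-Coder-Next at 80B parameters and a ~5.9-bit quant: ~59 GB of
# weights, in the neighborhood of the 58.6 GB shown above; the rest of the
# 80 GB budget goes to KV cache and runtime buffers.
print(f"{weights_gb(80, 5.9):.1f} GB")
```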

Chat

Qwen 3 32B (Grade C)

This model is a direct match for chat. It belongs to a current frontier family for local AI. It fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 144.2 tok/s · 21K ctx · llama.cpp
30.9 GB / 80.0 GB VRAM

Coding

Qwen3-Coder-Next (Grade B)

This model is a direct match for coding. It belongs to a current frontier family for local AI. It fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 174.7 tok/s · 22K ctx · llama.cpp
58.5 GB / 80.0 GB VRAM

RAG

Command R 35B (Grade C)

This model is a direct match for RAG. It sits in the middle of the current model mix. It fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 131.8 tok/s · 62K ctx · llama.cpp
41.2 GB / 80.0 GB VRAM

Reasoning

Qwen3-Coder-Next (Grade B)

This model is a direct match for reasoning. It belongs to a current frontier family for local AI. It fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 174.7 tok/s · 22K ctx · llama.cpp
58.5 GB / 80.0 GB VRAM
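
A useful way to read these cards: decode speed depends on the model, while the listed context length depends on the VRAM left over after loading it. Dividing that leftover by the context gives the implied KV-cache cost per token. The sketch below derives it from the card numbers, under the simplifying assumption that all leftover memory is budgeted to context:

```python
# Implied KV-cache cost per token, assuming the VRAM left after loading the
# model is budgeted entirely to context. Derived from the cards above; the
# all-leftover-goes-to-KV assumption is a simplification.
def implied_kv_mb_per_token(vram_gb: float, model_gb: float, ctx_tokens: int) -> float:
    return (vram_gb - model_gb) * 1024 / ctx_tokens

# Chat card (Qwen 3 32B): (80.0 - 30.9) GB spread over 21K tokens.
print(f"{implied_kv_mb_per_token(80.0, 30.9, 21_000):.1f} MB/token")  # ~2.4

# Coding card (Qwen3-Coder-Next): (80.0 - 58.5) GB over 22K tokens.
print(f"{implied_kv_mb_per_token(80.0, 58.5, 22_000):.1f} MB/token")  # ~1.0
```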

Full Model Compatibility

Each row: Model | Grade Score | Size & Type | Est. VRAM | Decode Speed | Max Context. A rough roofline sketch for the decode figures follows the table.

Alibaba · Qwen3-Coder-Next | B 57 | 80B MoE | 58.5 GB | 175 tok/s | 22K ctx
Meta · Llama 3.3 70B | B 56 | 70B dense | 62.5 GB | 66 tok/s | 20K ctx
Alibaba · Qwen 2.5 72B | B 56 | 72B dense | 64.1 GB | 64 tok/s | 20K ctx
Alibaba · Qwen 2.5 VL 72B | B 56 | 72B dense | 64.1 GB | 64 tok/s | 20K ctx
Mistral · Mistral Small 4 119B | C 54 | 119B MoE | 82.5 GB | 111 tok/s | 16K ctx
Unsloth · Qwen3.5 35B A3B | C 52 | 35B dense | 35.7 GB | 132 tok/s | 36K ctx
Lmstudio-community · Qwen3.5 35B A3B | C 52 | 35B dense | 35.7 GB | 132 tok/s | 36K ctx
Alibaba · Qwen 2.5 Coder 32B | C 51 | 32B dense | 33.4 GB | 144 tok/s | 38K ctx
Unsloth · Qwen3.5 27B | C 50 | 27B dense | 29.6 GB | 171 tok/s | 43K ctx
Unsloth · gemma 3 27b it | C 50 | 27B dense | 29.6 GB | 171 tok/s | 43K ctx
Alibaba · Qwen3-Coder 30B A3B Instruct | C 50 | 30.5B MoE | 28.3 GB | 391 tok/s | 45K ctx
Alibaba · Qwen3-VL 30B A3B Instruct | C 50 | 30B MoE | 28.0 GB | 405 tok/s | 46K ctx
Mistral · Devstral Small 2 24B Instruct | C 49 | 24B dense | 27.3 GB | 192 tok/s | 47K ctx
Mistral · Devstral Small 1.1 | C 49 | 24B dense | 27.3 GB | 192 tok/s | 47K ctx
Mistral · Codestral 2 25.08 | C 49 | 22B dense | 25.8 GB | 210 tok/s | 50K ctx
Unsloth · Qwen3.5 9B | C 46 | 9B dense | 15.8 GB | 513 tok/s | 81K ctx
HauhauCS · Qwen3.5 9B Uncensored HauhauCS Aggressive | C 46 | 9B dense | 15.8 GB | 513 tok/s | 81K ctx
Lmstudio-community · Qwen3.5 9B | C 46 | 9B dense | 15.8 GB | 513 tok/s | 81K ctx
Bartowski · Meta Llama 3.1 8B Instruct | C 46 | 8B dense | 15.0 GB | 577 tok/s | 85K ctx
Xtuner · llava llama 3 8b v1 1 | C 46 | 8B dense | 15.0 GB | 577 tok/s | 85K ctx
Unsloth · DeepSeek R1 0528 Qwen3 8B | C 46 | 8B dense | 15.0 GB | 577 tok/s | 85K ctx
MaziyarPanahi · Meta Llama 3 8B Instruct | C 46 | 8B dense | 15.0 GB | 577 tok/s | 85K ctx
TheBloke · Llama 2 7B Chat | C 46 | 7B dense | 14.3 GB | 659 tok/s | 90K ctx
TheBloke · Mistral 7B Instruct v0.2 | C 46 | 7B dense | 14.3 GB | 659 tok/s | 90K ctx
MaziyarPanahi · Mistral 7B Instruct v0.3 | C 46 | 7B dense | 14.3 GB | 659 tok/s | 90K ctx
Unsloth · Qwen3.5 4B | C 45 | 4B dense | 12.1 GB | 1153 tok/s | 105K ctx
Bartowski · Llama 3.2 3B Instruct | C 45 | 3B dense | 11.9 GB | 1329 tok/s | 108K ctx
Lmstudio-community · gemma 3 4b it | C 45 | 4B dense | 12.1 GB | 1153 tok/s | 105K ctx
Bartowski · gemma 2 2b it | C 45 | 2B dense | 11.3 GB | 1802 tok/s | 113K ctx
Qwen · Qwen2.5 3B Instruct | C 45 | 3B dense | 11.5 GB | 1538 tok/s | 111K ctx
Google · gemma 2b | C 45 | 2B dense | 10.9 GB | 2307 tok/s | 117K ctx
TheDrummer · Gemmasutra Mini 2B v1 | C 45 | 2B dense | 10.9 GB | 2307 tok/s | 117K ctx
Qwen · Qwen2.5 1.5B Instruct | C 45 | 1.5B dense | 10.6 GB | 2814 tok/s | 121K ctx
Hugging-quants · Llama 3.2 1B Instruct Q8 0 | C 45 | 1B dense | 10.5 GB | 2955 tok/s | 122K ctx
TheBloke · TinyLlama 1.1B Chat v1.0 | C 45 | 1.1B dense | 10.4 GB | 2814 tok/s | 123K ctx
Ggml-org · SmolVLM 500M Instruct | C 45 | 0.5B dense | 10.1 GB | 2955 tok/s | 127K ctx
Ggml-org · embeddinggemma 300M | C 45 | 0.3B dense | 9.9 GB | 2955 tok/s | 129K ctx
Unsloth · Qwen3.5 122B A10B | C 42 | 122B dense | 87.7 GB | 42 tok/s | 15K ctx
Cohere · Command A 111B | C 41 | 111B dense | 94.0 GB | 39 tok/s | 14K ctx
DeepSeek · DeepSeek R1 671B | F 0 | 671B MoE | 424.0 GB | 20 tok/s | 4K ctx
Mistral · Devstral 2 123B Instruct | F 0 | 123B dense | 103.1 GB | 38 tok/s | 12K ctx
Z.ai · GLM-5 | F 0 | 744B MoE | 469.0 GB | 18 tok/s | 4K ctx
Moonshot AI · Kimi K2.5 | F 0 | 1000B MoE | 623.9 GB | 14 tok/s | 4K ctx
Mistral · Mistral Large 3 | F 0 | 675B MoE | 427.1 GB | 20 tok/s | 4K ctx
Alibaba · Qwen3-Coder 480B A35B Instruct | F 0 | 480B MoE | 307.2 GB | 27 tok/s | 4K ctx
DeepSeek · DeepSeek V3 671B | F 0 | 671B MoE | 424.0 GB | 20 tok/s | 4K ctx
Mistral · Mixtral 8x22B | F 0 | 141B MoE | 101.0 GB | 63 tok/s | 13K ctx
Alibaba · Qwen 3 235B A22B | F 0 | 235B MoE | 155.7 GB | 52 tok/s | 8K ctx
Unsloth · Qwen3.5 397B A17B | F 0 | 397B dense | 313.1 GB | 12 tok/s | 4K ctx
Meta · Llama 4 Maverick 17B 128E | F 0 | 400B MoE | 255.6 GB | 35 tok/s | 5K ctx
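
The decode column above tracks a simple roofline: autoregressive decoding reads every active weight once per token, so single-stream throughput is capped near bandwidth divided by active-weight bytes. Here is a hedged sketch; the ~52.5 GB weight figure is an assumption (the listed VRAM minus roughly 10 GB of context and runtime buffers), not a published number:

```python
# Roofline-style decode bound: tok/s <= bandwidth / active-weight bytes.
# The weight size below is an assumption (listed VRAM minus an assumed
# ~10 GB of context/runtime buffers), not a published figure.
H100_BANDWIDTH_GB_S = 3350

def decode_cap_tok_s(active_weights_gb: float) -> float:
    return H100_BANDWIDTH_GB_S / active_weights_gb

# Dense Llama 3.3 70B, assuming ~52.5 GB of weights out of its 62.5 GB row:
print(f"{decode_cap_tok_s(52.5):.0f} tok/s")  # ~64, near the 66 tok/s listed

# MoE rows beat this bound relative to their total size because only a few
# experts are read per token, e.g. 80B Qwen3-Coder-Next at ~175 tok/s.
```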

Just out of reach

High-quality models you could run with a memory upgrade:

DeepSeek · DeepSeek R1 671B | 671B | Tier 5 | needs ~429.8 GB
Mistral · Devstral 2 123B Instruct | 123B | Tier 5 | needs ~122.4 GB (runs on Mac Studio M3 Ultra 256GB)
Z.ai · GLM-5 | 744B | Tier 5 | needs ~475.2 GB
Moonshot AI · Kimi K2.5 | 1000B | Tier 5 | needs ~628.9 GB
Mistral · Mistral Large 3 | 675B | Tier 5 | needs ~433.5 GB
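
For scale, a naive division shows how many 80 GB cards each of these would take. This is a floor, since real multi-GPU sharding adds per-device buffers and interconnect overhead:

```python
import math

# Naive lower bound on GPU count for the out-of-reach models above. Real
# tensor/pipeline parallelism adds per-device overhead, so treat as a floor.
def min_gpus(needed_gb: float, vram_per_gpu_gb: float = 80.0) -> int:
    return math.ceil(needed_gb / vram_per_gpu_gb)

for name, gb in [("DeepSeek R1 671B", 429.8), ("GLM-5", 475.2),
                 ("Kimi K2.5", 628.9), ("Mistral Large 3", 433.5)]:
    print(f"{name}: at least {min_gpus(gb)}x H100 80GB")
```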

Upgrade paths

Upgrade from NVIDIA H100 80GB: see what you unlock with more powerful hardware.

Apple · Mac Studio M1 Ultra 128GB (next step up)
128 GB unified memory (+48)
Grade B · Unlocks Solar Open 100B, Solar Open 100B i1
~$3,999 MSRP

NVIDIA · H20 96GB (NVIDIA upgrade)
96 GB VRAM (+16) · 4000 GB/s (+650)
Grade A · Unlocks Qwen3.5 122B A10B, Mixtral 8x22B, Command A 111B, +3 more · +13% faster avg

AMD · Instinct MI325X 256GB (biggest leap)
256 GB VRAM (+176) · 6000 GB/s (+2650)
Grade A · Unlocks Devstral 2 123B Instruct, Qwen3.5 122B A10B, Mixtral 8x22B, +12 more · +49% faster avg

AMD · Instinct MI350X 288GB (best value)
288 GB VRAM (+208) · 8000 GB/s (+4650)
Grade A · Unlocks Devstral 2 123B Instruct, Qwen3-Coder 480B A35B Instruct, Qwen3.5 122B A10B, +13 more · +98% faster avg
~$8,000 MSRP
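
The "faster avg" percentages sit below what bandwidth alone would suggest, which is expected: if decode were purely bandwidth-bound, the bandwidth ratio would be the ceiling. A sketch of that ceiling, using the bandwidth figures listed above:

```python
# Best-case decode speedup from a bandwidth upgrade, assuming decode stays
# purely bandwidth-bound. The site's averages (+13%, +49%, +98%) land below
# these ceilings, since not everything scales with bandwidth.
H100_BW_GB_S = 3350

for name, bw_gb_s in [("NVIDIA H20 96GB", 4000),
                      ("AMD Instinct MI325X 256GB", 6000),
                      ("AMD Instinct MI350X 288GB", 8000)]:
    print(f"{name}: up to +{bw_gb_s / H100_BW_GB_S - 1:.0%} decode throughput")
```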

Compare this GPU