
NVIDIA

B100 192GB

Data Center · Blackwell · NVLINK · CUDA
VRAM: 192 GB
Bandwidth: 8,000 GB/s
FP16 Compute: 1,750 TFLOPS
INT8 Inference: 3,500 TOPS
[Comparison chart: B100 192GB vs. category average vs. AMD Instinct MI325X 256GB across VRAM, bandwidth, compute, and inference throughput]

Specifications

Compute
  FP16: 1,750 TFLOPS
  INT8: 3,500 TOPS
  Architecture: Blackwell
Memory
  VRAM: 192 GB
  Bandwidth: 8,000 GB/s
General
  Family: Data Center
  Segment: Data Center
  Interconnect: NVLINK
  Compute Platform: CUDA
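A useful derived figure: dividing FP16 compute by memory bandwidth gives the arithmetic intensity the card can sustain before stalling on memory. A minimal sketch using the spec-table numbers above (the interpretation in the comments is the standard roofline argument, not site data):

```python
# Roofline balance point: FLOPs the B100 can do per byte it moves from VRAM.
fp16_flops = 1750e12   # 1,750 TFLOPS FP16 (spec table above)
bandwidth = 8000e9     # 8,000 GB/s

ops_per_byte = fp16_flops / bandwidth
print(f"Balance point: ~{ops_per_byte:.0f} FLOPs per byte")
# ~219 FLOPs/byte. Single-stream token generation performs only ~2 FLOPs per
# weight byte read, far below the balance point, so decode on this card is
# memory-bandwidth-bound; the huge compute figure mostly benefits prefill
# and large-batch serving.
```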

Architecture

Blackwell

Blackwell is NVIDIA's fourth-generation RTX architecture, built on TSMC's 4NP process. It introduces fifth-generation Tensor Cores with native FP4 precision support, enabling roughly double the inference throughput per watt of Ada Lovelace's FP8 operations. Key innovations include the Neural Rendering Pipeline for AI-driven shading and the debut of GDDR7 memory in consumer GPUs.

AI Relevance

FP4 Tensor Cores deliver the highest tokens-per-watt efficiency of any consumer architecture. Native FP4 quantization means models can run at lower precision with minimal quality loss, effectively doubling the VRAM available for model weights relative to FP8.

Process: TSMC 4NP · Platform: CUDA · Tensor Cores: Gen 5 · Precisions: FP32, FP16, BF16, FP8, FP4, INT8, INT4
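To see why FP4 support matters for capacity, here is a quick weights-only footprint calculation at different precisions; the 123B parameter count is just an illustrative size, and real deployments add KV cache and runtime overhead on top:

```python
# Weights-only memory footprint at different precisions (KV cache and
# activations not included). 123e9 parameters is an illustrative size.
params = 123e9

for name, bits in [("FP16", 16), ("FP8", 8), ("FP4", 4)]:
    gb = params * bits / 8 / 1e9
    print(f"{name}: {gb:,.0f} GB of weights")
# FP16: 246 GB -> does not fit in 192 GB
# FP8:  123 GB -> fits
# FP4:   62 GB -> fits, leaving room for long contexts
```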

Recommendations by Workload

Agentic Coding

Grade: B

Devstral 2 123B Instruct

This model is still usable for agentic coding, though it is not the most specialized pick. It belongs to a current frontier family for local AI and fits natively with comfortable headroom. Known channels: huggingface, lm-studio.

Decode 89.6 tok/s · 46K ctx · llama.cpp
133.6 GB / 192.0 GB VRAM

Chat

Grade: C

Mistral Small 4 119B

This model is a direct match for chat. It belongs to a current frontier family for local AI and fits natively with comfortable headroom. Known channels: huggingface, lm-studio.

Decode 269.3 tok/s · 16K ctx · llama.cpp
93.5 GB / 192.0 GB VRAM

Coding

Grade: C

Devstral 2 123B Instruct

This model is a direct match for coding. It belongs to a current frontier family for local AI and fits natively with comfortable headroom. Known channels: huggingface, lm-studio.

Decode 89.6 tok/s · 27K ctx · llama.cpp
114.3 GB / 192.0 GB VRAM

RAG

Grade: B

Command A 111B

This model is a direct match for RAG. It belongs to a current frontier family for local AI and fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 99.2 tok/s · 50K ctx · llama.cpp
122.5 GB / 192.0 GB VRAM

Reasoning

Grade: C

Devstral 2 123B Instruct

This model is a direct match for reasoning. It belongs to a current frontier family for local AI and fits natively with comfortable headroom. Known channels: huggingface, lm-studio.

Decode 89.6 tok/s · 27K ctx · llama.cpp
114.3 GB / 192.0 GB VRAM
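The decode figures in these cards follow the usual single-GPU pattern: generation speed is capped by how fast the weights can be streamed from VRAM. A hedged upper-bound sketch (the GB figures in the loop are hypothetical, not taken from the cards):

```python
# Single-stream decode upper bound: every active weight byte is streamed from
# VRAM once per generated token, so tok/s <= bandwidth / active_weight_bytes.
bandwidth_gb_s = 8000  # B100 spec from above

def decode_upper_bound(active_weight_gb: float) -> float:
    return bandwidth_gb_s / active_weight_gb

# Hypothetical weight footprints (assumptions, not values from the cards):
for gb in (40, 90, 130):
    print(f"{gb} GB active weights -> <= {decode_upper_bound(gb):.0f} tok/s")
# 200, 89, and 62 tok/s respectively. Real throughput lands below the bound;
# MoE models read only their active experts per token, which is why they
# decode much faster than dense models of the same total size.
```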

Full Model Compatibility

Vendor · Model · Grade (score) · Params · VRAM · Decode · Context · Arch
Mistral · Devstral 2 123B Instruct · C (55) · 123B · 114.3 GB · 90 tok/s · 27K ctx · dense
Mistral · Mixtral 8x22B · C (55) · 141B · 112.2 GB · 150 tok/s · 27K ctx · moe
Alibaba · Qwen 3 235B A22B · C (54) · 235B · 166.9 GB · 125 tok/s · 18K ctx · moe
Cohere · Command A 111B · C (54) · 111B · 105.2 GB · 99 tok/s · 29K ctx · dense
Unsloth · Qwen3.5 122B A10B · C (53) · 122B · 98.9 GB · 105 tok/s · 31K ctx · dense
Mistral · Mistral Small 4 119B · C (53) · 119B · 93.7 GB · 269 tok/s · 33K ctx · moe
Alibaba · Qwen 2.5 72B · C (51) · 72B · 75.3 GB · 153 tok/s · 41K ctx · dense
Alibaba · Qwen 2.5 VL 72B · C (51) · 72B · 75.3 GB · 153 tok/s · 33K ctx · dense
Meta · Llama 3.3 70B · C (50) · 70B · 73.7 GB · 157 tok/s · 42K ctx · dense
Alibaba · Qwen3-Coder-Next · C (50) · 80B · 69.7 GB · 417 tok/s · 44K ctx · moe
Unsloth · Qwen3.5 35B A3B · C (47) · 35B · 46.9 GB · 315 tok/s · 65K ctx · dense
lmstudio-community · Qwen3.5 35B A3B · C (47) · 35B · 46.9 GB · 315 tok/s · 65K ctx · dense
Alibaba · Qwen 2.5 Coder 32B · C (47) · 32B · 44.6 GB · 344 tok/s · 69K ctx · dense
Unsloth · Qwen3.5 27B · C (47) · 27B · 40.8 GB · 408 tok/s · 75K ctx · dense
Unsloth · gemma 3 27b it · C (47) · 27B · 40.8 GB · 408 tok/s · 75K ctx · dense
Alibaba · Qwen3-Coder 30B A3B Instruct · C (47) · 30.5B · 39.5 GB · 934 tok/s · 78K ctx · moe
Alibaba · Qwen3-VL 30B A3B Instruct · C (47) · 30B · 39.2 GB · 966 tok/s · 78K ctx · moe
Mistral · Devstral Small 2 24B Instruct · C (46) · 24B · 38.5 GB · 459 tok/s · 80K ctx · dense
Mistral · Devstral Small 1.1 · C (46) · 24B · 38.5 GB · 459 tok/s · 80K ctx · dense
Mistral · Codestral 2 25.08 · C (46) · 22B · 37.0 GB · 501 tok/s · 83K ctx · dense
Unsloth · Qwen3.5 9B · C (45) · 9B · 27.0 GB · 1224 tok/s · 114K ctx · dense
HauhauCS · Qwen3.5 9B Uncensored HauhauCS Aggressive · C (45) · 9B · 27.0 GB · 1224 tok/s · 114K ctx · dense
Bartowski · Meta Llama 3.1 8B Instruct · C (45) · 8B · 26.2 GB · 1377 tok/s · 117K ctx · dense
lmstudio-community · Qwen3.5 9B · C (45) · 9B · 27.0 GB · 1224 tok/s · 114K ctx · dense
xtuner · llava llama 3 8b v1 1 · C (45) · 8B · 26.2 GB · 1377 tok/s · 117K ctx · dense
Unsloth · DeepSeek R1 0528 Qwen3 8B · C (45) · 8B · 26.2 GB · 1377 tok/s · 117K ctx · dense
TheBloke · Llama 2 7B Chat · C (45) · 7B · 25.5 GB · 1574 tok/s · 121K ctx · dense
MaziyarPanahi · Meta Llama 3 8B Instruct · C (45) · 8B · 26.2 GB · 1377 tok/s · 117K ctx · dense
TheBloke · Mistral 7B Instruct v0.2 · C (45) · 7B · 25.5 GB · 1574 tok/s · 121K ctx · dense
MaziyarPanahi · Mistral 7B Instruct v0.3 · C (45) · 7B · 25.5 GB · 1574 tok/s · 121K ctx · dense
Unsloth · Qwen3.5 4B · C (45) · 4B · 23.3 GB · 2754 tok/s · 132K ctx · dense
Bartowski · Llama 3.2 3B Instruct · C (45) · 3B · 23.1 GB · 3173 tok/s · 133K ctx · dense
Bartowski · gemma 2 2b it · C (45) · 2B · 22.5 GB · 4302 tok/s · 136K ctx · dense
Google · gemma 2b · C (45) · 2B · 22.1 GB · 5508 tok/s · 139K ctx · dense
lmstudio-community · gemma 3 4b it · C (45) · 4B · 23.3 GB · 2754 tok/s · 132K ctx · dense
Qwen · Qwen2.5 3B Instruct · C (45) · 3B · 22.7 GB · 3672 tok/s · 135K ctx · dense
hugging-quants · Llama 3.2 1B Instruct Q8 0 · C (45) · 1B · 21.7 GB · 7056 tok/s · 141K ctx · dense
Qwen · Qwen2.5 1.5B Instruct · C (45) · 1.5B · 21.8 GB · 6720 tok/s · 141K ctx · dense
TheDrummer · Gemmasutra Mini 2B v1 · C (45) · 2B · 22.1 GB · 5508 tok/s · 139K ctx · dense
TheBloke · TinyLlama 1.1B Chat v1.0 · C (45) · 1.1B · 21.6 GB · 6720 tok/s · 142K ctx · dense
ggml-org · SmolVLM 500M Instruct · C (45) · 0.5B · 21.3 GB · 7056 tok/s · 144K ctx · dense
ggml-org · embeddinggemma 300M · C (44) · 0.3B · 21.1 GB · 7056 tok/s · 145K ctx · dense
DeepSeek · DeepSeek R1 671B · F (0) · 671B · 435.2 GB · 48 tok/s · 7K ctx · moe
Z.ai · GLM-5 · F (0) · 744B · 480.2 GB · 43 tok/s · 6K ctx · moe
Moonshot AI · Kimi K2.5 · F (0) · 1000B · 635.1 GB · 34 tok/s · 5K ctx · moe
Mistral · Mistral Large 3 · F (0) · 675B · 438.3 GB · 47 tok/s · 7K ctx · moe (+1)
Alibaba · Qwen3-Coder 480B A35B Instruct · F (0) · 480B · 318.4 GB · 64 tok/s · 10K ctx · moe
DeepSeek · DeepSeek V3 671B · F (0) · 671B · 435.2 GB · 48 tok/s · 7K ctx · moe
Unsloth · Qwen3.5 397B A17B · F (0) · 397B · 324.3 GB · 28 tok/s · 9K ctx · dense
Meta · Llama 4 Maverick 17B 128E · F (0) · 400B · 266.8 GB · 83 tok/s · 12K ctx · moe
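The VRAM column behaves like quantized weights plus a context-dependent KV cache plus fixed overhead. A minimal sketch of that decomposition, with all coefficients as illustrative assumptions rather than the site's calibrated formula:

```python
# Sketch of the usual VRAM decomposition: quantized weights + KV cache that
# grows with context + fixed runtime overhead. Coefficients are illustrative
# assumptions, not the site's actual model.
def est_vram_gb(params_b: float, ctx_tokens: int,
                bits_per_weight: float = 8.0,   # assumed quantization width
                kv_gb_per_1k: float = 0.05,     # assumed KV cost per 1K ctx
                overhead_gb: float = 1.5) -> float:
    weights = params_b * bits_per_weight / 8    # GB of weights
    kv = ctx_tokens / 1000 * kv_gb_per_1k       # GB of KV cache
    return weights + kv + overhead_gb

print(f"{est_vram_gb(70, 42_000):.1f} GB")
# ~73.6 GB; the table lists 73.7 GB for Llama 3.3 70B at 42K ctx, so this
# decomposition is in the right regime even if the site's exact coefficients
# differ per model.
```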

Just out of reach

High-quality models you could run with a memory upgrade:

DeepSeek · DeepSeek R1 671B · 671B · Tier 5 · needs ~441.0 GB
Z.ai · GLM-5 · 744B · Tier 5 · needs ~486.4 GB
Moonshot AI · Kimi K2.5 · 1000B · Tier 5 · needs ~640.1 GB
Mistral · Mistral Large 3 · 675B · Tier 5 · needs ~444.7 GB
Alibaba · Qwen3-Coder 480B A35B Instruct · 480B · Tier 5 · needs ~323.8 GB
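As a rough capacity check, you can divide each requirement by the B100's 192 GB to see how many cards a naive multi-GPU split would need (ignoring sharding overhead and interconnect effects):

```python
import math

# Naive card count: ceil(required VRAM / 192 GB per B100). A capacity check
# only; real tensor-parallel setups add per-GPU overhead on top.
for name, need_gb in [("DeepSeek R1 671B", 441.0),
                      ("GLM-5", 486.4),
                      ("Kimi K2.5", 640.1)]:
    print(f"{name}: {math.ceil(need_gb / 192)}x B100")
# DeepSeek R1 671B: 3x, GLM-5: 3x, Kimi K2.5: 4x
```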

Upgrade paths

Upgrade from B100 192GB: see what you unlock with more powerful hardware.

AMD · Instinct MI325X 256GB (next step up)
256 GB VRAM (+64) · Grade: A
Unlocks Llama 4 Maverick 17B 128E, EXAONE 236B A23B, Baichuan M3 235B, +1 more

AMD · Instinct MI350X 288GB (best value)
288 GB VRAM (+96) · Grade: A
Unlocks Qwen3-Coder 480B A35B Instruct, Llama 4 Maverick 17B 128E, EXAONE 236B A23B, +2 more
~$8,000 MSRP