NVIDIA RTX PRO 5000 Blackwell 48GB

RTX PRO Blackwell · Workstation · Blackwell · PCIe 5 · CUDA

VRAM: 48 GB
Bandwidth: 1.3k GB/s
FP16 Compute: 96 TFLOPS
INT8 Inference: 2.5k TOPS

[Comparison chart: RTX PRO 5000 Blackwell 48GB vs. category average vs. NVIDIA A16 64GB]

Specifications

Compute
  FP16: 96 TFLOPS
  INT8: 2500 TOPS
  Architecture: Blackwell

Memory
  VRAM: 48 GB
  Bandwidth: 1344 GB/s

General
  Family: RTX PRO Blackwell
  Segment: Workstation
  Interconnect: PCIe 5
  Compute Platform: CUDA
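
For local inference, the bandwidth line is the one to watch: token generation is usually memory-bound, so a first-order ceiling on decode speed is memory bandwidth divided by the bytes streamed per token (roughly the size of the quantized weights). Below is a minimal sketch of that estimate, assuming a dense model and ignoring KV-cache traffic and runtime overhead; the decode figures elsewhere on this page come from a more detailed model, so treat this as an upper bound, not a prediction.

```python
def decode_ceiling_tok_s(bandwidth_gb_s: float, weights_gb: float) -> float:
    """First-order decode ceiling for a dense model: each generated token
    streams the quantized weights through the memory bus once, so
    throughput is bounded by bandwidth / bytes-per-token. Ignores
    KV-cache reads, compute limits, and framework overhead."""
    return bandwidth_gb_s / weights_gb

# RTX PRO 5000 Blackwell at 1344 GB/s, running a 24B model quantized to ~24.1 GB:
print(f"~{decode_ceiling_tok_s(1344, 24.1):.0f} tok/s ceiling")  # ~56 tok/s
```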

Architecture

Blackwell

Blackwell is NVIDIA's fourth-generation RTX architecture, built on TSMC's 4NP process. It introduces fifth-generation Tensor Cores with native FP4 precision support, roughly doubling inference throughput per watt compared to Ada Lovelace's FP8 operations. Key innovations include the Neural Rendering Pipeline for AI-driven shading and the debut of GDDR7 memory in consumer GPUs.

AI Relevance

FP4 Tensor Cores deliver the highest tokens-per-watt efficiency of any consumer architecture. Native FP4 quantization lets models run at lower precision with minimal quality loss, halving the weight footprint relative to FP8 and effectively doubling how much model fits in a given amount of VRAM.

Process: TSMC 4NP · Platform: CUDA · Tensor Cores: Gen 5 · Precisions: FP32, FP16, BF16, FP8, FP4, INT8, INT4
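
The arithmetic behind that claim is bytes per parameter: FP16 stores weights at 2 bytes per parameter, FP8 at 1, and FP4 at 0.5, so each precision step down halves the weight footprint. A quick sketch (real quantization formats add a few percent of overhead for scaling factors, ignored here):

```python
BYTES_PER_PARAM = {"FP16": 2.0, "BF16": 2.0, "FP8": 1.0, "INT8": 1.0,
                   "FP4": 0.5, "INT4": 0.5}

def weight_footprint_gb(params_b: float, precision: str) -> float:
    """Raw weight size in GB: billions of params x bytes per param."""
    return params_b * BYTES_PER_PARAM[precision]

for p in ("FP16", "FP8", "FP4"):
    print(f"70B @ {p}: {weight_footprint_gb(70, p):.0f} GB")
# 70B @ FP16: 140 GB, FP8: 70 GB, FP4: 35 GB -- only the FP4 copy
# would fit in this card's 48 GB.
```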

Recommendations by Workload

Agentic Coding

Devstral Small 2 24B Instruct (Tier C)

This model is still usable for agentic coding, but it is not the most specialized pick. It belongs to a current frontier family for local AI and fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 77.1 tok/s · 55K ctx · llama.cpp
27.8 GB / 48.0 GB VRAM

Chat

Qwen 3 30B A3B (Tier C)

This model is a direct match for chat. It belongs to a current frontier family for local AI and fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 157.0 tok/s · 15K ctx · llama.cpp
25.1 GB / 48.0 GB VRAM

Coding

Devstral Small 2 24B Instruct (Tier C)

This model is a direct match for coding. It belongs to a current frontier family for local AI and fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 77.1 tok/s · 32K ctx · llama.cpp
24.1 GB / 48.0 GB VRAM

RAG

Command R 35B (Tier B)

This model is a direct match for RAG. It sits in the middle of the current model mix and fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 52.9 tok/s · 40K ctx · llama.cpp
38.0 GB / 48.0 GB VRAM

Reasoning

Qwen 3 32B (Tier C)

This model is a direct match for reasoning. It belongs to a current frontier family for local AI and fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 57.8 tok/s · 25K ctx · llama.cpp
30.2 GB / 48.0 GB VRAM
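
Each "x GB / 48.0 GB VRAM" readout above bundles quantized weights, KV cache for the quoted context, and runtime overhead. Here is a sketch of the usual estimate, assuming a standard transformer with grouped-query attention; the layer and head counts below are illustrative placeholders, not the calculator's actual inputs:

```python
def kv_cache_gb(ctx: int, layers: int, kv_heads: int, head_dim: int,
                bytes_per_elem: float = 2.0) -> float:
    """KV cache = 2 (K and V) x layers x kv_heads x head_dim x context
    x bytes per element. Grouped-query attention keeps kv_heads small."""
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem / 1e9

def total_vram_gb(weights_gb: float, kv_gb: float,
                  overhead_gb: float = 1.5) -> float:
    # Overhead covers the CUDA context, activation buffers, and framework
    # allocations; 1.5 GB is an assumed round figure, not a measured one.
    return weights_gb + kv_gb + overhead_gb

# Illustrative 24B-class config: 40 layers, 8 KV heads, head_dim 128,
# ~21 GB of quantized weights, 32K context:
kv = kv_cache_gb(32_000, 40, 8, 128)
print(f"KV {kv:.1f} GB, total {total_vram_gb(21.0, kv):.1f} GB of 48 GB")
```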

Full Model Compatibility

Publisher · Model | Tier (score) | Params | VRAM | Decode | Context | Type

Unsloth · Qwen3.5 35B A3B | C (55) | 35B | 32.5 GB | 53 tok/s | 24K ctx | dense
Lmstudio-community · Qwen3.5 35B A3B | C (55) | 35B | 32.5 GB | 53 tok/s | 24K ctx | dense
Alibaba · Qwen 2.5 Coder 32B | C (54) | 32B | 30.2 GB | 58 tok/s | 25K ctx | dense
Alibaba · Qwen3-Coder 30B A3B Instruct | C (53) | 30.5B | 25.1 GB | 157 tok/s | 31K ctx | moe
Alibaba · Qwen3-VL 30B A3B Instruct | C (53) | 30B | 24.8 GB | 162 tok/s | 31K ctx | moe
Unsloth · Qwen3.5 27B | C (53) | 27B | 26.4 GB | 69 tok/s | 29K ctx | dense
Unsloth · gemma 3 27b it | C (53) | 27B | 26.4 GB | 69 tok/s | 29K ctx | dense
Mistral · Devstral Small 2 24B Instruct | C (52) | 24B | 24.1 GB | 77 tok/s | 32K ctx | dense
Mistral · Devstral Small 1.1 | C (52) | 24B | 24.1 GB | 77 tok/s | 32K ctx | dense
Mistral · Codestral 2 25.08 | C (52) | 22B | 22.6 GB | 84 tok/s | 34K ctx | dense
Unsloth · Qwen3.5 9B | C (48) | 9B | 12.6 GB | 206 tok/s | 61K ctx | dense
HauhauCS · Qwen3.5 9B Uncensored HauhauCS Aggressive | C (48) | 9B | 12.6 GB | 206 tok/s | 61K ctx | dense
Lmstudio-community · Qwen3.5 9B | C (48) | 9B | 12.6 GB | 206 tok/s | 61K ctx | dense
Bartowski · Meta Llama 3.1 8B Instruct | C (47) | 8B | 11.8 GB | 231 tok/s | 65K ctx | dense
Xtuner · llava llama 3 8b v1 1 | C (47) | 8B | 11.8 GB | 231 tok/s | 65K ctx | dense
Unsloth · DeepSeek R1 0528 Qwen3 8B | C (47) | 8B | 11.8 GB | 231 tok/s | 65K ctx | dense
MaziyarPanahi · Meta Llama 3 8B Instruct | C (47) | 8B | 11.8 GB | 231 tok/s | 65K ctx | dense
TheBloke · Llama 2 7B Chat | C (47) | 7B | 11.1 GB | 264 tok/s | 69K ctx | dense
TheBloke · Mistral 7B Instruct v0.2 | C (47) | 7B | 11.1 GB | 264 tok/s | 69K ctx | dense
MaziyarPanahi · Mistral 7B Instruct v0.3 | C (47) | 7B | 11.1 GB | 264 tok/s | 69K ctx | dense
Unsloth · Qwen3.5 4B | C (46) | 4B | 8.9 GB | 463 tok/s | 86K ctx | dense
Bartowski · Llama 3.2 3B Instruct | C (46) | 3B | 8.7 GB | 533 tok/s | 89K ctx | dense
Lmstudio-community · gemma 3 4b it | C (46) | 4B | 8.9 GB | 463 tok/s | 86K ctx | dense
Qwen · Qwen2.5 3B Instruct | C (46) | 3B | 8.3 GB | 617 tok/s | 92K ctx | dense
Bartowski · gemma 2 2b it | C (46) | 2B | 8.1 GB | 723 tok/s | 94K ctx | dense
Google · gemma 2b | C (46) | 2B | 7.7 GB | 925 tok/s | 99K ctx | dense
TheDrummer · Gemmasutra Mini 2B v1 | C (46) | 2B | 7.7 GB | 925 tok/s | 99K ctx | dense
Qwen · Qwen2.5 1.5B Instruct | C (45) | 1.5B | 7.4 GB | 1129 tok/s | 104K ctx | dense
Hugging-quants · Llama 3.2 1B Instruct Q8 0 | C (45) | 1B | 7.3 GB | 1185 tok/s | 105K ctx | dense
TheBloke · TinyLlama 1.1B Chat v1.0 | C (45) | 1.1B | 7.2 GB | 1129 tok/s | 107K ctx | dense
Ggml-org · SmolVLM 500M Instruct | C (45) | 0.5B | 6.9 GB | 1185 tok/s | 111K ctx | dense
Ggml-org · embeddinggemma 300M | C (45) | 0.3B | 6.7 GB | 1185 tok/s | 114K ctx | dense
Alibaba · Qwen3-Coder-Next | C (43) | 80B | 55.3 GB | 63 tok/s | 14K ctx | moe
DeepSeek · DeepSeek R1 671B | F (0) | 671B | 420.8 GB | 8 tok/s | 4K ctx | moe
Mistral · Devstral 2 123B Instruct | F (0) | 123B | 99.9 GB | 15 tok/s | 8K ctx | dense
Z.ai · GLM-5 | F (0) | 744B | 465.8 GB | 7 tok/s | 4K ctx | moe
Moonshot AI · Kimi K2.5 | F (0) | 1000B | 620.7 GB | 6 tok/s | 4K ctx | moe
Mistral · Mistral Large 3 | F (0) | 675B | 423.9 GB | 8 tok/s | 4K ctx | moe (+1)
Mistral · Mistral Small 4 119B | F (0) | 119B | 79.3 GB | 45 tok/s | 10K ctx | moe
Alibaba · Qwen3-Coder 480B A35B Instruct | F (0) | 480B | 304.0 GB | 11 tok/s | 4K ctx | moe
Unsloth · Qwen3.5 122B A10B | F (0) | 122B | 84.5 GB | 18 tok/s | 9K ctx | dense
DeepSeek · DeepSeek V3 671B | F (0) | 671B | 420.8 GB | 8 tok/s | 4K ctx | moe
Mistral · Mixtral 8x22B | F (0) | 141B | 97.8 GB | 25 tok/s | 8K ctx | moe
Alibaba · Qwen 2.5 72B | F (0) | 72B | 60.9 GB | 26 tok/s | 13K ctx | dense
Alibaba · Qwen 3 235B A22B | F (0) | 235B | 152.5 GB | 21 tok/s | 5K ctx | moe
Unsloth · Qwen3.5 397B A17B | F (0) | 397B | 309.9 GB | 5 tok/s | 4K ctx | dense
Meta · Llama 3.3 70B | F (0) | 70B | 59.3 GB | 26 tok/s | 13K ctx | dense
Meta · Llama 4 Maverick 17B 128E | F (0) | 400B | 252.4 GB | 14 tok/s | 4K ctx | moe
Cohere · Command A 111B | F (0) | 111B | 90.8 GB | 17 tok/s | 8K ctx | dense
Alibaba · Qwen 2.5 VL 72B | F (0) | 72B | 60.9 GB | 26 tok/s | 13K ctx | dense
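
The dense/moe column explains the speed spread in the table: a 30B MoE with about 3B active parameters (the "A3B" suffix) decodes several times faster than a 30B-class dense model, because each token only reads the shared layers plus the routed experts, even though the full checkpoint must still fit in VRAM. A hedged sketch of that asymmetry; the ~6 GB active-traffic figure is an assumption for illustration, not a published number:

```python
BANDWIDTH_GB_S = 1344  # RTX PRO 5000 Blackwell

def decode_ceiling(active_traffic_gb: float) -> float:
    """Bandwidth-bound ceiling: per-token traffic is the weights actually
    read, which for MoE is the active subset, not the full checkpoint."""
    return BANDWIDTH_GB_S / active_traffic_gb

print(f"dense 30B-class (~25 GB read/token): ~{decode_ceiling(25.1):.0f} tok/s")
print(f"MoE 30B A3B (~6 GB read/token, assumed): ~{decode_ceiling(6.0):.0f} tok/s")
# The MoE ceiling comes out several times higher, which is consistent
# with the 157-162 tok/s vs. 53-58 tok/s split among the 30B-class rows.
```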

Just out of reach

High-quality models you could run with an upgrade; they need a bit more memory than this card offers.

DeepSeek · DeepSeek R1 671B | 671B | Tier 5 | Needs ~426.6 GB
Mistral · Devstral 2 123B Instruct | 123B | Tier 5 | Needs ~119.2 GB (runs on a Mac Studio M3 Ultra 256GB)
Z.ai · GLM-5 | 744B | Tier 5 | Needs ~472.0 GB
Moonshot AI · Kimi K2.5 | 1000B | Tier 5 | Needs ~625.7 GB
Mistral · Mistral Large 3 | 675B | Tier 5 | Needs ~430.3 GB
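
The "Needs ~x GB" figures are the same footprint estimate read in reverse: subtract this card's 48 GB and you get the memory an upgrade has to add. A trivial sketch using two of the figures above:

```python
def vram_gap_gb(required_gb: float, available_gb: float = 48.0) -> float:
    """How much more memory an upgrade must add; zero means it already fits."""
    return max(0.0, required_gb - available_gb)

for name, need in [("Devstral 2 123B Instruct", 119.2),
                   ("DeepSeek R1 671B", 426.6)]:
    print(f"{name}: short by {vram_gap_gb(need):.1f} GB")
# Devstral 2 123B Instruct: short by 71.2 GB  -> a 128 GB machine covers it
# DeepSeek R1 671B: short by 378.6 GB -> multi-GPU or large unified memory
```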

Upgrade paths

Upgrade from the RTX PRO 5000 Blackwell 48GB: see what you unlock with more powerful hardware.

NVIDIA A16 64GB (next step up) | Tier B
64 GB VRAM (+16)
Unlocks Qwen 2.5 72B, Llama 3.3 70B, Qwen 2.5 VL 72B, +8 more

NVIDIA A800 80GB (NVIDIA upgrade) | Tier A
80 GB VRAM (+32) · 1935 GB/s (+591)
Unlocks Mistral Small 4 119B, Qwen 2.5 72B, Llama 3.3 70B, +10 more · +28% faster avg

Apple MacBook Pro M3 Max 128GB (best value) | Tier B
128 GB Unified (+80)
Unlocks Mistral Small 4 119B, Qwen 2.5 72B, Llama 3.3 70B, +12 more
~$2,499 MSRP

AMD Instinct MI350X 288GB (biggest leap) | Tier A
288 GB VRAM (+240) · 8000 GB/s (+6656)
Unlocks Devstral 2 123B Instruct, Mistral Small 4 119B, Qwen3-Coder 480B A35B Instruct, +26 more · +374% faster avg
~$8,000 MSRP