Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)


Can RTX PRO 5000 Blackwell 48GB run Yi 34B Chat?

Grade: C (Usable)

Runs well

Using Q4_K_M in llama.cpp

Capabilities:

  • Fit status: Runs well
  • Decode: 54.4 tok/s
  • TTFT: 3557 ms
  • Safe context: 24K tokens
  • Memory: 31.8 GB / 48.0 GB

Memory breakdown:

  • Weights: 20.7 GB
  • KV Cache: 5.3 GB
  • Runtime: 0.9 GB
  • Headroom: 4.8 GB
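The weights figure follows directly from parameter count times bits per weight, and the KV cache from the model's attention shape and context length. A minimal sketch of that arithmetic (the ~4.85 bits-per-weight average for Q4_K_M and the Yi-34B attention dimensions are assumptions, not values published on this page):

```python
# Rough VRAM arithmetic for a quantized model.
# Assumptions: Q4_K_M averages ~4.85 bits/weight (llama.cpp mixes
# 4- and 6-bit blocks), fp16 KV cache, grouped-query attention.

def weights_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB (decimal)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

def kv_cache_gb(tokens: int, layers: int, kv_heads: int,
                head_dim: int, bytes_per_elem: int = 2) -> float:
    """K and V tensors: one (kv_heads x head_dim) vector each, per token, per layer."""
    return 2 * tokens * layers * kv_heads * head_dim * bytes_per_elem / 1e9

# Assumed Yi-34B-style shape: 60 layers, 8 KV heads, head dim 128.
w = weights_gb(34, 4.85)                # ~20.6 GB, close to the 20.7 GB above
kv = kv_cache_gb(24_576, 60, 8, 128)    # ~6.0 GB at a 24K context
print(f"weights ~{w:.1f} GB, kv ~{kv:.1f} GB")
```

The fp16 estimate lands slightly above the 5.3 GB shown above, which suggests the calculator uses a somewhat smaller effective context or a compressed cache; the formula still shows where the number comes from.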

Performance by workload

Workload       | Grade | Fit       | Decode     | TTFT    | Context
Agentic Coding | B     | Runs well | 54.4 tok/s | 5173 ms | 41K
Chat           | C     | Runs well | 54.4 tok/s | 1940 ms | 13K
Coding         | C     | Runs well | 54.4 tok/s | 3557 ms | 24K
RAG            | B     | Runs well | 54.4 tok/s | 6467 ms | 41K
Reasoning      | C     | Runs well | 54.4 tok/s | 4203 ms | 24K
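The two latency columns combine into an end-to-end response time: you wait once for the prompt to be processed (TTFT), then tokens stream at the decode rate. A quick sketch using the Chat-row figures (the 400-token reply length is an assumption for illustration):

```python
# End-to-end latency = time-to-first-token + steady decode time.
def response_seconds(ttft_ms: float, decode_tok_s: float, out_tokens: int) -> float:
    """Total wall time for one response: prefill wait plus token generation."""
    return ttft_ms / 1000 + out_tokens / decode_tok_s

# Chat workload from the table: 1940 ms TTFT, 54.4 tok/s decode.
t = response_seconds(1940, 54.4, 400)
print(f"~{t:.1f} s for a 400-token chat reply")  # ~9.3 s
```

Note that TTFT grows with context length (compare the 13K Chat row with the 41K RAG row), while the decode rate stays flat, so long-context workloads pay mostly at the start of each response.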

Quantization options

How Yi 34B Chat (34B params) fits at each quantization level on RTX PRO 5000 Blackwell 48GB (48.0 GB usable).

Quant                    | Bits | VRAM    | Quality   | Fit
Q2_K                     | 2    | 13.3 GB | Low       | D (35)
Q3_K_S                   | 3    | 16.7 GB | Low       | D (37)
NVFP4                    | 4    | 19.0 GB | Medium    | D (38)
Q4_K_M                   | 4    | 20.7 GB | Medium    | D (38)
Q5_K_M                   | 5    | 24.5 GB | High      | C (40)
Q6_K                     | 6    | 27.9 GB | High      | C (42)
Q8_0 (best for your GPU) | 8    | 36.4 GB | Very High | C (44)
F16                      | 16   | 69.7 GB | Maximum   | F (0)
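The "best for your GPU" pick amounts to choosing the largest quantization that still leaves room for the KV cache and runtime overhead. A sketch of that selection rule (the quant/VRAM pairs are copied from the table above; the 6 GB reserve is an assumed overhead, not a figure from this page):

```python
# Pick the highest-quality quant that fits the VRAM budget with headroom.
QUANTS = [("Q2_K", 13.3), ("Q3_K_S", 16.7), ("NVFP4", 19.0), ("Q4_K_M", 20.7),
          ("Q5_K_M", 24.5), ("Q6_K", 27.9), ("Q8_0", 36.4), ("F16", 69.7)]

def best_fit(budget_gb: float, reserve_gb: float = 6.0):
    """Largest quant (by weight size) leaving reserve_gb for KV cache + runtime."""
    fitting = [(name, gb) for name, gb in QUANTS if gb + reserve_gb <= budget_gb]
    return max(fitting, key=lambda q: q[1]) if fitting else None

print(best_fit(48.0))  # ('Q8_0', 36.4) -- matches the table's recommendation
```

On a 24 GB card the same rule would step down to Q3_K_S, since Q4_K_M's 20.7 GB plus the reserve no longer fits.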

Get started

Ollama:
ollama run yi:34b-chat

Hugging Face:
huggingface-cli download 01-ai/Yi-34B-Chat
See all results for RTX PRO 5000 Blackwell 48GB
See all hardware for Yi 34B Chat