Can it run? Runs well, using Q4_K_M in llama.cpp.

| Metric | Value |
|---|---|
| Fit status | Runs well |
| Decode | 54.0 tok/s |
| TTFT | 3586 ms |
| Safe context | 21K |
| Memory | 31.0 GB / 40.0 GB |
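As a rough cross-check of the 31.0 GB figure: total memory is approximately the quantized weights plus the KV cache for the 21K safe context plus runtime overhead. A minimal Python sketch follows; the architecture constants come from Yi-34B's published config, while the fp16 KV cache and the flat ~5 GB overhead are assumptions for illustration, not values taken from this report.

```python
# Rough VRAM estimate for Yi 34B Chat at Q4_K_M with a 21K context.
# Architecture constants from Yi-34B's published config; the fp16 KV
# cache and the flat runtime overhead are assumptions.

N_LAYERS = 60    # num_hidden_layers
N_KV_HEADS = 8   # num_key_value_heads (GQA)
HEAD_DIM = 128   # hidden_size 7168 / 56 attention heads
KV_BYTES = 2     # fp16 K and V entries

def kv_cache_gb(ctx_tokens: int) -> float:
    """Size of the K and V caches for ctx_tokens, in GB."""
    per_token = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * KV_BYTES  # K + V
    return ctx_tokens * per_token / 1e9

weights_gb = 20.7   # Q4_K_M size from the quantization table below
overhead_gb = 5.0   # assumed compute buffers / scratch space
ctx = 21_000        # the reported safe context

total = weights_gb + kv_cache_gb(ctx) + overhead_gb
print(f"KV cache: {kv_cache_gb(ctx):.1f} GB, total: {total:.1f} GB")
# -> KV cache: ~5.2 GB, total: ~30.9 GB, close to the reported 31.0 GB
```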
| Workload | Grade | Fit | Decode | TTFT | Safe context |
|---|---|---|---|---|---|
| Agentic Coding | C | Tight fit | 63.0 tok/s | 4471 ms | 35K |
| Chat | B | Runs well | 63.0 tok/s | 1677 ms | 11K |
| Coding | B | Runs well | 54.0 tok/s | 3586 ms | 21K |
| RAG | C | Tight fit | 63.0 tok/s | 5589 ms | 35K |
| Reasoning | B | Runs well | 63.0 tok/s | 3633 ms | 21K |
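One way to read the TTFT column: time to first token is dominated by prompt prefill, so it scales roughly with context length at a fixed prefill rate. The sketch below assumes a constant prefill throughput inferred from the table itself (roughly 6-8K tok/s), which the numbers only approximately support.

```python
# Back-of-envelope check: TTFT is dominated by prefill, so
# TTFT_ms ~= prompt_tokens / prefill_tok_per_s * 1000.
# The prefill rates here are inferred from the table, not measured.

def ttft_ms(prompt_tokens: int, prefill_tok_per_s: float) -> float:
    return prompt_tokens / prefill_tok_per_s * 1000

# Chat: an 11K-token prompt at an implied ~6.6K tok/s prefill rate
print(f"{ttft_ms(11_000, 6_560):.0f} ms")  # ~1677 ms, matching the table

# RAG: a 35K-token prompt at an implied ~6.3K tok/s prefill rate
print(f"{ttft_ms(35_000, 6_260):.0f} ms")  # ~5591 ms vs the table's 5589 ms
```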
How Yi 34B Chat (34B params) fits at each quantization level on NVIDIA A100 40GB (40.0 GB usable).
| Quant | Bits | VRAM | Quality | Grade | Score |
|---|---|---|---|---|---|
| Q2_K | 2 | 13.3 GB | Low | D | 36 |
| Q3_K_S | 3 | 16.7 GB | Low | D | 38 |
| NVFP4 | 4 | | Medium | D | 39 |
| Q4_K_M | 4 | 20.7 GB | Medium | C | 40 |
| Q5_K_M | 5 | 24.5 GB | High | C | 42 |
| Q6_K (best for your GPU) | 6 | 27.9 GB | High | C | 44 |
| Q8_0 | 8 | 36.4 GB | Very High | C | 44 |
| F16 | 16 | 69.7 GB | Maximum | F | 0 |
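The VRAM column tracks the effective bits per weight: size is roughly params times bits per weight divided by 8. A short sketch below; the effective bpw values are assumptions (llama.cpp k-quants store scales and mins alongside the weights, so "Q4" is closer to ~4.85 bits per weight than to exactly 4), and the parameter count is approximate.

```python
# Estimate the in-VRAM size of the quantized weights:
# size_GB ~= params * effective_bits_per_weight / 8.
# The effective bpw values below are assumptions; k-quants carry
# per-block scales, so nominal "4-bit" is really ~4.85 bpw.

PARAMS = 34.4e9  # Yi 34B Chat parameter count (approximate)

def weight_gb(bits_per_weight: float) -> float:
    return PARAMS * bits_per_weight / 8 / 1e9

for name, bpw in [("Q4_K_M", 4.85), ("Q8_0", 8.5), ("F16", 16.0)]:
    print(f"{name}: ~{weight_gb(bpw):.1f} GB")
# -> Q4_K_M ~20.9 GB, Q8_0 ~36.6 GB, F16 ~68.8 GB,
#    in line with the 20.7 / 36.4 / 69.7 GB rows above
```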