AMD Instinct MI300A 128GB (budget pick): grade C, 86.9 tok/s decode
Can it run? Runs well (using Q4_K_M in Ollama)

- Fit status: Runs well
- Decode: 31.4 tok/s
- TTFT: 6162 ms
- Safe context: 24K
- Memory: 64.4 GB / 96.0 GB
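The safe-context figure mostly reflects KV-cache growth: on top of the quantized weights, every token of context keeps key/value tensors for each layer in VRAM. Below is a minimal sketch of that estimate, assuming Llama 3.3 70B's architecture (80 layers, 8 KV heads, head dimension 128) and an fp16 KV cache; Ollama's runtime buffers and overhead are not modeled, which is why measured usage (64.4 GB here) runs higher.

```python
# Rough KV-cache sizing for Llama 3.3 70B.
# Assumed architecture: 80 layers, 8 KV heads, head_dim 128, fp16 cache.
N_LAYERS = 80
N_KV_HEADS = 8
HEAD_DIM = 128
BYTES_PER_VALUE = 2  # fp16

def kv_cache_gb(context_tokens: int) -> float:
    """Estimate KV-cache size in GB for a given context length."""
    per_token = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES_PER_VALUE  # keys + values
    return context_tokens * per_token / 1e9

WEIGHTS_GB = 42.7  # Q4_K_M weights, from the quantization table below
USABLE_GB = 96.0

for ctx in (13_000, 24_000, 41_000):
    total = WEIGHTS_GB + kv_cache_gb(ctx)
    print(f"{ctx:>6} tokens: ~{total:.1f} GB of {USABLE_GB} GB (weights + KV cache only)")
```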
| Workload | Grade | Fit | Decode | TTFT | Safe context |
|---|---|---|---|---|---|
| Agentic Coding | C | Runs well | 31.4 tok/s | 8963 ms | 41K |
| Chat | C | Runs well | 31.4 tok/s | 3361 ms | 13K |
| Coding | C | Runs well | 31.4 tok/s | 6162 ms | 24K |
| RAG | C | Runs well | 31.4 tok/s | 11204 ms | 41K |
| Reasoning | C | Runs well | 31.4 tok/s | 7283 ms | 24K |
How Llama 3.3 70B (70B params) fits at each quantization level on RTX PRO 6000 Blackwell Server Edition 96GB (96.0 GB usable).
| Quant | Bits | VRAM | Quality | Fit |
|---|---|---|---|---|
| Q2_K | 2 | 27.3 GB | Low | D (36) |
| Q3_K_S | 3 | 34.3 GB | Low | D (37) |
| NVFP4 | 4 | 39.2 GB | Medium | D (38) |
| Q4_K_M | 4 | 42.7 GB | Medium | D (39) |
| Q5_K_M | 5 | 50.4 GB | High | C (41) |
| Q6_K | 6 | 57.4 GB | High | C (42) |
| Q8_0 (best for your GPU) | 8 | 74.9 GB | Very High | C (44) |
| F16 | 16 | 143.5 GB | Maximum | F (0) |
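The VRAM column tracks roughly parameters × effective bits-per-weight ÷ 8. The sketch below works under that assumption; the bits-per-weight values are approximate (K-quants mix precisions per tensor), and the table's figures include extra overhead for embeddings and metadata.

```python
# Weight-size estimate: parameters x effective bits-per-weight / 8.
# The bpw values are assumed approximations for llama.cpp/GGUF quants.
PARAMS = 70e9

EFFECTIVE_BPW = {
    "Q4_K_M": 4.85,
    "Q8_0": 8.5,
    "F16": 16.0,
}

for quant, bpw in EFFECTIVE_BPW.items():
    print(f"{quant:>7}: ~{PARAMS * bpw / 8 / 1e9:.1f} GB")
# Q4_K_M ~42.4 GB, Q8_0 ~74.4 GB, F16 ~140.0 GB; close to the table's figures.
```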
- Run with Ollama: `ollama run llama-3.3-70b`
- Download weights: `huggingface-cli download llama-3.3-70b`
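For scripted use, here is a minimal sketch with the ollama Python client that caps the context window near the ~24K safe context from the summary above; the model tag mirrors the command above and may need adjusting to whatever tag your local Ollama actually reports.

```python
# Minimal sketch using the ollama Python client (pip install ollama).
import ollama

response = ollama.chat(
    model="llama-3.3-70b",  # adjust to the tag `ollama list` shows on your machine
    messages=[{"role": "user", "content": "Summarize the tradeoffs of Q4_K_M vs Q8_0."}],
    options={"num_ctx": 24576},  # stay near the ~24K safe context reported above
)
print(response["message"]["content"])
```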