Can it run? **Runs with offload** (using Q4_K_M in vLLM).

| Metric | Value |
|---|---|
| Fit status | Runs with offload |
| Decode | 70.7 tok/s |
| TTFT | 2739 ms |
| Safe context | 17K |
| Memory | 277.9 GB / 288.0 GB |
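The page does not spell out how the memory and safe-context figures relate, so here is a minimal sketch of the usual arithmetic: the quantized weights occupy a fixed amount of VRAM, the KV cache grows roughly linearly with context length, and the safe context is whatever fits in the remaining budget. The per-token KV-cache cost and the runtime overhead below are illustrative assumptions, not values from the page; only the 244 GB Q4_K_M weight size and the 288 GB usable VRAM come from the tables here.

```python
# Rough fit check for a quantized model on a single GPU.
# Only weights_gb and usable_vram_gb come from the page; the rest are assumptions.

GB = 1e9  # the page's figures appear to use decimal gigabytes

usable_vram_gb = 288.0       # AMD Instinct MI350X usable memory (from the page)
weights_gb = 244.0           # Q4_K_M weight footprint (from the quant table)
runtime_overhead_gb = 10.0   # assumed: runtime context, activations, buffers
kv_bytes_per_token = 1.5e6   # assumed: per-token KV-cache cost for this model

def max_safe_context(usable_gb, weights_gb, overhead_gb, kv_bytes):
    """Tokens of context that fit in the VRAM left after weights and overhead."""
    free_bytes = (usable_gb - weights_gb - overhead_gb) * GB
    return max(0, int(free_bytes // kv_bytes))

if __name__ == "__main__":
    tokens = max_safe_context(usable_vram_gb, weights_gb,
                              runtime_overhead_gb, kv_bytes_per_token)
    print(f"Approximate safe context: {tokens:,} tokens")
```

The workload table below reports the same kind of estimate for several typical prompt sizes.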
| Workload | Grade | Fit | Decode | TTFT | Context |
|---|---|---|---|---|---|
| Agentic Coding | C | Runs with offload | 71.7 tok/s | 3930 ms | 33K |
| Chat | C | Runs with offload | 71.7 tok/s | 1474 ms | 8K |
| Coding | C | Runs with offload | 70.7 tok/s | 2739 ms | 17K |
| RAG | C | Runs with offload | 71.7 tok/s | 4912 ms | 33K |
| Reasoning | C | Runs with offload | 71.7 tok/s | 3193 ms | 17K |
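To turn a row's decode rate and TTFT into an end-to-end latency estimate, the usual back-of-the-envelope formula is total time ≈ TTFT + output tokens / decode rate. A small sketch using the Coding row's figures; the 500-token response length is an assumed example, not a value from the page.

```python
# End-to-end latency estimate from TTFT and decode speed.
# Figures are from the "Coding" workload row; output length is an assumed example.

ttft_ms = 2739          # time to first token (from the table)
decode_tok_s = 70.7     # steady-state decode rate (from the table)
output_tokens = 500     # assumed response length for illustration

total_s = ttft_ms / 1000 + output_tokens / decode_tok_s
print(f"Estimated end-to-end latency: {total_s:.1f} s")  # ~9.8 s
```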
How Llama 4 Maverick 17B 128E (400B params) fits at each quantization level on AMD Instinct MI350X 288GB (288.0 GB usable).
| Quant | Bits | VRAM | Quality | Fit |
|---|---|---|---|---|
| Q2_K | 2 | 156.0 GB | Low | C41 |
| Q3_K_S | 3 | 196.0 GB | Low | C44 |
| NVFP4 (best for your GPU) | 4 | 224.0 GB | Medium | C44 |
| Q4_K_M | 4 | 244.0 GB | Medium | C44 |
| Q5_K_M | 5 | 288.0 GB | High | C44 |
| Q6_K | 6 | 328.0 GB | High | F0 |
| Q8_0 | 8 | 428.0 GB | Very High | F0 |
| F16 | 16 | 820.0 GB | Maximum | F0 |

NVFP4 is flagged as the best match for this GPU; the page's quick-start commands are `ollama run llama-4-maverick-17b-128e` and `huggingface-cli download llama-4-maverick-17b-128e`.
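The VRAM column tracks the parameter count times the effective bits per weight, plus some format overhead. A quick sanity check of that relationship; the bits-per-weight values are typical published figures for these formats and the flat overhead term is an assumption, so the results land near, but not exactly on, the table's numbers.

```python
# Sanity-check the quant table: VRAM ≈ params * bits-per-weight / 8 + overhead.
# Bits-per-weight and overhead are assumptions, not data from the page.

PARAMS = 400e9  # Llama 4 Maverick total parameter count (from the page)

bpw = {          # assumed effective bits per weight
    "Q4_K_M": 4.85,
    "Q8_0": 8.5,
    "F16": 16.0,
}

overhead_gb = 5.0  # assumed fixed overhead for embeddings, metadata, etc.

for quant, bits in bpw.items():
    est_gb = PARAMS * bits / 8 / 1e9 + overhead_gb
    print(f"{quant:7s} ≈ {est_gb:6.1f} GB")
# Prints roughly 247 GB, 430 GB, and 805 GB — within a few percent of the
# table's 244 GB, 428 GB, and 820 GB.
```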