Can it run?
Runs well, using Q5_K_M in Ollama.

- Fit status: Runs well
- Decode: 212.2 tok/s
- TTFT: 913 ms
- Safe context: 44K tokens
- Memory: 5.8 GB / 16.0 GB
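The safe-context figure falls out of simple memory arithmetic: weights plus KV cache have to fit inside usable VRAM. Below is a minimal sketch of that arithmetic, assuming Llama 3.2 3B's published shape (28 layers, 8 grouped-query KV heads, 128-dim heads) and an fp16 cache; those constants and the 1 GB overhead are assumptions, so the tool's own numbers will differ with runtime overhead and KV quantization.

```python
# Rough fit check: model weights + KV cache must stay inside usable VRAM.
# The shape constants are assumptions for Llama 3.2 3B (GQA), not values
# taken from this page.

N_LAYERS = 28     # transformer blocks (assumed)
N_KV_HEADS = 8    # grouped-query-attention KV heads (assumed)
HEAD_DIM = 128    # per-head dimension (assumed)
KV_BYTES = 2      # fp16 cache entries

def kv_cache_gb(n_tokens: int) -> float:
    """KV cache size in GB: one K and one V entry per layer, per token."""
    return n_tokens * 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * KV_BYTES / 1e9

def max_context(weights_gb: float, usable_gb: float, overhead_gb: float = 1.0) -> int:
    """Largest context whose cache still fits beside the weights."""
    budget_gb = usable_gb - weights_gb - overhead_gb
    return max(0, int(budget_gb / kv_cache_gb(1)))

print(f"KV cache at 44K tokens: {kv_cache_gb(44_000):.1f} GB")
print(f"Max context for 2.2 GB weights in 16.0 GB: {max_context(2.2, 16.0):,} tokens")
```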
| Workload | Grade | Fit | Decode | TTFT | Safe context |
|---|---|---|---|---|---|
| Agentic Coding | C | Runs well | 182.7 tok/s | 1541 ms | 87K |
| Chat | C | Runs well | 182.7 tok/s | 578 ms | 22K |
| Coding | C | Runs well | 212.2 tok/s | 913 ms | 44K |
| RAG | C | Runs well | 182.7 tok/s | 1927 ms | 87K |
| Reasoning | C | Runs well | 182.7 tok/s | 1252 ms | 44K |
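Decode rate and TTFT figures like these can be sanity-checked against a local Ollama server by timing a streamed generation. A rough sketch, assuming Ollama's default port and that the model is pulled under the llama3.2:3b tag (both assumptions, not taken from this page):

```python
import json
import time

import requests  # assumes a local Ollama server on its default port

def bench(model: str, prompt: str) -> None:
    """Stream one generation and report TTFT and decode throughput."""
    t0 = time.perf_counter()
    ttft = None
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": True},
        stream=True,
        timeout=300,
    )
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        if ttft is None and chunk.get("response"):
            ttft = time.perf_counter() - t0  # time to first streamed token
        if chunk.get("done"):
            # Ollama reports eval_count tokens and eval_duration in nanoseconds.
            decode = chunk["eval_count"] / (chunk["eval_duration"] / 1e9)
            print(f"TTFT {(ttft or 0) * 1000:.0f} ms, decode {decode:.1f} tok/s")

bench("llama3.2:3b", "Explain KV caching in two sentences.")  # model tag is an assumption
```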
How Llama 3.2 3B Instruct (3B params) fits at each quantization level on RX 7800 XT 16GB (16.0 GB usable).
| Quant | Bits | VRAM | Quality | Fit |
|---|---|---|---|---|
| Q2_K | 2 | 1.2 GB | Low | D31 |
| Q3_K_S | 3 | 1.5 GB | Low | D32 |
| NVFP4 | 4 | | Medium | D32 |
| Q4_K_M | 4 | 1.8 GB | Medium | D32 |
| Q5_K_M | 5 | 2.2 GB | High | D32 |
| Q6_K | 6 | 2.5 GB | High | D33 |
| Q8_0 | 8 | 3.2 GB | Very High | D34 |
| F16 (best for your GPU) | 16 | 6.1 GB | Maximum | D38 |
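The VRAM column scales with parameter count times effective bits per weight. A back-of-the-envelope check follows; the parameter count and the per-quant effective bit widths are assumptions (k-quants keep some tensors at higher precision, so effective bits run above the nominal label), so expect only rough agreement with the table.

```python
# Back-of-the-envelope GGUF size check: params * effective bits / 8.
# All constants below are assumptions, not values from the table above.

PARAMS = 3.2e9  # approximate Llama 3.2 3B parameter count (assumed)

EFFECTIVE_BPW = {  # assumed effective bits per weight for common GGUF quants
    "Q2_K": 2.6, "Q3_K_S": 3.5, "Q4_K_M": 4.85,
    "Q5_K_M": 5.69, "Q6_K": 6.56, "Q8_0": 8.5, "F16": 16.0,
}

for quant, bpw in EFFECTIVE_BPW.items():
    gb = PARAMS * bpw / 8 / 1e9
    print(f"{quant:7s} ~{gb:.1f} GB")
```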
Download the weights:

```
huggingface-cli download bartowski/Llama-3.2-3B-Instruct-GGUF
```

Upgrade options

MacBook Pro M4 Pro 24GB (budget pick): grade C, 99.3 tok/s decode, ~$1,999 MSRP