Llama 3.2 3B Instruct on MacBook Pro M3 Pro 18GB
Can it run?
Fit status: Runs well, using Q5_K_M in Ollama
Decode: 46.0 tok/s
TTFT: 4205 ms
Safe context: 34K tokens
Memory: 6.1 GB / 13.0 GB usable
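The decode and TTFT figures above come from serving the model locally. A minimal sketch of how they can be reproduced against a local Ollama server is below; it assumes Ollama is listening on its default port and that a Q5_K_M build is available under the tag llama3.2:3b (an assumed name, substitute your own), and it reads the timing fields returned by the /api/generate endpoint.

```python
# Minimal sketch: measure decode throughput and prompt-processing time against a
# local Ollama server. Assumes Ollama is on localhost:11434 and that a Q5_K_M
# build is pulled under the tag "llama3.2:3b" (an assumed name; substitute yours).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def benchmark(prompt: str, num_ctx: int = 34_000) -> dict:
    payload = {
        "model": "llama3.2:3b",           # assumed model tag
        "prompt": prompt,
        "stream": False,                  # single JSON response with timing fields
        "options": {"num_ctx": num_ctx},  # cap context at the safe-context value above
    }
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)

    # Ollama reports durations in nanoseconds.
    decode_tps = body["eval_count"] / (body["eval_duration"] / 1e9)
    ttft_ms = body["prompt_eval_duration"] / 1e6  # rough proxy for time to first token
    return {"decode_tok_per_s": round(decode_tps, 1), "ttft_ms": round(ttft_ms)}

if __name__ == "__main__":
    print(benchmark("Write a Python function that merges two sorted lists."))
```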
| Workload | Grade | Fit | Decode | TTFT | Context |
|---|---|---|---|---|---|
| Agentic Coding | C | Runs well | 51.7 tok/s | 5446 ms | 66K |
| Chat | C | Runs well | 51.7 tok/s | 2042 ms | 17K |
| Coding | C | Runs well | 46.0 tok/s | 4205 ms | 34K |
| RAG | C | Runs well | 51.7 tok/s | 6807 ms | 66K |
| Reasoning | C | Runs well | 51.7 tok/s | 4425 ms | 34K |
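The per-workload context limits and the memory figure in the fit card follow from model weights plus KV cache. A back-of-envelope sketch is below; the layer count, KV-head count, and head dimension are assumptions taken from the Llama 3.2 3B model card rather than from this page, and the KV cache is assumed to be FP16 with no allowance for compute buffers or other runtime overhead.

```python
# Back-of-envelope sketch: weights + FP16 KV cache at a given context length.
# Architecture numbers are assumptions from the Llama 3.2 3B config:
# 28 layers, 8 KV heads, head dimension 128.
N_LAYERS, N_KV_HEADS, HEAD_DIM, KV_BYTES = 28, 8, 128, 2

def kv_cache_gb(context_tokens: int) -> float:
    # 2x for keys and values, per layer, per KV head, per head dimension
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * KV_BYTES * context_tokens / 1024**3

def total_gb(weights_gb: float, context_tokens: int) -> float:
    return weights_gb + kv_cache_gb(context_tokens)

# Q5_K_M weights are about 2.2 GB (see the quantization table below).
print(f"Coding, 34K ctx: {total_gb(2.2, 34_000):.1f} GB")  # ~5.8 GB vs 6.1 GB reported
print(f"Chat, 17K ctx:   {total_gb(2.2, 17_000):.1f} GB")  # ~4.0 GB
print(f"RAG, 66K ctx:    {total_gb(2.2, 66_000):.1f} GB")  # ~9.2 GB, still under 13.0 GB
```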
How Llama 3.2 3B Instruct (3B params) fits at each quantization level on MacBook Pro M3 Pro 18GB (13.0 GB usable).
| Quant | Bits | VRAM | Quality | Fit |
|---|---|---|---|---|
| Q2_K | 2 | 1.2 GB | Low | D31 |
| Q3_K_S | 3 | 1.5 GB | Low | D32 |
| NVFP4 | 4 | 1.7 GB | Medium | D32 |
| Q4_K_M | 4 | 1.8 GB | Medium | D32 |
| Q5_K_M | 5 | 2.2 GB | High | D33 |
| Q6_K | 6 | 2.5 GB | High | D34 |
| Q8_0 | 8 | 3.2 GB | Very High | D35 |
| F16 (Best for your GPU) | 16 | 6.1 GB | Maximum | D40 |
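The VRAM column roughly tracks bits per weight: size ≈ parameter count × bits ÷ 8. A quick sketch is below, assuming ~3.21B parameters and typical effective bits-per-weight for llama.cpp K-quants (slightly above the nominal bit width), so the results land near but not exactly on the table values.

```python
# Sketch: approximate GGUF file size from parameter count and bits per weight.
# 3.21e9 parameters is an assumed figure for Llama 3.2 3B; the effective
# bits-per-weight values for K-quants are approximate (they mix bit widths).
PARAMS = 3.21e9

def approx_size_gb(bits_per_weight: float) -> float:
    return PARAMS * bits_per_weight / 8 / 1024**3

for name, bpw in [("Q4_K_M", 4.8), ("Q5_K_M", 5.7), ("Q8_0", 8.5), ("F16", 16.0)]:
    print(f"{name}: ~{approx_size_gb(bpw):.1f} GB")
# Q4_K_M ~1.8, Q5_K_M ~2.1, Q8_0 ~3.2, F16 ~6.0 -- close to the VRAM column above.
```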
Download the GGUF weights: huggingface-cli download bartowski/Llama-3.2-3B-Instruct-GGUF
Upgrade options
MacBook Pro M4 Pro 24GB (Budget pick): grade C, 99.3 tok/s decode, ~$1,999 MSRP