Can TinyLlama 1.1B Chat v1.0 run on a MacBook Pro M3 24GB?
Runs well
Using Q4_K_M in Ollama
| Fit status | Decode | TTFT | Safe context | Memory (used / usable) |
|---|---|---|---|---|
| Runs well | 65.0 tok/s | 2978 ms | 53K | 5.3 GB / 17.3 GB |
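The decode and TTFT figures above come from the runtime's own counters. Below is a minimal sketch of how to reproduce them against a local Ollama server; it assumes Ollama is listening on its default port and that a `tinyllama` tag points at the Q4_K_M build used here (the tag name and prompt are placeholders).

```python
# Minimal sketch: read decode tok/s and an approximate TTFT from Ollama's
# /api/generate response. Assumptions: local server on the default port,
# a "tinyllama" tag mapped to the Q4_K_M build; durations are nanoseconds.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "tinyllama",  # assumption: local tag for TinyLlama 1.1B Chat v1.0 Q4_K_M
        "prompt": "Explain KV caching in two sentences.",
        "stream": False,
    },
    timeout=300,
).json()

decode_tps = resp["eval_count"] / (resp["eval_duration"] / 1e9)
ttft_ms = (resp["load_duration"] + resp["prompt_eval_duration"]) / 1e6  # rough TTFT proxy

print(f"decode ~{decode_tps:.1f} tok/s, TTFT ~{ttft_ms:.0f} ms")
```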
| Workload | Grade | Fit | Decode | TTFT | Context |
|---|---|---|---|---|---|
| Agentic Coding | C | Runs well | 68.0 tok/s | 4141 ms | 105K |
| Chat | C | Runs well | 68.0 tok/s | 1553 ms | 26K |
| Coding | C | Runs well | 65.0 tok/s | 2978 ms | 53K |
| RAG | C | Runs well | 68.0 tok/s | 5176 ms | 105K |
| Reasoning | C | Runs well | 68.0 tok/s | 3365 ms | 53K |
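The Context column is the largest window that stays inside the 17.3 GB budget for each workload, so it is worth pinning Ollama's context size to it rather than relying on the default. A rough sketch follows, assuming a local Ollama server, a `tinyllama` tag, and the "K" figures read as multiples of 1024 tokens.

```python
# Rough sketch: cap Ollama's context window (num_ctx) at the safe context
# from the workload table. Assumptions: local server, "tinyllama" tag,
# and the "K" figures read as multiples of 1024 tokens.
import requests

SAFE_CTX = {
    "chat": 26 * 1024,
    "coding": 53 * 1024,
    "reasoning": 53 * 1024,
    "rag": 105 * 1024,
    "agentic_coding": 105 * 1024,
}

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "tinyllama",
        "prompt": "Review this diff and point out bugs: ...",
        "options": {"num_ctx": SAFE_CTX["coding"]},  # keep the KV cache inside 17.3 GB
        "stream": False,
    },
    timeout=600,
).json()

print(resp["response"])
```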
How TinyLlama 1.1B Chat v1.0 (1.1B params) fits at each quantization level on a MacBook Pro M3 24GB (17.3 GB usable).
| Quant | Bits | VRAM | Quality | Fit |
|---|---|---|---|---|
| Q2_K | 2 | 0.4 GB | Low | D30 |
| Q3_K_S | 3 | 0.5 GB | Low | D30 |
| NVFP4 | 4 | 0.6 GB | Medium | D30 |
| Q4_K_M | 4 | 0.7 GB | Medium | D30 |
| Q5_K_M | 5 | 0.8 GB | High | D30 |
| Q6_K | 6 | 0.9 GB | High | D31 |
| Q8_0 | 8 | 1.2 GB | Very High | D31 |
| F16 (Best for your GPU) | 16 | 2.3 GB | Maximum | D32 |
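The VRAM column is essentially parameter count times effective bits per weight. A back-of-the-envelope sketch is below; the bits-per-weight values are approximate (k-quants and FP4 carry block scales on top of their nominal bit width), and it counts weights only, not KV cache or runtime buffers.

```python
# Back-of-the-envelope sketch of the VRAM column: weights ~= params * bpw / 8.
# The effective bits-per-weight values are approximations, not exact format specs.
PARAMS = 1.1e9  # TinyLlama 1.1B

EFFECTIVE_BPW = {
    "Q2_K": 2.6, "Q3_K_S": 3.4, "NVFP4": 4.3, "Q4_K_M": 4.85,
    "Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5, "F16": 16.0,
}

for quant, bpw in EFFECTIVE_BPW.items():
    gb = PARAMS * bpw / 8 / 1e9
    print(f"{quant:7s} ~{gb:.1f} GB of weights (KV cache and buffers extra)")
```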
Upgrade options
MacBook Pro M1 Pro 32GB (Budget pick): grade C, 130 tok/s decode, ~$1,999 MSRP