Can it run? Runs well, using Q4_K_M in Ollama.

| Fit status | Decode | TTFT | Safe context | Memory |
|---|---|---|---|---|
| Runs well | 130.0 tok/s | 1489 ms | 42K | 4.4 GB / 11.5 GB |
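If you want to sanity-check these numbers on your own machine, one way is to hit the local Ollama API and read back its timing fields. A minimal sketch in Python, assuming Ollama is running on its default port and that the `tinyllama` tag points at the Q4_K_M build you pulled (the exact tag name is an assumption; adjust to your install):

```python
# Minimal sketch: ask a local Ollama server for a TinyLlama completion and
# derive decode speed from the timing fields in the response.
# Assumptions: Ollama on the default port, "tinyllama" tag = your Q4_K_M build.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "tinyllama",  # assumed tag for the Q4_K_M quant
        "prompt": "Explain KV caching in one paragraph.",
        "stream": False,
    },
    timeout=120,
)
data = resp.json()

# eval_count / eval_duration report generated tokens and decode time (ns).
decode_tok_s = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"decode: {decode_tok_s:.1f} tok/s")
print(data["response"][:200])
```

Per-workload estimates for the same Q4_K_M build: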
| Workload | Grade | Fit | Decode | TTFT | Context |
|---|---|---|---|---|---|
| Agentic Coding | C | Runs well | 140.0 tok/s | 2011 ms | 84K |
| Chat | C | Runs well | 140.0 tok/s | 754 ms | 21K |
| Coding | C | Runs well | 130.0 tok/s | 1489 ms | 42K |
| RAG | C | Runs well | 140.0 tok/s | 2514 ms | 84K |
| Reasoning | C | Runs well | 140.0 tok/s | 1634 ms | 42K |
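If you want to check the TTFT and decode columns yourself, one way is to time a streaming request: TTFT is roughly the time until the first streamed chunk arrives, and decode speed comes from the `eval_count`/`eval_duration` fields on the final chunk. A rough sketch under the same assumptions as above (default Ollama port, `tinyllama` tag, and a stand-in prompt rather than each workload's real context):

```python
# Rough sketch: measure TTFT and decode speed by timing a streaming
# /api/generate request against a local Ollama server.
# The model tag and the example prompt are assumptions.
import json
import time

import requests

def measure(prompt: str, model: str = "tinyllama") -> tuple[float, float]:
    """Return (ttft_ms, decode_tok_s) for one streamed generation."""
    start = time.perf_counter()
    ttft_ms = None
    last = None
    with requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": True},
        stream=True,
        timeout=300,
    ) as resp:
        for line in resp.iter_lines():
            if not line:
                continue
            if ttft_ms is None:
                ttft_ms = (time.perf_counter() - start) * 1000
            last = json.loads(line)
    # The final chunk carries eval_count / eval_duration (ns) for decode speed.
    decode_tok_s = last["eval_count"] / (last["eval_duration"] / 1e9)
    return ttft_ms, decode_tok_s

ttft, decode = measure("Summarize the history of the Macintosh in 300 words.")
print(f"TTFT ~{ttft:.0f} ms, decode ~{decode:.1f} tok/s")
```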
How TinyLlama 1.1B Chat v1.0 (1.1B params) fits at each quantization level on a MacBook Pro (M2 Pro, 16 GB; 11.5 GB usable).
| Quant | Bits | VRAM | Quality | Fit |
|---|---|---|---|---|
| Q2_K | 2 | 0.4 GB | Low | D30 |
| Q3_K_S | 3 | 0.5 GB | Low | D30 |
| NVFP4 | 4 | 0.6 GB | Medium | D31 |
| Q4_K_M | 4 | 0.7 GB | Medium | D31 |
| Q5_K_M | 5 | 0.8 GB | High | D31 |
| Q6_K | 6 | 0.9 GB | High | D31 |
| Q8_0 | 8 | 1.2 GB | Very High | D32 |
| F16 (best for your GPU) | 16 | 2.3 GB | Maximum | D34 |
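The VRAM column tracks roughly params × bits-per-weight; on top of the weights, the runtime also needs KV-cache memory that grows with context length. A back-of-the-envelope sketch follows, where the effective bits-per-weight values and the assumed TinyLlama geometry (22 layers, 4 KV heads, head dim 64) are approximations, so expect results near, but not exactly on, the table above.

```python
# Back-of-the-envelope estimate: quantized weight size plus fp16 KV cache for
# TinyLlama 1.1B. Effective bits-per-weight and the layer/head geometry are
# assumptions, so the numbers only approximate the table above.
N_PARAMS = 1.1e9
N_LAYERS, N_KV_HEADS, HEAD_DIM = 22, 4, 64  # assumed TinyLlama config

def weight_gb(bits_per_weight: float) -> float:
    return N_PARAMS * bits_per_weight / 8 / 1e9

def kv_cache_gb(context_tokens: int, bytes_per_value: int = 2) -> float:
    # K and V per layer per token: 2 * n_kv_heads * head_dim values.
    per_token = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * bytes_per_value
    return context_tokens * per_token / 1e9

for name, bpw in [("Q4_K_M", 4.8), ("Q8_0", 8.5), ("F16", 16.0)]:
    print(f"{name}: weights ~{weight_gb(bpw):.1f} GB")

print(f"KV cache at 42K context: ~{kv_cache_gb(42_000):.2f} GB")
```

With these assumptions, Q4_K_M lands around 0.7 GB of weights and roughly 1 GB of KV cache at the 42K safe context, comfortably inside the 11.5 GB usable budget.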