Can it run? Runs well, using Q5_K_M in Ollama.

| Fit status | Decode | TTFT | Safe context | Memory |
|---|---|---|---|---|
| Runs well | 109.6 tok/s | 1767 ms | 76K | 14.5 GB / 69.1 GB |
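Since the verdict above assumes the Q5_K_M build served through Ollama, here is a minimal sketch of pulling and running it. The `3b-instruct-q5_K_M` tag is an assumption about how Ollama publishes this quant; check the model's tag list if it differs:

```bash
# Assumed Ollama library tag for the Q5_K_M quant of Llama 3.2 3B Instruct.
ollama pull llama3.2:3b-instruct-q5_K_M

# Start an interactive session with the quantized model.
ollama run llama3.2:3b-instruct-q5_K_M
```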
| Workload | Grade | Fit | Decode | TTFT | Safe context |
|---|---|---|---|---|---|
| Agentic Coding | C | Runs well | 109.6 tok/s | 2570 ms | 151K |
| Chat | C | Runs well | 109.6 tok/s | 964 ms | 38K |
| Coding | C | Runs well | 109.6 tok/s | 1767 ms | 76K |
| RAG | C | Runs well | 109.6 tok/s | 3213 ms | 151K |
| Reasoning | C | Runs well | 109.6 tok/s | 2088 ms | 76K |
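The safe-context column is the largest window that stays inside the memory budget for each workload. When serving through Ollama's HTTP API, the window can be pinned to that budget with the standard `num_ctx` option; a sketch for the coding workload, assuming 76K means 76 × 1024 = 77824 tokens and the model tag from above:

```bash
# Cap the context window at the 76K "safe context" budget for the coding workload.
# num_ctx is Ollama's generation option; the model tag is the assumed one from above.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2:3b-instruct-q5_K_M",
  "prompt": "Review this function for off-by-one errors...",
  "options": { "num_ctx": 77824 }
}'
```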
How Llama 3.2 3B Instruct (3B params) fits at each quantization level on MacBook Pro M2 Max 96GB (69.1 GB usable).
| Quant | Bits | VRAM | Quality | Fit |
|---|---|---|---|---|
| Q2_K | 2 | 1.2 GB | Low | Fits |
| Q3_K_S | 3 | 1.5 GB | Low | Fits |
| NVFP4 | 4 | 1.7 GB | Medium | Fits |
| Q4_K_M | 4 | 1.8 GB | Medium | Fits |
| Q5_K_M | 5 | 2.2 GB | High | Fits |
| Q6_K | 6 | 2.5 GB | High | Fits |
| Q8_0 | 8 | 3.2 GB | Very High | Fits |
| F16 | 16 | 6.1 GB | Maximum | Fits (best for your GPU) |
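The gap between the 2.2 GB Q5_K_M weight file and the 14.5 GB footprint reported above is mostly KV cache. A back-of-envelope estimate, assuming Llama 3.2 3B's published config (28 layers, 8 KV heads, head dim 128) and an f16 cache:

```bash
# KV cache bytes per token = 2 (K and V) * layers * kv_heads * head_dim * bytes/elem
LAYERS=28; KV_HEADS=8; HEAD_DIM=128; BYTES=2
CTX=77824   # the 76K-token safe context
KV_BYTES=$(( 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES * CTX ))
echo "KV cache at 76K tokens: $(( KV_BYTES / 1024 / 1024 )) MiB"   # ~8512 MiB (~8.3 GiB)
```

Add the 2.2 GB of Q5_K_M weights plus compute buffers and runtime overhead, and the 14.5 GB figure above is in the right ballpark.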
To download the GGUF weights directly from Hugging Face (the include pattern selects the Q5_K_M file recommended above):

huggingface-cli download bartowski/Llama-3.2-3B-Instruct-GGUF --include "*Q5_K_M*"