Can it run? Yes: Llama 3.2 1B runs well on this GPU using Q4_K_M in Ollama.

| Metric | Value |
|---|---|
| Fit status | Runs well |
| Decode | 241.9 tok/s |
| TTFT | 800 ms |
| Safe context | 30K tokens |
| Memory | 3.2 GB used / 6.0 GB usable |
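The headline numbers above were measured in Ollama at Q4_K_M. A minimal way to reproduce that setup, assuming Ollama's public registry naming for the 1B model (the `llama3.2:1b` tag is an assumption, and its default quant is typically Q4_K_M):

```sh
# Pull and run the 1B model. The registry tag is an assumption based on
# Ollama's naming scheme; its default quant is typically Q4_K_M.
ollama run llama3.2:1b
```

The per-workload breakdown at the same quant: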
| Workload | Grade | Fit | Decode | TTFT | Context |
|---|---|---|---|---|---|
| Agentic Coding | C | Runs well | 241.9 tok/s | 1164 ms | 60K |
| Chat | C | Runs well | 241.9 tok/s | 437 ms | 15K |
| Coding | C | Runs well | 241.9 tok/s | 800 ms | 30K |
| RAG | C | Runs well | 241.9 tok/s | 1455 ms | 60K |
| Reasoning | C | Runs well | 241.9 tok/s | 946 ms | 30K |
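The Context column is the largest window you should allocate before KV-cache growth pushes past the 6.0 GB budget. In Ollama the window is controlled by the `num_ctx` parameter; a sketch that bakes the ~30K coding-workload limit into a local variant (the model tag, variant name, and the 30720 value are assumptions):

```sh
# Create a model variant with a ~30K context window (value assumed from
# the Context column above), then run it.
cat > Modelfile <<'EOF'
FROM llama3.2:1b
PARAMETER num_ctx 30720
EOF
ollama create llama3.2-1b-30k -f Modelfile
ollama run llama3.2-1b-30k
```

The same setting also works interactively: `/set parameter num_ctx 30720` inside an `ollama run` session.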
How Llama 3.2 1B (1B params) fits at each quantization level on an RTX 2060 6GB (6.0 GB usable).
| Quant | Bits | VRAM | Quality | Fit |
|---|---|---|---|---|
| Q2_K | 2 | 0.4 GB | Low | Fits |
| Q3_K_S | 3 | 0.5 GB | Low | Fits |
| NVFP4 | 4 | 0.6 GB | Medium | Fits |
| Q4_K_M | 4 | 0.6 GB | Medium | Fits |
| Q5_K_M | 5 | 0.7 GB | High | Fits |
| Q6_K | 6 | 0.8 GB | High | Fits |
| Q8_0 | 8 | 1.1 GB | Very High | Fits |
| F16 (best for your GPU) | 16 | 2.1 GB | Maximum | Fits |
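The VRAM column follows the usual back-of-envelope rule: weight bytes roughly equal parameter count times bits-per-weight divided by 8, with k-quants landing slightly above their nominal bit width. A quick sanity check of the Q4_K_M row, assuming a nominal 1.0B parameters and ~4.85 effective bits per weight for Q4_K_M (both values are assumptions, not taken from this page):

```sh
# Weight size ~ params * bits-per-weight / 8 bytes.
# 1.0e9 params and 4.85 bpw for Q4_K_M are assumptions, not measured values.
awk 'BEGIN { printf "Q4_K_M ~ %.2f GB\n", 1.0e9 * 4.85 / 8 / 1e9 }'
```

This lands at ~0.61 GB, consistent with the 0.6 GB row. If you want a specific quant as a raw GGUF file instead of going through Ollama, `huggingface-cli` can fetch just that file; the repo id and filename pattern below are assumptions, so substitute the GGUF conversion you actually use:

```sh
# Download only the Q8_0 file from a community GGUF repo.
# Repo id and --include pattern are assumptions; adjust to your source.
huggingface-cli download bartowski/Llama-3.2-1B-Instruct-GGUF \
  --include "*Q8_0*" --local-dir ./llama-3.2-1b-gguf
```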