Can it run? Runs well using Q4_K_M in llama.cpp.

| Fit status | Decode | TTFT | Safe context | Memory |
|---|---|---|---|---|
| Runs well | 62.6 tok/s | 3092 ms | 8K | 17.9 GB / 24.0 GB |
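One way to try this configuration is through the llama-cpp-python bindings. The sketch below is an assumption-laden starting point, not a verified recipe: the `cogvlm2-19b-Q4_K_M.gguf` path is a placeholder for a locally converted GGUF, and it reuses the 8K safe context and full GPU offload from the summary above.

```python
# Minimal sketch: assumes llama-cpp-python is installed with CUDA support
# and that a Q4_K_M GGUF of this model is available locally.
from llama_cpp import Llama

llm = Llama(
    model_path="./cogvlm2-19b-Q4_K_M.gguf",  # placeholder filename
    n_ctx=8192,        # the "safe context" figure from the summary
    n_gpu_layers=-1,   # offload every layer to the 24 GB GPU
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what Q4_K_M quantization means."}],
    max_tokens=256,
)
print(reply["choices"][0]["message"]["content"])
```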
| Workload | Grade | Fit | Decode | TTFT | Context |
|---|---|---|---|---|---|
| Agentic Coding | C | Tight fit | 66.1 tok/s | 4260 ms | 8K |
| Chat | B | Runs well | 66.1 tok/s | 1598 ms | 8K |
| Coding | B | Runs well | 62.6 tok/s | 3092 ms | 8K |
| RAG | C | Tight fit | 66.1 tok/s | 5325 ms | 8K |
| Reasoning | B | Runs well | 66.1 tok/s | 3462 ms | 8K |
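The Decode and TTFT columns combine into a rough end-to-end latency estimate: total response time is approximately TTFT plus output length divided by decode speed. A small sketch using the Coding row, where the 400-token answer length is an assumed value:

```python
# Rough latency model: latency ≈ TTFT + output_tokens / decode_rate.
# Real timings vary with prompt length, batching, and sampler settings.

def estimate_latency_s(ttft_ms: float, decode_tok_s: float, output_tokens: int) -> float:
    return ttft_ms / 1000.0 + output_tokens / decode_tok_s

# Coding workload: 3092 ms TTFT, 62.6 tok/s decode, ~400-token answer
print(f"~{estimate_latency_s(3092, 62.6, 400):.1f} s")  # about 9.5 s
```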
How CogVLM2 19B (19B params) fits at each quantization level on an RTX 4090 24GB (24.0 GB usable).
| Quant | Bits | VRAM | Quality | Fit |
|---|---|---|---|---|
| Q2_K | 2 | 7.4 GB | Low | D36 |
| Q3_K_S | 3 | 9.3 GB | Low | D37 |
| NVFP4 | 4 | | Medium | D39 |
| Q4_K_M | 4 | 11.6 GB | Medium | D39 |
| Q5_K_M | 5 | 13.7 GB | High | C41 |
| Q6_K (best for your GPU) | 6 | 15.6 GB | High | C43 |
| Q8_0 | 8 | 20.3 GB | Very High | C44 |
| F16 | 16 | 38.9 GB | Maximum | F0 |
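The VRAM column is roughly the parameter count times an effective bits-per-weight figure. The sketch below uses approximate bits-per-weight values (K-quants mix block types, so these numbers are estimates rather than official figures) to sanity-check the table; the gap between the 11.6 GB of Q4_K_M weights and the 17.9 GB reported in the summary is the KV cache plus runtime overhead.

```python
# Back-of-envelope GGUF weight sizes; the bits-per-weight values are
# approximations, not official figures for these specific files.
PARAMS = 19e9  # CogVLM2 19B

approx_bits_per_weight = {
    "Q4_K_M": 4.85,
    "Q5_K_M": 5.7,
    "Q8_0": 8.5,
    "F16": 16.0,
}

for quant, bpw in approx_bits_per_weight.items():
    gb = PARAMS * bpw / 8 / 1e9
    print(f"{quant}: ~{gb:.1f} GB of weights, plus KV cache and overhead")
```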