Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)


Alibaba

Qwen 2.5 Coder 32B

Available on HuggingFace and Ollama.

  • Downloads: 739.5K
  • Likes: 2.0K
  • Released: Nov 2024
  • Context: 131K tokens
  • License: Apache 2.0
  • Quality tier: 5 (Entry)

Get started

Copy and paste to run locally:
Ollama
ollama run qwen2.5-coder:32b
HuggingFace
huggingface-cli download Qwen/Qwen2.5-Coder-32B-Instruct

Quick specs

  • Parameters: 32B
  • Architecture: dense
  • Context: 131K tokens
  • Modality: text
  • Min RAM: 12.5 GB
  • Rec. RAM: 19.5 GB (Q4_K_M)
  • License: Apache 2.0
  • Family: Qwen
  • Capabilities: ✓ Code
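The Min and Rec. RAM figures are the two thresholds worth checking against your own machine: the first is the smallest quantization's weight size, the second the recommended Q4_K_M size. A minimal sketch of reading them, assuming (my label names, not the site's) that meeting only the minimum means restricting yourself to low-bit quants:

```python
def fit_verdict(available_gb: float,
                min_gb: float = 12.5,    # Min RAM: smallest-quant weights
                rec_gb: float = 19.5) -> str:  # Rec. RAM: Q4_K_M weights
    """Compare available memory against the model's Min/Rec RAM thresholds."""
    if available_gb >= rec_gb:
        return "recommended"   # Q4_K_M or better should load
    if available_gb >= min_gb:
        return "minimum"       # only the lowest-bit quants fit
    return "insufficient"

print(fit_verdict(24.0))  # recommended
print(fit_verdict(16.0))  # minimum
print(fit_verdict(8.0))   # insufficient
```

Note that these thresholds cover weights only; as the memory breakdown further down shows, KV cache and runtime overhead add several more GB on top.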

About this model

Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). Qwen2.5-Coder currently covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements over CodeQwen1.5:

  • Significant improvements in code generation, code reasoning, and code fixing. Based on the strong Qwen2.5, we scale up the training...
  • A more comprehensive foundation for real-world applications such as code agents. Not only enhancing coding capabilities but also maintaining...
  • Long-context support: up to 128K tokens


Quick picks

  • Best budget (C): Apple Mac mini M4 64GB, ~$1,099, ~4 tok/s
  • Best overall (B): NVIDIA A100 40GB, ~$10,000, ~67 tok/s

Best hardware

Top picks for Qwen 2.5 Coder 32B

  • NVIDIA A100 40GB (B), 40 GB
  • NVIDIA RTX PRO 5000 Blackwell 48GB (C), 48 GB
  • NVIDIA RTX 6000 Ada 48GB (C), 48 GB
  • NVIDIA RTX 5090 32GB (C), 32 GB
  • Apple Mac Studio M2 Ultra 64GB (C), 64 GB

Quantization options

VRAM estimates by quant level

The Fit column requires detected hardware; the values below are raw VRAM estimates.

Quant    Bits  VRAM     Quality    Fit
Q2_K      2    12.5 GB  Low         —
Q3_K_S    3    15.7 GB  Low         —
NVFP4     4    17.9 GB  Medium      —
Q4_K_M    4    19.5 GB  Medium      —
Q5_K_M    5    23.0 GB  High        —
Q6_K      6    26.2 GB  High        —
Q8_0      8    34.2 GB  Very High   —
F16      16    65.6 GB  Maximum     —

Quality benchmarks

Qwen 2.5 Coder 32B benchmark scores

Benchmark verified

Coding

SWE-bench Verified: 41.0%
HumanEval+: 87.2%
Aider Polyglot: —
LiveCodeBench: 31.4%

Reasoning

MMLU-Pro: 62.3%
GPQA Diamond: 41.8%
MATH-500: 76.4%
ARC Challenge: —

General

Chatbot Arena: —
IFEval: 79.9%

Source: official · 2024-11-12

Hardware compatibility

Fit estimates across all hardware


Memory breakdown

Reference: NVIDIA A10 24GB

  • Weights: 19.5 GB
  • KV Cache: 5.0 GB
  • Runtime: 0.9 GB
  • Headroom: 2.4 GB