Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)


Alibaba

Qwen3-Coder 480B A35B Instruct

Frontier
HuggingFace
Downloads: 82.4K · Likes: 1.3K · Released: Jul 2025 · Context: 256K tokens · License: Apache 2.0 · Quality: 5 (Entry)

Get started

Copy & paste to run locally (HuggingFace):

    huggingface-cli download qwen-3-coder-480b-a35b

Quick specs

Parameters: 480B (35B active)
Architecture: Mixture of Experts (MoE)
Context: 256K tokens
Modality: text
Min RAM: 187.2 GB
Rec. RAM: 292.8 GB (Q4_K_M)
License: Apache 2.0
Family: Qwen Coder
Capabilities: ✓ Code · ✓ Reasoning
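The "480B (35B active)" figure reflects the MoE design: all 480B parameters must be resident in memory, but only about 35B are routed per token. A minimal sketch of what that split implies, assuming ~4.88 effective bits per weight (the value implied by the Q4_K_M estimate below; not an official figure):

```python
# Rough MoE sizing sketch -- illustrative numbers, not official figures.
TOTAL_PARAMS_B = 480   # all experts must be resident in RAM/VRAM
ACTIVE_PARAMS_B = 35   # experts actually routed per generated token

def weight_gb(params_b: float, bits_per_weight: float) -> float:
    """GB needed to store `params_b` billion parameters at a given precision."""
    return params_b * bits_per_weight / 8

# Memory footprint is driven by TOTAL parameters...
resident = weight_gb(TOTAL_PARAMS_B, 4.88)
# ...while per-token bandwidth and compute are driven by ACTIVE parameters.
streamed = weight_gb(ACTIVE_PARAMS_B, 4.88)

print(f"resident weights: {resident:.1f} GB; read per token: {streamed:.1f} GB")
```

This is why a 480B MoE can decode faster than a dense model of the same size: memory capacity scales with the full parameter count, but per-token memory traffic scales only with the active subset.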

About this model

Today, we're announcing Qwen3-Coder, our most agentic code model to date. Qwen3-Coder is available in multiple sizes, but we're excited to introduce its most powerful variant first: Qwen3-Coder-480B-A35B-Instruct, featuring the following key enhancements:

  • Significant performance: among open models on Agentic Coding, Agentic Browser-Use, and other foundational coding tasks, achieving results...
  • Long-context capabilities: native support for 256K tokens, extendable up to 1M tokens with YaRN, optimized for repository-scale...
  • Agentic coding: support for most platforms, such as Qwen Code and CLINE, featuring a specially designed function-call format
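The long-context bullet can be made concrete. In a `transformers`-style model config, YaRN extension is typically enabled via a `rope_scaling` block whose scaling factor is the ratio of the target window to the native one. The fragment below is a hedged sketch, assuming the native 256K (262,144-token) window stretched to 1M; check the model card for the exact values:

```python
# Illustrative YaRN rope_scaling fragment (values are assumptions,
# not copied from the official config -- verify against the model card).
NATIVE_CONTEXT = 262_144       # 256K tokens, native window
TARGET_CONTEXT = 1_048_576     # 1M tokens, extended window

rope_scaling = {
    "rope_type": "yarn",
    "factor": TARGET_CONTEXT / NATIVE_CONTEXT,   # 4.0x extension
    "original_max_position_embeddings": NATIVE_CONTEXT,
}

print(rope_scaling["factor"])
```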

Quick picks

Best overall: AMD Instinct MI350X 288GB (grade C), ~$8,000, ~50 tok/s

Best hardware

Top picks for Qwen3-Coder 480B A35B Instruct

AMD Instinct MI350X 288GB (grade C), 288 GB memory
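The ~50 tok/s quick-pick figure can be sanity-checked with the usual bandwidth-bound decode model: each generated token must stream the active expert weights from memory, so tokens/sec is bounded above by memory bandwidth divided by bytes read per token. A sketch under assumed numbers (~8 TB/s MI350X bandwidth, ~21 GB of active Q4 weights; real throughput lands far below the ceiling due to KV-cache reads, kernel efficiency, and expert-routing overhead):

```python
# Back-of-envelope decode throughput ceiling (bandwidth-bound model).
# Both constants are assumptions for illustration, not measured values.
BANDWIDTH_GBPS = 8000       # assumed MI350X memory bandwidth, GB/s
ACTIVE_WEIGHTS_GB = 21.4    # ~35B active params at ~4.9 bits/weight

upper_bound = BANDWIDTH_GBPS / ACTIVE_WEIGHTS_GB   # theoretical ceiling, tok/s
print(f"ceiling: {upper_bound:.0f} tok/s")
```

The site's ~50 tok/s estimate sits well under this ceiling, which is the expected direction: the bound ignores every source of overhead.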

Quantization options

VRAM estimates by quant level

No hardware detected — fit column shows raw VRAM estimates

Quant     Bits   VRAM       Quality
Q2_K      2      187.2 GB   Low
Q3_K_S    3      235.2 GB   Low
NVFP4     4      268.8 GB   Medium
Q4_K_M    4      292.8 GB   Medium
Q5_K_M    5      345.6 GB   High
Q6_K      6      393.6 GB   High
Q8_0      8      513.6 GB   Very High
F16       16     984.0 GB   Maximum
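Note that the VRAM column tracks effective bits per weight rather than the nominal bit width: k-quants mix precisions and carry scale metadata, so for example the Q4_K_M row works out to ~4.9 bits/weight. A sketch that reproduces the table from the 480B total, using effective-bpw values back-derived from this page's own estimates (not official llama.cpp figures):

```python
# Reproduce the VRAM-by-quant estimates: GB = total_params_B * effective_bpw / 8.
# Effective bits-per-weight below are back-derived from this page's estimates,
# not official llama.cpp numbers.
TOTAL_PARAMS_B = 480

EFFECTIVE_BPW = {
    "Q2_K": 3.12, "Q3_K_S": 3.92, "NVFP4": 4.48, "Q4_K_M": 4.88,
    "Q5_K_M": 5.76, "Q6_K": 6.56, "Q8_0": 8.56, "F16": 16.40,
}

for quant, bpw in EFFECTIVE_BPW.items():
    gb = TOTAL_PARAMS_B * bpw / 8
    print(f"{quant:8s} {gb:7.1f} GB")
```

Running this reproduces every VRAM figure in the table (e.g. Q2_K at 187.2 GB, F16 at 984.0 GB), which is also why Min RAM in the quick specs equals the Q2_K row and Rec. RAM equals the Q4_K_M row.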

Hardware compatibility

Fit estimates across all hardware


Memory breakdown

Reference: NVIDIA A10 24GB

Weights: 292.8 GB
KV Cache: 5.5 GB
Runtime: 0.9 GB
Headroom: 2.4 GB
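The total requirement is simply the sum of these components, and the KV cache is the only term that grows with context length. A sketch of both, using illustrative attention shapes (the page does not publish this model's layer/head counts, so the 5.5 GB figure presumably reflects a much shorter default context than the full 256K window):

```python
# Total memory = weights + KV cache + runtime overhead + headroom.
weights_gb, kv_gb, runtime_gb, headroom_gb = 292.8, 5.5, 0.9, 2.4
total_gb = weights_gb + kv_gb + runtime_gb + headroom_gb

# KV cache grows linearly with context: 2 tensors (K and V) * layers *
# kv_heads * head_dim * bytes/element * tokens. The shape values below are
# illustrative assumptions, not this model's published architecture.
def kv_cache_gb(tokens, layers=62, kv_heads=8, head_dim=128, bytes_per=2):
    return 2 * layers * kv_heads * head_dim * bytes_per * tokens / 1e9

print(f"total: {total_gb:.1f} GB")
print(f"KV cache at full 256K window: {kv_cache_gb(262_144):.1f} GB")
```

Under these assumed shapes the KV term at the full 256K window would dwarf the 5.5 GB shown here, so context length, not just quant level, should drive the final sizing decision.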