Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.
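To make the "mathematical models" behind these estimates concrete, here is a minimal sketch of how a weights-only memory estimate can be computed. This is an illustrative assumption, not the site's exact formula: it assumes quantized weights dominate memory (KV cache and runtime overhead excluded) and uses ~4.5 effective bits per parameter, typical of Q4_K_M-style quantization. The helper name is hypothetical.

```python
def approx_model_size_gb(params_billion: float, bits_per_param: float = 4.5) -> float:
    """Rough size of quantized model weights in GB (weights only).

    Assumes ~4.5 effective bits/param, a common rate for Q4_K_M-style
    quantization. KV cache and runtime overhead are NOT included, so
    actual memory needed to run the model will be higher.
    """
    return params_billion * bits_per_param / 8

# A 30.5B model at ~4.5 bits/param comes out near the 17.1 GB listed
# below for Qwen3-Coder 30B; a 72B model lands near 40.3 GB.
print(approx_model_size_gb(30.5))  # ~17.2
print(approx_model_size_gb(72.0))  # ~40.5
```

Under this assumption the listed sizes on this page are consistent with 4-bit-class quantization; full-precision (FP16) weights would be roughly 3.5x larger.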

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)

Browse AI Models

24 models available

Alibaba · Qwen3-Coder 30B A3B Instruct
30.5B (3.3B active) · 256K ctx · 17.1 GB · frontier · MoE · Legacy

Qwen3-Coder is available in multiple sizes. Today, we're excited to introduce Qwen3-Coder-30B-A3B-Instruct, a streamlined model that maintains impressive performance and efficiency.
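The "(3.3B active)" figure on MoE cards like this one matters because memory and speed scale differently: all experts' weights must fit in memory (total parameters), but each token only runs through the active subset. The numbers below are from the card above; the ~4.5 bits/param quantization rate and the standard ~2 FLOPs per parameter per token estimate are assumptions for illustration.

```python
# Why MoE "active" parameters matter: memory scales with TOTAL params,
# per-token compute scales with ACTIVE params.
total_params = 30.5e9   # all experts must be resident in memory
active_params = 3.3e9   # only these are used per token

memory_gb = total_params * 4.5 / 8 / 1e9   # assumed ~4.5 bits/param quant
flops_per_token = 2 * active_params        # standard ~2 FLOPs/param/token

# Compared to a dense model of the same total size:
speedup = (2 * total_params) / flops_per_token
print(f"~{memory_gb:.1f} GB of weights, ~{speedup:.1f}x less compute per token")
```

In short: this model needs the memory of a ~30B model but generates tokens with roughly the compute of a 3B model, which is why MoE models are attractive for local inference when memory is available.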

Alibaba · Qwen3-Coder 480B A35B Instruct
480B (35B active) · 256K ctx · 268.8 GB · frontier · MoE · Legacy

Today, we're announcing Qwen3-Coder, our most agentic code model to date. Qwen3-Coder is available in multiple sizes, but we're excited to introduce its most powerful variant first: Qwen3-Coder-480B-A35B-Instruct.

Alibaba · Qwen3-Coder-Next
80B (3B active) · 256K ctx · 44.8 GB · frontier · MoE · Legacy

Today, we're announcing Qwen3-Coder-Next, an open-weight language model designed specifically for coding agents and local development.

Alibaba · Qwen 2.5 72B
72B · 131K ctx · 40.3 GB · current · dense · Legacy

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base and instruction-tuned language models ranging from 0.5 to 72 billion parameters, bringing a range of improvements over Qwen2.

Alibaba · Qwen 3 235B A22B
235B (22B active) · 131K ctx · 131.6 GB · frontier · MoE · Legacy

We introduce Qwen3-235B-A22B-Instruct-2507, an updated version of the Qwen3-235B-A22B non-thinking mode with several key enhancements.

Alibaba · Qwen3-VL 30B A3B Instruct
30B (3B active) · 256K ctx · 16.8 GB · frontier · MoE · Legacy

Meet Qwen3-VL — the most powerful vision-language model in the Qwen series to date.

Alibaba · Qwen 2.5 Coder 32B
32B · 131K ctx · 17.9 GB · current · dense · Legacy

Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). It covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers, and brings a range of improvements over CodeQwen1.5.

Alibaba · Qwen 2.5 VL 72B
72B · 33K ctx · 40.3 GB · frontier · dense · Legacy

License: Qwen (custom; see https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct/blob/main/LICENSE) · Language: English · Pipeline: image-text-to-text · Tags: multimodal · Library: transformers

Alibaba · Qwen 2.5 Math 72B
72B · 4K ctx · 40.3 GB · frontier · dense · Legacy

🚨 Warning: Qwen2.5-Math mainly supports solving English and Chinese math problems through CoT (chain-of-thought) and TIR (tool-integrated reasoning). We do not recommend using this series of models for other tasks.

Alibaba · Qwen 3 30B A3B
30.5B (3.3B active) · 131K ctx · 17.1 GB · frontier · MoE · Legacy

We introduce Qwen3-30B-A3B-Instruct-2507, an updated version of the Qwen3-30B-A3B non-thinking mode with several key enhancements.

Alibaba · Qwen 3 32B
32B · 131K ctx · 17.9 GB · frontier · dense · Legacy

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built on extensive training, Qwen3 delivers major advances in reasoning, instruction-following, agent capabilities, and multilingual support.

Alibaba · Qwen 2.5 32B
32B · 131K ctx · 17.9 GB · current · dense · Legacy

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base and instruction-tuned language models ranging from 0.5 to 72 billion parameters, bringing a range of improvements over Qwen2.

Alibaba · Qwen 2.5 Coder 14B
14B · 131K ctx · 7.8 GB · current · dense · Legacy

Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). It covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers, and brings a range of improvements over CodeQwen1.5.

Alibaba · Qwen 3 14B
14B · 131K ctx · 7.8 GB · frontier · dense · Legacy

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built on extensive training, Qwen3 delivers major advances in reasoning, instruction-following, agent capabilities, and multilingual support.

Alibaba · Qwen 2.5 VL 7B
7B · 33K ctx · 3.9 GB · current · dense · Legacy

License: Apache-2.0 · Language: English · Pipeline: image-text-to-text · Tags: multimodal · Library: transformers

Alibaba · Qwen 2.5 14B
14B · 131K ctx · 7.8 GB · current · dense · Legacy

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base and instruction-tuned language models ranging from 0.5 to 72 billion parameters, bringing a range of improvements over Qwen2.

Alibaba · Qwen 2.5 Math 7B
7B · 4K ctx · 3.9 GB · current · dense · Legacy

🚨 Warning: Qwen2.5-Math mainly supports solving English and Chinese math problems through CoT (chain-of-thought) and TIR (tool-integrated reasoning). We do not recommend using this series of models for other tasks.

Alibaba · Qwen 3 8B
8B · 131K ctx · 4.5 GB · frontier · dense · Legacy

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built on extensive training, Qwen3 delivers major advances in reasoning, instruction-following, agent capabilities, and multilingual support.

Alibaba · Qwen 2.5 Coder 7B
7B · 131K ctx · 3.9 GB · current · dense · Legacy

Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). It covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers, and brings a range of improvements over CodeQwen1.5.

Alibaba · Qwen 2.5 7B
7B · 131K ctx · 3.9 GB · current · dense · Legacy

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base and instruction-tuned language models ranging from 0.5 to 72 billion parameters, bringing a range of improvements over Qwen2.

Alibaba · Qwen 2.5 Coder 1.5B
1.5B · 33K ctx · 0.8 GB · active · dense · Legacy

Qwen 2.5 Coder 1.5B is Alibaba's compact code-specific language model from the Qwen2.5 Coder series. Trained on 5.5T tokens including source code, text-code grounding, and synthetic data. Features improvements in code generation, reasoning, and fixing while maintaining general and math capabilities.

Alibaba · Qwen 3 4B
4B · 33K ctx · 2.2 GB · current · dense · Legacy

We introduce Qwen3-4B-Instruct-2507, an updated version of the Qwen3-4B non-thinking mode with several key enhancements.

Alibaba · Qwen 3 1.7B
1.7B · 33K ctx · 1 GB · frontier · dense · Legacy

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built on extensive training, Qwen3 delivers major advances in reasoning, instruction-following, agent capabilities, and multilingual support.

Alibaba · Qwen 3 0.6B
0.6B · 33K ctx · 0.3 GB · frontier · dense · Legacy

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built on extensive training, Qwen3 delivers major advances in reasoning, instruction-following, agent capabilities, and multilingual support.