Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.
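As one illustration of how such an estimate can be derived: the per-model download sizes listed below closely match total parameters × ~0.56 bytes, consistent with roughly 4-bit quantized weights plus metadata. The sketch below is a hedged reconstruction from those listed numbers, not the site's published formula; the 0.56 bytes-per-parameter figure is an assumption.

```python
def estimated_size_gb(total_params_b: float, bytes_per_param: float = 0.56) -> float:
    """Rough on-disk size of quantized model weights, in GB (1 GB = 1e9 bytes).

    bytes_per_param=0.56 (~4.5 bits per weight, roughly 4-bit quantization
    plus metadata) is an assumption inferred from the listed sizes, not a
    published formula. For MoE models, size scales with TOTAL parameters,
    not the active subset. Actual quantized files vary by format.
    """
    return round(total_params_b * bytes_per_param, 1)

# Cross-checks against entries listed below:
devstral_gb = estimated_size_gb(123)   # Devstral 2 123B, listed as 68.9 GB
kimi_gb = estimated_size_gb(1000)      # Kimi K2.5 1000B total, listed as 560 GB
```

This only covers weight storage; running a model also needs memory for the KV cache and activations, which the calculator's full estimate would have to account for separately.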

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)

Browse AI Models

50 models available

Mistral · Devstral 2 123B Instruct
123B · 256K ctx · 68.9 GB · frontier
dense · Legacy

Devstral is an agentic LLM for software engineering tasks. Devstral 2 excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-bench.

Z.ai · GLM-5
744B (40B active) · 200K ctx · 416.6 GB · frontier
moe · Legacy

📍 Use GLM-5 API services on Z.ai API Platform.

Moonshot AI · Kimi K2.5
1000B (32B active) · 256K ctx · 560 GB · frontier
moe · Legacy

Kimi K2.5 is Moonshot AI's advanced reasoning model with strong performance in math, coding, and multilingual tasks. Features long-context understanding and agentic capabilities for complex multi-step problem solving.
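Several entries in this list show both total and active parameters, e.g. 1000B (32B active) above. In a mixture-of-experts (MoE) model, all experts must be resident in memory, so the memory footprint tracks total parameters, while each token is routed through only the active subset, so per-token compute tracks active parameters. A hedged sketch of that distinction, reusing the ~0.56 bytes-per-parameter figure inferred from the listed sizes (an assumption, not the site's formula):

```python
def moe_footprint_gb(total_params_b: float, active_params_b: float,
                     bytes_per_param: float = 0.56) -> tuple[float, float]:
    """Return (resident_memory_gb, active_weights_gb_per_token).

    bytes_per_param=0.56 (~4-bit quantized weights) is an assumption
    inferred from this catalog's listed sizes; illustrative only.
    """
    resident = total_params_b * bytes_per_param  # all experts must be loaded
    active = active_params_b * bytes_per_param   # weights touched per token
    return resident, active

# Kimi K2.5: 1000B total, 32B active -> large memory need, modest per-token work
mem_gb, active_gb = moe_footprint_gb(1000, 32)
```

This is why an MoE model with a huge total parameter count can still generate tokens quickly once it fits in memory: fitting it is the hard part.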

Mistral · Mistral Large 3
675B (41B active) · 256K ctx · 378 GB · frontier
moe · Legacy

Mistral-Large-Instruct-2411 is an advanced dense Large Language Model (LLM) of 123B parameters with state-of-the-art reasoning, knowledge, and coding capabilities, extending Mistral-Large-Instruct-2407 with improved long-context handling, function calling, and system prompt support.

Mistral · Mistral Small 4 119B
119B (6.5B active) · 256K ctx · 66.6 GB · frontier
moe · Legacy

Mistral Small 4 is a powerful hybrid model capable of acting as both a general instruction model and a reasoning model. It unifies the capabilities of three different model families—Instruct, Reasoning (previously called Magistral), and Devstral—into a single, unified model.

Alibaba · Qwen3-Coder 30B A3B Instruct
30.5B (3.3B active) · 256K ctx · 17.1 GB · frontier
moe · Legacy

Qwen3-Coder is available in multiple sizes. Today, we're excited to introduce Qwen3-Coder-30B-A3B-Instruct, a streamlined model that maintains impressive performance and efficiency while featuring several key enhancements.

Alibaba · Qwen3-Coder 480B A35B Instruct
480B (35B active) · 256K ctx · 268.8 GB · frontier
moe · Legacy

Today, we're announcing Qwen3-Coder, our most agentic code model to date. Qwen3-Coder is available in multiple sizes, but we're excited to introduce its most powerful variant first: Qwen3-Coder-480B-A35B-Instruct, featuring several key enhancements.

Alibaba · Qwen3-Coder-Next
80B (3B active) · 256K ctx · 44.8 GB · frontier
moe · Legacy

Today, we're announcing Qwen3-Coder-Next, an open-weight language model designed specifically for coding agents and local development. It features several key enhancements.

Mistral · Devstral Small 2 24B Instruct
24B · 256K ctx · 13.4 GB · frontier
dense · Legacy

Devstral is an agentic LLM for software engineering tasks. Devstral Small 2 excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-bench.

Alibaba · Qwen 2.5 Coder 32B
32B · 131K ctx · 17.9 GB · current
dense · Legacy

Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). Qwen2.5-Coder now covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers, and brings several improvements over CodeQwen1.5.

Mistral · Codestral 2 25.08
22B · 256K ctx · 12.3 GB · frontier
dense · Legacy

Codestral 2 is Mistral AI's latest code-focused model with enhanced performance on code generation, refactoring, and documentation across dozens of programming languages.

Mistral · Devstral Small 1.1
24B · 131K ctx · 13.4 GB · current
dense · Legacy

Devstral is an agentic LLM for software engineering tasks built under a collaboration between Mistral AI and All Hands AI 🙌. Devstral excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-bench, which positions it as the #1 open-source model on this benchmark.

BigCode · StarCoder 15B
15B · 8K ctx · 8.4 GB · legacy
dense · Legacy

StarCoder 15B is BigCode's flagship code generation model trained on 1 trillion tokens from The Stack. Supports 80+ programming languages with 8K context and strong code completion capabilities.

MaziyarPanahi · Yi Coder 1.5B Chat
1.5B · 0K ctx · 0.8 GB
dense · Legacy

 

MaziyarPanahi · Yi Coder 9B Chat
9B · 0K ctx · 5 GB
dense · Legacy

 

InternLM · InternLM 20B
20B · 8K ctx · 11.2 GB · legacy
dense · Legacy

InternLM2.5 has open-sourced a 20 billion parameter base model and a chat model tailored for practical scenarios. The model has several notable characteristics.

Alibaba · Qwen 2.5 32B
32B · 131K ctx · 17.9 GB · current
dense · Legacy

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings several improvements over Qwen2.

BigCode · StarCoder 7B
7B · 8K ctx · 3.9 GB · legacy
dense · Legacy

StarCoder 7B is BigCode's code generation model trained on The Stack v1. Supports over 80 programming languages with fill-in-the-middle capability and 8K context window.

Mistral · Mistral Small 24B
24B · 33K ctx · 13.4 GB · legacy
dense · Legacy

Mistral Small 3 (2501) sets a new benchmark in the "small" Large Language Models category below 70B, boasting 24B parameters and achieving state-of-the-art capabilities comparable to larger models! This model is an instruction-fine-tuned version of the base model Mistral-Small-24B-Base-2501.

Bartowski · Codestral 22B v0.1
22B · 0K ctx · 12.3 GB
dense · Legacy

 

Google · Gemma 3 27B
27B · 131K ctx · 15.1 GB · current
dense · Legacy

Gemma 3 27B is Google's flagship Gemma 3 model with 128K context and vision support. Delivers top-tier open model performance in reasoning, code, math, and multimodal understanding.

Lmstudio-community · Codestral 22B v0.1
22B · 0K ctx · 12.3 GB
dense · Legacy

 

Ibm-granite · Granite 8B Code Instruct 4K
8B · 0K ctx · 4.5 GB
dense · Legacy

 

Second-state · StarCoder2 15B
15B · 0K ctx · 8.4 GB
dense · Legacy

 

Page 1 of 3