Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)
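The on-disk figures in the listings below follow a simple rule of thumb: model size ≈ parameter count × bits per weight. A minimal sketch of that estimate, assuming roughly 4.5 bits per weight (a common 4-bit quantization plus overhead); the function name and constant are illustrative assumptions, not the site's actual calculator:

```python
def estimate_model_size_gb(params: float, bits_per_weight: float = 4.5) -> float:
    """Rough download/memory footprint of a quantized model, in GB.

    params: total parameter count (for MoE models use total, not active,
    since every expert must be stored).
    """
    return params * bits_per_weight / 8 / 1e9

# A 24B model at ~4.5 bits/weight lands near the 13.4 GB listed below.
size_24b = estimate_model_size_gb(24e9)
```

Weights are only part of the story: the KV cache for long contexts and runtime overhead add more on top, which is one reason these figures are estimates rather than guarantees.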

Browse AI Models

17 models available

Mistral · Devstral 2 123B Instruct
123B · 256K ctx · 68.9 GB · frontier
dense · Legacy

Devstral is an agentic LLM for software engineering tasks. Devstral 2 excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-bench.

Mistral · Mistral Large 3
675B (41B active) · 256K ctx · 378 GB · frontier
moe · Legacy

Mistral-Large-Instruct-2411 is an advanced dense Large Language Model (LLM) with 123B parameters and state-of-the-art reasoning, knowledge, and coding capabilities. It extends Mistral-Large-Instruct-2407 with improved long context, function calling, and system prompting.
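Entries like "675B (41B active)" describe mixture-of-experts (MoE) models: every expert must be resident in memory, but each token is routed through only a few of them. A hedged sketch of what each number governs (the 4.5 bits/weight constant and function name are illustrative assumptions, not the site's calculator):

```python
def moe_summary(total_params: float, active_params: float,
                bits_per_weight: float = 4.5) -> dict:
    """Split an MoE spec into what drives memory vs. what drives speed."""
    return {
        # All experts are stored, so footprint tracks TOTAL parameters.
        "footprint_gb": total_params * bits_per_weight / 8 / 1e9,
        # Per-token compute tracks only the ACTIVE (routed) parameters.
        "active_fraction": active_params / total_params,
    }

# Mistral Large 3: 675B total, 41B active -> large footprint,
# yet only ~6% of the weights participate in any one token.
summary = moe_summary(675e9, 41e9)
```

This is why an MoE model can need frontier-class memory while generating tokens at the speed of a much smaller dense model.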

Mistral · Mistral Small 4 119B
119B (6.5B active) · 256K ctx · 66.6 GB · frontier
moe · Legacy

Mistral Small 4 is a powerful hybrid model capable of acting as both a general instruction model and a reasoning model. It unifies three model families, Instruct, Reasoning (previously called Magistral), and Devstral, into a single model.

Mistral · Mixtral 8x22B
141B (39B active) · 66K ctx · 79 GB · current
moe · Legacy

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

Mistral · Devstral Small 2 24B Instruct
24B · 256K ctx · 13.4 GB · frontier
dense · Legacy

Devstral is an agentic LLM for software engineering tasks. Devstral Small 2 excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-bench.

Mistral · Codestral 2 25.08
22B · 256K ctx · 12.3 GB · frontier
dense · Legacy

Codestral 2 is Mistral AI's latest code-focused model with enhanced performance on code generation, refactoring, and documentation across dozens of programming languages.

Mistral · Devstral Small 1.1
24B · 131K ctx · 13.4 GB · current
dense · Legacy

Devstral is an agentic LLM for software engineering tasks, built through a collaboration between Mistral AI and All Hands AI 🙌. Devstral excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-bench, which positions it as the #1 open-source model on this benchmark.

Mistral · Magistral Small 2507
24B · 131K ctx · 13.4 GB · legacy
dense · Legacy

Magistral Small 2507 builds on Mistral Small 3.1 (2503) with added reasoning capabilities: SFT on Magistral Medium traces followed by RL on top. The result is a small, efficient reasoning model with 24B parameters.

Mistral · Mixtral 8x7B
47B (13B active) · 33K ctx · 26.3 GB · current
moe · Legacy

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

Mistral · Mistral Small 3.2 24B
24B · 131K ctx · 13.4 GB · current
vision · Legacy

Mistral-Small-3.2-24B-Instruct-2506 is a minor update of Mistral-Small-3.1-24B-Instruct-2503.

Mistral · Ministral 3 14B
14B · 262K ctx · 7.8 GB · frontier
multimodal · Legacy

The largest model in the Ministral 3 family, Ministral 3 14B offers frontier capabilities and performance comparable to its larger counterpart, Mistral Small 3.2 24B. It is a powerful, efficient language model with vision capabilities.

Mistral · Mistral Small 24B
24B · 33K ctx · 13.4 GB · legacy
dense · Legacy

Mistral Small 3 (2501) sets a new benchmark in the "small" Large Language Model category below 70B: with 24B parameters it achieves state-of-the-art capabilities comparable to larger models. It is an instruction-fine-tuned version of the base model Mistral-Small-24B-Base-2501.

Mistral · Ministral 3 8B
8B · 262K ctx · 4.5 GB · frontier
multimodal · Legacy

A balanced model in the Ministral 3 family, Ministral 3 8B is a powerful, efficient tiny language model with vision capabilities.

Mistral · Mistral Nemo 12B
12B · 128K ctx · 6.7 GB · current
dense · Legacy

The Mistral-Nemo-Instruct-2407 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-Nemo-Base-2407. Trained jointly by Mistral AI and NVIDIA, it significantly outperforms existing models smaller or similar in size.

Mistral · Mistral 7B Instruct v0.3
7B · 8K ctx · 3.9 GB · legacy
dense · Legacy

The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.3.

Mistral · Ministral 8B
8B · 131K ctx · 4.5 GB · current
dense · Legacy

We introduce two new state-of-the-art models for local intelligence, on-device computing, and edge use cases. We call them les Ministraux: Ministral 3B and Ministral 8B.

Mistral · Ministral 3 3B
3B · 262K ctx · 1.7 GB · frontier
multimodal · Legacy

The smallest model in the Ministral 3 family, Ministral 3 3B is a powerful, efficient tiny language model with vision capabilities.