Will It Run AI
Product
  • Calculator
  • Compare
  • Tier List
Browse
  • Models
  • Hardware
  • Docs
About
  • Why It Works
  • What's New
  • Legal Notice
  • Privacy Policy

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)

Browse AI Models

283 models available

MaziyarPanahi · Meta Llama 3.1 8B Instruct
8B · 0K ctx · 4.5 GB
dense · Legacy

 

Cohere · Command R+ 104B
104B · 131K ctx · 58.2 GB · current
dense · Legacy

Command R+ is Cohere's most capable open-weight model for enterprise RAG workloads. It offers superior long-context reasoning, multi-step tool use, and grounded generation with citations across 10 languages.

DeepSeek · DeepSeek R1 Distill 32B
32B · 33K ctx · 17.9 GB · frontier
dense · Legacy

We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable reasoning performance. Through RL, DeepSeek-R1-Zero naturally developed numerous powerful and interesting reasoning behaviors. However, it encounters challenges such as endless repetition, poor readability, and language mixing.

Meta · Llama 4 Scout 17B 16E
109B (17B active) · 10.5M ctx · 61 GB · frontier
moe · Legacy

Llama 4 Scout is Meta's efficient Mixture-of-Experts model with 17B active parameters across 16 experts. It supports a 10M-token context window and natively handles text, image, and video inputs.

Alibaba · Qwen 3 30B A3B
30.5B (3.3B active) · 131K ctx · 17.1 GB · frontier
moe · Legacy

We introduce the updated version of the Qwen3-30B-A3B non-thinking mode, named Qwen3-30B-A3B-Instruct-2507, featuring several key enhancements.

Lmg-anon · vntl llama3 8b v2
8B · 0K ctx · 4.5 GB
dense · Legacy

 

Unsloth · Mistral Small 3.2 24B Instruct 2506
24B · 0K ctx · 13.4 GB
dense · Legacy

 

Lmstudio-community · DeepSeek R1 0528 Qwen3 8B
8B · 0K ctx · 4.5 GB
dense · Legacy

 

MaziyarPanahi · gemma 3 4b it
4B · 0K ctx · 2.2 GB
dense · Legacy

 

MaziyarPanahi · Llama 3.3 70B Instruct
70B · 0K ctx · 39.2 GB
dense · Legacy

 

DeepSeek · DeepSeek V2.5 236B
236B (21B active) · 131K ctx · 132.2 GB · current
moe · Legacy

DeepSeek-V2.5 is an upgraded version that combines DeepSeek-V2-Chat and DeepSeek-Coder-V2-Instruct. The new model integrates the general and coding abilities of the two previous versions. For model details, please visit the DeepSeek-V2 page.

Microsoft · Phi-4-reasoning-plus 14B
14.7B · 33K ctx · 8.2 GB · frontier
dense · Legacy

Important: to fully take advantage of the model's capabilities, inference must use `temperature=0.8`, `top_k=50`, `top_p=0.95`, and `do_sample=True`. For more complex queries, set `max_new_tokens=32768` to allow for a longer chain-of-thought (CoT).
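The recommended settings above can be collected into a generation config. A minimal sketch, assuming inference via Hugging Face `transformers` (the model name and `generate()` call in the comments are illustrative, not executed here):

```python
# Recommended sampling settings for Phi-4-reasoning-plus, per the model card.
generation_kwargs = {
    "temperature": 0.8,       # recommended sampling temperature
    "top_k": 50,              # sample only from the 50 most likely tokens
    "top_p": 0.95,            # nucleus sampling cutoff
    "do_sample": True,        # enable sampling rather than greedy decoding
    "max_new_tokens": 32768,  # headroom for long chain-of-thought outputs
}

# Typical usage with transformers (assumed, not run here):
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-4-reasoning-plus")
#   outputs = model.generate(**inputs, **generation_kwargs)
```

Keeping the parameters in one dict makes it easy to pass them unchanged to `model.generate()` or to a `GenerationConfig`.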

Alibaba · Qwen 3 32B
32B · 131K ctx · 17.9 GB · frontier
dense · Legacy

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support.

MaziyarPanahi · Yi Coder 1.5B Chat
1.5B · 0K ctx · 0.8 GB
dense · Legacy

 

MaziyarPanahi · gemma 3 12b it
12B · 0K ctx · 6.7 GB
dense · Legacy

 

MaziyarPanahi · Llama 3.2 1B Instruct
1B · 0K ctx · 0.6 GB
dense · Legacy

 

MaziyarPanahi · gemma 2 2b it
2B · 0K ctx · 1.1 GB
dense · Legacy

 

MaziyarPanahi · Llama 3.2 3B Instruct
3B · 0K ctx · 1.7 GB
dense · Legacy

 

MaziyarPanahi · gemma 3 1b it
1B · 0K ctx · 0.6 GB
dense · Legacy

 

Bartowski · cognitivecomputations Dolphin Mistral 24B Venice Edition
24B · 0K ctx · 13.4 GB
dense · Legacy

 

MaziyarPanahi · DeepSeek R1 0528 Qwen3 8B
8B · 0K ctx · 4.5 GB
dense · Legacy

 

MaziyarPanahi · Mistral Small 24B Instruct 2501
24B · 0K ctx · 13.4 GB
dense · Legacy

 

MaziyarPanahi · Yi 1.5 6B Chat
6B · 0K ctx · 3.4 GB
dense · Legacy

 

Mistralai · Ministral 3 3B Instruct 2512
3B · 0K ctx · 1.7 GB
dense · Legacy

 

Previous · Page 3 of 12 · Next