Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)

Browse AI Models

283 models available

01.AI · Yi 1.5 34B
34B · 4K ctx · 19 GB · current
dense · Legacy


Hugging Face H4 · Zephyr 7B Beta
7B · 33K ctx · 3.9 GB · legacy
dense · Legacy

- Model type: A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- Language(s) (NLP): Primarily English
- License: MIT
- Finetuned from model: mistralai/Mistral-7B-v0.1

TinyLlama · TinyLlama 1.1B Chat v0.6
1.1B · 0K ctx · 0.6 GB
dense · Legacy

 

TheBloke · zephyr 7B beta
7B · 0K ctx · 3.9 GB
dense · Legacy

 

Bartowski · Codestral 22B v0.1
22B · 0K ctx · 12.3 GB
dense · Legacy

 

TheBloke · SOLAR 10.7B Instruct v1.0 uncensored
10.7B · 0K ctx · 6 GB
dense · Legacy

 

NousResearch · Hermes 2 Pro Llama 3 8B
8B · 0K ctx · 4.5 GB
dense · Legacy

 

Bartowski · Dolphin3.0 Llama3.1 8B
8B · 0K ctx · 4.5 GB
dense · Legacy

 

Cognitive Computations · Dolphin 2.9 8B
8B · 33K ctx · 4.5 GB · legacy
dense · Legacy

Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations

Google · Gemma 3 27B
27B · 131K ctx · 15.1 GB · current
dense · Legacy

Gemma 3 27B is Google's flagship Gemma 3 model with 128K context and vision support. Delivers top-tier open model performance in reasoning, code, math, and multimodal understanding.

OpenChat · OpenChat 7B
7B · 8K ctx · 3.9 GB · legacy
dense · Legacy

Advancing Open-source Language Models with Mixed-Quality Data

NousResearch · Nous Hermes 2 Mistral 7B DPO
7B · 0K ctx · 3.9 GB
dense · Legacy

 

Lmstudio-community · Codestral 22B v0.1
22B · 0K ctx · 12.3 GB
dense · Legacy

 

TheBloke · Nous Hermes 2 SOLAR 10.7B
10.7B · 0K ctx · 6 GB
dense · Legacy

 

Jamesburton · Phi 4 reasoning vision 15B
15B · 0K ctx · 8.4 GB
dense · Legacy

 

Ibm-granite · granite 8b code instruct 4k
8B · 0K ctx · 4.5 GB
dense · Legacy

 

HelpingAI · HELVETE 3B
3B · 0K ctx · 1.7 GB
dense · Legacy

 

Second-state · StarCoder2 15B
15B · 0K ctx · 8.4 GB
dense · Legacy

 

Bartowski · dolphin 2.9.4 llama3.1 8b
8B · 0K ctx · 4.5 GB
dense · Legacy

 

Bartowski · Yi 1.5 9B Chat
9B · 0K ctx · 5 GB
dense · Legacy

 

Tiiuae · Falcon H1R 7B
7B · 0K ctx · 3.9 GB
dense · Legacy

 

Tsinghua/Zhipu · CogVLM2 19B
19B · 8K ctx · 10.6 GB · current
dense · Legacy


DeepSeek · DeepSeek R1 Distill 14B
14B · 33K ctx · 7.8 GB · frontier
dense · Legacy

We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable performance on reasoning. Through RL, DeepSeek-R1-Zero naturally developed numerous powerful and interesting reasoning behaviors. However, it also encounters challenges such as endless repetition, poor readability, and language mixing.

Bartowski · NousResearch Hermes 4 14B
14B · 0K ctx · 7.8 GB
dense · Legacy

 

Page 5 of 12