Will It Run AI
Calculator · Models · Hardware · Compare
Product
  • Calculator
  • Compare
  • Tier List
Browse
  • Models
  • Hardware
  • Docs
About
  • Why It Works
  • What's New
  • Legal Notice
  • Privacy Policy

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)

Browse AI Models

328 models available

Bartowski · Dolphin3.0 Llama3.1 8B
8B · 0K ctx · 4.5 GB
dense · Legacy

Cognitive Computations · Dolphin 2.9 8B
8B · 33K ctx · 4.5 GB · legacy
dense · Legacy

Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations.

Google · Gemma 3 27B
27B · 131K ctx · 15.1 GB · current
dense · Legacy

Gemma 3 27B is Google's flagship Gemma 3 model with 128K context and vision support. Delivers top-tier open model performance in reasoning, code, math, and multimodal understanding.

Mixedbread AI · mxbai Embed Large
0.34B · 1K ctx · 0.2 GB · current
dense · Legacy

The crispy sentence embedding family from Mixedbread.

OpenChat · OpenChat 7B
7B · 8K ctx · 3.9 GB · legacy
dense · Legacy

Advancing Open-source Language Models with Mixed-Quality Data

NousResearch · Nous Hermes 2 Mistral 7B DPO
7B · 0K ctx · 3.9 GB
dense · Legacy

Lmstudio-community · Codestral 22B v0.1
22B · 0K ctx · 12.3 GB
dense · Legacy

TheBloke · Nous Hermes 2 SOLAR 10.7B
10.7B · 0K ctx · 6 GB
dense · Legacy

Jamesburton · Phi 4 reasoning vision 15B
15B · 0K ctx · 8.4 GB
dense · Legacy

Ibm-granite · granite 8b code instruct 4k
8B · 0K ctx · 4.5 GB
dense · Legacy

Mradermacher · Dolphin Mistral GLM 4.7 Flash 24B Venice Edition Thinking Uncensored i1
24B · 0K ctx · 13.4 GB
dense · Legacy

HelpingAI · HELVETE 3B
3B · 0K ctx · 1.7 GB
dense · Legacy

Second-state · StarCoder2 15B
15B · 0K ctx · 8.4 GB
dense · Legacy

Bartowski · dolphin 2.9.4 llama3.1 8b
8B · 0K ctx · 4.5 GB
dense · Legacy

Bartowski · Yi 1.5 9B Chat
9B · 0K ctx · 5 GB
dense · Legacy

Tiiuae · Falcon H1R 7B
7B · 0K ctx · 3.9 GB
dense · Legacy

Tsinghua/Zhipu · CogVLM2 19B
19B · 8K ctx · 10.6 GB · current
dense · Legacy


DeepSeek · DeepSeek R1 Distill 14B
14B · 33K ctx · 7.8 GB · frontier
dense · Legacy

We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning. With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors. However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing.

Alibaba · Qwen 2.5 Coder 14B
14B · 131K ctx · 7.8 GB · current
dense · Legacy

Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). It is available in six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers, and brings several improvements over CodeQwen1.5.

Snowflake · Snowflake Arctic Embed L
0.34B · 1K ctx · 0.2 GB · current
dense · Legacy


Bartowski · NousResearch Hermes 4 14B
14B · 0K ctx · 7.8 GB
dense · Legacy

Bartowski · granite embedding 107m multilingual
0.11B · 0K ctx · 0.1 GB
dense · Legacy

Tiiuae · falcon mamba 7b instruct Q4 K M
7B · 0K ctx · 3.9 GB
dense · Legacy

NousResearch · Hermes 3 Llama 3.2 3B
3B · 0K ctx · 1.7 GB
dense · Legacy

Previous · Page 6 of 14 · Next
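The per-model size figures in the listing above (for example, 4.5 GB for an 8B model, 3.9 GB for 7B, 15.1 GB for 27B) are consistent with a simple bits-per-weight estimate. A minimal sketch, assuming roughly 4.5 effective bits per weight for common 4-bit quantized files (weights plus quantization overhead) — this is an illustrative assumption, not the site's published formula:

```python
def est_file_size_gb(params_billion: float, bits_per_weight: float = 4.5) -> float:
    """Rough size of a quantized model's weights, in GB.

    bits_per_weight ~= 4.5 is an assumed figure approximating common
    4-bit quantization formats (4-bit weights plus scale metadata).
    """
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return round(total_bytes / 1e9, 1)

print(est_file_size_gb(8))   # 4.5 (matches the 8B cards above)
print(est_file_size_gb(7))   # 3.9 (matches the 7B cards above)
```

Actual footprint also depends on context length (KV cache) and runtime overhead, which is one reason the site warns that its estimates are approximations.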