Will It Run AI

Browse AI Models

283 models available

Bartowski · granite embedding 107m multilingual
0.11B · 0K ctx · 0.1 GB
dense · Legacy

Tiiuae · falcon mamba 7b instruct Q4 K M
7B · 0K ctx · 3.9 GB
dense · Legacy

NousResearch · Hermes 3 Llama 3.2 3B
3B · 0K ctx · 1.7 GB
dense · Legacy

DeepSeek · DeepSeek Coder V2 16B
16B (2.4B active) · 131K ctx · 9 GB · current
moe · Legacy

We present DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2, while maintaining comparable performance in general language tasks.
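For MoE entries like this one, the listed footprint follows the total parameter count (all 16B of weights must be resident in memory), while per-token compute follows only the 2.4B active parameters. As a rough illustration of how a params × bits-per-weight estimate lines up with the sizes shown on these cards, here is a minimal sketch in Python; the bits-per-weight constants and function name are illustrative assumptions, not the site's actual formula:

    # Rough weight-footprint estimate: total params x bits per weight.
    # Bits-per-weight values are approximate GGUF averages (illustrative).
    GGUF_BPW = {"Q4_K_M": 4.5, "Q8_0": 8.5, "F16": 16.0}

    def estimate_size_gb(total_params_b: float, quant: str = "Q4_K_M") -> float:
        """Estimated weight footprint in GB for total_params_b billion
        parameters. MoE models count ALL experts, since every expert must
        sit in memory even though only a few fire per token."""
        return total_params_b * 1e9 * GGUF_BPW[quant] / 8 / 1e9

    # DeepSeek Coder V2: 16B total (2.4B active) at ~4.5 bpw -> ~9 GB, as listed.
    print(f"{estimate_size_gb(16):.1f} GB")   # 9.0 GB
    # Dense 67B at the same quantization -> ~37.7 GB (cf. 37.5 GB listed below).
    print(f"{estimate_size_gb(67):.1f} GB")   # 37.7 GB

Under this kind of model, a MoE card's 9 GB footprint is set by the 16B total, while the 2.4B active figure is what drives tokens per second.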

DeepSeek · DeepSeek LLM 67B
67B · 4K ctx · 37.5 GB · legacy
dense · Legacy

Introducing DeepSeek LLM, an advanced language model comprising 67 billion parameters. It has been trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese. In order to foster research, we have made DeepSeek LLM 7B/67B Base and DeepSeek LLM 7B/67B Chat open source for the research community.

InternLM · InternVL2 8B
8B · 8K ctx · 4.5 GB · current
dense · Legacy

We are excited to announce the release of InternVL 2.0, the latest addition to the InternVL series of multimodal large language models. InternVL 2.0 features a variety of instruction-tuned models, ranging from 1 billion to 108 billion parameters. This repository contains the instruction-tuned InternVL2-8B model.

Mistral AI · Magistral 7B
7B · 8K ctx · 3.9 GB · legacy
dense · Legacy

Magistral 7B is Mistral AI's reasoning-focused model designed for complex analytical and mathematical tasks. It features chain-of-thought capabilities for step-by-step problem solving.

Allen AI · OLMo 2 32B
32B · 4K ctx · 17.9 GB · active
dense · Legacy

OLMo 2 32B is Allen AI's fully open 32B-parameter language model, the largest in the OLMo 2 family. It was trained on 6T tokens from the Dolma dataset and post-trained with Tülu 3 SFT, DPO, and RLVR, and it is the first fully open model to outperform GPT-3.5 and GPT-4o mini on academic benchmarks.

Mistral AI · Pixtral 12B
12B · 131K ctx · 6.7 GB · current
dense · Legacy

Pixtral-12B-2409 is a multimodal model with 12B parameters plus a 400M-parameter vision encoder.

TheBloke · zephyr 7B alpha
7B · 0K ctx · 3.9 GB
dense · Legacy

DavidAU · Qwen3 48B A4B Savant Commander Distill 12X Closed Open Heretic Uncensored
48B · 0K ctx · 26.9 GB
dense · Legacy

Uukuguy · speechless zephyr code functionary 7b
7B · 0K ctx · 3.9 GB
dense · Legacy

TheBloke · stablelm zephyr 3b
3B · 0K ctx · 1.7 GB
dense · Legacy

Bartowski · Yi 1.5 6B Chat
6B · 0K ctx · 3.4 GB
dense · Legacy

NousResearch · Hermes 4.3 36B
36B · 0K ctx · 20.2 GB
dense · Legacy

Legraphista · openchat 3.6 8b 20240522 IMat
8B · 0K ctx · 4.5 GB
dense · Legacy

LGAI-EXAONE · EXAONE 4.0 1.2B
1.2B · 0K ctx · 0.7 GB
dense · Legacy

Second-state · StarCoder2 3B
3B · 0K ctx · 1.7 GB
dense · Legacy

Crataco · stablelm 2 1 6b chat imatrix
6B · 0K ctx · 3.4 GB
dense · Legacy

Lmstudio-community · EXAONE 3.5 2.4B Instruct
2.4B · 0K ctx · 1.3 GB
dense · Legacy

Lmstudio-community · EXAONE 3.5 7.8B Instruct
7.8B · 0K ctx · 4.4 GB
dense · Legacy

Second-state · StarCoder2 7B
7B · 0K ctx · 3.9 GB
dense · Legacy

Alibaba · Qwen 3 14B
14B · 131K ctx · 7.8 GB · frontier
dense · Legacy

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support.
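A 131K-token context has a memory cost of its own beyond the quantized weights: the KV cache grows linearly with sequence length. Below is a back-of-the-envelope sketch assuming Qwen3-14B's published GQA configuration (40 layers, 8 KV heads, head dim 128; worth verifying against the model's config.json) and an FP16 cache; the function is an illustration, not the site's estimator:

    def kv_cache_gb(tokens: int, layers: int = 40, kv_heads: int = 8,
                    head_dim: int = 128, bytes_per_value: int = 2) -> float:
        """FP16 KV cache: 2 tensors (K and V) x layers x kv_heads x head_dim
        x bytes per value, per cached token."""
        per_token = 2 * layers * kv_heads * head_dim * bytes_per_value
        return tokens * per_token / 1e9

    # Full 131K context: ~21.5 GB of cache on top of ~7.8 GB of weights.
    print(f"{kv_cache_gb(131_072):.1f} GB")  # 21.5 GB

Under these assumptions the cache at full context exceeds the quantized weights severalfold, which is why the ctx column matters as much as the GB column when judging whether a model will run.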

Bartowski · aya expanse 8b
8B · 0K ctx · 4.5 GB
dense · Legacy

Page 6 of 12