Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.
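As a rough illustration of the kind of arithmetic such an estimate might involve (a hypothetical sketch, not the site's actual formula): a quantized model's weight file is roughly parameter count times bits per weight.

```python
def estimated_size_gb(params_billion: float, bits_per_weight: float = 4.5) -> float:
    """Rough weight-file size for a quantized model, in GB.

    4.5 bits/weight approximates a 4-bit quantization with per-block
    scaling overhead (an assumption; the catalog does not state its scheme).
    """
    return params_billion * bits_per_weight / 8

print(estimated_size_gb(22))  # ~12.4 GB, near the 12.3 GB listed for 22B models
print(estimated_size_gb(7))   # ~3.9 GB, matching the 7B entries
```

Actual requirements are higher at runtime, since KV cache and activations come on top of the weights.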

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)

Browse AI Models

328 models available

Mistral AI · Codestral 22B
22B · 33K ctx · 12.3 GB · current
dense · Legacy

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

OpenAI · GPT-OSS 20B
21B (3.6B active) · 128K ctx · 11.8 GB · frontier
moe · Legacy

GPT-OSS 20B is OpenAI's first open-weight model, a 21B-parameter mixture-of-experts model with 3.6B active parameters per token. Features configurable reasoning effort (low/medium/high), full chain-of-thought visibility, and agentic capabilities including function calling. Runs on devices with 16GB of memory using MXFP4 quantization.
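The listed figures can be sanity-checked with simple arithmetic (a sketch under stated assumptions; MXFP4 is taken here as ~4.25 bits per weight including block scales):

```python
total_params = 21e9        # total parameters
active_params = 3.6e9      # parameters active per token (MoE routing)

# Decode compute scales with active parameters, so the MoE needs far
# fewer FLOPs per generated token than a dense model of the same size.
flops_ratio = total_params / active_params        # ~5.8x fewer FLOPs per token

# Memory, by contrast, scales with total parameters. At ~4.25 bits/weight:
weights_gb = total_params * 4.25 / 8 / 1e9        # ~11.2 GB of weights
# KV cache and runtime overhead come on top -- consistent with the
# 11.8 GB listed and with fitting on a 16 GB device.
print(flops_ratio, weights_gb)
```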

InternLM · InternLM 20B
20B · 8K ctx · 11.2 GB · legacy
dense · Legacy

InternLM2.5 includes an open-sourced 20-billion-parameter base model and a chat model tailored for practical scenarios.

Mistral · Mistral Small 3.2 24B
24B · 131K ctx · 13.4 GB · current
vision · Legacy

Mistral-Small-3.2-24B-Instruct-2506 is a minor update of Mistral-Small-3.1-24B-Instruct-2503.

Alibaba · Qwen 2.5 32B
32B · 131K ctx · 17.9 GB · current
dense · Legacy

Qwen2.5 is the latest series of Qwen large language models, released as base and instruction-tuned variants ranging from 0.5B to 72B parameters and bringing a number of improvements over Qwen2.

BigCode · StarCoder 7B
7B · 8K ctx · 3.9 GB · legacy
dense · Legacy

StarCoder 7B is BigCode's code generation model trained on The Stack v1. Supports over 80 programming languages with fill-in-the-middle capability and 8K context window.
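Fill-in-the-middle means the model completes a gap given the code both before and after it. A minimal sketch of how such a prompt is typically assembled (token names follow the published StarCoder FIM convention; verify them against the actual tokenizer's special tokens):

```python
# Build a fill-in-the-middle prompt: the model generates the missing middle.
prefix = "def add(a, b):\n    "
suffix = "\n    return result"
fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
# A plausible completion here would be: "result = a + b"
print(fim_prompt)
```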

NousResearch · Hermes 3 Llama 3.1 8B
8B · 0K ctx · 4.5 GB
dense · Legacy

SanctumAI · Mistral 7B Instruct v0.3
7B · 0K ctx · 3.9 GB
dense · Legacy

Stability AI · StableLM 2 Zephyr 1.6B
1.6B · 0K ctx · 3.4 GB
dense · Legacy

NousResearch · Hermes 2 Pro Mistral 7B
7B · 0K ctx · 3.9 GB
dense · Legacy

MaziyarPanahi · Mistral Small 3.1 24B Instruct 2503 HF
24B · 0K ctx · 13.4 GB
dense · Legacy

TheBloke · TinyLlama 1.1B Chat v0.3
1.1B · 0K ctx · 0.6 GB
dense · Legacy

Bartowski · cognitivecomputations Dolphin 3.0 R1 Mistral 24B
24B · 0K ctx · 13.4 GB
dense · Legacy

Cohere · Aya Expanse 8B
8B · 8K ctx · 4.5 GB · current
dense · Legacy

Aya Expanse 8B is Cohere's multilingual model supporting 23 languages with strong cross-lingual transfer. Designed for global applications requiring high-quality generation across diverse languages.

Google · Gemma 2 27B
27B · 8K ctx · 15.1 GB · current
dense · Legacy

Gemma 2 27B is Google's largest Gemma 2 model, offering state-of-the-art performance among open models of similar size. Built on Gemini technology with strong reasoning, code, and multilingual capabilities.

Mistral · Ministral 3 14B
14B · 262K ctx · 7.8 GB · frontier
multimodal · Legacy

The largest model in the Ministral 3 family, Ministral 3 14B offers frontier capabilities and performance comparable to its larger Mistral Small 3.2 24B counterpart. A powerful and efficient language model with vision capabilities.

Mistral · Mistral Small 24B
24B · 33K ctx · 13.4 GB · legacy
dense · Legacy

Mistral Small 3 (2501) sets a new benchmark in the "small" large language model category below 70B, with 24B parameters and state-of-the-art capabilities comparable to larger models. It is an instruction-fine-tuned version of the base model Mistral-Small-24B-Base-2501.

01.AI · Yi 1.5 34B
34B · 4K ctx · 19 GB · current
dense · Legacy


Hugging Face H4 · Zephyr 7B Beta
7B · 33K ctx · 3.9 GB · legacy
dense · Legacy

  • Model type: a 7B-parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets
  • Language(s) (NLP): primarily English
  • License: MIT
  • Fine-tuned from model: mistralai/Mistral-7B-v0.1

TinyLlama · TinyLlama 1.1B Chat v0.6
1.1B · 0K ctx · 0.6 GB
dense · Legacy

TheBloke · Zephyr 7B Beta
7B · 0K ctx · 3.9 GB
dense · Legacy

Bartowski · Codestral 22B v0.1
22B · 0K ctx · 12.3 GB
dense · Legacy

TheBloke · SOLAR 10.7B Instruct v1.0 Uncensored
10.7B · 0K ctx · 6 GB
dense · Legacy

NousResearch · Hermes 2 Pro Llama 3 8B
8B · 0K ctx · 4.5 GB
dense · Legacy

Page 5 of 14