Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)

Browse AI Models

283 models available

Cohere · Command R 35B
35B · 131K ctx · 19.6 GB · current
dense · Legacy

Command R is Cohere's retrieval-augmented generation model optimized for enterprise use. Excels at long-context document processing, tool use, and grounded generation with citation support.

Meta · Llama 3.2 11B Vision
11B · 16K ctx · 6.2 GB · legacy
vision · Legacy

Llama 3.2 11B Vision is Meta's multimodal model that processes both text and images. Supports visual question answering, image captioning, and document understanding alongside standard text generation.

Mistral · Magistral Small 2507
24B · 131K ctx · 13.4 GB · legacy
dense · Legacy

Magistral Small builds on Mistral Small 3.1 (2503) with added reasoning capabilities: SFT on Magistral Medium traces followed by RL on top yields a small, efficient 24B-parameter reasoning model.

Mistral · Mixtral 8x7B
47B (13B active) · 33K ctx · 26.3 GB · current
moe · Legacy

Tokenization imports from the Mixtral model card:

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
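The 47B-total / 13B-active split above is the key mixture-of-experts trade-off: every expert must stay resident in memory, but each token only routes through a fraction of them. A minimal illustrative sketch (the site's own estimator is more detailed; the 4.5 bits-per-weight average is an assumption, not a measured figure):

```python
# Rule-of-thumb MoE sizing (illustrative; not the site's exact estimator).
# Memory scales with TOTAL parameters: every expert must stay loaded.
# Per-token compute scales with ACTIVE parameters: only routed experts run.

def moe_footprint(total_params_b: float, active_params_b: float,
                  bits_per_weight: float = 4.5) -> tuple[float, float]:
    """Return (resident weight size in GB, fraction of weights used per token)."""
    weight_gb = total_params_b * bits_per_weight / 8  # billions of params -> GB
    active_fraction = active_params_b / total_params_b
    return round(weight_gb, 1), round(active_fraction, 2)

# Mixtral 8x7B: 47B total, ~13B active per token
print(moe_footprint(47, 13))  # ~26.4 GB resident, ~28% of weights per token
```

This is why a MoE model can feel "fast for its memory footprint": Mixtral occupies memory like a 47B model but computes per token like a 13B one.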

01.AI · Yi 34B Chat
34B · 200K ctx · 19 GB · legacy
dense · Legacy

Yi 34B Chat is 01.AI's bilingual (English/Chinese) chat model, fine-tuned from the Yi 34B base. Its long-context variant supports up to 200K tokens for long-document workloads.

MaziyarPanahi · gemma 3 27b it
27B · 0K ctx · 15.1 GB
dense · Legacy

 

MaziyarPanahi · Yi Coder 9B Chat
9B · 0K ctx · 5 GB
dense · Legacy

 

Bartowski · glm 4 9b chat 1m
9B · 0K ctx · 5 GB
dense · Legacy

 

TeichAI · Qwen3 8B DeepSeek v3.2 Speciale Distill
8B · 0K ctx · 4.5 GB
dense · Legacy

 

OpenAI · GPT-OSS 20B
21B (3.6B active) · 128K ctx · 11.8 GB · frontier
moe · Legacy

GPT-OSS 20B is OpenAI's first open-weight model since GPT-2: a 21B-parameter mixture-of-experts model with 3.6B active parameters per token. It features configurable reasoning effort (low/medium/high), full chain-of-thought visibility, and agentic capabilities including function calling, and runs on devices with 16 GB of memory using MXFP4 quantization.
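The link between MXFP4 quantization and a 16 GB device follows from a simple rule of thumb: quantized weight size is roughly parameter count times bits per weight divided by eight, plus some runtime overhead. A minimal sketch (the `overhead` factor and the ~4.25-bit MXFP4 average are assumptions, not the site's exact formula):

```python
# Rough "will it fit?" check for quantized weights (simplified sketch).
# params_b is the parameter count in billions; result ignores KV cache growth.

def fits_in_memory(params_b: float, bits_per_weight: float,
                   budget_gb: float, overhead: float = 1.15) -> bool:
    """True if quantized weights (plus assumed runtime overhead) fit the budget."""
    weight_gb = params_b * bits_per_weight / 8  # billions of params -> GB
    return weight_gb * overhead <= budget_gb

# GPT-OSS 20B: 21B params at ~4.25 bits (MXFP4-style) vs. a 16 GB device
print(fits_in_memory(21, 4.25, 16.0))  # weights ~11.2 GB -> fits
```

The same model kept at 16-bit weights would need roughly 42 GB, which is why quantization width, not parameter count alone, decides what runs on consumer hardware.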

InternLM · InternLM 20B
20B · 8K ctx · 11.2 GB · legacy
dense · Legacy

InternLM2.5 open-sources a 20-billion-parameter base model alongside a chat model tailored for practical scenarios.

Mistral · Mistral Small 3.2 24B
24B · 131K ctx · 13.4 GB · current
vision · Legacy

Mistral-Small-3.2-24B-Instruct-2506 is a minor update of Mistral-Small-3.1-24B-Instruct-2503.

Alibaba · Qwen 2.5 32B
32B · 131K ctx · 17.9 GB · current
dense · Legacy

Qwen2.5 is the latest series of Qwen large language models, released as base and instruction-tuned variants ranging from 0.5B to 72B parameters and bringing a range of improvements over Qwen2.

NousResearch · Hermes 3 Llama 3.1 8B
8B · 0K ctx · 4.5 GB
dense · Legacy

 

SanctumAI · Mistral 7B Instruct v0.3
7B · 0K ctx · 3.9 GB
dense · Legacy

 

Stabilityai · stablelm 2 zephyr 1 6b
6B · 0K ctx · 3.4 GB
dense · Legacy

 

NousResearch · Hermes 2 Pro Mistral 7B
7B · 0K ctx · 3.9 GB
dense · Legacy

 

MaziyarPanahi · mistral small 3.1 24b instruct 2503 hf
24B · 0K ctx · 13.4 GB
dense · Legacy

 

TheBloke · TinyLlama 1.1B Chat v0.3
1.1B · 0K ctx · 0.6 GB
dense · Legacy

 

Bartowski · cognitivecomputations Dolphin3.0 R1 Mistral 24B
24B · 0K ctx · 13.4 GB
dense · Legacy

 

Cohere · Aya Expanse 8B
8B · 8K ctx · 4.5 GB · current
dense · Legacy

Aya Expanse 8B is Cohere's multilingual model supporting 23 languages with strong cross-lingual transfer. Designed for global applications requiring high-quality generation across diverse languages.

Google · Gemma 2 27B
27B · 8K ctx · 15.1 GB · current
dense · Legacy

Gemma 2 27B is Google's largest Gemma 2 model, offering state-of-the-art performance among open models of similar size. Built on Gemini technology with strong reasoning, code, and multilingual capabilities.

Mistral · Ministral 3 14B
14B · 262K ctx · 7.8 GB · frontier
multimodal · Legacy

The largest model in the Ministral 3 family, Ministral 3 14B offers frontier capabilities and performance comparable to its larger Mistral Small 3.2 24B counterpart. A powerful and efficient language model with vision capabilities.

Mistral · Mistral Small 24B
24B · 33K ctx · 13.4 GB · legacy
dense · Legacy

Mistral Small 3 (2501) sets a new benchmark in the "small" large language model category below 70B, with 24B parameters and state-of-the-art capabilities comparable to larger models. It is an instruction-fine-tuned version of the base model Mistral-Small-24B-Base-2501.

Page 4 of 12