Will It Run AI
Product
  • Calculator
  • Compare
  • Tier List
Browse
  • Models
  • Hardware
  • Docs
About
  • Why It Works
  • What's New
  • Legal Notice
  • Privacy Policy

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.
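The file sizes shown in the listings below track a simple parameters-times-bits-per-weight relation. As a minimal sketch of how such an estimate could be computed (the site's actual formula is not published; the 4.5 bits/weight figure is an assumption that approximates a Q4_K_M-style quantization with per-block scale overhead):

```python
def estimated_file_size_gb(params_billions: float, bits_per_weight: float = 4.5) -> float:
    """Rough quantized-model file-size estimate.

    params_billions: parameter count in billions (e.g. 7 for a 7B model).
    bits_per_weight: assumed average storage cost per parameter after
    quantization, including block metadata (scales/mins).
    """
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return round(bytes_total / 1e9, 1)

estimated_file_size_gb(7)   # ≈ 3.9 GB, matching the 7B entries below
estimated_file_size_gb(13)  # ≈ 7.3 GB, matching the 13B entries below
```

This reproduces the listed sizes (3B → 1.7 GB, 7B → 3.9 GB, 13B → 7.3 GB), consistent with the disclaimer that all figures are mathematical approximations rather than measured downloads.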

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)

Browse AI Models

283 models available

Nous Research · Nous Dolphin 13B
13B · 16K ctx · 7.3 GB · legacy
dense · Legacy

Dolphin 13B is a general-purpose uncensored model fine-tuned for broad capabilities including coding, reasoning, and creative writing without alignment restrictions.

LMSYS · Vicuna 13B
13B · 4K ctx · 7.3 GB · legacy
dense · Legacy

Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT.

WizardLM · WizardLM 13B
13B · 8K ctx · 7.3 GB · legacy
dense · Legacy

Project Repo: https://github.com/nlpxucan/WizardLM

Mradermacher · HelpingAI 3B hindi
3B · 0K ctx · 1.7 GB
dense · Legacy

 

Mradermacher · zephyr 7b gemma sft african ultrachat 100k
7B · 0K ctx · 3.9 GB
dense · Legacy

 

Mradermacher · HelpingAI 9B i1
9B · 0K ctx · 5 GB
dense · Legacy

 

RichardErkhov · jointpreferences mistral 7b sft helpful
7B · 0K ctx · 3.9 GB
dense · Legacy

 

Mradermacher · zephyr 7b dpo full i1
7B · 0K ctx · 3.9 GB
dense · Legacy

 

Mradermacher · blossom v3 baichuan2 7b i1
7B · 0K ctx · 3.9 GB
dense · Legacy

 

Mradermacher · Helply 10.2b chat i1
10.2B · 0K ctx · 5.7 GB
dense · Legacy

 

Mradermacher · AI21 Jamba2 3B i1
3B · 0K ctx · 1.7 GB
dense · Legacy

 

Mradermacher · blossom v1 baichuan 7b i1
7B · 0K ctx · 3.9 GB
dense · Legacy

 

Mistral · Ministral 8B
8B · 131K ctx · 4.5 GB · current
dense · Legacy

We introduce two new state-of-the-art models for local intelligence, on-device computing, and at-the-edge use cases. We call them les Ministraux: Ministral 3B and Ministral 8B.

Mradermacher · BaichuanMed OCR 72B i1
72B · 0K ctx · 40.3 GB
dense · Legacy

 

Google · Gemma 2 9B
9B · 8K ctx · 5 GB · current
dense · Legacy

Gemma 2 9B is Google's mid-size open model built on Gemini research. Features improved reasoning and safety with a novel architecture optimized for efficient inference on consumer hardware.

IBM · Granite 3.1 8B
8B · 128K ctx · 4.5 GB · current
dense · Legacy

Model Summary: Granite-3.1-8B-Instruct is an 8B-parameter long-context instruct model finetuned from Granite-3.1-8B-Base using a combination of open-source instruction datasets with permissive licenses and internally collected synthetic datasets tailored for solving long-context problems. The model was developed with a structured chat format using a diverse set of techniques, including supervised finetuning, model alignment via reinforcement learning, and model merging.

LLaVA · LLaVA 1.5 7B
7B · 4K ctx · 3.9 GB · legacy
dense · Legacy

Model type: LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture.

Cognitive Computations · Samantha 7B
7B · 4K ctx · 3.9 GB · legacy
dense · Legacy

Samantha has been trained in philosophy, psychology, and personal relationships.

01.AI · Yi 1.5 9B
9B · 4K ctx · 5 GB · current
dense · Legacy


Cerebras · Cerebras-GPT 13B
13B · 131K ctx · 7.3 GB · legacy
dense · Legacy


Mistral · Ministral 3 3B
3B · 262K ctx · 1.7 GB · frontier
multimodal · Legacy

The smallest model in the Ministral 3 family, Ministral 3 3B is a powerful, efficient tiny language model with vision capabilities.

Microsoft · Phi 4 Mini 4B
4B · 128K ctx · 2.2 GB · frontier
dense · Legacy

Phi-4-mini-instruct is a lightweight open model built upon synthetic data and filtered publicly available websites, with a focus on high-quality, reasoning-dense data. The model belongs to the Phi-4 model family and supports a 128K-token context length. It underwent an enhancement process incorporating both supervised fine-tuning and direct preference optimization to support precise instruction adherence and robust safety measures.

Alibaba · Qwen 2.5 7B
7B · 131K ctx · 3.9 GB · current
dense · Legacy

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, bringing a number of improvements over Qwen2.

MosaicML · MPT-7B-Instruct
7B · 8K ctx · 3.9 GB · legacy
dense · Legacy

MPT-7B-Instruct is MosaicML's instruction-tuned model with a commercially permissive license. Its ALiBi positional encoding allows the model to extrapolate beyond its training context length for efficient long-document processing.
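ALiBi (Attention with Linear Biases) replaces learned positional embeddings with a per-head penalty on attention scores that grows linearly with query-key distance, which is what lets ALiBi models run at contexts longer than they were trained on. A minimal sketch of the bias computation (shapes and slope schedule follow the standard ALiBi formulation; this is an illustration, not MosaicML's implementation):

```python
import numpy as np

def alibi_bias(seq_len: int, num_heads: int) -> np.ndarray:
    """Per-head linear biases added to attention scores before softmax.

    Head h penalizes attention to a key at distance d by slope_h * d,
    so no positional embeddings are needed and the bias extends
    naturally to sequence lengths unseen during training.
    """
    # Geometric slope schedule: 2^(-8/n), 2^(-16/n), ..., 2^(-8)
    slopes = 2.0 ** (-8.0 * np.arange(1, num_heads + 1) / num_heads)
    pos = np.arange(seq_len)
    # distance[i, j] = i - j (query position minus key position)
    distance = pos[:, None] - pos[None, :]
    # Zero bias for current/future positions (causal masking is separate);
    # increasingly negative bias for more distant past keys.
    return -slopes[:, None, None] * np.clip(distance, 0, None)

bias = alibi_bias(seq_len=4, num_heads=2)
# bias has shape (num_heads, seq_len, seq_len); add it to QK^T / sqrt(d)
```

Because the bias is a fixed function of distance rather than a learned table, evaluating at a 65K-token context only requires computing a larger distance matrix.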

Page 11 of 12