Will It Run AI
Browse AI Models

283 models available
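
Each card lists the parameter count, context window, estimated on-disk size, architecture, and support status. The size figures track parameter count closely and are consistent with weights quantized to roughly 4.5 bits each, as in a typical Q4 scheme. Below is a minimal sketch of that arithmetic, assuming such a rule of thumb; the function name and the bits-per-weight constant are illustrative, not the site's published formula.

```python
def estimated_size_gb(params_billions: float, bits_per_weight: float = 4.5) -> float:
    """Rough on-disk size of quantized weights: parameters x bits / 8 bits-per-byte.

    The 4.5 bits/weight default approximates a typical Q4 quantization
    (e.g. Q4_K_M, including its metadata overhead); this is an assumption,
    not a published formula.
    """
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9  # decimal gigabytes


# Spot checks against the listing:
print(round(estimated_size_gb(7), 1))    # 3.9  -> 7B entries list 3.9 GB
print(round(estimated_size_gb(12), 1))   # 6.8  -> Gemma 3 12B lists 6.7 GB
print(round(estimated_size_gb(100), 1))  # 56.2 -> Solar Open 100B lists 56 GB
```

Note that these figures appear to cover weights only; running a model also needs memory for the KV cache and activations, so actual RAM or VRAM use will be somewhat higher.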

AaryanK / Solar Open 100B
100B · 0K ctx · 56 GB
dense · Legacy

LGAI-EXAONE / EXAONE 4.0 32B
32B · 0K ctx · 17.9 GB
dense · Legacy

Mradermacher / Solar Open 100B i1
100B · 0K ctx · 56 GB
dense · Legacy

TheBloke / japanese stablelm instruct gamma 7B
7B · 0K ctx · 3.9 GB
dense · Legacy

Lmstudio-community / Yi Coder 1.5B
1.5B · 0K ctx · 0.8 GB
dense · Legacy

LGAI-EXAONE / EXAONE 3.5 7.8B Instruct
7.8B · 0K ctx · 4.4 GB
dense · Legacy

Mradermacher / dolphin v2 8b abliterated i1
8B · 0K ctx · 4.5 GB
dense · Legacy

OpenBMB / MiniCPM-V 2.6 8B
8B · 2K ctx · 4.5 GB · current
dense · Legacy

MiniCPM-V 2.6 is OpenBMB's compact multimodal model supporting image and video understanding alongside text. Delivers strong visual reasoning and OCR capabilities at 8B parameter scale.

Mistral / Ministral 3 8B
8B · 262K ctx · 4.5 GB · frontier
multimodal · Legacy

A balanced model in the Ministral 3 family, Ministral 3 8B is a powerful, efficient tiny language model with vision capabilities.

Mradermacher / Solar Open 69B REAP i1
69B · 0K ctx · 38.6 GB
dense · Legacy

Mradermacher / aya expanse 32b heretic MPOA i1
32B · 0K ctx · 17.9 GB
dense · Legacy

Tiiuae / Falcon H1 7B Instruct
7B · 0K ctx · 3.9 GB
dense · Legacy

Tiiuae / Falcon H1 Tiny 90M Instruct
0.09B · 0K ctx · 0.1 GB
dense · Legacy

Mradermacher / EXAONE 3.5 7.8B Instruct i1
7.8B · 0K ctx · 4.4 GB
dense · Legacy

Unsloth / Falcon H1R 7B
7B · 0K ctx · 3.9 GB
dense · Legacy

LGAI-EXAONE / K EXAONE 236B A23B
236B · 0K ctx · 132.2 GB
dense · Legacy

Ai21labs / AI21 Jamba Reasoning 3B
3B · 0K ctx · 1.7 GB
dense · Legacy

Google / Gemma 3 12B
12B · 131K ctx · 6.7 GB · current
dense · Legacy

Gemma 3 12B is Google's mid-range Gemma 3 model with vision capabilities. Offers strong reasoning, code generation, and image understanding balanced with practical resource requirements.

LLaVA / LLaVA 1.6 13B
13B · 4K ctx · 7.3 GB · current
dense · Legacy

LLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture. Base LLM: mistralai/Mistral-7B-Instruct-v0.2.

NVIDIA / Nemotron Nano 8B
8B · 131K ctx · 4.5 GB · active
dense · Legacy

Nemotron Nano 8B is NVIDIA's reasoning model derived from Llama 3.1 8B Instruct, post-trained for switchable reasoning with on/off modes. Achieves 95.4% on MATH-500 and 54.1% on GPQA Diamond with reasoning enabled. Fits on a single RTX GPU for local deployment.

Teknium / OpenHermes 2.5 7B
7B · 8K ctx · 3.9 GB · current
dense · Legacy

*In the tapestry of Greek mythology, Hermes reigns as the eloquent Messenger of the Gods, a deity who deftly bridges the realms through the art of communication. It is in homage to this divine mediator that I name this advanced LLM "Hermes," a system crafted to navigate the complex intricacies of human discourse with celestial finesse.*

Microsoft / Phi 3 Medium 14B
14B · 128K ctx · 7.8 GB · current
dense · Legacy

Phi-3-Medium-128K-Instruct is a 14B-parameter, lightweight, state-of-the-art open model trained on the Phi-3 datasets, which combine synthetic data with filtered, publicly available website data selected for high quality and reasoning-dense properties. It belongs to the Phi-3 family; the Medium version comes in two variants, 4K and 128K, referring to the context length (in tokens) that the model can support.

Microsoft / Phi-4 14B
14B · 16K ctx · 7.8 GB · current
dense · Legacy

The training data is an extension of the data used for Phi-3 and draws on a wide variety of sources.

Alibaba / Qwen 2.5 VL 7B
7B · 33K ctx · 3.9 GB · current
dense · Legacy

License: Apache-2.0 · Language: English · Pipeline: image-text-to-text · Tags: multimodal · Library: transformers

Page 7 of 12