Will It Run AI
Product
  • Calculator
  • Compare
  • Tier List
Browse
  • Models
  • Hardware
  • Docs
About
  • Why It Works
  • What's New
  • Legal Notice
  • Privacy Policy

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.
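To illustrate the kind of "mathematical model" such estimates typically rely on, here is a minimal sketch of a common community back-of-envelope calculation. The 4.5 bits-per-weight figure approximates a Q4_K_M GGUF quantization and the 1.2x overhead factor is a rough allowance for KV cache and activations; both constants, and the function names, are illustrative assumptions, not this site's actual formula.

```python
# Back-of-envelope sizing for a quantized LLM. All constants here are
# illustrative assumptions (roughly Q4_K_M quantization), not the
# calculator's actual method.

def gguf_size_gb(params_billions: float, bits_per_weight: float = 4.5) -> float:
    """Approximate quantized model file size in decimal GB."""
    return params_billions * bits_per_weight / 8

def runtime_memory_gib(params_billions: float,
                       bits_per_weight: float = 4.5,
                       overhead: float = 1.2) -> float:
    """Rough GiB of RAM/VRAM needed to load and run the model,
    padding the weight size for KV cache and activations."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1024**3

print(round(gguf_size_gb(14), 1))        # ~7.9 GB file for a 14B model
print(round(runtime_memory_gib(14), 1))  # ~8.8 GiB to actually run it
```

Under these assumptions a 14B model comes out to roughly 7.9 GB on disk, close to the ~7.8 GB listed for the 14B entries in this catalog, though the real figures depend on the exact quantization and runtime.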

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)

Browse AI Models

283 models available

Instinct AI / Solar 7B
7B · 8K ctx · 3.9 GB · legacy
dense · Legacy

Solar 7B is Upstage's efficient language model built on a depth-upscaled architecture. Offers strong instruction following and reasoning performance optimized for single-GPU inference.

Bingsu / exaone 3.0 7.8b it
7.8B · 0K ctx · 4.4 GB
dense · Legacy

 

Mradermacher / aya expanse 8b orthogonal heretic i1
8B · 0K ctx · 4.5 GB
dense · Legacy

 

Bartowski / Falcon3 1B Instruct abliterated
1B · 0K ctx · 0.6 GB
dense · Legacy

 

Bartowski / starcoder2 15b instruct v0.1
15B · 0K ctx · 8.4 GB
dense · Legacy

 

Alibaba / Qwen 2.5 14B
14B · 131K ctx · 7.8 GB · current
dense · Legacy

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings a number of improvements over Qwen2.

Srs6901 / GGUF SOLARized GraniStral 14B 2102 YeAM HCT 32QKV
14B · 0K ctx · 7.8 GB
dense · Legacy

 

Baichuan-inc / Baichuan M2 32B Q4 K M
32B · 0K ctx · 17.9 GB
dense · Legacy

 

Mradermacher / solar finalised finetuned Model 10.7B i1
10.7B · 0K ctx · 6 GB
dense · Legacy

 

Afrideva / stablelm 3b 4e1t
3B · 0K ctx · 1.7 GB
dense · Legacy

 

Yixman / cognitivecomputations Dolphin Mistral 24B Venice Edition
24B · 0K ctx · 13.4 GB
dense · Legacy

 

Bartowski / ai21labs AI21 Jamba Reasoning 3B
3B · 0K ctx · 1.7 GB
dense · Legacy

 

Lmstudio-community / starcoder2 15b instruct v0.1
15B · 0K ctx · 8.4 GB
dense · Legacy

 

Mradermacher / Baichuan M3 235B
235B · 0K ctx · 131.6 GB
dense · Legacy

 

RichardErkhov / stabilityai japanese stablelm base gamma 7b
7B · 0K ctx · 3.9 GB
dense · Legacy

 

Mistral / Mistral Nemo 12B
12B · 128K ctx · 6.7 GB · current
dense · Legacy

The Mistral-Nemo-Instruct-2407 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-Nemo-Base-2407. Trained jointly by Mistral AI and NVIDIA, it significantly outperforms existing models smaller or similar in size.

UC Berkeley / Starling LM 7B
7B · 8K ctx · 3.9 GB · legacy
dense · Legacy

  • Developed by: Banghua Zhu*, Evan Frick*, Tianhao Wu*, Hanlin Zhu, and Jiantao Jiao
  • Model type: language model fine-tuned with RLHF / RLAIF
  • License: Apache-2.0, under the condition that the model is not used to compete with OpenAI
  • Fine-tuned from: Openchat 3.5 (based on Mistral-7B-v0.1)

Bartowski / internlm JanusCoder 14B
14B · 0K ctx · 7.8 GB
dense · Legacy

 

Second-state / stablelm 2 zephyr 1.6b
1.6B · 0K ctx · 0.9 GB
dense · Legacy

 

Unsloth / Falcon H1 1.5B Instruct
1.5B · 0K ctx · 0.8 GB
dense · Legacy

 

Mradermacher / aya expanse 8b orthogonal heretic
8B · 0K ctx · 4.5 GB
dense · Legacy

 

Bartowski / internlm2 5 20b chat
20B · 0K ctx · 11.2 GB
dense · Legacy

 

Shaowenchen / baichuan2 7b chat
7B · 0K ctx · 3.9 GB
dense · Legacy

 

Legraphista / Codestral 22B v0.1 IMat
22B · 0K ctx · 12.3 GB
dense · Legacy

 

Page 8 of 12