Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)

Browse AI Models

328 models available
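The per-model size figures in this listing track parameter count closely: each model's GB value works out to roughly 4.5 bits per weight, consistent with a 4-bit K-quant GGUF file. A minimal sketch of that estimate follows; the function name and the 4.5 bits/weight default are illustrative assumptions (approximating Q4_K_M), not the site's actual formula.

```python
def estimated_file_size_gb(params_billions: float, bits_per_weight: float = 4.5) -> float:
    """Rough quantized-model file size: parameter count times bits per weight.

    bits_per_weight=4.5 is an assumption (roughly a 4-bit K-quant such as
    Q4_K_M); real files vary with the quantization scheme, tensor layout,
    and metadata overhead.
    """
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return round(total_bytes / 1e9, 1)

# Under this assumption, a 14B model comes out near 7.9 GB, close to the
# 7.8 GB listed below for the 14B entries; a 1B model lands at 0.6 GB.
```

Because the estimate ignores per-file overhead, expect listed sizes to differ by a few percent from this back-of-the-envelope figure.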

Phi 3 Medium 14B (Microsoft)
14B params · 128K ctx · 7.8 GB · current · dense · Legacy

Phi-3-Medium-128K-Instruct is a lightweight, state-of-the-art open model with 14B parameters, trained on the Phi-3 datasets, which include both synthetic data and filtered publicly available website data with a focus on high quality and reasoning-dense properties. The model belongs to the Phi-3 family; the Medium version comes in two variants, 4K and 128K, denoting the context length (in tokens) it can support.

Phi-4 14B (Microsoft)
14B params · 16K ctx · 7.8 GB · current · dense · Legacy

Our training data is an extension of the data used for Phi-3 and includes a wide variety of sources.

Qwen 2.5 VL 7B (Alibaba)
7B params · 33K ctx · 3.9 GB · current · dense · Legacy

License: Apache-2.0 · Language: en · Pipeline: image-text-to-text · Tags: multimodal · Library: transformers

Solar 7B (Instinct AI)
7B params · 8K ctx · 3.9 GB · legacy · dense · Legacy

Solar 7B is Upstage's efficient language model built on a depth-upscaled architecture. It offers strong instruction-following and reasoning performance, optimized for single-GPU inference.

SQLCoder 7B (Defog)
7B params · 8K ctx · 3.9 GB · current · dense · Legacy

The model weights were updated at 7 AM UTC on Feb 7, 2024. The new model weights lead to a much more performant model – particularly for joins.

exaone 3.0 7.8b it (Bingsu)
7.8B params · 0K ctx · 4.4 GB · dense · Legacy

 

aya expanse 8b orthogonal heretic i1 (Mradermacher)
8B params · 0K ctx · 4.5 GB · dense · Legacy

 

Falcon3 1B Instruct abliterated (Bartowski)
1B params · 0K ctx · 0.6 GB · dense · Legacy

 

starcoder2 15b instruct v0.1 (Bartowski)
15B params · 0K ctx · 8.4 GB · dense · Legacy

 

starcoder2 15b i1 (Mradermacher)
15B params · 0K ctx · 8.4 GB · dense · Legacy

 

Qwen 2.5 14B (Alibaba)
14B params · 131K ctx · 7.8 GB · current · dense · Legacy

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings a number of improvements over Qwen2.

GGUF SOLARized GraniStral 14B 2102 YeAM HCT 32QKV (Srs6901)
14B params · 0K ctx · 7.8 GB · dense · Legacy

 

Baichuan M2 32B Q4 K M (Baichuan-inc)
32B params · 0K ctx · 17.9 GB · dense · Legacy

 

Yi 9B Coder i1 (Mradermacher)
9B params · 0K ctx · 5 GB · dense · Legacy

 

solar finalised finetuned Model 10.7B i1 (Mradermacher)
10.7B params · 0K ctx · 6 GB · dense · Legacy

 

stablelm 3b 4e1t (Afrideva)
3B params · 0K ctx · 1.7 GB · dense · Legacy

 

cognitivecomputations Dolphin Mistral 24B Venice Edition (Yixman)
24B params · 0K ctx · 13.4 GB · dense · Legacy

 

Mamba Codestral 7B v0.1 (Gabriellarson)
7B params · 0K ctx · 3.9 GB · dense · Legacy

 

ai21labs AI21 Jamba Reasoning 3B (Bartowski)
3B params · 0K ctx · 1.7 GB · dense · Legacy

 

starcoder2 15b instruct v0.1 (Lmstudio-community)
15B params · 0K ctx · 8.4 GB · dense · Legacy

 

Baichuan M3 235B (Mradermacher)
235B params · 0K ctx · 131.6 GB · dense · Legacy

 

stabilityai japanese stablelm base gamma 7b (RichardErkhov)
7B params · 0K ctx · 3.9 GB · dense · Legacy

 

CodeGeeX 4 9B (Tsinghua/Zhipu)
9B params · 131K ctx · 5 GB · current · dense · Legacy

We introduce CodeGeeX4-ALL-9B, the open-source version of the latest CodeGeeX4 model series. It is a multilingual code generation model continually trained on GLM-4-9B, significantly enhancing its code generation capabilities. A single CodeGeeX4-ALL-9B model supports comprehensive functions such as code completion and generation, a code interpreter, web search, function calls, and repository-level code Q&A, covering a wide range of software development scenarios. CodeGeeX4-ALL-9B achieves highly competitive performance on public benchmarks such as BigCodeBench and NaturalCodeBench.

Mistral Nemo 12B (Mistral)
12B params · 128K ctx · 6.7 GB · current · dense · Legacy

The Mistral-Nemo-Instruct-2407 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-Nemo-Base-2407. Trained jointly by Mistral AI and NVIDIA, it significantly outperforms existing models smaller or similar in size.

Page 9 of 14