
Browse AI Models

328 models available

Mradermacher: HelpingAI 9B 200k i1
9B · 0K ctx · 5 GB
dense · Legacy

Mradermacher: internlm2 5 7b chat i1
7B · 0K ctx · 3.9 GB
dense · Legacy

Mradermacher: HelpingAI 3B hindi i1
3B · 0K ctx · 1.7 GB
dense · Legacy

Mradermacher: internlm3 8b instruct abliterated i1
8B · 0K ctx · 4.5 GB
dense · Legacy

Mradermacher: HelpingAI2 6B i1
6B · 0K ctx · 3.4 GB
dense · Legacy

RichardErkhov: OpenSafetyLab MD Judge v0 2 internlm2 7b
7B · 0K ctx · 3.9 GB
dense · Legacy

Mradermacher: AI21 Jamba2 3B
3B · 0K ctx · 1.7 GB
dense · Legacy

Mradermacher: MD Judge v0 2 internlm2 7b i1
7B · 0K ctx · 3.9 GB
dense · Legacy

Mradermacher: HelpingAI2.5 10B i1
10B · 0K ctx · 5.6 GB
dense · Legacy

Baichuan: Baichuan 7B
7B · 8K ctx · 3.9 GB · legacy
dense · Legacy

Baichuan-7B is an open-source large-scale pre-trained model developed by Baichuan Intelligent Technology. Built on the Transformer architecture, it is a 7-billion-parameter model trained on roughly 1.2 trillion tokens, supports both Chinese and English, and has a context window of 4096 tokens. It achieves the best results for its size on the standard authoritative Chinese and English benchmarks (C-Eval/MMLU).

Mistral AI: Codestral Mamba 7B
7B · 262K ctx · 3.9 GB · current
dense · Legacy

Codestral Mamba is an open code model based on the Mamba2 architecture. It performs on par with state-of-the-art Transformer-based code models. You can read more in the official blog post.

DevStral AI: DevStral 7B
7B · 8K ctx · 3.9 GB · legacy
dense · Legacy

Devstral 7B is Mistral AI's specialized coding model optimized for software development tasks. It features strong code generation, completion, and understanding across multiple programming languages.

Zhipu: GLM-4 9B
9B · 128K ctx · 5 GB · current
dense · Legacy

2024/11/25: We recommend using `transformers>=4.46.0` together with the glm-4-9b-chat-hf checkpoint to reduce compatibility problems caused by future transformers upgrades.
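As a minimal sketch of what that note implies (not usage documented by this listing), the HF-native checkpoint can be loaded with a recent transformers release. The repo id THUDM/glm-4-9b-chat-hf and the example prompt are assumptions for illustration:

```python
# Minimal sketch, assuming transformers>=4.46.0 per the note above.
# THUDM/glm-4-9b-chat-hf is assumed to be the HF-native checkpoint the note
# refers to; the -hf variant ships standard model code (no trust_remote_code).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/glm-4-9b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native dtype
    device_map="auto",    # requires the accelerate package
)

# Build a chat prompt with the model's own template, then generate.
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize what a context window is."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```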

IBM: Granite Code 8B
8B · 8K ctx · 4.5 GB · current
dense · Legacy

Granite-8B-Code-Instruct-4K is an 8B-parameter model fine-tuned from Granite-8B-Code-Base-4K on a combination of permissively licensed instruction data to enhance instruction-following capabilities, including logical reasoning and problem-solving skills.

Nous Research: Nous Dolphin 13B
13B · 16K ctx · 7.3 GB · legacy
dense · Legacy

Dolphin 13B is a general-purpose uncensored model fine-tuned for broad capabilities including coding, reasoning, and creative writing without alignment restrictions.

Alibaba: Qwen 2.5 Coder 7B
7B · 131K ctx · 3.9 GB · current
dense · Legacy

Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). It currently covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers, and brings several improvements over CodeQwen1.5.

LMSYS: Vicuna 13B
13B · 4K ctx · 7.3 GB · legacy
dense · Legacy

Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT.

WizardLM: WizardLM 13B
13B · 8K ctx · 7.3 GB · legacy
dense · Legacy

Project Repo: https://github.com/nlpxucan/WizardLM

Mradermacher: HelpingAI 3B hindi
3B · 0K ctx · 1.7 GB
dense · Legacy

Mradermacher: zephyr 7b gemma sft african ultrachat 100k
7B · 0K ctx · 3.9 GB
dense · Legacy

Mradermacher: HelpingAI 9B i1
9B · 0K ctx · 5 GB
dense · Legacy

Mradermacher: Codestral 22B v0.1 i1
22B · 0K ctx · 12.3 GB
dense · Legacy

RichardErkhov: jointpreferences mistral 7b sft helpful
7B · 0K ctx · 3.9 GB
dense · Legacy

Mradermacher: zephyr 7b dpo full i1
7B · 0K ctx · 3.9 GB
dense · Legacy

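One pattern worth noting across this page: every listed download size works out to roughly 0.56 GB per billion parameters, about 4.5 bits per weight, which is typical of a mid-range 4-bit GGUF quantization such as Q4_K_M. A minimal sketch of that arithmetic follows; the 4.5 bits/weight figure is inferred from the listed numbers, not a formula documented by the site:

```python
# Reverse-engineered size estimate: listed GB appears to track
#   params * bits_per_weight / 8, in decimal gigabytes (1 GB = 1e9 bytes).
# bits_per_weight = 4.5 is an assumption inferred from this page
# (e.g. 7B -> 3.9 GB, 13B -> 7.3 GB), not a documented constant.

def estimated_size_gb(params_billion: float, bits_per_weight: float = 4.5) -> float:
    """Approximate quantized file size in decimal GB."""
    return params_billion * bits_per_weight / 8

for params in (3, 6, 7, 8, 9, 10, 13, 22):
    print(f"{params}B -> {estimated_size_gb(params):.1f} GB")
# Prints 3B -> 1.7 GB, 7B -> 3.9 GB, 9B -> 5.1 GB, 13B -> 7.3 GB, 22B -> 12.4 GB.
# The listing shows 5 GB for 9B and 12.3 GB for 22B, so its rounding
# appears to differ slightly from Python's round-half-even formatting.
```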
Page 12 of 14