Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)

Browse AI Models

328 models available
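The download sizes shown on the cards track a simple rule of thumb: a 4-bit-style GGUF quantization averages roughly 4.5 bits per weight, so file size ≈ parameters × 4.5 / 8 bytes. A minimal sketch of that estimate (the 4.5 bits/weight figure is an assumption inferred from the listed sizes, not a formula published by the site):

```python
def estimated_gguf_size_gb(params_billions: float, bits_per_weight: float = 4.5) -> float:
    """Rough quantized-file-size estimate: params * bits / 8, in decimal GB.

    bits_per_weight=4.5 approximates a Q4_K_M-style quantization — an
    assumption inferred from the listed sizes, not an official formula.
    """
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return round(bytes_total / 1e9, 1)

print(estimated_gguf_size_gb(7))   # 3.9 — matches the 3.9 GB shown for 7B models
print(estimated_gguf_size_gb(13))  # 7.3 — matches the 7.3 GB shown for 13B models
```

The same estimate also reproduces the 0.9 GB shown for the 1.6B entries, which is why 4.5 bits/weight is a plausible fit.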

UC Berkeley · Starling LM 7B
7B · 8K ctx · 3.9 GB · legacy
dense · Legacy

- Developed by: Banghua Zhu*, Evan Frick*, Tianhao Wu*, Hanlin Zhu, and Jiantao Jiao
- Model type: language model finetuned with RLHF / RLAIF
- License: Apache-2.0, under the condition that the model is not used to compete with OpenAI
- Finetuned from model: Openchat 3.5 (based on Mistral-7B-v0.1)

Bartowski · internlm JanusCoder 14B
14B · 0K ctx · 7.8 GB
dense · Legacy

Second-state · stablelm 2 zephyr 1.6b
1.6B · 0K ctx · 0.9 GB
dense · Legacy

Unsloth · Falcon H1 1.5B Instruct
1.5B · 0K ctx · 0.8 GB
dense · Legacy

Mradermacher · aya expanse 8b orthogonal heretic
8B · 0K ctx · 4.5 GB
dense · Legacy

Bartowski · internlm2 5 20b chat
20B · 0K ctx · 11.2 GB
dense · Legacy

Shaowenchen · baichuan2 7b chat
7B · 0K ctx · 3.9 GB
dense · Legacy

Legraphista · Codestral 22B v0.1 IMat
22B · 0K ctx · 12.3 GB
dense · Legacy

Mradermacher · Baichuan M3 235B i1
235B · 0K ctx · 131.6 GB
dense · Legacy

Mradermacher · logos16v2 stablelm2 1.6b i1
1.6B · 0K ctx · 0.9 GB
dense · Legacy

RichardErkhov · stabilityai japanese stablelm instruct beta 70b
70B · 0K ctx · 39.2 GB
dense · Legacy

Mradermacher · SOLAR 10.7B v1.0
10.7B · 0K ctx · 6 GB
dense · Legacy

Srs6901 · GGUF SOLARized GraniStral 14B 1902 YeAM HCT
14B · 0K ctx · 7.8 GB
dense · Legacy

Bartowski · baichuan inc Baichuan M2 32B
32B · 0K ctx · 17.9 GB
dense · Legacy

Bartowski · DiscoPOP zephyr 7b gemma
7B · 0K ctx · 3.9 GB
dense · Legacy

Bartowski · HelpingAI2 9B
9B · 0K ctx · 5 GB
dense · Legacy

Bartowski · ai21labs AI21 Jamba2 3B
3B · 0K ctx · 1.7 GB
dense · Legacy

Sentence Transformers · All MiniLM L6 v2
0.02B · 0K ctx · 0 GB · current
dense · Legacy

This is a sentence-transformers model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
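An embedding model like this is typically used by encoding texts into vectors and ranking candidates by cosine similarity. A minimal pure-Python sketch of the similarity step, using made-up 3-dimensional vectors in place of the model's real 384-dimensional output:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for embedding vectors (real ones would be 384-dim floats
# produced by the model).
query = [1.0, 0.0, 1.0]
docs = {"doc_a": [1.0, 0.1, 0.9], "doc_b": [-1.0, 0.5, 0.0]}

# Semantic search = rank documents by similarity to the query embedding.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # doc_a
```

The same ranking step works unchanged on vectors from any embedding model; only the encoding call differs.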

Baichuan · Baichuan 13B
13B · 8K ctx · 7.3 GB · legacy
dense · Legacy

Baichuan-13B-Chat is the aligned (chat) version in the Baichuan-13B model series; the pretrained base model is available as Baichuan-13B-Base.

Meta · CodeLlama 13B Instruct
13B · 16K ctx · 7.3 GB · legacy
dense · Legacy

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 13B instruct-tuned version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.

InternLM · InternLM Chat 7B
7B · 8K ctx · 3.9 GB · legacy
dense · Legacy

InternLM has open-sourced a 7 billion parameter base model and a chat model tailored for practical scenarios. The model has the following characteristics:
- It leverages trillions of high-quality tokens for training to establish a powerful knowledge base.
- It supports an 8k context window length, enabling longer input sequences and stronger reasoning capabilities.
- It provides a versatile toolset for users to flexibly build their own workflows.
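An 8k context window costs memory beyond the weights: each attention layer caches a key and a value vector per token. A sketch of that KV-cache estimate, using hypothetical configuration values (32 layers, 32 KV heads, head dim 128, fp16 cache) typical of a 7B dense model, not InternLM's published configuration:

```python
def kv_cache_bytes(ctx_len: int, n_layers: int = 32, n_kv_heads: int = 32,
                   head_dim: int = 128, bytes_per_elem: int = 2) -> int:
    """KV-cache size: 2 (K and V) * layers * tokens * kv_heads * head_dim * dtype bytes.

    Defaults are hypothetical values typical of a 7B dense model with an
    fp16 cache, not InternLM's published configuration.
    """
    return 2 * n_layers * ctx_len * n_kv_heads * head_dim * bytes_per_elem

# A full 8K-token context at these settings needs roughly 4.3 GB of cache
# on top of the weights themselves.
print(kv_cache_bytes(8192) / 1e9)
```

This is one reason longer context windows raise a model's effective memory requirement even when the quantized file size stays fixed.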

Mistral · Mistral 7B Instruct v0.3
7B · 8K ctx · 3.9 GB · legacy
dense · Legacy

The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.3.

Intel · Neural Chat 7B
7B · 8K ctx · 3.9 GB · legacy
dense · Legacy

Context length for this model: 8192 tokens (same as https://huggingface.co/mistralai/Mistral-7B-v0.1)

Nous Research · Nous Hermes 1.0
9B · 16K ctx · 5 GB · legacy
dense · Legacy

Nous Hermes is a fine-tuned model optimized for instruction following and helpful dialogue. Trained on curated datasets emphasizing quality responses, reasoning, and user alignment.

Page 10 of 14