Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.
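The download sizes listed on this page are consistent with roughly 4.5 bits per parameter, typical of 4-bit quantized weights; the 4.5 bits/parameter figure is our assumption, not stated anywhere on the page. A minimal sketch of that size estimate:

```python
def estimate_size_gb(params_billion: float, bits_per_param: float = 4.5) -> float:
    """Rough on-disk size of a quantized model: parameters * bits / 8, in GB.

    bits_per_param=4.5 is an assumed figure (typical of 4-bit quantization
    schemes), chosen because it reproduces the sizes listed in this catalog.
    """
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# Compare against listed sizes, e.g. 14.7B -> 8.2 GB and 32B -> 17.9 GB:
print(round(estimate_size_gb(14.7), 1))  # close to the listed 8.2 GB
print(round(estimate_size_gb(32.0), 1))  # close to the listed 17.9 GB
```

Treat this as a sanity check only; actual file sizes depend on the exact quantization format and embedded metadata.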

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)

Browse AI Models

328 models available

Microsoft · Phi-4-reasoning-plus 14B
14.7B · 33K ctx · 8.2 GB · frontier
dense · Legacy

> [!IMPORTANT]
> To fully take advantage of the model's capabilities, inference must use `temperature=0.8`, `top_k=50`, `top_p=0.95`, and `do_sample=True`. For more complex queries, set `max_new_tokens=32768` to allow for longer chain-of-thought (CoT).
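The recommended settings above can be bundled into a single kwargs dict for Hugging Face `transformers`; a minimal sketch (the checkpoint name in the commented usage is an assumption based on the card title):

```python
# Sampling settings recommended in the Phi-4-reasoning-plus card above, bundled
# so they can be passed straight to model.generate(**sampling_kwargs).
sampling_kwargs = dict(
    do_sample=True,        # sampling must be on for temperature/top_k/top_p to apply
    temperature=0.8,
    top_k=50,
    top_p=0.95,
    max_new_tokens=32768,  # room for a long chain-of-thought on complex queries
)

# Hypothetical usage with Hugging Face transformers (checkpoint name assumed):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("microsoft/Phi-4-reasoning-plus")
# model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-4-reasoning-plus")
# inputs = tok("Prove that sqrt(2) is irrational.", return_tensors="pt")
# print(tok.decode(model.generate(**inputs, **sampling_kwargs)[0]))
```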

Alibaba · Qwen 3 32B
32B · 131K ctx · 17.9 GB · frontier
dense · Legacy

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers significant advances in reasoning, instruction following, agent capabilities, and multilingual support.

MaziyarPanahi · Yi Coder 1.5B Chat
1.5B · 0K ctx · 0.8 GB
dense · Legacy

 

MaziyarPanahi · gemma 3 12b it
12B · 0K ctx · 6.7 GB
dense · Legacy

 

MaziyarPanahi · Llama 3.2 1B Instruct
1B · 0K ctx · 0.6 GB
dense · Legacy

 

MaziyarPanahi · gemma 2 2b it
2B · 0K ctx · 1.1 GB
dense · Legacy

 

MaziyarPanahi · Llama 3.2 3B Instruct
3B · 0K ctx · 1.7 GB
dense · Legacy

 

MaziyarPanahi · gemma 3 1b it
1B · 0K ctx · 0.6 GB
dense · Legacy

 

Bartowski · cognitivecomputations Dolphin Mistral 24B Venice Edition
24B · 0K ctx · 13.4 GB
dense · Legacy

 

MaziyarPanahi · DeepSeek R1 0528 Qwen3 8B
8B · 0K ctx · 4.5 GB
dense · Legacy

 

MaziyarPanahi · Mistral Small 24B Instruct 2501
24B · 0K ctx · 13.4 GB
dense · Legacy

 

MaziyarPanahi · Yi 1.5 6B Chat
6B · 0K ctx · 3.4 GB
dense · Legacy

 

Mistralai · Ministral 3 3B Instruct 2512
3B · 0K ctx · 1.7 GB
dense · Legacy

 

Cohere · Command R 35B
35B · 131K ctx · 19.6 GB · current
dense · Legacy

Command R is Cohere's retrieval-augmented generation model optimized for enterprise use. It excels at long-context document processing, tool use, and grounded generation with citation support.

Jina AI · Jina Embeddings v3
0.57B · 8K ctx · 0.3 GB · current
dense · Legacy

jina-embeddings-v3: Multilingual Embeddings With Task LoRA

Meta · Llama 3.2 11B Vision
11B · 16K ctx · 6.2 GB · legacy
vision · Legacy

Llama 3.2 11B Vision is Meta's multimodal model that processes both text and images. Supports visual question answering, image captioning, and document understanding alongside standard text generation.

Mistral · Magistral Small 2507
24B · 131K ctx · 13.4 GB · legacy
dense · Legacy

Magistral Small builds on Mistral Small 3.1 (2503) with added reasoning capabilities, undergoing SFT on traces from Magistral Medium followed by RL on top. The result is a small, efficient reasoning model with 24B parameters.

Mistral · Mixtral 8x7B
47B (13B active) · 33K ctx · 26.3 GB · current
moe · Legacy

```python
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
```

01.AI · Yi 34B Chat
34B · 200K ctx · 19 GB · legacy
dense · Legacy


MaziyarPanahi · gemma 3 27b it
27B · 0K ctx · 15.1 GB
dense · Legacy

 

MaziyarPanahi · Yi Coder 9B Chat
9B · 0K ctx · 5 GB
dense · Legacy

 

Bartowski · glm 4 9b chat 1m
9B · 0K ctx · 5 GB
dense · Legacy

 

TeichAI · Qwen3 8B DeepSeek v3.2 Speciale Distill
8B · 0K ctx · 4.5 GB
dense · Legacy

 

BAAI · BGE M3
0.57B · 8K ctx · 0.3 GB · current
dense · Legacy

For more details, please refer to the GitHub repo: https://github.com/FlagOpen/FlagEmbedding

Page 4 of 14