Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.
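To illustrate where such size estimates come from, a model's download size can be roughly reproduced from its parameter count and an average bit-width per weight. A minimal sketch, assuming ~4.5 bits/weight as a typical average for common 4-bit quantizations (real files add tokenizer data, metadata, and mixed-precision layers, so actual sizes differ by a few percent):

```python
def approx_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough on-disk size estimate: parameters x bits per weight, in GB.

    Ignores metadata and mixed-precision layers, so real quantized
    files run slightly larger or smaller than this figure.
    """
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 7B model at an assumed ~4.5 bits/weight lands near the 3.9 GB
# listed for the 7B chat models in the catalog.
print(round(approx_size_gb(7, 4.5), 1))  # → 3.9
```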

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)

Browse AI Models

328 models available

TheBloke / Llama 2 7B Chat
7B · 0K ctx · 3.9 GB · dense · Legacy

Xtuner / LLaVA Llama 3 8B v1.1
8B · 0K ctx · 4.5 GB · dense · Legacy

Unsloth / Qwen3.5 397B A17B
397B · 0K ctx · 222.3 GB · dense · Legacy

Hugging-quants / Llama 3.2 1B Instruct Q8_0
1B · 0K ctx · 0.6 GB · dense · Legacy

Mistral / Devstral Small 2 24B Instruct
24B · 256K ctx · 13.4 GB · frontier · dense · Legacy

Devstral is an agentic LLM for software engineering tasks. Devstral Small 2 excels at using tools to explore codebases, edit multiple files, and power software engineering agents. The model achieves remarkable performance on SWE-bench.

Meta / Llama 3.3 70B
70B · 128K ctx · 39.2 GB · current · dense · Legacy

Llama 3.3 70B is Meta's most capable single-GPU-class model, offering improved reasoning and instruction following over Llama 3.1 70B. Supports 128K context with enhanced multilingual and code capabilities.

Meta / Llama 4 Maverick 17B 128E
400B (17B active) · 1.0M ctx · 224 GB · frontier · moe · Legacy

Llama 4 Maverick is Meta's large MoE model with 17B active parameters and 128 experts (400B total). Delivers frontier-class performance on reasoning and coding while remaining deployable on a single node.
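The total-versus-active split above is what makes MoE models cheap to run per token yet expensive to store: memory scales with total parameters (every expert must be resident), while per-token compute scales only with the active parameters. A quick arithmetic sketch; the ~4.48 bits/weight here is back-solved from the listed 224 GB, not a published figure:

```python
def moe_footprint_gb(total_params_b: float, bits_per_weight: float) -> float:
    """All experts must be resident in memory, so size tracks TOTAL params."""
    return total_params_b * 1e9 * bits_per_weight / 8 / 1e9

# Llama 4 Maverick: all 400B parameters must be stored, yet only 17B of
# them (about 4% of the weights) are active for any given token.
print(round(moe_footprint_gb(400, 4.48)))  # → 224
```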

Qwen / Qwen2.5 3B Instruct
3B · 0K ctx · 1.7 GB · dense · Legacy

Qwen / Qwen2.5 1.5B Instruct
1.5B · 0K ctx · 0.8 GB · dense · Legacy

Ggml-org / SmolVLM 500M Instruct
0.5B · 0K ctx · 0.3 GB · dense · Legacy

TheBloke / Mistral 7B Instruct v0.2
7B · 0K ctx · 3.9 GB · dense · Legacy

Unsloth / DeepSeek R1 0528 Qwen3 8B
8B · 0K ctx · 4.5 GB · dense · Legacy

TheBloke / TinyLlama 1.1B Chat v1.0
1.1B · 0K ctx · 0.6 GB · dense · Legacy

Cohere / Command A 111B
111B · 262K ctx · 62.2 GB · frontier · dense · Legacy

Command A is Cohere's latest flagship model with 111B parameters, designed for agentic enterprise applications. Features advanced tool use, multi-step reasoning, and retrieval-augmented generation.

Alibaba / Qwen 2.5 Coder 32B
32B · 131K ctx · 17.9 GB · current · dense · Legacy

Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). Qwen2.5-Coder covers six mainstream model sizes (0.5B, 1.5B, 3B, 7B, 14B, and 32B parameters) to meet the needs of different developers, and brings substantial improvements over CodeQwen1.5.

Alibaba / Qwen 2.5 VL 72B
72B · 33K ctx · 40.3 GB · frontier · dense · Legacy

Multimodal image-text-to-text model (Transformers library), released under the Qwen license: https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct/blob/main/LICENSE

Unsloth / Gemma 3 27B IT
27B · 0K ctx · 15.1 GB · dense · Legacy

TheDrummer / Gemmasutra Mini 2B v1
2B · 0K ctx · 1.1 GB · dense · Legacy

Lmstudio-community / Qwen3.5 9B
9B · 0K ctx · 5 GB · dense · Legacy

MaziyarPanahi / Mistral 7B Instruct v0.3
7B · 0K ctx · 3.9 GB · dense · Legacy

Lmstudio-community / Gemma 3 4B IT
4B · 0K ctx · 2.2 GB · dense · Legacy

Ggml-org / EmbeddingGemma 300M
0.3B · 0K ctx · 0.2 GB · dense · Legacy

MaziyarPanahi / Meta Llama 3 8B Instruct
8B · 0K ctx · 4.5 GB · dense · Legacy

Lmstudio-community / Qwen3.5 35B A3B
35B · 0K ctx · 19.6 GB · dense · Legacy

Page 2 of 14