Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)

Browse AI Models

10 models available

Meta · Llama 3.3 70B
70B · 128K ctx · 39.2 GB · current · dense

Llama 3.3 70B is Meta's most capable single-GPU-class model, offering improved reasoning and instruction following over Llama 3.1 70B. Supports 128K context with enhanced multilingual and code capabilities.
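The download sizes listed on this page are all consistent with roughly 0.56 bytes (about 4.5 bits) per parameter, which matches a 4-bit quantization plus metadata overhead. That constant is inferred from the listed figures, not taken from any documented formula, so treat this as a minimal sketch of how such estimates could be computed:

```python
def estimate_model_size_gb(params_billion: float) -> float:
    """Estimate a quantized model's footprint in GB from its parameter count.

    BYTES_PER_PARAM = 0.56 (~4.5 bits/param) is an assumption reverse-engineered
    from the sizes shown on this page; real quantized files vary by format.
    """
    BYTES_PER_PARAM = 0.56
    return round(params_billion * BYTES_PER_PARAM, 1)
```

Under that assumption, 70B parameters give 39.2 GB and 400B give 224 GB, matching the Llama 3.3 70B and Llama 4 Maverick entries above.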

Meta · Llama 4 Maverick 17B 128E
400B (17B active) · 1.0M ctx · 224 GB · frontier · MoE

Llama 4 Maverick is Meta's large MoE model with 17B active parameters and 128 experts (400B total). Delivers frontier-class performance on reasoning and coding while remaining deployable on a single node.

Meta · Llama 3.1 70B
70B · 128K ctx · 39.2 GB · legacy · dense

Llama 3.1 70B is Meta's high-capability open model with 128K context window. Excels at complex reasoning, multilingual tasks, code generation, and tool use with quality competitive with leading proprietary models.

Meta · Llama 4 Scout 17B 16E
109B (17B active) · 10.5M ctx · 61 GB · frontier · MoE

Llama 4 Scout is Meta's efficient Mixture-of-Experts model with 17B active parameters across 16 experts. Supports a 10M token context window and natively handles text, images, and video inputs.

Meta · Llama 3.2 11B Vision
11B · 16K ctx · 6.2 GB · legacy · vision

Llama 3.2 11B Vision is Meta's multimodal model that processes both text and images. Supports visual question answering, image captioning, and document understanding alongside standard text generation.

Meta · CodeLlama 13B Instruct
13B · 16K ctx · 7.3 GB · legacy · dense

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the 13B instruct-tuned version in the Hugging Face Transformers format, designed for general code synthesis and understanding.

Meta · CodeLlama 7B Instruct
7B · 16K ctx · 3.9 GB · legacy · dense

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the 7B instruct-tuned version in the Hugging Face Transformers format, designed for general code synthesis and understanding.

Meta · Llama 3.1 8B
8B · 128K ctx · 4.5 GB · legacy · dense

Llama 3.1 8B is Meta's efficient general-purpose model supporting 128K context and multilingual text generation. Optimized for dialogue, summarization, reasoning, and code generation tasks.

Meta · Llama 3.2 3B
3B · 128K ctx · 1.7 GB · legacy · dense

Llama 3.2 3B is Meta's compact multilingual text model optimized for edge and mobile deployment. Supports summarization, instruction following, and text generation with strong performance for its size class.

Meta · Llama 3.2 1B
1B · 128K ctx · 0.6 GB · legacy · dense

Llama 3.2 1B is Meta's smallest text model designed for on-device inference. Optimized for multilingual text generation, summarization, and instruction following on resource-constrained hardware.