Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)


Meta

CodeLlama 13B Instruct

Legacy
Hugging Face
Downloads: 10.0K · Likes: 159 · Released: Aug 2023 · Context: 16K tokens · License: Community · Quality: Entry (3)

Get started

Copy and paste to run locally:
Ollama
ollama run codellama:13b-instruct
HuggingFace
huggingface-cli download codellama/CodeLlama-13b-Instruct-hf

Quick specs

Parameters: 13B
Architecture: dense
Context: 16K tokens
Modality: code
Min RAM: 5.1 GB
Rec. RAM: 7.9 GB (Q4_K_M)
License: Community
Family: CodeLlama
Capabilities: ✓ Code

About this model

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the 13B instruct-tuned version in the Hugging Face Transformers format, designed for general code synthesis and understanding.

  • [x] Code completion
  • [x] Instructions / chat
  • [ ] Python specialist



Quick picks

  • Best budget (AMD, grade C): RX 7600 XT 16GB · ~$329 · ~21 tok/s
  • Best overall (NVIDIA, grade B): RTX 5080 16GB · ~$999 · ~79 tok/s

Best hardware

Top picks for CodeLlama 13B Instruct

  • NVIDIA RTX 5080 Laptop 16GB · grade B · 16 GB
  • NVIDIA RTX 5080 16GB · grade B · 16 GB
  • NVIDIA RTX 4080 Super 16GB · grade B · 16 GB
  • NVIDIA RTX 5070 Ti 16GB · grade B · 16 GB
  • NVIDIA RTX 4070 Ti Super 16GB · grade B · 16 GB

Quantization options

VRAM estimates by quant level

No hardware detected — fit column shows raw VRAM estimates

Quant    Bits  VRAM     Quality    Fit
Q2_K     2     5.1 GB   Low        —
Q3_K_S   3     6.4 GB   Low        —
NVFP4    4     7.3 GB   Medium     —
Q4_K_M   4     7.9 GB   Medium     —
Q5_K_M   5     9.4 GB   High       —
Q6_K     6     10.7 GB  High       —
Q8_0     8     13.9 GB  Very High  —
F16      16    26.7 GB  Maximum    —


Memory breakdown

Reference: NVIDIA A10 24GB

Weights: 7.9 GB
KV Cache: 2.0 GB
Runtime: 0.9 GB
Headroom: 2.4 GB
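The KV-cache line scales linearly with how many tokens of context you actually use. A sketch of the standard formula, assuming CodeLlama 13B's architecture (40 layers, hidden size 5120, no grouped-query attention) and an fp16 cache; under those assumptions the 2.0 GB figure above corresponds to roughly 2.4K cached tokens, not the full 16K window:

```python
def kv_cache_gb(n_tokens: int, n_layers: int = 40, hidden: int = 5120,
                bytes_per_val: int = 2) -> float:
    """KV cache size: 2 (K and V) x layers x hidden dim x bytes x tokens."""
    return 2 * n_layers * hidden * bytes_per_val * n_tokens / 1e9

print(f"{kv_cache_gb(2048):.1f} GB at 2K tokens")   # ~1.7 GB
```

At the full 16K context this grows to ~13.4 GB in fp16, which is why long-context runs often use a quantized (e.g. 8-bit) KV cache.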