
NVIDIA RTX 3080 10GB

RTX 30 · Consumer · Ampere · PCIe 4 · CUDA
VRAM: 10 GB
Bandwidth: 760 GB/s
FP16 Compute: 60 TFLOPS
INT8 Inference: 480 TOPS
Value: 8.58 TF/$k
MSRP: $699

[Comparison chart: RTX 3080 10GB vs. category average vs. GTX 1080 Ti 11GB]

Specifications

Compute
FP16: 60 TFLOPS
INT8: 480 TOPS
Architecture: Ampere

Memory
VRAM: 10 GB
Bandwidth: 760 GB/s

General
Family: RTX 30
Segment: Consumer
Interconnect: PCIe 4
Compute Platform: CUDA
MSRP: $699

Architecture

Ampere

Ampere is NVIDIA's second-generation RTX architecture, built on Samsung's 8nm process. It introduced 3rd-generation Tensor Cores with support for sparsity-accelerated INT8 operations and improved FP16 throughput over Turing.

AI Relevance

Sparsity-aware Tensor Cores can effectively double throughput for structured sparse workloads. However, the lack of FP8 support means quantized inference is less efficient than on Ada Lovelace or Blackwell.

Process: Samsung 8nm · Platform: CUDA · Tensor Cores: Gen 3 · Precisions: FP32, FP16, BF16, INT8, INT4
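
To make the sparsity and FP8 points concrete, here is a minimal sketch of how one might model per-precision tensor-core throughput on this card. It assumes, without confirmation from this page, that the listed 480 TOPS INT8 figure is the sparsity-accelerated rate and that INT4 runs at double the INT8 rate; the `throughput` helper is hypothetical.

```python
# Illustrative sketch only: one way to model Ampere tensor-core
# throughput per precision, including the 2:4 structured-sparsity
# doubling and the lack of native FP8. All constants are assumptions;
# in particular we guess the page's 480 TOPS INT8 is the sparse rate.

AMPERE_DENSE = {
    "fp16": 60.0,    # TFLOPS, as listed on this page
    "bf16": 60.0,    # BF16 runs at the FP16 rate on Ampere
    "int8": 240.0,   # TOPS, assumed half of the listed sparse 480
    "int4": 480.0,   # TOPS, typically double the INT8 rate
}

def throughput(precision: str, sparse_2_4: bool = False) -> float:
    """Effective tensor-core rate; FP8 falls back to FP16 on Ampere."""
    if precision == "fp8":
        precision = "fp16"   # no FP8 tensor cores before Ada Lovelace
    rate = AMPERE_DENSE[precision]
    return rate * 2.0 if sparse_2_4 else rate

print(throughput("int8", sparse_2_4=True))  # 480.0, the page's INT8 figure
print(throughput("fp8"))                    # 60.0: no native FP8 speedup
```

The FP8 fallback line is the practical cost noted above: FP8-quantized checkpoints gain nothing over the FP16 path on this architecture.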

Recommendations by Workload

Agentic Coding

Grade C · Codestral Mamba 7B

This model is still usable for agentic coding, but it is not the most specialized pick. It sits in the middle of the current model mix. It should run, but memory headroom will be limited. Known channels: huggingface, ollama.

Decode 135.3 tok/s · 38K ctx · llama.cpp
8.4 GB / 10.0 GB VRAM

Chat

Grade B · Qwen 3 8B

This model is a direct match for chat. It belongs to a current frontier family for local AI. It fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 118.4 tok/s · 11K ctx · llama.cpp
7.6 GB / 10.0 GB VRAM

Coding

Grade B · Codestral Mamba 7B

This model is still usable for coding, but it is not the most specialized pick. It sits in the middle of the current model mix. It fits natively with comfortable headroom. Known channels: huggingface, ollama.

Decode 135.3 tok/s · 22K ctx · llama.cpp
7.3 GB / 10.0 GB VRAM

RAG

Grade C · granite 8b code instruct 4k

This model is a direct match for RAG. It sits in the middle of the current model mix. It should run, but memory headroom will be limited; see the VRAM budget sketch after these cards.

Decode 118.4 tok/s · 34K ctx · llama.cpp
9.3 GB / 10.0 GB VRAM

Reasoning

Grade B · Qwen 3 8B

This model is a direct match for reasoning. It belongs to a current frontier family for local AI. It fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 118.4 tok/s · 20K ctx · llama.cpp
8.0 GB / 10.0 GB VRAM
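
The headroom verdicts in these cards follow the usual local-inference budget: quantized weights plus KV cache must fit inside the card's 10 GB of VRAM. Below is a minimal sketch of that budget; the bytes-per-parameter constant, the model shape, and the 80% comfort threshold are illustrative assumptions, not the site's actual estimation model.

```python
# Minimal sketch of the VRAM budget behind "comfortable headroom" vs
# "memory will be limited": quantized weights plus KV cache against
# the card's 10 GB. All constants are illustrative assumptions.

def weights_gb(params_b: float, bytes_per_param: float = 0.55) -> float:
    # ~0.55 bytes/param roughly approximates a 4-bit quant plus overhead
    return params_b * bytes_per_param

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                ctx_tokens: int, bytes_per_val: int = 2) -> float:
    # K and V per layer per token, FP16 cache
    return 2 * layers * kv_heads * head_dim * ctx_tokens * bytes_per_val / 1e9

VRAM_GB = 10.0
# Hypothetical 8B dense model with a Llama-3-like shape at 11K context
total = weights_gb(8.0) + kv_cache_gb(layers=32, kv_heads=8,
                                      head_dim=128, ctx_tokens=11_000)
print(f"{total:.1f} GB of {VRAM_GB:.1f} GB")  # ~5.8 GB
print("comfortable" if total < 0.8 * VRAM_GB else "limited headroom")
```

The site's own per-model estimates run higher than this toy total, so they presumably include runtime overheads beyond weights and cache.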

Full Model Compatibility

Each row: publisher · model · grade (score) · parameters · est. VRAM · decode speed · max context · architecture. A decode-speed rule-of-thumb sketch follows the table.

Bartowski · Meta Llama 3.1 8B Instruct · B (57) · 8B · 8.0 GB · 118 tok/s · 20K ctx · dense
TheBloke · Llama 2 7B Chat · B (57) · 7B · 7.3 GB · 135 tok/s · 22K ctx · dense
Xtuner · llava llama 3 8b v1 1 · B (57) · 8B · 8.0 GB · 118 tok/s · 20K ctx · dense
TheBloke · Mistral 7B Instruct v0.2 · B (57) · 7B · 7.3 GB · 135 tok/s · 22K ctx · dense
Unsloth · DeepSeek R1 0528 Qwen3 8B · B (57) · 8B · 8.0 GB · 118 tok/s · 20K ctx · dense
MaziyarPanahi · Mistral 7B Instruct v0.3 · B (57) · 7B · 7.3 GB · 135 tok/s · 22K ctx · dense
MaziyarPanahi · Meta Llama 3 8B Instruct · B (57) · 8B · 8.0 GB · 118 tok/s · 20K ctx · dense
Unsloth · Qwen3.5 9B · C (54) · 9B · 8.8 GB · 105 tok/s · 18K ctx · dense
HauhauCS · Qwen3.5 9B Uncensored HauhauCS Aggressive · C (54) · 9B · 8.8 GB · 105 tok/s · 18K ctx · dense
Lmstudio-community · Qwen3.5 9B · C (54) · 9B · 8.8 GB · 105 tok/s · 18K ctx · dense
Unsloth · Qwen3.5 4B · C (53) · 4B · 5.1 GB · 237 tok/s · 31K ctx · dense
Lmstudio-community · gemma 3 4b it · C (53) · 4B · 5.1 GB · 237 tok/s · 31K ctx · dense
Bartowski · Llama 3.2 3B Instruct · C (53) · 3B · 4.9 GB · 273 tok/s · 33K ctx · dense
Qwen · Qwen2.5 3B Instruct · C (52) · 3B · 4.5 GB · 316 tok/s · 35K ctx · dense
Bartowski · gemma 2 2b it · C (51) · 2B · 4.3 GB · 370 tok/s · 37K ctx · dense
Google · gemma 2b · C (51) · 2B · 3.9 GB · 473 tok/s · 41K ctx · dense
TheDrummer · Gemmasutra Mini 2B v1 · C (50) · 2B · 3.9 GB · 473 tok/s · 41K ctx · dense
Qwen · Qwen2.5 1.5B Instruct · C (50) · 1.5B · 3.6 GB · 578 tok/s · 44K ctx · dense
Hugging-quants · Llama 3.2 1B Instruct Q8 0 · C (50) · 1B · 3.5 GB · 607 tok/s · 45K ctx · dense
TheBloke · TinyLlama 1.1B Chat v1.0 · C (49) · 1.1B · 3.4 GB · 578 tok/s · 47K ctx · dense
Ggml-org · SmolVLM 500M Instruct · C (49) · 0.5B · 3.1 GB · 607 tok/s · 51K ctx · dense
Ggml-org · embeddinggemma 300M · C (48) · 0.3B · 2.9 GB · 607 tok/s · 54K ctx · dense
DeepSeek · DeepSeek R1 671B · F (0) · 671B · 417.0 GB · 4 tok/s · 4K ctx · moe
Mistral · Devstral 2 123B Instruct · F (0) · 123B · 96.1 GB · 8 tok/s · 4K ctx · dense
Z.ai · GLM-5 · F (0) · 744B · 462.0 GB · 4 tok/s · 4K ctx · moe
Unsloth · Qwen3.5 27B · F (0) · 27B · 22.6 GB · 35 tok/s · 7K ctx · dense
Unsloth · Qwen3.5 35B A3B · F (0) · 35B · 28.7 GB · 27 tok/s · 6K ctx · dense
Moonshot AI · Kimi K2.5 · F (0) · 1000B · 616.9 GB · 3 tok/s · 4K ctx · moe
Mistral · Mistral Large 3 · F (0) · 675B · 420.1 GB · 4 tok/s · 4K ctx · moe (+1)
Mistral · Mistral Small 4 119B · F (0) · 119B · 75.5 GB · 23 tok/s · 4K ctx · moe
Alibaba · Qwen3-Coder 30B A3B Instruct · F (0) · 30.5B · 21.3 GB · 80 tok/s · 8K ctx · moe
Alibaba · Qwen3-Coder 480B A35B Instruct · F (0) · 480B · 300.2 GB · 6 tok/s · 4K ctx · moe
Alibaba · Qwen3-Coder-Next · F (0) · 80B · 51.5 GB · 36 tok/s · 4K ctx · moe
Unsloth · Qwen3.5 122B A10B · F (0) · 122B · 80.7 GB · 9 tok/s · 4K ctx · dense
DeepSeek · DeepSeek V3 671B · F (0) · 671B · 417.0 GB · 4 tok/s · 4K ctx · moe
Mistral · Mixtral 8x22B · F (0) · 141B · 94.0 GB · 13 tok/s · 4K ctx · moe
Alibaba · Qwen 2.5 72B · F (0) · 72B · 57.1 GB · 13 tok/s · 4K ctx · dense
Alibaba · Qwen 3 235B A22B · F (0) · 235B · 148.7 GB · 11 tok/s · 4K ctx · moe
Alibaba · Qwen3-VL 30B A3B Instruct · F (0) · 30B · 21.0 GB · 83 tok/s · 8K ctx · moe
Unsloth · Qwen3.5 397B A17B · F (0) · 397B · 306.1 GB · 2 tok/s · 4K ctx · dense
Mistral · Devstral Small 2 24B Instruct · F (0) · 24B · 20.3 GB · 40 tok/s · 8K ctx · dense
Meta · Llama 3.3 70B · F (0) · 70B · 55.5 GB · 14 tok/s · 4K ctx · dense
Meta · Llama 4 Maverick 17B 128E · F (0) · 400B · 248.6 GB · 7 tok/s · 4K ctx · moe
Cohere · Command A 111B · F (0) · 111B · 87.0 GB · 9 tok/s · 4K ctx · dense
Alibaba · Qwen 2.5 Coder 32B · F (0) · 32B · 26.4 GB · 30 tok/s · 6K ctx · dense
Alibaba · Qwen 2.5 VL 72B · F (0) · 72B · 57.1 GB · 13 tok/s · 4K ctx · dense
Unsloth · gemma 3 27b it · F (0) · 27B · 22.6 GB · 35 tok/s · 7K ctx · dense
Lmstudio-community · Qwen3.5 35B A3B · F (0) · 35B · 28.7 GB · 27 tok/s · 6K ctx · dense
Mistral · Codestral 2 25.08 · F (0) · 22B · 18.8 GB · 43 tok/s · 9K ctx · dense
Mistral · Devstral Small 1.1 · F (0) · 24B · 20.3 GB · 40 tok/s · 8K ctx · dense
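
A pattern worth noting in the table: decode speed scales roughly inversely with model footprint, which is what the standard bandwidth-bound rule of thumb predicts, since each generated token streams the active weights through memory once. Below is a sketch of that rule; the efficiency factor is an assumed fudge, and the site's own model clearly includes further terms.

```python
# Rule-of-thumb sketch: decode is memory-bandwidth-bound, so tok/s is
# roughly bandwidth divided by the bytes streamed per token (the
# active weights). The efficiency factor is an assumption, not a
# constant taken from this site.

BANDWIDTH_GBPS = 760.0   # RTX 3080 10GB, as listed above

def decode_tok_s(active_weight_gb: float, efficiency: float = 0.6) -> float:
    return BANDWIDTH_GBPS * efficiency / active_weight_gb

print(f"{decode_tok_s(4.0):.0f} tok/s")  # ~114 for ~4 GB of active weights
# MoE models stream only their active experts per token, which is why
# Qwen3-Coder 30B A3B decodes far faster than dense models of its size.
```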

Just out of reach

High-quality models you could run with an upgrade: each needs a bit more memory than this card offers. A cheapest-fit lookup sketch follows the list.

DeepSeek · DeepSeek R1 671B · 671B · Tier 5 · needs ~422.8 GB
Mistral · Devstral 2 123B Instruct · 123B · Tier 5 · needs ~115.4 GB · runs on Mac Studio M3 Ultra 256GB
Z.ai · GLM-5 · 744B · Tier 5 · needs ~468.2 GB
Unsloth · Qwen3.5 27B · 27B · Tier 5 · needs ~26.8 GB · runs on RTX 5090 32GB (~$1,999)
Unsloth · Qwen3.5 35B A3B · 35B · Tier 5 · needs ~34.2 GB · runs on Mac mini M4 64GB (~$1,099)
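
The "runs on" annotations behave like a lookup for hardware whose memory covers the model's estimated requirement. Here is a toy sketch of the cheapest-fit baseline; the device list and prices are taken loosely from the annotations above, and the site's real picks evidently weigh speed as well, since it suggests the RTX 5090 for the 26.8 GB model where a cheaper large-memory Mac would also fit.

```python
# Sketch of a "runs on" lookup: cheapest device whose memory covers
# the model's estimated requirement. Device list is illustrative, not
# the site's catalog.

DEVICES = [  # (name, memory_gb, price_usd)
    ("RTX 5090 32GB", 32, 1999),
    ("Mac mini M4 64GB", 64, 1099),
    ("Mac Studio M3 Ultra 256GB", 256, 3999),
]

def cheapest_fit(needs_gb: float):
    fits = [d for d in DEVICES if d[1] >= needs_gb]
    return min(fits, key=lambda d: d[2]) if fits else None

print(cheapest_fit(34.2))   # ('Mac mini M4 64GB', 64, 1099)
print(cheapest_fit(468.2))  # None: nothing in this toy list is large enough
```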

Upgrade paths

Upgrade from RTX 3080 10GB

See what you unlock with more powerful hardware

Upgrade options

NVIDIA GTX 1080 Ti 11GB · Next step up
11 GB VRAM (+1) · Grade A
Unlocks: gemma 3 12b it, Llama 3.2 11B Vision, DeepSeek Coder V2 16B, +3 more

 

NVIDIA RTX A2000 12GB · NVIDIA upgrade
12 GB VRAM (+2) · Grade A
Unlocks: gemma 3 12b it, Llama 3.2 11B Vision, DeepSeek Coder V2 16B, +8 more

 

AMD RX 7600 XT 16GB · Best value
16 GB VRAM (+6) · Grade A
Unlocks: StarCoder 15B, DeepSeek R1 Distill Qwen 14B, Phi-4-reasoning-plus 14B, +34 more
~$329 MSRP

AMD Instinct MI350X 288GB · Biggest leap
288 GB VRAM (+278) · 8,000 GB/s (+7,240) · Grade A
Unlocks: Devstral 2 123B Instruct, Qwen3.5 27B, Qwen3.5 35B A3B, +118 more · +601% faster avg (estimation sketch below)
~$8,000 MSRP
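
The "+601% faster avg" figure reads like a mean per-model speedup over the models both cards can run, though the averaging method is not documented on this page. One plausible interpretation, with invented speed pairs:

```python
# One plausible reading of "+601% faster avg": the mean per-model
# decode-speed ratio over models runnable on both cards, minus one.
# Whether the site uses mean-of-ratios, median, or a weighted scheme
# is unknown; the numbers below are invented.

def avg_pct_faster(pairs):
    """pairs: (tok/s on the upgrade, tok/s on this card) per model."""
    ratios = [new / old for new, old in pairs]
    return (sum(ratios) / len(ratios) - 1.0) * 100.0

shared = [(827, 118), (946, 135), (1662, 237)]  # invented speed pairs
print(f"+{avg_pct_faster(shared):.0f}% faster avg")  # +601%
```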
