NVIDIA RTX 3090 Ti 24GB

RTX 30 · Consumer · Ampere · PCIe 4 · CUDA

VRAM: 24 GB
Bandwidth: 1008 GB/s
FP16 Compute: 80 TFLOPS
INT8 Inference: 640 TOPS
MSRP: $1,999

[Radar chart: VRAM, bandwidth, compute, inference, and value (4 TF/$k) for the RTX 3090 Ti 24GB vs. the category average and the AMD Instinct MI100 32GB]

Specifications

Compute
FP16: 80 TFLOPS
INT8: 640 TOPS
Architecture: Ampere

Memory
VRAM: 24 GB
Bandwidth: 1008 GB/s

General
Family: RTX 30
Segment: Consumer
Interconnect: PCIe 4
Compute Platform: CUDA
MSRP: $1,999
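
A practical note on these numbers: for single-user inference, decode speed is bounded by memory bandwidth rather than TFLOPS, because each generated token has to stream the active weights through the GPU. A minimal sketch of that rule of thumb; the 0.7 efficiency factor is an illustrative assumption, not this site's calibration.

```python
# Rough decode-speed estimate for memory-bandwidth-bound generation.
# Assumption: each generated token streams the full quantized weights once;
# the 0.7 efficiency factor is an illustrative guess, not this site's model.

def estimate_decode_tok_s(bandwidth_gb_s: float, weights_gb: float,
                          efficiency: float = 0.7) -> float:
    """tok/s ~= effective bandwidth / bytes read per token."""
    return efficiency * bandwidth_gb_s / weights_gb

# RTX 3090 Ti: 1008 GB/s. An ~8B model at 4-bit is roughly 5 GB of weights.
print(f"{estimate_decode_tok_s(1008, 5.0):.0f} tok/s")  # ~141 tok/s
```

This lands in the same range as the 8B entries in the compatibility table below, which is why cards with similar TFLOPS but different bandwidth post very different decode numbers.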

Architecture

Ampere

Ampere is NVIDIA's second-generation RTX architecture, built on Samsung's 8nm process. It introduced 3rd-generation Tensor Cores with support for sparsity-accelerated INT8 operations and improved FP16 throughput over Turing.

AI Relevance

Sparsity-aware Tensor Cores can effectively double throughput for structured sparse workloads. However, the lack of FP8 support means quantized inference is less efficient than Ada Lovelace or Blackwell.

Process: Samsung 8nm · Platform: CUDA · Tensor Cores: Gen 3 · Precisions: FP32, FP16, BF16, INT8, INT4
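
To put the sparsity claim in numbers: a 2:4 structured-sparse GEMM runs at twice the dense Tensor Core rate, so the end-to-end gain depends on how much of the model's matrix work is actually pruned. A minimal sketch; the 640 dense TOPS comes from the spec table above, while the 50% sparse mix in the example is a made-up workload.

```python
# Effective INT8 throughput when a fraction of the GEMM work (measured in
# dense-equivalent operations) is pruned to NVIDIA's 2:4 structured pattern.
# Sparse GEMMs execute at 2x the dense rate on Ampere Tensor Cores, so we
# total up execution time and divide it back out (a harmonic mean).

DENSE_INT8_TOPS = 640.0  # RTX 3090 Ti, from the spec table above
SPARSITY_SPEEDUP = 2.0   # 2:4 sparsity doubles the Tensor Core math rate

def effective_tops(sparse_work_fraction: float) -> float:
    relative_time = (sparse_work_fraction / SPARSITY_SPEEDUP
                     + (1.0 - sparse_work_fraction))
    return DENSE_INT8_TOPS / relative_time

print(effective_tops(0.0))  # 640.0  -> fully dense model
print(effective_tops(0.5))  # ~853.3 -> half the GEMM work pruned to 2:4
print(effective_tops(1.0))  # 1280.0 -> the headline "doubled" figure
```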

Recommendations by Workload

Agentic Coding

Gemma 3 12B (Grade C)

This model is still usable for agentic coding, but it is not the most specialized pick. It sits in the middle of the current model mix. It fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 97.8 tok/s · 53K ctx · llama.cpp
14.4 GB / 24.0 GB VRAM

Chat

Qwen 3 14B (Grade C)

This model is a direct match for chat. It belongs to a current frontier family for local AI. It fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 83.8 tok/s · 15K ctx · llama.cpp
12.9 GB / 24.0 GB VRAM

Coding

Devstral Small 2 24B Instruct (Grade C)

This model is a direct match for coding. It belongs to a current frontier family for local AI. It should run, but memory headroom will be limited. Known channels: huggingface, ollama, lm-studio.

Decode 48.9 tok/s · 18K ctx · llama.cpp
21.7 GB / 24.0 GB VRAM

RAG

granite 8b code instruct 4k (Grade C)

This model is a direct match for RAG. It sits in the middle of the current model mix. It fits natively with comfortable headroom.

Decode 146.7 tok/s · 72K ctx · llama.cpp
10.7 GB / 24.0 GB VRAM

Reasoning

Qwen 3 14B (Grade C)

This model is a direct match for reasoning. It belongs to a current frontier family for local AI. It fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 83.8 tok/s · 27K ctx · llama.cpp
14.0 GB / 24.0 GB VRAM
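
Across these cards, the headroom wording tracks one number: estimated VRAM use divided by the card's 24 GB. A small sketch of that classification; the thresholds are hypothetical, since the site does not publish its cutoffs, and the two example calls reuse figures from the cards above.

```python
# Headroom label from estimated VRAM use vs. card capacity.
# The 0.80 / 0.95 thresholds are hypothetical, chosen only so that the
# two examples below reproduce the wording used on this page.

def headroom_label(used_gb: float, total_gb: float) -> str:
    ratio = used_gb / total_gb
    if ratio <= 0.80:
        return "fits natively with comfortable headroom"
    if ratio <= 0.95:
        return "should run, but memory headroom will be limited"
    return "does not fit"

print(headroom_label(14.4, 24.0))  # Gemma 3 12B: ~60% used -> comfortable
print(headroom_label(21.7, 24.0))  # Devstral Small 2: ~90% used -> limited
```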

Full Model Compatibility

Each entry: uploader · model: grade (score) · parameters · estimated VRAM · decode speed · max context · architecture.

Alibaba · Qwen3-Coder 30B A3B Instruct: C (54) · 30.5B · 22.7 GB · 100 tok/s · 17K ctx · MoE
Alibaba · Qwen3-VL 30B A3B Instruct: C (54) · 30B · 22.4 GB · 103 tok/s · 17K ctx · MoE
Mistral · Codestral 2 25.08: C (52) · 22B · 20.2 GB · 53 tok/s · 19K ctx · dense
Mistral · Devstral Small 2 24B Instruct: C (52) · 24B · 21.7 GB · 49 tok/s · 18K ctx · dense
Mistral · Devstral Small 1.1: C (52) · 24B · 21.7 GB · 49 tok/s · 18K ctx · dense
Unsloth · Qwen3.5 27B: C (52) · 27B · 24.0 GB · 44 tok/s · 16K ctx · dense
Unsloth · gemma 3 27b it: C (52) · 27B · 24.0 GB · 44 tok/s · 16K ctx · dense
Unsloth · Qwen3.5 9B: C (51) · 9B · 10.2 GB · 130 tok/s · 38K ctx · dense
HauhauCS · Qwen3.5 9B Uncensored HauhauCS Aggressive: C (51) · 9B · 10.2 GB · 130 tok/s · 38K ctx · dense
Lmstudio-community · Qwen3.5 9B: C (51) · 9B · 10.2 GB · 130 tok/s · 38K ctx · dense
Bartowski · Meta Llama 3.1 8B Instruct: C (51) · 8B · 9.4 GB · 147 tok/s · 41K ctx · dense
Xtuner · llava llama 3 8b v1 1: C (51) · 8B · 9.4 GB · 147 tok/s · 41K ctx · dense
Unsloth · DeepSeek R1 0528 Qwen3 8B: C (51) · 8B · 9.4 GB · 147 tok/s · 41K ctx · dense
MaziyarPanahi · Meta Llama 3 8B Instruct: C (50) · 8B · 9.4 GB · 147 tok/s · 41K ctx · dense
TheBloke · Llama 2 7B Chat: C (50) · 7B · 8.7 GB · 168 tok/s · 44K ctx · dense
TheBloke · Mistral 7B Instruct v0.2: C (50) · 7B · 8.7 GB · 168 tok/s · 44K ctx · dense
MaziyarPanahi · Mistral 7B Instruct v0.3: C (50) · 7B · 8.7 GB · 168 tok/s · 44K ctx · dense
Unsloth · Qwen3.5 4B: C (48) · 4B · 6.5 GB · 293 tok/s · 59K ctx · dense
Lmstudio-community · gemma 3 4b it: C (48) · 4B · 6.5 GB · 293 tok/s · 59K ctx · dense
Bartowski · Llama 3.2 3B Instruct: C (48) · 3B · 6.3 GB · 338 tok/s · 61K ctx · dense
Qwen · Qwen2.5 3B Instruct: C (47) · 3B · 5.9 GB · 391 tok/s · 65K ctx · dense
Bartowski · gemma 2 2b it: C (47) · 2B · 5.7 GB · 458 tok/s · 67K ctx · dense
Google · gemma 2b: C (47) · 2B · 5.3 GB · 587 tok/s · 72K ctx · dense
TheDrummer · Gemmasutra Mini 2B v1: C (47) · 2B · 5.3 GB · 587 tok/s · 72K ctx · dense
Qwen · Qwen2.5 1.5B Instruct: C (47) · 1.5B · 5.0 GB · 716 tok/s · 77K ctx · dense
Hugging-quants · Llama 3.2 1B Instruct Q8 0: C (47) · 1B · 4.9 GB · 752 tok/s · 78K ctx · dense
TheBloke · TinyLlama 1.1B Chat v1.0: C (46) · 1.1B · 4.8 GB · 716 tok/s · 80K ctx · dense
Ggml-org · SmolVLM 500M Instruct: C (46) · 0.5B · 4.5 GB · 752 tok/s · 85K ctx · dense
Ggml-org · embeddinggemma 300M: C (46) · 0.3B · 4.3 GB · 752 tok/s · 88K ctx · dense
Alibaba · Qwen 2.5 Coder 32B: C (41) · 32B · 27.8 GB · 32 tok/s · 14K ctx · dense
DeepSeek · DeepSeek R1 671B: F (0) · 671B · 418.4 GB · 5 tok/s · 4K ctx · MoE
Mistral · Devstral 2 123B Instruct: F (0) · 123B · 97.5 GB · 10 tok/s · 4K ctx · dense
Z.ai · GLM-5: F (0) · 744B · 463.4 GB · 5 tok/s · 4K ctx · MoE
Unsloth · Qwen3.5 35B A3B: F (0) · 35B · 30.1 GB · 34 tok/s · 13K ctx · dense
Moonshot AI · Kimi K2.5: F (0) · 1000B · 618.3 GB · 4 tok/s · 4K ctx · MoE
Mistral · Mistral Large 3: F (0) · 675B · 421.5 GB · 5 tok/s · 4K ctx · MoE
Mistral · Mistral Small 4 119B: F (0) · 119B · 76.9 GB · 29 tok/s · 5K ctx · MoE
Alibaba · Qwen3-Coder 480B A35B Instruct: F (0) · 480B · 301.6 GB · 7 tok/s · 4K ctx · MoE
Alibaba · Qwen3-Coder-Next: F (0) · 80B · 52.9 GB · 44 tok/s · 7K ctx · MoE
Unsloth · Qwen3.5 122B A10B: F (0) · 122B · 82.1 GB · 11 tok/s · 5K ctx · dense
DeepSeek · DeepSeek V3 671B: F (0) · 671B · 418.4 GB · 5 tok/s · 4K ctx · MoE
Mistral · Mixtral 8x22B: F (0) · 141B · 95.4 GB · 16 tok/s · 4K ctx · MoE
Alibaba · Qwen 2.5 72B: F (0) · 72B · 58.5 GB · 16 tok/s · 7K ctx · dense
Alibaba · Qwen 3 235B A22B: F (0) · 235B · 150.1 GB · 13 tok/s · 4K ctx · MoE
Unsloth · Qwen3.5 397B A17B: F (0) · 397B · 307.5 GB · 3 tok/s · 4K ctx · dense
Meta · Llama 3.3 70B: F (0) · 70B · 56.9 GB · 17 tok/s · 7K ctx · dense
Meta · Llama 4 Maverick 17B 128E: F (0) · 400B · 250.0 GB · 9 tok/s · 4K ctx · MoE
Cohere · Command A 111B: F (0) · 111B · 88.4 GB · 11 tok/s · 4K ctx · dense
Alibaba · Qwen 2.5 VL 72B: F (0) · 72B · 58.5 GB · 16 tok/s · 7K ctx · dense
Lmstudio-community · Qwen3.5 35B A3B: F (0) · 35B · 30.1 GB · 34 tok/s · 13K ctx · dense
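
The context column in this table is a memory budget: whatever VRAM remains after the weights is spent on the KV cache, which grows linearly with context length. A minimal sketch, assuming grouped-query attention with FP16 KV entries; the dimensions are Llama-3.1-8B-like and purely illustrative, and the result comes out more generous than the 41K shown above, presumably because the site reserves extra memory for activations and runtime buffers.

```python
# Max-context estimate: leftover VRAM divided by KV-cache bytes per token.
# KV per token = 2 (K and V) * layers * kv_heads * head_dim * bytes/elem.
# The dimensions below are Llama-3.1-8B-like and purely illustrative.

GIB = 1024 ** 3

def kv_bytes_per_token(layers: int = 32, kv_heads: int = 8,
                       head_dim: int = 128, elem_bytes: int = 2) -> int:
    return 2 * layers * kv_heads * head_dim * elem_bytes

def max_context_tokens(vram_gb: float, weights_gb: float,
                       overhead_gb: float = 1.0) -> int:
    free_bytes = (vram_gb - weights_gb - overhead_gb) * GIB
    return int(free_bytes // kv_bytes_per_token())

# 24 GB card, ~5 GB of 4-bit weights for an 8B model:
print(max_context_tokens(24.0, 5.0))  # 147456 tokens at FP16 KV
```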

Just out of reach

Models you could run with an upgrade: high-quality models that need a bit more memory.

DeepSeek · DeepSeek R1 671B: 671B · Tier 5 · needs ~424.2 GB
Mistral · Devstral 2 123B Instruct: 123B · Tier 5 · needs ~116.8 GB (runs on Mac Studio M3 Ultra 256GB)
Z.ai · GLM-5: 744B · Tier 5 · needs ~469.6 GB
Unsloth · Qwen3.5 35B A3B: 35B · Tier 5 · needs ~35.6 GB (runs on Mac mini M4 64GB, ~$1,099)
Moonshot AI · Kimi K2.5: 1000B · Tier 5 · needs ~623.3 GB

Upgrade paths

Upgrade from RTX 3090 Ti 24GB

See what you unlock with more powerful hardware.

AMD Instinct MI100 32GB (next step up): Grade A
32 GB VRAM (+8) · 1228 GB/s (+220)
Unlocks Qwen3.5 35B A3B, Qwen 2.5 Coder 32B, Qwen3.5 35B A3B +14 more · +6% faster avg

NVIDIA RTX 5090 32GB (NVIDIA upgrade): Grade A
32 GB VRAM (+8) · 1792 GB/s (+784)
Unlocks Qwen3.5 35B A3B, Qwen 2.5 Coder 32B, Qwen3.5 35B A3B +14 more · +59% faster avg
~$1,999 MSRP

Apple Mac mini M4 64GB (best value): Grade B
64 GB Unified (+40)
Unlocks Qwen3.5 35B A3B, Qwen 2.5 Coder 32B, Qwen3.5 35B A3B +16 more
~$1,099 MSRP

AMD Instinct MI350X 288GB (biggest leap): Grade A
288 GB VRAM (+264) · 8000 GB/s (+6992)
Unlocks Devstral 2 123B Instruct, Qwen3.5 35B A3B, Mistral Small 4 119B +46 more · +604% faster avg
~$8,000 MSRP
