AMD

Radeon Pro W6800 32GB

Radeon Pro · Workstation · RDNA 2 · PCIe 4 · ROCm
VRAM: 32 GB
Bandwidth: 512 GB/s
FP16 Compute: 35 TFLOPS
INT8 Inference: 280 TOPS

[Comparison chart: Radeon Pro W6800 32GB vs. category average vs. NVIDIA A100 40GB]

Specifications

Compute
  FP16: 35 TFLOPS
  INT8: 280 TOPS
  Architecture: RDNA 2

Memory
  VRAM: 32 GB
  Bandwidth: 512 GB/s

General
  Family: Radeon Pro
  Segment: Workstation
  Interconnect: PCIe 4
  Compute Platform: ROCm

Architecture

RDNA 2

RDNA 2 is AMD's second-generation RDNA architecture, built on TSMC's 7 nm process. It introduced hardware ray tracing and Infinity Cache for improved bandwidth efficiency, and it powers the RX 6000 series as well as current gaming consoles.

AI Relevance

Official ROCm support for RDNA 2 is limited, and most AI runtimes require workarounds on consumer cards. The card can run smaller models via llama.cpp with Vulkan or HIP backends (see the sketch below), but performance is well behind comparable NVIDIA hardware.

Process: TSMC 7nm · Platform: ROCm · Precisions: FP32, FP16, INT8
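
The llama.cpp route mentioned above is straightforward in practice. As a minimal sketch, assuming a llama.cpp build with the Vulkan or HIP backend and using llama-cpp-python (a common Python binding for llama.cpp); the GGUF filename and context size are placeholders, not files or settings referenced by this page:

    # Minimal sketch: run a quantized GGUF model on this card via llama-cpp-python.
    # Assumes the underlying llama.cpp was compiled with Vulkan or HIP support;
    # the model path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="devstral-small-2-24b-instruct-q6_k.gguf",  # placeholder file
        n_gpu_layers=-1,  # offload all layers to the GPU
        n_ctx=16384,      # keep the KV cache inside the 32 GB VRAM budget
    )

    out = llm("Write a binary search in Python.", max_tokens=256)
    print(out["choices"][0]["text"])

If the GPU backend was not compiled in, llama.cpp quietly falls back to CPU, so check the load log for layer-offload messages before trusting throughput numbers.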

Recommendations by Workload

Agentic Coding

Devstral Small 2 24B Instruct (Grade C)

This model is still usable for agentic coding, though it is not the most specialized pick. It belongs to a current frontier family for local AI and fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 19.6 tok/s · 39K ctx · llama.cpp
26.2 GB / 32.0 GB VRAM

Chat

Qwen 3 30B A3B (Grade C)

This model is a direct match for chat. It belongs to a current frontier family for local AI and fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 39.9 tok/s · 11K ctx · llama.cpp
23.5 GB / 32.0 GB VRAM

Coding

Devstral Small 2 24B Instruct (Grade C)

This model is a direct match for coding. It belongs to a current frontier family for local AI and fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 19.6 tok/s · 23K ctx · llama.cpp
22.5 GB / 32.0 GB VRAM

RAG

Codestral 21B Pruned i1 (Grade C)

This model is a direct match for RAG. It sits in the middle of the current model mix and fits natively with comfortable headroom.

Decode 22.4 tok/s · 44K ctx · llama.cpp
23.5 GB / 32.0 GB VRAM

Reasoning

Devstral Small 2 24B Instruct (Grade C)

This model is a direct match for reasoning. It belongs to a current frontier family for local AI and fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 19.6 tok/s · 23K ctx · llama.cpp
22.5 GB / 32.0 GB VRAM
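
A useful way to read the decode figures above: single-token generation on a dense model is memory-bandwidth-bound, since roughly the whole weight file streams through memory once per token. A minimal sanity check in Python, where the 0.9 efficiency factor is my own assumption rather than the site's published model (which clearly accounts for more than this):

    # Back-of-the-envelope decode estimate for dense models: each generated token
    # reads essentially all weights once, so speed ~ bandwidth / model size.
    BANDWIDTH_GB_S = 512   # Radeon Pro W6800 peak memory bandwidth
    EFFICIENCY = 0.9       # assumed achievable fraction of peak (a guess)

    def est_decode_tok_s(weights_gb: float) -> float:
        """Bandwidth-bound tokens/second for a dense model of the given size."""
        return BANDWIDTH_GB_S * EFFICIENCY / weights_gb

    # Devstral Small 2 24B at its 22.5 GB quantized footprint:
    print(f"{est_decode_tok_s(22.5):.1f} tok/s")  # ~20.5, near the listed 19.6

MoE models such as Qwen 3 30B A3B read only their active experts per token, which is why they decode roughly twice as fast here despite a similar memory footprint.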

Full Model Compatibility

Publisher · Model · Grade · Params · VRAM · Decode · Context · Type
Alibaba · Qwen3-VL 30B A3B Instruct · C55 · 30B · 23.2 GB · 41 tok/s · 22K ctx · MoE
Alibaba · Qwen3-Coder 30B A3B Instruct · C55 · 30.5B · 23.5 GB · 40 tok/s · 22K ctx · MoE
Mistral · Devstral Small 2 24B Instruct · C52 · 24B · 22.5 GB · 20 tok/s · 23K ctx · dense
Mistral · Devstral Small 1.1 · C52 · 24B · 22.5 GB · 20 tok/s · 23K ctx · dense
Unsloth · Qwen3.5 27B · C52 · 27B · 24.8 GB · 17 tok/s · 21K ctx · dense
Unsloth · gemma 3 27b it · C52 · 27B · 24.8 GB · 17 tok/s · 21K ctx · dense
Mistral · Codestral 2 25.08 · C51 · 22B · 21.0 GB · 21 tok/s · 24K ctx · dense
Alibaba · Qwen 2.5 Coder 32B · C48 · 32B · 28.6 GB · 15 tok/s · 18K ctx · dense
Unsloth · Qwen3.5 35B A3B · C48 · 35B · 30.9 GB · 13 tok/s · 17K ctx · dense
Lmstudio-community · Qwen3.5 35B A3B · C48 · 35B · 30.9 GB · 13 tok/s · 17K ctx · dense
Unsloth · Qwen3.5 9B · C48 · 9B · 11.0 GB · 52 tok/s · 47K ctx · dense
HauhauCS · Qwen3.5 9B Uncensored HauhauCS Aggressive · C48 · 9B · 11.0 GB · 52 tok/s · 47K ctx · dense
Lmstudio-community · Qwen3.5 9B · C48 · 9B · 11.0 GB · 52 tok/s · 47K ctx · dense
Bartowski · Meta Llama 3.1 8B Instruct · C48 · 8B · 10.2 GB · 59 tok/s · 50K ctx · dense
Xtuner · llava llama 3 8b v1 1 · C48 · 8B · 10.2 GB · 59 tok/s · 50K ctx · dense
TheBloke · Llama 2 7B Chat · C48 · 7B · 9.5 GB · 67 tok/s · 54K ctx · dense
Unsloth · DeepSeek R1 0528 Qwen3 8B · C48 · 8B · 10.2 GB · 59 tok/s · 50K ctx · dense
TheBloke · Mistral 7B Instruct v0.2 · C48 · 7B · 9.5 GB · 67 tok/s · 54K ctx · dense
MaziyarPanahi · Meta Llama 3 8B Instruct · C47 · 8B · 10.2 GB · 59 tok/s · 50K ctx · dense
MaziyarPanahi · Mistral 7B Instruct v0.3 · C47 · 7B · 9.5 GB · 67 tok/s · 54K ctx · dense
Unsloth · Qwen3.5 4B · C47 · 4B · 7.3 GB · 118 tok/s · 70K ctx · dense
Lmstudio-community · gemma 3 4b it · C47 · 4B · 7.3 GB · 118 tok/s · 70K ctx · dense
Bartowski · Llama 3.2 3B Instruct · C47 · 3B · 7.1 GB · 135 tok/s · 73K ctx · dense
Qwen · Qwen2.5 3B Instruct · C47 · 3B · 6.7 GB · 157 tok/s · 76K ctx · dense
Bartowski · gemma 2 2b it · C47 · 2B · 6.5 GB · 184 tok/s · 78K ctx · dense
Google · gemma 2b · C46 · 2B · 6.1 GB · 235 tok/s · 84K ctx · dense
TheDrummer · Gemmasutra Mini 2B v1 · C46 · 2B · 6.1 GB · 235 tok/s · 84K ctx · dense
Qwen · Qwen2.5 1.5B Instruct · C46 · 1.5B · 5.8 GB · 287 tok/s · 88K ctx · dense
Hugging-quants · Llama 3.2 1B Instruct Q8 0 · C46 · 1B · 5.7 GB · 301 tok/s · 90K ctx · dense
TheBloke · TinyLlama 1.1B Chat v1.0 · C46 · 1.1B · 5.6 GB · 287 tok/s · 92K ctx · dense
Ggml-org · SmolVLM 500M Instruct · C46 · 0.5B · 5.3 GB · 301 tok/s · 96K ctx · dense
Ggml-org · embeddinggemma 300M · C45 · 0.3B · 5.1 GB · 301 tok/s · 99K ctx · dense
DeepSeek · DeepSeek R1 671B · F0 · 671B · 419.2 GB · 2 tok/s · 4K ctx · MoE
Mistral · Devstral 2 123B Instruct · F0 · 123B · 98.3 GB · 4 tok/s · 5K ctx · dense
Z.ai · GLM-5 · F0 · 744B · 464.2 GB · 2 tok/s · 4K ctx · MoE
Moonshot AI · Kimi K2.5 · F0 · 1000B · 619.1 GB · 2 tok/s · 4K ctx · MoE
Mistral · Mistral Large 3 · F0 · 675B · 422.3 GB · 2 tok/s · 4K ctx · MoE (+1)
Mistral · Mistral Small 4 119B · F0 · 119B · 77.7 GB · 12 tok/s · 7K ctx · MoE
Alibaba · Qwen3-Coder 480B A35B Instruct · F0 · 480B · 302.4 GB · 3 tok/s · 4K ctx · MoE
Alibaba · Qwen3-Coder-Next · F0 · 80B · 53.7 GB · 18 tok/s · 10K ctx · MoE
Unsloth · Qwen3.5 122B A10B · F0 · 122B · 82.9 GB · 5 tok/s · 6K ctx · dense
DeepSeek · DeepSeek V3 671B · F0 · 671B · 419.2 GB · 2 tok/s · 4K ctx · MoE
Mistral · Mixtral 8x22B · F0 · 141B · 96.2 GB · 6 tok/s · 5K ctx · MoE
Alibaba · Qwen 2.5 72B · F0 · 72B · 59.3 GB · 7 tok/s · 9K ctx · dense
Alibaba · Qwen 3 235B A22B · F0 · 235B · 150.9 GB · 5 tok/s · 4K ctx · MoE
Unsloth · Qwen3.5 397B A17B · F0 · 397B · 308.3 GB · 2 tok/s · 4K ctx · dense
Meta · Llama 3.3 70B · F0 · 70B · 57.7 GB · 7 tok/s · 9K ctx · dense
Meta · Llama 4 Maverick 17B 128E · F0 · 400B · 250.8 GB · 4 tok/s · 4K ctx · MoE
Cohere · Command A 111B · F0 · 111B · 89.2 GB · 4 tok/s · 6K ctx · dense
Alibaba · Qwen 2.5 VL 72B · F0 · 72B · 59.3 GB · 7 tok/s · 9K ctx · dense
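
The VRAM and context columns above trade off against each other: whatever memory remains after the weights are loaded goes to the KV cache. A minimal sketch of that arithmetic, with illustrative attention dimensions and an assumed overhead rather than per-model data:

    # Sketch of the VRAM/context trade-off. The layer count, KV-head count, and
    # overhead below are illustrative assumptions, not data from this page.
    VRAM_GB = 32.0

    def kv_bytes_per_token(n_layers: int, n_kv_heads: int, head_dim: int,
                           bytes_per_elem: int = 2) -> int:
        # One key and one value vector per layer, cached in FP16 by default.
        return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem

    def max_context(weights_gb: float, n_layers: int, n_kv_heads: int,
                    head_dim: int, overhead_gb: float = 1.5) -> int:
        free = (VRAM_GB - weights_gb - overhead_gb) * 1e9  # bytes left for KV cache
        return int(free // kv_bytes_per_token(n_layers, n_kv_heads, head_dim))

    # Hypothetical 24B-class dense model with GQA (40 layers, 8 KV heads, dim 128):
    print(max_context(22.5, n_layers=40, n_kv_heads=8, head_dim=128))  # ~48,800

Real runtimes also reserve compute and scratch buffers, so published figures like the table's come in lower; the point is the shape of the curve, where every extra gigabyte of weights costs several thousand tokens of context.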

Just out of reach

Models you could run with an upgrade: high-quality picks that need a bit more memory.

DeepSeek · DeepSeek R1 671B · 671B · Tier 5 · needs ~425.0 GB
Mistral · Devstral 2 123B Instruct · 123B · Tier 5 · needs ~117.6 GB (runs on a Mac Studio M3 Ultra 256GB)
Z.ai · GLM-5 · 744B · Tier 5 · needs ~470.4 GB
Moonshot AI · Kimi K2.5 · 1000B · Tier 5 · needs ~624.1 GB
Mistral · Mistral Large 3 · 675B · Tier 5 · needs ~428.7 GB

Upgrade paths

Upgrade from Radeon Pro W6800 32GB

See what you unlock with more powerful hardware

Upgrade options

NVIDIA A100 40GB (next step up) · Grade A
40 GB VRAM (+8) · 1555 GB/s (+1043)
Unlocks Falcon 40B Instruct · +354% faster avg
~$10,000 MSRP

AMD Radeon PRO W7900 DS 48GB (AMD upgrade) · Grade A
48 GB VRAM (+16) · 864 GB/s (+352)
Unlocks Qwen3-Coder-Next, Qwen3 48B A4B Savant Commander Distill 12X Closed Open Heretic Uncensored, and Falcon 40B Instruct · +76% faster avg

Apple MacBook Pro M3 Max 128GB (best value) · Grade B
128 GB unified memory (+96)
Unlocks Mistral Small 4 119B, Qwen3-Coder-Next, Qwen 2.5 72B, and 15 more
~$2,499 MSRP

AMD Instinct MI350X 288GB (biggest leap) · Grade A
288 GB VRAM (+256) · 8000 GB/s (+7488)
Unlocks Devstral 2 123B Instruct, Mistral Small 4 119B, Qwen3-Coder 480B A35B Instruct, and 29 more · +1750% faster avg
~$8,000 MSRP
