Intel Arc Pro A40 6GB

Arc Pro · Workstation · Alchemist · PCIe 4 · oneAPI

VRAM: 6 GB · Bandwidth: 192 GB/s · FP16 Compute: 10 TFLOPS · INT8 Inference: 80 TOPS

[Comparison chart: Intel Arc Pro A40 6GB vs. category average vs. Intel Arc A550M 8GB]

Specifications

Compute
  • FP16: 10 TFLOPS
  • INT8: 80 TOPS
  • Architecture: Alchemist
Memory
  • VRAM: 6 GB
  • Bandwidth: 192 GB/s
General
  • Family: Arc Pro
  • Segment: Workstation
  • Interconnect: PCIe 4
  • Compute Platform: oneAPI

Architecture

Alchemist

Alchemist is Intel's first discrete GPU architecture under the Arc brand, using Xe-HPG cores manufactured on TSMC's N6 process. It features XMX (Xe Matrix Extensions) engines for AI acceleration.

AI Relevance

XMX engines provide some AI inference acceleration via oneAPI/SYCL. However, the software ecosystem for LLM inference on Intel Arc is still developing, with limited runtime support compared to CUDA.

Process: TSMC N6 · Platform: oneAPI · Precisions: FP32, FP16, INT8
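If you want to sanity-check that an Arc card's XMX path is actually reachable from your software stack, a minimal probe is sketched below using PyTorch's Intel GPU ("xpu") backend. This assumes a recent PyTorch build with XPU support plus Intel's GPU driver and oneAPI runtime installed; it is one of several possible stacks (oneAPI/SYCL directly, OpenVINO, llama.cpp's SYCL backend), not the only path.

```python
# Minimal sketch: check whether an Intel Arc GPU is visible to PyTorch's
# XPU backend and run an FP16 matmul, the kind of operation that can map
# onto the XMX engines. Assumes a PyTorch build with Intel XPU support
# and the Intel GPU driver / oneAPI runtime installed.
import torch

if torch.xpu.is_available():
    device = torch.device("xpu")
    print("XPU device:", torch.xpu.get_device_name(0))

    # FP16 GEMM exercises the matrix engines (when the backend dispatches to them).
    a = torch.randn(1024, 1024, device=device, dtype=torch.float16)
    b = torch.randn(1024, 1024, device=device, dtype=torch.float16)
    c = a @ b
    torch.xpu.synchronize()  # wait for the GPU before reading results
    print("FP16 matmul OK:", c.shape)
else:
    print("No XPU device found; check driver and oneAPI runtime installation.")
```

If this probe fails while the card works for graphics, the gap is usually the compute runtime rather than the hardware, which is the "still developing ecosystem" caveat above in practice.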

Recommendations by Workload

Agentic Coding

StarCoder2 3B (Grade C)

This model is still usable for agentic coding, but it is not the most specialized pick. It sits in the middle of the current model mix. It fits natively with comfortable headroom.

Decode 51.4 tok/s · 45K ctx · llama.cpp · 4.3 GB / 6.0 GB VRAM

Chat

Qwen 3 4B (Grade C)

This model is a direct match for chat. It sits in the middle of the current model mix. It fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 38.6 tok/s · 10K ctx · llama.cpp · 4.7 GB / 6.0 GB VRAM

Coding

StarCoder2 3B (Grade C)

This model is a direct match for coding. It sits in the middle of the current model mix. It fits natively with comfortable headroom.

Decode 51.4 tok/s · 23K ctx · llama.cpp · 4.1 GB / 6.0 GB VRAM

RAG

SmolLM3 3B (Grade C)

This model is still usable for RAG, but it is not the most specialized pick. It sits in the middle of the current model mix. It fits natively with comfortable headroom. Known channels: huggingface, lm-studio.

Decode 51.4 tok/s · 45K ctx · llama.cpp · 4.3 GB / 6.0 GB VRAM

Reasoning

Phi 4 Mini 4B (Grade C)

This model is a direct match for reasoning. It belongs to a current frontier family for local AI. It fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 38.6 tok/s · 20K ctx · llama.cpp · 4.7 GB / 6.0 GB VRAM
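The "fits natively with comfortable headroom" verdicts above reduce to memory arithmetic: quantized weights plus the KV cache for the quoted context length must fit inside the card's 6 GB. A first-order sketch of that estimate follows; the layer/head constants are illustrative values for a roughly 3B GQA model, not vendor-published figures, and a real estimator would add runtime-specific overheads.

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float,
                     n_layers: int, n_kv_heads: int, head_dim: int,
                     ctx_tokens: int, overhead_gb: float = 0.3) -> float:
    """First-order VRAM estimate: quantized weights + FP16 KV cache + fixed overhead."""
    # params_b is in billions, so GB falls out directly: params * bits / 8.
    weights_gb = params_b * bits_per_weight / 8
    # KV cache: K and V tensors per layer, FP16 = 2 bytes per element.
    kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * 2
    kv_gb = kv_bytes_per_token * ctx_tokens / 1e9
    return weights_gb + kv_gb + overhead_gb

# Assumed config for a ~3B model with grouped-query attention (illustrative only):
print(estimate_vram_gb(params_b=3.0, bits_per_weight=8,
                       n_layers=36, n_kv_heads=2, head_dim=128,
                       ctx_tokens=23_000))  # ~4.1 GB -> fits in 6 GB with headroom
```

With those assumed constants the estimate lands near the 4.1 GB / 6.0 GB figure shown for the 3B coding pick, which is why a 6 GB card can hold a 3-4B model at a 20K-ish context but not much more.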

Full Model Compatibility

Model (Publisher) · Grade · Params · VRAM · Decode · Context · Type
Qwen2.5 3B Instruct (Qwen) · C 55 · 3B · 4.1 GB · 51 tok/s · 23K ctx · dense
Llama 3.2 3B Instruct (Bartowski) · C 55 · 3B · 4.5 GB · 44 tok/s · 22K ctx · dense
gemma 2 2b it (Bartowski) · C 55 · 2B · 3.9 GB · 60 tok/s · 24K ctx · dense
Qwen3.5 4B (Unsloth) · C 54 · 4B · 4.7 GB · 39 tok/s · 20K ctx · dense
gemma 3 4b it (Lmstudio-community) · C 54 · 4B · 4.7 GB · 39 tok/s · 20K ctx · dense
gemma 2b (Google) · C 54 · 2B · 3.5 GB · 77 tok/s · 27K ctx · dense
Gemmasutra Mini 2B v1 (TheDrummer) · C 54 · 2B · 3.5 GB · 77 tok/s · 27K ctx · dense
Qwen2.5 1.5B Instruct (Qwen) · C 54 · 1.5B · 3.2 GB · 94 tok/s · 30K ctx · dense
Llama 3.2 1B Instruct Q8 0 (Hugging-quants) · C 53 · 1B · 3.1 GB · 99 tok/s · 31K ctx · dense
TinyLlama 1.1B Chat v1.0 (TheBloke) · C 53 · 1.1B · 3.0 GB · 94 tok/s · 32K ctx · dense
SmolVLM 500M Instruct (Ggml-org) · C 52 · 0.5B · 2.7 GB · 99 tok/s · 35K ctx · dense
embeddinggemma 300M (Ggml-org) · C 51 · 0.3B · 2.5 GB · 99 tok/s · 38K ctx · dense
Llama 2 7B Chat (TheBloke) · D 39 · 7B · 6.9 GB · 20 tok/s · 14K ctx · dense
Mistral 7B Instruct v0.2 (TheBloke) · D 39 · 7B · 6.9 GB · 20 tok/s · 14K ctx · dense
Mistral 7B Instruct v0.3 (MaziyarPanahi) · D 39 · 7B · 6.9 GB · 20 tok/s · 14K ctx · dense
DeepSeek R1 671B (DeepSeek) · F 0 · 671B · 416.6 GB · 2 tok/s · 4K ctx · moe
Devstral 2 123B Instruct (Mistral) · F 0 · 123B · 95.7 GB · 2 tok/s · 4K ctx · dense
GLM-5 (Z.ai) · F 0 · 744B · 461.6 GB · 2 tok/s · 4K ctx · moe
Qwen3.5 27B (Unsloth) · F 0 · 27B · 22.2 GB · 6 tok/s · 4K ctx · dense
Qwen3.5 35B A3B (Unsloth) · F 0 · 35B · 28.3 GB · 4 tok/s · 4K ctx · dense
Qwen3.5 9B (Unsloth) · F 0 · 9B · 8.4 GB · 17 tok/s · 11K ctx · dense
Kimi K2.5 (Moonshot AI) · F 0 · 1000B · 616.5 GB · 2 tok/s · 4K ctx · moe
Mistral Large 3 (Mistral) · F 0 · 675B · 419.7 GB · 2 tok/s · 4K ctx · moe
Mistral Small 4 119B (Mistral) · F 0 · 119B · 75.1 GB · 4 tok/s · 4K ctx · moe
Qwen3-Coder 30B A3B Instruct (Alibaba) · F 0 · 30.5B · 20.9 GB · 13 tok/s · 5K ctx · moe
Qwen3-Coder 480B A35B Instruct (Alibaba) · F 0 · 480B · 299.8 GB · 2 tok/s · 4K ctx · moe
Qwen3-Coder-Next (Alibaba) · F 0 · 80B · 51.1 GB · 6 tok/s · 4K ctx · moe
Qwen3.5 9B Uncensored HauhauCS Aggressive (HauhauCS) · F 0 · 9B · 8.4 GB · 17 tok/s · 11K ctx · dense
Qwen3.5 122B A10B (Unsloth) · F 0 · 122B · 80.3 GB · 2 tok/s · 4K ctx · dense
Meta Llama 3.1 8B Instruct (Bartowski) · F 0 · 8B · 7.6 GB · 19 tok/s · 13K ctx · dense
DeepSeek V3 671B (DeepSeek) · F 0 · 671B · 416.6 GB · 2 tok/s · 4K ctx · moe
Mixtral 8x22B (Mistral) · F 0 · 141B · 93.6 GB · 2 tok/s · 4K ctx · moe
Qwen 2.5 72B (Alibaba) · F 0 · 72B · 56.7 GB · 2 tok/s · 4K ctx · dense
Qwen 3 235B A22B (Alibaba) · F 0 · 235B · 148.3 GB · 2 tok/s · 4K ctx · moe
Qwen3-VL 30B A3B Instruct (Alibaba) · F 0 · 30B · 20.6 GB · 14 tok/s · 5K ctx · moe
llava llama 3 8b v1 1 (Xtuner) · F 0 · 8B · 7.6 GB · 19 tok/s · 13K ctx · dense
Qwen3.5 397B A17B (Unsloth) · F 0 · 397B · 305.7 GB · 2 tok/s · 4K ctx · dense
Devstral Small 2 24B Instruct (Mistral) · F 0 · 24B · 19.9 GB · 6 tok/s · 5K ctx · dense
Llama 3.3 70B (Meta) · F 0 · 70B · 55.1 GB · 2 tok/s · 4K ctx · dense
Llama 4 Maverick 17B 128E (Meta) · F 0 · 400B · 248.2 GB · 2 tok/s · 4K ctx · moe
DeepSeek R1 0528 Qwen3 8B (Unsloth) · F 0 · 8B · 7.6 GB · 19 tok/s · 13K ctx · dense
Command A 111B (Cohere) · F 0 · 111B · 86.6 GB · 2 tok/s · 4K ctx · dense
Qwen 2.5 Coder 32B (Alibaba) · F 0 · 32B · 26.0 GB · 5 tok/s · 4K ctx · dense
Qwen 2.5 VL 72B (Alibaba) · F 0 · 72B · 56.7 GB · 2 tok/s · 4K ctx · dense
gemma 3 27b it (Unsloth) · F 0 · 27B · 22.2 GB · 6 tok/s · 4K ctx · dense
Qwen3.5 9B (Lmstudio-community) · F 0 · 9B · 8.4 GB · 17 tok/s · 11K ctx · dense
Meta Llama 3 8B Instruct (MaziyarPanahi) · F 0 · 8B · 7.6 GB · 19 tok/s · 13K ctx · dense
Qwen3.5 35B A3B (Lmstudio-community) · F 0 · 35B · 28.3 GB · 4 tok/s · 4K ctx · dense
Codestral 2 25.08 (Mistral) · F 0 · 22B · 18.4 GB · 7 tok/s · 5K ctx · dense
Devstral Small 1.1 (Mistral) · F 0 · 24B · 19.9 GB · 6 tok/s · 5K ctx · dense
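The decode column is consistent with the standard first-order model of single-stream generation: each token requires streaming the model's weights (for MoE models, only the active experts) plus the KV cache through memory, so throughput is roughly usable bandwidth divided by bytes read per token. A hedged sketch; the efficiency factor is an assumption, and real estimates depend on the runtime and on how much of the model is offloaded.

```python
def estimate_decode_tok_s(bandwidth_gb_s: float, weight_gb: float,
                          kv_gb: float, efficiency: float = 0.8) -> float:
    """Bandwidth-bound decode estimate: tok/s ~= usable bandwidth / bytes per token.

    weight_gb: bytes of weights read per token (active-expert bytes for MoE).
    kv_gb: KV cache bytes read per token at the current context length.
    efficiency: assumed fraction of peak bandwidth actually achieved.
    """
    bytes_per_token_gb = weight_gb + kv_gb
    return efficiency * bandwidth_gb_s / bytes_per_token_gb

# Arc Pro A40 at 192 GB/s, a ~3 GB quantized dense model with ~0.8 GB of KV read:
print(estimate_decode_tok_s(192, weight_gb=3.0, kv_gb=0.8))  # ~40 tok/s
```

That lands in the same range as the 40-60 tok/s the table reports for 2-4B models on this card, and it explains the cliff to ~2 tok/s once a model spills far beyond the 6 GB of VRAM and decode becomes bound by system memory or PCIe instead.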

Just out of reach

Models you could run with an upgrade

High-quality models that need a bit more memory

DeepSeek R1 671B (DeepSeek) · 671B · Tier 5 · needs ~422.4 GB
Devstral 2 123B Instruct (Mistral) · 123B · Tier 5 · needs ~115.0 GB · runs on Mac Studio M3 Ultra 256GB
GLM-5 (Z.ai) · 744B · Tier 5 · needs ~467.8 GB
Qwen3.5 27B (Unsloth) · 27B · Tier 5 · needs ~26.4 GB · runs on RTX 5090 32GB (~$1,999)
Qwen3.5 35B A3B (Unsloth) · 35B · Tier 5 · needs ~33.8 GB · runs on Mac mini M4 64GB (~$1,099)

Upgrade paths

Upgrade from Intel Arc Pro A40 6GB

See what you unlock with more powerful hardware

Upgrade options

Intel Arc A550M 8GB (Intel) · Next step up
8 GB VRAM (+2) · 224 GB/s (+32) · Grade A
Unlocks Meta Llama 3.1 8B Instruct, Llama 2 7B Chat, llava llama 3 8b v1 1, +99 more

Intel Arc A750 8GB (Intel) · Intel upgrade
8 GB VRAM (+2) · 512 GB/s (+320) · Grade A
Unlocks Meta Llama 3.1 8B Instruct, Llama 2 7B Chat, llava llama 3 8b v1 1, +99 more · +40% faster avg
~$289 MSRP

Intel Arc B580 12GB (Intel) · Best value
12 GB VRAM (+6) · 456 GB/s (+264) · Grade A
Unlocks Qwen3.5 9B, Qwen3.5 9B Uncensored HauhauCS Aggressive, Meta Llama 3.1 8B Instruct, +133 more · +27% faster avg
~$249 MSRP

AMD Instinct MI350X 288GB (AMD) · Biggest leap
288 GB VRAM (+282) · 8000 GB/s (+7808) · Grade A
Unlocks Devstral 2 123B Instruct, Qwen3.5 27B, Qwen3.5 35B A3B, +243 more · +2324% faster avg
~$8,000 MSRP
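The "Unlocks ..." lists follow mechanically from the same memory estimates used above: a model is unlocked when its VRAM requirement fits the candidate card but not the current 6 GB. A small sketch with requirement figures hard-coded for illustration (taken from the compatibility table, not fetched from anywhere):

```python
# Hypothetical requirement data in GB at a usable quant/context; a real
# version would come from the per-model VRAM estimator sketched earlier.
requirements = {
    "Meta Llama 3.1 8B Instruct": 7.6,
    "Qwen3.5 9B": 8.4,
    "Qwen3.5 27B": 26.4,
}

def unlocked(current_vram_gb: float, upgrade_vram_gb: float) -> list[str]:
    """Models that fit the upgrade card but not the current one."""
    return [name for name, need in requirements.items()
            if current_vram_gb < need <= upgrade_vram_gb]

print(unlocked(6, 12))  # Arc B580 12GB: the two ~8 GB models
print(unlocked(6, 32))  # RTX 5090 32GB: all three
```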
