
All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)

Intel Data Center GPU Max 1550 128GB

Max Datacenter · Datacenter · Ponte Vecchio · OAM · oneAPI

VRAM: 128 GB
Bandwidth: 3200 GB/s
FP16 Compute: 104 TFLOPS
INT8 Inference: 208 TOPS
[Comparison chart: Intel Data Center GPU Max 1550 128GB vs. category average vs. NVIDIA H200 141GB across VRAM, bandwidth, compute, and inference]

Specifications

Compute
  FP16: 104 TFLOPS
  INT8: 208 TOPS
  Architecture: Ponte Vecchio
Memory
  VRAM: 128 GB
  Bandwidth: 3200 GB/s
General
  Family: Max Datacenter
  Segment: Datacenter
  Interconnect: OAM
  Compute Platform: oneAPI

Architecture

Ponte Vecchio

Ponte Vecchio is Intel's datacenter GPU architecture powering the Max series accelerators. It uses advanced multi-tile packaging combining Intel 7 and TSMC N5 processes, with up to 128 GB HBM2e memory.

AI Relevance

With 128 GB of HBM2e and oneAPI support, the Max 1550 can host large AI models, and it powers the Aurora exascale supercomputer. However, its AI software ecosystem remains smaller than CUDA's or ROCm's.

Process: Intel 7 + TSMC N5 · Platform: oneAPI · Precisions: FP64, FP32, TF32, FP16, BF16, INT8
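Since software support is the main caveat, it is worth confirming the GPU is visible to your framework before sizing models against it. Below is a minimal sketch assuming a recent PyTorch build with the XPU (oneAPI) backend; whether `torch.xpu` is present depends on your PyTorch version and installed drivers, so treat this as illustrative rather than a guaranteed recipe.

```python
# Minimal visibility check for an Intel GPU through PyTorch's XPU
# (oneAPI) backend. Assumes a PyTorch build with XPU support
# (mainline since roughly 2.4/2.5; earlier setups used the
# intel_extension_for_pytorch package to provide torch.xpu).
import torch

if hasattr(torch, "xpu") and torch.xpu.is_available():
    for i in range(torch.xpu.device_count()):
        print(f"XPU {i}: {torch.xpu.get_device_name(i)}")
    # Small FP16 matmul as a smoke test on the first device.
    x = torch.randn(1024, 1024, device="xpu", dtype=torch.float16)
    print("FP16 matmul OK:", (x @ x).shape)
else:
    print("No XPU device visible; check oneAPI drivers and the PyTorch build.")
```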

Recommendations by Workload

Agentic Coding

Mistral Small 4 119B · Grade B

This model is still usable for agentic coding, though it is not the most specialized pick. It belongs to a current frontier model family for local AI and fits natively with comfortable headroom. Known channels: huggingface, lm-studio.

Decode 80.8 tok/s · 46K ctx · llama.cpp
88.3 GB / 128.0 GB VRAM
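The VRAM figure combines quantized weights with a KV cache sized for the quoted context. As a rough sanity check, the back-of-envelope sketch below reproduces the right ballpark; the bits-per-weight, layer count, KV-head geometry, and overhead constants are illustrative assumptions, not this site's actual estimator.

```python
# Back-of-envelope VRAM estimate: quantized weights + KV cache + overhead.
# All constants are assumptions for illustration (Q4_K-class quant at
# ~4.7 bits/weight; a guessed 88-layer, 8-KV-head, 128-head-dim geometry).
def estimate_vram_gb(params_b: float, ctx_tokens: int,
                     bits_per_weight: float = 4.7,
                     n_layers: int = 88, n_kv_heads: int = 8,
                     head_dim: int = 128, kv_bytes: int = 2) -> float:
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    # KV cache: 2 tensors (K and V) per layer, per token.
    kv_gb = 2 * n_layers * n_kv_heads * head_dim * kv_bytes * ctx_tokens / 1e9
    return weights_gb + kv_gb + 1.5  # +1.5 GB assumed runtime overhead

# 119B parameters plus a 46K-token cache lands near the ~88 GB
# shown above fitting in 128 GB.
print(f"{estimate_vram_gb(119, 46_000):.1f} GB")
```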

Chat

Mistral Small 4 119B · Grade B

This model is a direct match for chat. It belongs to a current frontier model family for local AI and fits natively with comfortable headroom. Known channels: huggingface, lm-studio.

Decode 80.8 tok/s · 12K ctx · llama.cpp
87.1 GB / 128.0 GB VRAM

Coding

Qwen3-Coder-Next · Grade C

This model is a direct match for coding. It belongs to a current frontier model family for local AI and fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 125.2 tok/s · 32K ctx · llama.cpp
63.3 GB / 128.0 GB VRAM

RAG

Command R 35B · Grade C

This model is a direct match for RAG. It sits in the middle of the current model mix and fits natively with comfortable headroom. Known channels: huggingface, ollama, lm-studio.

Decode 94.4 tok/s · 89K ctx · llama.cpp
46.0 GB / 128.0 GB VRAM

Reasoning

Mistral Small 4 119B · Grade B

This model is a direct match for reasoning. It belongs to a current frontier model family for local AI and fits natively with comfortable headroom. Known channels: huggingface, lm-studio.

Decode 80.8 tok/s · 23K ctx · llama.cpp
87.3 GB / 128.0 GB VRAM
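The decode numbers across these cards follow a memory-bandwidth rule of thumb: generating one token streams the active weights through memory, so tokens/s is roughly bandwidth divided by bytes read per token. The sketch below illustrates that heuristic; the 0.6 efficiency factor is an assumption, and the site's figures also reflect compute limits and runtime behavior.

```python
# Rule-of-thumb single-stream decode speed:
#   tokens/s ~= effective_bandwidth / bytes_read_per_token
# Dense models read all quantized weights per token; MoE models read
# only the active experts, which is why the 119B MoE above decodes
# faster than a dense 70B. The efficiency factor is assumed, not measured.
def decode_tok_s(bandwidth_gb_s: float, active_weights_gb: float,
                 efficiency: float = 0.6) -> float:
    return bandwidth_gb_s * efficiency / active_weights_gb

# A dense 70B at ~4.7 bits/weight reads ~41 GB per token; on this
# card's 3200 GB/s that predicts ~47 tok/s, matching the table below.
print(f"{decode_tok_s(3200, 41):.0f} tok/s")
```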

Full Model Compatibility

Each row: source · model · grade/score · parameters · VRAM used · decode speed · max context · architecture type.

Mistral · Mistral Small 4 119B · B 56 · 119B · 87.3 GB · 81 tok/s · 23K ctx · moe
Unsloth · Qwen3.5 122B A10B · C 54 · 122B · 92.5 GB · 31 tok/s · 22K ctx · dense
Cohere · Command A 111B · C 53 · 111B · 98.8 GB · 30 tok/s · 21K ctx · dense
Alibaba · Qwen3-Coder-Next · C 53 · 80B · 63.3 GB · 125 tok/s · 32K ctx · moe
Mistral · Mixtral 8x22B · C 52 · 141B · 105.8 GB · 45 tok/s · 19K ctx · moe
Alibaba · Qwen 2.5 72B · C 52 · 72B · 68.9 GB · 46 tok/s · 30K ctx · dense
Alibaba · Qwen 2.5 VL 72B · C 51 · 72B · 68.9 GB · 46 tok/s · 30K ctx · dense
Meta · Llama 3.3 70B · C 51 · 70B · 67.3 GB · 47 tok/s · 30K ctx · dense
Mistral · Devstral 2 123B Instruct · C 50 · 123B · 107.9 GB · 27 tok/s · 19K ctx · dense
Unsloth · Qwen3.5 35B A3B · C 49 · 35B · 40.5 GB · 94 tok/s · 51K ctx · dense
Lmstudio-community · Qwen3.5 35B A3B · C 49 · 35B · 40.5 GB · 94 tok/s · 51K ctx · dense
Alibaba · Qwen 2.5 Coder 32B · C 49 · 32B · 38.2 GB · 103 tok/s · 54K ctx · dense
Unsloth · Qwen3.5 27B · C 48 · 27B · 34.4 GB · 122 tok/s · 60K ctx · dense
Unsloth · gemma 3 27b it · C 48 · 27B · 34.4 GB · 122 tok/s · 60K ctx · dense
Alibaba · Qwen3-Coder 30B A3B Instruct · C 48 · 30.5B · 33.1 GB · 280 tok/s · 62K ctx · moe
Alibaba · Qwen3-VL 30B A3B Instruct · C 48 · 30B · 32.8 GB · 290 tok/s · 62K ctx · moe
Mistral · Devstral Small 2 24B Instruct · C 48 · 24B · 32.1 GB · 138 tok/s · 64K ctx · dense
Mistral · Devstral Small 1.1 · C 47 · 24B · 32.1 GB · 138 tok/s · 64K ctx · dense
Mistral · Codestral 2 25.08 · C 47 · 22B · 30.6 GB · 150 tok/s · 67K ctx · dense
Unsloth · Qwen3.5 9B · C 46 · 9B · 20.6 GB · 367 tok/s · 99K ctx · dense
HauhauCS · Qwen3.5 9B Uncensored HauhauCS Aggressive · C 46 · 9B · 20.6 GB · 367 tok/s · 99K ctx · dense
Lmstudio-community · Qwen3.5 9B · C 46 · 9B · 20.6 GB · 367 tok/s · 99K ctx · dense
Bartowski · Meta Llama 3.1 8B Instruct · C 46 · 8B · 19.8 GB · 413 tok/s · 103K ctx · dense
Xtuner · llava llama 3 8b v1 1 · C 45 · 8B · 19.8 GB · 413 tok/s · 103K ctx · dense
Unsloth · DeepSeek R1 0528 Qwen3 8B · C 45 · 8B · 19.8 GB · 413 tok/s · 103K ctx · dense
MaziyarPanahi · Meta Llama 3 8B Instruct · C 45 · 8B · 19.8 GB · 413 tok/s · 103K ctx · dense
TheBloke · Llama 2 7B Chat · C 45 · 7B · 19.1 GB · 472 tok/s · 107K ctx · dense
TheBloke · Mistral 7B Instruct v0.2 · C 45 · 7B · 19.1 GB · 472 tok/s · 107K ctx · dense
MaziyarPanahi · Mistral 7B Instruct v0.3 · C 45 · 7B · 19.1 GB · 472 tok/s · 107K ctx · dense
Unsloth · Qwen3.5 4B · C 45 · 4B · 16.9 GB · 826 tok/s · 121K ctx · dense
Bartowski · Llama 3.2 3B Instruct · C 45 · 3B · 16.7 GB · 952 tok/s · 123K ctx · dense
Bartowski · gemma 2 2b it · C 45 · 2B · 16.1 GB · 1291 tok/s · 127K ctx · dense
Lmstudio-community · gemma 3 4b it · C 45 · 4B · 16.9 GB · 826 tok/s · 121K ctx · dense
Qwen · Qwen2.5 3B Instruct · C 45 · 3B · 16.3 GB · 1102 tok/s · 125K ctx · dense
Google · gemma 2b · C 45 · 2B · 15.7 GB · 1653 tok/s · 130K ctx · dense
TheDrummer · Gemmasutra Mini 2B v1 · C 45 · 2B · 15.7 GB · 1653 tok/s · 130K ctx · dense
Hugging-quants · Llama 3.2 1B Instruct Q8 0 · C 45 · 1B · 15.3 GB · 2117 tok/s · 134K ctx · dense
Qwen · Qwen2.5 1.5B Instruct · C 45 · 1.5B · 15.4 GB · 2016 tok/s · 133K ctx · dense
TheBloke · TinyLlama 1.1B Chat v1.0 · C 45 · 1.1B · 15.2 GB · 2016 tok/s · 135K ctx · dense
Ggml-org · SmolVLM 500M Instruct · C 45 · 0.5B · 14.9 GB · 2117 tok/s · 137K ctx · dense
Ggml-org · embeddinggemma 300M · C 45 · 0.3B · 14.7 GB · 2117 tok/s · 139K ctx · dense
DeepSeek · DeepSeek R1 671B · F 0 · 671B · 428.8 GB · 14 tok/s · 5K ctx · moe
Z.ai · GLM-5 · F 0 · 744B · 473.8 GB · 13 tok/s · 4K ctx · moe
Moonshot AI · Kimi K2.5 · F 0 · 1000B · 628.7 GB · 10 tok/s · 4K ctx · moe
Mistral · Mistral Large 3 · F 0 · 675B · 431.9 GB · 14 tok/s · 5K ctx · moe
Alibaba · Qwen3-Coder 480B A35B Instruct · F 0 · 480B · 312.0 GB · 19 tok/s · 7K ctx · moe
DeepSeek · DeepSeek V3 671B · F 0 · 671B · 428.8 GB · 14 tok/s · 5K ctx · moe
Alibaba · Qwen 3 235B A22B · F 0 · 235B · 160.5 GB · 38 tok/s · 13K ctx · moe
Unsloth · Qwen3.5 397B A17B · F 0 · 397B · 317.9 GB · 8 tok/s · 6K ctx · dense
Meta · Llama 4 Maverick 17B 128E · F 0 · 400B · 260.4 GB · 25 tok/s · 8K ctx · moe

Just out of reach

High-quality models you could run with a memory upgrade:

DeepSeek · DeepSeek R1 671B · 671B · Tier 5 · needs ~434.6 GB
Z.ai · GLM-5 · 744B · Tier 5 · needs ~480.0 GB
Moonshot AI · Kimi K2.5 · 1000B · Tier 5 · needs ~633.7 GB
Mistral · Mistral Large 3 · 675B · Tier 5 · needs ~438.3 GB
Alibaba · Qwen3-Coder 480B A35B Instruct · 480B · Tier 5 · needs ~317.4 GB
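Since the Max 1550 is an OAM module normally deployed several to a baseboard, the gap to these models is typically closed by aggregating cards rather than swapping in one larger GPU. The sketch below screens the list against single- and multi-GPU memory budgets; the required-memory figures come from the list above, while the four-card budget is illustrative and ignores sharding overhead.

```python
# Screen the "just out of reach" list against VRAM budgets.
# Required-memory figures are this page's estimates; budgets are illustrative.
NEEDS_GB = {
    "DeepSeek R1 671B": 434.6,
    "GLM-5": 480.0,
    "Kimi K2.5": 633.7,
    "Mistral Large 3": 438.3,
    "Qwen3-Coder 480B A35B Instruct": 317.4,
}

for budget in (128, 4 * 128):  # one Max 1550 vs. a hypothetical 4-card node
    fits = [name for name, gb in NEEDS_GB.items() if gb <= budget]
    print(f"{budget} GB fits: {fits or 'nothing on this list'}")
```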

Upgrade paths

Upgrade from Intel Data Center GPU Max 1550 128GB

See what you unlock with more powerful hardware

Upgrade options

NVIDIA H200 141GB (next step up)
141 GB VRAM (+13 GB) · 4800 GB/s (+1600 GB/s) · Grade A
Unlocks Qwen 3 235B A22B, DeepSeek Coder V2 236B, DeepSeek V2.5 236B · +98% faster avg
~$30,000 MSRP

AMD Instinct MI325X 256GB (biggest leap)
256 GB VRAM (+128 GB) · 6000 GB/s (+2800 GB/s) · Grade A
Unlocks Qwen 3 235B A22B, Llama 4 Maverick 17B 128E, DeepSeek Coder V2 236B +4 more · +113% faster avg

AMD Instinct MI350X 288GB (best value)
288 GB VRAM (+160 GB) · 8000 GB/s (+4800 GB/s) · Grade A
Unlocks Qwen3-Coder 480B A35B Instruct, Qwen 3 235B A22B, Llama 4 Maverick 17B 128E +5 more · +183% faster avg
~$8,000 MSRP
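For a model that already fits, the first-order decode gain from any of these upgrades is simply the bandwidth ratio, per the memory-bound rule of thumb above; the quoted averages are higher because they also fold in newly unlocked models and compute differences. A quick check of that first-order bound:

```python
# First-order decode headroom from a bandwidth upgrade, assuming
# memory-bound single-stream decode on the same model and quant.
BASE_GB_S = 3200  # Intel Data Center GPU Max 1550
for name, bw_gb_s in [("NVIDIA H200 141GB", 4800),
                      ("AMD Instinct MI325X 256GB", 6000),
                      ("AMD Instinct MI350X 288GB", 8000)]:
    print(f"{name}: ~{bw_gb_s / BASE_GB_S:.2f}x decode bandwidth")
```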
