Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)



Can NVIDIA L4 24GB run Qwen3.5 35B A3B?

Grade F: Won't run (too heavy)

Using Q4_K_M in Ollama

Capabilities:

Fit status: Too heavy
Decode: 9.1 tok/s
TTFT: 21,197 ms
Safe context: 13K tokens
Memory: 30.4 GB required / 24.0 GB available

Memory breakdown

Weights: 21.3 GB
KV Cache: 5.5 GB
Runtime: 1.2 GB
Headroom: 2.4 GB
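The fit verdict above can be reproduced by summing the breakdown and comparing it to usable VRAM. A minimal sketch, using the figures from this page; the additive breakdown itself is an assumption about how the estimator works, not the site's exact model:

```python
# Recompute the memory fit shown above (values taken from this page).
weights_gb = 21.3    # Q4_K_M weights
kv_cache_gb = 5.5    # KV cache at the "safe context" size
runtime_gb = 1.2     # runtime/framework overhead
headroom_gb = 2.4    # safety margin the estimator reserves

required_gb = weights_gb + kv_cache_gb + runtime_gb + headroom_gb
usable_vram_gb = 24.0

print(f"required: {required_gb:.1f} GB / usable: {usable_vram_gb:.1f} GB")
print("fits" if required_gb <= usable_vram_gb else "won't run: too heavy")
```

This matches the "30.4 GB / 24.0 GB" line: the model needs roughly 6.4 GB more than the L4 offers, hence the F grade.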

Performance by workload

Workload        Grade  Fit                                        Decode     TTFT       Context
Agentic Coding  F      Too heavy                                  9.1 tok/s  30,832 ms  21K
Chat            D      Very compromised (needs ~2.8 GB host RAM)  8.1 tok/s  13,037 ms  7K
Coding          F      Too heavy                                  9.1 tok/s  21,197 ms  13K
RAG             F      Too heavy                                  9.1 tok/s  38,539 ms  21K
Reasoning       F      Too heavy                                  9.1 tok/s  25,051 ms  13K
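The TTFT column scales with how much prompt each workload must prefill before the first token appears. A minimal sketch, assuming TTFT is dominated by prefill; the prefill rate is back-calculated from the Coding row (13K context in 21,197 ms), not a measured figure:

```python
# Assumption: TTFT ≈ prompt_tokens / prefill_rate.
# Prefill rate derived from the Coding row: 13K tokens / 21.197 s ≈ 613 tok/s.
prefill_tok_s = 13_000 / 21.197

def ttft_ms(prompt_tokens: int) -> float:
    """Estimated time-to-first-token in milliseconds."""
    return prompt_tokens / prefill_tok_s * 1000

print(f"{ttft_ms(13_000):.0f} ms")  # 21197 ms, by construction
```

The other rows do not fall exactly on this line, which suggests each workload also assumes a different prompt size and host-RAM spill pattern.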

Quantization options

How Qwen3.5 35B A3B (35B params) fits at each quantization level on NVIDIA L4 24GB (24.0 GB usable).

Quant      Bits  VRAM     Quality    Fit
Q2_K       2     13.7 GB  Low        C (42)
Q3_K_S *   3     17.2 GB  Low        C (45)
NVFP4      4     19.6 GB  Medium     C (45)
Q4_K_M     4     21.3 GB  Medium     C (45)
Q5_K_M     5     25.2 GB  High       F (0)
Q6_K       6     28.7 GB  High       F (0)
Q8_0       8     37.5 GB  Very High  F (0)
F16        16    71.8 GB  Maximum    F (0)

* Best for this GPU.
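A rough first-order check on the VRAM column: weight size is at least params × bits ÷ 8. This is only a floor, and an assumption about how the table is built; the table reports larger numbers because K-quants mix bit widths per tensor and the totals include format overhead:

```python
# Lower-bound weight size for a 35B-parameter model at a uniform bit width.
PARAMS = 35e9  # Qwen3.5 35B A3B total parameters

def weight_floor_gb(bits: float) -> float:
    """Minimum weight footprint in GB, ignoring mixed widths and overhead."""
    return PARAMS * bits / 8 / 1e9

for name, bits in [("Q2_K", 2), ("Q4_K_M", 4), ("Q8_0", 8), ("F16", 16)]:
    print(f"{name:7s} >= {weight_floor_gb(bits):5.1f} GB")
```

For example, the floor for Q4_K_M is 17.5 GB versus the 21.3 GB listed, the gap being mixed-precision tensors plus overhead.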

Upgrade options

Hardware that runs Qwen3.5 35B A3B well

Apple Mac mini M4 64GB (budget pick): grade C, 4 tok/s decode, ~$1,099 MSRP

Apple MacBook Pro M4 Pro 64GB (best value): grade C, 9.8 tok/s decode, ~$1,599 MSRP

NVIDIA A100 40GB (biggest leap): grade B, 61.2 tok/s decode, ~$10,000 MSRP

NVIDIA RTX PRO 5000 Blackwell 48GB (NVIDIA upgrade): grade B, 52.9 tok/s decode

See all results for NVIDIA L4 24GB
See all hardware for Qwen3.5 35B A3B