Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.



Can NVIDIA H100 PCIe 80GB run TinyLlama 1.1B Chat v1.0?

Grade: C (Usable)

Verdict: Runs well

Recommended setup: Q4_K_M in Ollama

Capabilities:

Fit status: Runs well
Decode: 1680.0 tok/s
TTFT: 350 ms
Safe context: 120K tokens
Memory: 10.7 GB / 80.0 GB

Memory breakdown

Weights: 0.7 GB
KV cache: 0.8 GB
Runtime: 1.2 GB
Headroom: 8.0 GB
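To sanity-check a breakdown like this, the sketch below shows the usual back-of-the-envelope arithmetic: quantized weight size from parameter count and bits per weight, plus a KV cache sized from the model's attention layout. Every constant is an assumption for illustration (TinyLlama-like 1.1B model, ~4.85 effective bits for a Q4_K_M-style quant, fp16 KV cache, a flat runtime cost), so the results land in the same ballpark as, but not exactly on, the figures above.

```python
# Back-of-the-envelope VRAM estimate for a TinyLlama-like 1.1B model.
# All constants below are assumptions; the calculator's exact formulas
# and overheads are not shown on this page.

GiB = 1024 ** 3

params          = 1.1e9    # total parameters (assumed)
bits_per_weight = 4.85     # effective bits/weight for a Q4_K_M-style quant (assumed)
n_layers        = 22       # TinyLlama-like depth (assumed)
n_kv_heads      = 4        # grouped-query attention KV heads (assumed)
head_dim        = 64       # per-head dimension (assumed)
context_tokens  = 32_768   # context length being budgeted for (assumed)
kv_bytes        = 2        # fp16 K/V entries (assumed)
runtime_gb      = 1.2      # CUDA context, activations, scratch buffers (assumed flat cost)

weights_gb = params * bits_per_weight / 8 / GiB

# KV cache: one key and one value vector per layer, per KV head, per token.
kv_cache_gb = 2 * n_layers * n_kv_heads * head_dim * kv_bytes * context_tokens / GiB

total_gb = weights_gb + kv_cache_gb + runtime_gb
print(f"weights  ~{weights_gb:.2f} GB")
print(f"kv cache ~{kv_cache_gb:.2f} GB")
print(f"total    ~{total_gb:.2f} GB, plus whatever headroom the tool reserves")
```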

Performance by workload

Workload        Grade  Fit        Decode        TTFT    Context
Agentic Coding  C      Runs well  1680.0 tok/s  350 ms  240K
Chat            C      Runs well  1680.0 tok/s  350 ms  60K
Coding          C      Runs well  1680.0 tok/s  350 ms  120K
RAG             C      Runs well  1680.0 tok/s  350 ms  240K
Reasoning       C      Runs well  1680.0 tok/s  350 ms  120K
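The Decode and TTFT columns combine into end-to-end latency in a simple way: total response time is roughly time-to-first-token plus output length divided by decode throughput. A minimal worked example using the figures from the table; the 500-token reply length is an assumption, and the formula ignores batching and other serving effects.

```python
# Rough end-to-end latency from the table's TTFT and decode figures.

ttft_s        = 0.350    # 350 ms time to first token (from the table)
decode_tok_s  = 1680.0   # decode throughput (from the table)
output_tokens = 500      # assumed length of a typical reply

total_s = ttft_s + output_tokens / decode_tok_s
print(f"~{total_s:.2f} s for a {output_tokens}-token reply")   # ~0.65 s
```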

Quantization options

How TinyLlama 1.1B Chat v1.0 (1.1B params) fits at each quantization level on NVIDIA H100 PCIe 80GB (80.0 GB usable).

Quant    Bits  VRAM    Quality    Fit
Q2_K     2     0.4 GB  Low        D30
Q3_K_S   3     0.5 GB  Low        D30
NVFP4    4     0.6 GB  Medium     D30
Q4_K_M   4     0.7 GB  Medium     D30
Q5_K_M   5     0.8 GB  High       D30
Q6_K     6     0.9 GB  High       D30
Q8_0     8     1.2 GB  Very High  D30
F16      16    2.3 GB  Maximum    D30  (Best for your GPU)
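The VRAM column tracks parameter count times bits per weight, plus a small constant for tensors kept at higher precision and runtime bookkeeping. A hedged sketch of that relationship follows; the 0.1 GB overhead is an assumption, and K-quants in practice use slightly more than their nominal bit width.

```python
# Approximate quantized weight size per format: params * bits / 8, plus an
# assumed 0.1 GB overhead. Real GGUF files vary a little per format.

GiB = 1024 ** 3
params = 1.1e9   # TinyLlama 1.1B

nominal_bits = {   # bits per weight, as listed in the table above
    "Q2_K": 2, "Q3_K_S": 3, "NVFP4": 4, "Q4_K_M": 4,
    "Q5_K_M": 5, "Q6_K": 6, "Q8_0": 8, "F16": 16,
}

for quant, bits in nominal_bits.items():
    estimate_gb = params * bits / 8 / GiB + 0.1   # 0.1 GB overhead is assumed
    print(f"{quant:7s} ~{estimate_gb:.1f} GB")
```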

Get started
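Since the summary above recommends Q4_K_M in Ollama, here is a minimal sketch using Ollama's official Python client (pip install ollama, with a local Ollama server running). The model tag is an assumption; check the Ollama library for the tag that actually ships the Q4_K_M build of TinyLlama.

```python
# Minimal sketch: pull and query a TinyLlama Q4_K_M build through Ollama.
# The tag below is an assumption; plain "tinyllama" pulls the default quant.

import ollama

MODEL = "tinyllama:1.1b-chat-v1-q4_K_M"   # assumed tag, verify against the Ollama library

ollama.pull(MODEL)   # download the model if it is not already present
response = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response["message"]["content"])
```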

Upgrade options

Hardware that runs TinyLlama 1.1B Chat v1.0 well

Apple MacBook Pro M3 Max 128GB (Budget pick)
Grade C, 240 tok/s decode, ~$2,499 MSRP

Apple Mac Studio M1 Ultra 128GB (Best value)
Grade C, 440 tok/s decode, ~$3,999 MSRP

Apple Mac Studio M2 Ultra 128GB (Biggest leap)
Grade C, 464 tok/s decode, ~$3,999 MSRP
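One crude way to compare upgrade candidates like these is decode throughput per dollar, computed below from the listed MSRPs and decode rates; it ignores memory capacity, power draw, and availability, which a real comparison would also weigh.

```python
# Decode tok/s per $1,000 for the listed upgrade options (figures copied from
# the listings above; a one-dimensional metric, for orientation only).

options = [
    ("MacBook Pro M3 Max 128GB",  240, 2_499),
    ("Mac Studio M1 Ultra 128GB", 440, 3_999),
    ("Mac Studio M2 Ultra 128GB", 464, 3_999),
]

for name, tok_s, price_usd in options:
    print(f"{name:28s} {tok_s / price_usd * 1000:6.1f} tok/s per $1,000")
```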

See all results for NVIDIA H100 PCIe 80GB
See all hardware for TinyLlama 1.1B Chat v1.0