Will It Run AI

All estimates are approximations based on mathematical models and public specifications. Actual performance may vary. Do not make purchasing decisions based solely on these estimates.

Data sourced from Hugging Face, Ollama, and official model documentation. Model names and logos are trademarks of their respective owners.

© 2026 Will It Run AI — Fase Consulting Ibiza, S.L. (NIF: B57969656)



DeepSeek R1 671B

Frontier
Available on: HuggingFace · Ollama
Downloads: 1.4M · Likes: 13.1K · Released: Jan 2025 · Context: 131K tokens · License: MIT · Quality: 5 (Entry)

Get started

— copy & paste to run locally
Ollama:
ollama run deepseek-r1:671b
HuggingFace:
huggingface-cli download deepseek-ai/DeepSeek-R1

Quick specs

Parameters: 671B (37B active)
Architecture: Mixture-of-Experts (MoE)
Context: 131K tokens
Modality: text
Min RAM: 261.7 GB
Rec. RAM: 409.3 GB (Q4_K_M)
License: MIT
Family: DeepSeek
Capabilities: ✓ Chat · ✓ Reasoning

About this model

We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning tasks. Through RL, DeepSeek-R1-Zero naturally developed numerous powerful and interesting reasoning behaviors. However, it encounters challenges such as endless repetition, poor readability, and language mixing.

  • We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This...
  • We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and...


Quantization options

VRAM estimates by quant level

No hardware detected — fit column shows raw VRAM estimates

Quant     Bits   VRAM        Quality    Fit
Q2_K      2      261.7 GB    Low        —
Q3_K_S    3      328.8 GB    Low        —
NVFP4     4      375.8 GB    Medium     —
Q4_K_M    4      409.3 GB    Medium     —
Q5_K_M    5      483.1 GB    High       —
Q6_K      6      550.2 GB    High       —
Q8_0      8      718.0 GB    Very High  —
F16       16     1375.6 GB   Maximum    —

Quality benchmarks

DeepSeek R1 671B benchmark scores


Coding

SWE-bench Verified: 49.2%
HumanEval+: 85.0%
Aider Polyglot: 53.3%
LiveCodeBench: —

Reasoning

MMLU-Pro: 84.0%
GPQA Diamond: 71.5%
MATH-500: 97.3%
ARC Challenge: —

General

Chatbot Arena: —
IFEval: 83.3%

Source: official · 2025-01-20

Hardware compatibility

Fit estimates across all hardware


Memory breakdown

Reference: NVIDIA A10 24GB

Weights: 409.3 GB
KV Cache: 5.8 GB
Runtime: 0.9 GB
Headroom: 2.4 GB
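The KV-cache line grows with context length. For a standard transformer with grouped-query attention, it can be sketched as 2 (K and V) × layers × kv_heads × head_dim × tokens × bytes per element. DeepSeek R1 uses Multi-head Latent Attention, which compresses the cache, so the 5.8 GB figure above will not fall out of this formula; the dimensions in the example are hypothetical, for illustration only.

```python
def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                tokens: int, bytes_per_elem: int = 2) -> float:
    """Generic (non-MLA) KV-cache size in GB at a given context length.

    2x accounts for storing both K and V; bytes_per_elem=2 is FP16.
    """
    return 2 * layers * kv_heads * head_dim * tokens * bytes_per_elem / 1e9

# Hypothetical GQA model: 32 layers, 8 KV heads, head_dim 128,
# 8K-token context, FP16 cache.
print(round(kv_cache_gb(32, 8, 128, 8192), 2))  # -> 1.07
```

Halving `bytes_per_elem` (an 8-bit KV cache) halves this term, which is why many runtimes offer quantized-cache options for long contexts.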