runlocal.dev

DeepSeek R1 7B

MIT

DeepSeek's 7B reasoning-focused model, distilled from DeepSeek-R1 onto a Qwen 7B base. Strong chain-of-thought reasoning; runs on 8 GB of VRAM.

Provider

DeepSeek

Parameters

7B

Context

65,536 tokens (64K)

Released

2025-01-20
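The 65,536-token context window has a memory cost of its own: the KV cache grows linearly with sequence length. A rough sketch of that cost, assuming the architecture values of the Qwen2 7B base model (28 layers, grouped-query attention with 4 KV heads of dimension 128) — verify these against the model's config.json before relying on them:

```python
def kv_cache_gb(seq_len: int, n_layers: int, n_kv_heads: int,
                head_dim: int, bytes_per_elem: int = 2) -> float:
    """Approximate KV cache size in GB (decimal) at fp16.

    K and V each store n_layers * n_kv_heads * head_dim values per token.
    """
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem / 1e9

# Assumed values for the Qwen2 7B base: 28 layers, 4 KV heads, head_dim 128
print(f"~{kv_cache_gb(65536, 28, 4, 128):.1f} GB")  # ~3.8 GB at full context
```

At full context this adds roughly 3.8 GB on top of the quantized weights, which is why long-context use can exceed the minimum VRAM figures below.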

VRAM Requirements by Quantization

Method  | Disk Size | VRAM Required | Fits GPUs
Q8_0    | 7.5 GB    | 8.5 GB        | 14 GPUs
Q4_K_M  | 4.4 GB    | 5.5 GB        | 15 GPUs
Q4_0    | 4.1 GB    | 5.2 GB        | 15 GPUs
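The disk sizes above follow from parameter count times effective bits per weight. A minimal sketch of that arithmetic, using approximate bits-per-weight figures for the llama.cpp quantization methods (the exact values vary slightly by tensor layout) and a rule-of-thumb ~1 GB headroom for KV cache and activations:

```python
def model_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate on-disk size of quantized weights in GB (decimal)."""
    return n_params * bits_per_weight / 8 / 1e9

# Approximate effective bits per weight (assumed values, not exact)
BITS = {"Q8_0": 8.5, "Q4_K_M": 4.85, "Q4_0": 4.55}
N_PARAMS = 7e9  # DeepSeek R1 7B

for method, bits in BITS.items():
    disk = model_gb(N_PARAMS, bits)
    # Rule of thumb: VRAM needed ≈ weights + ~1 GB overhead
    print(f"{method}: ~{disk:.1f} GB disk, ~{disk + 1.0:.1f} GB VRAM")
```

The estimates land close to the table's figures; treat them as a sanity check, not a guarantee.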

Install with Ollama

Run in terminal:

ollama pull deepseek-r1:7b

Requires a minimum of 5.2 GB of VRAM (Q4_0 quantization). Install Ollama from ollama.com
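Once pulled, the model can be queried through Ollama's local HTTP API (it listens on port 11434 by default). A minimal stdlib-only sketch that builds the request; actually sending it assumes the Ollama server is running and the model has been pulled:

```python
import json
import urllib.request

def build_request(prompt: str, model: str = "deepseek-r1:7b") -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_request("Why is the sky blue? Think step by step.")
# To actually run it (requires a running Ollama server with the model pulled):
# with urllib.request.urlopen(req) as r:
#     print(json.loads(r.read())["response"])
```

With `"stream": False` the server returns a single JSON object whose `response` field holds the full completion, including the model's chain-of-thought.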

Benchmark Scores

MMLU: 72.5%
HumanEval: 80.1%

Scores are approximate and may vary by quantization level.

Compatible GPUs (15)

HuggingFace

deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
