DeepSeek R1 7B
MIT
DeepSeek's 7B-parameter reasoning-focused distilled model. Strong chain-of-thought reasoning; runs on 8GB of VRAM.
Provider
DeepSeek
Parameters
7B
Context
65,536 tokens (64K)
Released
2025-01-20
VRAM Requirements by Quantization
| Quantization | Disk Size | VRAM Required | Compatible GPUs |
|---|---|---|---|
| Q8_0 | 7.5 GB | 8.5 GB | 14 GPUs |
| Q4_K_M | 4.4 GB | 5.5 GB | 15 GPUs |
| Q4_0 | 4.1 GB | 5.2 GB | 15 GPUs |
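The VRAM figures above track the quantized file size plus roughly 1GB of runtime overhead (KV cache and buffers). A minimal sketch of that rule of thumb; the ~1.1GB overhead constant is an assumption inferred from the table, not an official Ollama/llama.cpp formula:

```python
# Rough VRAM estimate for a quantized model file.
# Assumption: VRAM ~= on-disk size + ~1.1 GB of runtime overhead
# (KV cache, activation buffers) -- inferred from the table above,
# not an official formula; real usage varies with context length.
OVERHEAD_GB = 1.1

def estimated_vram_gb(disk_size_gb: float, overhead_gb: float = OVERHEAD_GB) -> float:
    """Return an approximate VRAM requirement in GB."""
    return round(disk_size_gb + overhead_gb, 1)

quants = {"Q8_0": 7.5, "Q4_K_M": 4.4, "Q4_0": 4.1}
for name, size in quants.items():
    print(f"{name}: ~{estimated_vram_gb(size)} GB VRAM")
```

For the Q4 quantizations this matches the table (4.1 GB → 5.2 GB, 4.4 GB → 5.5 GB); treat the output as an estimate, not a guarantee.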
Install with Ollama
Run in terminal:

```
ollama pull deepseek-r1:7b
```

Minimum 5.2GB VRAM required. Install Ollama from ollama.com.
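Once pulled, the model can be queried over Ollama's local HTTP API (default port 11434). A minimal standard-library sketch; it assumes an Ollama server is already running locally:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(prompt: str, model: str = "deepseek-r1:7b") -> dict:
    """Build the JSON payload for a non-streaming generate call."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Why is the sky blue? Answer briefly."))
```

With R1-series models the response typically includes the chain-of-thought inside `<think>` tags before the final answer.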
Benchmark Scores
| Benchmark | Score |
|---|---|
| MMLU | 72.5% |
| HumanEval | 80.1% |
Scores are approximate and may vary by quantization level.
Compatible GPUs (15)
- AMD RX 7900 GRE (16GB)
- AMD RX 7900 XTX (24GB)
- Apple M4 Pro (24GB)
- Apple M3 Max (36GB)
- Apple M4 Max (48GB)
- NVIDIA RTX 4060 (8GB)
- NVIDIA RTX 4070 SUPER (12GB)
- NVIDIA RTX 3080 12GB (12GB)
- NVIDIA RTX 4080 SUPER (16GB)
- NVIDIA RTX 4060 Ti 16GB (16GB)
- NVIDIA RTX 4070 Ti SUPER (16GB)
- NVIDIA RTX 5080 (16GB)
- NVIDIA RTX 3090 (24GB)
- NVIDIA RTX 4090 (24GB)
- NVIDIA RTX 5090 (32GB)
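The compatibility list above is simply the set of GPUs whose VRAM meets a quantization's requirement. A minimal sketch of that filter, using a sample of the GPUs listed above:

```python
# Filter GPUs by whether their VRAM covers a quantization's requirement.
# VRAM figures taken from the compatibility list above (a sample only).
GPUS = {
    "NVIDIA RTX 4060": 8,
    "NVIDIA RTX 4070 SUPER": 12,
    "AMD RX 7900 GRE": 16,
    "Apple M4 Pro": 24,
    "NVIDIA RTX 5090": 32,
}

def compatible_gpus(vram_required_gb: float) -> list[str]:
    """Return the names of GPUs with at least the required VRAM."""
    return [name for name, vram in GPUS.items() if vram >= vram_required_gb]

print(compatible_gpus(8.5))  # Q8_0: the 8GB RTX 4060 drops out
print(compatible_gpus(5.2))  # Q4_0: all five sample GPUs fit
```

This mirrors why the table shows 14 compatible GPUs for Q8_0 but 15 for the Q4 quantizations: only the 8GB RTX 4060 falls below the 8.5GB threshold.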
Hugging Face
deepseek-ai/DeepSeek-R1-Distill-Qwen-7B