
Mistral Small 3.2

Apache 2.0

Mistral's efficient 24B-parameter model with strong instruction following and multilingual support, released under the Apache 2.0 license.

Provider

Mistral AI

Parameters

24B

Context

128K

Released

2025-07-01

VRAM Requirements by Quantization

Method   Disk Size  VRAM Required  Fits GPUs
Q8_0     22.5 GB    24.5 GB        3 GPUs
Q4_K_M   12.8 GB    14 GB          12 GPUs
Q4_0     12.2 GB    13.5 GB        12 GPUs
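The disk sizes above follow a rough rule of thumb: file size ≈ parameter count × bits per weight ÷ 8, with extra VRAM on top for the KV cache and runtime buffers. A minimal sketch of that estimate (the bits-per-weight figures and the 2 GB overhead are assumptions, not values from this page; 24B matches the Hugging Face repo name):

```python
def gguf_size_gb(n_params_b: float, bits_per_weight: float) -> float:
    """Approximate GGUF file size in GB: params (billions) * bits / 8."""
    return n_params_b * bits_per_weight / 8

def vram_estimate_gb(n_params_b: float, bits_per_weight: float,
                     overhead_gb: float = 2.0) -> float:
    """Rough VRAM need: weight storage plus a fixed allowance for
    KV cache and runtime buffers (the 2 GB default is an assumption)."""
    return gguf_size_gb(n_params_b, bits_per_weight) + overhead_gb

# Nominal bits-per-weight for common llama.cpp quantizations (approximate).
BPW = {"Q8_0": 8.5, "Q4_K_M": 4.85, "Q4_0": 4.55}

for method, bpw in BPW.items():
    print(f"{method}: ~{gguf_size_gb(24, bpw):.1f} GB on disk, "
          f"~{vram_estimate_gb(24, bpw):.1f} GB VRAM")
```

Actual GGUF files deviate somewhat from this estimate because llama.cpp keeps some layers at higher precision.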

Install with Ollama

Run in terminal:

ollama pull mistral-small

Minimum 13.5 GB of VRAM required. Install Ollama from ollama.com.

Benchmark Scores

MMLU: 81.5%
HumanEval: 77.8%

Scores are approximate and may vary by quantization level.

Compatible GPUs (12)

Hugging Face

mistralai/Mistral-Small-3.2-24B-Instruct-2506
