Gemma 4 26B-A4B
MoEGemma
Gemma 4 MoE variant that activates 4B parameters per token from a 26B total pool. Unsloth provides GGUF builds at 22 quantization levels, the broadest quant coverage in the Gemma 4 family.
Provider
Google DeepMind
Parameters
26B (4B active MoE)
Context
128K
Released
2026-04-20
VRAM Requirements by Quantization
| Method | Disk Size | VRAM Required | Compatible GPUs |
|---|---|---|---|
| Q8_0 | 27 GB | 29 GB | 5 GPUs |
| Q4_K_M | 15 GB | 16.5 GB | 9 GPUs |
| Q4_0 | 14.3 GB | 15.5 GB | 9 GPUs |
| Q2_K | 9 GB | 10.5 GB | 18 GPUs |
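Comparing the table's columns, VRAM required tracks the quantized file's disk size plus roughly 1.5 GB of runtime overhead (KV cache and buffers). A minimal sketch of that heuristic; the 1.5 GB figure is inferred from the table above, not an official sizing formula:

```python
# Rough VRAM estimate for a quantized GGUF model.
# Assumption: ~1.5 GB of overhead (KV cache, runtime buffers) on top of
# the file's disk size. Heuristic inferred from the table, not official.

def estimate_vram_gb(disk_size_gb: float, overhead_gb: float = 1.5) -> float:
    """Estimate VRAM needed to load a GGUF file of the given disk size."""
    return round(disk_size_gb + overhead_gb, 1)

# Disk sizes from the table: quant method -> GB on disk
quants = {"Q8_0": 27.0, "Q4_K_M": 15.0, "Q4_0": 14.3, "Q2_K": 9.0}
for method, size in quants.items():
    print(f"{method}: ~{estimate_vram_gb(size)} GB VRAM")
```

For Q4_K_M and Q2_K this reproduces the table exactly (16.5 GB and 10.5 GB); the other rows land within a few hundred MB.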
Install with Ollama
Run in terminal:
ollama pull gemma4:26b-a4b

Minimum 10.5 GB VRAM required. Install Ollama from ollama.com.
Benchmark Scores
| Benchmark | Score |
|---|---|
| MMLU | 83% |
| HumanEval | 80.5% |
Scores are approximate and may vary by quantization level.
Compatible GPUs (18)
- AMD RX 9070 XT (16GB)
- AMD RX 7900 GRE (16GB)
- AMD RX 7900 XTX (24GB)
- AMD Ryzen AI Max+ 395 (64GB unified memory)
- Apple M4 Pro (24GB)
- Apple M3 Max (36GB)
- Apple M4 Max (48GB)
- Apple M4 Ultra (64GB)
- NVIDIA RTX 3080 12GB (12GB)
- NVIDIA RTX 4070 SUPER (12GB)
- NVIDIA RTX 4070 Ti SUPER (16GB)
- NVIDIA RTX 4080 SUPER (16GB)
- NVIDIA RTX 5070 Ti (16GB)
- NVIDIA RTX 4060 Ti 16GB (16GB)
- NVIDIA RTX 5080 (16GB)
- NVIDIA RTX 4090 (24GB)
- NVIDIA RTX 3090 (24GB)
- NVIDIA RTX 5090 (32GB)
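The per-quantization GPU counts in the VRAM table follow from comparing each card's memory against the quant's VRAM requirement. A minimal sketch using a subset of the GPUs listed above (names and memory sizes taken from the list; the subset choice is illustrative):

```python
# Which GPUs can hold a given quantization entirely in VRAM?
# Memory sizes (GB) are taken from the compatibility list above.
gpus = {
    "NVIDIA RTX 3080 12GB": 12,
    "NVIDIA RTX 4080 SUPER": 16,
    "AMD RX 7900 XTX": 24,
    "NVIDIA RTX 5090": 32,
    "Apple M4 Ultra": 64,
}

def compatible_gpus(vram_required_gb: float, gpus: dict) -> list:
    """Return GPU names whose memory meets the VRAM requirement."""
    return sorted(name for name, mem in gpus.items() if mem >= vram_required_gb)

# Q4_K_M needs 16.5 GB per the table, so the 12 GB and 16 GB cards drop out.
print(compatible_gpus(16.5, gpus))
```

Applied to the full 18-card list, this yields the table's counts: 9 cards (all 24 GB and up) for Q4_K_M, and all 18 for Q2_K at 10.5 GB.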
HuggingFace
google/gemma-4-26b-a4b-it