
Qwen 3.5 27B

Qwen

A balanced 27B model with strong reasoning. With Q4_K_M quantization it needs roughly 17 GB of VRAM; lower-bit quantizations fit in as little as 11 GB.

Provider: Alibaba
Parameters: 27B
Context: 131,072 tokens (128K)
Released: 2025-09-01

VRAM Requirements by Quantization

Method    Disk Size   VRAM Required   Fits GPUs
Q8_0      27 GB       29 GB           3 GPUs
Q4_K_M    15.3 GB     16.8 GB         7 GPUs
IQ3_M     11.5 GB     13 GB           12 GPUs
Q2_K      9.5 GB      11 GB           14 GPUs
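The VRAM figures above roughly track a simple rule of thumb: weight memory is parameter count times bits per weight divided by 8, plus a fixed overhead for the KV cache and runtime buffers. A minimal sketch of that estimate (the overhead constant and the per-method bit widths are assumptions, not runlocal.dev's exact formula):

```python
# Approximate bits per weight for common llama.cpp quantization
# methods (assumed typical values, not exact for every tensor).
BITS_PER_WEIGHT = {"Q8_0": 8.5, "Q4_K_M": 4.8, "IQ3_M": 3.7, "Q2_K": 3.35}

def vram_estimate_gb(params_billions: float, bits_per_weight: float,
                     overhead_gb: float = 1.5) -> float:
    """Weights at the quantized bit width, plus a flat overhead
    (assumption) for KV cache, activations, and runtime buffers."""
    weight_gb = params_billions * bits_per_weight / 8
    return weight_gb + overhead_gb
```

For example, `vram_estimate_gb(27, BITS_PER_WEIGHT["Q4_K_M"])` gives about 17.7 GB, within a GB or so of the table's 16.8 GB; the table's numbers also fold in context length and runtime specifics, so expect the estimate to differ by a gigabyte or two either way.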

Install with Ollama

Run in terminal:

ollama pull qwen3.5:27b

A minimum of 11 GB of VRAM is required (Q2_K quantization). Install Ollama from ollama.com.
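Once pulled, the model can be queried through Ollama's local REST API, which listens on port 11434 by default. A minimal non-streaming sketch in Python (the model tag matches the pull command above; everything else is standard Ollama API usage):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local port

def build_payload(prompt: str, model: str = "qwen3.5:27b") -> dict:
    # stream=False requests one complete JSON response
    # instead of a stream of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "qwen3.5:27b") -> str:
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

This requires the Ollama server to be running locally (it starts automatically after installation on most platforms).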

Benchmark Scores

MMLU: 83.1%
HumanEval: 79.8%

Scores are approximate and may vary by quantization level.

Compatible GPUs (14)

HuggingFace

Qwen/Qwen3.5-27B-Instruct
