
HunYuan Hy3 (MoE)

Tencent's open-source preview MoE model, claiming a ~40% efficiency gain on reasoning and coding versus the prior generation. It activates 21B parameters per token from a 295B total pool. Weights are available on HuggingFace; local deployment at scale requires a multi-GPU setup.

Provider: Tencent
Parameters: 21B active / 295B total (MoE)
Context: 128K
Released: 2026-04-24

VRAM Requirements by Quantization

Method            Disk Size   VRAM Required   Fits GPUs
BF16 (reference)  540 GB      590 GB          0 GPUs
Q4_K_M (est.)     165 GB      180 GB          0 GPUs
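The VRAM figures above follow the usual rule of thumb for MoE models: all 295B parameters must be resident in memory even though only 21B are active per token, so the footprint is roughly total parameters times bits per weight. A minimal sketch of that estimate (my own approximation, not this site's exact calculator; the ~4.5 bits/weight average for Q4_K_M and the 80 GB per-GPU figure are assumptions):

```python
import math

TOTAL_PARAMS = 295e9  # full MoE pool must be loaded, not just the 21B active


def weight_gb(bits_per_weight: float) -> float:
    """Raw weight footprint in GB for a given quantization width."""
    return TOTAL_PARAMS * bits_per_weight / 8 / 1e9


def gpus_needed(vram_gb: float, per_gpu_gb: float = 80.0) -> int:
    """Minimum number of (assumed) 80 GB cards to hold the weights alone."""
    return math.ceil(vram_gb / per_gpu_gb)


bf16 = weight_gb(16)    # ≈ 590 GB, matching the table's BF16 VRAM figure
q4km = weight_gb(4.5)   # Q4_K_M averages ~4.5 bits/weight → ≈ 166 GB
print(f"BF16:   {bf16:.0f} GB weights, ~{gpus_needed(bf16)} x 80 GB GPUs")
print(f"Q4_K_M: {q4km:.0f} GB weights, ~{gpus_needed(q4km)} x 80 GB GPUs")
```

Note the estimate covers weights only; activations and KV cache add on top, which is why the table's VRAM column sits above the disk size.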

Benchmark Scores

MMLU: 86.5%
HumanEval: 87.5%

Scores are approximate and may vary by quantization level.

HuggingFace: tencent/Hy3-preview