
Gemma 4: Google's Most Capable Open-Weight Model Family Drops Under Apache 2.0
On April 2, 2026, Google officially released the Gemma 4 family of open-weight models, a significant leap in open-source AI capability and accessibility.
The release includes four model variants: Effective 2B (E2B), Effective 4B (E4B), a 26B Mixture of Experts (MoE) model, and a 31B Dense model. The range spans everything from edge devices and mobile phones to high-performance server deployments.
The most consequential change is the licensing: Gemma 4 ships under the Apache 2.0 license, replacing the more restrictive Gemma Use Policy of previous versions. Developers can now freely use, modify, and distribute these models, including for commercial applications, without the constraints that limited adoption of Gemma 2 and Gemma 3.
Gemma 4 introduces native multimodal support across the family. All variants can process text and images, while the smaller E2B and E4B models also support audio input. This makes Gemma 4 the first open-weight model family to offer full multimodal capabilities at the 2B parameter scale.
The models are purpose-built for advanced reasoning and agentic workflows. They excel at multi-step problem solving, tool use, and autonomous task completion — capabilities that were previously exclusive to much larger proprietary models.
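To make the tool-use pattern concrete, here is a minimal sketch of the kind of agentic loop such models are used in. This is generic illustration, not a Gemma 4 API: `call_model` is a stub standing in for a real model call (e.g. a locally hosted Gemma 4 endpoint), and the tool names and JSON schema are assumptions.

```python
import json

# Hypothetical tool registry the agent can invoke.
TOOLS = {
    "add": lambda a, b: a + b,
    "multiply": lambda a, b: a * b,
}

def call_model(prompt: str) -> str:
    # Stub: a real model would read the prompt and emit a structured
    # tool call. Here we fake one for the question "What is 6 * 7?".
    return json.dumps({"tool": "multiply", "args": {"a": 6, "b": 7}})

def run_agent_step(prompt: str):
    """Parse the model's tool call and execute the requested tool."""
    call = json.loads(call_model(prompt))
    tool = TOOLS[call["tool"]]
    return tool(**call["args"])

print(run_agent_step("What is 6 * 7?"))  # 42
```

In a real deployment, the tool result would be fed back into the model for the next step of a multi-step task; the loop above shows only a single parse-and-execute round.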
Early benchmarks show the 31B Dense model competing with models 3-5x its size on reasoning tasks, while the MoE variant achieves similar performance with significantly lower inference costs due to its sparse activation pattern.
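The cost advantage of the MoE variant comes from routing: each token activates only a few experts rather than the full parameter set. The sketch below shows generic top-k expert routing in NumPy; the expert count, top-k value, and dimensions are illustrative and are not Gemma 4's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2  # illustrative sizes only

# Each "expert" is a simple linear layer; a router scores experts per token.
experts = [rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
           for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) / np.sqrt(d_model)

def moe_forward(x):
    """Route each token to its top-k experts; only those experts run."""
    scores = x @ router                            # (tokens, n_experts)
    top = np.argsort(scores, axis=-1)[:, -top_k:]  # chosen expert indices
    sel = np.take_along_axis(scores, top, axis=-1)
    # Softmax over the selected experts' scores only.
    w = np.exp(sel - sel.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    out = np.zeros_like(x)
    for t in range(x.shape[0]):                    # per token
        for j in range(top_k):                     # only top_k experts run
            out[t] += w[t, j] * (x[t] @ experts[top[t, j]])
    return out

x = rng.standard_normal((4, d_model))
y = moe_forward(x)
print(y.shape)                                     # (4, 64)
# Per token, only top_k / n_experts of expert compute is spent:
print(f"active expert fraction: {top_k / n_experts:.2f}")  # 0.25
```

With these toy numbers, each token pays for 2 of 8 experts, i.e. a quarter of the dense expert compute; the same principle is how a sparse model can hold far more parameters than it activates per token.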
For the data science community, Gemma 4 represents a turning point: production-quality AI capabilities that can be self-hosted, customized, and deployed without vendor lock-in or restrictive licensing.