Google released Gemini 3 Flash, expanding its Gemini 3 model family with a faster model that’s now the default in the Gemini app.
Gemini 3 Flash is also rolling out globally as the default model for AI Mode in Search.
The release builds on Google’s recent Gemini 3 rollout, which introduced Gemini 3 Pro in preview and announced Gemini 3 Deep Think as an enhanced reasoning mode.
What’s New
Gemini 3 Flash replaces Gemini 2.5 Flash as the default model in the Gemini app globally, which means free users get the Gemini 3 experience by default.
In Search, Gemini 3 Flash is rolling out globally as AI Mode’s default model starting today.
For developers, Gemini 3 Flash is available in preview via the Gemini API, with access through Google AI Studio, Google Antigravity, Vertex AI, and Gemini Enterprise, as well as tools such as Gemini CLI and Android Studio.
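For a sense of what preview access looks like, here is a minimal sketch using the google-genai Python SDK. The model identifier "gemini-3-flash-preview" is an assumption for illustration; check Google's model documentation for the exact preview ID.

```python
from google import genai

# Assumes an API key from Google AI Studio.
client = genai.Client(api_key="YOUR_API_KEY")

# "gemini-3-flash-preview" is a placeholder model ID; confirm the actual
# preview identifier in the Gemini API docs before using it.
response = client.models.generate_content(
    model="gemini-3-flash-preview",
    contents="Summarize the key differences between Gemini 3 Flash and Gemini 2.5 Flash.",
)
print(response.text)
```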
Pricing
Gemini 3 Flash pricing is listed at $0.50 per million input tokens and $3.00 per million output tokens in Google’s Gemini API pricing documentation.
On the same pricing page, Gemini 2.5 Flash is listed at $0.30 per million input tokens and $2.50 per million output tokens.
Google says Gemini 3 Flash uses 30% fewer tokens on average than Gemini 2.5 Pro for typical tasks, and cites third-party benchmarking for a “3x faster” comparison with 2.5 Pro.
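To put the listed rates in concrete terms, the sketch below estimates the cost of a hypothetical workload of 10 million input tokens and 2 million output tokens at both models’ published prices; the workload figures are invented for illustration.

```python
# Rates from Google's Gemini API pricing page (USD per 1M tokens).
RATES = {
    "gemini-3-flash": {"input": 0.50, "output": 3.00},
    "gemini-2.5-flash": {"input": 0.30, "output": 2.50},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost for a workload at the listed per-million-token rates."""
    r = RATES[model]
    return (input_tokens / 1_000_000) * r["input"] + (output_tokens / 1_000_000) * r["output"]

# Hypothetical workload: 10M input tokens, 2M output tokens.
for model in RATES:
    print(f"{model}: ${estimate_cost(model, 10_000_000, 2_000_000):.2f}")
# gemini-3-flash:   $11.00
# gemini-2.5-flash: $8.00
```

Note that the per-token premium over 2.5 Flash can be partly offset in practice if, as Google claims, the new model uses fewer tokens per task.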
Why This Matters
The default model in the Gemini app has changed, and free users get the upgrade at no extra cost.
If you build on Gemini, Gemini 3 Flash offers a new option for high-volume workflows, priced well below Pro-tier rates.
Looking Ahead
Gemini 3 Flash is rolling out now. In Search, Gemini 3 Pro is also available in the U.S. via the AI Mode model menu.