Google has upgraded its Gemini chatbot with a new AI image model that gives users finer control over photo editing, a move to catch up with OpenAI's popular image tools and lure ChatGPT users.
Google rolled out the Gemini 2.5 Flash Image update to all Gemini app users on Tuesday; it is also available to developers through the Gemini API, Google AI Studio and the Vertex AI platform. The model edits images more faithfully from natural-language requests and is particularly good at keeping details such as faces and animals consistent across edits, a shortcoming of most competing products. Earlier, an anonymous AI image editor codenamed "nano-banana" raised eyebrows on LMArena's crowdsourced evaluation platform after performing well in multiple benchmarks; Google later confirmed it was its own Gemini 2.5 Flash Image model.
Google emphasized that the update makes editing significantly more seamless and that the output suits a wide variety of uses. AI image models have become a battleground among tech giants: ChatGPT usage soared after OpenAI launched its GPT-4o native image generator. To catch up, Google has invested heavily in image models, and the new model improves not only visual quality but also its ability to follow instructions. It is also designed for consumer scenarios, such as helping users plan home and garden projects, and can blend multiple reference images supplied in a single prompt into one coherent image.
While Gemini's AI image generator makes creation easier, Google has set limits to prevent users from generating inappropriate content. Previously, Google apologized for and even suspended the image generator after Gemini produced inaccurate images of historical figures. Now the company believes it has struck a better balance, giving users creative freedom without carte blanche. Google's terms of service explicitly prohibit generating "non-consensual intimate images," a restriction that appears to be missing on other platforms such as Grok, which has let users generate AI sexual imagery of celebrities like Taylor Swift.
To address the authenticity challenges posed by deepfakes, Google adds visible watermarks and metadata identifiers to AI-generated images, though users browsing social media may not notice these markers.