Google's new AI model, Gemini 2.0 Flash, has drawn criticism for its ability to remove watermarks from images. Social media users recently discovered this controversial capability, which works on images from Getty Images and other major stock media providers.
Last week, Google expanded developer access to Gemini 2.0 Flash's image generation features, giving users greater freedom to create and modify images. This powerful functionality, however, appears to lack sufficient built-in safeguards. Gemini 2.0 Flash stands out among AI models for how effortlessly it removes watermarks, raising both ethical and legal concerns.
The image generation functionality of Gemini 2.0 Flash is an experimental feature that Google explicitly labels as unsuitable for commercial use, and it is currently available to developers only through AI Studio. Although the model remains in an early development phase, its editing capabilities are already strong enough to potentially facilitate copyright infringement. Even though the tool struggles with complex or semi-transparent watermarks, it has raised serious concerns among copyright holders.

Under U.S. copyright law, removing a watermark from an image without the owner's permission is illegal except in limited circumstances. Competing AI models, such as Anthropic's Claude 3.7 Sonnet and OpenAI's GPT-4o, include built-in protections that refuse watermark removal requests, citing ethical and legal considerations.
Industry experts and copyright holders are calling for stronger AI regulation amid these growing concerns. Google has not yet commented publicly on the content protection issue, and its silence may increase pressure on the company to strengthen the safeguards in its AI models.