Adobe Has Been Found Selling Fake AI Images Of The Destruction In Gaza

Adobe is selling realistic AI-generated images of the Israel-Hamas conflict, and these images have circulated on the internet without clear indication that they are artificial.

As part of its adoption of generative artificial intelligence (AI), Adobe allows people to upload and sell AI-generated images through its stock image subscription service, Adobe Stock. Adobe requires submitters to disclose when images are AI-generated, and it labels such images on its platform as “generated with AI.” Beyond that disclosure, the submission guidelines are the same as for any other image: illegal or infringing content is prohibited.

When users search for images on Adobe Stock, they may see a mix of real and AI-generated results. Some AI-generated images are obviously staged, while others can pass as authentic, candid photographs.

This also applies to Adobe Stock’s collection of images related to Israel, Palestine, Gaza, and Hamas. For example, a search for “Palestine” returns as its first result a photorealistic depiction of a missile attack on a cityscape, titled “Conflict between Israel and Palestine generative AI.” Other AI-generated images depict protests, on-the-ground fighting, and even children fleeing bomb blasts, none of which are real.

These AI-generated images are being used in coverage of the Israel-Hamas conflict without clear disclosure that they are artificial, amid a proliferation of misinformation and misleading content on social media.

Some small online news outlets, blogs, and newsletters have published the image titled “Conflict between Israel and Palestine generative AI” without labeling it as the product of generative AI. It is unclear whether these publications know the image is fake.

Dr. T.J. Thomson, a senior lecturer at RMIT who researches the use of AI-generated images, has raised concerns about the transparency of AI image use and whether audiences can recognize such images as artificial. The worry is that these images may mislead people, distort reality, and erode our shared understanding of truth and accuracy. Thomson also noted ongoing discussions about the labor implications of using AI images instead of relying on photographers on the ground.

While AI-generated images can be a valuable tool, their potential for misuse is real. Thomson emphasized the need for wisdom and caution in their use so that they do not mislead audiences or distort the truth.
