Recently, Meta has come under fire for mistakenly tagging real photos as “Made with AI.”
According to TechCrunch, a technology news website, Meta mistakenly classified a basketball game photo by Pete Souza, a former White House photographer, as AI-generated, drawing attention to the issue. Souza expressed frustration that he could not remove the incorrect label from his picture.
Souza’s photo is not an isolated case. Photographers who make simple edits in Adobe software, such as removing minor objects with Generative Fill, have found their photographs labeled “Made with AI” on Meta’s platforms. They argue that such small retouches should not qualify an image as AI-generated, noting that even basic edits in Adobe’s tools are enough to trigger the label.
In response to the growing criticism, Meta spokesperson Kate McLaughlin told The Verge that the company is aware of the problem and is reviewing its approach “so that [its] labels reflect the amount of AI used in an image.”
Meta first announced in February that it would begin labeling photos created with AI tools from companies such as Google, OpenAI, Microsoft, Adobe, Shutterstock, and Midjourney as “Made with AI,” aiming for transparency about the use of AI in image creation. The labels rolled out in May, and Meta relies on metadata embedded in image files to identify whether AI tools were involved.
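Meta has not detailed its detection pipeline, but tools like Adobe’s embed standardized IPTC metadata in exported files, including a “DigitalSourceType” field whose values distinguish fully AI-generated images from real photos edited with AI. The sketch below is a simplification under that assumption; the raw byte-scan and the file name are illustrative, not Meta’s actual implementation.

```python
from pathlib import Path

# IPTC "DigitalSourceType" terms that signal AI involvement. The "composite"
# value is what an AI-assisted edit (e.g., Generative Fill) typically leaves
# behind, which is why even lightly retouched photos can pick up the label.
AI_SOURCE_TYPES = (
    b"digitalsourcetype/trainedAlgorithmicMedia",               # fully AI-generated
    b"digitalsourcetype/compositeWithTrainedAlgorithmicMedia",  # real photo, AI-edited
)

def looks_ai_touched(path: str) -> bool:
    """Return True if the file's embedded metadata mentions an AI source type."""
    data = Path(path).read_bytes()  # XMP metadata is stored as plain text inside the file
    return any(term in data for term in AI_SOURCE_TYPES)

# Hypothetical usage:
# looks_ai_touched("edited_photo.jpg")  # True if Generative Fill metadata survives export
```

Under a scheme like this, a single Generative Fill touch-up writes the composite marker into the file, but the marker says nothing about how much of the image is AI, which would explain both the blanket labels on lightly edited photos and McLaughlin’s promise that future labels will “reflect the amount of AI used in an image.”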
Despite these good intentions, the recent mislabeling underscores how difficult it is to reliably distinguish AI-generated content. Meta continues to refine its approach, trying to balance transparency with accuracy so that photographers’ genuine work is not wrongly flagged as AI-created.