Verify, a new tool in the battle against AI-generated misinformation, takes a unique approach: it checks images for authenticity rather than marking AI outputs. Developed by an alliance of global news organizations, technology companies, and camera makers in response to the proliferation of sophisticated fakes, Verify aims to help people distinguish real photographs from AI-generated images online. The web-based tool allows users to examine an image’s digital signature, which records details such as the date, time, location, and photographer.
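The article does not describe Verify’s actual signature format, so the following is only a minimal sketch of the general idea: a camera signs the image bytes together with their capture metadata using a private key, and a checker later verifies that signature with the corresponding public key. The Ed25519 scheme, the metadata fields, and the function names here are illustrative assumptions, not Verify’s real implementation.

```python
# Illustrative sketch only: Verify's real signing scheme is not described in the
# article. This stand-in signs image bytes plus a metadata record (date, time,
# location, photographer) with a generic Ed25519 key pair.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_photo(private_key: Ed25519PrivateKey,
               image_bytes: bytes, metadata: dict) -> bytes:
    """Camera-side step: sign the image together with its capture metadata."""
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    return private_key.sign(payload)


def verify_photo(public_key: Ed25519PublicKey, image_bytes: bytes,
                 metadata: dict, signature: bytes) -> bool:
    """Checker-side step: confirm neither pixels nor metadata were altered."""
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False


# Hypothetical example: a camera signs at capture time; a news desk verifies
# before publishing. Any edit to the image or metadata breaks the signature.
camera_key = Ed25519PrivateKey.generate()
metadata = {"date": "2023-12-05", "time": "14:32:10Z",
            "location": "Tokyo, JP", "photographer": "Jane Doe"}
image = b"...raw image bytes..."

sig = sign_photo(camera_key, image, metadata)
print(verify_photo(camera_key.public_key(), image, metadata, sig))            # True
print(verify_photo(camera_key.public_key(), image + b"edit", metadata, sig))  # False
```

The point of the sketch is the asymmetry Verify relies on: only the signing key held at capture time can produce a valid signature, while anyone with the public key can check it.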
As the threat of AI-generated misinformation grows, Verify could become a crucial safeguard, particularly in the media industry, where accurate visuals are essential. To that end, Verify collaborates with major camera manufacturers such as Nikon, Sony, and Canon, enabling them to embed authenticity credentials directly into photos as they are taken.
Nikon is set to introduce professional-grade mirrorless cameras with built-in authentication technology, ensuring that digital signatures are automatically added to every photograph. Sony will follow suit with a firmware update for its mirrorless cameras, while Canon plans to release a camera with built-in authentication signatures in 2024, with video authentication signatures to follow later that year.
Unlike traditional watermarking methods, Verify does not mark AI outputs; instead, it provides evidence that an image is not AI-generated by exposing its digital signature. This approach aims to prevent confusion between authentic photographs and AI-generated content, offering transparency in an era when distinguishing the two is increasingly difficult.
Digital signatures can also deter plagiarism, a prevalent problem in online spaces where images are often miscredited. While not every viewer will inspect a photo’s digital signature, the built-in credit could discourage attempts to pass off someone else’s work as one’s own, contributing to the integrity of online visual content.
Verify’s initiative aligns with ongoing efforts to combat AI misinformation, complementing Google DeepMind’s SynthID, which marks AI outputs with watermarks designed to survive attempts at removal and distortion.