Google will start labeling AI-generated images in search later this year

Google says it plans to make changes to Google Search that will more clearly indicate which images in search results have been generated by artificial intelligence — or edited by AI tools.

Over the next few months, Google will begin labeling AI-generated and edited images in the “About this image” box across Search, Google Lens, and Android’s Circle to Search. Similar disclosures could come to other Google properties in the future, like YouTube; Google says it has more to share later in the year.

Importantly, only images containing C2PA metadata will be flagged as AI-manipulated in Search. C2PA, short for Coalition for Content Provenance and Authenticity, is a group that develops technical standards to track the history of an image, including the hardware and software used to capture and/or create it.
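As a rough illustration of what "containing C2PA metadata" means in practice: C2PA provenance manifests are embedded inside an image file in JUMBF (JPEG Universal Metadata Box Format) boxes. The sketch below is a crude byte-level heuristic, not a real verifier; it only checks whether the characteristic `jumb` box type and `c2pa` label appear in the file's raw bytes, and `has_c2pa_marker` is a hypothetical helper name, not part of any official SDK.

```python
# Crude heuristic sketch: look for the JUMBF box type ("jumb") and the
# "c2pa" label that C2PA-signed images embed. This does NOT parse the
# manifest or verify its cryptographic signature -- the coalition's
# open-source tooling is needed for actual validation.

def has_c2pa_marker(path: str) -> bool:
    """Return True if the file's bytes suggest an embedded C2PA manifest."""
    with open(path, "rb") as f:
        data = f.read()
    return b"jumb" in data and b"c2pa" in data
```

A scrubbed or re-encoded copy of the same image would typically fail this check, which is exactly the fragility described below.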

Companies like Google, Amazon, Microsoft, OpenAI, and Adobe are backing C2PA. But the coalition’s standards haven’t gained widespread acceptance. As The Verge noted in a recent article, C2PA faces a number of challenges with adoption and interoperability; only a handful of generative AI tools and cameras from Leica and Sony support the group’s specifications.

What’s more, C2PA metadata — like any other metadata — can be deleted or scrubbed, or become corrupted to the point of being unreadable. And images from some of the more popular generative AI tools, such as Flux, which xAI’s Grok chatbot uses to generate images, don’t have C2PA metadata included, in part because their creators have not agreed to endorse the standard.

Still, some action is better than none as deepfakes continue to spread. AI-generated content fraud is estimated to have increased by 245% between 2023 and 2024, and Deloitte predicts that deepfake-related losses will grow from $12.3 billion in 2023 to $40 billion by 2027.

Studies show that most people fear being tricked by deepfakes and worry that AI could be used to spread propaganda.