In response to the escalating prevalence of tools that generate synthetic content, Google is set to implement a new policy for political advertisements on its platforms. Starting in November, a year ahead of the next US presidential election, political ads will be required to disclose explicitly whether images or audio were created using artificial intelligence (AI).
Enhancing Transparency in Political Messaging
While Google’s current ad policies already prohibit manipulating digital media to deceive voters, this update specifically targets election-related content featuring “synthetic elements” that depict real or realistic-looking people and events. Such advertisements must carry prominent disclosures, using labels such as “this image does not depict real events” or “this video content was synthetically generated” to alert viewers.
Guarding Against Misinformation
The tech giant’s ad policy already guards against demonstrably false claims that could undermine trust in the electoral process. Political ads must disclose their funding sources, and detailed information about their messages is available in an online ads library. Under the updated policy, any digital alteration in an election ad must be disclosed conspicuously, placed where viewers are likely to notice it.
Identifying Synthetic Content
Content warranting a disclosure label includes synthetic imagery or audio that shows people saying or doing things they never said or did, or that depicts events that never occurred. AI-generated content has already been used to circulate misleading images and videos, heightening concerns about misinformation and manipulation.
Google’s Ongoing Commitment
Google remains committed to investing in technology that detects and removes such deceptive content. While manipulated imagery is nothing new, the rapid advance of generative AI and its potential for misuse call for vigilant monitoring.