As the US gears up for its presidential election, Google is tightening its advertising rules around AI: a new policy will require political advertisements to disclose any "synthetic" elements, covering AI-generated audio and imagery.
Taking effect in November, the updated policy requires that ads using artificial intelligence to depict events or people in ways that did not actually occur carry a clear and prominent disclosure of their synthetic nature. The requirement applies to audio, video, and image advertisements across Google's platforms, including its display ad network and YouTube. Minor edits, however, such as color adjustments or image cropping, are exempt from the disclosure rule.
AI-generated content has already surfaced in political advertising. Earlier this year, for instance, Florida Governor Ron DeSantis's campaign posted a video on X that reportedly contained AI-generated deepfake images. The Republican National Committee also aired an ad that, per its own disclaimer, was built entirely with AI imagery.
Google's move comes as the Federal Election Commission weighs whether to regulate AI-generated deepfakes in political advertisements. Last July, Google, alongside other major tech companies including Amazon, Microsoft, and OpenAI, signed on to voluntary AI safety commitments put forward by the Biden administration.
By enforcing transparency in political ads that use AI, Google's policy strengthens the integrity of political discourse. As the technology advances, ensuring that voters can tell authentic content from synthetic content is essential to the health of democratic elections.