AI-generated images and audio are already making their way into the 2024 Presidential election cycle. In an effort to stanch the flow of disinformation ahead of what is expected to be a contentious election, Google announced on Wednesday that it will require political advertisers to “prominently disclose” whenever their advertisement contains AI-altered or -generated aspects, “inclusive of AI tools.” The new rules will be based on the company’s existing Manipulated Media Policy and will take effect in November.
“Given the growing prevalence of tools that produce synthetic content, we’re expanding our policies a step further to require advertisers to disclose when their election ads include material that’s been digitally altered or generated,” a Google spokesperson said in a statement obtained by The Hill. Small, inconsequential edits like resizing images, minor background cleanup or color correction will still be allowed — but ads that depict people or things doing something they never actually did, or that otherwise alter real footage, will be flagged.
Ads that do include AI-generated or -altered content will need to be labeled as such in a “clear and conspicuous” manner that is easily seen by the user, per the Google policy. The ads will first be moderated through Google’s own automated screening systems and then reviewed by a human as needed.
Google’s actions run counter to those of other social media companies. X/Twitter recently announced that it has reversed its previous position and will allow political ads on the site, while Meta continues to take heat for its own lackadaisical ad moderation efforts.
The Federal Election Commission is also beginning to weigh in on the issue. Last month it sought public comment on amending a standing regulation “that prohibits a candidate or their agent from fraudulently misrepresenting other candidates or political parties” to clarify that the “related statutory prohibition applies to deliberately deceptive Artificial Intelligence campaign advertisements” as well.