Google Will Flag AI-Generated Content
By Adam Pease
Google has announced that it will require AI-generated content in political ads on its platforms to be flagged. The news comes amid worries about the negative social effects of generative content and represents an early attempt to address those concerns.
Tackling Synthetic Content in Political Ads
In an era where artificial intelligence (AI) technologies like deepfakes can easily manipulate audio and images, the line between reality and digitally-crafted content is becoming increasingly blurred.
As a proactive measure, Google has announced that political ads featuring AI-generated content on its platforms must clearly disclose this fact to the public. Slated for implementation in November, the new policy comes ahead of the next U.S. presidential election.
It aims to address concerns over AI-driven disinformation campaigns that could potentially influence voters and undermine trust in electoral processes.
Google’s decision to require such disclosure adds another layer to its existing ad policies, which already prohibit the manipulation of digital media to deceive or mislead on issues of public concern.
The new rules are particularly focused on election-related ads, requiring them to “prominently disclose” if they contain “synthetic content.”
For example, labels like “this image does not depict real events” or “this video content was synthetically generated” could serve as flags for viewers. These labels are intended to be “clear and conspicuous,” placed where they’re likely to be noticed by the audience.
This move aligns with Google’s ongoing efforts to make political advertising more transparent, such as by disclosing who paid for a given ad and by maintaining an online ads library.
The Challenges of AI in Information Dissemination
The requirement for disclosure comes in the wake of multiple instances where AI-generated content has sown confusion and mistrust.
From a fake image of former U.S. President Donald Trump being arrested to a deepfake video of Ukrainian President Volodymyr Zelensky, the misuse of generative AI technologies is not just a hypothetical worry; it’s already happening.
Experts in the AI field have expressed concerns about the rapid advancements in generative AI technologies and their potential for misuse, particularly in the political realm where they can have outsized consequences.
While Google’s policy update is a commendable step toward transparency and integrity in political advertising, it is just a part of a larger tapestry of efforts needed to combat misinformation and disinformation.
Google itself has pledged to continue investing in technology designed to detect and remove such misleading content. As we inch closer to major electoral events, all eyes will be on how tech giants like Google implement and enforce these policies, and whether these steps are effective in providing a trustworthy platform for political discourse.