Google Will Flag AI-Generated Content

By Adam Pease

Google has announced that it will require AI-generated content in political ads to be clearly flagged across its advertising platforms. The news comes amid worries about the negative social effects of generative content and represents an early attempt to address those concerns.

Tackling Synthetic Content in Political Ads

In an era where artificial intelligence (AI) technologies like deepfakes can easily manipulate audio and images, the line between reality and digitally-crafted content is becoming increasingly blurred.

As a proactive measure, Google has announced that political ads featuring AI-generated content on its platforms must clearly disclose this fact to the public. Slated for implementation in November, the new policy comes ahead of the next U.S. presidential election.

It aims to address concerns over AI-driven disinformation campaigns that could potentially influence voters and undermine trust in electoral processes.

Google’s decision to require such disclosure adds another layer to its existing ad policies, which already prohibit the manipulation of digital media to deceive or mislead on issues of public concern.

The new rules are particularly focused on election-related ads, requiring them to “prominently disclose” if they contain “synthetic content.”

For example, labels like “this image does not depict real events” or “this video content was synthetically generated” could serve as flags for viewers. These labels are intended to be “clear and conspicuous,” placed where they’re likely to be noticed by the audience.

This move aligns with Google’s ongoing efforts to make political advertising more transparent, such as by disclosing who paid for a given ad and by maintaining an online ads library.

The Challenges of AI in Information Dissemination

The requirement for disclosure comes in the wake of multiple instances where AI-generated content has sown confusion and mistrust.

From a fake image of former U.S. President Donald Trump being arrested to a deepfake video of Ukrainian President Volodymyr Zelensky, the misuse of generative AI technologies is not just a hypothetical worry; it’s already happening.

Experts in the AI field have expressed concerns about the rapid advancements in generative AI technologies and their potential for misuse, particularly in the political realm where they can have outsized consequences.

While Google’s policy update is a commendable step toward transparency and integrity in political advertising, it is just one part of a larger tapestry of efforts needed to combat misinformation and disinformation.

Google itself has pledged to continue investing in technology designed to detect and remove such misleading content. As we inch closer to major electoral events, all eyes will be on how tech giants like Google implement and enforce these policies, and whether these steps are effective in providing a trustworthy platform for political discourse.

Bottom Line

Google has announced a new policy requiring political ads on its platforms to clearly disclose any use of AI-generated content, set to take effect this November ahead of the next U.S. presidential election.

The move aims to address concerns about AI’s potential role in disseminating disinformation during electoral campaigns. The policy complements Google’s existing ad rules, which already prohibit manipulating digital media to deceive or mislead on issues of public concern.


Mastering 2024 and Beyond with Aragon’s Exclusive Predictions for AI

Join Us LIVE on Thursday, September 21, 2023 at 10 AM PT | 1 PM ET

 

Join us on Thursday, September 21st, for exclusive early access to Aragon Research’s predictions for 2024 and beyond. These impactful, actionable predictions are essential input for your strategic and operational planning in the years ahead, helping you crush your goals and surpass your competitors.

This event features LIVE analyst sessions with Aragon Research’s expert analysts and our featured Women-in-Technology guest panelists discussing “The Current State of Women-In-Tech.”

Sign Up for Free Today


 

This blog is part of the Content AI blog series by Aragon Research analyst Adam Pease.

Missed the previous installments? Catch up here:

Blog 1: RunwayML Foreshadows the Future of Content Creation

Blog 2: NVIDIA Enters the Text-to-Image Fray

Blog 3: Will OpenAI’s New Chatbot Challenge Legacy Search Engines?

Blog 4: Adobe Stock Accepts Generative Content and Meets Backlash

Blog 5: OpenAI Makes a Move for 3D Generative Content with Point-E

Blog 6: ChatGPT and the Problem of Detecting AI-Generated Content

Blog 7: Content AI: Voice AI Takes a Step Forward

Blog 8: AI in the Courtroom: Are Robot Lawyers the Future of Law?

Blog 9: GitHub Copilot and the Legality of Generative Content

Blog 10: Google Steps into the Chat AI Ring with Bard, Anthropic Investment

Blog 11: Exploring Google Bard’s Botched Demo

Blog 12: Meta AI Is Working at the Intersection of Robotics and Generative AI

Blog 13: Meta’s New AI Model Leaks

Blog 14: Students in China Use ChatGPT from Behind the Firewall

Blog 15: OpenAI’s ChatGPT API Will Transform Application Experiences

Blog 16: Microsoft Announces Copilot X, GPT-4 Integration

Blog 17: BloombergGPT Brings Generative AI to Finance

Blog 18: Stability AI Releases Its First Large Language Model: StableLM

Blog 19: OpenAI to Patent ‘GPT’

Blog 20: Pinecone and the Power of Vector Databases for AI

Blog 21: Alphabet Plans New Generative AI Announcements for Google I/O

Blog 22: Europe Moves to Regulate Generative AI

Blog 23: OpenAI Introduces Code Interpreter Plugin for ChatGPT

Blog 24: Generative AI and the Labor Market: Is It Causing Job Loss?

Blog 25: OpenAI Announces Function Calling for Its GPT-4 API

Blog 26: The State of Open-Source Language Models

Blog 27: The State of Generative Video

Blog 28: Google’s “Genesis”: A News Writing AI Shocking Journalists

Blog 29: OpenAI Brings Custom Instructions to ChatGPT

Blog 30: New York Times Limits Use of Data for Generative AI

Blog 31: Faced With Generative AI, Teachers Are Returning to Paper and Pen

Blog 32: Anthropic Partners with SKT for Telecom Language Model

Blog 33: Federal Judge Rules AI-Generated Works Are Not Copyright-Protected

Blog 34: AI in the Classroom: A Reflection on Gwinnett County’s Trailblazing Initiative