Politics

How AI shapes the landscape of political misinformation

Google is the first company to require a watermark disclaimer on ads that use AI, but how much will that affect the spread of political misinformation?

Generated image of President Joe Biden and former President Donald Trump collaborating.
Matt Brodsky

A wave of public concern over the influence of artificial intelligence on misinformation and the democratic process has emerged as the countdown to the upcoming election season begins. Citizens and experts are raising questions about the role and regulation of AI technology in shaping political advertising.

“There’s a growing portion of people who agree that AI needs to be carefully managed,” researcher Baobao Zhang said.

Google recently announced that it will soon require advertising created with AI to carry a clear watermark disclaimer. The decision came after the Federal Election Commission (FEC) was pressured to open a public comment period.

“I think this is a really important step,” said Dr. Carah Ong Whaley, the Academics Program Officer at the Center for Politics at the University of Virginia.

Google’s decision to begin self-regulating before harsher guardrails are put in place is a response to the public’s concern. But what effect will the disclaimer have on the spread of misinformation?

“Somebody who’s outraged by something or just wants to spread the misinformation themselves could screenshot the fake ad, remove the watermark and then spread that image,” Whaley said.

Zhang touched on the expansive role of AI beyond the ad space: “Something like a YouTube video where a campaign is funded by a dark money group could use AI to generate an ad.”

That is what happened when Florida Governor Ron DeSantis shared generated images of former President Donald Trump hugging Dr. Anthony Fauci, the former head of the National Institute of Allergy and Infectious Diseases. The DeSantis campaign released the images in a video, and because the video was not posted through Google Ads, the content complied with current AI rules.

Whaley acknowledges the narrow reach of Google’s decision, but still calls for increased awareness when navigating content online. “Disclaimers are a good thing, but there are still challenges and workarounds we have to think through.”

With AI-generated content poised to become pervasive in political campaigns, is it up to the consumer to recognize misinformation?

“We consume so much digital content now that it’s hard to say you have to be able to discern everything,” Zhang said.

Whaley and Zhang agree that it is a near impossible task for the public to discern fact from fiction when it comes to media consumption.

“Putting the onus on individuals and the public makes the public far more susceptible to harm, especially when we’re talking about a lot of voters who don’t have the time, or the resources or the training in media literacy,” Whaley said. “Congress is really the one that needs to act.” And Congress is acting.

Multiple pieces of legislation have been introduced in both the House and the Senate. Democratic Representative Yvette Clarke, Senator Chuck Schumer (who introduced the SAFE Innovation Framework for AI this past June) and, most recently, Republican Senator Pete Ricketts have all been proponents of AI regulation.

Ricketts introduced the Advisory for AI Generated Content Act on September 12. “With Americans consuming more media than ever before, the threat of weaponized disinformation confusing and dividing Americans is real,” Ricketts said in a press release. “My bill requiring a watermark on AI generated content would give Americans a tool to understand what is real and what is made-up.”

While AI has already affected previous elections, Zhang says there is reason to be “cautiously optimistic.” Google’s decision to mark political ads created with AI is an “important step,” but there is still a need for increased regulation.

For now, the general public, corporations and Congress should all be working toward stricter regulation of misinformation at the hands of artificial intelligence.

About the AI Illustration

The image of President Biden and former President Trump was created using Midjourney and the following prompts:

Photo: A dimly lit room with a large round table. On the table lies a laptop displaying an AI-manipulated ad of two politicians, Donald Trump and Joe Biden, embracing. The watermark on the image reads “AI-Generated Content”. Surrounding the laptop are various newspapers and magazines with headlines related to AI regulation, the FEC, and public concerns.

Illustration: A split screen. On the left, a hand is using software to create an AI-manipulated image of two public figures shaking hands. On the right, another hand holds a magnifying glass over the image, revealing a watermark that says “AI-Generated”.

Photo: A university lecture hall with Dr. Carah Ong Whaley and Dr. Baobao Zhang at the front, discussing AI regulations. Behind them, a large projector screen displays a bar graph showing public opinion on AI regulation over time.

Vector Graphic: A flowchart on a digital backdrop. The flow starts with “AI Content Creation” leading to “Google’s Watermark” with a question mark. Branches lead to “Screenshot & Share without Watermark”, “YouTube Video Release”, and “Regulation by Congress”. Each branch has symbols representing the respective steps, such as a camera for screenshotting and a video icon for YouTube release.

“A strong horizontal image,” as this image is going to be the heading image of the story. –ar 3:2