Illustration by Matthew Brodsky

Deep fakes and AI are reinventing seeing and believing

Technological trends and tricks have infiltrated every aspect of our daily lives from politics to religion to pop culture.

Published: May 1, 2023


The Pope is wearing a Balenciaga jacket and chain, Taylor Swift has no sympathy for her “poor” fans who can’t afford her tickets, and the Republican National Committee has already begun to produce AI-generated election ads targeting President Biden’s campaign. 

Welcome to 2023. Artificial Intelligence and deep fakes have been surfacing across the internet for several years now. But as technology advances, it has become harder and harder to detect what’s real and what has been fabricated.

At the same time, the public’s trust in the government has plummeted over the past three years, which has technology researcher Sucheta Lahiri worried that artificial intelligence could detrimentally affect parts of society.

“These AI technologies are playing with the trust of the people and somewhere there is a conscious effort in tapping into the cultures where people trust more easily,” said Lahiri, a fourth-year doctoral candidate at Syracuse University’s School of Information Studies.

The creators of deep fakes and AI-generated audio have injected this technology into several facets of everyday life. Pop culture deep fakes can have more of a comedic effect than a harmful one. But when politics and religion come into play, these same technologies can have a significant impact on people’s decisions and public opinion.

Politics

Before voters enter the polling booth on Election Day, they are inundated with TV ads, social media posts and text alerts trying to influence their vote. In recent election cycles, that barrage of pitches has become even harder to vet as technology makes disinformation more difficult to sift out.

Mark Schmeller, associate professor of history at the Maxwell School of Citizenship and Public Affairs, said he is concerned about the vulnerability of what he dubs “low-information voters.”

These voters are less likely to educate themselves on the current political landscape, Schmeller said. It’s their vote that may be easily swayed by a 30-second AI-generated clip of President Biden saying something nonsensical. 

“It’s the people who are less motivated to seek out information about politics, and consequently, are more susceptible to become swing voters,” Schmeller said. “Their impressions of politics and how the system works are very vague. So their decisions are really mostly driven by emotion, or maybe the personality of the candidate.” 

Robert Murrett, deputy director of SU’s Institute for Security Policy and Law, points to the 2016 election as a prime example of how AI can alter the state of politics. During that election, Russia disseminated disinformation through AI-generated media in an effort to sway voters, using social media and television to spread its messages and manipulate public opinion with modern technology.

“I think the 2016 election was probably the biggest inflection point and the Russians were successful in using fakes,” Murrett said.

Although Schmeller does consider this technology a threat to public opinion, he acknowledges that it’s not the first time the truth has been jeopardized.

“There’s a long history, certainly in American culture, of fake news, hoaxes and just basic misinformation,” Schmeller said.

Hamid Ekbia, director of the Autonomous Systems Policy Institute, said “bad actors can use [AI and deep fakes] to target vulnerable populations who are already at risk of bias and discrimination.”

Ekbia said the upcoming 2024 presidential election will likely be littered with deep fakes and AI-generated photos and audio of candidates.

“There’s every indication that falsehoods and lies are going to start to be spread during the election,” Ekbia said, adding that he hopes that as technology advances, so does media literacy and the number of those who seek out truthful information.

Religion

Unlike politics, religious beliefs depend almost entirely on faith and trust. No Gallup polls indicate which God grants the most miracles, or what denomination sends the most followers to an afterlife. 

“When it comes to religion, people already have a higher level of trust,” the iSchool’s Lahiri said. “So if religion is taken as a medium to disseminate this technology, it’s scary to think about.” 

Ekbia refers to this modern phenomenon as the “unraveling of sanctity.” He compares technology’s impact on religion to the advent of publishing centuries ago. When print media arrived, people could interpret the Bible individually rather than learn about religion through word of mouth.

To Ekbia, the spread of deep fakes and AI-generated media is not much different from this age-old cycle; it is merely taking a new form.

Ekbia’s home country of Iran claims to be deploying AI technologies that can detect violators of religious codes, an effort to combat uprisings among the country’s youth. But Ekbia sees this as another source of disinformation. 

“More often than not, the claims that are made about the technical capabilities of these machines are hyped up, they are false,” Ekbia said. “So that’s another kind of falsehood. Unfortunately, this time, not by users, but by the builders of the system.”

As these technologies advance, Ekbia worries that their threat alone will affect people both spiritually and politically, even if the tools themselves hold no substantial power. He cites the recent controversy over a video of the Dalai Lama having inappropriate relations with a young boy, which circulated on social media and garnered millions of views on YouTube.

The Dalai Lama could have claimed the video was a deep fake to spare himself the fall from grace, Ekbia points out. The widespread use of deep fakes has opened the door to deceiving an audience without deploying any technology at all. Although this is unsettling to devout religious followers, Ekbia sees it as an opportunity to reveal the true sanctity of all human beings. 

“I think there’s something, frankly, nice about this,” Ekbia said. “But on the other hand, if it turns into falsehoods and fake information, none of us should celebrate it.” 

Pop culture

No different from American politics or centuries-old religions, the spotlight that public figures exist under continues to broaden with the advancement of technology. 

SIDEARM Sports chief operating officer Michael Clarke said there are ways pop culture has benefited from artificial intelligence, such as bringing back the adored rapper Tupac Shakur for a concert.

Clarke said deep fakes can be effective “if you wanted to reignite nostalgia with some particular artists, or if you wanted to re-deliver a message from someone who’s no longer here to do that.”

He mentioned the joy brought to fans at the 2012 Coachella music festival when a hologram of the late Tupac Shakur appeared on stage alongside Dr. Dre and Snoop Dogg to perform his song, “Hail Mary,” as the final act of the duo’s set.

“He was right there performing again, and it brought joy to a lot of people,” Clarke said. “It might be a frivolous silly thing, but it did no harm.”

Clarke said that being able to preserve cultural icons of our time through AI or deep fakes has introduced a new way to document history while evoking more emotion than most written words could.

This technology has already opened doors for growing human connections through AI and virtual influencers. Miquela Sousa, an AI-generated robot influencer, emerged in 2016 and has since garnered the attention of her 2.8 million Instagram followers. In 2018, she was dubbed one of Time Magazine’s 25 most influential people on the internet.

Clarke said the idea of generating “a completely new persona of a human that is used in a beneficial way for society, creating a fake person that doesn’t exist yet” could force people to reimagine the capabilities of humans. Creating an ethical model persona could encourage people to look inward and emulate that behavior. 

There are, of course, harmful ways AI can affect public figures. Taylor Swift is one of many celebrities who have fallen victim to AI-generated audio. Fans, or “Swifties,” have been left to decipher whether the pop star is calling them poor for not having the funds to see her show live, or whether she ever actually called up Kim Kardashian during a feud. 

While Taylor Swift will survive pop culture’s sudden fascination with making her say outrageous things on the internet, she is still most certainly affected by disinformation.



It’s no secret that technology is changing the world rapidly, and history shows that cycles of misinformation and deception always find new forms. The problem is not, and never has been, the technology itself; it’s the way we use it.

“Developments in technology are not good or bad. It’s just what people do with them and AI is probably one of the most challenging and current examples,” said the Institute for Security Policy and Law’s Robert Murrett.