An Old Problem Amplified
The roots of disinformation are ancient. But new technologies have enabled it to spread faster, giving rise to what experts are calling the “infodemic.”
A false report of a campus shooting spread like wildfire through Syracuse University in April, just as a similar claim panicked West Hill High School in Onondaga in March. Russian agents use TikTok to spread misleading propaganda about their country’s invasion of Ukraine — a strategy the Russians honed by dividing Americans on topics such as gender and race. Spam accounts for nearly 50% of all emails, with more than 88 billion phishing, scam, and other fraudulent emails sent each day in 2022.
At the dawn of the 21st century, many experts saw the internet as a democratizing force that would arm citizens with crucial information and compel totalitarian governments around the world to give voters power or face extinction. A quarter century later, we now know that the internet is also the perfect delivery system for an age-old disease — misinformation.
The internet, mobile phones, social media and the other communication tools we take for granted have enormous power for good, no doubt. But they also have serious side effects that often erase the benefits, with studies finding that fake news posts spread faster than fact-checked news does online.
Experts say the problem of misinformation isn’t new. But in the same way airlines create new challenges for epidemiologists trying to stop the spread of diseases like COVID, the internet creates the perfect delivery system for amplifying propaganda and other false and misleading messages, creating, as the World Health Organization put it, an infodemic.
Combating digital deception is hard in part because humans struggle to separate fact from fiction, said Joshua Introne, an assistant professor at SU’s School of Information Studies. Those judgments are tied to our larger belief systems, which misinformation is skilled at manipulating, Introne said.
Another problem is information overload and the proliferation of channels to receive that information, Introne said. The number of professional fact-checkers, journalists and editors pales in comparison to the number of individuals and groups who play a role in the creation and dissemination of misinformation in the digital age.
But disinformation and “fake news” existed well before the internet, with roots dating to Roman times, according to a report from the International Center for Journalists. Octavian used a smear campaign against his rival, Mark Antony, to convince the Roman Senate to wage war on Antony and his lover, Cleopatra.
The propaganda campaign took the form of “short, sharp slogans written upon coins in the style of archaic Tweets,” Izabella Kaminska wrote in the Financial Times. “These slogans painted Antony as a womanizer and a drunk, implying he had become Cleopatra’s puppet, having been corrupted by his affair with her.”
The campaign had little basis in truth, but it worked: Antony and Cleopatra killed themselves as Octavian’s forces closed in, and Octavian would become Caesar Augustus. “Fake news had allowed Octavian to hack the republican system once and for all,” Kaminska observed.
The invention of the Gutenberg printing press around 1440 expanded the scope of falsified news. By 1835, the U.S. had a full-blown fake news scandal, the Great Moon Hoax, in which The New York Sun published six articles detailing the discovery of life on the moon, complete with exaggerated drawings of alien life forms.
Leticia Bode, a professor of communication, culture, and technology at Georgetown University, agrees that misinformation has deep roots. But she also argues that contemporary distribution channels have added to the damage misinformation causes.
“I think, part of the reason we feel like misinformation is such a problem right now is that our media ecosystem makes misinformation much easier to see, and makes us much more aware of it,” she said.
In the past, people received information from books, newspapers, magazines and other published entities. The “gatekeepers” who worked for such entities had control of the content, for better or worse, according to scholars such as former Newhouse professor Pamela Shoemaker and Central Michigan University’s Elina Erzikova.
Social media has given a platform to anyone who wants to spread false information or unknowingly transmit damaging information. People with extreme views can find community on the platforms and reach a bigger audience than at any other time in history, according to groups like the Anti-Defamation League.
As much as technology can aid the spread of disinformation, it can do the same for legitimate information. Phones give us the ability to check the validity of information at any time.
“In social media, you’re seeing side-by-side journalism from reputable organizations, and citizen journalism that may or may not be reputable, and it makes it hard to make sense of them all,” Bode said. “But it’s also easy to lose track of the fact that we have the entire world of knowledge in our hands and in our pockets on a daily basis. It’s amazing that we have the ability to look up information at the drop of a hat.”
Joshua Garland, a researcher at the Center on Narratives, Disinformation and Strategic Influence at Arizona State University, says the speed at which information spreads in 2023 makes the problem more difficult to combat.
“And so is it worse now?” he asked. “I’m not sure. But it certainly spreads a lot faster. It’s one of those things where you’ve given a voice to everybody. Even the most random person in the world, that’s completely unvetted, can have a voice for millions and millions and millions of people.”
Part of what is different about social media is how much it has monetized clicks, resulting in “clickbait” and “sensationalized headlines,” directly tied to human psychology, Garland said.
“There are a lot of different things about our internal psychology that people have discovered that make it really easy to twist a headline (and) … get people outraged,” Garland said.
Whether sensationalized headlines count as misinformation is a murkier question, Garland said. Though some elements may be exaggerated or even false, a clickbait story often still contains true information. Such tactics may not be good for our information environment, he said, but to equate clickbait with a lie may go too far.
Technologists hope that artificial intelligence will make it easier to identify and flag misinformation, but AI is not yet advanced enough, Garland said.
“It’s really, really easy to figure out if a social media post is about a particular topic,” he said. “It’s really, really hard, and not currently possible, given the current AI, to validate the claim that’s being made.”
More proactive organizations can complement AI with human fact-checkers, but it’s a “band-aid solution,” Introne said.
“I agree that it’s not a solution to disinformation,” Garland responded. “But given the current computational technology, it is the best we can do.”
Introne says solutions might emerge from “inoculation theory” — the idea that exposing people to a weakened form of a misleading claim can help inoculate them against it if they encounter it online. In this way, it works like a vaccine, exposing someone to the disease in a controlled setting to prepare the body for the real thing. But Introne said such approaches will only work for a “subset of the population,” those who want to be “vaccinated.”
Although it is clear that social media has fueled false information, there is an emerging view that the root problem behind misinformation stems from various psychological and social phenomena, one of which Introne calls “belief dynamics.”
“There is a sort of factionalism around belief systems, people grouping together, and they might be grouping together in order to establish and maintain social identity,” he said. “Because we all feel like we’re swimming in this really complicated environment, people grab onto whatever identity they can. And along with that identity comes a set of beliefs, a set of narratives.”
Introne explained that misinformation is the result of deep political and social division and a lack of human connection.
“Maybe we need to figure out how to build tools that will let us get close to one another in ways that are as impactful as meeting someone face to face,” Introne said. “I think that there needs to be rich interactions rather than the distilled interactions that go on in social media.”
Introne also draws insights from “symbolic operational paradox,” a theory in political science holding that people stand behind beliefs not because they truly believe them, but because the beliefs act as “badges” that signal who they are.
Emily Thorson, a professor of political science at SU, says identifying misinformation is an important first step. Once that step is achieved, she said people should share and like more accurate messages on social media to counter the misinformation.
“One thing people can and should do is speak up on social media when you see somebody saying something that’s not true,” she said. “You’re not necessarily doing it for the benefit of the person who you’re responding to – they might not be willing to change their belief. What you can do is help ensure that people reading the conversation don’t also accept the misinformation as true.”