Taking Disinformation to Court

Legal experts say traditional legal remedies like libel laws will have a limited effect on misinformation.

Published: May 1, 2023

Digital disinformation has evolved faster than the law can catch up, furrowing the brows of experts looking to fill the legal gaps that allow platforms to spread misleading information and go unpunished.

We talked to some of those experts about the current state of the law on deceptive content. 

Here’s what we learned.

What does the law say about disinformation?

The spread of false or misleading information on social media platforms and media websites often gets a pass from the U.S. legal system, as courts have been reluctant to curb expression in an environment that often favors unconstrained speech under the First Amendment.

University of Baltimore law expert Eric Easton said that traditional remedies in cases where disinformation harms a person’s reputation — known as libel — could offer a route for legal action. If false, defamatory information is included in an article or social media post, libel laws may apply. Other legal remedies, like invasion of privacy or intentional infliction of emotional distress, could also apply depending upon the circumstance, he explained. 

Easton warned that there are limitations to the effectiveness of using traditional remedies for misinformation. 

“To the extent that this information is expressed as opinion, it can be difficult to invoke those torts (meaning laws),” he said. “It’s not that the court can’t see through the words of opinion. It can, but it’s reluctant to, and it’s a much simpler proposition if false information is stated as factual.”

Opinions, Easton said, are protected speech and are harder to prove false. If someone decides to tweet that they hate a singer, it may hurt the celebrity’s feelings or even harm their career, but it’s strictly an opinion and cannot be grounds for a defamation suit. If that person tried to start a rumor that the singer abused her children, then the singer might have grounds for a case.

Yet a successful suit still isn’t guaranteed unless the singer shows that her career suffered as a result. Easton pointed to the Supreme Court’s 2012 ruling in United States v. Alvarez. A man elected to a district water board in California told attendees at a meeting that he was a U.S. Marine veteran who had been injured repeatedly and received the Congressional Medal of Honor, among other grandiose statements that turned out to be lies.

The court ruled that even when speech is shown to be a lie, if the plaintiff cannot show explicit harm, then the lie itself is given a pass. This adds a layer of protection and complexity for those perpetuating disinformation: plaintiffs must prove the disinformation is causing harm. That’s difficult when we’re inundated by deceptive content every day.

What reforms to current laws might help?

Aside from traditional remedies that can be used to pursue legal action against disinformation, Easton said there are other strategies for combating disinformation that have not been fully developed.

“Probably the most effective would be a repeal or revision of Section 230 of the Communications Decency Act,” Easton said. 

Section 230 gives third-party websites like Facebook or Twitter immunity from being sued over misinformation posted on their platforms. If the section were revised or repealed, Congress could withdraw that immunity from social media platforms that are enabling the spread of disinformation, Easton said.

Another route involves invoking antitrust laws, which seek to prevent monopolies, to weaken social media platforms’ control over the information market and allow for more regulation, Easton said.

Easton foresees a more creative approach to addressing the circulation of false information. 

“I rather suspect that the route that legislation would have to take would be not to directly deal with misinformation or disinformation, but with the algorithms that are used to spread false information in a way that it does the most damage,” he said. “That battle has yet to be fought whether the First Amendment applies to those algorithms or not. My suspicion is, there’s probably a way around the First Amendment to reach those algorithms, but it’s a long way off, even if legislation is enacted.”

What about political disinformation?

“The United States has been reluctant to enact any legislation governing false claims in the political space,” Santa Clara University legal expert David Sloss said.

There are laws in place to regulate disinformation in the commercial space, like those prohibiting false advertising, Sloss said, but so far those laws have not been used to regulate political advertising.

“There’s a real skepticism about the idea of using law to regulate political speech,” he said.  “There’s not really law out there that tries to distinguish between true and false claims when it comes to elections, or politics in general.”
Sloss acknowledged that many people think it’s a good thing that there’s no regulation of political speech. But he believes some regulation is necessary when it comes to checking the validity of the political information citizens use to inform their votes.

“It’s a very delicate task,” Sloss said, “because the last thing you want to do is set up a system where the government decides what’s true and what’s false. Then you end up with something that looks much more like a totalitarian system where the government has its official version of truth.”

Sloss drew up a legislative proposal titled “The National Endowment for Fact-Checking Act” that seeks a “middle ground” where the government could pass legislation that limits the dissemination of disinformation without disrupting the balance of power.

While Sloss hasn’t found a representative to support his proposed statute, he’s continuing to lobby for a path forward. 

“The National Endowment for Fact-Checking Act will help preserve the integrity of American democracy by reducing the electronic amplification of false and misleading claims related to elections and public health,” the proposal reads. “The Act is entirely consistent with the First Amendment because it relies on private companies, not government sanctions, to de-amplify habitual liars.”

Sloss’ act would create a new organization, the National Endowment for Fact-Checking. This nonpartisan group would provide federal funds to fact-checking organizations to help improve their operations and would give quarterly reports to Congress and the Federal Communications Commission identifying institutions, organizations and individuals who repeatedly spread disinformation.

“The statutory scheme nudges companies to de-amplify persistent offenders, but does not compel them to do so,” the act reads. “It also provides incentives for habitual liars to tell the truth, or risk losing their access to large electronic megaphones.”

While Sloss’ act wouldn’t force platforms to remove individuals or companies spreading disinformation, he hopes it would pressure platforms to take the initiative against disinformation spreaders.

“Reporting that kind of information to Congress and the FCC or to some other agency might provide a foundation for the government to be able to act in a more targeted way to deal with individuals who are essentially the main culprits for spreading bad information to large audiences,” Sloss said.

Another possible approach Sloss mentioned would be a revival of the FCC’s Fairness Doctrine, which was abolished under the Reagan Administration in 1987. The purpose of the doctrine was to make broadcast stations cover controversial issues in a balanced way, but it was opposed by reporters who felt it was their responsibility to balance competing narratives and shed light on truth. 

What can we do to become better at spotting misinformation?

Sloss referenced Sweden’s recent national effort to combat disinformation: the Swedish Psychological Defense Agency, created in January 2022, ahead of that September’s general election. The agency seeks to combat “foreign malign information influence activities” and to “strengthen the population’s ability to detect and resist malign influence campaigns and disinformation,” according to its website.

Sweden’s unified education system makes it easier to implement programs that promote digital literacy and the spotting of disinformation. The thousands of different education systems in the U.S. make it difficult to impose a uniform curriculum for protecting citizens against disinformation, Sloss said.

The correct approach to combating disinformation in a legal context is widely disputed among experts, but they agree on one thing: the need for education and awareness.

Experts acknowledge the legal system has some catching up to do in the battle against disinformation. In the meantime, they recommend triple-checking the source of information when scrolling through social media, forwarding suspicious emails to law enforcement instead of clicking the links, and exercising caution when navigating online transactions.
