The Jan. 6 attack on the Capitol proved that false information spread online can have real-world consequences.
The deadly riot, which followed weeks of disinformation about the 2020 election spread by former President Donald Trump, resulted in his suspension or outright ban from social networks including Twitter (TWTR), Facebook (FB), and Google (GOOG, GOOGL). Since then, according to Zignal Labs, disinformation about the election has fallen 73%.
Still, disinformation (purposely false information) and misinformation (false information that someone spreads without knowing it is false) won’t disappear now that Trump is out of office or off social media. And the continued propagation of such false information can still prove incredibly dangerous.
“My views are it is the next epidemic to solve after we figure out [the] coronavirus,” Ari Lightman, professor of digital media and marketing at Carnegie Mellon University, told Yahoo Finance.
But stopping, or at least slowing, the spread of false information will take far more than banning even social media’s most prominent users.
How misinformation and disinformation spread online
“Disinformation predates Trump,” Carnegie Mellon University Institute for Software Research professor Kathleen Carley told Yahoo Finance. “It goes back to the beginning of humankind. So it’s not like him being out of office will get rid of disinformation entirely.”
But the internet and social media have helped make the spread of false information easier than ever before.
According to a 2018 MIT study, false news on Twitter “spreads farther, faster, deeper, and more broadly” than the truth. It would be easy to blame Twitter and its ilk for making it so simple to share information with millions of other users, but that’s not exactly right.
“It is not just the fault of technology,” Sinan Aral, David Austin Professor of Management at MIT and one of the study’s authors, told Yahoo Finance.
“It is the combination of technology and its design combined with human cognitive instincts that together create the outcomes we see. And so there are responsibilities for the tech companies in their design, there are responsibilities for regulators, and there are responsibilities, as well, for users.”
So why do people spread disinformation?
Some want to discredit people, or to show how much they hate something by spreading lies about it. Others are merely in it for fun.
Take the case of Adam Rahuba, an internet troll who, The Washington Post reported in July, regularly posted about fictitious Antifa events to antagonize and draw armed right-wing counterprotesters to locations including Gettysburg National Military Park. During one such incident, a person accidentally shot himself.
Not all false information is spread with malicious intent, though. Misinformation can often be spread by users with a genuine desire to help others. “There’s a lot of information that’s misconstrued, misinterpreted,” Lightman said.
“Even people going out there to try to do societal benefit are misled. We have to figure out how to assess this, because people are making decisions based on bad information that are going to lead to societal harm,” he added.
Tackling the spread
With increased awareness that online disinformation and misinformation can lead to real-world dangers, the question remains: How do we stop the spread? Unfortunately, there’s no easy answer.
“It’s going to take a whole army of researchers, technologists, academics, the platforms, news agencies, and journalists...to figure this out,” Lightman said.
MIT’s Aral, meanwhile, says social media platforms need to double down on labeling content, indicating where it originated and what sources its claims rely on. What’s more, he said, tech companies could introduce prompts that get users to question what they’re reading.
“So they change their mindset and suddenly they’re critically evaluating what they’re reading, which has been shown to reduce the likelihood of believing and sharing false news,” he explained.
To their credit, both Facebook and Twitter have made efforts to point out false information using prompts that appear above or below posts that are proven false — something both companies did in the events leading up to the Jan. 6 attack on the Capitol.
But, according to a study outlining what’s called the “implied truth effect,” posting warnings alongside fake news can actually backfire, leading people to assume that stories without warnings are true.
Carley said trusted sources and authorities also need to take a page from the trolls and adversaries spreading disinformation and misinformation to better combat them.
“One of the reasons some of the disinformation stories’ spread is so big is that there were communities around the disinformation source that were willing to repeat it, and act like megaphones. We need those same kinds of communities that are trusted but around credible sources of information,” Carley said.
Social networks and internet platforms could also introduce brief delays before a piece of content appears in users’ newsfeeds, preventing people from sharing it before understanding its full context.
We shouldn't have to say this, but you should read an article before you Tweet it. https://t.co/Apr9vZb2iI
So, we’ve been prompting some people to do exactly that. Here’s what we’ve learned so far. ⤵️
— Twitter Comms (@TwitterComms) September 24, 2020
Twitter has already taken such a step, introducing a prompt that asks users to read an article before retweeting it. According to the company, people who saw the notification opened articles 40% more often, and there was a 33% increase in people reading articles before retweeting them.
That would also prevent people from lazily retweeting or sharing posts, something a 2019 study found is one of the reasons people fall for false information.
But it’s also easy to bypass the prompt by ignoring it and quickly tapping retweet or quote tweeting.
The best defense, then, may simply be educating people to recognize truth rather than what they want to be true. Rebuilding trust in government and other essential institutions could go a long way there.
“We all have to agree with what truth is, what constitutes truth,” Lightman said. “Otherwise things spiral out of control very quickly.”
Got a tip? Email Daniel Howley at email@example.com or via encrypted mail at firstname.lastname@example.org, and follow him on Twitter at @DanielHowley.