Reigniting Trust

The Power of False Information

Jieun Shin studies why rumors, fake news and other misleading information captivate America.

Most people have wondered at one time or another why false information persists on the internet. Despite warnings, Americans continue to share doctored photos, unsubstantiated rumors and half-truths. Why?

Jieun Shin, Ph.D., an assistant professor in the Department of Telecommunication at the University of Florida, has been addressing that question for almost a decade. Conducting research using Twitter data back in 2012—before the term “fake news” widely entered the American lexicon—Shin studied how untrue stories spread digitally during the presidential contest between Mitt Romney and Barack Obama. Her conclusion is startling.

“The truth, the facts, do not interest people,” she said. “When it is not verified, when it is out there and has not been confirmed or it is difficult to verify, they go loud and share constantly.”

‘A Collective Work’

Shin, who now works with the university’s Consortium on Trust in Media and Technology, found her research especially relevant to the 2020 election, a contest that generated a torrent of misinformation.

In 2012, Shin looked at two rumors that dogged the Romney campaign: that the candidate had a financial stake in a company that created voting machines and that a slogan used by the campaign was once used by the Ku Klux Klan. Both were untrue. Neither was addressed by the campaign. And both persisted.

By contrast, a story saying that Obama’s daughter traveled to Mexico under the protection of two dozen Secret Service agents received a very different reception. That story was true. It was confirmed by the campaign. And interest in sharing the story dissipated quickly.

Shin’s conclusion is that false information—often tantalizing in its implications—holds an allure that simply cannot be replicated by verified stories, even if those stories are revelatory and important.

“Fake news and false information are inherently interesting,” she said. “It is almost as if there is unlimited imagination. People can edit it. They can add another ingredient. It is almost like a collective work.”

Other studies have shown similar results. Research at MIT, for example, showed that false stories are 70 percent more likely to be retweeted than those that are true. It also found that it took true stories six times longer than false stories to reach a group of 1,500 people.

Why People Share

People share material, true or false, because it enhances their identity and their standing, Shin said. Often, it shows that they belong to a group, such as a political party.

“When you share, you give an impression that you are in the know,” she said. “You want to give other people the impression that you are a news junkie, and that you read a lot. It’s as though they are saying, ‘I found something—and I am the first one to let you know.’”

Across all this research, Shin said, scholars have found very little that can outright stop the spread of misinformation. But there are indications that simply urging people to think before they share can make an impact.

“There is some research, which I did not do, that stirs some hope,” she said. “If you nudge people before they share and say, ‘please think about the consequences, think about the accuracy level of the content,’ the research shows that it improves the quality of sharing. They are more likely to share only verified information.”

Platforms and Algorithms

Of course, false information is not spread only by people. Technology plays a role too, particularly the algorithms behind social media platforms and recommendation engines, which spread information with little attention to the content itself.

To study the issue, Shin looked at Amazon recommendations for books about vaccination, a hot topic in recent years as anti-vaccination sentiment took hold in many American communities. That sentiment occasionally led to outbreaks of diseases long since eliminated in the U.S., including a now-famous measles outbreak traced to visitors at Disneyland. The anti-vaccination movement also prompted changes to several state laws.

Shin found that anti-vaccination books were far more prevalent on Amazon than those advocating vaccination and that, accordingly, the online store was recommending anti-vaccine books based on customers’ purchase patterns.

“Of course, the algorithm does not know,” she said. “It only knows that people who bought this anti-vaccine book bought a bunch of other anti-vaccine books. They just keep recommending anti-vaccine books, which aggravates the problem.”
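The dynamic Shin describes is essentially item-based collaborative filtering: recommend whatever is most often bought alongside the current item, with no notion of whether its contents are true. The sketch below is purely illustrative, not Amazon’s actual system; the order data and the recommend function are hypothetical.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories; each set is one customer's order.
# Illustrative data only, not drawn from Shin's study or Amazon.
orders = [
    {"anti_vax_book_a", "anti_vax_book_b"},
    {"anti_vax_book_a", "anti_vax_book_c"},
    {"anti_vax_book_b", "anti_vax_book_c"},
    {"anti_vax_book_a", "anti_vax_book_b", "anti_vax_book_c"},
    {"pro_vax_book_x", "anti_vax_book_a"},
]

# Count how often each pair of books appears in the same order.
co_purchases = Counter()
for order in orders:
    for pair in combinations(sorted(order), 2):
        co_purchases[pair] += 1

def recommend(book, top_n=2):
    """Return the books most often bought together with `book`.

    The score is pure co-occurrence: the engine never evaluates
    whether a book's claims are accurate, only that buyers of one
    title tend to buy the other.
    """
    scores = Counter()
    for (a, b), count in co_purchases.items():
        if a == book:
            scores[b] += count
        elif b == book:
            scores[a] += count
    return [title for title, _ in scores.most_common(top_n)]

# A customer viewing one anti-vaccine book is shown more of the same,
# the self-reinforcing loop Shin describes.
print(recommend("anti_vax_book_a"))  # ['anti_vax_book_b', 'anti_vax_book_c']
```

Because the only signal is co-purchase popularity, a catalog skewed toward anti-vaccine titles skews the recommendations the same way, which is why Shin says the loop aggravates the problem.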

Building on that work, Shin is now investigating which social media platforms are associated with the highest levels of misinformation.

“We also want to look at whether making an effort makes a difference,” she said. “For instance, if Twitter says they are going to spend $2 million to create an advisory board and partner with third-party fact checkers to curb misinformation, would it change the audience’s perception? Will it increase trust?”

Asked whether technology might one day help spread more credible information and increase trust in media, Shin said much depends on how social media platforms evolve.

“The algorithms have both potentials,” she said. “If the algorithms get smarter and become more socially responsible, they can help. Otherwise, they can make things worse.”