“Rated false”: Here’s the most interesting new research on fake news and fact-checking
By Denise-Marie Ordway
This roundup comes from our friends at Journalist’s Resource. JR is a
project of the Shorenstein Center on Media, Politics and Public Policy
at the Harvard Kennedy School, and they spend their time examining the
new academic literature in media, social science, and other fields,
summarizing the high points and giving you a point of entry.
Here, JR’s managing editor, Denise-Marie Ordway, sums up some of the
most compelling papers on fake news and fact-checking published in 2019.
(You can also read some of her other roundups focusing on research from
2018 and 2017.)
What better way to start the new year than by learning new things
about how best to battle fake news and other forms of online
misinformation? Below is a sampling of the research published in 2019 —
seven journal articles that examine fake news from multiple angles,
including what makes fact-checking most effective and the potential use
of crowdsourcing to help detect false content on social media.
Because getting good news is also a great way to start 2020, I
included a study that suggests President Donald Trump’s “fake news”
tweets aimed at discrediting news coverage could actually help
journalists. The authors of that paper recommend journalists “engage in a
sort of news jujitsu, turning the negative energy of Trump’s tweets
into a force for creating additional interest in news.”
“Real
solutions for fake news? Measuring the effectiveness of general
warnings and fact-check tags in reducing belief in false stories on
social media”: From Dartmouth College and the University of
Michigan, published in Political Behavior. By Katherine Clayton, Spencer
Blair, Jonathan A. Busam, Samuel Forstner, John Glance, Guy Green, Anna
Kawata, Akhila Kovvuri, Jonathan Martin, Evan Morgan, Morgan Sandhu,
Rachel Sang, Rachel Scholz‑Bright, Austin T. Welch, Andrew G. Wolff,
Amanda Zhou, and Brendan Nyhan.
This study provides several new insights about the most effective
ways to counter fake news on social media. Researchers found that when
fake news headlines were flagged with a tag that says “Rated false,”
people were less likely to accept the headline as accurate than when
headlines carried a “Disputed” tag. They also found that posting a
general warning telling readers to beware of misleading content could
backfire. After seeing a general warning, study participants were less
likely to believe both true headlines and false ones.
The authors note that while their sample of 2,994 U.S. adults isn’t
nationally representative, the feedback they got demonstrates that
online fake news can be countered “with some degree of success.” “The
findings suggest that the specific warnings were more effective because
they reduced belief solely for false headlines and did not create
spillover effects on perceived accuracy of true news,” they write.
“Fighting misinformation on social media using crowdsourced judgments of news source quality”:
From the University of Regina and Massachusetts Institute of
Technology, published in the Proceedings of the National Academy of
Sciences. By Gordon Pennycook and David G. Rand.
It would be time-consuming and expensive to hire crowds of
professional fact-checkers to find and flag all the false content on
social media. But what if the laypeople who use those platforms pitched
in? Could they accurately assess the trustworthiness of news websites,
even if prior research indicates they don’t do a good job judging the
reliability of individual news articles? This research article, which
examines the results of two related experiments with almost 2,000
participants, finds the idea has promise.
“We find remarkably high agreement between fact-checkers and
laypeople,” the authors write. “This agreement is largely driven by both
laypeople and fact-checkers giving very low ratings to hyper-partisan
and fake news sites.”
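To make that comparison concrete, here is a minimal sketch of the kind of check the study describes: average laypeople’s trust ratings for each news domain, then correlate the crowd’s averages with professional fact-checkers’ ratings. The domain names, the scores, and the use of a plain Pearson correlation are illustrative assumptions for this sketch, not the authors’ data or exact method.

```python
# Illustrative sketch only: the domains and scores below are hypothetical,
# not data from Pennycook and Rand's study.
from statistics import mean

# Each layperson rates how much they trust a news domain (1 = not at all, 5 = entirely).
layperson_ratings = {
    "mainstream-outlet.example": [4, 5, 4, 3, 5],
    "hyperpartisan-site.example": [2, 1, 2, 1, 1],
    "fake-news-site.example": [1, 1, 2, 1, 1],
}

# Hypothetical professional fact-checker ratings for the same domains.
factchecker_ratings = {
    "mainstream-outlet.example": 4.6,
    "hyperpartisan-site.example": 1.4,
    "fake-news-site.example": 1.1,
}

def pearson(xs, ys):
    """Pearson correlation, written out to avoid external dependencies."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

domains = sorted(layperson_ratings)
crowd_means = [mean(layperson_ratings[d]) for d in domains]
checker_scores = [factchecker_ratings[d] for d in domains]

print("crowd vs. fact-checker correlation:", round(pearson(crowd_means, checker_scores), 2))
```

In spirit, this is why the authors see promise in crowdsourcing: individual ratings are noisy, but pooled ratings of familiar domains tracked the professionals’ judgments closely.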
The authors note that in order to accurately assess sites, however,
people need to be familiar with them. When news sites are new or
unfamiliar, they’re likely to be rated as unreliable, the authors
explain. Their analysis also finds that Democrats were better at gauging
the trustworthiness of media organizations than Republicans — their
ratings were more similar to those of professional fact checkers.
Republicans were more distrusting of mainstream news organizations.
“All
the president’s tweets: Effects of exposure to Trump’s ‘fake news’
accusations on perceptions of journalists, news stories, and issue
evaluation”: From Virginia Tech and EAB, published in Mass
Communication and Society. By Daniel J. Tamul, Adrienne Holz Ivory,
Jessica Hotter, and Jordan Wolf.
When Trump turns to Twitter to accuse legitimate news outlets of
being “fake news,” does the public’s view of journalists change? Are
people who read his tweets less likely to believe news coverage? To
investigate such questions, researchers conducted two studies, during
which they showed some participants a sampling of the president’s “fake
news” tweets and asked them to read a news story.
Here’s what the researchers learned: The more tweets people chose to
read, the greater their intent to read more news in the future. As
participants read more tweets, their assessments of news stories’ and
journalists’ credibility also rose. “If anything, we can conclude that
Trump’s tweets about fake news drive greater interest in news more
generally,” the authors write.
The authors’ findings, however, cannot be generalized beyond the
individuals who participated in the two studies — 331 people for the
first study and then 1,588 for the second, more than half of whom were
undergraduate students.
Based on their findings, the researchers offer a few suggestions for
journalists. “In the short term,” they write, “if journalists can push
out stories to social media feeds immediately after Trump or others
tweet about legitimate news as being ‘fake news,’ then practitioners may
disarm Trump’s toxic rhetoric and even enhance the perceived
credibility of and demand for their own work. Using hashtags, quickly
posting stories in response to Trump, and replying directly to him may
also tether news accounts to the tweets in social media feeds.”
“Who shared it?: Deciding what news to trust on social media”:
From NORC at the University of Chicago and the American Press
Institute, published in Digital Journalism. By David Sterrett, Dan
Malato, Jennifer Benz, Liz Kantor, Trevor Tompson, Tom Rosenstiel, Jeff
Sonderman, and Kevin Loker.
This study looks at whether news outlets or public figures have a
greater influence on people’s perception of a news article’s
trustworthiness. The findings suggest that when a public figure such as
Oprah Winfrey or Dr. Oz shares a news article on social media, people’s
attitude toward the article is linked to how much they trust the public
figure. A news outlet’s reputation appears to have far less impact.
In fact, researchers found mixed evidence that audiences will be more
likely to trust and engage with news if it comes from a reputable news
outlet than if it comes from a fake news website. The authors write that
“if people do not know a [news outlet] source, they approach its
information similarly to how they would a [news outlet] source they know
and trust.”
The authors note that the conditions under which they conducted the
study were somewhat different from those that participants would likely
encounter in real life. Researchers asked a nationally representative
sample of 1,489 adults to read and answer questions about a simulated
Facebook post that focused on a news article, which appeared to have
been shared by one of eight public figures. In real life, these adults
might have responded differently had they spotted such a post on their
personal Facebook feeds, the authors explain.
Still, the findings provide new insights on how people interpret and
engage with news. “For news organizations who often rely on the strength
of their brands to maintain trust in their audience, this study
suggests that how people perceive their reporting on social media may
have little to do with that brand,” the authors write. “A greater
presence or role for individual journalists on social networks may help
them boost trust in the content they create and share.”
“Trends in the diffusion of misinformation on social media”:
From New York University and Stanford University, published in Research
and Politics. By Hunt Allcott, Matthew Gentzkow, and Chuan Yu.
This paper looks at changes in the volume of misinformation
circulating on social media. The gist: Since 2016, interactions with
false content on Facebook have dropped dramatically but have risen on
Twitter. Still, lots of people continue to click on, comment on, like
and share misinformation.
The researchers looked at how often the public interacted with
stories from 569 fake news websites that appeared on Facebook and
Twitter between January 2015 and July 2018. They found that Facebook
engagements fell from about 160 million a month in late 2016 to about 60
million a month in mid-2018. On Twitter, material from fake news sites
was shared about 4 million times a month in late 2016 and grew to about 5
million shares a month in mid-2018.
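Taken at face value, those rounded figures work out to roughly a 60 percent decline on Facebook and a 25 percent increase on Twitter. The quick check below uses the approximate monthly numbers cited above, not the paper’s underlying engagement series.

```python
# Back-of-the-envelope check on the approximate monthly figures cited above.
def pct_change(before, after):
    """Percentage change from `before` to `after`."""
    return (after - before) / before * 100

facebook = pct_change(160_000_000, 60_000_000)   # late 2016 -> mid-2018 Facebook engagements
twitter = pct_change(4_000_000, 5_000_000)       # late 2016 -> mid-2018 Twitter shares

print(f"Facebook engagements: {facebook:.0f}% change")  # roughly -62%
print(f"Twitter shares: {twitter:+.0f}% change")        # roughly +25%
```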
The authors write that the evidence is “consistent with the view that
the overall magnitude of the misinformation problem may have declined,
possibly due to changes to the Facebook platform following the 2016
election.”
“Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning”: From Yale University, published in Cognition. By Gordon Pennycook and David G. Rand.
This study looks at the cognitive mechanisms behind belief in fake
news by investigating whether fake news has gained traction because of
political partisanship or because some people lack strong reasoning
skills. A key finding: Adults who performed better on a cognitive test
were better able to detect fake news, regardless of their political
affiliation or education levels and whether the headlines they read were
pro-Democrat, pro-Republican or politically neutral. Across two studies
conducted with 3,446 participants, the evidence suggests that
“susceptibility to fake news is driven more by lazy thinking than it is
by partisan bias per se,” the authors write.
The authors also discovered that study participants who supported
Trump had a weaker capacity for differentiating between real and fake
news than did those who supported 2016 presidential candidate Hillary
Clinton. The authors write that they are not sure why that is, but it
might explain why fake news that benefited Republicans or harmed
Democrats seemed more common before the 2016 national election.
“Fact-checking: A meta-analysis of what works and for whom”:
From Northwestern University, University of Haifa, and Temple
University, published in Political Communication. By Nathan Walter,
Jonathan Cohen, R. Lance Holbert, and Yasmin Morag.
Even as the number of fact-checking outlets continues to grow
globally, individual studies of their impact on misinformation have
provided contradictory results. To better understand whether
fact-checking is an effective means of correcting political
misinformation, scholars from three universities teamed up to synthesize
the findings of 30 studies published or released between 2013 and 2018.
Their analysis reveals that the success of fact-checking efforts varies
according to a number of factors.
The resulting paper offers numerous insights on when and how fact-checking succeeds or fails. Some of the big takeaways:
— Fact-checking messages that feature graphical elements such as
so-called “truth scales” tended to be less effective in correcting
misinformation than those that did not. The authors point out that “the
inclusion of graphical elements appears to backfire and attenuate
correction of misinformation.”
— Fact-checkers were more effective when they tried to correct an
entire statement rather than parts of one. Also, according to the
analysis, “fact-checking effects were significantly weaker for
campaign-related statements.”
— Fact-checking that refutes ideas that contradict someone’s personal
ideology was more effective than fact-checking aimed at debunking ideas
that match someone’s personal ideology.
— Simple messages were more effective. “As a whole, lexical
complexity appears to detract from fact-checking efforts,” the authors
explain.
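For readers curious about the mechanics, “synthesizing the findings” of 30 studies typically means pooling effect sizes while weighting each study by its precision. The sketch below shows basic inverse-variance pooling with invented effect sizes and standard errors; it illustrates the general technique, not the authors’ data or their exact model.

```python
# Minimal sketch of how a meta-analysis pools results across studies.
# The effect sizes and standard errors are invented for illustration;
# they are not values from Walter et al.'s analysis.

studies = [
    # (effect size, standard error) for hypothetical fact-checking studies
    (0.30, 0.10),
    (0.15, 0.08),
    (0.45, 0.12),
    (0.05, 0.09),
]

# Inverse-variance pooling: weight each study by 1 / SE^2, so more precise
# studies count for more in the combined estimate.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")
```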
Source: https://www.niemanlab.org/2020/01/rated-false-heres-the-most-interesting-new-research-on-fake-news-and-fact-checking/