Understanding and Reducing the Spread of Misinformation Online

https://psyarxiv.com/3n9u8/

Understanding and reducing the spread of misinformation online
Authors: Gordon Pennycook, Ziv Epstein, Mohsen Mosleh, Antonio Arechar, Dean Eckles, David Rand
Created on November 13, 2019

Supplemental Materials: https://osf.io/p6u8k/

Accuracy prompts decrease sharing of false and misleading news content
Contributors: Gordon Pennycook, David Rand
Date created: 2019-08-29 12:07 AM

 
Twitter thread by David Rand:

🚨Working paper alert!🚨 "Understanding and reducing the spread of misinformation online"

We introduce a behavioral intervention (accuracy salience) & show in surveys+field exp w >5k Twitter users that it increases quality of news sharing https://t.co/GYpg7jGtNk

1/ pic.twitter.com/VilwqQkbxD

— David G. Rand (@DG_Rand) November 17, 2019

We first ask why people share misinformation. Is it because they simply can't assess the accuracy of information?

Probably not!

When asked about accuracy, MTurkers rate true headlines much higher than false. But when asked if they'd share online, veracity has little impact
2/ pic.twitter.com/OIJ04h2Fxb

— David G. Rand (@DG_Rand) November 17, 2019
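One way to make this disconnect concrete is a simple "discernment" score: the gap between ratings of true and false headlines, computed separately for the accuracy question and the sharing question. Below is a minimal sketch with made-up numbers; the 1-4 scale and all values are illustrative assumptions, not data or analysis code from the paper.

```python
from statistics import mean

# Hypothetical per-headline mean ratings on an assumed 1-4 scale.
# Keys: (question condition, headline veracity) -> headline-level means.
ratings = {
    ("accuracy", "true"):  [3.1, 2.9, 3.3],
    ("accuracy", "false"): [1.6, 1.8, 1.5],
    ("sharing",  "true"):  [2.0, 2.2, 1.9],
    ("sharing",  "false"): [1.8, 2.0, 1.7],
}

def discernment(condition):
    """Mean rating of true headlines minus mean rating of false ones."""
    return mean(ratings[(condition, "true")]) - mean(ratings[(condition, "false")])

for condition in ("accuracy", "sharing"):
    print(f"{condition}: discernment = {discernment(condition):.2f}")
# With these placeholder numbers, discernment is large when people judge
# accuracy and near zero when they report sharing intentions, mirroring
# the pattern described in the tweet.
```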

So why this disconnect between accuracy judgments and sharing intentions? Is it that we are in a "post-truth world" and people no longer *care* much about accuracy?

Probably not!

Those same Turkers overwhelmingly say that it's important to only share accurate information.
3/ pic.twitter.com/W1UA6VGSBd

— David G. Rand (@DG_Rand) November 17, 2019

We propose the answer is *distraction*: this accuracy motive is overshadowed in social media context by other motives, e.g. attracting/pleasing followers or signaling group membership. This contrasts w post-truth account where people are aware of (non)veracity but share anyway
4/

— David G. Rand (@DG_Rand) November 17, 2019

We test these views by making concept of accuracy top-of-mind. If people already recognize whether content is accurate but just don’t care much, accuracy salience should have no effect. But if problem is distraction, then accuracy salience should make people more discerning.
5/

— David G. Rand (@DG_Rand) November 17, 2019

In 3 preregistered exps (total N=2775) w MTurkers & ~representative sample, we have subjects in Treatment rate the accuracy of 1 nonpolitical headline at the study's outset. As predicted, this reduces sharing intentions for false (but not true) headlines relative to control.
6/ pic.twitter.com/6xefDGr2ea

— David G. Rand (@DG_Rand) November 17, 2019
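As a rough illustration of what "reduces sharing intentions for false (but not true) headlines" looks like analytically, here is a hedged sketch of the cell-mean comparison behind the predicted treatment x veracity interaction. All responses below are fabricated placeholders on an assumed 1-6 sharing-intention scale; this is not the preregistered analysis code.

```python
from statistics import mean

# Each record: (condition, headline veracity, sharing intention, 1-6 scale).
responses = [
    ("control",   "false", 2.4), ("control",   "false", 2.6),
    ("control",   "true",  2.8), ("control",   "true",  3.0),
    ("treatment", "false", 1.9), ("treatment", "false", 2.0),
    ("treatment", "true",  2.8), ("treatment", "true",  2.9),
]

def cell_mean(condition, veracity):
    """Average sharing intention for one condition x veracity cell."""
    return mean(r[2] for r in responses if r[0] == condition and r[1] == veracity)

for veracity in ("false", "true"):
    diff = cell_mean("treatment", veracity) - cell_mean("control", veracity)
    print(f"{veracity} headlines: treatment - control = {diff:+.2f}")
# The predicted pattern: a clearly negative difference for false headlines,
# and a difference near zero for true ones.
```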

Finally, we test our intervention "in the wild" on Twitter. We build up a follower-base of users who retweet Breitbart or Infowars. We then send each user a DM asking them to judge the accuracy of a nonpolitical headline (w DM date randomly assigned to allow causal inference)
7/ pic.twitter.com/xNYMJD9rB9

— David G. Rand (@DG_Rand) November 17, 2019
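The key design feature is that each user's DM date is randomly assigned within the experiment window, so the timing of treatment is exogenous and pre- vs post-DM comparisons support causal inference. A minimal sketch of that assignment step follows; the user handles, window start date, and window length are invented for illustration and are not the study's actual parameters.

```python
import random
from datetime import date, timedelta

random.seed(42)  # fixed seed so the illustration is reproducible

users = [f"user_{i}" for i in range(10)]   # hypothetical user handles
window_start = date(2019, 9, 1)            # assumed experiment window start
window_days = 30                           # assumed window length in days

# Draw each user's DM date uniformly at random from the window.
dm_date = {u: window_start + timedelta(days=random.randrange(window_days))
           for u in users}

for user, day in sorted(dm_date.items(), key=lambda kv: kv[1]):
    print(user, day.isoformat())
```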

We quantify quality of their tweets using fact-checker trust ratings of 60 news sites. At baseline, our users share links to quite low-trustworthiness sites – mostly Breitbart, DailyCaller plus Fox. We then compare link quality pre-treatment vs the 24 hrs after receiving DM
8/ pic.twitter.com/z8SQCtmgIm

— David G. Rand (@DG_Rand) November 17, 2019
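To make the outcome measure concrete, here is a minimal sketch of scoring each shared link by the fact-checker trust rating of its domain and comparing the pre-DM mean with the post-DM mean. The trust values and example URLs are placeholders, not the actual 60-site ratings or tweet data used in the paper.

```python
from statistics import mean
from urllib.parse import urlparse

# Hypothetical trust ratings on a 0-1 scale (placeholders only).
trust = {
    "breitbart.com": 0.2,
    "dailycaller.com": 0.3,
    "foxnews.com": 0.5,
    "nytimes.com": 0.9,
}

def quality(url):
    """Trust rating of a link's domain, or None if the domain is unrated."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[len("www."):]
    return trust.get(domain)

def mean_quality(urls):
    """Average trust rating over the rated links in a list."""
    scores = [q for q in (quality(u) for u in urls) if q is not None]
    return mean(scores)

pre_dm = ["https://www.breitbart.com/a", "https://dailycaller.com/b",
          "https://www.foxnews.com/c"]
post_dm = ["https://www.nytimes.com/d", "https://dailycaller.com/e"]

print(f"pre-DM mean quality:  {mean_quality(pre_dm):.2f}")
print(f"post-DM mean quality: {mean_quality(post_dm):.2f}")
```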

We find a significant increase in the quality of news posted after receiving the accuracy-salience DM: 1.4% increase in avg quality, 3.5% increase in summed quality, 2x increase in discernment. Users shift from DailyCaller/Breitbart to NYTimes!
9/ pic.twitter.com/52fAceFUPu

— David G. Rand (@DG_Rand) November 17, 2019

We hope these studies will lead to more work in behavioral science on social media sharing & that our Twitter method will lead to more field exps.

We also hope platforms will take note, as our intervention is easily implementable. Could lead to less misinfo w/o centralized censorship!

— David G. Rand (@DG_Rand) November 17, 2019

I'm extremely excited about this project, which was led by @GordPennycook @_ziv_e @MohsenMosleh , with further invaluable input from coauthors @AaArechar @deaneckles

Please let us know what you think: comments, critiques, suggestions etc. Thanks!!

— David G. Rand (@DG_Rand) November 17, 2019

Because of the nature of our experimental design, we weren't really powered to test for long-term effects. My guess is that it probably didn't last that long – but it's a treatment that the platforms could deliver regularly (e.g. with pop-ups in the newsfeed) pic.twitter.com/mxFaPHeeRi

— David G. Rand (@DG_Rand) November 19, 2019

Totally agree! And philanthropists could buy ads delivering the treatment to misinfo sharers

— David G. Rand (@DG_Rand) November 19, 2019

 
For RT:

I added @DG_Rand's "Understanding and reducing the spread of misinformation online" thread to my collection of tweets: https://t.co/AAG4adFONQ pic.twitter.com/huEcy7aCqb

— Götz Kluge (@Bonnetmaker) January 12, 2020

Trump’s Failing Memory

Um no, ths is whr u said. https://t.co/QlORWMWs9a

— TBird (@58TByrd) January 7, 2019


 How about saying “sorry” to James Mattis?

 

New York Times on Syria, 2019-01-07~08:

Categorically Untrue Statement

This is, without a doubt, the most uninformed, imbecilic, toady, poorly-written, categorically untrue statement I have ever seen from a president of the United States. A complete disgrace. https://t.co/9eqoWFeroX

— Joe Cirincione (@Cirincione) November 20, 2018

 
 Later the US Senate got closer to the truth. From the Congressional Record:

Trump doesn't trust the press, but believed Mohammad bin Salman. Weird.

From the Senate:
SUPPORTING A DIPLOMATIC SOLUTION IN YEMEN AND CONDEMNING THE MURDER OF JAMAL KHASHOGGI
(Senate – December 13, 2018)
Text available as: TXT, PDF https://t.co/0nkvohYEyq pic.twitter.com/Tz7cl006l8

— Goetz Kluge (@Bonnetmaker) December 16, 2018
