Renée DiResta is the technical research manager at the Stanford Internet Observatory, which studies abuse in information technologies. Below she shares advice on how to help stop the spread of disinformation across social networks.
What is the difference between misinformation and disinformation?
Misinformation and disinformation are both, at their core, incorrect information. However, the motivation for sharing the content and the actors who share it are very different. Misinformation is often an “honest mistake”: for example, an article written by a generally reputable media outlet includes an error, and the error spreads organically. Disinformation, by contrast, is deliberately wrong and spread tactically. It is explicitly intended to cause confusion or to lead the target audience to believe a lie. Disinformation is a tactic in information warfare.
What sparked your interest in disinformation campaigns and mapping data across social media?
Back in 2014, I got very interested in the growth of the anti-vaccine movement and other pseudoscience conspiracies on social media, particularly the ways in which the asymmetric passion of truther communities led them to produce tons of false content that was then algorithmically amplified by social platforms. I watched the gradual incorporation of bots and fake accounts to further amplify these messages. Around the same time, violent extremists began to co-opt social platforms to spread terrorist propaganda in much the same way. I realized that the features of the social ecosystem themselves enabled systemic manipulation, and I began to study the strategies and tactics used across communities.
How can social media companies and the government combat the spread of misinformation?
It must be a joint effort. Social media companies have information about user behavior that the government doesn’t have; third-party researchers have information about how content moves across the ecosystem that neither has. We each hold a few pieces of the puzzle, so we must cooperate: share information and alert one another to evidence of manipulative campaigns, especially where election interference is involved.
Why do you think social platforms are facilitating radicalization on the internet?
I think recommendation engines are culpable in radicalization. We’ve seen them suggest everything from jihadists to follow on Twitter, to extremist videos on YouTube, to bomb-making materials pushed as “items purchased together” on Amazon, to manipulation-coordination groups on Facebook. They are simultaneously powerful, because they suggest what users want to see, and simplistic, because they don’t apply any system of ethics to what they suggest.
I’ve personally seen accounts that participate in pseudoscience groups on Facebook get recommended Pizzagate and QAnon content, despite never having searched for or engaged with political conspiracy theories. Recommendation engines are a conspiracy correlation matrix. We’ve known about their potential to radicalize users for a while; it was enough of a concern that Alphabet’s Jigsaw team piloted a project called the Redirect Method, which tried to nudge users searching for violent extremist content in a different direction.
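To make the “conspiracy correlation matrix” point concrete, here is a minimal sketch of item-item collaborative filtering, the basic technique behind “engaged with this, try that” suggestions. The data and numbers are invented for illustration and do not represent any platform’s actual system; real recommenders use far richer signals, but the core mechanism of surfacing whatever co-occurs in user behavior is the same.

```python
# Minimal sketch of item-item collaborative filtering with invented data.
# Illustrative only; not any platform's real recommendation system.
import numpy as np

# Rows are users, columns are content items (groups, videos, products).
# A 1 means the user engaged with that item.
engagement = np.array([
    [1, 1, 0, 0],  # user A: items 0 and 1
    [1, 1, 1, 0],  # user B: items 0, 1, and 2
    [0, 1, 1, 1],  # user C: items 1, 2, and 3
    [0, 0, 1, 1],  # user D: items 2 and 3
], dtype=float)

# Item-item co-engagement counts: entry (i, j) is the number of users
# who engaged with both item i and item j.
co_engagement = engagement.T @ engagement

# Normalize to cosine similarity so popular items don't dominate.
norms = np.sqrt(np.diag(co_engagement))
similarity = co_engagement / np.outer(norms, norms)
np.fill_diagonal(similarity, 0.0)  # an item should not recommend itself

def recommend(user_row, top_k=2):
    """Score unseen items by their similarity to items the user engaged with."""
    scores = similarity @ user_row
    scores[user_row > 0] = -np.inf  # exclude items the user already saw
    return list(np.argsort(scores)[::-1][:top_k])

# A user who engaged only with item 0 (say, a wellness group) is steered
# toward items 1 and 2 purely because other users' engagement correlates
# them; nothing in the math considers what the items actually contain.
print(recommend(np.array([1.0, 0, 0, 0])))  # -> [1, 2]
```

Nothing in the scoring looks at what the items are: a fringe conspiracy group that happens to be co-engaged with wellness content will be recommended to wellness users just as readily as anything else, which is exactly the blindness described above.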
What is one thing the average person can do to fight disinformation on social media?
Check before you share! Take the extra few seconds to look over the source and read the article in full, to make sure the headline accurately reflects the content and that the site is reputable.
The views and opinions of the author are her own and do not necessarily reflect those of the Aspen Institute.