In a polluted information ecosystem, our actions – even if well-intentioned – can make the disinformation problem worse.
“Online, everyday actions like responding to a falsehood in order to correct it or posting about a conspiracy theory in order to make fun of it – case in point, QAnon – can send pollution flooding just as fast as the falsehoods and conspiracy theories themselves,” writes Whitney Phillips, the co-author of You Are Here: A Field Guide for Navigating Polarized Speech, Conspiracy Theories, and Our Polluted Media Landscape.
“Once we publish or send or retweet, our messages are no longer ours; in an instant, they can ricochet far beyond our own horizons, with profound risks to the [information] environment. At least potentially.”
Is Phillips saying we should stop tweeting?
No. Not responding to polluted information, she argues, “prevents people from telling the truth, educating those around them … and pushing back against bigots, manipulators, and chaos agents”.
Instead, her advice is to be more strategic about who and what we amplify. We should question what we don’t know, whether we might be giving free publicity to malicious actors, and if the possible benefits of our actions outweigh the pollution we might cause.
Who and what you amplify matters for journalists too.
Dr Claire Wardle, co-founder of anti-misinformation nonprofit First Draft, warns that reporting on disinformation prematurely – before it reaches a “tipping point” where it is increasingly visible – could boost misleading content.
“All journalists and their editors should now understand the risks of legitimising a rumour and spreading it further than it might have otherwise travelled …”
The flipside is also true: wait too long and a falsehood may turn into a “zombie rumour” that refuses to die.
Why we fall for disinformation
Herman Wasserman, professor of media studies at the University of Cape Town, has researched “fake news” in South Africa, Kenya and Nigeria. What surprised him about surveys in these countries, he told Africa Check, was how many social media users shared false information despite suspecting that it was unverified or made up.
While studying consumers of disinformation during her John S Knight journalism fellowship at Stanford University in the US, Mandy Jenkins learned that the people she interviewed overestimated their ability to distinguish between “fake” and real. They tended to rely on search engines for verification but, unfortunately, search engines often “give you what you want to see”.
They were also overwhelmed with information. “It’s very tempting to close it off and just say: ‘You know what … I only want this stuff from my friends and my circle,'” Jenkins says.
In their report on information disorder, Wardle and media researcher Hossein Derakhshan argue that when we share news on social media, we’re not simply transmitting information. We become performers for “tribes” of followers or friends.
“This tribal mentality partly explains why many social media users distribute disinformation when they don’t necessarily trust the veracity of the information they are sharing: they would like to conform and belong to a group, and they ‘perform’ accordingly.”
“A lot of the time, what we see is that people will share the false content, either because they believe that it’s true, or because they want to believe that it’s true – it’ll confirm some kind of political leaning or political bias that they already have.
“And so part of the question is: Who is [going to tell them that] they’ve shared a piece of disinformation? Because if it’s somebody from what is seen as the other side … then there’s a danger that you’ll actually reinforce their resistance to the truth …”
Correcting false information doesn’t always work.
With health information, researchers Leticia Bode and Emily K Vraga recommend including a link to a credible source in your correction to increase its chances of success.
Disinformation researcher Nina Jankowicz’s book, How To Lose the Information War: Russia, Fake News, and the Future of Conflict, makes a case for solutions that consider the divisions in society that make us vulnerable to disinformation in the first place.
She writes that in countries where disinformation has long existed, “empowering people to be active and engaged members of society through investments in the information space and in people themselves is always part of the solution”.
Estonia, for example, focused on education and invested in both media and contact between people to “repair the gaps in trust and crises of identity” that made the country’s Russian-speakers an “easy target” of Russian disinformation campaigns.
Don’t be an accidental co-conspirator
People who spread disinformation rely on unsuspecting social media users to amplify their content. Renee DiResta, research manager at the Stanford Internet Observatory in the US, noticed a shift in the tactics used in 2018. “Twitter’s self-imposed product tweaks have already largely relegated automated bots to the tactical dustbin. Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead.”
So how can we avoid becoming accidental co-conspirators in a disinformation campaign?
The advice of UCT’s Wasserman is to:
- Actively look for “good information”. Examples include “independent, critical, rigorous journalism” and – in the case of Covid-19 – official sources of health information.
- Don’t share unverified information “just in case” it might be true. “It’s like passing on a virus – your fake post or false information can go on to multiply, infect many others, and do real harm.”
- Verify before you share, and develop the necessary skills to do so.
These skills include knowing how to do a reverse image search.
A reverse image search caught out an instance of false context – where an old photo taken in a different country was redistributed as part of the #PutSouthAfricansFirst campaign.
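The idea behind a reverse image search can be sketched in a few lines. Services like Google Images and TinEye use far more sophisticated matching, but a simple “difference hash” (dHash) illustrates the principle: an image is reduced to a compact fingerprint, so near-identical copies – resized, recompressed or lightly edited – still match. The pixel grids below are invented stand-ins for greyscale images.

```python
# A minimal sketch of perceptual hashing, the principle behind reverse
# image search. Each "image" is a 2D list of greyscale values; real tools
# would first decode and downscale an actual image file.

def dhash(pixels):
    """Difference hash: for each row, record whether each pixel is
    brighter than its right-hand neighbour (1) or not (0)."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits; a small distance suggests the same image."""
    return sum(x != y for x, y in zip(a, b))

# The second "image" is the first with slight brightness noise, as you
# might get from recompression; the third is unrelated.
original = [[10, 20, 30], [40, 30, 20], [5, 60, 6]]
recompressed = [[11, 21, 29], [41, 31, 19], [6, 61, 7]]
unrelated = [[90, 1, 80], [2, 99, 3], [70, 4, 60]]

h1, h2, h3 = dhash(original), dhash(recompressed), dhash(unrelated)
print(hamming(h1, h2))  # 0 - the brightness pattern is unchanged
print(hamming(h1, h3))  # larger distance - a different picture
```

Because the hash records relative brightness rather than exact pixel values, the lightly altered copy produces the same fingerprint as the original, while the unrelated image does not.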
Prof Camaren Peter and Yossabel Chetty of the Centre for Analytics and Behavioural Change (CABC), a nonprofit that tracks narrative manipulation on social media, urge Twitter users to look carefully at the tweeter’s account before interacting with its content.
“Check the account out to see how recently it has been set up, how many followers it has, and the frequency and types of posts they put out. Scrutinise the bio. There are many parody accounts using the names of well-known South Africans or celebrities to broker trust.”
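Peter and Chetty’s checks can be written down as a simple checklist. The thresholds below (account age, follower count, posting frequency) are illustrative assumptions, not figures from CABC or any platform – the point is which signals to look at, not the exact numbers.

```python
from datetime import date

def suspicion_flags(created, followers, posts_per_day, today=date(2021, 6, 1)):
    """Return a list of warning signs for a social media account.
    The thresholds are illustrative guesses, not platform rules."""
    flags = []
    age_days = (today - created).days
    if age_days < 90:
        flags.append("recently created")
    if followers < 50:
        flags.append("few followers")
    if posts_per_day > 100:
        flags.append("very high posting frequency")
    return flags

# A hypothetical account: days old, almost no followers, posting constantly.
print(suspicion_flags(date(2021, 5, 20), followers=12, posts_per_day=300))
```

No single flag proves an account is fake – a new account with few followers may simply be new – but several flags together are a reason to scrutinise the bio and the content before engaging.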
A blue tick is useful when you’re trying to verify someone’s identity, but Peter and Chetty warn that it doesn’t guarantee the accuracy of the account’s tweets.
Disinformation research scientist Richard Ngamita encourages social media users to report suspected disinformation “so that the social media companies log that information and this will be used in their algorithms in the long run”. Peter and Chetty also recommend flagging disinformation with Twitter, by clicking on the three dots above a tweet and reporting it as “suspicious or spam”.
Ngamita’s tips for Twitter users who want to identify disinformation include checking if the same text has been used by multiple accounts. “You can copy parts of the text and search to see if there are other users using this same content.”
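The copy-and-search check Ngamita describes can also be automated in miniature. A hedged sketch: normalise each post (strip handles, links and punctuation) and compare word overlap using Jaccard similarity, so copy-pasted posts still match even after small edits. The example posts are invented.

```python
import re

def normalise(text):
    """Lowercase the text, strip URLs and @handles, return the word set."""
    text = re.sub(r"https?://\S+|@\w+", "", text.lower())
    return set(re.findall(r"[a-z']+", text))

def jaccard(a, b):
    """Word-overlap similarity between two posts: 1.0 = identical wording."""
    wa, wb = normalise(a), normalise(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

post = "BREAKING: they don't want you to see this! Share before it's deleted"
copy = "@news Breaking: they don't want you to see this! share before it's deleted https://t.co/x"
other = "Lovely weather in Cape Town today"

print(jaccard(post, copy))   # 1.0 - same wording, likely a coordinated copy
print(jaccard(post, other))  # 0.0 - unrelated post
```

A high similarity score across many unconnected accounts is the pattern Ngamita points to: identical wording suggests the content was seeded, not written independently.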
To counter disinformation, he says, you should “think twice before you retweet, comment or like a tweet”.
Says Phillips: “We can’t control social media platform policies. We can’t control government regulation. We can’t control the various industrial polluters who make a killing by killing democracy. What we can control is how and when we choose to post; and by extension, the amount of pollution we filter into the landscape.”