How to debunk media myths
In this post, UWS’s Ullrich Ecker, John Cook and Stephen Lewandowsky argue that cognitive science can help PR professionals develop strategies for managing media misreporting.
A growing cohort of commentators has bemoaned the descent of contemporary political “debate” into a largely fact-free zone.
People used to be entitled to their own opinions, but not their own set of facts. In the contemporary spectacle that passes for politics, it appears as though politicians are also entitled to make up their own facts at will.
There are small but encouraging signs that this era of post-fact politics might be coming to an end.
The boundary between truth and falsehood has arguably been eroded during the past few decades, aided in part by media outlets that have gradually discarded actual journalism, which establishes and reports facts, in favour of “he-said-she-said” churnalism.
This trend has made it possible for outlandish and patently false claims, such as the imaginary uncertainty surrounding President Obama’s place of birth, to be given extended coverage by the “mainstream” media, rather than being speedily dismissed upon investigation for complete lack of substance.
Into this fact-free media world exploded a bombshell earlier this year when the public editor of the New York Times, Arthur Brisbane, asked whether the paper should be a “truth vigilante”. Brisbane asked “whether and when New York Times news reporters should challenge ‘facts’ that are asserted by newsmakers they write about.”
Isn’t this why we have a free press in the first place?
The very fact this question was posed reveals the full depth of the vortex into which some Western societies have descended — Australia, sadly, among them. But it also shows this crisis is beginning to penetrate even the minds of those who are partly responsible for it in the first place. This is surely an encouraging sign.
Moons and myths
In light of all this, how might we restore the twin notions of “fact” and “reality” to public discourse?
If people mistakenly believe the moon is made of green cheese, how can we help them acquire a more realistic view of the world? Research in cognitive science can help answer that question.
Unfortunately, it is not as simple as saying “actually, the moon is not made of green cheese.”
It is not even sufficient to say it repeatedly.
So how do we then correct misinformation?
Enter the Debunking Handbook.
The Debunking Handbook is a freely available booklet, written by two of the present authors, that offers practical tips for effectively debunking misinformation and avoiding common pitfalls. The booklet reviews and explains some of the recent research on misinformation effects; we provide a quick summary here.
The first thing we need to realise is that simple retractions are often ineffective. For example, when a person — let’s call him John — is accused of a crime, a simple statement that John has been found innocent will not suffice to eradicate people’s suspicions of John. Even if people understand and remember the retraction, the initial accusation will have an ongoing effect on people’s understanding of the crime and their attitude towards the accused.
This persistence of misinformation arises because people build “mental models” of the world based on the information they are given. When some of this information later turns out to be wrong, a gap is left in this mental model. Having these gaps feels uncomfortable, so in the absence of a better explanation, people often opt for the initial, easily available explanation even if it is wrong.
Even retracting an untruth multiple times may not do the trick. That’s perhaps a bit surprising because repetition is one of the most potent ways to increase belief in an assertion.
Alas, not only are simple retractions largely ineffective; some debunking tactics can actually backfire and ironically amplify the misinformation effects. The Debunking Handbook describes three such “backfire” effects.
The “familiarity” backfire effect arises because retractions often repeat the misinformation (for example, “the moon is not made of green cheese” repeats the moon-cheese association). This makes the false link appear more familiar, and familiar arguments are more likely to be accepted as true. The “overkill” backfire effect implies that people may be more likely to accept the green-cheese hypothesis the more one throws contrary arguments at them. Worse yet, if people’s core beliefs rest on the assumption that the moon is made of green cheese, then any direct attempt to alter their beliefs may meet resistance and lead to entrenchment of the original misinformation. This is called the “worldview” backfire effect.
So what can you do to avoid these backfire effects?
Fill in the gaps
First of all, remember that a retraction will leave a gap in a person’s understanding of the world, so the correction should try to fill that gap. Sometimes this is easy. In the crime example, if the true culprit has been found, the gap in the mental model can easily be filled — it wasn’t John, it was Jim. Providing a plausible and valid alternative will drastically reduce reliance on the misinformation.
Another gap worth filling after a retraction is the question of why the misinformation was presented in the first place.
Promoting a sceptical look at both the evidence itself and whoever presented the evidence (and what their agenda might be) is undoubtedly a good thing. “So Jack told you it was John? Well, guess what: Jack is Jim’s cousin.” A healthy sense of scepticism helps people sort truth from falsehood.
Don’t add fuel to the fire
First, one should not start a debunk by repeating the myth. Begin with the truth: the moon is a rock.
In some cases, repeating the myth is unavoidable when trying to debunk it; otherwise people may not know what you’re talking about. But if the myth has to be repeated, repeat it after presenting the facts.
Also, any myth repetition should be prefaced with a warning. Warnings put people in a cognitive mode of strategic monitoring and can hence reduce effects of misinformation.
Back to basics
Next, keep it simple. Stick to the facts. Leave out irrelevant details. Choose the strongest argument(s) and focus on what’s important. If you can offer one strong reason why the misinformation is false, leave it at that — do not accompany a strong reason with a few weaker ones; they may undermine your strong case.
Use simple language. Avoid the standard scientific terms relating to probability and the ever-looming possibility of falsification — “highly likely” and “strongly suggests” mean different things to a scientist than to the average reader.
Begin and end on a strong and simple message that people will remember and tweet to their friends, such as “study shows MMR vaccines are safe.”
Know your audience
The trickiest backfire effect to deal with is arguably the worldview backfire effect. The fact is, all the evidence in the world will not change the view of the hardcore moon-cheese believer.
But you stand a greater chance of correcting misinformation among those not so firmly decided. Hence debunking should be directed towards the undecided majority rather than the unswayable (and usually most vocal) minority. There is no point arguing with “birthers” about the President’s birthplace, but you can address rational adults.
Any debunking message should also be worded and framed in a way that is least threatening to the recipient’s worldview. Using non-inflammatory language, or presenting the opportunities that a change of mind may bring with it, can go a long way towards avoiding polarisation.
The full picture
So what’s the take-home message?
When debunking misinformation, don’t just retract. Give the facts. Warn people if you have to repeat the myth. Explain why the misinformation was given in the first place. Focus on what is most important. Use simple language and graphics. End on a strong take-home message. And begin by downloading the Debunking Handbook.