Helping people spot misinformation
And the greatest gift psychology gave the world
There is a popular account that misinformation is a virus, against which vulnerable people can be inoculated. The inoculation is done by training people to spot signs of misinformation: polarising rhetoric, conspiratorial thinking, appeals to emotion, and more. Individuals are inoculated by being warned that they are about to see misleading information, and then being given, alongside that misinformation, a preemptive refutation, a "prebunking" (the contrast is with debunking, refuting bad information after it has been received).
Opinions vary about the usefulness of this metaphor, and about whether there really are surface level signs that reliably identify misinformation. I’ve even heard inoculation theory intemperately described as a way for psychologists to re-brand media literacy so they can publish on it without citing educational research.
But assembled against these doubts stands a growing body of work, led by the modern proponents of inoculation theory, that seems to show that their prebunking formula works: the evidence shows that it helps people spot fake news and other kinds of misinformation.
In 2023, a report fired the first significant challenge at this edifice. The nature of that challenge, and how the proponents of inoculation theory responded, is the topic of today’s newsletter. In telling the story we’ll cover not just the science of misinformation, but also see something of how science advances, and learn about the greatest gift the study of psychology ever gave the world: a mathematical approach to understanding judgments which emerged from studies of radar operators in World War 2 - Signal Detection Theory.
* *
The 2023 report reanalyses the very studies done by proponents of inoculation theory themselves, but comes to a dramatically different conclusion. Researchers Ariana Modirrousta-Galian and Philip Higham had realised that reports of inoculation studies focused on the proportion of misinformation correctly identified (typically showing that this went up). This is only half the information you need to decide if the inoculation intervention was helping people.
To see why, consider that you can spot more misinformation correctly in two ways: you can become better at discriminating good from bad information (what we really want people to do), or you can just become more skeptical in general, changing your bias towards calling things misinformation (correctly identifying more misinformation, but at the cost of calling some true stories misinformation).
There is a fundamental trade-off when you have to make a category judgment: you need to balance the benefits of correct categorisation with the costs of mistakes in either direction. The same trade-off applies whether you are categorising a blip on a radar screen as a plane or a piece of static, a piece of news as true or false, or an investment as good or bad.
Signal Detection Theory provides a framework for thinking about these trade-offs in a principled manner. A key principle is that a decision-maker can be characterised by two independent qualities: their discrimination (ability to sort good from bad) and their bias (tendency to balance judgments towards one choice or the other). Signal Detection Theory provides the tools to calculate estimates of these qualities from the raw decision data.
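The arithmetic behind these estimates is simple enough to sketch. Here is a minimal Python example, using the standard equal-variance formulas for discrimination (d′) and bias (criterion c); the counts are invented purely for illustration, not taken from any of the studies discussed:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Estimate discrimination (d') and bias (criterion c) from raw counts.

    'Signal' here = a false news story, so a 'hit' = correctly flagging one.
    A log-linear correction (add 0.5 to each cell) keeps rates of
    exactly 0 or 1 from producing infinite z-scores.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF

    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)

    d_prime = z(hit_rate) - z(fa_rate)             # discrimination
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # bias (positive = conservative)
    return d_prime, criterion

# A hypothetical participant who flags 80 of 100 false stories,
# but also mistakenly flags 30 of 100 true ones:
d, c = sdt_measures(hits=80, misses=20, false_alarms=30, correct_rejections=70)
print(f"d' = {d:.2f}, criterion = {c:.2f}")
```

The two numbers answer separate questions: d′ asks "how well separated are this person's responses to true and false stories?", while c asks "which way do they lean when unsure?".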
Using this, Modirrousta-Galian and Higham argue that all previous inoculation studies had a fatal flaw: they reported an increase in the probability of correctly categorising false stories as misinformation, but they didn’t account for possible changes in the rate at which true stories were categorised as misinformation. Once you do that, their 2023 report showed, it looks like inoculation studies produce a general increase in skepticism about news (a shift in participant bias), but NO change in their ability to accurately distinguish true from false news (no shift in participant discrimination).
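To make that flaw concrete, here is a toy sketch under the standard equal-variance Gaussian model of Signal Detection Theory (the numbers are invented): a pure shift in criterion raises the hit rate, which looks like success if that is all you report, but it drags the false alarm rate up with it while discrimination (d′) stays exactly the same.

```python
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def rates(d_prime, criterion):
    """Hit and false-alarm rates under the equal-variance Gaussian model."""
    hit_rate = 1 - Phi(criterion - d_prime / 2)
    fa_rate = 1 - Phi(criterion + d_prime / 2)
    return hit_rate, fa_rate

# Same discrimination (d' = 1.0) before and after; the "inoculated"
# group simply adopts a laxer criterion for calling things misinformation:
before = rates(d_prime=1.0, criterion=0.5)
after = rates(d_prime=1.0, criterion=0.0)
print(f"before: hits {before[0]:.0%}, false alarms {before[1]:.0%}")
print(f"after:  hits {after[0]:.0%}, false alarms {after[1]:.0%}")
```

Reporting only the hit rate (50% up to 69%) would make this look like a successful intervention, even though nothing about the participant's underlying sensitivity has changed.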
This damning claim was repeated by a second study the following year, which independently collected data and applied the same analysis. In line with the Modirrousta-Galian and Higham report, this second study found a general shift in participants’ skepticism (bias) following inoculation, not an improvement in their ability to tell true from false (discrimination).
* *
Now the proponents of inoculation theory have hit back, with a new report analysing data across 33 of their experiments, synthesising the evidence to claim that - with this larger evidence base - inoculation looks like it really does alter participants’ underlying ability to discriminate true from false news, not just shift their biases.
In homage to Signal Detection Theory, here is the key graph. This is a so-called ROC curve (it stands for “Receiver Operating Characteristics”, evoking the origins in analysis of radar operators). The plot shows estimates from across all studies re-analysed, shown on two axes: hit rate (how many false news items were correctly identified, y-axis) versus false alarm rate (how many true news items were mistakenly called false, x-axis). The reason the results form a curve is that you can - as discussed - increase your hit rate by accepting that you’ll make more false alarms. You can move in the other direction, lowering the number of false alarms, but at the cost of a lower hit rate. In this way, the curve visualises the trade-off participants are making between two kinds of errors (false alarms and misses).
Every choice balances this trade-off, but different individuals can be better or worse in their discrimination, as well as differing in their bias. This is what the coloured estimation areas on the plot show: purple for participants from the control condition of inoculation experiments, and yellow for participants from the condition which received inoculation training. The yellow curve is more concave, pushing further towards the optimal performance represented by the top left of the plot. In this corner, should they ever reach it, are participants who score 100% hits and 0% false alarms - making no errors.
Comparing how concave two curves are lets you compare power of discrimination independently of bias. Here it is clear that the yellow curve bows further towards the top left along the full range of possible bias.
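One common way to quantify "how concave" is the area under the ROC curve (AUC), which summarises discrimination independently of bias. This Python sketch traces theoretical ROC curves for two hypothetical groups (the d′ values are invented, not the paper's estimates) and compares their areas:

```python
from statistics import NormalDist

nd = NormalDist()

def roc_points(d_prime, n=201):
    """Trace the theoretical ROC by sweeping the decision criterion."""
    points = []
    for i in range(n):
        c = -4 + 8 * i / (n - 1)  # criterion swept from -4 to 4
        fa = 1 - nd.cdf(c + d_prime / 2)    # false alarm rate (x-axis)
        hit = 1 - nd.cdf(c - d_prime / 2)   # hit rate (y-axis)
        points.append((fa, hit))
    return sorted(points)

def auc(points):
    """Area under the ROC curve via the trapezoid rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

control = auc(roc_points(d_prime=0.5))     # hypothetical control group
inoculated = auc(roc_points(d_prime=1.0))  # hypothetical treatment group
print(f"AUC control: {control:.3f}, AUC inoculated: {inoculated:.3f}")
```

An AUC of 0.5 is the diagonal (no discrimination at all); the more the curve bows towards the top left, the closer the AUC gets to 1, whatever criterion the participants happen to adopt.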
This is a direct rebuttal of the initial re-analysis (which covered just 8 studies). The new report re-analyses 33 studies, accounting for more than 37 thousand participants. In the researchers’ own words:
We consistently find that inoculation improves discernment between reliable and unreliable news without inducing response bias.
Inoculation interventions can be scaled and implemented without concerns about breeding generalized distrust.
Should we believe it?
Let’s take a minute to appreciate the sophistication of what is going on here. We have meta-analysis - the formal framework for synthesising across multiple studies. We have critique and rebuttal over multiple papers. We have a principled framework for doing a better analysis of the results (Signal Detection Theory).
Overall, this is how science should work: criticism driving new work and higher standards in a process of excoriating mutual improvement (to borrow a memorable phrase from another context).
The standards of the new report are very high, including rigour and honesty signals like open data, open code and preregistration.
If there is any basis to doubt the result, I think it must come from the fact that the analysis only uses data from the research group's own studies. This makes the data readily available for re-analysis, but it runs the risk that if there is a common flaw in the studies, it contaminates the overall result.
To be clear, I have no idea if there is such a flaw, but the possibility is the only thing that gives me hesitation about what seems a pretty decisive claim. What kind of thing am I imagining? Maybe something like unrepresentative or implausible stimuli. If the studies shared a common definition of misinformation (say one with clear, but unrealistic, signs of being false) then participants could be trained to discriminate better without it meaning much for the generalisability of the results, however sophisticated or however large the total sample size.
Recall that the second critical report I mentioned was an independent study, collecting their own data and showing that participants increased their skepticism without increasing their discrimination.
When I was a young skeptic people used to say “The plural of anecdote is not data”, but it took me a long time to really understand what they meant. The point is not that anecdotes are unreliable, although that is the context in which the phrase was often used. The point is that the evidential value of anecdotes doesn’t increase when you have more of them. Because anecdotes are flawed evidence, 1 is as convincing as 100. There is no aggregation benefit.
A comparable principle holds here. Inoculation theory may have the support of 33 studies, but if these are all fundamentally similar in some way (and that seems likely since they are from the same extended research group), they shouldn’t weigh as 33 times greater evidence than 1 contrary report.
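A toy Bayesian sketch shows the shape of the argument (the numbers are invented purely for illustration): if studies are independent, their likelihood ratios multiply; if they share a common flaw, then in the pessimistic extreme the second through the thirty-third add no new evidence at all.

```python
def posterior_odds(prior_odds, lr_per_study, n_studies, independent):
    """Toy Bayesian update: how much should n similar studies move us?

    Independent studies each contribute their full likelihood ratio.
    Perfectly correlated studies (the pessimistic extreme modelled
    here) collectively count as a single piece of evidence.
    """
    if independent:
        return prior_odds * lr_per_study ** n_studies
    return prior_odds * lr_per_study  # correlated: counts as one study

# Hypothetical numbers: even prior odds, each study worth a
# likelihood ratio of 2 in favour of inoculation working.
print(posterior_odds(1.0, 2.0, 33, independent=True))   # overwhelming
print(posterior_odds(1.0, 2.0, 33, independent=False))  # same as one study
```

Reality sits somewhere between these two extremes, which is exactly why the degree of similarity between the 33 studies matters so much.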
Given this, I would say there is still room to doubt the effectiveness of inoculation theory, even though this new report does cause me to shift my beliefs back towards thinking it effective (and despite how much I know the metaphor of inoculation infuriates many people).
The story the data tells looks unambiguous, but there may be a different story if we look at quirks of how the data was generated in the first place. Certainly the contradiction between studies from the inoculation proponents and other labs needs resolving.
Like the early operators studied by Signal Detection Theory, scrying unclear evidence from their radar scopes, we must try to balance endorsing true theories against rejecting false ones, mindful that there’s a trade-off between too general a skepticism and too general a credulity. The recent re-analysis is a clear signal, if not a decisive one, and, taking a step back, it reflects well on the health of the field that it supports such back and forth.
This newsletter is free for everyone to read and always will be. To support this you can upgrade to a paid subscription (more on why here)
Keep reading for references and further reading on Signal Detection theory, and more on flaws in AI models, psychological targeting and cognitive bias.
References
The Meta-analysis: Simchon, A., Zipori, T., Teitelbaum, L., Lewandowsky, S., & van der Linden, S. (2025). A Signal Detection Theory Meta-Analysis of Psychological Inoculation Against Misinformation. Current Opinion in Psychology, 102194. https://doi.org/10.1016/j.copsyc.2025.102194
Critical reanalysis of early data: Modirrousta-Galian, A., & Higham, P. A. (2023). Gamified inoculation interventions do not improve discrimination between true and fake news: Reanalyzing existing research with receiver operating characteristic analysis. Journal of Experimental Psychology: General, 152(9), 2411–2437. https://doi.org/10.1037/xge0001395
Independent demonstration of a shift in bias, not discrimination: Hoes, E., Aitken, B., Zhang, J., Gackowski, T., & Wojcieszak, M. (2024). Prominent misinformation interventions reduce misperceptions but increase scepticism. Nature Human Behaviour, 8(8), 1545-1553. https://www.nature.com/articles/s41562-024-01884-x
Critique of the “Misinformation as virus” approach by Dan Williams: Misinformation is not a virus, and you cannot be vaccinated against it
Key paper on modern inoculation theory: Lewandowsky, S., & Van Der Linden, S. (2021). Countering misinformation and fake news through inoculation and prebunking. European Review of Social Psychology, 32(2), 348-384. https://doi.org/10.1080/10463283.2021.1876983
Very good, but only if you *really* want to get into Signal Detection Theory: Wickens, T. D. (2001) Elementary Signal Detection Theory. https://doi.org/10.1093/acprof:oso/9780195092509.001.0001
BBC Media Centre: Serious accuracy issues persist in AI models used for news summarisation
New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants – already a daily information gateway for millions of people – routinely misrepresent news content no matter which language, territory, or AI platform is tested.
and
Key findings:
45% of all AI answers had at least one significant issue.
31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.
20% contained major accuracy issues, including hallucinated details and outdated information.
Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.
Comparison between the BBC’s results earlier this year and this study shows some improvements but still high levels of errors.
Dan Davies: financial crisis as cognitive shock
Thought provoking take by Dan Davies :
my starting point for thinking about the Great Financial Crisis is that it was a cognitive shock – a kind of head injury to the overall economic system, damaging one of its important organs so that it was no longer able to perform a vital function of balancing present consumption and investment for the future. That’s why (and I think economics ought to regard this as more of a puzzle than it does) the financial crisis, which caused practically zero physical destruction of productive capability, left scarring and after-effects for more than a decade, while the COVID-19 pandemic, which killed millions and left lots of buildings unusable, was all reversed within a couple of years, with a fairly trivial inflation by past standards as the worst economic consequence.
Link: are we having fear yet? the cerebral insult theory of financial crises
PAPER: The (In)Effectiveness of Psychological Targeting: A Meta-Analytic Review
Abstract, with highlighting by me
The use of psychological targeting—employing machine learning to predict consumer personality from digital footprints and subsequently tailoring persuasive messages—has emerged as a controversial yet prominent practice in digital marketing. Despite frequent claims about its potential to enhance message resonance and behavior change, a comprehensive, cross-disciplinary assessment of its effectiveness has been lacking. We address this gap with the first meta-analysis to systematically evaluate the two core components of psychological targeting: inferring personality from digital footprints and the impact of personality-tailored messages on consumer outcomes, as well as their combined, end-to-end effectiveness. Across 41 studies spanning marketing, psychology, and computer science, we find that only about 5% of the variance in personality can be predicted from digital footprints, and personality-tailored messages show negligible effects on behavior. We document pervasive methodological issues, highlighting that methodological rigor, not model class or data type, primarily determines reported accuracy. When design and evaluation flaws are controlled, the combined end-to-end effectiveness of psychological targeting approaches zero. We conclude by providing recommendations to strengthen future research in this field.
Perla, R., Maran, T., Bagci, B., Kraus, S., Kanbach, D. K., & Bouncken, R. B. (2025). The (In) Effectiveness of Psychological Targeting: A Meta‐Analytic Review. Psychology & Marketing. https://doi.org/10.1002/mar.70073
Catch-up
In case you missed it, recent newsletters include:
The Ideological Turing Test. Do you truly understand those you disagree with?
Community Notes require a Community. How we used a novel analysis to understand what causes people to quit the widely adopted content-moderation system
When half the population don’t trust. A Brexit-era survey of who voters trust and what it means for our information environment
…And finally
“Sampling Bias” by sketchplanations
END
Comments? Feedback? Love for Signal Detection Theory? I am tom@idiolect.org.uk and on Mastodon at @tomstafford@mastodon.online




On prebunking and response bias: skepticism and gullibility biases can each be 'functional' if the source of the information is detected by the recipient. Notably, information from politicians and information from close family can benefit from one or the other response bias. Gaining an ability to detect the source might be useful 'inoculation'.
On sampling bias: the best description of this I've seen is the YouTube video about US statistician Abraham Wald. He argued that because returning bombers had few bullet holes in their engines or cockpits, those were the areas that should be armour-plated: planes hit there never made it back to be counted.