How do you fact-check a deep story?
Reasonable People #48: The dynamics of 'participatory disinformation' campaigns show the limits of trying to correct individual rumours
A new paper forensically dissects the spread of three 2020 US Presidential election rumours and uses this to explore the nature of disinformation campaigns.
Lead author Stephan Prochaska with co-authors from the Center for an Informed Public at the University of Washington (including Centre director Kate Starbird) presented Mobilizing Manufactured Reality: How Participatory Disinformation Shaped Deep Stories to Catalyze Action during the 2020 US Presidential Election at the CSCW conference earlier this month.
The paper uses three case studies to investigate election rumours on the Twitter platform. This analysis confirms the importance of a small number of high-profile actors in spreading disinformation, and the speed at which rumours spread online.
Here’s the tweet that kicked off the “Sonoma Ballot Dumping” rumour, an arguably irresponsible tweet by a conservative influencer, which he later half-heartedly tried to qualify (without deleting the original):
This was later amplified by a highly partisan news site, Gateway Pundit. The timeline makes clear that by the time the official correction to this misinformation arrived (about 12 hours later), most of the circulation of the rumour had already occurred:
Across all three case studies, Prochaska and colleagues show how rumour cascades rely on the speed of social media, and on influencers' choices to selectively amplify confirming rather than disconfirming follow-ups, whilst stripping context which might allow correct interpretation.
The speed of online rumours and the bad faith of some actors are known challenges to fact-checking, but the Prochaska analysis emphasises two further factors.
First, the participatory dynamic of disinformation spread is emphasised. A small number of individuals may seed rumours, but many others participate in sharing them and contextualising them as part of a wider story. Disinformation, argue Prochaska and team, is collaborative work which encompasses covert actors who seed disinformation (knowing it to be false), high-profile influencers (who should know better), artificial agents (bots) and grassroots participants who may genuinely believe the falsehoods they share. Grassroots participants collaborate with high-profile influencers, allowing influencers to present claims without explicitly endorsing them, strategically hedging (“just asking questions”). Grassroots participants then supply context and amplification, making explicit the implications (e.g. that the election is being stolen).
The second factor is the wider claims, the ‘deep stories’, which individual rumours both reinforce and are reinforced by. Deep stories like “the election was stolen”, or “the government can’t be trusted”. The deep story plays a role in manufacturing the grievance, the moral legitimacy, which then becomes a platform for offline action.
From the paper:
A key finding of our work is that many of these influential figures amplified specific pieces of alleged evidence in a way that allowed themselves to be distanced from having to take responsibility for the spreading of falsehoods. Additionally, their audiences interpreted their ambiguous tweets using a frame of fraud, making it so that influential figures rarely, if ever, had to make specific claims to imply the existence of widespread fraud. Using this strategy, the deep story was used as implied evidence for the claim at the same time as the claim being presented was added as evidence for the deep story, creating a self-reinforcing construction of moral legitimacy1
In the paper conclusion, the feasibility of fact-checking ‘participatory disinformation’ campaigns is directly questioned:
thinking of disinformation as simply a matter of facticity2 has not proven effective for mitigating its spread. Any given fact-check can only correct a relatively small number of claims and even those claims remain useful in supporting the underlying deep story. Crucially, the deep story of voter fraud also delegitimizes the media and fact checkers. Consequently, those who already believe the deep story are unlikely to believe corrections of false claims.
…
Additionally, our work suggests that the primary difference between social movements based on disinformation and social movements not based on disinformation is the malleability of the underlying story and its accompanying narratives: because those who generate disinformation are motivated primarily by ends outside of the stories they spread (e.g. power, financial gain, etc.) they are more free to disregard facticity, as spreading the content of their message was never their primary goal. The very structure of a participatory disinformation campaign functions to muddy the lines between genuine engagement and motivated propaganda in such a way that the result looks and sounds like a traditional social movement, when in fact it is a deliberate effort whose goals are known mainly by those who strategically disseminate (and even opportunistically come to believe) the misleading claims.
Fact checking doesn’t work (the way you think it does)
The Prochaska paper conjures a picture of sophisticated, distributed, active disinformation campaigns, against which fact-checking isolated claims will always be chasing behind - a little slow, a little late. Not enough.
To be fair to fact checkers, they are well aware of the magnitude of the problem. Here is a manifesto published by three fact checking organisations in 2019 - Africa Check, Chequeado (Argentina), and Full Fact (UK): Fact checking doesn’t work (the way you think it does).
It’s a stirring account which begins by dismissing the naive view that any fact check will be instantly taken up by those spreading falsehoods. No, the manifesto argues, that’s not the point. Fact-checks are still worth doing, since a) there is a large audience for them, outside the people spreading false claims, and b) they establish a “they know we check” effect, where public figures take the possibility of being fact-checked into account:
We see individuals’ and institutions’ surprise at being asked to justify the facts behind what they say, and we see governments, media outlets, and pressure groups making changes to avoid being vulnerable to factual criticism.
This is ‘first generation’ fact-checking, says the manifesto, and still a noble calling, at the heart of good journalism, even if not nearly enough.
Second generation fact-checking is about power and accountability, and has these characteristics:
publish and act: follow up on fact-checks with whatever moral, public or regulatory action is needed to force corrections from disseminators of falsehoods
build datasets on the spread of misinformation, supporting research on the nature of the problem
advocate for systems change - from interventions to promote media literacy to policy changes
contribute to a culture where institutions can be trusted and public debate is grounded in common knowledge of what’s true or false (see RP#31)
The manifesto highlights some successes of this second generation model, while concluding that a third generation of fact-checking is required - digital, global, massively collaborative. “We know we will never stop misinformation and disinformation completely, but we can aim to reduce the harm it causes,” they conclude, and they sure as hell sound like they believe the effort is worth it.
Fact checking across the epistemic fracture
Fact-checking adopts a narrow remit - purposefully so. The task is not to ask if a claim is useful, or fun, or popular, or helpful or even interesting. It is just to ask if the claim is true. I think - not being a fact-checker, but having spoken to some - that fact-checkers would say that this is a strength, not a weakness.
But, still, the way the manifesto calls for second and third generation approaches to fact checking shows an unease about the limits of fact-checking in the current news environment.
Prochaska and colleagues underscore the extent of the challenge. It is not just an environment where people are at risk of being misinformed by accidental rumours. These are orchestrated, deliberate disinformation campaigns. Well-funded campaigns which cross oceans, years, human and artificial actors, knowing and unknowing participants. These campaigns are not made up merely of factually incorrect or misleading claims; they are made of narratives, deep stories which don’t rely on a factual foundation and which guide how new information is interpreted. These deep stories control how information is assembled, rather than being assembled from new information.
(I don’t know if this is where Prochaska and colleagues borrowed the term from, but I first heard the phrase “deep stories” from Arlie Hochschild, who uses it in her book Strangers In Their Own Land: Anger and Mourning on the American Right. It seems clear to me that all of us have Deep Stories: narratives which express our feelings about the way the world is, and which are relatively immune to contradiction by any single fact.)
David Roberts writes about “tribal epistemology”, where a group develops a story so strong that anyone who contradicts it is classified as the enemy, not to be trusted, and so can be dismissed. He’s explicit that this is what he thinks has happened to the Republican party in the US. Roberts:
Ultimately, communication, and with it survival as a polity, depends on a shared body of facts and assumptions about the world. For decades, the right has been sawing away at the threads that still connect it to mainstream institutions, procedures, and norms of conduct, to the point that it has created a hermetically sealed and impenetrable world of its own.
I’m not an expert in US politics - although like all Europeans I am subject to its emanations - but it’s clear to me that Roberts’ account of Republicans undermining their ability to hear corrective evidence is also a story, and one which has its own epistemic dangers. If we believe that a group is beyond reason then we won’t even try, and the alternatives to reason are worse, and degrading for everyone involved.
I salute the fact-checkers in their quest to shore up the common ground of society. Combatting this frightening future of participatory disinformation campaigns, deep stories, and the Big Lie may be impossible with fact-checking alone, but whatever the suite of counter-measures involved, it is surely going to include fact-checking.
The big questions are around epistemic communities and how we re-build a media and society which supports trust and trustworthiness. Fact-checking can’t do that on its own, but I don’t believe we’ll do it without checking facts.
References, Links, Background
Prochaska, S., Duskin, K., Kharazian, Z., Minow, C., Blucker, S., Venuto, S., ... & Starbird, K. (2023). Mobilizing Manufactured Reality: How Participatory Disinformation Shaped Deep Stories to Catalyze Action during the 2020 US Presidential Election. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1), 1-39. https://doi.org/10.1145/3579616
University of Washington: Center for an Informed Public (‘Our mission is to resist strategic misinformation, promote an informed society, and strengthen democratic discourse.’)
Their research on election disinformation has made them enemies. They are undeterred. (‘UW misinformation researchers will not buckle under political attacks’)
Follow Kate Starbird on mastodon
Fact checking doesn’t work (the way you think it does) Africa Check, Chequeado, and Full Fact 2019-06-20:
Vox 2016: What a liberal sociologist learned from spending five years in Trump's America
David Roberts 2020: With impeachment, America’s epistemic crisis has arrived
Catch-up service, misinformation theme:
A new study puts another nail in the coffin of "fake news": Reasonable People #47: Mainstream stories have the lion's share of the impact on people's attitudes to the covid vaccine, shows analysis of massive dataset from Facebook
The Factual: Reasonable People #41: I haven't felt optimistic about news for a long time, but learning about The Factual kindled some hope
The Epistemic IKEA effect: Reasonable People #20: The benefits of letting people come to their own conclusions
The marketplace of ideas: Reasonable People #11 A story about whether truth will out
END
Comments? Feedback? Deep stories? I am tom@idiolect.org.uk and on Mastodon at @tomstafford@mastodon.online
Those who follow the rationalist community will recognise what is being described as a “trapped prior”: the situation where our preconceptions are so strong that all incoming evidence is interpreted as supporting the preconception, meaning that no evidence can disconfirm our beliefs.
What is wrong with the word “veracity” eh?