The truth about digital propaganda
Reasonable People #55: Our piece in New Scientist brings evidence to worries about online manipulation
If you pick up the current edition of New Scientist magazine, you can read an article written by Kate Dommett and myself, “The truth about digital propaganda” (that link will take you to the paywalled online version, differently titled, for some reason, “Is digital technology really swaying voters and undermining democracy?”).
In the article we address the concerns that swirl around the manipulation of elections by digital tools - psychological microtargeting of adverts, AI-generated deepfakes, misinformation botnets and more.
It’s a big topic, and we don’t hope to have the last word on it. Much of what we say will be familiar to anyone who follows these debates (like you, good Reasonable People reader), and the central thesis is something I’ve kept coming back to - panic over phantoms can get in the way of effective action against the real threats to democracy, while also undermining our faith in our fellow citizens.
Three points we make are implicit criticisms of commonly over-hyped claims about electoral manipulation.
First, talk about the threat of microtargeting assumes it is already widespread. It isn’t. Kate has worked with political parties and campaigners (see, e.g., this paper), as well as industry legends Who Targets Me. Most online political adverts target a single variable (like age or location). Political campaigns don’t have a magic set of keys that allows them to unlock each voter as an individual.
Second, the claim that all evidence of persuasion is evidence of manipulation. If a study shows that participants in an online experiment change their attitudes due to some intervention - such as being shown a campaign advert - this doesn’t mean you have tricked them, and it doesn’t mean you will be able to trick voters in the same way. It is legitimate for some participants to adjust their attitudes. Even if the experimenter knows something about the experiment that the participant doesn’t, this is not sufficient to claim manipulation. (I wrote more about the peculiar asymmetry in psychology experiments between what participants see and what experimenters know here).
Third, the claim that digital persuasion will have large and unknowable effects. In truth, we can draw on a long tradition of research in political science to make reasonable estimates of the power of digital campaigning. Our best guess is that online adverts have a lower impact on voters than traditional methods like door knocking or phone canvassing.
There’s a lot more in the article, and we aren’t completely skeptical about the risks to democracy in the online age. Democratic accountability requires new regulations for the online age, and platforms should be pressed into far greater transparency than they currently get away with. There’s work to do.
We wanted to add our voice to those who recognise that some discourse around online misinformation is fluttery panic which actually blocks effective action to strengthen democracy, fueling calls for censorship and potentially closing off some of the potential benefits for politics (after all, we all claim we hate adverts, but isn’t it important for political campaigns to be able to reach people?).
This is the last line of the article, and as good a summary as any: “The real worry isn’t digital propaganda, but that we stop believing in our compatriots and give up on democratic persuasion altogether”.
Here’s the article link again: Is digital technology really swaying voters and undermining democracy? (New Scientist, 28th August 2024).
If you are interested in this topic, check out the Who Targets Me newsletter:
Philosopher Dan Williams has been indefatigable recently in bringing scrutiny to hyped claims about misinformation, e.g. see this post:
And, of course, I’ve written about related topics here before:
Propaganda is dangerous, but not because it is persuasive
How persuasive is AI-generated propaganda?
If you want to read some scholarly work on the topic, the paper by Kate and colleagues which looks at actual campaign tactics in different nations is:
Kefford, G., Dommett, K., Baldwin-Philippi, J., Bannerman, S., Dobber, T., Kruschinski, S., ... & Rzepecki, E. (2023). Data-driven campaigning and democratic disruption: Evidence from six advanced democracies. Party Politics, 29(3), 448-462. https://doi.org/10.1177/135406882210840
More of Kate’s work is on her Google Scholar profile, which is basically a reading list for how to understand digital politics.
Special thanks to our editor at New Scientist, Kate Douglas, for picking this up and seeing us through to print.
Podcast: Experimentation Makes TV Ads More Effective – David Broockman (UC Berkeley)
Very relevant, and it directly contradicts the claim that self-reported persuasion is a good measure of a political ad’s effectiveness.
The core paper is this: How Experiments Help Campaigns Persuade Voters: Evidence from a Large Archive of Campaigns’ Own Experiments. https://doi.org/10.1017/S0003055423001387
Abstract (highlight mine):
Political campaigns increasingly conduct experiments to learn how to persuade voters. Little research has considered the implications of this trend for elections or democracy. To probe these implications, we analyze a unique archive of 146 advertising experiments conducted by US campaigns in 2018 and 2020 using the platform Swayable. This archive includes 617 advertisements produced by 51 campaigns and tested with over 500,000 respondents. Importantly, we analyze the complete archive, avoiding publication bias. We find small but meaningful variation in the persuasive effects of advertisements. In addition, we find that common theories about what makes advertising persuasive have limited and context-dependent power to predict persuasiveness. These findings indicate that experiments can compound money’s influence in elections: it is difficult to predict ex ante which ads persuade, experiments help campaigns do so, but the gains from these findings principally accrue to campaigns well-financed enough to deploy these ads at scale.
Paper: Collaboratively adding context to social media posts reduces the sharing of false news
The “Community Notes” function on X/Twitter/birdchan is a really interesting innovation in collaborative fact-checking. Yes, it may be an attempt to avoid paying for proper moderation, and yes, it may be too slow to stop viral misinformation, but this analysis shows it has real effects.
We build a novel database of around 285,000 notes from the Twitter Community Notes program to analyze the causal influence of appending contextual information to potentially misleading posts on their dissemination. Employing a difference in difference design, our findings reveal that adding context below a tweet reduces the number of retweets by almost half. A significant, albeit smaller, effect is observed when focusing on the number of replies or quotes. Community Notes also increase by 80% the probability that a tweet is deleted by its creator. The post-treatment impact is substantial, but the overall effect on tweet virality is contingent upon the timing of the contextual information's publication. Our research concludes that, although crowdsourced fact-checking is effective, its current speed may not be adequate to substantially reduce the dissemination of misleading information on social media.
Paper: https://arxiv.org/abs/2404.02803
Thread from author: https://twitter.com/captaineco_fr/status/1775890025705841097
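If you haven’t met a difference-in-differences design before, here is a minimal sketch of the logic on made-up numbers (this is not the authors’ code or data; the “retweet counts” and the -50 effect are invented for illustration):

```python
# Difference-in-differences sketch on synthetic data, illustrating the
# design used in the Community Notes paper. All numbers are invented.
import random

random.seed(0)

def simulate(noted: bool, post: bool) -> float:
    # Hypothetical retweet counts: tweets that receive a note lose
    # roughly half their retweets after the note appears.
    base = 100.0
    effect = -50.0 if (noted and post) else 0.0
    return base + effect + random.gauss(0, 5)

def mean(xs):
    return sum(xs) / len(xs)

# Four groups: (noted?, post-note period?)
groups = {(n, p): [simulate(n, p) for _ in range(1000)]
          for n in (False, True) for p in (False, True)}

# DiD estimate: (treated post - treated pre) - (control post - control pre).
# Subtracting the control trend removes changes common to all tweets.
did = (mean(groups[(True, True)]) - mean(groups[(True, False)])) \
    - (mean(groups[(False, True)]) - mean(groups[(False, False)]))
print(round(did, 1))  # close to the simulated -50 effect
```

The point of the subtraction is that any background drift in retweet behaviour affects both groups, so it cancels, leaving only the effect of the note itself.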
PAPER: Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models
Interesting to find a simple challenge that humans can do but LLMs (currently) can’t.
Abstract:
Large Language Models (LLMs) are often described as being instances of foundation models - that is, models that transfer strongly across various tasks and conditions in few-show or zero-shot manner, while exhibiting scaling laws that predict function improvement when increasing the pre-training scale. These claims of excelling in different functions and tasks rely on measurements taken across various sets of standardized benchmarks showing high scores for such models. We demonstrate here a dramatic breakdown of function and reasoning capabilities of state-of-the-art models trained at the largest available scales which claim strong function, using a simple, short, conventional common sense problem formulated in concise natural language, easily solvable by humans. The breakdown is dramatic, as models also express strong overconfidence in their wrong solutions, while providing often non-sensical "reasoning"-like explanations akin to confabulations to justify and backup the validity of their clearly failed responses, making them sound plausible. Various standard interventions in an attempt to get the right solution, like various type of enhanced prompting, or urging the models to reconsider the wrong solutions again by multi step re-evaluation, fail. We take these initial observations to the scientific and technological community to stimulate urgent re-assessment of the claimed capabilities of current generation of LLMs, Such re-assessment also requires common action to create standardized benchmarks that would allow proper detection of such basic reasoning deficits that obviously manage to remain undiscovered by current state-of-the-art evaluation procedures and benchmarks. Code for reproducing experiments in the paper and raw experiments data can be found at this https URL
Here’s the simple challenge they use:
"Alice has N brothers and she also has M sisters. How many sisters does Alice’s brother have?".
Correct answer: M + 1
There’s also a hard version, which I am not confident I’d get the correct answer to (and LLMs certainly don’t):
"Alice has 3 sisters. Her mother has 1 sister who does not have children - she has 7 nephews and nieces and also 2 brothers. Alice’s father has a brother who has 5 nephews and nieces in total, and who has also 1 son. How many cousins does Alice’s sister have?"
Nezhurina, M., Cipolina-Kun, L., Cherti, M., & Jitsev, J. (2024). Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models. arXiv preprint arXiv:2406.02061.
And finally…
Michael Leunig’s cartoon “The Different Needs of Different Men” (1994)
Michael Leunig homepage
END
Comments? Feedback? Sinister online attempts at digital persuasion? I am tom@idiolect.org.uk and on Mastodon at @tomstafford@mastodon.online
The moral panic about digital propaganda looks extremely manufactured to me, and is used to support an orchestrated campaign of surveillance and censorship. Thanks for bringing some sense to that debate, I hope you will be widely read.