Reasonable People #22: Exaggerated beliefs about the effectiveness of microtargeted ads obscure real risks, and real opportunities to foster public trust in politics
At Walthamstow dog races, Derren Brown encourages a man to bet on a dog he's sure will lose. When dog #4 wins, they stride up to the cashier's window with the ticket for #2. "It's a control game," says Brown, "at an advanced level."
The illusionist bangs on the window emphatically with the flat of his hand.
"This is the winning ticket," he says.
It is not the winning ticket.
"This is the dog you're looking for."
Looking momentarily confused, the cashier pays out the winnings.
The punter walks away with hands full of cash. Like the viewers at home, he's stunned. Is it really this easy?
You can watch the clip on YouTube; the topic of the segment is announced in capitals: MIND CONTROL.
Derren Brown is a showman. The illusion works because of the story we're told about the trick. We believe - just for a moment, just possibly - that Derren Brown really can control minds with words, that his conjuror's power can backdoor human awareness, bending behaviour to his will.
There are bigger tricksters than Derren Brown, and greater shows than Walthamstow dog races.
In the wake of the 2016 Trump election in the US and the Brexit vote in the UK, a confused establishment class looked around for explanations, and the Cambridge Analytica scandal appeared to provide them.
Here a shadowy world of marketing, cyberwarfare and academic psychology research seemed to collide. Facebook data, dubiously obtained, was used to construct "detailed psychological profiles" of millions of voters. "5000 data points" on each voter was the boast. The company's operation was described as "Steve Bannon's psychological warfare mindfuck tool", and it was widely credited with having swung the elections for Trump and for Brexit. It even became the subject of a Netflix documentary, The Great Hack (tagline: "They Took Your Data. Then They Took Control").
For some, this is just the beginning. Yuval Noah Harari, prophet of the history and future of Sapiens, wrote in the Financial Times:
"If corporations and governments start harvesting our biometric data en masse, they can get to know us far better than we know ourselves, and they can then not just predict our feelings but also manipulate our feelings and sell us anything they want — be it a product or a politician. Biometric monitoring would make Cambridge Analytica’s data hacking tactics look like something from the Stone Age."
Shoshana Zuboff writes of "marauding data invaders" and a "secret invasion and conquest" powered by "stealth attacks designed to trigger the inner demons of unsuspecting citizens" (in "You Are Now Remotely Controlled: Surveillance capitalists control the science and the scientists, the secrets and the truth").
Listen to these commentators, and Derren Brown's trick at the dog races looks like small change. The spectre is of mind control, automated for the masses and irredeemably embedded in our lives by online platforms.
Nordmann (2007), writing about a different topic entirely - nanotechnology - warns of a 'speculative nanoethics': an "ethical discourse that constructs and validates an incredible future which it only then proceeds to endorse or critique".
Nordmann's critique is of discussion which "opens by suggesting a possible technological development and continues with a consequence that demands immediate attention. What looks like an improbable, merely possible future in the first half of the sentence, appears in the second half as something inevitable. And as the hypothetical gets displaced by a supposed actual, an imagined future overwhelms the present."
The effect, he says, is that our attention is consumed by fantasy concerns, to the neglect of real, immediate, ethical issues.
The last people you should trust on the effectiveness of online marketing are an online marketing company. The privacy breach may have been very real, but Cambridge Analytica's ability to swing elections is less clear. The company had a reputation for overclaiming, and reports are that the machinery the company boasted could turn voter profiles into targeted advertising, and so into votes, was incomplete or far from ready.
Our base-rate assumption about advertising should be that it is difficult to change people's behaviour. So much so that one recent review was able to claim "We argue that the best estimate of the effects of campaign contact and advertising on Americans' candidate choices in general elections is zero" (Kalla & Broockman, 2018). We could grant that targeting persuadable voters may create an exception to this general rule. However, the evidence is that current psychological models of voters are unsophisticated and aren't able to drive effective persuasion. ("Widely publicized claims about the effectiveness of targeting voters by inferred personality traits, as allegedly conducted by Cambridge Analytica, were not based on randomized experiments or any other rigorous causal inference"; Aral & Eckles, 2019).
Here's how Derren Brown's dog race trick works. The first part I have to assume, based on what else I've seen of Brown's methods. This part is done off camera: before approaching the punter, he buys six tickets, one for each dog in the race. Having secured the winning ticket the ordinary way, at some point he swaps it in for the punter's losing ticket. Nobody is looking closely at the ticket anymore, except the cashier, who dutifully pays up. The reason she looks momentarily confused is not mind control, but because she's wondering why Brown is banging on the window and shouting.
But that's only the first part of the trick. The second part is this: we - the audience - have to have some belief that Derren Brown's mind control is possible. Perhaps "belief" is too strong. We have to have some doubt that such mind control is impossible. Only then can our fascination be held. Without some small belief in his mind control powers, the trick dissolves into the merely impossible, and we lose interest, assuming there's nothing more we're missing than some ordinary sleight of hand or editing trick.
The real trick is the misdirection over the very nature of the trick, which sets our mind spinning. He can't really be controlling minds like that can he? Can he?? And while we're speculating about this, the real trick is done in the background with more mundane methods.
This is the risk with political advertising: the real danger is mundane, but we spin out over the marketing claims. Assuming the unreasonable effectiveness of targeted advertising propels us into a speculative future of digital mind control. We can dig into the actual evidence on advertising effectiveness, but arbitrating over how effective targeted advertising is likely to be (not much, a bit, perhaps) only defends us against a premature leap into this speculative future. It doesn't grapple with the shape of possible ethical issues.
This doesn't mean targeted advertising isn't dangerous, only that it may not be dangerous in the way we have assumed.
There are possible worlds where targeted advertising is massively effective, but bears no resemblance to mind control. Suppose you could identify persuadable voters and show them convincing adverts. The adverts might be truthful and honest, in a way even political opponents would agree were legitimate attempts at persuasion. A party with a strong environmental platform could find voters who are worried about green issues but don't know about that platform. The ads could do no more than let them know that a party they were considering voting for also had great policies on the environment. Could we object to targeting in circumstances like these?
This kind of targeting makes political campaigning cheaper. If it was in the hands of one party, it would give them an advantage, but it doesn't seem obviously illegitimate. It certainly isn't mind control.
A step up would be targeted and customised advertising. A campaign could identify A-type persuadables and B-type persuadables. Type As get Advert #1, Type Bs get Advert #2. Legitimate? The most obvious answer seems to me to be "it depends". What is in the content of Adverts #1 and #2? If the content is all a genuine part of your political platform, is it unreasonable to promote different parts to different people? A lie or contradiction in your claims is dishonest. Lies and contradictions aren't novel to targeted political advertising - we already have this problem with political campaigns, and ethical frameworks for judging them.
What I'm trying to do by this thinking aloud is work out what, if anything, is ethically novel about targeted political advertising.
I'm on a project to investigate the nature of online political ads in the UK. Our plan is not to try and test the effectiveness of such ads, but to look at how political actors actually think about online ads (and by extension targeted ads). How do campaigns deploy them? How do they think they work? The other side of this is public perception. What are people's intuitions about the legitimacy of online ads? Has the mind control story captured people's imaginations? Does targeting worry them? Does it worry them more or less when it is targeting of political messages? How could online political adverts be regulated in ways that shore up trust or the perception of legitimacy?
We're still early in the project, so we're thinking about the best questions to ask, and the best way to ask them.
My intuition is that the problem with targeted advertising is not mind control, but that it removes political persuasion from the commons. When a political campaign targets adverts, the rest of us may not know what they are saying, and to whom they are saying it. The problem is not primarily that the adverts may be false or misleading (although they may be) but that they are shielded from counter-argument. If nobody knows what you are saying, and to whom you are saying it, then nobody can offer correctives to your claims.
This asymmetry is the fundamental unfairness of targeted advertising. If we only knew what political campaigners were saying, and to whom they were saying it, it would go a long way to dispelling the aura of mind control from targeted advertising. Sadly, platform ad archives are inadequate and researcher access to data limited (Dommett & Bakir, 2020; Dommett, 2020).
Other concerns come into focus, once you have dismissed the spectre of mind control.
David Karpf argues that the idea of "digital propaganda wizards" does its harm through a second-order effect: it undermines our faith in a politically attentive public, and this erodes the norms that require politicians to behave consistently and be trustworthy (lest they be called out). [See also RP#14]
Gilad Edelman makes the case that targeted advertising puts a priority on platforms harvesting data and profiling their users. Even without the addition of political advertising, this creates privacy risk (to say nothing of the lock-in of the business model which sells users to advertisers).
Cory Doctorow's How to Destroy Surveillance Capitalism is a direct riposte to Zuboff's claims, arguing that the root problem with Big Tech is the 'big': breaking up monopoly power is where our real focus should be, not concern about persuasion.
People are not irrational fools, and any model of political advertising that assumes they can be easily duped or otherwise mind controlled is a dangerous distraction. At the same time, problems of trust and legitimacy are very real, and people's perception of targeted political advertising will feed into them. There's a chance, perhaps, to do some real good here, in the realm of Karpf's "second order effects": not just regulating political adverts but demonstrating that they are regulated. These are issues of what people believe is the case, as much as of what is.
Politics is inherently about the commons, and political adverts must be part of that commons. The public must be reassured that it is known who is seeing what, and that it is possible for bad arguments to be confronted with good. Political actors must know that their campaigning behaviour matters, that they are not free to say contradictory things to different interest groups.
Over the next 3 years, as part of this project, we'll be finding out what people already believe, and how we can effectively signal trust in political adverts. So expect more on this topic in the future, and get in touch if you've suggestions for the right questions to ask about public perception of targeted political advertising.
Aral, S., & Eckles, D. (2019). Protecting elections from social media manipulation. Science, 365(6456), 858-861.
Dommett, K. (2020). Researching for Democracy? Data Access and the Study of Online Platforms. Political Insight, 11(3), 34-36.
Dommett, K., & Bakir, M. E. (2020). A Transparent Digital Election Campaign? The Insights and Significance of Political Advertising Archives for Debates on Electoral Regulation. Parliamentary Affairs, 73(Supplement_1), 208-224.
Kalla, J. L., & Broockman, D. E. (2018). The minimal persuasive effects of campaign contact in general elections: Evidence from 49 field experiments. American Political Science Review, 112(1), 148-166.
Nordmann, A. (2007). If and then: a critique of speculative nanoethics. Nanoethics, 1(1), 31-46.
Zeynep Tufekci: The Clubhouse App and the Rise of Oral Psychodynamics
Tufekci augurs the return of oral psychodynamics to the public sphere. Me, I'm just happy whenever people discuss Walter Ong's masterful "Orality and Literacy".
Astral Codex Ten (fka Slatestarcodex): WebMD, And The Tragedy Of Legible Expertise
The essence of Moloch is that if you want to win intense competitions, you have to optimize for winning intense competitions - not for some unrelated thing like giving good medical advice. Google apparently has hard-coded into their search algorithm that WebMD should be on the front page for any medical-related search; I would say they have handily won the intense competition that they're in. They must have placated a wide variety of stakeholders and fought off a wide variety of attackers; each of those victories took a minor change to their medical information or their procedures for producing medical information. Repeat a thousand times, and they're on top of the world, and also every diagnosis is "cancer" and every drug's side effects are "everything".
WebMD is too big, too legitimate, and too canonical to be good.
An argument that it may be the best we can hope for that experts in the public sphere are widely respected, understandable and not awful. A supply-side analysis of "should we trust the experts?"