When explaining becomes a sin
Reasonable People #33: A post from the mindhacks.com archives on sacred values and taboo trade-offs, research on the downstream effects of hype over misinformation. And chipmunks.
The first half of today’s newsletter is a re-post of something I wrote following the 2011 English riots. Earlier that week I had left a peaceful city and gone on holiday to Cornwall - no riots there. One quiet evening, after a few days offline, I turned on the TV news and saw footage that made it look like the whole of London was on fire. This piece isn’t about the riots themselves, but about the commentary on them. This week, I was thinking again about the research discussed here, and would love to hear from you if you have any comments or follow-ups on Tetlock’s papers.
Two bits of context. First, I wrote this in 2011, before many of us were aware of exactly how deep the non-replication rabbit hole goes. Today I would review the experiments with a more skeptical eye. My sense is that the phenomenon of taboo trade-offs remains relevant and interesting, even if it isn’t as easy to evoke and measure as described in these papers. Second - for non-UK readers - the newspaper mentioned, the Daily Mail, is one of the most widely read papers in the UK. The paper supported fascism in the 1930s and has adopted a succession of distinct and often right-wing editorial lines ever since. In 2017 the paper was banned by Wikipedia editors from use as a source for facts, due to sensationalism and unreliability.
Thanks for reading Reasonable People! Subscribe for free to receive new posts
When explaining becomes a sin
As the cacophony of politicians and commentators replaces that of the police sirens, look out for the particularly shrill voice of those who condemn as evil anyone with an explanation for the looting other than their own. For an example, take the Daily Mail headline from Tuesday, which reads “To blame the cuts is immoral and cynical. This is criminality pure and simple”.
If I’ve got them right, this means that when considering what factors contributed to the looting, identifying government spending cuts is not just incorrect, but actively harmful. For the Mail, the question of what explains the looting is of such urgency that they are comfortable condemning anyone who seeks an explanation beyond “criminality pure and simple”. What could be motivating this?
Research into moral psychology provides a lead. One of my favourite papers is “Thinking the unthinkable: sacred values and taboo cognitions” by Philip Tetlock (2003). In this paper he talks about how our notion of the sacred affects our thinking. The argument he makes is that in all cultures some values are sacred, and we are motivated not just to punish people who offend against these values, but also to punish people who even think about offending against them. The key experiment, from Tetlock et al. (2000), concerns a vignette about a sick child and a hospital manager, who must decide if the hospital budget can afford an expensive treatment for the child. After reading about the manager’s decision, participants in the experiment are given the option to say how they felt about the manager, and to answer questions about such things as whether they think he should be removed from his job and whether, if he were a friend of theirs, they would end the friendship. Unsurprisingly, if the vignette concludes by revealing that the manager decided the treatment was too expensive, participants are keener to punish the manager than if he decided that the hospital could afford to treat the child. The explanation in terms of sacred values is straightforward: life, especially the life of a child, is a sacred value; money is not, and so should not be weighed against the sacred value of life.

But the most interesting contrast in the experiment is between participants who read vignettes in which the manager took a long time to make his decision and those in which he didn’t. Regardless of whether he decided for or against paying for the treatment, reading that the manager thought for a long time before deciding provoked participants to want to punish him more.
Tetlock argues that we are motivated to punish not just those who offend against sacred values, but also those who appear to be thinking about offending against sacred values – by weighing them against non-sacred values. In an added twist, Tetlock and colleagues also offered participants the opportunity to engage in ‘moral cleansing’ by subscribing to an organ donation scheme. The participants who read about the manager who chose to save the money over saving the child, and those who read about the manager who took a long time to make his decision, regardless of what it was, were the most motivated to donate their organs. This shows, Tetlock argues, that it is enough for the idea of breaking a taboo merely to flicker across our consciousness to provoke feelings of disgust at ourselves (which in turn provoke the need for moral cleansing).
Tetlock’s papers are a full and nuanced treatment of how sacred values and taboo cognitions affect our thinking. I have only presented a snapshot here, and can but recommend that you read the full papers yourself, but I’d like to break from the science to draw some fairly obvious conclusions.
The Daily Mail editors feel they are in a moral community in which society is threatened by the looters and by those who give them succour, ‘the handwringing apologists on the Left’ who ‘blame the violence on poverty, social deprivation and a disaffected…youth’ (to quote from the rest of Tuesday’s editorial). For some, the looting is an immoral act of such a threatening nature that to think about it too hard, to react with anything other than a vociferous condemnation, is itself worthy of condemnation.
The sad thing about adopting this stance is that it prevents media commentators from thinking about how they themselves might have contributed to the looting. The footage on TV and in newspapers such as the Daily Mail has been vivid and hysterical. Television has shown the most dramatic footage of the looting, while headlines have screamed about the police losing control and anarchy on the streets. You don’t have to be a scholar of psychology to realise that this kind of media environment might play a role in encouraging the copycat looting sprees that sprang up outside of London (although if you were, you would be aware of the literature on how newspaper headlines and TV footage can provoke imitation in the wider population).
Some, like the Daily Mail, see any attempt at explaining the looting as excusing the looting. The looting, like so much for them, is a moral issue of such virulence that they see people who understand society differently as part of the same threat to society as the looters. Research in moral psychology provides some clues about their style of thinking. It doesn’t, as far as I know, provide much of a clue about how to alter it.
Tetlock, P. E. (2003). Thinking the unthinkable: Sacred values and taboo cognitions. Trends in Cognitive Sciences, 7(7), 320–324.
Tetlock, P. E., et al. (2000). The psychology of the unthinkable: Taboo trade-offs, forbidden base rates, and heretical counterfactuals. Journal of Personality and Social Psychology, 78, 853–870.
See also Vaughan’s Riot Psychology
This was originally published at mindhacks.com on 2011-08-11. Below the button, the usual links and commentary on research on rationality, persuasion and argument.
A refrain of this newsletter is that ‘what we think of each other matters’, so it is nice to see a couple of papers which give evidential shape to this general claim:
PAPER: The ties that blind: Misperceptions of the opponent fringe and the miscalibration of political contempt
partisans consistently overestimated the prevalence of their opponents’ extreme, egregious political attitudes. (Over)estimation of political opponents’ agreement with extreme issues predicted cross-partisan dislike, which in turn predicted unwillingness to engage with opponents, foreclosing opportunities to correct misperceptions
We wrote about the attitude-consequences of avoidance learning in Eiser et al. (2003) - once you form an opinion that something is bad you avoid it, and so don’t update your opinion. A corrective to this is that our experiences aren’t within our complete control; sometimes we try to avoid something and get to experience it anyway. Life sometimes gives you lemons, or sits you next to an ideological out-group at a wedding, or similar. When we were building RL models of this phenomenon, an additional quirk was that any gradual learning mechanism didn’t produce lasting avoidance. Agents would learn that something was just bad enough to avoid, then avoid it, then the memory would decay and after a time their attitude would become neutral again. To mimic what humans did we needed to add an extra mechanism whereby the mere act of avoiding something reinforced the belief that it was bad.
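The dynamic is easy to sketch in a few lines of toy code. To be clear, this is not the connectionist model from Eiser et al. (2003), just a hypothetical delta-rule agent I’ve made up to illustrate the two memory assumptions: with decay alone the attitude drifts back to neutral, but if avoidance itself counts as confirming evidence, the negative attitude sticks.

```python
# Toy sketch (not Eiser et al.'s actual model): an agent estimates the
# value of a genuinely bad option (true reward -1). It approaches while
# its estimate is non-negative, and avoids once the estimate goes negative.

def simulate(avoidance_reinforces, steps=200, alpha=0.2, decay=0.02):
    """Return the agent's value estimate after `steps` time steps.

    avoidance_reinforces: if True, the act of avoiding itself nudges the
    estimate further negative (the extra mechanism described above);
    if False, unused memories simply decay back toward neutral (0).
    """
    v = 0.0            # initial, neutral attitude
    true_reward = -1.0
    for _ in range(steps):
        if v >= 0:
            # Approach: experience the option, standard delta-rule update.
            v += alpha * (true_reward - v)
        elif avoidance_reinforces:
            # Avoid, and let the avoidance itself confirm "it's bad"
            # (a weaker update, since there is no direct experience).
            v += alpha * 0.1 * (true_reward - v)
        else:
            # Avoid; with no new experience the memory decays to neutral.
            v += decay * (0.0 - v)
    return v

# Decay only: the estimate drifts back toward 0, so avoidance isn't lasting.
# Avoidance-as-reinforcement: the estimate stays firmly negative.
print(simulate(avoidance_reinforces=False))
print(simulate(avoidance_reinforces=True))
```

The parameter values are arbitrary; the qualitative pattern is the point - only the self-reinforcing variant reproduces the durable avoidance that humans show.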
Which is to say, I find Parker et al’s finding very plausible, and I wonder about the mechanisms for it. Presumably the US political environment increasingly supports social isolation from your ideological outgroup, thus blocking correction of false beliefs about how extreme they are. All of which isn’t helped by the role of the media and social media in surfacing the most extreme examples of opposing ideologies.
Parker, V. A., Feinberg, M., Tullett, A. M., & Wilson, A. E. (2021, October 1). The Ties that Blind: Misperceptions of the Opponent Fringe and the Miscalibration of Political Contempt. https://doi.org/10.31234/osf.io/cr23g
Citation for my war-story about computational modelling of attitudes:
Eiser, J. R., Fazio, R. H., Stafford, T. & Prescott, T. J. (2003), Connectionist simulation of attitude learning: Asymmetries in the acquisition of positive and negative evaluations, Personality and Social Psychology Bulletin, 29:1221-1235.
PAPER: Misinformation Is a Threat Because (Other) People are Gullible
This new preprint uses preregistered experiments to demonstrate that the extent to which individuals perceive misinformation as a threat is correlated with the extent to which they believe other people are unable to successfully identify false information. This is the “third-person effect” (Davison, 1983), also found with advertising (“other people are influenced by adverts, not me”).
The paper gets to the heart of an issue I’ve discussed before (RP#14), that of second-order effects. The first-order effect of the existence of misinformation is that people are misinformed. A second-order effect is that we come to believe that other people are misinformed. This undermines a ‘load-bearing myth of democracy’ (Karpf, 2019): the belief that the citizenry are well informed and paying attention to what our government does and how it does it. If we stop believing this, en masse, then the rationale for democratic choice of government, and for democratic oversight of government, crumbles.
Here’s a snip from the preprint introduction:
A growing body of research is pointing at the deleterious effect of these alarmist narratives on misinformation (Altay et al., 2020; Lee, 2021; Nisbet et al., 2021; Nyhan, 2020; Van Duyn & Collier, 2019), and have tried to correct them (Lyons et al., 2020). For instance, alarmist narratives about deepfakes, common in the popular press, have been found to increase skepticism in both true and fake videos (Ternovski et al., 2022). More broadly, if alarmist narratives on misinformation were to successfully increase the perceived prevalence of misinformation (which remains to be proven), they could lead to narrower media diets, less trust in the media (Shapiro, 2020) and reduce the sharing of reliable news on social media (Yang & Horning, 2020). For instance, the term ‘fake news’ has been used to delegitimize reliable news outlets and to dismiss their news coverage as deeply flawed (Farhall et al., 2019). One online experiment showed that exposure to elite discourse about fake news leads to lower trust in the media and less belief in true news (Van Duyn & Collier, 2019). Similarly, excessive public attention on misinformation is suspected to erode satisfaction with democracy by making electoral processes appear less fair and just (Nisbet et al., 2021)
Altay, S., & Acerbi, A. (2022, May 20). Misinformation Is a Threat Because (Other) People are Gullible. https://doi.org/10.31234/osf.io/n4qrj
Davison, W. P. (1983). The third-person effect in communication. Public opinion quarterly, 47(1), 1-15.
Nisbet, E. C., Mortenson, C., & Li, Q. (2021). The presumed influence of election misinformation on others reduces our own satisfaction with democracy. The Harvard Kennedy School Misinformation Review.
Our online survey of 2,474 respondents in the United States shows that greater attention to political news heightens PIM [Presumed Influence of Misinformation] on others as opposed to oneself, especially among Democrats and Independents. In turn, PIM on others reduces satisfaction with American electoral democracy, eroding the “virtuous circle” between news and democracy, and possibly commitment to democracy in the long-term.
PAPER: Moralization of rationality can stimulate, but intellectual humility inhibits, sharing of hostile conspiratorial rumors
Preprint from Antoine Marie & Michael Bang Petersen which focuses on people’s expressed commitment to rationality, as measured by the moralized rationality scale (Ståhl et al., 2016).
The finding is that more expressed commitment to rationality correlates with higher likelihood of sharing and believing hostile conspiratorial news. The interpretation is that this expressed commitment is a form of moral grandstanding, a way of status seeking rather than a genuine epistemic commitment.
Marie, A., & Petersen, M. (2022, March 4). Moralization of rationality can stimulate, but intellectual humility inhibits, sharing of hostile conspiratorial rumors. https://doi.org/10.31219/osf.io/k7u68
Ståhl, T., Zaal, M. P., & Skitka, L. J. (2016). Moralized rationality: Relying on logic and evidence in the formation and evaluation of belief can be seen as a moral issue. PLOS ONE, 11(11), e0166332. https://doi.org/10.1371/journal.pone.0166332
THREAD: On “cinematic epistemology”
* * *
Usually I put a cartoon here, but I thought I’d put some audio instead. Today’s thought: if you take songs you know well and slow them down, it can bring out something extra in them. At this point I’m not sure exactly what, but definitely… extra.
Not sure what I mean? Try Dolly Parton’s “Jolene”:
If that works for you, then try this: novelty 1980s covers by Alvin and the Chipmunks (“Call Me”, “Walk Like an Egyptian”, “Heaven Is a Place on Earth”), all massively slowed down. And it… works, somehow.
Following the conversion of the Chipmunks into a slow-doom-shoegaze act, trying the same thing with Portishead seems …unadventurous:
And, finally (really), “Last Christmas” by 80s pop sensations Wham!, slowed down by 8x into a 35-minute transcendent epic:
Comments? Feedback? I am email@example.com and on Twitter at @tomstafford