The Mind Is Flat: The Illusion of Mental Depth and the Improvised Mind, by Nick Chater (2018)
In The Mind Is Flat, Nick Chater insists that our beliefs are a hopeless, inconsistent bundle of confabulations, invented on the fly to fit what is in front of us. Importantly, these beliefs provide no sound basis for rationality - how could they, if they are inconsistent with each other?
"Our verbal explanations and justifications are not reports of stable pre-formed building blocks of knowledge, coherent theories over which we reason deep in an inner world. They are ad hoc, provisional and invented on the spot. We have consulted the inner oracle of common sense physics, psychology, ethics and much more hoping to uncover its hidden wisdom. But the oracle turns out to be a fraud, a fantasist, a master of confabulation"
Chater's assault is on the idea of mental depth - the folk-theory, shared with psychoanalysis, that we have a complex inner world, perhaps of multiple selves, from which powerful generating forces throw up reasons and motivations to the surface of our conscious lives.
His review of attempts in linguistics, ethics, artificial intelligence, and economics to extract simple, consistent principles of human thought leads him to the conclusion that there is no coherent inner world, that we are stuck inventing ceaselessly, with the only consistency in our beliefs borrowed from the necessary stability of the environment.
"Pre-formed beliefs, desires, motives, attitudes to risk lurking in our hidden inner depths are a fiction: we improvise our behaviour to deal with the challenges of the moment rather than to express our inner self. So there is no point wondering which way of asking the question (which you like to choose, which would you like to reject) will tell us what people really want. There are endless possible questions, and limitless possible answers. If the mind is flat, there can be no method, whether involving market research, hypnosis, psychotherapy or brain scanning that can conceivably answer this question, not because our mental motives, desire and preferences are impenetrable, but because they don't exist" (p123)
I've two responses to this skepticism.
First, I think Chater gives up too easily on finding consistency in human beliefs. In a way, Chater is treading a familiar path. The psychological school of thought called situationism takes evidence that our immediate environment strongly influences our behaviour and makes the leap to denying the importance of individual-level traits. Situationism is in the textbooks, along with evidence that the strong version - that individual differences in personality play no role in behaviour - is false. Contradictions in how we reason might force us to reject the idea that human reasoning is based on a few simple consistent principles, but the door is still open to the idea that we reason based on many complex consistent principles.
Second, there is another place Chater can look for a stable, consistent structure of knowledge and preferences, another place which could be a foundation for rationality. Not in our mental depths, but in our partial, ongoing attempts to rationalise ourselves. We try to explain ourselves, just as I am doing now. We listen to other people, maybe jumping on inconsistencies or errors, and in so doing create - using the space between ourselves - ever more elaborate schemas. From deciding where we'll go for lunch, to the principles of democracy, we create a fragile set of agreements over which we reason. These can be constantly revised, challenged, or (worse!) neglected, but they have an existence in our common knowledge. They don't have the solidity of physical reality, but they derive consistency indirectly, from our individual and collective efforts not to be caught in inconsistency. Even the most incoherent, recklessly improvisational actor is tricked into some coherence by a desire to agree with themselves. Once we start coordinating with other people, this process ratchets up. Argument is essentially a series of bargains, through which players explore the structure of reason. We explore the boundaries of categories and the implications of assumptions, try to reconcile different rules we've previously adopted, and work out when and how they apply to new circumstances.
This is a foundation for human rationality, but not emerging from hidden depths. It is a scaffold built above each individual, held up by our collective lives.
This consistency-from-without account gives us a new interpretation of some of the most striking evidence presented in Chater's book (chapter 6, "Manufacturing Choice"). Johansson and Hall's "choice blindness" experiments seem like remarkable demonstrations of inconsistency in human behaviour. Over a series of studies, Johansson, Hall and colleagues tested what happens when participants are asked to make a choice and then justify it - the twist being that post-choice, using sleight of hand, the experimenters present people with the option they didn't choose.
Remarkably, in situations like these, people often justify a decision they didn't make! So, in a choice of which of two photos is more attractive, participants will explain why photo B is more attractive, when moments ago they actually chose photo A. Or they will say why they prefer more left-wing policies, moments after indicating that their political preferences are more right-wing.
These experiments seem to make a mockery of our idea of ourselves as consistent reasoners. Instead, and in line with Chater's thesis, we seem to be unprincipled improvisers, rationalising rather than rational. Yet while the results do contradict the idea that we have stable, always-accessible internal representations of our views, I don't think they need to be read as showing human irrationality.
By definition the magic tricks used by the experimenters are hidden from the participants - they believe they are being confronted with the physical evidence of their recent choices. Asked to justify themselves, they weigh the reliability of the evidence in front of their eyes against the reliability of their memories. It doesn't seem so mad to me that they put some weight on what's in front of them - reality is well known for its consistency and refusal to be pushed around by what we believe or remember. Without magic tricks, this strategy would contribute to the consistency-from-without account I outlined above - it will help people establish preferences and beliefs in the common ground. Yes, it is a challenge to a strong internalist account of reasoning, but I believe that public representations can be a foundation for principled reasoning just as much as private memories.
(And just wait till you hear about the mind-altering effects of writing things down).
References
For completeness, the references on choice blindness Chater cites are:
Johansson, P., Hall, L., Sikström, S., & Olsson, A. (2005). Failure to detect mismatches between intention and outcome in a simple decision task. Science, 310(5745), 116-119.
Hall, L., Strandberg, T., Pärnamets, P., Lind, A., Tärning, B., & Johansson, P. (2013). How the polls can be both spot on and dead wrong: Using choice blindness to shift political attitudes and voter intentions. PloS one, 8(4).
Johansson, P., Hall, L., Tärning, B., Sikström, S., & Chater, N. (2014). Choice blindness and preference change: you will like this paper better if you (believe you) chose to read it!. Journal of Behavioral Decision Making, 27(3), 281-289.
Hall, L., Johansson, P., Tärning, B., Sikström, S., & Deutgen, T. (2010). Magic at the marketplace: Choice blindness for the taste of jam and the smell of tea. Cognition, 117(1), 54-61.
The choice blindness lab pages
Finally, it is worth noting that chapter 6 of Chater's book features a number of experiments for which the original result may exaggerate the size of the effect. See this replication of the choice blindness experiment on political preferences:
Rieznik, A., Moscovich, L., Frieiro, A., Figini, J., Catalano, R., Garrido, J. M., ... & Gonzalez, P. A. (2017). A massive experiment on choice blindness in political decisions: Confidence, confabulation, and unconscious detection of self-deception. PloS one, 12(2).
"contrary to what was observed in Sweden, we did not observe changes in voting intentions. Also, confidence levels in the manipulated replies were significantly lower than in non-manipulated cases even in undetected manipulation"
Chater also cites a study which primed patriotic attitudes using the US flag:
Carter, T. J., Ferguson, M. J., & Hassin, R. R. (2011). A single exposure to the American flag shifts support toward Republicanism up to 8 months later. Psychological science, 22(8), 1011-1018.
The Many Labs 1 replication project estimated the size of this effect to be indistinguishable from zero:
Klein, R., Ratliff, K., Vianello, M., Adams Jr, R., Bahník, S., Bernstein, M., ... & Cemalcilar, Z. (2014). Data from investigating variation in replicability: A “many labs” replication project. Journal of Open Psychology Data, 2(1).
(but see the original authors' response, Commentary on the Attempt to Replicate the Effect of the American Flag on Increased Republican Attitudes).
If you enjoy the newsletter, please consider forwarding it, or telling people about it by sharing this link: https://tomstafford.substack.com. And if you can complete or complement any of my half-thoughts, please hit reply and get in touch.
OTHER STUFF
Newsletter: Factually, from The International Fact Checking Network @ Poynter
The International Fact Checking Network (twitter: @factchecknet) provides a great weekly run-down on events in the world of misinformation, fake news and fact checking. This issue, Coronavirus deniers are real, even if their message isn’t, leads with the story of virus denial among Brazil’s far-right. This seems to go against what we’re normally told about the far-right playbook: take advantage of people’s fear. What’s going on there, then? Anyway, it’s a great newsletter and admirably global in its coverage.
Factually archives are here, subscribe link.
Science: Priming people to think about accuracy reduces their likelihood of sharing covid misinformation
David Rand and colleagues used an online testing platform and conducted an incredible 10-day conception-to-preprint sprint to show that an intervention they had previously tested for reducing the sharing of fake news also worked for covid-19 misinformation. I couldn’t summarise the work better than this thread, from @DG_Rand himself.
Pre-print: Pennycook, G., McPhetres, J., Zhang, Y., & Rand, D. G. (2020, March 17). Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy nudge intervention. https://doi.org/10.31234/osf.io/uhbk9
Which builds on: Pennycook, G., Epstein, Z., Mosleh, M., Arechar, A. A., Eckles, D., & Rand, D. G. (2019, November 13). Understanding and reducing the spread of misinformation online. https://doi.org/10.31234/osf.io/3n9u8
Note: the intervention builds on early work from the authors suggesting that lack of reflection, not partisan motivation, was behind most sharing of fake news. I review this work here:
Additionally, citing experts didn't enhance perceived argument quality directly (but it did enhance the perception that the person making the argument was an expert).
Zemla, J. C., Sloman, S., Bechlivanidis, C., & Lagnado, D. A. (2017). Evaluating everyday explanations. Psychonomic bulletin & review, 24(5), 1488-1500.
AND FINALLY…
I watched Platoon the other day and this quote is on-topic:
Quoted without attribution in the film, but probably made up by Oliver Stone
END
(and hello Mum)