Vaccine dialogues
Reasonable People #35: a new research study by Lotty Brand and myself is published
Today a new paper, reporting a research project led by Dr Lotty Brand, is published: Covid-19 vaccine dialogues increase vaccination intentions and attitudes in a vaccine-hesitant UK population.
Here’s the story of how the project came about (including some of the details which we didn’t include in the paper, which has a more scholarly tone).
Back in January 2021, less than a year into the covid-19 pandemic, a French team including Sacha Altay and Hugo Mercier (winner of the Reasonable People most-mentioned-scholar title, two years running) published a preprint online: "Information Delivered by a Chatbot Has a Positive Impact on COVID-19 Vaccines Attitudes and Intentions".
In this paper the authors describe an experiment in which French citizens were assigned to one of two conditions. Half of the participants read a short text about how vaccines work. The other half were given the chance to interact with a chatbot, which was programmed to suggest possible vaccine-related queries and give answers reflecting the best available medical knowledge.
The chatbot worked! People who spent time with the chatbot, getting their questions about the vaccine answered, significantly shifted their attitudes towards the vaccine, and increased their intentions of actually getting vaccinated themselves.
Around this time, Lotty joined the University of Sheffield, as a part of our project on generating engaging dialogue from argument maps. We were invited to write a “rapid review” of the Altay chatbot paper.
(There’s a subplot here about how scientific publishing is changing: Altay’s paper was ‘published’ online a month after they collected the data - lightning pace in research terms - as a preprint, a form of straight-to-market publishing which precedes formal journal review and publication. We were invited to review it as part of Rapid Reviews: COVID-19, an ‘overlay’ project from MIT Press aimed at speeding up the audit of research published as preprints. For both Altay’s work and our follow-up, which I’m about to describe, formal analysis plans were publicly registered before data was collected - ‘preregistration’, which clarifies the interpretation of any analysis subsequently published. The acceleration, decomposition and diversification of research publishing, as well as the contest between profit and the commons, is a huge topic which will have to wait for another newsletter.)
In our rapid review, we pointed out that although the Altay study showed that interacting with the chatbot changed attitudes, it wasn’t clear exactly what it was about this experience that drove the effect. Was it the chatbot? Or that the people who spoke to the chatbot were more engaged? That they saw more information? That they trusted that information more?
So often, when interpreting experimental research, a lot hangs on the particular conditions and the contrast they allow. Altay et al’s position was that their control condition existed to gauge demand effects - the possibility that people would say they were more pro-vaccine merely because they were part of a research study, and maybe that would make the researchers happy. Their minimal control condition allows us to discount this - the people who engaged with the chatbot shifted their attitudes more than those who were just part of a study and asked the same attitude questions, but without the chatbot part. It leaves unclear, however, exactly what it was about the chatbot experience that drove the effect.
We decided to find out for ourselves.
Our experiment, described fully in the paper released today, followed Altay et al’s in using a question and answer format to provide information about the covid-19 vaccine. Key differences were:
- we recruited only people who had previously reported that they were “against” the covid-19 vaccines, or “neutral” (something made possible by the Prolific participant recruitment platform - thanks Prolific!). 716 of them in total took part in our experiment.
- we weren’t able to put together a full chatbot, with free-text dialogue and uniquely generated responses, so we used a bank of question-answer pairs. This kept the “dialogic” nature of the chatbot, but without the full experience of interactive text chat.
- our two conditions balanced the amount of text they contained, the amount of time participants spent engaged with the information, and the indicators of trust (participants in both conditions were told the same thing about why they should trust us, and trust the text we used).
Our experimental contrast focussed on testing the role of choice and interactivity in shifting attitudes. Maybe, we reasoned, the power of the chatbot lies in allowing people to customise the questions they ask - focussing on the areas of their highest concern, making the information more engaging (and hence, psychologically, encouraging deeper processing), and maybe also triggering an epistemic IKEA effect, in which people endorse beliefs they’ve had a role in assembling out of information they sought out for themselves. We tested this by allowing half our participants to choose how to navigate our branching set of question-answer pairs (choice condition), while the other half were given question-answer pairs at random, without the opportunity to choose (control condition).
Here’s a screenshot of what our participants would see in the choice condition (with a selected choice highlighted):
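The contrast between the two conditions can be sketched in code. This is a hypothetical illustration, not our actual experiment software (which ran on a survey platform); the question texts and function names are invented for the sketch.

```python
import random

# Hypothetical question-answer bank: the real study used a larger bank of
# vetted covid-19 vaccine Q&A pairs (the texts here are placeholders).
QA_BANK = {
    "How do the covid-19 vaccines work?": "placeholder answer A",
    "How were the vaccines tested for safety?": "placeholder answer B",
    "Why were the vaccines developed so quickly?": "placeholder answer C",
}

def next_question(condition, remaining, user_choice=None):
    """Pick the next question to display.

    choice condition:  the participant selects from the remaining questions
    control condition: a remaining question is drawn at random, no choice
    """
    if condition == "choice" and user_choice in remaining:
        return user_choice
    return random.choice(sorted(remaining))

# A control-condition participant is served question-answer pairs in a
# random order until the bank is exhausted.
remaining = set(QA_BANK)
transcript = []
while remaining:
    question = next_question("control", remaining)
    transcript.append((question, QA_BANK[question]))
    remaining.remove(question)
```

In the choice condition the same loop would run, but `user_choice` would come from the participant clicking one of the displayed questions.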
The results surprised me, but the only reason to have hypotheses is so they can be disproved, so I think we have to count the enterprise as a success.
Our experiment showed no extra benefit of interactivity and choice. Despite my expectation that supporting people’s intrinsic motivation to explore the topic, and to choose which questions they saw answered, would be more persuasive, the choice condition evoked no greater change in attitudes to vaccines.
Yet both conditions did produce significant changes in vaccine attitudes and intentions, changes as large as those in Altay et al’s chatbot condition. For Altay’s chatbot, the proportion of participants who did not intend to get the vaccine fell from 36% at the start of the experiment to 29% after engaging with the chatbot. For us, averaging across both conditions, 53% did not intend to get the vaccine at the start (remember, we recruited vaccine-hesitant participants), dropping to 44%. These are not huge absolute numbers, but they reflect a meaningful percentage of people changing their minds, on a topic which nobody could ignore in 2020 and 2021, and all because of a relatively brief online intervention (the median time viewing the vaccine information in our experiment was 4 minutes).
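For concreteness, here is the arithmetic behind that headline shift (the participant count is back-calculated from the reported percentages and the total sample, so treat it as approximate):

```python
n = 716                  # participants across both of our conditions
frac_before = 0.53       # did not intend to get the vaccine, before
frac_after = 0.44        # did not intend to get the vaccine, after

abs_drop = frac_before - frac_after  # 9 percentage points
rel_drop = abs_drop / frac_before    # roughly a 17% relative reduction
changed = round(abs_drop * n)        # roughly 64 people shifting their stated intention
```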
As well as this shift in declared intentions to get the vaccine or not, we also asked people about their attitudes to the vaccine, by asking them five questions: whether they believed the covid vaccines were safe, were effective, were rushed in their development, whether it was important to take the vaccine, and whether they trusted the people who developed the vaccines.
Combining responses to these questions we can construct a single “vaccine attitude” scale, and show how the distribution of attitudes shifts before and after engaging with our experiment:
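A minimal sketch of how such a composite scale can be computed, assuming 1-7 Likert responses. The item names and the choice of which item to reverse-code are illustrative assumptions; the study’s actual scoring lives in the public analysis scripts linked below.

```python
LIKERT_MAX = 7  # assuming a 1-7 agreement scale (an illustrative assumption)

def attitude_score(responses, reverse_coded=("rushed",)):
    """Average the items into one score, flipping reverse-coded items
    (e.g. "the vaccines were rushed") so that a higher score always
    means a more positive vaccine attitude."""
    total = 0
    for item, value in responses.items():
        if item in reverse_coded:
            value = (LIKERT_MAX + 1) - value  # flip: 1<->7, 2<->6, ...
        total += value
    return total / len(responses)

# A hypothetical vaccine-hesitant participant: agrees the vaccines were
# rushed (6), and is lukewarm-to-negative on everything else.
example = {"safe": 2, "effective": 5, "rushed": 6, "important": 3, "trust": 2}
score = attitude_score(example)  # (2 + 5 + 2 + 3 + 2) / 5 = 2.8
```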
Digging a bit deeper, we can see something which contradicts a lazy narrative you often hear, that people who resist the vaccine are unwavering and unreflective in their rejection of both specific vaccines and vaccination in general. The idea that they are not just anti-vaccination, but “anti-vax”.
Recall that we asked five questions about vaccine attitudes. Here are the individual distributions of responses to two of them: “I think we've had enough time to develop COVID-19 vaccines” and “I think COVID-19 vaccines are effective”
Our vaccine hesitant participants don’t reject the effectiveness of vaccines (right plot: the distribution is mostly massed right, indicating endorsement). They are worried that there wasn’t enough time to develop the vaccines safely (left plot, massed left, indicating lack of endorsement). This contradicts the idea that vaccine rejectors are driven by a monomaniacal rejection of everything about vaccines, misled by misinformation. Instead, they have concerns about the safety of the vaccines, worries about the speed of development and who did the developing. You might not feel these concerns yourself, but they are obviously legitimate.
This ties in to another idea about people who don’t want the vaccine: that they are victims of conspiracy theories. Anti-vax conspiracy theories loom large in our discourse around vaccination, and obviously conspiracy theorists do exist, and are often particularly vocal around vaccinations, but perhaps - leaving aside for a moment the longer term influence of anti-vax conspiracies - they don’t play that large a role in people’s immediate reasons for rejecting vaccines.
The absence of conspiratorial thinking is clear from the free-text comments we collected from our participants. At the end of the experiment we asked if there was anything they wanted us to know, and many participants wrote quite a lot. We’ve collected their comments and Lotty has made them available via this interactive Shiny app: lottybrand.shinyapps.io/vaccineComments/
Comments overwhelmingly centre on concerns about safety and issues of trust. They don’t mention popular conspiracy theories (Bill Gates, microchips, 5G). Across hundreds of comments and thousands of words from vaccine rejectors these conspiracies barely get a mention.
This is important to recognise. If you think that people who won’t get the vaccine are irrational conspiracists you might plan a particular kind of persuasion strategy, one which both insults most people who reject vaccines and is unlikely to be effective, because - as our study and others show - that is not what is driving vaccine hesitancy.
For me these studies - both ours and Altay et al’s - tell a positive story about human rationality. Despite the frequent proclamations from many commentators that most people are driven by prejudice, stubborn and unresponsive to evidence, people often do adjust their views when they engage with evidence and argument. The barriers to vaccine acceptance are not human irrationality, but reasonable concerns about safety and necessity, sustained by a lack of trust. These factors can be addressed, at least in part, by more dialogue which seeks to engage people as active, responsible, reasoners rather than treat them as irrational fools who need correction.
For full details of the experiment, please read our full paper via the link below:
Brand, C. O., & Stafford, T. (2022). Using dialogues to increase positive attitudes towards Covid-19 vaccinations in a vaccine-hesitant UK population. Royal Society Open Science. https://dx.doi.org/10.1098/rsos.220366
All of our data, code and analysis scripts are available at https://github.com/lottybrand/clickbot_analysis
Dr Lotty Brand, who led this project (and also did all the hard work):
twitter @LottyBrand
Altay et al’s paper has been published as:
Altay, S., Hacquin, A.-S., Chevallier, C., & Mercier, H. (2021). Information delivered by a chatbot has a positive impact on COVID-19 vaccines attitudes and intentions. Journal of Experimental Psychology: Applied. Advance online publication. https://doi.org/10.1037/xap0000400
Our rapid review is:
Brand, C. O. & Stafford, T. (2021). Review 1: “Information Delivered by a Chatbot Has a Positive Impact on COVID-19 Vaccines Attitudes and Intentions.” Rapid Reviews COVID-19. https://doi.org/10.1162/2e3983f5.237d4808
Workshop
In related news, our project Opening Up Minds - which funded the work described above - is also co-funding a workshop at the University of Cambridge on the 15th of October: Deliberation4Good (“How can people with diverging views be supported to successfully communicate and negotiate?”). I’m looking forward to meeting Sacha Altay at the workshop (Sacha was very supportive of our replication/extension of his work, but we’ve never met in person), as well as other speakers from across academia and industry. Check out the schedule here, which may even have been updated with titles for all the talks by the time you read this.
It’s all in-person, not online, but we’re hoping to release some recordings post-workshop. Check back here (or even better, sign up to the newsletter) for updates.
And finally…
This from SMBC nicely skewers something about simplistic approaches to how the brain works.
Apologies for the run of niche cartoons/memes, but I came across this from @stephen_want and thought anyone familiar with the “replication crisis” discourse around Psychology in the 2010s would enjoy it:
Stephen has a great line in academic memes, including this warning
Comments? Feedback? Academic versions of popular memes? I am tom@idiolect.org.uk and on Twitter at @tomstafford