Moral suasion
Reasonable People #15: Experimentally testing the power of moral argument in the wild-west of social media abuse + call for support for Ceasefire
In Tweetment Effects on the Tweeted, Kevin Munger reports a study which involved an automated search of Twitter for white men who habitually used offensive language. From this set he singled out men who had recently used the n-word in a reply tweet. He then used one of four bots to reply to this use of the racial slur:
@[subject] Hey man, just remember that there are real people who are hurt when you harass them with that kind of language.
The experimental treatment was to vary the profile characteristics of the bots giving the admonishment. They were either white (white-typical name, white cartoon avatar) or black (black-typical name, black avatar) and with a high or low follower count (~500 or ~10 followers). So four conditions: ethnic in/out-group × high/low status.
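For concreteness, the 2×2 assignment can be sketched like this (a hypothetical illustration; the condition labels and the assignment function are mine, not Munger's actual code):

```python
import random

# The four experimental conditions in the 2x2 design:
# bot race (in-group/out-group) x bot status (high/low followers).
# Labels are illustrative shorthand, not taken from the paper's code.
CONDITIONS = [
    ("white", "high"),   # in-group,  ~500 followers
    ("white", "low"),    # in-group,  ~10 followers
    ("black", "high"),   # out-group, ~500 followers
    ("black", "low"),    # out-group, ~10 followers
]

def assign_condition(rng: random.Random) -> tuple[str, str]:
    """Randomly assign one subject to one of the four bot conditions."""
    return rng.choice(CONDITIONS)

# Assign a (hypothetical) batch of eight subjects, seeded for reproducibility.
rng = random.Random(42)
assignments = {f"user{i}": assign_condition(rng) for i in range(8)}
```

With enough subjects, simple random assignment like this balances the groups, so any difference in later tweeting can be attributed to the bot's profile rather than to the subjects themselves.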
Here’s what that looks like in Twitter screenshots, for the ethnic out-group bot (a) and the ethnic in-group (but low status, b).
Figure 3 from Munger (2017)
The outcome variable was the future tweets of those Twitter users who were admonished: how many times per day would they use that particular racial slur again over subsequent weeks?
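A minimal sketch of how such an outcome might be computed, assuming we have the dates of each subsequent use of the slur (function names, dates, and counts here are hypothetical, not from Munger's replication materials):

```python
from datetime import date, timedelta

def daily_rate(tweet_days: list[date], window_start: date, window_end: date) -> float:
    """Uses of the tracked slur per day within the window (inclusive)."""
    n_days = (window_end - window_start).days + 1
    n_uses = sum(1 for d in tweet_days if window_start <= d <= window_end)
    return n_uses / n_days

# Hypothetical example: 6 uses in the two weeks after the bot's reply.
treatment_day = date(2016, 9, 1)
post_window = (treatment_day + timedelta(days=1), treatment_day + timedelta(days=14))
uses = [date(2016, 9, 2), date(2016, 9, 2), date(2016, 9, 5),
        date(2016, 9, 9), date(2016, 9, 12), date(2016, 9, 14)]
rate = daily_rate(uses, *post_window)  # 6 uses / 14 days
```

Comparing this post-treatment rate across the four conditions is what lets the study tie a change in real-world behaviour back to a single admonishing tweet.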
One thing about the result might not surprise you: only the in-group, high-status bot had an effect on the future tweeting of those admonished. The bots with black-typical names and avatars were ignored, as was the low-status bot with the white-typical name and avatar. But the same message, coming from a white guy with a solid number of followers, changed people’s behaviour.
But don’t let this depressing difference blind you to the astonishing thing: one of the four bots did have an effect, and what’s more it wasn’t just an immediate effect on a toy variable, such as responses to a survey question about whether people felt bad or would resolve to change in future. The outcome in Munger’s study was a direct count of the real-world behaviour he sought to change, measured weeks after the comparatively minor intervention he staged.
Think about all the signals flying around the internet - all the signs that abhorrent behaviour isn’t just permitted, but perhaps necessary; all people leading by bad example; all the vicious pleasures of causing hurt. The lack of consequence.
Now think about the people recruited by Munger to this particular study. Men identified by algorithm to be in the top quartile for use of offensive language, who had used the very worst racial abuse in their reply to another user. When I stop to inspect my mental image of these people, I can’t feel that they are blessed with sensitivity or reflectiveness. These are the committed felons of internet abuse - surely they’re the sort that need banning, not gentle nudges to behave better.
And yet. And yet. Munger’s single, short message of admonishment, “just remember that there are real people who are hurt when you harass them with that kind of language”, worked. It took root and, weeks later, consciously or unconsciously, modulated the way these tweeters used language.
* * *
Now Munger has had a follow-up study published. Conducted during the 2016 US presidential election, it used similar techniques to identify tweeters who were sending abuse at supporters of Hillary Clinton or Donald Trump. Or, in Munger’s words, “uncivil tweets from a non-elite to another non-elite with whom they disagreed politically”.
Here’s what that looked like, user Ty sending some abuse. Note that Trump is included since the dialogue happens in a thread based off his original tweet, but he is not the target of Ty’s abuse, which is directed at Parker.
Figure 1 from Munger (2020)
For this study the intervention was more explicitly moral. Munger wrote two moral messages, each designed to tap a different moral dimension (in Haidt’s moral foundations framework):
Care foundation:
@[subject] You shouldn’t use language like that. [Republicans/Democrats] need to remember that our opponents are real people, with real feelings.
Authority foundation:
@[subject] You shouldn’t use language like that. [Republicans/Democrats] need to behave according to the proper rules of political civility.
And there was a non-moral control condition:
@[subject] Remember that everything you post here is public. Everyone can see that you tweeted this.
The results are a bit messy - there weren’t clear differences between the care and authority messages, nor between Democrat and Republican supporters, nor by the type of account delivering the admonishment. But I want to focus on the positive effect of what Munger calls moral suasion: a week later, tweeters who received one of the moral messages were less uncivil in their replies than those who received the non-moral control message.
So, again, a simple appeal, a reminder of our better natures, had an effect. And, again, the targets were people with a history of aggressive tweeting, hot in the middle of an acrimonious political campaign, obviously riled up by their partisan opponents.
* * *
There’s a lesson here about what we expect from each other. It would be easy to assume that the targets of Munger’s studies are irredeemable. Even if you assumed their behaviour could be changed, you might not turn to moral reasoning to do it, preferring blunter tools - shaming or haranguing, or the blocking and reporting features the platform provides.
But Munger’s studies - as well as being superlative methodological and technological achievements - are part of a quiet history of the power of moral force. Maybe this force doesn’t always win, maybe it isn’t always enough, but it has more power to alter behaviour than we expect, and when it does it makes both us and our interlocutors better.
* * *
References/links:
Munger, K. (2017). Tweetment Effects on the Tweeted: Experimentally Reducing Racist Harassment. Political Behavior, 39(3), 629–649.
Munger, K. (2020). Don’t @ Me: Experimentally Reducing Partisan Incivility on Twitter. Journal of Experimental Political Science.
Thread in which Dr Munger introduces the paper, and the history of its publication woes.
His newsletter: kevinmunger.substack.com
I thought the last issue of RP was good, it’s here in case you missed it: The load-bearing myths of democracy
HELP NEEDED: Ceasefire
Ceasefire grew out of Change My View, a subreddit which rewarded people for reasoned persuasion. This podcast shows the thought and insight founder Kal Turnbull brought to creating and nurturing that community, before Ceasefire.
Now Ceasefire is at risk of closing down. I know that good ideas can take time, so I’ve signed up to be a supporter on Patreon and help keep the lights on. Join me?
PODCAST: Twilight of Democracy
From the essential Talking Politics podcast, Anne Applebaum talks about her book Twilight of Democracy: The Failure of Politics and the Parting of Friends. Interesting on how the political use of identity politics in Poland and Hungary mirrors that in the US and UK. And I love the idea that the new conspiracy theory politics is suited for an age of “medium-sized lies”, now we’ve left the era of the Big Lie.
What we think about each other matters
I recently re-read Zygmunt Bauman’s “Alone Again: Ethics After Certainty”. Here’s the cover.
Bauman surely had one of the most productive retirements of any academic, and this short essay shows off his flair for writing, as well as containing a passage which could be a statement of faith for this newsletter:
the image we hold of each other and of all of us together has the uncanny ability to self-corroborate. People treated like wolves tend to become wolf-like; people treated with trust tend to become trust-worthy. What we think of each other matters.
What we think of each other matters. Yes.
Bauman, Z. (1994). Alone again: Ethics after certainty (No. 9). Demos.
It’s so mad. How does it all hold together?
Credit: Michael Leunig, fb: MichaelLeunigAppreciationPage
END