State-sponsored disinformation campaigns - coming to an acrimonious online debate near you!
Reasonable People #12: Strategies to protect public discussion from disruption by hostile actors
‘Nearly half of the Twitter accounts spreading messages on the social media platform about the coronavirus pandemic are likely bots, according to researchers at Carnegie Mellon University’. Bot features include tweeting more frequently than humanly possible, being in multiple countries in the same day, and being part of networks with other bots.
The researchers haven’t published the details of their analysis, but the essential point is surely robust - online disinformation blends organic and fake. Some of it is deliberately seeded and amplified, intended to sow division. But some of it is “organic”: people with genuinely different views, and/or people whose primary purpose isn’t to sow division.
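For concreteness, here is what the kind of heuristic signalled above might look like as code. This is a toy sketch only, not the CMU team’s method: the thresholds and field names (tweets_per_day, countries_today, bot_neighbour_fraction) are invented for illustration.

```python
# Hypothetical sketch: a crude heuristic bot score in the spirit of the
# features mentioned above (posting rate, impossible travel, bot networks).
# Real detection pipelines are far more involved than this.
from dataclasses import dataclass

@dataclass
class AccountActivity:
    tweets_per_day: float          # average posting rate
    countries_today: int           # distinct countries geotagged in one day
    bot_neighbour_fraction: float  # share of the account's network already flagged

def heuristic_bot_score(a: AccountActivity) -> float:
    """Return a rough 0-1 score; higher means more bot-like."""
    score = 0.0
    if a.tweets_per_day > 144:           # more than one tweet every 10 minutes, all day
        score += 0.4
    if a.countries_today > 1:            # "in multiple countries in the same day"
        score += 0.3
    score += 0.3 * min(a.bot_neighbour_fraction, 1.0)  # clustered with other bots
    return min(score, 1.0)

if __name__ == "__main__":
    suspect = AccountActivity(tweets_per_day=300, countries_today=3,
                              bot_neighbour_fraction=0.8)
    print(f"bot score: {heuristic_bot_score(suspect):.2f}")
```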
Image: Theodor Kittelsen - Sjøtrollet, 1887 (The Sea Troll). Public Domain.
This is deliberate: those behind disinformation campaigns want to blend automated and organic content. Here’s the incomparable Kate Starbird with the background you need on online disinformation campaigns:
Recognizing the role of unwitting crowds is a persistent challenge for researchers and platform designers.
(Please also check out her talk at Truth and Trust Online 2019, ‘Understanding Disinformation as Collaborative Work’, where a scheduling error - not hers! - led her to compress her 30-minute talk into 20 minutes)
Poynter warns that, in the future, key vectors for Russian misinformation campaigns will be domestic organisations which become unwitting hosts:
we are increasingly going to see U.S. voices and U.S. organizations that will be the key disseminators of Russian malign disinformation, with messages targeting vulnerable and divided U.S. communities
Just as disinformation campaigns blend automated and organic, they mix true information in with the false. Some disinformation bots are sleepers - tweeting standard, true, possibly interesting content for years before becoming active in a particular disinformation campaign.
What this means is that you can’t identify disinformation campaigns solely by the content. It is not like the good guys believe X and the bad guys believe not-X (much as we might like to act like it).
Two mind-blowing facts from the wave of Black Lives Matter activism in the US around the time of the 2016 presidential election:
The largest Black Lives Matter page on Facebook was fake. It had more followers than the official BLM page but was started by a white Australian.
Russian disinformation supported both pro- and anti-BLM campaigns. They bought anti-BLM Facebook ads, but also ran false accounts tweeting information and support under the #BlackLivesMatter hashtag.
This is a persistent pattern: state-sponsored disinformation campaigns have the goal of fostering division, and so they naturally pick fault-lines in the target society to exacerbate. Disinformation campaigns have also targeted the vaccination debate and climate science.
Here’s my bet: if you know of a controversial issue online which seems to generate more adversarial heat than argumentative light, and in which some participants seem emotional, antagonistic and ideological to the point of caricature, then some significant proportion - but not all! - of the participants are bad actors or bots.
These bad actors have the purpose of amplifying dissent, without resolution or productive outcomes. Our job is to avoid getting caught up, or, once caught up, to avoid amplifying purposeless disagreement.
One strategy would be to ask: is the topic I care about being polluted by disinformation? I had a look at the Hamilton 2.0 Dashboard from the Alliance for Securing Democracy, which promises to use automated search to identify state-sponsored messaging on particular topics. I didn’t get much value from it, but ymmv. At root, it seems like identifying coordinated disinformation campaigns is always going to be easier after the fact, because of the likelihood that any campaign will embroil genuine actors. And it isn’t as if you need state support to make a discussion on the internet one in which reason seems to take second place to ideological purity and acrimonious accusation.
So perhaps a second strategy would be to develop a set of heuristics, the informational public health equivalent of teaching people about the importance of washing their hands:
Check your facts, especially if a story seems too juicy to be true.
Don’t use inflammatory language. For example, if a term is only used as a term of abuse - like ‘Climate Denier’ or ‘anti-vaxxer’ - you’re not going to do any persuasive work by using it.
… and so on
Chris Albon has a nice recent list of ‘rules for twitter’ (although he deletes his old tweets, so get ‘em while they’re hot/there).
These tactics are epistemic actions which improve public debate regardless of whether a particular discussion is a disinformation target. We’d hope that adopting such principles reduces population-level vulnerability to disinformation, but the evidence is that acting this way online just isn’t as natural or fun as generally shooting the shit, virtue signalling and flaming (sorry, humanity).
A third strategy is to work on how platforms host information and disinformation, and on the design choices which speed or hinder the spread of both. Which is an interesting thought: exactly how would you build an online platform so it was easier to authenticate truths?
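I don’t have an answer, but here is one way to make the question concrete: a toy sketch of a single design lever, a friction prompt before unsourced claims get amplified. The Post/reshare interface below is entirely invented; it illustrates the kind of choice a platform could make, not how any real system works.

```python
# Hypothetical sketch: adding friction before a claim with no cited source
# gets amplified, and surfacing fact-checks alongside reshares.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    text: str
    source_url: Optional[str] = None      # link offered by the author as evidence
    fact_check_url: Optional[str] = None  # attached by reviewers, if any

def reshare(post: Post, user_confirmed: bool = False) -> str:
    """Decide how a reshare request is handled."""
    if post.fact_check_url:
        # Surface the fact-check alongside the reshare rather than blocking it.
        return f"shared with context: {post.fact_check_url}"
    if post.source_url is None and not user_confirmed:
        # Unsourced claim: ask the user to pause and confirm before amplifying.
        return "prompt: this post cites no source - share anyway?"
    return "shared"

if __name__ == "__main__":
    rumour = Post(text="Nearly half of accounts discussing the pandemic are bots!")
    print(reshare(rumour))                       # prompts for confirmation
    print(reshare(rumour, user_confirmed=True))  # shared after the pause
```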
Read more
The Tactics & Tropes of the Internet Research Agency report to the United States Senate Select Committee on Intelligence. Reported on by Wired: How Russian Trolls Used Meme Warfare to Divide America.
“The mass public is rational only to the extent that prominent political actors provide a rational lead.”
Adam Berinsky. Assuming the Costs of War: Events, Elites, and American Public Support for Military Conflict. Journal of Politics. 2007. 69(4): 975-997.
Which makes sense in the context of Berinsky’s discussion of how disagreement among the elites cues public opinion, but does rely on a particular definition of rationality (which smuggles in the idea that taking cues from elites when they are wrong must be irrational).
Newsletter: The Interface with Casey Newton. I don’t know how he finds the time, but Casey writes voluminously and intelligently about tech platforms, almost daily. As an example, here he is on how Twitter's decision to fact-check Trump is a bigger, and more judicious, step than most assumed (May 26 2020).
What does Covid-19 mean for expertise? The case of Tomas Pueyo
My collaborator Warren Pearce on a glimpse of "a strange new world where the unruly rough and tumble of internet epistemology emerged onto the mainstream news"
Loury, Glenn C. "Self-censorship in public discourse: A theory of “political correctness” and related phenomena." Rationality and Society 6.4 (1994): 428-461.
Published in 1994
1994!
Podcast: Government vs The Robots
Back for a 3rd season. It starts with a great discussion with Peter Pomerantsev, author of This Is Not Propaganda: Adventures in the War Against Reality, about the post-Soviet propaganda model and its particular use of free-floating nostalgia as an engine.
Nostalgia is about not being comfortable in the present, and not having any coherent strategy for the future
Where this leads you:
Conspiracy theories emotionally comfort but epistemically confuse people
The author of Slate Star Codex has deleted his blog, following the NY Times’ threat to reveal his real name. This is a crime against intelligent discussion on the internet.
Off-topic: The Confessions of Marcus Hutchins, the Hacker Who Saved the Internet
At 22, he single-handedly put a stop to the worst cyberattack the world had ever seen. Then he was arrested by the FBI. This is his untold story.
ENDS