AI will be the biro of thought
Not in a good way
Let me tell you how ChatGPT and all its brothers will change society.
The template to follow is the humble biro pen, patented by László Bíró in 1938.
The biro is - like the artificial intelligence of language models will be - ubiquitous. Biros are everywhere, including in space (where - myth busted - they still work). The most common is the Bic Cristal, an object probably as recognisable as any of the major religious symbols, and of which over 100,000,000,000 units had been sold as long ago as 2006.
There is probably one within 5 metres of you right now. In that way they are like rats, but less likely to evoke strong feelings. You might love them, possibly. You might hate them, even. But you still use them. And most likely all you feel is a sort of indifference.
Biros are also as close to post-ownership as any manufactured and individually purchased object can be. You don’t feel guilty if you accidentally take home a biro you’ve been using. Businesses give them away for free. They just ambiently float around, passing from hand to hand, easily discarded.
And this gets to the core of why the biro should be our template for the adoption of AI. Biros are not better writing implements than the fountain pen that they have largely replaced. The beautiful flowing cursive, the skills of penmanship, have been replaced by the brutal scrawl of the biro. Biro writing is nasty but cheap, oh so cheap, and it has crowded out the elegant text that can be produced with a fountain pen.
Sure, a few old-fashioned holdouts remain. You may be one of them, preferring a fountain pen for serious writing. But how much of that do you really do, now? There will always be a niche for the fountain pen, a small niche surrounded by a swelling ocean of billions upon billions of biros.
As with handwriting and biros, so with language models and thought. It doesn’t need to be true intelligence, it doesn’t even need to be good intelligence. The sheer abundance will crowd out bespoke writing, produced by effortful human labour. Most of us, most of the time, will put up with mass produced and good enough.
In the 1800s the economist William Stanley Jevons observed that when innovation - in his case, the steam engine - led to a lower cost of coal per unit of power produced, this lower cost did not reduce overall coal consumption. Instead, the lower cost (= greater efficiency) tended to increase overall coal consumption across the economy. This phenomenon, which runs contrary to a common assumption of governments about the impact of raising efficiency, is now called the Jevons Paradox.
I’m not an economist, but surely there is some analogue here with my biro model of AI. Language models dramatically lowered the production cost of grammatical, mostly sensible, if not particularly insightful, text. It would be naive to expect this to result in a lower society-wide consumption of text. Cheaper text will mean more text. Much more text, of lower quality.
The mechanism for the Jevons Paradox is elastic demand - which is economist-speak for people’s interest in something increasing a lot when its price goes down. We’ve created a text-hungry world: search engine results, chatroom banter, company FAQs. All rich sources of demand for good-enough text, biro-language.
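The elasticity mechanism can be sketched in a few lines of code, using a textbook constant-elasticity demand curve. All the numbers here are invented for illustration, not real estimates of the market for text:

```python
# Toy sketch of the Jevons Paradox via a constant-elasticity demand curve.
# All numbers are invented for illustration, not real estimates.

def quantity_demanded(price, elasticity=1.5, scale=100.0):
    """Constant-elasticity demand: quantity = scale * price^(-elasticity)."""
    return scale * price ** (-elasticity)

old_price, new_price = 1.0, 0.5   # efficiency gains halve the price of text
old_quantity = quantity_demanded(old_price)
new_quantity = quantity_demanded(new_price)

# With elasticity > 1 ("elastic" demand), halving the price more than
# doubles the quantity consumed, so total spending rises rather than falls.
print(round(old_price * old_quantity, 1))  # 100.0
print(round(new_price * new_quantity, 1))  # 141.4
```

When the elasticity is greater than 1, cheaper production means more total consumption - exactly the pattern Jevons saw with coal.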
The abundance of artificial text will raise the relative cost of the inefficient human-produced kind, forcing human writers into a similar economic niche to fountain pen manufacturers. Nice to have, the sort of thing you use on a special occasion like signing a wedding register, but a luxury.
POST: A do-or-die moment for the scientific enterprise
Important analysis of systematic, organised corruption in what gets published in academic journals
Here, we demonstrate through case studies that i) individuals have cooperated to publish papers that were eventually retracted in a number of journals, ii) brokers have enabled publication in targeted journals at scale, and iii), within a field of science, not all subfields are equally targeted for scientific fraud. Our results reveal some of the strategies that enable the entities promoting scientific fraud to evade interventions. Our final analysis suggests that this ability to evade interventions is enabling the number of fraudulent publications to grow at a rate far outpacing that of legitimate science.
Blog post by lead author Reese Richardson: A do-or-die moment for the scientific enterprise
Paper: R.A.K. Richardson, S.S. Hong, J.A. Byrne, T. Stoeger, & L.A.N. Amaral, The entities enabling scientific fraud at scale are large, resilient, and growing rapidly, Proc. Natl. Acad. Sci. U.S.A. 122 (32) e2420092122, https://doi.org/10.1073/pnas.2420092122 (2025).
POST: You Can't Just "Control" For Things
Short, sharp introduction to why blindly applying statistical adjustment for possible confounders is often ineffective, and sometimes actively harmful
You Can't Just "Control" For Things
Self-promotion: Showcase for class on Data Analysis and Visualisation
For the final assessment on my MSc module on Data Analysis and Visualisation, I ask the students to choose a novel data set and make a webpage highlighting a single, impactful visualisation they have made. The idea is to encapsulate, in microcosm, the complete journey from research question to publication, as well as to provide them with a portfolio piece. They leave the course with more than a grade: a piece of work which demonstrates how they’ve applied the skills the module is designed to teach.
Each year I make a showcase of some of the visualisations submitted (always with student permission, of course). Now I have put up the class of 2025 showcase. Enjoy!
Researchers Jailbreak AI by Flooding It With Bullshit Jargon
Another instalment in the chronicles of “AI alignment is hard actually” / fun ways to mess with language models
404 media: Researchers Jailbreak AI by Flooding It With Bullshit Jargon
You can trick AI chatbots like ChatGPT or Gemini into teaching you how to make a bomb or hack an ATM if you make the question complicated, full of academic jargon, and cite sources that do not exist.
ArXiv preprint: InfoFlood: Jailbreaking Large Language Models with Information Overload https://doi.org/10.48550/arXiv.2506.12274
Tom Johnson: Stop worrying about trust
Tom Johnson argues:
1. We’ve never trusted politicians, and there haven’t been any big shifts in public views of politicians’ trustworthiness since at least the 1980s.
2. In many other walks of life, trust is either high or improving or both.
3. Trust isn’t universally good: in some cases, a dose of cynicism is healthy.
Link: Stop worrying about trust
Recently on Reasonable People
The last three newsletters from me:
The Politicisation of Misinformation
The End of History and the Last Man
…And finally
The perfect meme for this news story
via @MOULE
END
Comments? Feedback? Counter-hot takes? I am tom@idiolect.org.uk and on Mastodon at @tomstafford@mastodon.online



