6 Comments
Ragged Clown

I had always assumed that once there was no new data for AI to learn from, it would just stop learning. But this makes it sound worse than that. Very sad.

I’ve used StackOverflow from the very beginning. To think that people have already stopped contributing is shocking. What happens when the same thing happens to Wikipedia?

Andrew

Great post! I think it shows that the real problem goes beyond model collapse to "knowledge collapse" - the effect on social learning (https://arxiv.org/abs/2404.03502). Instead of pleading with others not to use AI, I've turned towards encouraging them to, at a minimum, think of their own answer first before using AI, and to cultivate eccentric tastes and viewpoints!

Tom Stafford

Yes, it isn't just AI that pushes towards homogenisation. Here's to more eccentric tastes and viewpoints!

Erik Wade Harrelson

I’d like to comment and ask about this section here:

“If, in parallel, our reliance on models means we under-invest in our epistemic institutions then we will be accelerating towards a dead end - a world where we are unable to adapt to the new, unable to incorporate different perspectives, and over-reliant on an increasingly narrow repertoire of responses.”

But wasn’t this already happening? I lived in Washington DC for the last seven years, and I was struck by how ideologically homogenous the city had become. Everyone was tuned in to the exact same ideological frequency, reciting the same script over and over. Honestly, by 2022 it felt like the movie Invasion of the Body Snatchers. I would also argue that this was driven, in large part, by the conformity and ideological purity demanded by the epistemic institutions.

Or would you suggest that while AI as a packaged tool was introduced to the public in 2023, AI models, through algorithmic optimization, had already been operating behind the scenes long before that? Thus, AI was already shaping what people consume, particularly through news and media, via a subtler and more covert form of conditioning.

So then, shouldn’t this new model (ChatGPT) free us from the grip of the previous covert model? In a sense, the newer model has handed people a key, should they choose to use it, to escape the metaphorical prison of the old model. And wouldn’t this mean that we’ve been living in an AI model paradigm for more than a decade?

Sorry if my thoughts are all over the place. I’m just thinking out loud as I write this.

Tom Stafford

I think you make a good point - there are other forces which produce intellectual monocultures, not just use of AI. Whether it's reliance on the same sources of news, being ideologically aligned, or sharing the same environment, beyond a certain point, if people think too similarly - even if everyone is smart - you're losing out on possible gains from diversity.

If everyone uses different AI models, and they use them for diverse purposes (to explore the entire corpus of human written knowledge, for example), then maybe AI won't be bad for our collective intelligence, at least in the moment.

I think the argument about the risk to the knowledge-generating systems still holds though, even in this more positive scenario.

Erik Wade Harrelson

I should note I’m not actually trying to challenge your overall assertion. In fact, I agree with many facets of it. I’m just adding another layer. And of course, for my more positive scenario to play out, it relies on people not treating ChatGPT as merely a glorified Google search or as their new emotional support animal.

Here’s a more fleshed-out version of what I was alluding to yesterday, with some Odyssey references tossed in.

For more than a decade, most of us have been listening to the sirens and heading straight for the rocks. AI Version 1.0 was algorithmic optimization - the underlying architecture of social media, news feeds, and content curation. It didn’t just amplify information; it amplified human instincts. Tribalism, moral certainty, outrage, and emotional engagement became the currency of attention. Version 1.0’s incentives rewarded the loudest and most emotionally compelling content rather than the most accurate or nuanced.

From this, an ecosystem of algorithmically reinforced echo chambers emerged. Even those who avoided social media entirely were not immune, because all of the platforms, media channels, and news feeds were interconnected. Version 1.0 created a Frankenstein. Basically, the sirens were singing everywhere and we all crashed into the rocks.

Version 2.0, represented by tools like ChatGPT, provides a mast for those willing to resist the sirens. It offers agency and the ability to navigate away from the rocks. But it’s optional, and most people choose to stay on the rocks, still entertained by the emotional and tribal hooks baked into the system. Those who take the mast can turn away, but they also risk becoming isolated, because reality, in practice, is still defined by the majority anchored to Version 1.0.

Version 3.0 may be the moment we take control of the system itself, creating reward structures that align with nuance and critical thinking.