The Coprophagic AI crisis

Even the mystical account of AI’s glorious future fails on its own terms.

Cory Doctorow
Mar 14, 2024
A Mobius strip made of shiny metal posed on a ‘code waterfall’ background as seen in the credit sequences of the Wachowskis’ ‘Matrix’ movies. Image: Plamenart (modified) https://commons.wikimedia.org/wiki/File:Double_Mobius_Strip.JPG CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0/deed.en

I’m on tour with my new, nationally bestselling novel The Bezzle! Catch me in TORONTO (Mar 22), then NYC (Mar 24), then Anaheim and more!

A yellow rectangle. On the left, in blue, are the words ‘Cory Doctorow.’ On the right, in black, is ‘The Bezzle.’ Between them is the motif from the cover of *The Bezzle*: an escheresque impossible triangle. The center of the triangle is a barred, smaller triangle that imprisons a silhouetted male figure in a suit. Two other male silhouettes in suits run alongside the top edges of the triangle.

A key requirement for being a science fiction writer without losing your mind is the ability to distinguish between science fiction (futuristic thought experiments) and predictions. SF writers who lack this trait come to fancy themselves fortune-tellers who SEE! THE! FUTURE!

The thing is, sf writers cheat. We palm cards in order to set up pulp adventure stories that let us indulge our thought experiments. These palmed cards — say, faster-than-light drives or time-machines — are narrative devices, not scientifically grounded proposals.

Historically, the fact that some people — both writers and readers — couldn’t tell the difference wasn’t all that important, because people who fell prey to the sf-as-prophecy delusion didn’t have the power to re-orient our society around their mistaken beliefs. But with the rise and rise of sf-obsessed tech billionaires who keep trying to invent the torment nexus, sf writers are starting to be more vocal about distinguishing between our made-up funny stories and predictions (AKA “cyberpunk is a warning, not a suggestion”):

https://www.antipope.org/charlie/blog-static/2023/11/dont-create-the-torment-nexus.html

In that spirit, I’d like to point to how one of sf’s most frequently palmed cards has become a commonplace of the AI crowd. That sleight of hand is: “add enough compute and the computer will wake up.” This is a shopworn cliche of sf, the idea that once a computer matches the human brain for “complexity” or “power” (or some other simple-seeming but profoundly nebulous metric), the computer will become conscious. Think of “Mike” in Heinlein’s *The Moon Is a Harsh Mistress*:

https://en.wikipedia.org/wiki/The_Moon_Is_a_Harsh_Mistress#Plot

For people inflating the current AI hype bubble, this idea that making the AI “more powerful” will correct its defects is key. Whenever an AI “hallucinates” in a way that seems to disqualify it from the high-value applications that justify the torrent of investment in the field, boosters say, “Sure, the AI isn’t good enough…yet. But once we shovel an order of magnitude more training data into the hopper, we’ll solve that, because (as everyone knows) making the computer ‘more powerful’ solves the AI problem”:

https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/

As the lawyers say, this “assumes facts not in evidence.” But let’s stipulate that it’s true for a moment. If all we need to make the AI better is more training data, is that something we can count on? Consider the problem of “botshit,” André Spicer and co’s very useful coinage describing “inaccurate or fabricated content” shat out at scale by AIs:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4678265

“Botshit” was coined last December, but the internet is already drowning in it. Desperate people, confronted with an economy modeled on a high-speed game of musical chairs in which the opportunities for a decent livelihood grow ever scarcer, are being scammed into generating mountains of botshit in the hopes of securing the elusive “passive income”:

https://pluralistic.net/2024/01/15/passive-income-brainworms/#four-hour-work-week

Botshit can be produced at a scale and velocity that beggars the imagination. Consider that Amazon has had to cap the number of self-published “books” an author can submit to a mere three books per day:

https://www.theguardian.com/books/2023/sep/20/amazon-restricts-authors-from-self-publishing-more-than-three-books-a-day-after-ai-concerns

As the web becomes an anaerobic lagoon for botshit, the quantum of human-generated “content” in any internet core sample is dwindling to homeopathic levels. Even nominally high-quality sources, from CNET articles to legal briefs, are contaminated with botshit:

https://theconversation.com/ai-is-creating-fake-legal-cases-and-making-its-way-into-real-courtrooms-with-disastrous-results-225080

Ironically, AI companies are setting themselves up for this problem. Google and Microsoft’s full-court press for “AI powered search” imagines a future for the web in which search engines stop returning links to web-pages, and instead summarize their content. The question is, why the fuck would anyone write the web if the only “person” who can find what they write is an AI’s crawler, which ingests the writing for its own training, but has no interest in steering readers to see what you’ve written? If AI search ever becomes a thing, the open web will become an AI CAFO (concentrated animal feeding operation), and search crawlers will increasingly end up imbibing the contents of its manure lagoon.

This problem has been a long time coming. Just over a year ago, Jathan Sadowski coined the term “Habsburg AI” to describe a model trained on the output of another model:

https://twitter.com/jathansadowski/status/1625245803211272194

There’s a certain intuitive case for this being a bad idea, akin to feeding cows a slurry made of the diseased brains of other cows:

https://www.cdc.gov/prions/bse/index.html

But “The Curse of Recursion: Training on Generated Data Makes Models Forget,” a recent paper, goes beyond the ick factor of AI that is fed on botshit and delves into the mathematical consequences of AI coprophagia:

https://arxiv.org/abs/2305.17493

Co-author Ross Anderson summarizes the finding neatly: “using model-generated content in training causes irreversible defects”:

https://www.lightbluetouchpaper.org/2023/06/06/will-gpt-models-choke-on-their-own-exhaust/
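To make the intuition concrete, here is a minimal toy sketch in Python — my own illustration, not the paper’s actual experiments (which use language models and other generative systems). It fits a Gaussian to some data, samples “synthetic content” from the fit, refits on that content, and repeats. The small estimation error made at each generation compounds into a one-way drift, and the fitted distribution’s tails steadily vanish:

```python
import numpy as np

# Toy illustration of "model collapse": each generation "trains" a new model
# (fits a Gaussian) on data generated by the previous generation's model,
# then uses that fit to produce the next generation's training data.
# This simplified setup is an assumption for illustration only, not the
# procedure used in "The Curse of Recursion".

rng = np.random.default_rng(42)

def recursive_training(n_samples=100, n_generations=1000):
    mu, sigma = 0.0, 1.0                           # generation 0: the "real" data distribution
    for gen in range(n_generations + 1):
        if gen % 200 == 0:
            print(f"generation {gen:4d}: fitted sigma = {sigma:.4f}")
        data = rng.normal(mu, sigma, n_samples)    # content produced by the current model
        mu, sigma = data.mean(), data.std()        # the next model is fit to that content
    return mu, sigma

recursive_training()
# Typical run: the fitted sigma drifts from ~1.0 toward ~0.0. The model
# progressively "forgets" the tails of the original distribution, and the
# defect is irreversible, because the lost information is no longer present
# anywhere in the training data.
```

The mechanism is simple: every refit is a noisy estimate of the previous generation, those multiplicative errors never cancel, and there is no path back to the original distribution once it has been squeezed out of the data.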

Which is all to say: even if you accept the mystical proposition that more training data “solves” the problems that currently make AI unsuitable for the high-value applications needed to justify the trillions in valuation analysts are touting, that training data is going to be ever-more elusive.

What’s more, while the proposition that “more training data will linearly improve the quality of AI predictions” is a mere article of faith, “training an AI on the output of another AI makes it exponentially worse” is a matter of fact.

Name your price for 18 of my DRM-free ebooks and support the Electronic Frontier Foundation with the Humble Cory Doctorow Bundle.

If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2024/03/14/14/inhuman-centipede#enshittibottification
