Conspiratorialism as a material phenomenon

What work does a conspiracy fantasy do?

Cory Doctorow
9 min read · Oct 29, 2024
Image: A man lying in a hospital bed, wearing a sinister mind-control helmet. His hands are clenched into fists and he is grimacing. Through a hole in the wall we see a prancing vaudevillian, whose head has been replaced with the head of Mark Zuckerberg’s Metaverse avatar. Behind this figure is the giant red eye of HAL 9000 from Stanley Kubrick’s ‘2001: A Space Odyssey.’ At the end of the bed stand a trio — Mom, Dad and daughter — in Sunday-best clothes, their backs to us, staring at the mind-controlled man.

I’ll be in Tucson, AZ from November 8–10: I’m the Guest of Honor at the TusCon science fiction convention.

I think it behooves us to be a little skeptical of stories about AI driving people to believe wrong things and commit ugly actions. Not that I like the AI slop that is filling up our social media, but when we look at the ways that AI is harming us, slop is pretty low on the list.

The real AI harms come from the actual things that AI companies sell AI to do. There are the AI gun-detector gadgets that the credulous Mayor Eric Adams put in NYC subways, which led to 2,749 invasive searches and turned up zero guns:

https://www.cbsnews.com/newyork/news/nycs-subway-weapons-detector-pilot-program-ends/

Any time AI is used to predict crime — predictive policing, bail determinations, Child Protective Services red flags — it magnifies the biases already present in these systems and, even worse, gives that bias the veneer of scientific neutrality. This process is called “empiricism-washing,” and you know you’re experiencing it when you hear some variation on “it’s just math, math can’t be racist”:

https://pluralistic.net/2020/06/23/cryptocidal-maniacs/#phrenology

When AI is used to replace customer service representatives, it systematically defrauds customers, while providing an “accountability sink” that allows the company to disclaim responsibility for the thefts:

https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs

When AI is used to perform high-velocity “decision support” that is supposed to inform a “human in the loop,” it quickly overwhelms its human overseer, who takes on the role of “moral crumple zone,” pressing the “OK” button as fast as they can. This is bad enough when the sacrificial victim is a human overseer of, say, proctoring software that accuses remote students of cheating on their tests:

https://pluralistic.net/2022/02/16/unauthorized-paper/#cheating-anticheat

But it’s potentially lethal when the AI is a transcription engine that doctors have to use to feed notes to a data-hungry electronic health record system that is optimized to commit health insurance fraud by seeking out pretenses to “upcode” a patient’s treatment. Those AIs are prone to inventing things the doctor never said and inserting them into the record that the doctor is supposed to review. But remember: the only reason the AI is there at all is that the doctor is being asked to do so much paperwork that they don’t have time to treat their patients:

https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14

My point is that “worrying about AI” is a zero-sum game. When we train our fire on the stuff that isn’t important to the AI stock swindlers’ business plans (like creating AI slop), we should remember that the AI companies could halt all of that activity and not lose a dime in revenue. By contrast, when we focus on AI applications that do the most direct harm — policing, health, security, customer service — we also focus on the AI applications that make the most money and drive the most investment.

AI hasn’t attracted hundreds of billions in investment capital because investors love AI slop. All the money pouring into the system — from investors, from customers, from easily gulled big-city mayors — is chasing things that AI is objectively very bad at and those things also cause much more harm than AI slop. If you want to be a good AI critic, you should devote the majority of your focus to these applications. Sure, they’re not as visually arresting, but discrediting them is financially arresting, and that’s what really matters.

All that said: AI slop is real, there is a lot of it, and even though it doesn’t warrant priority over the stuff AI companies actually sell, it still has cultural significance and is worth considering.

AI slop has turned Facebook into an anaerobic lagoon of botshit, just the laziest, grossest engagement bait, much of it the product of rise-and-grind spammers who avidly consume get-rich-quick “courses” and then churn out a torrent of “shrimp Jesus” and fake chainsaw sculptures:

https://www.404media.co/email/1cdf7620-2e2f-4450-9cd9-e041f4f0c27f/

For poor engagement farmers in the global south chasing the fractional pennies that Facebook shells out for successful clickbait, the actual content of the slop is beside the point. These spammers aren’t necessarily tuned into the psyche of the wealthy-world Facebook users who represent Meta’s top monetization subjects. They’re just trying everything and doubling down on anything that moves the needle, A/B splitting their way into weird, hyper-optimized, grotesque crap:

https://www.404media.co/facebook-is-being-overrun-with-stolen-ai-generated-images-that-people-think-are-real/

In other words, Facebook’s AI spammers are laying out a banquet of arbitrary possibilities, like the letters on a Ouija board, and the Facebook users’ clicks and engagement are a collective ideomotor response, moving the algorithm’s planchette to the options that tug hardest at our collective delights (or, more often, disgusts).

So, rather than thinking of AI spammers as creating the ideological and aesthetic trends that drive millions of confused Facebook users into condemning, praising, and arguing about surreal botshit, it’s more true to say that spammers are discovering these trends within their subjects’ collective yearnings and terrors, and then refining them by exploring endlessly ramified variations in search of unsuspected niches.

(If you know anything about AI, this may remind you of something: a Generative Adversarial Network, in which one bot creates variations on a theme, and another bot ranks how closely the variations approach some ideal. In this case, the spammers are the generators and the Facebook users whose reactions they elicit are the discriminators.)

https://en.wikipedia.org/wiki/Generative_adversarial_network
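The analogy can be sketched in a few lines of code. This is purely illustrative: the themes, the scoring function and the selection rule below are all invented for the example (on a real platform the “score” is clicks and outraged comments, not a random number), but the loop has the same shape — generate variations, let the audience rank them, double down on the winners.

```python
import random

# Hypothetical seed themes, standing in for whatever the spammers start with.
THEMES = ["shrimp Jesus", "chainsaw sculpture", "egged car", "crying veteran"]

def generate_variants(theme, n=5):
    """The 'generator': churn out arbitrary variations on a theme."""
    return [f"{theme} #{i}" for i in range(n)]

def audience_score(variant):
    """The 'discriminator': engagement stands in for the ranking signal.
    Random here; on a real platform it's clicks and angry comments."""
    return random.random()

def optimize(rounds=3):
    """A/B-split toward whatever 'moves the needle', discarding the rest."""
    pool = list(THEMES)
    for _ in range(rounds):
        variants = [v for t in pool for v in generate_variants(t)]
        ranked = sorted(variants, key=audience_score, reverse=True)
        # Keep only the themes behind the top performers.
        pool = [v.split(" #")[0] for v in ranked[:2]]
    return pool

print(optimize())
```

Run repeatedly, the surviving themes drift toward whatever the audience engages with hardest — which is the sense in which the spammers discover, rather than invent, the trends.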

I got to thinking about this today while reading User Mag, Taylor Lorenz’s superb newsletter, and her reporting on a new AI slop trend, “My neighbor’s ridiculous reason for egging my car”:

https://www.usermag.co/p/my-neighbors-ridiculous-reason-for

The “egging my car” slop consists of endless variations on a story in which the poster (generally a figure of sympathy, canonically a single mother of newborn twins) complains that her awful neighbor threw dozens of eggs at her car to punish her for parking in a way that blocked his elaborate Hallowe’en display. The text is accompanied by an AI-generated image showing a modest family car that has been absolutely plastered with broken eggs, dozens upon dozens of them.

According to Lorenz, variations on this slop are topping very large Facebook discussion forums totalling millions of users, like “Movie Character…,” “USA Story,” “Volleyball Women,” “Top Trends,” “Love Style,” and “God Bless.” These posts link to SEO sites laden with programmatic advertising.

The funnel goes:

i. Create outrage and hence broad reach;

ii. A small percentage of those who see the post will click through to the SEO site;

iii. A small fraction of those users will click a low-quality ad;

iv. The ad will pay homeopathic sub-pennies to the spammer.

The revenue per user on this kind of scam is next to nothing, so it only works if it can get very broad reach, which is why the spam is engineered for engagement maximization. The more discussion a post generates, the more users Facebook recommends it to.
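To see why reach is everything here, plug some hypothetical numbers into that funnel. Every rate below is made up for illustration — the spammers’ real conversion rates aren’t public — but the shape of the arithmetic is the point:

```python
# Back-of-the-envelope funnel math with invented, order-of-magnitude rates.
impressions = 1_000_000      # people who see the outrage-bait post
click_through = 0.005        # fraction who click through to the SEO site
ad_click = 0.01              # fraction of those who click a low-quality ad
revenue_per_ad_click = 0.02  # dollars per ad click ("homeopathic sub-pennies")

ad_clicks = impressions * click_through * ad_click
revenue = ad_clicks * revenue_per_ad_click

print(f"{ad_clicks:.0f} ad clicks -> ${revenue:.2f}")
```

With these assumed rates, a million impressions yields about fifty ad clicks and roughly a dollar of revenue — which is why the scam only pays at enormous scale.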

This is very effective engagement bait. Almost all AI slop gets some free engagement in the form of arguments between users who don’t know they’re commenting on an AI scam and people hectoring them for falling for it. This is like the free square in the middle of a bingo card.

Beyond that, there’s multivalent outrage: some users are furious about food wastage; others about the poor, victimized “mother” (some users are furious about both). Not only do users get to voice their fury at both of these imaginary sins, they can also argue with one another about whether, say, food wastage even matters when compared to the petty-minded aggression of the “perpetrator.” These discussions also offer lots of opportunity for violent fantasies about the bad guy getting a comeuppance, offers to travel to the imaginary AI-generated suburb to dole out a beating, etc. All in all, the spammers behind this tedious fiction have really figured out how to rope in all kinds of users’ attention.

Of course, the spammers don’t get much from this. There isn’t such a thing as an “attention economy.” You can’t use attention as a unit of account, a medium of exchange or a store of value. Attention — like everything else that you can’t build an economy upon, such as cryptocurrency — must be converted to money before it has economic significance. Hence that tooth-achingly trite high-tech neologism, “monetization.”

The monetization of attention is very poor, but AI is heavily subsidized or even free (for now), so the largest venture capital and private equity funds in the world are pouring billions in public pension money and rich people’s savings into CO2 plumes, GPUs, and botshit so that a bunch of hustle-culture weirdos in the Pacific Rim can make a few dollars by tricking people into clicking through engagement-bait slop — twice.

The slop isn’t the point of this, but the slop does have the useful function of making the collective ideomotor response visible and thus providing a peek into our hopes and fears. What does the “egging my car” slop say about the things that we’re thinking about?

Lorenz cites Jamie Cohen, a media scholar at CUNY Queens, who points out that the subtext of this slop is “fear and distrust in people about their neighbors.” Cohen predicts that “the next trend, is going to be stranger and more violent.”

This feels right to me. The corollary of mistrusting your neighbors, of course, is trusting only yourself and your family. Or, as Margaret Thatcher liked to say, “There is no such thing as society. There are individual men and women and there are families.”

We are living in the tail end of a 40-year experiment in structuring our world as though “there is no such thing as society.” We’ve gutted our welfare net, shut down or privatized public services, and all but abolished solidaristic institutions like unions.

This isn’t mere aesthetics: an atomized society is far more hospitable to extreme wealth inequality than one in which we are all in it together. When your power comes from being a “wise consumer” who “votes with your wallet,” then all you can do about the climate emergency is buy a different kind of car — you can’t build the public transit system that will make cars obsolete.

When you “vote with your wallet” all you can do about animal cruelty and habitat loss is eat less meat. When you “vote with your wallet” all you can do about high drug prices is “shop around for a bargain.” When you vote with your wallet, all you can do when your bank forecloses on your home is “choose your next lender more carefully.”

Most importantly, when you vote with your wallet, you cast a ballot in an election that the people with the thickest wallets always win. No wonder those people have spent so long teaching us that we can’t trust our neighbors, that there is no such thing as society, that we can’t have nice things. That there is no alternative.

The commercial surveillance industry really wants you to believe that they’re good at convincing people of things, because that’s a good way to sell advertising. But claims of mind-control are pretty goddamned improbable — everyone who ever claimed to have managed the trick was lying, from Rasputin to MK-ULTRA:

https://pluralistic.net/HowToDestroySurveillanceCapitalism

Rather than seeing these platforms as convincing people of things, we should understand them as discovering and reinforcing the ideology that people have been driven to by material conditions. Platforms like Facebook show us to one another, let us form groups that can imperfectly fill in for the solidarity we’re desperate for after 40 years of “no such thing as society.”

The most interesting thing about “egging my car” slop is that it reveals that so many of us are convinced of two contradictory things: first, that everyone else is a monster who will turn on you for the pettiest of reasons; and second, that we’re all the kind of people who would stick up for the victims of those monsters.

Tor Books just published two new, free “Little Brother” stories: “Vigilant,” about creepy surveillance in distance education; and “Spill,” about oil pipelines and indigenous landback.

If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2024/10/29/hobbesian-slop/#cui-bono

Writer, blogger, activist. Blog: https://pluralistic.net; Mailing list: https://pluralistic.net/plura-list; Mastodon: @pluralistic@mamot.fr
