AI can’t do your job
But an AI salesman can convince your boss to fire you and replace you with a chatbot that can’t do your job.
I’m on a 20+ city book tour for my new novel Picks and Shovels. Catch me in SAN DIEGO on Mar 24 at MYSTERIOUS GALAXY, and in CHICAGO with PETER SAGAL on Apr 2. More tour stops here.
AI can’t do your job, but an AI salesman (Elon Musk) can convince your boss (the USA) to fire you and replace you (a federal worker) with a chatbot that can’t do your job:
https://www.pcmag.com/news/amid-job-cuts-doge-accelerates-rollout-of-ai-tool-to-automate-government
If you pay attention to the hype, you’d think that all the action on “AI” (an incoherent grab-bag of only marginally related technologies) was in generating text and images. Man, is that ever wrong. The AI hype machine could put every commercial illustrator alive on the breadline and the savings wouldn’t pay the kombucha budget for the million-dollar-a-year techies who oversaw Dall-E’s training run. The commercial market for automated email summaries is likewise infinitesimal.
The fact that CEOs overestimate the size of this market is easy to understand, since “CEO” is the most laptop job of all laptop jobs. Having a chatbot summarize the boss’s email is the 2025 equivalent of the 2000s gag about the boss whose secretary printed out his email and put it in his in-tray so he could go over it with a red pen and then dictate his reply.
The smart AI money is long on “decision support,” whereby a statistical inference engine suggests to a human being what decision they should make. There are bots that are supposed to diagnose tumors, bots that are supposed to make neutral bail and parole decisions, bots that are supposed to evaluate student essays, resumes and loan applications.
The narrative around these bots is that they are there to help humans. In this story, the hospital buys a radiology bot that offers a second opinion to the human radiologist. If they disagree, the human radiologist takes another look. In this tale, AI is a way for hospitals to make fewer mistakes by spending more money. An AI-assisted radiologist is less productive (because they re-run some x-rays to resolve disagreements with the bot) but more accurate.
In automation theory jargon, this radiologist is a “centaur” — a human head grafted onto the tireless, ever-vigilant body of a robot.
Of course, no one who invests in an AI company expects this to happen. Instead, they want reverse-centaurs: a human who acts as an assistant to a robot. The real pitch to hospitals is, “Fire all but one of your radiologists and then put that poor bastard to work reviewing the judgments our robot makes at machine scale.”
No one seriously thinks that the reverse-centaur radiologist will be able to maintain perfect vigilance over long shifts spent supervising automated processes that rarely go wrong, but whose rare errors must be caught when they do:
https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle
The role of this “human in the loop” isn’t to prevent errors. That human is there to be blamed for errors:
https://pluralistic.net/2024/10/30/a-neck-in-a-noose/#is-also-a-human-in-the-loop
The human is there to be a “moral crumple zone”:
https://estsjournal.org/index.php/ests/article/view/260
The human is there to be an “accountability sink”:
https://profilebooks.com/work/the-unaccountability-machine/
But they’re not there to be radiologists.
This is bad enough when we’re talking about radiology, but it’s even worse in government contexts, where the bots are deciding who gets Medicare, who gets food stamps, who gets VA benefits, who gets a visa, who gets indicted, who gets bail, and who gets parole.
That’s because statistical inference is intrinsically conservative: an AI predicts the future by looking at its data about the past, and when that prediction is also an automated decision, fed to a Chaplinesque reverse-centaur trying to keep pace with a torrent of machine judgments, the prediction becomes a directive, and thus a self-fulfilling prophecy:
AIs want the future to be like the past, and AIs make the future like the past. If the training data is full of human bias, then the predictions will also be full of human bias, and then the outcomes will be full of human bias, and when those outcomes are coprophagically fed back into the training data, you get new, highly concentrated human/machine bias:
https://pluralistic.net/2024/03/14/inhuman-centipede/#enshittibottification
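To make that feedback loop concrete, here’s a toy Python sketch of my own (not anything taken from a real deployment): a “model” that estimates each group’s historical approval rate, turns the estimate into an automated yes/no decision, and then feeds its own decisions back into the training data. A small starting gap between the two groups hardens into a near-absolute rule within a few generations.

import random
random.seed(1)

# Historical decisions: group "a" approved 55% of the time, group "b" 45%,
# for applicants who are in fact equally qualified.
history = [("a", random.random() < 0.55) for _ in range(1000)] + \
          [("b", random.random() < 0.45) for _ in range(1000)]

def rate(records, group):
    labels = [ok for g, ok in records if g == group]
    return sum(labels) / len(labels)

for gen in range(6):
    print(f"gen {gen}: approval a={rate(history, 'a'):.2f}  b={rate(history, 'b'):.2f}")
    # The "model" predicts each group's approval rate from history and turns
    # the prediction into a decision: approve if the predicted rate is over 0.5.
    decisions = [(g, rate(history, g) > 0.5) for g in ("a", "b") for _ in range(500)]
    # The decisions go back into the training data: the prediction has become
    # a directive, and the directive becomes the next generation's "truth".
    history += decisions

Run it and the 55/45 split drifts toward 100/0 — not because anyone changed the applicants, but because the machine keeps eating its own output.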
By firing skilled human workers and replacing them with spicy autocomplete, Musk is assuming his final form as both the kind of boss who can be conned into replacing you with a defective chatbot and the fast-talking sales rep who cons your boss. Musk is transforming key government functions into high-speed error-generating machines whose human minders are only on the payroll to take the fall for the coming tsunami of robot fuckups.
This is the equivalent of filling the American government’s walls with asbestos, turning agencies into hazmat zones that we can’t touch without causing thousands to sicken and die:
https://pluralistic.net/2021/08/19/failure-cascades/#dirty-data
If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2025/03/18/asbestos-in-the-walls/#government-by-spicy-autocomplete
Image:
Krd (modified)
https://commons.wikimedia.org/wiki/File:DASA_01.jpg
CC BY-SA 3.0
https://creativecommons.org/licenses/by-sa/3.0/deed.en
—
Image:
Cryteria (modified)
https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0
https://creativecommons.org/licenses/by/3.0/deed.en