Even if you think AI search could be good, it won’t be good

It’s just too goddamned easy to cheat.

Cory Doctorow
7 min read · May 15, 2024
[Image: A cane-waving carny barker in a loud checked suit and straw boater. His mouth has been replaced with the staring red eye of HAL 9000 from Kubrick’s ‘2001: A Space Odyssey.’ He stands on a backdrop composed of many knobs, switches and jacks. The knobs have all been replaced with HAL’s eye, too. Above his head hovers a search-box and two buttons reading ‘Google Search’ and ‘I’m feeling lucky.’ The countertop he leans on has been replaced with a code waterfall effect as seen in the credit sequences of ‘The Matrix.’]

TODAY (May 15), I’m in North Hollywood for a screening of Stephanie Kelton’s Finding the Money; on Friday (May 17) I’m in San Francisco at the Internet Archive to keynote the tenth anniversary of the Authors Alliance.

The big news in search this week is that Google is continuing its transition to “AI search” — instead of typing in search terms and getting links to websites, you’ll ask Google a question and an AI will compose an answer based on things it finds on the web:

https://blog.google/products/search/generative-ai-google-search-may-2024/

Google bills this as “let Google do the googling for you.” Rather than searching the web yourself, you’ll delegate this task to Google. Hidden in this pitch is a tacit admission that Google is no longer a convenient or reliable way to retrieve information, drowning as it is in AI-generated spam, poorly labeled ads, and SEO garbage:

https://pluralistic.net/2024/05/03/keyword-swarming/#site-reputation-abuse

Googling used to be easy: type in a query, get back a screen of highly relevant results. Today, clicking the top links will take you to sites that paid for placement at the top of the screen (rather than the sites that best match your query). Clicking further down will get you scams, AI slop, or bulk-produced SEO nonsense.

AI-powered search promises to fix this, not by making Google’s search results better, but by having a bot sort through those results, discard the nonsense that Google will continue to serve up, and summarize the high-quality remainder.

Now, there are plenty of obvious objections to this plan. For starters, why wouldn’t Google just make its search results better? Rather than building a LLM for the sole purpose of sorting through the garbage Google is either paid or tricked into serving up, why not just stop serving up garbage? We know that’s possible, because other search engines serve really good results by paying for access to Google’s back-end and then filtering the results:

https://pluralistic.net/2024/04/04/teach-me-how-to-shruggie/#kagi

Another obvious objection: why would anyone write the web if the only purpose for doing so is to feed a bot that will summarize what you’ve written without sending anyone to your webpage? Whether you’re a commercial publisher hoping to make money from advertising or subscriptions, or — like me — an open access publisher hoping to change people’s minds, why would you invite Google to summarize your work without ever showing it to internet users? Never mind how unfair that is, think about how implausible it is: if this is the way Google will work in the future, why wouldn’t every publisher just block Google’s crawler?

A third obvious objection: AI is bad. Not morally bad (though maybe morally bad, too!), but technically bad. It “hallucinates” nonsense answers, including dangerous nonsense. It’s a supremely confident liar that can get you killed:

https://www.theguardian.com/technology/2023/sep/01/mushroom-pickers-urged-to-avoid-foraging-books-on-amazon-that-appear-to-be-written-by-ai

The promises of AI are grossly oversold, including the promises Google makes, like its claim that its AI had discovered millions of useful new materials. In reality, the number of useful new materials DeepMind had discovered was zero:

https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs

This is true of all of AI’s most impressive demos. Often, “AI” turns out to be low-waged human workers in a distant call-center pretending to be robots:

https://pluralistic.net/2024/01/31/neural-interface-beta-tester/#tailfins

Sometimes, the AI robot dancing on stage turns out to literally be just a person in a robot suit pretending to be a robot:

https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain

The AI video demos that represent “an existential threat to Hollywood filmmaking” turn out to be so cumbersome as to be practically useless (and vastly inferior to existing production techniques):

https://www.wheresyoured.at/expectations-versus-reality/

But let’s take Google at its word. Let’s stipulate that:

a) It can’t fix search, only add a slop-filtering AI layer on top of it; and

b) The rest of the world will continue to let Google index its pages even if they derive no benefit from doing so; and

c) Google will shortly fix its AI, and all the lies about AI capabilities will be revealed to be premature truths that are finally realized.

AI search is still a bad idea. Because beyond all the obvious reasons that AI search is a terrible idea, there’s a subtle — and incurable — defect in this plan: AI search — even excellent AI search — makes it far too easy for Google to cheat us, and Google can’t stop cheating us.

Remember: enshittification isn’t the result of worse people running tech companies today than in the years when tech services were good and useful. Rather, enshittification is rooted in the collapse of constraints that used to prevent those same people from making their services worse in service to increasing their profit margins:

https://pluralistic.net/2024/03/26/glitchbread/#electronic-shelf-tags

These companies always had the capacity to siphon value away from business customers (like publishers) and end-users (like searchers). That comes with the territory: digital businesses can alter their “business logic” from instant to instant, and for each user, allowing them to change payouts, prices and ranking. I call this “twiddling”: turning the knobs on the system’s back-end to make sure the house always wins:

https://pluralistic.net/2023/02/19/twiddler/

What changed wasn’t the character of the leaders of these businesses, nor their capacity to cheat us. What changed was the consequences for cheating. When the tech companies merged to monopoly, they ceased to fear losing your business to a competitor.

Google’s 90% search market share was attained by bribing everyone who operates a service or platform where you might encounter a search box to connect that box to Google. Spending tens of billions of dollars every year to make sure no one ever encounters a non-Google search is a cheaper way to retain your business than making sure Google is the very best search engine:

https://pluralistic.net/2024/02/21/im-feeling-unlucky/#not-up-to-the-task

Competition was once a threat to Google; for years, its mantra was “competition is a click away.” Today, competition is all but nonexistent.

Meanwhile, the surveillance business consolidated as well. Two companies dominate the commercial surveillance industry: Google and Meta, and they collude to rig the market:

https://en.wikipedia.org/wiki/Jedi_Blue

That consolidation inevitably leads to regulatory capture: shorn of competitive pressure, the companies that dominate the sector can converge on a single message to policymakers and use their monopoly profits to turn that message into policy:

https://pluralistic.net/2022/06/05/regulatory-capture/

This is why Google doesn’t have to worry about privacy laws. It has successfully prevented the passage of a US federal consumer privacy law. The last time the US passed a federal consumer privacy law was in 1988: a law that bans video store clerks from telling the newspapers which VHS cassettes you rented:

https://en.wikipedia.org/wiki/Video_Privacy_Protection_Act

In Europe, Google’s vast profits let it fly an Irish flag of convenience, taking advantage of Ireland’s tolerance for tax evasion and violations of European privacy law:

https://pluralistic.net/2023/05/15/finnegans-snooze/#dirty-old-town

Google doesn’t fear competition, it doesn’t fear regulation, and it also doesn’t fear rival technologies. Google and its fellow Big Tech cartel members have expanded IP law to let them prevent third parties from reverse-engineering, hacking, or scraping their services. Google doesn’t have to worry about ad-blocking, tracker blocking, or scrapers that filter out Google’s lucrative, low-quality results:

https://locusmag.com/2020/09/cory-doctorow-ip/

Google doesn’t fear competition, it doesn’t fear regulation, it doesn’t fear rival technology and it doesn’t fear its workers. Google’s workforce once enjoyed enormous sway over the company’s direction, thanks to their scarcity and market power. But Google has outgrown its dependence on its workers, and lays them off in vast numbers, even as it increases its profits and pisses away tens of billions on stock buybacks:

https://pluralistic.net/2023/11/25/moral-injury/#enshittification

Google is fearless. It doesn’t fear losing your business, or being punished by regulators, or being mired in guerrilla warfare with rival engineers. It certainly doesn’t fear its workers.

Making search worse is good for Google. Reducing search quality increases the number of queries each user must make to find their answers — and thus the number of ads each user sees:

https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan

If Google can make things worse for searchers without losing their business, it can make more money for itself. Without the discipline of markets, regulators, tech or workers, it has no impediment to transferring value from searchers and publishers to itself.

Which brings me back to AI search. When Google substitutes its own summaries for links to pages, it creates innumerable opportunities to charge publishers for preferential placement in those summaries.

This is true of any algorithmic feed: while such feeds are important — even vital — for making sense of huge amounts of information, they can also be used to play a high-speed shell-game that makes suckers out of the rest of us:

https://pluralistic.net/2024/05/11/for-you/#the-algorithm-tm

When you trust someone to summarize the truth for you, you become terribly vulnerable to their self-serving lies. In an ideal world, these intermediaries would be “fiduciaries,” with a solemn (and legally binding) duty to put your interests ahead of their own:

https://pluralistic.net/2024/05/07/treacherous-computing/#rewilding-the-internet

But Google is clear that its first duty is to its shareholders: not to publishers, not to searchers, not to “partners” or employees.

AI search makes cheating so easy, and Google cheats so much. Indeed, the defects in AI give Google a readymade excuse for any apparent self-dealing: “we didn’t tell you a lie because someone paid us to (for example, to recommend a product, or a hotel room, or a political point of view). Sure, they did pay us, but that was just an AI ‘hallucination.’”

The existence of well-known AI hallucinations creates a zone of plausible deniability for even more enshittification of Google search. As Madeleine Clare Elish writes, AI serves as a “moral crumple zone”:

https://estsjournal.org/index.php/ests/article/view/260

That’s why, even if you’re willing to believe that Google could make a great AI-based search, we can nevertheless be certain that they won’t.

If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2024/05/15/they-trust-me-dumb-fucks/#ai-search
