My latest Locus Magazine column is “Unpersoned.” It’s about the implications of putting critical infrastructure into the private, unaccountable hands of tech giants:

https://locusmag.com/2024/07/cory-doctorow-unpersoned/

The column opens with the story of romance writer K Renee, as reported by Madeline Ashby for Wired:

https://www.wired.com/story/what-happens-when-a-romance-author-gets-locked-out-of-google-docs/

Renee is a prolific writer who used Google Docs to compose her books, and share them among early readers for feedback and revisions. Last March, Renee’s Google account was locked, and she was no longer able to access ten manuscripts for her unfinished books, totaling over 220,000 words. Google’s famously opaque customer service — a mix of indifferently monitored forums, AI chatbots, and buck-passing subcontractors — would not explain to her what rule she had violated, merely that her work had been deemed “inappropriate.”

Renee discovered that she wasn’t being singled out. Many of her peers had also seen their accounts frozen and their documents locked, and none of them were able to get an explanation out of Google. Renee and the other victims of these Google lockouts were reduced to developing folk-theories about what they had done to be expelled from Google’s walled garden; Renee came to believe that she had tripped an anti-spam system by inviting her community of early readers to access the books she was working on.

There’s a normal way that these stories resolve themselves: a reporter like Ashby, writing for a widely read publication like Wired, contacts the company and triggers a review by one of the vanishingly small number of people with the authority to undo the determinations of the Kafka-as-a-service systems that underpin the big platforms. The system’s victim gets their data back and the company mouths a few empty phrases about how they take something-or-other “very seriously” and so forth.

But in this case, Google broke the script. When Ashby contacted Google about Renee’s situation, Google spokesperson Jenny Thomson insisted that the policies for Google accounts were “clear”: “we may review and take action on any content that violates our policies.” If Renee believed that she’d been wrongly flagged, she could “request an appeal.”

But Renee didn’t even know what policy she was meant to have broken, and the “appeals” went nowhere.

This is an underappreciated aspect of “software as a service” and “the cloud.” As companies from Microsoft to Adobe to Google withdraw the option to use software that runs on your own computer to create files that live on that computer, control over our own lives is quietly slipping away. Sure, it’s great to have all your legal documents scanned, encrypted and hosted on GDrive, where they can’t be burned up in a house-fire. But if a Google subcontractor decides you’ve broken some unwritten rule, you can lose access to those docs forever, without appeal or recourse.

That’s what happened to “Mark,” a San Francisco tech worker whose toddler developed a UTI during the early covid lockdowns. The pediatrician’s office told Mark to take a picture of his son’s infected penis and transmit it to the practice using a secure medical app. However, Mark’s phone was also set up to sync all his pictures to Google Photos (this is a default setting), and when the picture of Mark’s son’s penis hit Google’s cloud, it was automatically scanned and flagged as Child Sex Abuse Material (CSAM, better known as “child porn”):

https://pluralistic.net/2022/08/22/allopathic-risk/#snitches-get-stitches

Without contacting Mark, Google sent a copy of all of his data — searches, emails, photos, cloud files, location history and more — to the SFPD, and then terminated his account. Mark lost his phone number (he was a Google Fi customer), his email archives, all the household and professional files he kept on GDrive, his stored passwords, his two-factor authentication via Google Authenticator, and every photo he’d ever taken of his young son.

The SFPD concluded that Mark hadn’t done anything wrong, but it was too late. Google had permanently deleted all of Mark’s data. The SFPD had to mail a physical letter to Mark telling him he wasn’t in trouble, because he had no email and no phone.

Mark’s not the only person this happened to. Writing about Mark for the New York Times, Kashmir Hill described other parents, like a Houston father identified as “Cassio,” who also lost their accounts and found themselves blocked from fundamental participation in modern life:

https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html

Note that in none of these cases did the problem arise because Google services are advertising-supported, per the old saw that if you’re not paying for the product, you’re the product. Buying an $800 Pixel phone or paying more than $100/year for a Google Drive account means that you’re definitely paying for the product, and you’re still the product.

What do we do about this? One answer would be to force the platforms to provide service to users who, in their judgment, might be engaged in fraud, or trafficking in CSAM, or arranging terrorist attacks. This is not my preferred solution, for reasons that I hope are obvious!

We can try to improve the decision-making processes at these giant platforms so that they catch fewer dolphins in their tuna-nets. The “first wave” of content moderation reform focused on establishing oversight and review boards to which wronged users could appeal their cases. The idea was to generate “paradigm cases” that would clarify the tricky aspects of content moderation decisions, like whether uploading a Nazi atrocity video in order to criticize it violated a rule against showing gore, Nazi paraphernalia, etc.

This hasn’t worked very well. A proposal for “second wave” moderation oversight based on arms-length semi-employees at the platforms who gather and report statistics on moderation calls and complaints hasn’t gelled either:

https://pluralistic.net/2022/03/12/move-slow-and-fix-things/#second-wave

Both the EU and California have privacy rules that allow users to demand their data back from platforms, but neither has proven very useful (yet) in situations where users have their accounts terminated because they are accused of committing gross violations of platform policy. You can see why this would be: if someone is accused of trafficking in child porn or running a pig-butchering scam, it would be perverse to shut down their account but give them all the data they need to go on committing these crimes elsewhere.

But even where you can invoke the EU’s GDPR or California’s CCPA to get your data, the platforms deliver that data in the most useless, complex blobs imaginable. For example, I recently used the CCPA to force Mailchimp to give me all the data they held on me. Mailchimp — a division of the monopolist and serial fraudster Intuit — is a favored platform for spammers, and I have been added to thousands of Mailchimp lists that bombard me with unsolicited press pitches and come-ons for scam products.

Mailchimp has spent a decade ignoring calls to allow users to see what mailing lists they’ve been added to, as a prelude to mass unsubscribing from those lists (for Mailchimp, the fact that spammers can pay it to send spam that users can’t easily opt out of is a feature, not a bug). I thought that the CCPA might finally let me see the lists I’m on, but instead, Mailchimp sent me more than 5,900 files, scattered through which were the internal serial numbers of the lists my name had been added to — but without the names of those lists or any contact information for their owners. I can see that I’m on more than 1,000 mailing lists, but I can’t do anything about it.

Mailchimp shows how a rule requiring platforms to furnish data-dumps can be easily subverted, and its conduct goes a long way to explaining why a decade of EU policy requiring these dumps has failed to make a dent in the market power of the Big Tech platforms.

The EU has a new solution to this problem. With its 2024 Digital Markets Act, the EU is requiring platforms to furnish APIs — programmatic ways for rivals to connect to their services. With the DMA, we might finally get something parallel to the cellular industry’s “number portability” for other kinds of platforms.
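To make “furnish an API” concrete, here’s a minimal sketch, in Python, of what such a portability flow could look like. Every endpoint, field name and URL below is invented for illustration; the DMA mandates interoperability but doesn’t prescribe any particular interface:

```python
# Hypothetical sketch of a DMA-style portability flow. All endpoints,
# fields and URLs are invented for illustration; the DMA does not
# specify this interface.
import requests

OLD_PLATFORM = "https://api.old-platform.example"
NEW_PLATFORM = "https://api.new-platform.example"

def port_account(user_token: str) -> None:
    # 1. Ask the old platform for a one-time export grant, analogous
    #    to the code your old carrier gives you when you port a number.
    grant = requests.post(
        f"{OLD_PLATFORM}/portability/grants",
        headers={"Authorization": f"Bearer {user_token}"},
        timeout=30,
    ).json()["grant_id"]

    # 2. Hand that grant to the new platform, which pulls your data
    #    (contacts, files, message history) directly from the old one,
    #    with no giant opaque zip file in the middle.
    job = requests.post(
        f"{NEW_PLATFORM}/imports",
        json={"source": OLD_PLATFORM, "grant_id": grant},
        timeout=30,
    ).json()
    print(f"Import job {job['id']}: {job['status']}")
```

The point of the design is that the user authorizes a transfer once and the platforms talk to each other directly, which is exactly how number portability works between carriers.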

If you’ve ever changed cellular carriers, you know how smooth this can be. When you get sick of your carrier, you set up an account with a new one and get a one-time code. Then you call your old carrier, endure their pathetic begging not to switch, give them that code, and within a short time (sometimes only minutes), your phone is on the new carrier’s network, with your old phone number intact.

This is a much better answer than forcing platforms to provide service to users whom they judge to be criminals or otherwise undesirable, but the platforms hate it. They say they hate it because it makes them complicit in crimes (“if we have to let an accused fraudster transfer their address book to a rival service, we abet the fraud”), but it’s obvious that their objection is really about being forced to reduce the pain of switching to a rival.

There’s a superficial reasonableness to the platforms’ position, but only until you think about Mark, or K Renee, or the other people who’ve been “unpersoned” by the platforms with no explanation or appeal.

The platforms have rigged things so that you must have an account with them in order to function, but they also want to have the unilateral right to kick people off their systems. The combination of these demands represents more power than any company should have, and Big Tech has repeatedly demonstrated its unfitness to wield this kind of power.

This week, I lost an argument with my accountants about this. They provide me with my tax forms as links to a Microsoft Cloud file, and I need to have a Microsoft login in order to retrieve these files. This policy — and a prohibition on sending customer files as email attachments — came from their IT team, and it was in response to a requirement imposed by their insurer.

The problem here isn’t merely that I must now enter into a contractual arrangement with Microsoft in order to do my taxes. It isn’t just that Microsoft’s terms of service are ghastly. It’s not even that they could change those terms at any time, for example, to ingest my sensitive tax documents in order to train a large language model.

It’s that Microsoft — like Google, Apple, Facebook and the other giants — routinely disconnects users for reasons it refuses to explain, and offers no meaningful appeal. Microsoft tells its business customers, “force your clients to get a Microsoft account in order to maintain communications security” but also reserves the right to unilaterally ban those clients from having a Microsoft account.

There are examples of this all over. Google recently flipped a switch so that you can’t complete a Google Form without being logged into a Google account. Now, my ability to pursue all kinds of matters, both consequential and trivial, turns on Google’s good graces, which can change suddenly and arbitrarily. If I were like Mark, permanently banned from Google, I wouldn’t have been able to complete the Google Forms I filled out this week: one telling a conference organizer what size t-shirt I wear, another telling a friend that I could attend their wedding.

Now, perhaps some people really should be locked out of digital life. Maybe people who traffic in CSAM should be locked out of the cloud. But the entity that should make that determination is a court, not a Big Tech content moderator. It’s fine for a platform to decide it doesn’t want your business — but it shouldn’t be up to the platform to decide that no one should be able to provide you with service.

This is especially salient in light of the chaos caused by Crowdstrike’s catastrophic software update last week. Crowdstrike demonstrated what happens to users when a cloud provider accidentally terminates their account, but while we’re thinking about reducing the likelihood of such accidents, we should really be thinking about what happens when you get Crowdstruck on purpose.

The wholesale chaos that Windows users and their clients, employees, users and stakeholders underwent last week could have been pieced out retail. It could have come as a court order (either by a US court or a foreign court) to disconnect a user and/or brick their computer. It could have come as an insider attack, undertaken by a vengeful employee, or one who was on the take from criminals or a foreign government. The ability to give anyone in the world a Blue Screen of Death could be a feature and not a bug.

It’s not that companies are sadistic. When they mistreat us, it’s nothing personal. They’ve just calculated that it would cost them more to run a good process than our business is worth to them. If they know we can’t leave for a competitor, if they know we can’t sue them, if they know that a tech rival can’t give us a tool to get our data out of their silos, then the expected cost of mistreating us goes down. That makes it economically rational to seek out ever-more trivial sources of income that impose ever-more miserable conditions on us. When we can’t leave without paying a very steep price, there’s practically a fiduciary duty to find ways to upcharge, downgrade, scam, screw and enshittify us, right up to the point where we’re so pissed that we quit.

Google could pay competent decision-makers to review every complaint about an account disconnection, but the cost of employing that large, skilled workforce vastly exceeds their expected lifetime revenue from a user like Mark. The fact that this results in the ruination of Mark’s life isn’t Google’s problem — it’s Mark’s problem.

The cloud is many things, but most of all, it’s a trap. When software is delivered as a service, when your data and the programs you use to read and write it live on computers that you don’t control, your switching costs skyrocket. Think of Adobe, which no longer lets you buy programs at all, but instead insists that you run its software via the cloud. Adobe used the fact that you no longer own the tools you rely upon to cancel its Pantone color-matching license. One day, every Adobe customer in the world woke up to discover that the colors in their career-spanning file collections had all turned black, and would remain black until they paid an upcharge:

https://pluralistic.net/2022/10/28/fade-to-black/#trust-the-process

The cloud allows the companies whose products you rely on to alter the functioning and cost of those products unilaterally. Like mobile apps — which can’t be reverse-engineered and modified without risking legal liability — cloud apps are built for enshittification. They are designed to shift power away from users to software companies. An app is just a web-page wrapped in enough IP to make it a felony to add an ad-blocker to it. A cloud app is some JavaScript wrapped in enough terms of service clickthroughs to make it a felony to restore old features that the company now wants to upcharge you for.

Google’s defenestration of K Renee, Mark and Cassio may have been accidental, but Google’s capacity to defenestrate all of us, and the enormous cost we all bear if Google does so, has been carefully engineered into the system. Same goes for Apple, Microsoft, Adobe and anyone else who traps us in their silos. The lesson of the Crowdstrike catastrophe isn’t merely that our IT systems are brittle and riddled with single points of failure: it’s that these failure-points can be tripped deliberately, and that doing so could be in a company’s best interests, no matter how devastating it would be to you or me.

If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2024/07/22/degoogled/#kafka-as-a-service

--

Cory Doctorow

Writer, blogger, activist. Blog: https://pluralistic.net; Mailing list: https://pluralistic.net/plura-list; Mastodon: @pluralistic@mamot.fr
