How to design a tech regulation
A modest proposal for a transatlantic self-equilibrating system.
It’s not your imagination: tech really is underregulated. There are plenty of avoidable harms that tech visits upon the world, and while some of those harms are mere negligence, others are self-serving: shareholder value on one side of the ledger, widespread public harm on the other.
Making good tech policy is hard, but not because “tech moves too fast for regulation to keep up with,” nor because “lawmakers are clueless about tech.” There are plenty of fast-moving areas that lawmakers manage to stay abreast of (think of the rapid, global adoption of masking and social-distancing rules in mid-2020). Likewise, we generally manage to make good policy in areas that require highly specific technical knowledge (that’s why it’s noteworthy and awful when, say, people sicken from badly treated tap water, even though water safety, toxicology and microbiology are highly technical areas outside the background of most elected officials).
That doesn’t mean that technical rigor is irrelevant to making good policy. Well-run “expert agencies” include skilled practitioners on their payrolls — think here of the large technical staff at the FTC, or the UK Competition and Markets Authority’s best-in-the-world Digital Markets Unit:
https://pluralistic.net/2022/12/13/kitbashed/#app-store-tax
The job of government experts isn’t just to research the correct answers. Even more important is experts’ role in evaluating conflicting claims from interested parties. When administrative agencies make new rules, they have to collect public comments and counter-comments. The best agencies also hold hearings, and the very best go on “listening tours” where they invite the broad public to weigh in (the FTC has done an awful lot of these during Lina Khan’s tenure, to its benefit, and it shows).
But when an industry dwindles to a handful of companies, the resulting cartel finds it easy to converge on a single talking point and to maintain strict message discipline. This means that the evidentiary record is starved for disconfirming evidence that would give the agencies contrasting perspectives and context for making good policy.
Tech industry shills have a favorite tactic: whenever there’s any proposal that would erode the industry’s profits, self-serving experts shout that the rule is technically impossible and deride the proposer as “clueless.”
This tactic works so well because the proposers sometimes are clueless. Take Europe’s on-again/off-again “chat control” proposal to mandate spyware on every digital device that will screen everything you upload for child sex abuse material (CSAM, better known as “child pornography”). This proposal is profoundly dangerous, as it will weaken end-to-end encryption, the key to all secure and private digital communication.
It’s also an impossible-to-administer mess that incorrectly assumes that killing working encryption in the two mobile app stores run by the mobile duopoly will actually prevent bad actors from accessing private tools.
When technologists correctly point out the lack of rigor and catastrophic spillover effects from this kind of crackpot proposal, lawmakers stick their fingers in their ears and shout “NERD HARDER!”
But this is only half the story. The other half is what happens when tech industry shills want to kill a good policy proposal: they reach for the exact same tactic. When lawmakers demand that tech companies respect our privacy rights (for example, by splitting social media or search off from commercial surveillance), the same people shout that this, too, is technologically impossible.
That’s a lie, though. Facebook started out as the anti-surveillance alternative to Myspace. We know it’s possible to operate Facebook without surveillance, because Facebook used to operate without surveillance:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3247362
Likewise, Brin and Page’s original PageRank paper, which described Google’s architecture, insisted that search was incompatible with surveillance advertising, and Google established itself as a non-spying search tool:
http://infolab.stanford.edu/pub/papers/google.pdf
Even weirder is what happens when someone proposes limiting a tech company’s ability to invoke the government’s power to shut down competitors. Take Ethan Zuckerman’s lawsuit to strip Facebook of the legal power to sue people who automate their browsers to uncheck the millions of boxes that Facebook requires you to click by hand in order to unfollow everyone:
https://pluralistic.net/2024/05/02/kaiju-v-kaiju/#cda-230-c-2-b
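To make the mechanics concrete, here’s a minimal sketch of what a bulk-unfollow tool of this kind could look like, written with Playwright. The settings URL and the CSS selectors are hypothetical placeholders, not Facebook’s real markup, which is obfuscated and changes constantly:

```python
# A minimal, hypothetical sketch of a bulk-unfollow tool in the spirit of
# Zuckerman's browser automation, using Playwright. The URL and selectors
# below are invented; Facebook's real markup is obfuscated and unstable.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # Reuse a logged-in profile: the script acts as the user, inside the
    # user's own browser session.
    ctx = p.chromium.launch_persistent_context("./profile", headless=False)
    page = ctx.pages[0] if ctx.pages else ctx.new_page()
    page.goto("https://www.facebook.com/settings/following")  # hypothetical path

    unfollowed = 0
    while True:
        toggles = page.locator("button:has-text('Unfollow')")  # hypothetical selector
        if toggles.count() == 0:
            page.mouse.wheel(0, 5000)        # scroll to load the next batch
            page.wait_for_timeout(1000)
            if toggles.count() == 0:         # nothing new loaded; we're done
                break
            continue
        toggles.first.click()                # one click the user would otherwise make by hand
        unfollowed += 1

    print(f"Unfollowed {unfollowed} accounts")
    ctx.close()
```

The point isn’t this particular script; it’s that the script does nothing a patient user couldn’t do by hand, just faster.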
Facebook’s apologists have lost their minds over this lawsuit, insisting that no one can possibly understand the potential harms of taking away Facebook’s legal right to decide how your browser works. They take the position that only Facebook can understand when it’s safe and proportional to use Facebook in ways the company didn’t explicitly design for, and that they should be able to ask the government to fine or even imprison people who fail to defer to Facebook’s decisions about how its users configure their computers.
This is an incredibly convenient position, since it arrogates to Facebook the right to order the rest of us to use our computers in the ways that are most beneficial to its shareholders. But Facebook’s apologists insist that they are not motivated by parochial concerns over the value of their stock portfolios; rather, they have objective, technical concerns that no one except them is qualified to understand or comment on.
There’s a great name for this: “scalesplaining.” As in “well, actually the platforms are doing an amazing job, but you can’t possibly understand that because you don’t work for them.” It’s weird enough when scalesplaining is used to condemn sensible regulation of the platforms; it’s even weirder when it’s weaponized to defend a system of regulatory protection for the platforms against would-be competitors.
Just as there are no atheists in foxholes, there are no libertarians in government-protected monopolies. Somehow, scalesplaining can be used to condemn governments as incapable of making any tech regulations and to insist that regulations that protect tech monopolies are just perfect and shouldn’t ever be weakened. Truly, it’s impossible to get someone to understand something when the value of their employee stock options depends on them not understanding it.
None of this is to say that every tech regulation is a good one. Governments often propose bad tech regulations (like chat control), or ones that are technologically impossible (like Article 17 of the EU’s 2019 Digital Single Market Directive, which requires tech companies to detect and block copyright infringements in their users’ uploads).
But the fact that scalesplainers use the same argument to criticize both good and bad regulations makes the waters very muddy indeed. Policymakers are rightfully suspicious when they hear “that’s not technically possible” because they hear that both for technically impossible proposals and for proposals that scalesplainers just don’t like.
After decades of regulations aimed at making platforms behave better, we’re finally moving into a new era, where we just make the platforms less important. That is, rather than simply ordering Facebook to block harassment and other bad conduct by its users, laws like the EU’s Digital Markets Act will order Facebook and the other “gatekeeper” platforms (the DMA’s sibling, the Digital Services Act, gives us VLOPs, Very Large Online Platforms, my favorite EU-ism ever) to operate gateways so that users can move to rival services and still communicate with the people who stay behind.
Think of this like number portability, but for digital platforms. Just as you can switch phone companies, keep your number, and stay in touch with everyone you spoke to on your old plan, the DMA will make it possible for you to change online services but still exchange messages and data with all the people you’re already in touch with.
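The DMA mandates the outcome, not a wire protocol, so any concrete example is necessarily speculative. Still, a deliberately simplified sketch can show the shape of the thing; every URL, endpoint, field and credential below is invented for illustration:

```python
# A deliberately simplified sketch of the *shape* of a DMA-style interop
# gateway, as seen from a rival service. The DMA mandates interoperability,
# not a wire protocol: every URL, endpoint, field, and token here is
# invented for illustration.
import requests

GATEWAY = "https://gateway.incumbent.example/v1"            # hypothetical gateway
AUTH = {"Authorization": "Bearer <rival-service-credential>"}

def fetch_inbox(user_id: str) -> list[dict]:
    """Pull messages that people still on the old platform have sent to a
    user who migrated to our service."""
    resp = requests.get(f"{GATEWAY}/users/{user_id}/inbox",
                        headers=AUTH, timeout=10)
    resp.raise_for_status()
    return resp.json()["messages"]

def send_reply(user_id: str, recipient: str, body: str) -> None:
    """Push a reply back through the gateway so it shows up normally for
    the person who stayed behind."""
    resp = requests.post(f"{GATEWAY}/users/{user_id}/outbox",
                         headers=AUTH, timeout=10,
                         json={"to": recipient, "body": body})
    resp.raise_for_status()
```

The hard parts (identity, end-to-end encryption, spam and abuse prevention) live behind those two calls, but the core of the mandate is just this: keep forwarding the messages.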
I love this idea, because it finally grapples with the question we should have been asking all along: why do people stay on platforms where they face harassment and bullying? The answer is simple: because the people — customers, family members, communities — we connect with on the platform are so important to us that we’ll tolerate almost anything to avoid losing contact with them:
https://locusmag.com/2023/01/commentary-cory-doctorow-social-quitting/
Platforms deliberately rig the game so that we take each other hostage, locking each other into their badly moderated cesspits by using the love we have for one another as a weapon against us. Interoperability — making platforms connect to each other — shatters those locks and frees the hostages:
https://www.eff.org/deeplinks/2021/08/facebooks-secret-war-switching-costs
But there’s another reason to love interoperability (making moderation less important) over rules that require platforms to stamp out bad behavior (making moderation better). Interop rules are much easier to administer than content moderation rules, and when it comes to regulation, administratability is everything.
The DMA isn’t the EU’s only new rule. The EU has also passed the Digital Services Act, which is a decidedly mixed bag. Among its provisions is a suite of rules requiring companies to monitor their users for harmful behavior and to intervene to block it. Whether or not you think platforms should do this, there’s a much more important question: how can we enforce this rule?
Enforcing a rule requiring platforms to prevent harassment is very “fact-intensive.” First, we have to agree on a definition of “harassment.” Then we have to figure out whether something one user did to another satisfies that definition. Finally, we have to determine whether the platform took reasonable steps to detect and prevent the harassment.
Each step of this is a huge lift, especially that last one, since, to a first approximation, everyone who understands a given VLOP’s server infrastructure is a partisan, scalesplaining engineer on the VLOP’s payroll. By the time we find out whether the company broke the rule, years will have gone by, and millions more users will have been harmed while waiting in line for justice.
So guaranteeing that users can leave is a much more practical step than guaranteeing that they’ll never have a reason to want to. Figuring out whether a platform is still forwarding your messages to and from the people you left behind is a far simpler technical question than agreeing on what harassment is, deciding whether a given act meets that definition, and determining whether the company was negligent in permitting it.
But as much as I like the DMA’s interop rule, I think it is badly incomplete. Given that the tech industry is so concentrated, it’s going to be very hard for us to define standard interop interfaces that don’t end up advantaging the tech companies. Standards bodies are extremely easy for big industry players to capture:
https://pluralistic.net/2023/04/30/weak-institutions/
If a tech giant refuses certain rivals access to its gateways because they seem “suspicious,” it will be hard to tell whether the company is engaged in a self-serving smear against a credible competitor or legitimately trying to protect its users from a predator plugging into its infrastructure. These fact-intensive questions are the enemy of speedy, responsive, effective policy administration.
But there’s more than one way to attain interoperability. Interop doesn’t have to come from mandates, with interfaces designed and overseen by government agencies. There’s a whole other form of interop that’s far nimbler than mandates: adversarial interoperability:
https://www.eff.org/deeplinks/2019/10/adversarial-interoperability
“Adversarial interoperability” is a catch-all term for the guerrilla-warfare tactics deployed in service of unilaterally changing a technology: reverse engineering, bots, scraping and so on. These tactics have a long and honorable history, but they have been slowly choked out of existence by a thicket of IP rights, like the IP rights that allow Facebook to shut down browser automation tools, which Ethan Zuckerman is suing to nullify:
https://locusmag.com/2020/09/cory-doctorow-ip/
Adversarial interop is very flexible. No matter what technological moves a company makes to interfere with interop, there’s always a countermove the guerrilla fighter can make — tweak the scraper, decompile the new binary, change the bot’s behavior. That’s why tech companies use IP rights and courts, not firewall rules, to block adversarial interoperators.
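Here’s a toy illustration of that countermove dynamic, against a hypothetical target site: the scraper keeps a list of selectors that matched past versions of the markup, so when the platform shuffles its front-end, the countermove is one more entry in the list:

```python
# A toy scraper illustrating the countermove dynamic. The target site and
# all selectors are hypothetical: each entry matched some past version of
# the markup, and when the back-end changes, the fix is one more entry.
import requests
from bs4 import BeautifulSoup

POST_SELECTORS = [
    "div.post-body",                 # markup before the redesign
    "article[data-testid='post']",   # markup after the redesign
]

def scrape_posts(url: str) -> list[str]:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    for selector in POST_SELECTORS:
        posts = soup.select(selector)
        if posts:  # first selector that still matches wins
            return [p.get_text(strip=True) for p in posts]
    # Every known selector broke: the platform moved, so we tweak and redeploy.
    raise RuntimeError("Markup changed again; time for a new selector.")
```

Every name in that sketch is made up; what’s real is the workflow: the platform changes something, the interoperator notices, and a one-line patch ships the same day.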
At the same time, adversarial interop is unreliable. The solution that works today can break tomorrow if the company changes its back-end, and it will stay broken until the adversarial interoperator can respond.
But when companies face the prospect of an extended asymmetrical war against adversarial interop in the technological trenches, they often surrender. If a company can’t sue adversarial interoperators out of existence, it may sue for peace instead. High-tech guerrilla warfare presents unquantifiable risks and resource demands, and, as the scalesplainers never tire of telling us, this creates real operational problems for tech giants.
In other words, if Facebook can’t shut down Ethan Zuckerman’s browser automation tool in the courts, and if it’s sincerely worried that such a tool will uncheck its user-interface buttons so quickly that it crashes the server, all it has to do is offer an official “unsubscribe all” button, and no one will use Zuckerman’s tool.
We don’t have to choose between adversarial interop and interop mandates. The two are better together than they are apart. If the companies building and operating DMA-mandated gateways know that failing to make those gateways useful to rivals (the ones helping users escape their authority) means getting mired in endless hand-to-hand combat with trench-fighting adversarial interoperators, they’ll have good reason to cooperate.
And if the regulators charged with administering the DMA notice that would-be competitors are resorting to adversarial interop rather than using the official, reliable gateways they oversee, that’s a good indicator that those gateways aren’t suitable.
It would be very on-brand for the EU to create the DMA and tell tech companies how they must operate, and for the USA to simply withdraw the state’s protection from the Big Tech companies and let smaller companies try their luck at hacking new features into the big companies’ servers without the government getting involved.
Indeed, we’re seeing some of that today. Oregon just passed the first-ever Right to Repair law banning “parts pairing”: the practice of using software locks so a device rejects replacement parts the manufacturer hasn’t blessed, backstopped by IP law that makes it illegal to reverse-engineer your way around those locks.
https://www.opb.org/article/2024/03/28/oregon-governor-kotek-signs-strong-tech-right-to-repair-bill/
Taken together, the two approaches — mandates and reverse engineering — are stronger than either on their own. Mandates are sturdy and reliable, but slow-moving. Adversarial interop is flexible and nimble, but unreliable. Put ’em together and you get a two-part epoxy, strong and flexible.
Governments can regulate well, with well-funded expert agencies and smart, administratable remedies. It’s for that reason that the administrative state is under such sustained attack from the GOP and right-wing Dems. The illegitimate Supreme Court is on the verge of gutting expert agencies’ power.
It’s never been more important to craft regulations that go beyond mere good intentions and take account of administratability. The easier our rules are to enforce, the less our beleaguered agencies will need to do to protect us from corporate predators.
If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/06/20/scalesplaining/#administratability