This column is forthcoming in Locus Magazine.
Last summer, the pandemic was in its first wave and the nation was in chaos. A lack of federal leadership left each state to figure out how to interpret the science, and many states punted public health decisions to counties or cities or even smaller units, like universities.
Leaders, left to their own devices, often winged it, letting wishful thinking trump prudence in the drive to find ways to “reopen safely.” The novelty of the virus and the chaos of the response opened a space for all kinds of experts to weigh in on the best way to balance conflicting imperatives and evidence.
That’s how the University of Illinois at Urbana-Champaign came to be the epicenter of a massive outbreak. UI reopened with confident statements that a set of health measures — distancing, masking, testing, and an app — would keep the total number of cases on campus that semester below 500, with no more than 100 cases at any time.
Within weeks, there were 780 active cases on campus and the university sent everyone home again.
UI’s plan was based on a model produced by a pair of physicists who took a break from their discipline to work on epidemiology. When the plan was unveiled, the physicists made disparaging remarks about epidemiology to the press, saying that modeling human interactivity lacked the “intellectual thrill” of their usual fare.
How did the model go so very awry? How did UI immediately blow past the “worst case scenario” of 100 cases and rise to 780 cases?
Simple: the model did not account for the students attending drunken parties where they breathed on each other. A lot.
Anyone who studies public health knows the importance of qualitative factors. Even seemingly precise, quantitative figures, like the infamous R0 — describing the rate of spread of a pathogen — are heavily dependent on qualitative factors that you just can’t do math on. R0 doesn’t just depend on things like “How many virus particles must you inhale before you are likely to become infected?” It depends every bit as much on things like “Do people trust public health authorities enough to report their contacts after they are diagnosed with an infection?”
But mathematical models operate on quantitative elements. To do math on a qualitative measurement, you must first quantize it, assigning a numeric value to it. This is also a qualitative exercise, because “how much does this hurt?” or “how intense does this shade of blue appear to you?” or “how much do you trust the CDC?” are not questions with precise, deterministic answers.
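To make the arbitrariness concrete, here is a minimal sketch, with an invented survey question and invented scales, of what “quantizing” a qualitative answer looks like. The number you end up doing math on depends on an encoding that is itself a qualitative choice:

```python
# Hypothetical survey responses to "how much do you trust the CDC?"
# Both the responses and the scales below are invented for illustration.

responses = ["somewhat", "a lot", "not at all", "somewhat", "a lot"]

# Two equally defensible encodings of the same qualitative scale:
encoding_a = {"not at all": 0, "somewhat": 1, "a lot": 2}
encoding_b = {"not at all": 1, "somewhat": 3, "a lot": 4}

def mean_trust(encoding):
    """Average 'trust' score: a precise number built on an imprecise choice."""
    scores = [encoding[r] for r in responses]
    return sum(scores) / len(scores)

print(mean_trust(encoding_a))  # 1.2
print(mean_trust(encoding_b))  # 3.0
```

Same people, same answers, two different “objective” trust scores: the choice of encoding did as much work as the data.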
Quantitative disciplines — physics, math, and (especially) computer science — make a pretense of objectivity. They make very precise measurements of everything that can be measured precisely, assign deceptively precise measurements to things that can’t be measured precisely, and jettison the rest on the grounds that you can’t do mathematical operations on it.
This is the quant’s version of the drunkard’s search for car-keys under the lamp-post: we can’t add, subtract, multiply or divide qualitative elements, so we just incinerate them, sweep up the dubious quantitative residue that remains, and do math on that, and simply assert that nothing important was lost in the process.
This is one of the reasons that “contact tracing” apps were such a bust. When a public health worker does “contact tracing,” they call patients and the people who may have been exposed to them, establish a person-to-person rapport with those people, win their trust, and both question them about other contacts and give advice on how to get tested and avoid potential further spread.
By contrast, the “contact tracing” apps we were urged to install were purely quantitative. They measured whether two low-powered Bluetooth radios were within range of one another, and for how long. If your Bluetooth device was within range of a device that belonged to someone with a positive test, you would get a notification that you had been exposed.
“Exposure notification” is the residue that’s left behind when you put “contact tracing” in the quantitative incinerator. Shorn of context and connection, the numeric fact that your device was in contact with another device for a clinically significant duration does very little to contain the disease. It doesn’t distinguish between devices that sensed one another in adjacent, sealed automobiles in slow-moving traffic and devices that made contact while their owners were competing to set the all-time Fort Lauderdale record for the longest eyeball-licking session.
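The quantitative core of such an app is tiny. Here is a toy sketch in Python (the thresholds are invented for illustration, not drawn from any real app) showing why it can’t tell those two scenarios apart: both reduce to the same two numbers.

```python
# Toy model of an exposure-notification decision. The only inputs are
# Bluetooth signal strength and contact duration; both cutoffs are
# hypothetical, chosen for illustration.

EXPOSURE_RSSI_DBM = -65   # "close enough" cutoff (invented)
EXPOSURE_MINUTES = 15     # "long enough" cutoff (invented)

def is_exposure(rssi_dbm, minutes):
    """Flag an exposure purely from radio proximity and time."""
    return rssi_dbm >= EXPOSURE_RSSI_DBM and minutes >= EXPOSURE_MINUTES

# Sealed cars in slow traffic and an eyeball-licking contest produce
# indistinguishable inputs, so they produce the same output:
print(is_exposure(-60, 20))  # True (adjacent sealed cars)
print(is_exposure(-60, 20))  # True (eyeball-licking contest)
```

Everything that distinguishes the two cases was qualitative, and it was incinerated before the function ever ran.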
We knew how to make apps that notified people of exposure. We didn’t — and don’t — know how to make apps that trace contacts. So we made exposure notification apps and declared that exposure notification was equivalent to contact tracing.
I can’t look for my keys where I dropped them, fella, the light’s no good over there.
There’s nothing wrong with the urge to do math. The nondeterministic, chaotic nature of the universe sometimes serves up happy accidents, but it produces a lot of unpredictable, scary outcomes, too: cancer, tsunamis, meteor strikes, pandemics.
Forty years ago, a group of legal and economics scholars decided that even if determinism is hard to find in the natural world, we should at least strive for predictable outcomes in our human institutions. If a fundamental tenet of justice is that we’re all equal before the law, then shouldn’t the law render the same verdict whenever it is faced with the same facts?
The “Law and Economics” movement of the University of Chicago was a joint legal-economic project, and it set about removing those elements of the law that are “political” (that is, elements of the law that turn on fuzzy, qualitative questions) and replacing them with crisp, “economic” answers.
If the law seeks to produce the greatest public benefit, then “public benefit” must be a calculable number, not a social judgment. Otherwise, two large grocery chains seeking to merge may find themselves blocked under an antitrust ruling that declares the merger “against the public interest” while two very similar chains in a different court may find that their merger is approved because it will not violate the “public interest.”
The “econ” side of the Law and Econ movement also came out of the Chicago School, where they pioneered the use of dense, mathematically complex models. Conveniently, only supporters of the Law and Econ movement really understood these models well enough to construct or critique them.
For antitrust, that meant that monopolies were “provably,” “objectively” good for society and therefore permissible under antitrust law.
When two companies proposed a merger, they could pay University of Chicago-trained law-and-econ specialists to construct a model showing that the merger would not give rise to higher prices or other forms of social harm.
If the new, enlarged firm went on to raise prices, then it could pay the same experts to construct a new model that proved that the price-rise was not attributable to “market power” (that is, monopoly) but related instead to “exogenous” factors like rising wages or energy prices.
Everyone qualified to analyze these models for flaws was on the side of the law-and-econ bunch. Outsiders brought in to pick apart the models were met with contemptuous sneers from the models’ master builders.
And any outsider who had the temerity to insist that mathematical models couldn’t capture the harms of monopolies (for example, that an industry dominated by a small number of companies could lobby for favorable rules, bribing politicians to allow monopolists to maim or poison the voters with impunity) was dismissed as unrealistic and out of step for demanding that the law be sullied with qualitative, “political” considerations, rather than quantitative, “economic” ones.
Law-and-econ destroyed an antitrust world in which anyone was entitled to be a part of the outcome by describing how monopolies affected their lives. They replaced it with an antitrust world guarded by a priesthood that would answer all questions by constructing a model and pronouncing its conclusions, the modern equivalent of a priesthood that answers all important questions by slaughtering an ox and reading its entrails.
Forty years later, the failures of law-and-econ can no longer be ignored. A series of ever-worsening financial crises have demonstrated that economics is not a “science” and that its models make wildly incorrect predictions about how people will actually behave. Indeed, the most significant change to economics in a generation was the advent of “behavioral economics,” a field whose innovation was to actually check to see if people behaved in the ways that economic models predict (they don’t).
Add to that the outcomes of law-and-econ antitrust (mal)practice: every industry has been concentrated down to a handful of giant companies. These companies claim to be “efficient” but they are anything but: they rip us off, screw us over, poison and maim us, and you can’t even get anyone on the phone when it happens. Worse still, they’ve bribed politicians on both sides of the aisle and countries all over the world to let them get away with it. Are you going to fly in one of Southwest’s new Boeing 737 Max airplanes, certified 100% not-gonna-crash by the same agencies and internal compliance department that made that claim last time around?
I’m not sure I will.
If that wasn’t enough to shatter the law-and-econ mind-palace and its claims to neutral empiricism, there is the growing realization that even if the law could be made neutral, that would not make it just.
Take the question of price-fixing, the one monopolistic sin that law-and-econ is willing to punish. Companies that collude to raise prices break the law. Remember, law-and-econ’s version of antitrust concerns itself solely with “consumer welfare.” It’s self-evident that consumers are not better off when they pay higher prices.
In an “objective” world, we treat all industry price-fixing collusion the same. When the Big Six publishers got together to push Amazon to set the price of new-release ebooks at $10, they got slaughtered by the DoJ.
But price-fixing is only illegal if it’s collusive. Now that the Big Six are the Big Four, with Penguin-Random House-Simon & Schuster (which is actually Viking-Putnam-Berkeley-Avery-Ace-Avon-Grosset & Dunlap-Playboy Press-New American Library-Dutton-Jove-Dial-Warne-Ladybird-Pelican-Hamish Hamilton-Tarcher-Bantam-Doubleday-Dell-Knopf-Harold Shaw-Multnomah-Pocket-Esquire-Allyn & Bacon-Quercus-Fearon-Janus-Penguin-Random House-Simon & Schuster), they can set prices “internally” without raising antitrust issues.
When the CEO of Penguin and the CEO of Random House and the CEO of Simon & Schuster hatch a joint plan to raise ebook prices, it’s illegal.
But! When the President of Penguin (a division of Penguin-Random House-Simon & Schuster) and the President of Random House (a division of Penguin-Random House-Simon & Schuster) and the President of Simon & Schuster (a division of Penguin-Random House-Simon & Schuster) hatch a plan to raise ebook prices, that’s the “internal efficiencies” of a monopolist at work.
Even more absurd, consider what happens when the “companies” doing the “price-fixing” are actually employees who’ve been misclassified as “independent contractors,” like truckers, or Uber drivers, or gig-company delivery workers.
Each of these workers — typically earning far less than the minimum wage, without benefits or basic protections — is considered to be an independent company. If they form a collective to demand higher wages, they are “price fixing” and breaking antitrust law.
Meanwhile, if the gig work sector — highly concentrated in the hands of a few dominant firms like Uber and Lyft, which have gobbled up many of the other gig-work platforms — gets together to spend $200m to pass California’s Prop 22, which bars California lawmakers from forcing companies to treat these misclassified workers as employees, that’s not a violation. That’s just lobbying. In theory, all of those thousands of Uber drivers could have formed an “industry association,” and raised hundreds of millions to fight against Prop 22.
All that’s standing in the way of such a course of action is the fact that it’s impossible. It’s impossible for hundreds of thousands of desperate, abused, precarious workers to fight two giant, massively capitalized, multi-billion-dollar companies by lobbying for laws favorable to their “industry.” The way workers push back against their employers is by forming a union and going on strike. Prop 22 takes that off the table.
The absurdity of using antitrust law to threaten thousands of exploited drivers for demanding a living wage — but not to address a duopoly of giant, global companies for denying it to them — is the final nail in law-and-econ’s coffin.
It exposes law-and-econ’s “neutrality” as a sham. Treating all parties as equal before the law sounds good, but consider what it really means: treating a boss who sexually propositions his employee the same way you would treat a teenager who asked another teenager out on a date. Treating a “cartel” of individual Uber drivers the same way you treat a “cartel” of giant, global publishers.
Discarding the qualitative is a qualitative act. Not all incinerators are created equal: the way you produce your dubious quantitative residue is a choice, a decision, not an equation.
But math is good.
We say we want “evidence-based policy” — rules that try to produce the objectively best outcomes. Does the irreducible nature of qualitative factors in human institutions mean that we can’t ever have objectivity?
Consider the tale of David Nutt, an eminent neuropsychopharmacologist who served as the “drug czar” to the British government in 2008. Nutt was in charge of the Advisory Council on the Misuse of Drugs, the body that sets out the rules for which drugs should be made illegal and under what circumstances.
As part of a review of drug rules, Nutt convened an expert panel and asked them to classify an array of substances, ranking each one based on how harmful it was to its users, to their families, and to wider society. He used this quantitative data to group drugs into three categories:
1) drugs that would be considered very dangerous irrespective of how you ranked harms to society, family, and self;
2) drugs that would be considered not very dangerous irrespective of how you ranked harms to society, family, and self; and
3) drugs whose danger-rating would change substantially based on how you ranked harms to society, family and self.
Nutt then took his categories to the UK Parliament and asked them to tell him how they prioritized these different forms of harm: the question of whether we want to protect individuals, families or society is a political one, irreducibly qualitative, without an empirical solution.
However, once those subjective political priorities have been established, there is an empirical solution to how drugs should be classified in light of these priorities.
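A sketch of how that separation works in practice: the expert harm scores are fixed, and the political weights are the only thing that changes. The drug names and numbers below are invented for illustration; they are not Nutt’s actual data.

```python
# Expert panel's harm scores are fixed; the weights are a political choice.
# All names and scores here are hypothetical.

harms = {                     # (self, family, society), each scored 0-100
    "drug_x": (90, 85, 88),   # category 1: dangerous under any weighting
    "drug_y": (10, 8, 12),    # category 2: mild under any weighting
    "drug_z": (15, 20, 80),   # category 3: depends on the weighting
}

def danger(scores, weights):
    """Weighted sum of harm-to-self, harm-to-family, harm-to-society."""
    return sum(s * w for s, w in zip(scores, weights))

protect_individuals = (0.6, 0.2, 0.2)   # Parliament prioritizes users
protect_society = (0.2, 0.2, 0.6)       # Parliament prioritizes society

for name, scores in harms.items():
    print(name,
          round(danger(scores, protect_individuals), 1),
          round(danger(scores, protect_society), 1))
# drug_x and drug_y barely move between the two weightings;
# drug_z's danger score nearly doubles when the priority shifts
# from protecting individuals to protecting society.
```

Choosing the weights is politics; everything after that is arithmetic.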
Qualitative elements are important, but they’re not everything. And just because we can’t rid ourselves of the subjective, it doesn’t follow that we must abandon the objective.
David Nutt isn’t the UK drug czar anymore.
He got fired after he refused to retract a speech in which he stated that cannabis and other “recreational” drugs are less dangerous than legal drugs, particularly alcohol and tobacco.
I think it was the comparison with alcohol that really did it. After all, booze is one of the most concentrated industries in the world, with just a few companies producing nearly all the beer and spirits we drink.
What’s more, Nutt had made a particular enemy of the booze industry. It has long been understood that the UK alcohol industry’s profitability is entirely dependent on unsafe binge drinking. If everyone in Britain “enjoyed responsibly,” as the industry’s ads urge, the industry would no longer turn a profit.
In response, the UK drinks industry has created its own anti-binge-drinking education program, which is presented in schools and universities. It is wildly ineffective. The alcohol dealers claimed that this was proof that binge drinking arises naturally out of the recklessness of drinkers, and that there’s nothing the booze companies can do to reduce it.
So Nutt produced his own curriculum, and he conducted a trial, exposing like-for-like audiences to either his program or the industry’s program, and followed up to see whether he fared any better.
It will not surprise you to learn that he did much, much better. When young people were exposed to Nutt’s anti-binge-drinking curriculum, their binge-drinking fell off a cliff.
It will also not surprise you to learn that the industry’s ineffective (but profit-preserving) curriculum wasn’t replaced with Nutt’s effective (but profit-destroying) version.
We can’t disregard qualitative factors, sure. But there are empirical truths:
* Alcohol is more dangerous than cannabis
* The booze industry’s monopoly gives it the profitability it needs to lobby for policies that kill people by the millions
* Treating everyone the same doesn’t produce justice
Cory Doctorow (craphound.com) is a science fiction author, activist, and blogger. He has a podcast, a newsletter, a Twitter feed, a Mastodon feed, and a Tumblr feed. He was born in Canada, became a British citizen and now lives in Burbank, California. His latest nonfiction book is How to Destroy Surveillance Capitalism. His latest novel for adults is Attack Surface. His latest short story collection is Radicalized. His latest picture book is Poesy the Monster Slayer. His latest YA novel is Pirate Cinema. His latest graphic novel is In Real Life. His forthcoming books include The Shakedown (with Rebecca Giblin), a book about artistic labor markets and excessive buyer power; Red Team Blues, a noir thriller about cryptocurrency, corruption and money-laundering; and The Lost Cause, a utopian post-GND novel about truth and reconciliation with white nationalist militias.