Conspiratorialism and the epistemological crisis
We may not know what’s in the box, but we can tell if it’s been damaged in transit.
I’m on tour with my new, nationally bestselling novel The Bezzle! Catch me next weekend (Mar 30–31) at Wondercon in Anaheim, then Boston (Apr 11) with Randall “XKCD” Munroe, Providence (Apr 12) and beyond!
Last year, Ed Pierson was supposed to fly from Seattle to New Jersey on Alaska Airlines. He boarded his flight, but then he had an urgent discussion with the flight attendant, explaining that as a former senior Boeing engineer, he’d specifically requested that flight because the aircraft wasn’t a 737 Max:
https://www.cnn.com/travel/boeing-737-max-passenger-boycott/index.html
But for operational reasons, Alaska had switched out the equipment on the flight and there he was on a 737 Max, about to travel cross-continent, and he didn’t feel safe doing so. He demanded to be let off the flight. His bags were offloaded and he walked back up the jetbridge after telling the spooked flight attendant, “I can’t go into detail right now, but I wasn’t planning on flying the Max, and I want to get off the plane.”
Boeing, of course, is a flying disaster that was years in the making. Its planes have been falling out of the sky since 2019. Floods of whistleblowers have come forward to say its aircraft are unsafe. Pierson’s not the only Boeing employee to state — both on and off the record — that he wouldn’t fly on a specific model of Boeing aircraft, or, in some cases, any recent Boeing aircraft:
https://pluralistic.net/2024/01/22/anything-that-cant-go-on-forever/#will-eventually-stop
And yet, for years, Boeing’s regulators have allowed the company to keep turning out planes that keep turning out to be lemons. This is a pretty frightening situation, to say the least. I’m not an aerospace engineer, I’m not an aircraft safety inspector, but every time I book a flight, I have to make a decision about whether to trust Boeing’s assurances that I can safely board one of its planes without dying.
In an ideal world, I wouldn’t even have to think about this. I’d be able to trust that publicly accountable regulators were on the job, making sure that airplanes were airworthy. “Caveat emptor” is no way to run a civilian aviation system.
But even though I don’t have the specialized expertise needed to assess the airworthiness of Boeing planes, I do have the much more general expertise needed to assess the trustworthiness of Boeing’s regulator. The FAA has spent years deferring to Boeing, allowing it to self-certify that its aircraft were safe. Even when these assurances led to the death of hundreds of people, the FAA continued to allow Boeing to mark its own homework:
https://www.youtube.com/watch?v=Q8oCilY4szc
What’s more, the FAA boss who presided over those hundreds of deaths was an ex-Boeing lobbyist whom Trump had put in charge of overseeing his former employer. He’s not the only ex-insider who ended up a regulator, and there are plenty of ex-regulators now on Boeing’s payroll:
https://therevolvingdoorproject.org/boeing-debacle-shows-need-to-investigate-trump-era-corruption/
You don’t have to be an aviation expert to understand that companies have conflicts of interest when it comes to certifying their own products. “Market forces” aren’t going to keep Boeing from shipping defective products, because the company’s top brass are more worried about cashing out with this quarter’s massive stock buybacks than they are about their successors’ ability to manage the PR storm or Congressional hearings after their greed kills hundreds and hundreds of people.
You also don’t have to be an aviation expert to understand that these conflicts persist even when a Boeing insider leaves the company to work for its regulators, or vice-versa. A regulator who anticipates a giant signing bonus from Boeing after their term in office, or an ex-Boeing exec who holds millions in Boeing stock, has an irreconcilable conflict of interest that will make it very hard — perhaps impossible — for them to hold the company to account when it trades safety for profit.
It’s not just Boeing customers who feel justifiably anxious about trusting a system with such obvious conflicts of interest: Boeing’s own executives, lobbyists and lawyers also refuse to participate in similarly flawed systems of oversight and conflict resolution. If Boeing was sued by its shareholders and the judge was also a pissed off Boeing shareholder, they would demand a recusal. If Boeing was looking for outside counsel to represent it in a liability suit brought by the family of one of its murder victims, they wouldn’t hire the firm that was suing them — not even if that firm promised to be fair. If a Boeing executive’s spouse sued for divorce, that exec wouldn’t use the same lawyer as their soon-to-be-ex.
Sure, it takes specialized knowledge and training to be a lawyer, a judge, or an aircraft safety inspector. But anyone can look at the system those experts work in and spot its glaring defects. In other words, while acquiring expertise is hard, it’s much easier to spot weaknesses in the process by which that expertise affects the world around us.
And therein lies the problem: aviation isn’t the only technically complex, potentially lethal, and utterly, obviously untrustworthy system we all have to navigate. How about the building safety codes that governed the structure you’re in right now? Plenty of people have blithely assumed that structural engineers carefully designed those standards, and that these standards were diligently upheld, only to discover in tragic, ghastly ways that this was wrong:
https://www.bbc.com/news/64568826
There are dozens — hundreds! — of life-or-death, highly technical questions you have to resolve every day just to survive. Should you trust the antilock braking firmware in your car? How about the food hygiene rules in the factories that produced the food in your shopping cart? Or the kitchen that made the pizza that was just delivered? Is your kid’s school teaching them well, or will they grow up to be ignoramuses and thus economic roadkill?
Hell, even if I never get into another Boeing aircraft, I live in the approach path for Burbank airport, where Southwest lands 50+ Boeing flights every day. How can I be sure that the next Boeing 737 Max that falls out of the sky won’t land on my roof?
This is the epistemological crisis we’re living through today. Epistemology is the process by which we know things. The whole point of a transparent, democratically accountable process for expert technical deliberation is to resolve the epistemological challenge of making good choices about all of these life-or-death questions. Even the smartest person among us can’t learn to evaluate all those questions, but we can all look at the process by which these questions are answered and draw conclusions about its soundness.
Is the process public? Are the people in charge of it forthright? Do they have conflicts of interest, and, if so, do they sit out any decision that gives even the appearance of impropriety? If new evidence comes to light — like, say, a horrific disaster — is there a way to re-open the process and change the rules?
The actual technical details might be a black box for us, opaque and indecipherable. But the box itself can be easily observed: is it made of sturdy material? Does it have sharp corners and clean lines? Or is it flimsy, irregular and torn? We don’t have to know anything about the box’s contents to conclude that we don’t trust the box.
For example: we may not be experts in chemical engineering or water safety, but we can tell when a regulator is on the ball on these issues. Back in 2019, the West Virginia Department of Environmental Protection sought comment on its water safety regs. Dow Chemical — the largest corporation in the state’s largest industry — filed comments arguing that WV should have lower standards for chemical contamination in its drinking water.
Now, I’m perfectly prepared to believe that there are safe levels of chemical runoff in the water supply. There’s a lot of water in the water supply, after all, and “the dose makes the poison.” What’s more, I use the products whose manufacture results in that chemical waste. I want them to be made safely, but I do want them to be made — for one thing, the next time I have surgery, I want the anesthesiologist to start an IV with fresh, sterile plastic tubing.
And I’m not a chemist, let alone a water chemist. Neither am I a toxicologist. There are aspects of this debate I am totally unqualified to assess. Nevertheless, I think the WV process was a bad one, and here’s why:
https://www.wvma.com/press/wvma-news/4244-wvma-statement-on-human-health-criteria-development
That’s Dow’s comment to the regulator (as proffered by its mouthpiece, the WV Manufacturers’ Association, which it dominates). In that comment, Dow argues that West Virginians can safely absorb more poison than other Americans, because the people of West Virginia are fatter than other Americans, and so they have more tissue and thus a better ratio of poison to person than the typical American. But they don’t stop there! They also say that West Virginians don’t drink as much water as their out-of-state cousins, preferring to drink beer instead, so even if their water is more toxic, they’ll be drinking less of it:
https://washingtonmonthly.com/2019/03/14/the-real-elitists-looking-down-on-trump-voters/
Even without any expertise in toxicology or water chemistry, I can tell that these are bullshit answers. The fact that the WV regulator accepted these comments tells me that they’re not a good regulator. I was in WV last year to give a talk, and I didn’t drink the tap water.
It’s totally reasonable for non-experts to reject the conclusions of experts when the process by which those experts resolve their disagreements is obviously corrupt and irredeemably flawed. But some refusals carry higher costs — both for the refuseniks and the people around them — than my switching to bottled water when I was in Charleston.
Take vaccine denial (or “hesitancy”). Many people greeted the advent of an extremely rapid, high-tech covid vaccine with dread and mistrust. They argued that the pharma industry was dominated by corrupt, greedy corporations that routinely put their profits ahead of the public’s safety, and that regulators, in Big Pharma’s pocket, let them get away with mass murder.
The thing is, all that is true. Look, I’ve had five covid vaccinations, but not because I trust the pharma industry. I’ve had direct experience of how pharma sacrifices safety on greed’s altar, and narrowly avoided harm myself. I have had chronic pain problems my whole life, and they’ve gotten worse every year. When my daughter was on the way, I decided this was going to get in the way of my ability to parent — I wanted to be able to carry her for long stretches! — and so I started aggressively pursuing the pain treatments I’d given up on many years before.
My journey led me to many specialists — physios, dieticians, rehab specialists, neurologists, surgeons — and I tried many, many therapies. Luckily, my wife had private insurance — we were in the UK then — and I could go to just about any doctor that seemed promising. That’s how I found myself in the offices of a Harley Street quack, a prominent pain specialist, who had great news for me: it turned out that opioids were way safer than had previously been thought, and I could just take opioids every day and night for the rest of my life without any serious risk of addiction. It would be fine.
This sounded wrong to me. I’d lost several friends to overdoses, and watched others spiral into miserable lives as they struggled with addiction. So I “did my own research.” Despite not having a background in chemistry, biology, neurology or pharmacology, I struggled through papers and read commentary and came to the conclusion that opioids weren’t safe at all. Rather, corrupt billionaire pharma owners like the Sackler family had colluded with their regulators to risk the lives of millions by pushing falsified research that was finding publication in some of the most respected, peer-reviewed journals in the world.
I became an opioid denier, in other words.
I decided, based on my own research, that the experts were wrong, and that they were wrong for corrupt reasons, and that I couldn’t trust their advice.
When anti-vaxxers decried the covid vaccines, they said things that were — in form at least — indistinguishable from the things I’d been saying 15 years earlier, when I decided to ignore my doctor’s advice and throw away my medication on the grounds that it would probably harm me.
For me, faith in vaccines didn’t come from a broad, newfound trust in the pharmaceutical system: rather, I judged that there was so much scrutiny on these new medications that it would overwhelm even pharma’s ability to corruptly continue to sell a medication that they secretly knew to be harmful, as they’d done so many times before:
https://www.npr.org/2007/11/10/5470430/timeline-the-rise-and-fall-of-vioxx
But many of my peers had a different take on anti-vaxxers: for these friends and colleagues, anti-vaxxers were being foolish. Surprisingly, these people I’d long felt myself in broad agreement with began to defend the pharmaceutical system and its regulators. Once they saw that anti-vaxx was a wedge issue championed by right-wing culture war shitheads, they became not just pro-vaccine, but pro-pharma.
There’s a name for this phenomenon: “schismogenesis.” That’s when you decide how you feel about an issue based on who supports it. Think of self-described “progressives” who became cheerleaders for America’s cruel, ruthless and lawless “intelligence community” when it seemed that US spooks were bent on Trump’s ouster:
https://pluralistic.net/2021/12/18/schizmogenesis/
The fact that the FBI didn’t like Trump didn’t make them allies of progressive causes. This was and is the same entity that (among other things) tried to blackmail Martin Luther King, Jr. into killing himself:
https://en.wikipedia.org/wiki/FBI%E2%80%93King_suicide_letter
But schismogenesis isn’t merely a reactionary way of flip-flopping on issues based on reflexive enmity. It’s actually a reasonable epistemological tactic: in a world where there are more issues you need to be clear on than you can possibly inform yourself about, you need some shortcuts. One shortcut — a shortcut that’s failing — is to say, “Well, I’ll provisionally believe whatever the expert system tells me is true.” Another shortcut is, “I will provisionally disbelieve in whatever the people I know to act in bad faith are saying is true.” That is, “schismogenesis.”
Schismogenesis isn’t a great tactic. It would be far better if we had a set of institutions we could all largely trust — if the black boxes where expert debate took place were sturdy, rectilinear and sharp-cornered.
But they’re not. They’re just not. Our regulatory process sucks. Corporate concentration makes it trivial for cartels to capture their regulators and steer them to conclusions that benefit corporate shareholders even if that means visiting enormous harm — even mass death — on the public:
https://pluralistic.net/2022/06/05/regulatory-capture/
No one hates Big Tech more than I do, but many of my co-belligerents in the war on Big Tech believe that the rise of conspiratorialism can be laid at tech platforms’ feet. They say that Big Tech boasts of how good it is at algorithmically manipulating our beliefs, and attribute QAnon, flat earthers, and other outlandish conspiratorial cults to the misuse of those algorithms.
“We built a Big Data mind-control ray” is one of those extraordinary claims that requires extraordinary evidence. But the evidence for Big Tech’s persuasion machines is very poor: mostly, it consists of tech platforms’ own boasts to potential investors and customers for their advertising products. “We can change peoples’ minds” has long been the boast of advertising companies, and it’s clear that they can change the minds of customers for advertising.
Think of department store mogul John Wanamaker, who famously said “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” Today — thanks to commercial surveillance — we know that the true proportion of wasted advertising spending is more like 99.9%. Advertising agencies may be really good at convincing John Wanamaker and his successors, through prolonged, personal, intense selling — but that doesn’t mean they’re able to sell so efficiently to the rest of us with mass banner ads or spambots:
http://pluralistic.net/HowToDestroySurveillanceCapitalism
In other words, the fact that Facebook claims it is really good at persuasion doesn’t mean that it’s true. Just like the AI companies who claim their chatbots can do your job: they are much better at convincing your boss (who is insatiably horny for firing workers) than they are at actually producing an algorithm that can replace you. What’s more, their profitability relies far more on convincing a rich, credulous business executive that their product works than it does on actually delivering a working product.
Now, I do think that Facebook and other tech giants play an important role in the rise of conspiratorial beliefs. However, that role isn’t using algorithms to persuade people to mistrust our institutions. Rather, Big Tech — like other corporate cartels — has so corrupted our regulatory system that trusting our institutions has become irrational.
Think of federal privacy law. The last time the US got a new federal consumer privacy law was in 1988, when Congress passed the Video Privacy Protection Act, a law that prohibits video store clerks from leaking your VHS rental history:
https://www.eff.org/deeplinks/2008/07/why-vppa-protects-youtube-and-viacom-employees
It’s been a minute. There are very obvious privacy concerns haunting Americans, related to those tech giants, and yet the closest Congress can come to doing something about it is to attempt the forced sale of the sole Chinese tech giant with a US footprint to a US company, to ensure that its rampant privacy violations are conducted by our fellow Americans, and to force Chinese spies to buy their surveillance data on millions of Americans in the lawless, reckless swamp of US data-brokerages:
https://www.npr.org/2024/03/14/1238435508/tiktok-ban-bill-congress-china
For millions of Americans — especially younger Americans — the failure to pass (or even introduce!) a federal privacy law proves that our institutions can’t be trusted. They’re right:
https://www.tiktok.com/@pearlmania500/video/7345961470548512043
Occam’s Razor cautions us to seek the simplest explanation for the phenomena we see in the world around us. There’s a much simpler explanation for why people believe conspiracy theories they encounter online than the idea that the one time Facebook is telling the truth is when they’re boasting about how well their products work — especially given the undeniable fact that everyone else who ever claimed to have perfected mind-control was a fantasist or a liar, from Rasputin to MK-ULTRA to pick-up artists.
Maybe people believe in conspiracy theories because they have hundreds of life-or-death decisions to make every day, and the institutions that are supposed to make that possible keep proving that they can’t be trusted. Nevertheless, those decisions have to be made, and so something needs to fill the epistemological void left by the manifest unsoundness of the black box where the decisions get made.
For many people — millions — the thing that fills the black box is conspiracy fantasies. It’s true that tech makes finding these conspiracy fantasies easier than ever, and it’s true that tech makes forming communities of conspiratorial belief easier, too. But the vulnerability to conspiratorialism that algorithms identify and target isn’t a function of Big Data. It’s a function of corruption — of life in a world in which real conspiracies (to steal your wages, or let rich people escape the consequences of their crimes, or sacrifice your safety to protect large firms’ profits) are everywhere.
Progressives — which is to say, the coalition of liberals and leftists, in which liberals are the senior partners and spokespeople who control the Overton Window — used to identify and decry these conspiracies. But as right wing “populists” declared their opposition to these conspiracies — when Trump damned free trade and the mainstream media as tools of the ruling class — progressives leaned into schismogenesis and declared their vocal support for these old enemies of progress.
This is the crux of Naomi Klein’s brilliant 2023 book Doppelganger: that as the progressive coalition started supporting these unworthy and broken institutions, the right spun up “mirror world” versions of their critique, distorted versions that focus on scapegoating vulnerable groups rather than fighting unworthy institutions:
https://pluralistic.net/2023/09/05/not-that-naomi/#if-the-naomi-be-klein-youre-doing-just-fine
This is a long tradition in politics: more than a century ago, some leftists branded antisemitism “the socialism of fools.” Rather than condemning the system’s embrace of the finance sector and its wealthy beneficiaries, antisemites blame a disfavored group of people — people who are just as likely as anyone to suffer under the system:
https://en.wikipedia.org/wiki/Antisemitism_is_the_socialism_of_fools
It’s an ugly, shallow, cartoon version of socialism’s measured and comprehensive analysis of how the class system actually works and why it’s so harmful to everyone except a tiny elite. Literally cartoonish: the mirror-world version of socialism co-opts and simplifies the iconography of class struggle. And schismogenesis — “if the right likes this, I don’t” — sends “progressive” scolds after anyone who dares to criticize finance as the crux of our world’s problems for popularizing “antisemitic dog-whistles.”
This is the problem with “horseshoe theory” — the idea that the far right and the far left bend all the way around to meet each other:
https://pluralistic.net/2024/02/26/horsehoe-crab/#substantive-disagreement
When the right criticizes pharma companies, they tell us to “do our own research” (e.g. ignore the systemic problems of people being forced to work under dangerous conditions during a pandemic while individually assessing conflicting claims about vaccine safety, ideally landing on buying “supplements” from a grifter). When the left criticizes pharma, it’s to argue for universal access to medicine and vigorous public oversight of pharma companies. These aren’t the same thing:
https://pluralistic.net/2021/05/25/the-other-shoe-drops/#quid-pro-quo
Long before opportunistic right-wing politicians realized they could get mileage out of pointing at the terrifying epistemological crisis of trying to make good choices in an age of institutions that can’t be trusted, the left was sounding the alarm. Conspiratorialism — the fracturing of our shared reality — is a serious problem, weakening our ability to respond effectively to the endless disasters of the polycrisis.
But by blaming the problem of conspiratorialism on the credulity of believers (rather than the deserved disrepute of the institutions they have lost faith in), we adopt the logic of the right: “conspiratorialism is a problem of individuals believing wrong things,” rather than of “a system that makes wrong explanations credible — and a schismogenic insistence that these institutions are sound and trustworthy.”
If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/03/25/black-boxes/#when-you-know-you-know
Image:
Nuclear Regulatory Commission (modified)
https://www.flickr.com/photos/nrcgov/15993154185/
meanwell-packaging.co.uk
https://www.flickr.com/photos/195311218@N08/52159853896