The CHIPS Act treats the symptoms, but not the causes
Monopoly destroyed America’s high-tech capacity.
If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/02/07/farewell-mr-chips/#we-used-to-make-things
There’s this great throwaway line in 1992’s Sneakers, where Dan Aykroyd, playing a conspiracy-addled hacker/con-man, is feverishly telling Sidney Poitier (playing an ex-CIA spook) about a 1958 meeting Eisenhower had with aliens where Ike said, “hey, look, give us your technology, and we’ll give you all the cow lips you want.”
Poitier dismisses Aykroyd (“Don’t listen to this man. He’s certifiable”). We’re meant to be on Poitier’s side here, but I’ve always harbored some sympathy for Aykroyd in this scene.
That’s because I often hear echoes of Aykroyd’s theory in my own explanations of the esoteric bargains and plots that produced the world we’re living in today. Of course, in my world, it’s not presidents bargaining for alien technology in exchange for cow-lips — it’s the world’s wealthy nations bargaining to drop trade restrictions on the Global South in exchange for IP laws.
These bargains — which started as a series of bilateral and then multilateral agreements like NAFTA, and culminated in the 1994 agreement that created the WTO — were the most important step in the reordering of the world’s economy around rent-extraction, cheap-labor exploitation, and a brittle supply chain that is increasingly endangered by the polycrisis of climate and its handmaidens, like zoonotic plagues, water wars, and mass refugee migration.
Prior to the advent of “free trade,” the world’s rich countries fashioned debt into a whip-hand over poor, post-colonial nations. These countries had been bankrupted by their previous colonial owners, and the price of their freedom was punishing debts to the IMF and other rich-world institutions in exchange for loans to help these countries “develop.”
Like all poor debtors, these countries were said to have gotten into their predicament through moral failure — they’d “lived beyond their means.”
(When rich people get into debt, bankruptcy steps in to give them space to “restructure” according to their own plans. When poor people get into debt, bankruptcy strips them of nearly everything that might help them recover, brands them with a permanent scarlet letter, and subjects them to humiliating micro-management whose explicit message is that they are not competent to manage their own affairs):
https://pluralistic.net/2021/08/07/hr-4193/#shoppers-choice
So the poor debtor nations were ordered to “deregulate.” They had to sell off their state assets, run their central banks according to the dictates of rich-world finance authorities, and reorient their production around supplying raw materials to rich countries, who would process these materials into finished goods for export back to the poor world.
Naturally, poor countries were not allowed to erect “trade barriers” that might impede this North-to-South transfer of high-margin goods. But this was not yet the era of free trade: while the North-to-South transfer went largely unrestricted, the South-to-North transfer was subject to tight regulation in the rich world.
In other words, poor countries were expected to export, say, raw ore to the USA and reimport high-tech goods, with low tariffs in both directions. But if a poor country processed that ore domestically and made its own finished goods, the US would block those goods at the border, slapping them with high tariffs that made them more expensive than Made-in-the-USA equivalents.
The argument for this unidirectional trade was that the US — and other rich countries — had a strategic need to maintain their manufacturing industries as a hedge against future geopolitical events (war, but also pandemics, extreme weather) that might leave the rich world unable to provide for itself. This rationale had a key advantage: it was true.
A country that manages its own central bank can create as much of its own currency as it wants, and use that money to buy anything for sale in its own currency.
This may not be crucial while global markets are operating to the country’s advantage (say, while the rest of the world is “willingly” pricing its raw materials in your country’s currency), but when things go wrong — war, plague, weather — a country that can’t make things is at the rest of the world’s mercy.
If you had to choose between being a poor post-colonial nation that couldn’t supply its own technological needs except by exporting raw materials to rich countries, and being a rich country that had both domestic manufacturing capacity and a steady supply of other countries’ raw materials, you would choose the second, every time.
What’s not to like?
Here’s what.
The problem — from the perspective of America’s ultra-wealthy — was that this arrangement gave the US workforce a lot of power. As US workers unionized, they were able to extract direct concessions from their employers through collective bargaining, and to lobby effectively, in both state and federal legislatures, for universal worker protections, including a robust welfare state. The US was better off as a whole, but the richest ten percent were much poorer than they would have been if only they could smash worker power.
That’s where free trade comes in. Notwithstanding racist nonsense about “primitive” countries, there’s no intrinsic defect that stops the global south from doing high-tech manufacturing. If the rich world’s corporate leaders were given free rein to sideline America’s national security in favor of their own profits, they could certainly engineer the circumstances whereby poor countries would build sophisticated factories to replace the manufacturing facilities that sat behind the north’s high tariff walls.
These poor-country factories could produce goods every bit as valuable as the rich world’s shops, but without the labor, environmental and financial regulations that constrained their owners’ profits. Those owners slavered for a business environment that let them kill workers; poison the air, land and water; and cheat the tax authorities with impunity.
For this plan to work, the wealthy needed to engineer changes in both the rich world and the poor world. Obviously, they would have to get rid of the rich world’s tariff walls, which made it impossible to competitively import goods made in the global south, no matter how cheaply they were made.
But free trade wasn’t just about deregulation in the north — it also required a whole slew of new, extremely onerous regulations in the global south. Corporations that relocated their manufacturing to poor — but nominally sovereign — countries needed to be sure that those countries wouldn’t try to replicate the American plan of becoming actually sovereign, by exerting control over the means of production within their borders.
Recall that the American Revolution was inspired in large part by fury over the requirement to ship raw materials back to Mother England and then buy them back at huge markups after they’d been processed by English workers, to the enrichment of English aristocrats. Post-colonial America created new regulations (tariffs on goods from England), and — crucially — it also deregulated.
Specifically, post-revolutionary America abolished copyrights and patents for English persons and firms. That way, American manufacturers could produce sophisticated finished goods without paying rent to England’s wealthy, making those goods cheaper for American buyers; and American publishers could subsidize their editions of American authors’ books by publishing English authors on the cheap, without the obligation to share profits with English publishers or English writers.
The surplus produced by ignoring the patents and copyrights of the English was divided (unequally) among American capitalists, workers, and shoppers. Wealthy Americans got richer, even as they paid their workers more and charged less for their products. This incubated a made-in-the-USA edition of the industrial revolution. It was so successful that the rest of the world — especially England — began importing American goods and literature, and then American publishers and manufacturers started to lean on their government to “respect” English claims, in order to secure bilateral protections for their inventions and books in English markets.
This was good for America, but it was terrible for English manufacturers. The US — a primitive, agricultural society — “stole” their inventions until it gained so much manufacturing capacity that the English public started to prefer American goods to English ones.
This was the thing that rich-world industrialists feared about free trade. Once you build your high-tech factories in the global south, what’s to stop those people from simply copying your plans — or worse, seizing your factories! — and competing with you on a global scale? Some of these countries had nominally socialist governments that claimed to explicitly elevate the public good over the interests of the wealthy. And all of these countries had the same sprinkling of sociopaths who’d gladly see a million children maimed or the land poisoned for a buck — and these “entrepreneurs” had unbeatable advantages with their countries’ political classes.
For globalization to work, it wasn’t enough to deregulate the rich world — capitalists also had to regulate the poor world. Specifically, they had to get the poor world to adopt “IP” laws (patents and the like) that would force them to pay rent on things they could otherwise copy for free, even though ignoring those claims was in the short-term, medium-term and long-term interests of these nations, their politicians and their businesspeople.
Thus, the bargain that makes me sympathetic to Dan Aykroyd: not cow lips for alien tech, but free trade for IP law. When the WTO agreement was steaming towards passage in the mid-1990s, there was (rightly) a lot of emphasis on its deregulatory provisions: the weakening of labor, environmental and financial laws in the poor world, and of tariffs in the rich world.
But in hindsight, we all kind of missed the main event: TRIPS, the Agreement on Trade-Related Aspects of Intellectual Property Rights. TRIPS actually predates the WTO treaty (it grew out of the GATT, the WTO’s predecessor), but the WTO spread it to countries all over the world. Under TRIPS, poor countries are required to honor the IP claims of rich countries, on pain of global sanction.
That was the plan: instead of paying American workers to make Apple computers, say, Apple could export the “IP” for Macs and iPhones to countries like China, and these countries would produce Apple products that were “designed in California, assembled in China.” China would allow Apple to treat Chinese workers so badly that they routinely committed suicide, and would lock up or kill workers who tried to unionize. China would accept vast shipments of immortal, toxic e-waste. And China wouldn’t let its entrepreneurs copy Apple’s designs, be they software, schematics or trademarks.
Apple isn’t the only company that pursued this strategy, but no company has executed it as successfully. It’s not for nothing that Steve Jobs’s hand-picked successor was Tim Cook, who oversaw the transfer of even the most exacting elements of Apple manufacturing to Chinese facilities, striking bargains with contractors like Foxconn that guaranteed that workers would be heavily — lethally! — surveilled and controlled to prevent the twin horrors of unionization and leaks.
For the first two decades of the WTO era, the most obvious problems with this arrangement were wage erosion (for American workers) and leakage (for the rich). China’s “socialist” government was only too happy to help Foxconn imprison workers who demanded better wages and working conditions, but it was far more relaxed about knockoffs, be they fake iPods sold in market stalls or US trade secrets working their way into Huawei products.
These were problems for the American aristocracy, whose investments depended on China disciplining both Chinese workers and Chinese businesses. For the American people, leakage was a nothingburger. Apple’s profits weren’t shared with its workforce beyond the relatively small number of tech workers at its headquarters. The vast majority of Apple employees, who flogged iPhones and scrubbed the tilework in gleaming white stores across the nation, would get the same minimal (or even minimum) wage no matter how profitable Apple grew.
It wasn’t until the pandemic that the other shoe dropped for the American public. The WTO arrangement — cow lips for alien technology — had produced a global system of brittle supply chains composed entirely of weakest links. A pandemic, a war, a ship stuck in the Suez Canal or Houthi paramilitaries can cripple the entire system, perhaps indefinitely.
For two decades, we fought over globalization’s effect on wages. We let our corporate masters trick us into thinking that China’s “cheating” on IP was a problem for the average person. But the implications of globalization for American sovereignty and security were banished to the xenophobic right fringe, where they were mixed into the froth of Cold War 2.0 nonsense. The pandemic changed that, creating a coalition that is motivated by a complex and contradictory stew of racism, environmentalism, xenophobia, labor advocacy, patriotism, pragmatism, fear and hope.
Out of that stew emerged a new American political tendency, mostly associated with Bidenomics, but also claimed in various guises by the American right, through its America First wing. That tendency’s most visible artifact is the CHIPS Act, through which the US government proposes to use policy and subsidies to bring high-tech manufacturing back to America’s shores.
This week, the American Economic Liberties Project published “Reshoring and Restoring: CHIPS Implementation for a Competitive Semiconductor Industry,” a fascinating, beautifully researched and detailed analysis of the CHIPS Act and the global high-tech manufacturing market, written by Todd Achilles, Erik Peinert and Daniel Rangel.
Crucially, the report lays out the role that the weakening of antitrust, the dismantling of tariffs and the strengthening of IP played in the history of the current moment. The failure to enforce antitrust law allowed for monopolization at every stage of the semiconductor industry’s supply chain. The strengthening of IP and the weakening of tariffs encouraged the resulting monopolies to chase cheap labor overseas, confident that the US government would punish host countries that allowed their domestic entrepreneurs to use American designs without permission.
The result is a financialized, “capital-light” semiconductor industry that has put all its eggs in one basket. For the most advanced chips (“leading-edge logic”), production works like this: American firms design a chip and send the design to Taiwan, where a TSMC foundry turns it into physical chips. The chips are then shipped to one of a small number of companies in the poor world to be assembled, tested and packaged (ATP), then sent to China to be integrated into a product.
Obsolete foundries get a second life in the commodity chip (“mature-node chips”) market — these are the cheap chips that are shoveled into our cars and appliances and industrial systems.
Both of these systems are fundamentally broken. The advanced, “leading-edge” chips rely on geopolitically uncertain, heavily concentrated foundries. These foundries can be fully captured by their customers — as when Apple pre-purchases the entire production capacity for the most advanced chips, denying both domestic and offshore competitors access to the newest generation of chips.
Meanwhile, the less powerful, “mature node” chips command minuscule margins, and are often dumped into the market below cost, thanks to subsidies from countries hoping to protect their corner of the high-tech sector. This makes investment in low-power chips uncertain, leading to wild swings in cost, quality and availability of these workhorse chips.
The leading-edge chipmakers — Nvidia, Broadcom, Qualcomm, AMD, etc. — have fully captured their markets. They like the status quo, and the CHIPS Act won’t convince them to invest in onshore production. Why would they?
2022 was Broadcom’s best year ever, not in spite of its supply-chain problems, but because of them. Those problems let Broadcom raise prices for a captive audience of customers, whom the company had strong-armed into exclusivity deals that ensured they had nowhere else to turn. Qualcomm also profited handsomely from shortages, because its customers end up paying Qualcomm no matter where they buy, thanks to Qualcomm ensuring that its patents are integrated into global 4G and 5G standards.
That means that all standards-conforming products generate royalties for Qualcomm, and it also means that Qualcomm can decide which companies are allowed to compete with it, and which ones will be denied licenses to its patents. Both companies are under orders from the FTC to cut this out, and both companies ignore the FTC.
The brittleness of mature-node and leading-edge chips is not inevitable. Advanced memory chips (DRAM) are roughly comparable in complexity to leading-edge logic chips, while analog-to-digital chips are as easily commodified as mature-node chips — and yet each has a robust and competitive supply chain, with both onshore and offshore producers. In contrast with leading-edge manufacturers (who have been visibly indifferent to the CHIPS incentives), memory-chip manufacturers responded to the CHIPS Act by committing hundreds of billions of dollars to new onshore production facilities.
Intel is a curious case: in a world where the other leading-edge chip companies have gone fabless, Intel stands out for making its own chips. But Intel is in a lot of trouble. Its advanced manufacturing plans keep foundering on cost overruns and delays. The company keeps losing money. But until recently, its management kept handing its shareholders billions in dividends and buybacks — a sign that Intel’s bosses assume that the US public will bail out its “national champion.” It’s not clear whether the CHIPS Act can save Intel, or whether financialization will continue to hollow out a once-dominant pioneer.
The CHIPS Act won’t undo the concentration — and financialization — of the semiconductor industry. The industry has been awash in cheap money since the 2008 bailouts, and in just the past five years, US semiconductor monopolists have paid out $239b to shareholders in buybacks and dividends, enough to fund the CHIPS Act five times over. If you include Apple in that figure, the amount US corporations spent on shareholder returns instead of investing in capacity rises to $698b. Apple doesn’t want a competitive market for chips. If Apple builds its own foundry, that just frees up capacity at TSMC that its competitors can use to improve their products.
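(A quick back-of-the-envelope check on that claim, assuming the commonly cited $52.7b in total semiconductor appropriations under the CHIPS Act: $239b ÷ $52.7b ≈ 4.5, which rounds up to funding the Act five times over.)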
The report has an enormous amount of accessible, well-organized detail on these markets, and it makes a set of key recommendations for improving the CHIPS Act and passing related legislation to ensure that the US can once again make its own microchips. These run the gamut from funding four new onshore foundries to requiring companies receiving CHIPS Act money to “dual-source” their foundries. The authors call for NIST and the CPO to ensure open licensing of key patents, and for aggressive policing of anti-dumping rules for cheap chips. They also seek a new law creating an “American Semiconductor Supply Chain Resiliency Fee” — a tariff on chips made offshore.
Fundamentally, these recommendations seek to end the outsourcing made possible by restrictive IP regimes, to undercut Wall Street’s power to demand savings from offshoring, and to smash the market power of companies like Apple that make the brittleness of chip manufacturing into a feature rather than a bug. This would include a return to previous antitrust rules, which limited companies’ ability to leverage patents into standards, and to previous IP rules, which limited exclusive rights to chip topography and design (“mask rights”).
All of this is likely to remove the constraints that stop poor countries from doing to America the same things that postcolonial America did to England — that is, it will usher in an era in which lots of countries make their own chips and other high-tech goods without paying rent to American companies. This is good! It’s good for poor countries, who will have more autonomy to control their own technical destiny. It’s also good for the world, creating resiliency in the high-tech manufacturing sector that we’ll need as the polycrisis overwhelms various places with fire and flood and disease and war. Electrifying, solarizing and adapting the world for climate resilience is fundamentally incompatible with a brittle, highly concentrated tech sector.
Pluralizing high-tech production will make America less vulnerable to the gamesmanship of other countries — and it will also make the rest of the world less vulnerable to American bullying. As Henry Farrell and Abraham Newman describe so beautifully in their 2023 book Underground Empire, the American political establishment is keenly aware of how its chokepoints over global finance and manufacturing can be leveraged to advantage the US at the rest of the world’s expense:
https://pluralistic.net/2023/10/10/weaponized-interdependence/#the-other-swifties
Look, I know that Eisenhower didn’t trade cow-lips for alien technology — but our political and commercial elites really did trade national resiliency away for IP laws, and it’s a bargain that screwed everyone, except the one percenters whose power and wealth have metastasized into a deadly cancer that threatens the country and the planet.
Image:
Mickael Courtiade (modified)
https://www.flickr.com/photos/197739384@N07/52703936652/