Chip War: The Fight for the World’s Most Critical Technology 
by Chris Miller.
Simon and Schuster, 431 pp., £20, October 2022, 978 1 3985 0409 7

Picture​ the following age-old scene: a writer sitting at a kitchen table, pretending to work. Set it forty years ago. The Conservatives are in power and everything is broken, but our subject is the writer’s stuff. On the table is a typewriter; to one side is a radio, to another is a phone; also in the room are a fridge, an oven, a hob, a toaster, a set of car keys and a vacuum cleaner. Now fast-forward to the same scene forty years later. The Conservatives are in power again and everything is broken again; the room (and perhaps the writer) is a little shinier, but the stuff in the room is more or less the same. At least, it serves the same functions, if you swap laptop for typewriter, mobile for landline, Dyson for Hoover.

One big thing, however, is different. In 1983, that kitchen contained just a handful of transistors, all of which lived in the – there’s a clue in the name – transistor radio. In 2023, every item on that list of domestic objects uses microchips which are each made up of thousands, millions, billions of transistors. Ovens, fridges, vacuums, car keys, radios, speakers: all of them now contain microchips. An ordinary car contains dozens of them. A posh car contains a thousand. And those are just the standard consumer items of the mid-20th century. As for the things we think of as being this century’s new technology, they are some of the most complicated and beautiful artefacts humanity has ever made, mainly because of the chips they contain. The writer’s phone is an iPhone 12, which uses a chip for the modem, a chip to control Bluetooth, a chip to detect motion and orientation, a chip for image sensing, chips for wireless charging and battery management and audio, and a couple of memory chips. All of these are bought by Apple from other companies, and all are simple beasts compared to the principal logic chip in that phone, Apple’s own-designed A14, which contains 11,800,000,000 transistors. The writer’s laptop, a MacBook Air, uses another ‘system on a chip’, Apple’s M2. That single chip contains 20,000,000,000 transistors. The laptop contains so many transistors that if the writer travelled back in time to 1983, he could give every single person on the planet a transistor radio and still have a billion of them left over.
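
That last claim is easy to check with a little arithmetic. Here is a back-of-the-envelope sketch using rough figures of my own rather than Miller's: a 1983 world population of about 4.7 billion, and a radio built from a handful of transistors.

```python
# Rough sanity check of the transistor-radio claim (assumed figures, not Miller's).
m2_transistors = 20_000_000_000    # transistors in Apple's M2 system-on-a-chip
population_1983 = 4_700_000_000    # approximate world population in 1983 (assumption)
transistors_per_radio = 4          # a very simple transistor radio uses only a handful (assumption)

radios_possible = m2_transistors // transistors_per_radio
leftover = radios_possible - population_1983

print(f"Radios buildable from one M2: {radios_possible:,}")
print(f"Spare radios after one per 1983 person: {leftover:,}")
# Four transistors per radio leaves about 300 million spare; with three per radio
# the 'billion left over' comes out comfortably true.
```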

If you want a guide to how we got here, you won’t do better than Chris Miller’s comprehensive, eye-opening Chip War. Insofar as we work, live and think differently from forty years ago, we do so thanks to the revolutions in economics and communication whose enabling technology is the microchip, which has been both the necessary and the proximate cause of humanity’s pivot to the digital. This process began with the vacuum tube,

a lightbulb-like metal filament enclosed in glass. The electric current running through the tube could be switched on and off, performing a function not unlike an abacus bead moving back and forth across a wooden rod. A tube turned on was coded as a 1 while the vacuum tube turned off was a 0. These two digits could produce any number using a system of binary counting – and therefore could theoretically execute many types of computation.
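
To make the binary-counting idea concrete, here is a minimal sketch of my own, not Miller's: a row of on/off states, each standing in for a vacuum tube, read off as an ordinary number.

```python
# A row of on/off states (1 = tube conducting, 0 = tube off) interpreted as a binary number.
def decode(switches):
    """Read a list of 1s and 0s, most significant first, as an integer."""
    value = 0
    for state in switches:
        value = value * 2 + state   # shift the running total left and add the next bit
    return value

# Eight 'tubes' set on, off, on, on, off, on, off, on:
print(decode([1, 0, 1, 1, 0, 1, 0, 1]))   # prints 181
```

String enough of these states together and any number can be represented and manipulated, which is the point the passage above is making.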

Vacuum tubes could allow systems to be reprogrammed; they could be used repeatedly and flexibly. The tubes made complex computations possible, but they were unwieldy, both prone to breakage and laborious to repair. ENIAC, the US army’s world-leading computer, introduced in 1946, used 18,000 vacuum tubes to calculate artillery trajectories faster and more accurately than any human. That made it revolutionary, but its utility was limited by the fact that it was the size of a room, and that whenever a single tube failed, which happened on average every two days, the whole machine broke down.

The man who improved on the vacuum tube was the London-born American physicist William Shockley. After the war, Shockley was employed at Bell Labs, the research branch of the US telephone monopoly, AT&T. He realised that certain chemical elements could perform a similar function of encoding and transmitting 1s and 0s. Conducting materials conduct electricity; non-conducting materials don’t; semiconductors both do and don’t, and that ability to be in two different states makes binary computations possible. Shockley worked out first the theory of semiconduction, and then set his colleagues John Bardeen and Walter Brattain to work on a practical device to manipulate electrical current on a semiconductor. On 23 December 1947 they demonstrated the first working transistor. That invention won the three men the Nobel Prize for physics in 1956.

Shockley seems to have been peeved that it was Bardeen and Brattain who created that first circuit. Because Shockley ran the lab, he was able gradually to stop them working on transistors. Bardeen left for the University of Illinois, where he went on to do foundational work on superconductivity, becoming the first and only person to win a second Nobel Prize in physics.* Shockley set out to be rich. He quit Bell Labs with his Nobel in his pocket and set off to found a new company, Shockley Semiconductor. And this is where his mother comes into it. May Bradford Shockley, who grew up in back-country Missouri, was the daughter of mining engineers; in 1904 she had become the only female deputy surveyor of minerals in the US. Her affection for Palo Alto – she had gone to university at Stanford – led her to retire there. That fact in turn led Shockley in 1956 to found his company down the road in Mountain View, now better known as the home of Google. In those days that part of the world was called the Santa Clara Valley. It goes by a different name today. May Bradford Shockley, who spent the latter part of her life as a rather good painter and who died in 1977 at the age of 97, is the reason Silicon Valley is where it is.

There’s no way round the fact that the founder of Silicon Valley was an outstandingly horrible human being. Shockley was a terrible manager and a passionate racist, who devoted his post-Nobel decades to publicising home-brewed theories about ‘dysgenics’ or genetic degradation and racial differences being a form of natural ‘colour-coding’ to warn about low intelligence. It is striking that the National Academy of Sciences’ official memorial of him, by his old friend John Moll, contains not a single example of kindness or charm or goodwill, or indeed any anecdote which reflects any human credit on its subject. Instead Moll observes that Shockley’s ‘technical insights were counterbalanced by his lack of insight into human relations’. That had consequences.

Inventing transistors was one thing, a clever piece of cutting-edge physics, but making them useful was another matter. They quickly replaced vacuum tubes, but thousands of transistors meant thousands of wires to connect them, and the resulting devices were inelegant tangles. Improving that was an engineering challenge, solved by two men, working separately. The first was Jack Kilby, an engineer newly hired by Texas Instruments. The company had gone into business making equipment to look for oil deposits using seismic waves, pivoted during the war to making sonar for the navy, and after the war was looking to expand into other electronic systems for the military. Kilby arrived at TI in the summer of 1958, when the lab was empty. Because he was a new employee he had no holiday allocation, so he set himself to work noodling around with transistors, focusing on the semiconducting element germanium. He came up with a breakthrough: instead of connecting transistors to one another, he built the wiring into germanium itself, so that a single piece of semiconductor – germanium or the other useful material, silicon – could have multiple transistors baked in. Kilby’s innovation won him the Nobel Prize for physics in 2000.

Long before that, Shockley Transistors had imploded. Shockley had hired the most talented engineers in the new field, but fell out with them all so badly that in 1957 they left to set up Fairchild Semiconductor. That group of men – the ‘traitorous eight’, Shockley called them – were the founders of the modern semiconductor industry. Fairchild Semiconductor was where the microchip industry was born, and the traitorous eight were the people who put the silicon in Silicon Valley. Of the eight, Eugene Kleiner set up the venture capital firm Kleiner Perkins, which helped create much of the modern technology industry (Amazon, AOL, Electronic Arts, Google and Twitter, inter much alia) and set the template for the modern US venture capital industry; Gordon Moore was the driving force behind the accelerating capacity of microchips (Moore’s law was named after him as a result); and the visionary genius Robert Noyce was the man who, along with Jack Kilby, invented the microchip.

The Texas Instruments chip, Kilby’s invention, looked like a mesa, the stacked layers of rock familiar to fans of Westerns from shots of the American desert. The layers of wiring were built on top of one another, vertically. Noyce invented a new kind of chip, using what amounted to stencils to paint lines of metal through holes in the protective coating on top. This technique could put multiple transistors on a single chip, and improved on Kilby’s version because there were no free-standing wires: the chip was entirely self-contained. ‘The transistors were built into a single block of material. Soon, the “integrated circuits” that Kilby and Noyce had developed would become known as “semiconductors” or, more simply, “chips”.’ The people at Fairchild immediately realised that their chip was an improvement on the mesa variety: it was smaller, needed less electricity, and was capable of further and further miniaturisation. These chips had the potential to be a dramatic new multipurpose technology. The only problem was that they were fifty times more expensive than simpler chips. So what? ‘Everyone agreed Noyce’s invention was clever, even brilliant. All it needed was a market.’

Anyone with an interest in the history of technology will know who the first customer for new inventions tends to be. As the biophysicist Luca Turin once said to me, ‘the military are the only people who know how to fund research, because the military are the only people who really know how to waste money.’ Three days after the foundation of Fairchild Semiconductor, Sputnik 1 zoomed into orbit, and the company suddenly had its market. Nasa, tasked with overtaking the Soviet Union in the race for space, made the first significant order for Noyce’s new chip. Texas Instruments did most of its important early business with the US air force, which was looking for a way to increase the accuracy of its missiles. ‘Within a year, TI’s shipments to the air force accounted for 60 per cent of all dollars spent buying chips to date.’ By 1965, 72 per cent of all integrated circuits were being bought by the US Department of Defence.

The first beneficiary of the military’s spending was the military. The scandal of the US bombing campaign in Vietnam is widely known: in three and a half years, Operation Rolling Thunder dropped more ordnance on Vietnam than the Allies used in the entire Pacific theatre during the Second World War. What is less well known is that most of it missed. The average Vietnam bomb landed 420 feet from its target. Miller cites the example of the Thanh Hóa Bridge, a vital transport artery in North Vietnam, which in 1965 was the target of 638 bombs, every one of which missed. Seven years later, the TI chips were incorporated in the same bombs, and the final set of air raids, on 13 May 1972, destroyed the bridge – a confirmation of the importance of the new technology in war, even if it was broadly ignored in the context of the US defeat. (The other wider significance of the Thanh Hóa Bridge was that the first big raid there was the occasion for an aerial dogfight in which the US, to its astonishment, lost a number of its most advanced aircraft to Vietnamese fighters. That shock to the system eventually led to the foundation of the fighter school memorialised in Top Gun, which in turn led to the 2022 sequel which was such a big-screen success that Steven Spielberg recently told Tom Cruise his movie had ‘saved the entire theatrical industry’. It’s the Thanh Hóa Bridge’s world – we’re just living in it.)

From this point on, the US military was committed to the microchip as a central part of its strategic planning. The Soviet Union had more men and materiel than the US, so the US embarked on a plan to offset those advantages through superior technology. They have more men and more stuff, but our weapons hit the target – that was the idea, and the first time it could be seen in use was in the 1991 Gulf War. That first astonishing blizzard of bombing and cruise missiles in the attack on Baghdad, which nobody who watched it live on television will ever forget, was based on a huge technological superiority which was in turn based on the ubiquitous microchip. As Miller puts it, ‘the Cold War was over; Silicon Valley had won.’

That wouldn’t have happened if the Soviet Union had been able to match US chip production. Its failure to do so was based partly on the fact that ever since Shockley’s initial breakthrough, the Soviet Union had relied on industrial espionage to keep up with the US. An entire department of the KGB specialised in stealing and copying US chips. The problem was that the speed of advances in the microchip industry was so rapid that by the time you had successfully copied an existing chip, you were far behind the current state of the art. Gordon Moore had foreseen that chips would double in power or halve in price every eighteen months, and although this was not a law but a prediction, it was proving true.
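
It is worth pausing on what that compounding means. Here is a quick worked example, using two chips mentioned in this piece plus the commonly cited transistor count for Intel's first microprocessor (the 4004 figure is my addition, not Miller's):

```python
import math

# Compounding transistor counts from Intel's first microprocessor to Apple's A14.
t_4004 = 2_300            # Intel 4004, 1971 (commonly cited figure; not from the book)
t_a14 = 11_800_000_000    # Apple A14, 2020 (figure quoted earlier in this piece)
years = 2020 - 1971

growth = t_a14 / t_4004                        # roughly a five-million-fold increase
doublings = math.log2(growth)                  # about 22 doublings
months_per_doubling = years * 12 / doublings   # about 26 months

print(f"{growth:,.0f}x growth over {years} years")
print(f"{doublings:.1f} doublings, i.e. one every {months_per_doubling:.0f} months")
# The observed pace works out close to Moore's own revised rule of a doubling
# every two years; the popular eighteen-month version would have predicted
# roughly a thousand times more transistors again.
```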

Moore’s law gave the chip industry a special character. Nothing else that humanity has ever invented or created continually doubles in power every eighteen months. This was the result of remorseless, fanatical engineering ingenuity. As a result, the business attracted a particular kind of person, exemplified by Intel, the company that spun off from Fairchild, in the same way Fairchild had spun off from Shockley. Once again, Moore and Noyce left the mothership. The most driven of the men at Intel – even by the standards of tech, the story of microchips is male-dominated – was Andy Grove, who joined the company on its first day, and rose to become the boss. The Wasp neutrality of Grove’s name is deceptive. He was born András Gróf in Hungary in 1936. His early life was not easy. As he says in a memoir, by the time he was twenty he had ‘lived through a Hungarian fascist dictatorship, German military occupation, the Nazis’ “Final Solution”, the siege of Budapest by the Soviet Red Army, a period of chaotic democracy in the years immediately after the war, a variety of repressive communist regimes, and a popular uprising that was put down at gunpoint’. His Jewish father was sent to a labour camp, and his mother was raped by Russian soldiers when Budapest fell at the end of the war. The penniless, non-English-speaking Gróf escaped to Austria, went to the US, trained as a chemical engineer, got a job at Fairchild, joined Intel and ended up running the company. In that role he created a distinctive culture which became hugely influential throughout the world of technology. He wrote a book, named after his guiding maxim: Only the Paranoid Survive. That’s a pretty bad rule for life, but it was a useful mantra in the microchip industry, thanks to the permanent acceleration of Moore’s law.

Grove meant what he said. Intel was all about paranoia, and the drive to create the next thing before the competition overtook you. The company’s great innovation was the 4004, the first general purpose microprocessor, which doubled the density of transistors on existing chips, and made them five times faster. A cover story in Popular Electronics about a computer based on the next generation of Intel chips, the Altair 8800, caught the eye of a 19-year-old Harvard undergraduate who immediately realised that there were fortunes to be made in writing software for the new machines. Bill Gates in that moment decided to drop out of Harvard and found a company devoted to exploiting the possibilities created by Intel’s invention. He has said that his main worry when he took the leap wasn’t that his idea was wrong, but that Microsoft, which he hadn’t yet created, would be beaten by someone else in making an operating system for the as yet unborn personal computer.

If the first important beneficiary of the Shockley/Fairchild/Intel revolution was the military, the second was the rest of us. The initial burst of military spending made chips cheaper and made their inventors focus on ruggedness and reliability, which, as it turns out, are what every industry wants from microchips. Intel launched a series of regular updates to its chip architecture, the ubiquitous x86 series which for decades powered most of the world’s PCs. The x86 architecture is no longer at the forefront of computing, but even now those Intel chips are the fundamental technology underpinning the cloud services that run more or less everything digital, up to and including this LRB piece, if you’re reading it online.

Step back a bit,​ as Miller does, and the story of the microchip has two main strands, one about their creation and another about their manufacture. The Genesis myth is about Silicon Valley and Shockley’s intellectual descendants. The rest of the story is much more global. The focus of the manufacturing strand is East Asia, where the overwhelming majority of the world’s chips are made. The process of outsourcing manufacturing began semi-accidentally. The man in charge of the process was a Fairchild executive called Charlie Sporck. He had a problem. American workers were expensive, not least because – boo! – they tended to belong to unions. ‘Chip firms hired women,’ Miller writes, ‘because they could be paid lower wages and were less likely than men to demand better working conditions. Production managers also believed women’s smaller hands made them better at assembling and testing finished semiconductors.’ But demand kept increasing, and the supply of competent, affordable, tiny-handed female US labour couldn’t keep up. ‘Wherever they looked across California, semiconductor executives like Sporck couldn’t find enough cheap workers. Fairchild scoured the US, eventually opening facilities in Maine – where workers had “a hatred for the labour unions”, Sporck reported – and on a Navajo reservation in New Mexico that provided tax incentives.’ The solution came in Asia, where Fairchild opened its first assembly plant near the airport in Hong Kong in 1963. ‘We had union problems in Silicon Valley,’ Sporck said. ‘We never had any union problems in the Orient.’ In Hong Kong the relevant labour cost 25 cents an hour, a tenth of the American price.

The first Asian country to benefit at scale from the new inventions was Japan, and the first company to do so was Sony, which exploited the transistor with its colossally successful line of transistor radios. The still-cute TR-55, Sony’s first radio, was an immediate global hit, and formed the basis of what was to be one of the greatest consumer electronics companies. It also reflected a failure on the part of Texas Instruments, which had designed a radio of its own but cocked up the pricing and launch, leaving the field to the Japanese, who were happy to pay the licence fees for using American technology. Something similar happened with the pocket calculator: Jack Kilby had designed a prototype, but was stymied by the marketers’ view that there was no demand for it – a mistake that left the field open for the Japanese company Sharp. Eventually Akio Morita, the boss of Sony, won Texas Instruments the right to open a plant in Japan.

This was good business, but it was more than business. ‘For foreign policy strategists in Washington, more trade and investment links between the two countries tied Tokyo ever more tightly into a US-led system.’ In 1960, the US and Japan had signed a revised version of their 1951 security treaty, imposed on Japan as a condition for ending the occupation. The revision saw huge protests, which led the prime minister, Hayato Ikeda, to announce a plan to double Japan’s GDP by the end of the decade. Japan hit Ikeda’s target two years early, thanks in no small part to the contribution made by its transistor and microchip-based industries. Something similar happened in South Korea, Singapore and Taiwan. Chips weren’t just business, they were politics too. Thanks to the economic boom they brought, the US came close to creating the Greater East Asia Co-Prosperity Sphere the Japanese claimed they were trying to establish during the Second World War.

Taiwan​ is at the heart of this story. The country’s engagement with the new technology was, at the beginning, fractious. In a meeting with two senior executives from Texas Instruments in 1968, the economy minister, K.T. Li, told the Americans that ‘intellectual property was something “imperialists used to bully less advanced countries”.’ That’s not an unusual view in developing economies, but it’s seldom that anyone expresses it quite so forthrightly. Li was quick to adjust to the realities of the situation, however. If Taiwan wanted American investment and access to American markets, it would just have to suck up its concerns on the intellectual property question and get with the programme. The potential economic upsides to doing so were large, and Li saw them clearly.

Taiwan and the US had been treaty allies since 1955, but with defeat looming in Vietnam, America’s security promises were looking shaky. From South Korea to Taiwan, Malaysia to Singapore, anti-communist governments were seeking assurance that the US retreat from Vietnam wouldn’t leave them unprotected. They were also seeking jobs and investment that could address the economic dissatisfaction driving some of their populations towards communism. Li realised that Texas Instruments could help Taiwan solve both problems at once.

US investment would help create a chip industry in Taiwan and support the training of the engineers who would run the fabs. (Chip manufactories are called fabs, short for fabrication plants, I think mainly to avoid the olde worlde, unionised, hard-hat-wearing vibe of ‘factories’. Fabs are places of hazmat suits and positive air pressure, in which air blows out of the room rather than into it, so that not a single micron-sized speck of potentially catastrophic dust can contaminate the manufacturing process – a long way from 19th-century Manchester.) The investment would also give the US a stake in defending Taiwan, at a time when America’s enthusiasm for Asian military adventures was at a low ebb. TI committed to build their Taiwan fab in 1968. In 1980 they shipped their billionth chip. A new strategy was in place.

From South Korea to Taiwan, Singapore to the Philippines, a map of semiconductor assembly facilities looked much like a map of American military bases across Asia. Yet even after the US finally admitted defeat in Vietnam and drew down its military presence in the region, these trans-Pacific supply chains endured. By the end of the 1970s, rather than dominoes falling to communism, America’s allies in Asia were even more deeply integrated with the US.

One of the two TI managers at that scratchy meeting with Li was Morris Chang, a significant figure in the interlinked global history of technology and politics. Chang was born in mainland China in 1931. He grew up in Hong Kong and China during the Civil War and the Second World War, emigrated to the US in 1949, went to Harvard, then joined TI and rose to the top of the company. After stepping down in 1983 at the age of 52, he was lured to Taiwan by Li, now a minister without portfolio, with a brief to develop the country’s technology industries. Taiwan was deeply foreign to Chang, who by now, as Miller says, ‘was arguably more Texan than Chinese’. But the new job was irresistible. Chang was charged with creating a world-leading chip industry. In order to do that he founded TSMC, the Taiwan Semiconductor Manufacturing Company, which is today the unchallenged global champion in the manufacture of advanced microchips.

Among the super-astute things Chang did was to realise that the competitive advantage for TSMC would not lie in designing chips. Companies such as Intel and Samsung and Motorola did that, and took wild risks in the process. It can cost $500 million to design – not to build, just to design – a cutting-edge chip. And as soon as it is designed, Moore’s law kicks in and the race is on for the next incremental improvement in speed and power. TSMC’s plan was to devote itself instead to chip manufacture: to build the world’s most advanced fabs and create a competitive advantage at a stage of the production where nobody else could possibly compete. TSMC would be so far ahead in manufacture, would spend so much money being so good at it, that nobody could catch what they were doing today, let alone match what they would be able to do tomorrow, powered by an annual $4 billion-plus in research and development spending. Remember our writer with the Apple-designed A14 and M2 microprocessors? Those were made in TSMC’s Fab 18, a single building in Tainan City which produces many of the world’s most advanced microprocessors. As Miller says, ‘the text etched onto the back of each iPhone – “designed by Apple in California. Assembled in China” – is highly misleading. The iPhone’s most irreplaceable components are indeed designed in California and assembled in China. But they can only be made in Taiwan.’

The modern landscape of microchips is, broadly speaking, divided in two. On the one hand, there are the commodity chips present in more or less everything. The complicated supply chains involved in making and distributing these chips only came to most people’s attention during the pandemic, when a spike in demand in some sectors – all those screens and gadgets we used when working from home – was matched by a collapse in demand from others, especially the ones we weren’t using because we were WFH, such as cars. When the pandemic ended, the source of the demand swapped around, and suddenly cars were in short supply because their manufacturers couldn’t get the chips they needed. (You’ll have noticed a knock-on from this if you’ve rented a car since the pandemic: car rental is much more expensive because the rental companies wound down the size of their fleets during Covid, on the assumption that they would be able to scale them up easily, as they have done in the past. This assumption collided with the reality that everyone else was doing the same thing. The average increase in car hire prices globally was 47 per cent last year. That’s the microchip supply and demand curve at work.) Most of the chips involved in all these industries are still made in East Asia.

The other part of the landscape involves the high end of the business. Before we get to the geopolitics, can we have a moment to inhabit the technological sublime? Microchips are some of the most extraordinary objects humanity has ever made. Miller has a good illustration of this: the coronavirus is tiny, about a hundred billionths of a metre across, but it is a galumphing heifer of a beast compared to the smallest transistors being made in Fab 18, which are half that size. TSMC is now talking about transistor nodes in terms of three billionths of a metre. This is so small that quantum effects, which happen mostly at the subatomic level, become relevant.
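
To get a feel for how small that is, here is a rough calculation of my own, using the standard lattice constant of crystalline silicon (not a figure from the book) and taking the three-nanometre label at face value, even though node names are partly marketing:

```python
# How many silicon lattice spacings fit across a three-nanometre feature.
silicon_lattice_nm = 0.543   # lattice constant of crystalline silicon, in nanometres
feature_nm = 3.0             # the 'three billionths of a metre' node mentioned above

spacings = feature_nm / silicon_lattice_nm
print(f"A {feature_nm} nm feature spans about {spacings:.1f} lattice constants")
# Five or six unit cells, a dozen or so atoms, across: the scale at which
# electrons start tunnelling through barriers and transistor behaviour turns quantum.
```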

The machinery needed to manufacture these extraordinarily delicate artefacts has got bigger and more complicated as the microchips have shrunk in size and increased in power. The circuit patterns are etched onto the silicon with a new technique called extreme ultraviolet lithography. Think of a microscope, which makes small things big. Now turn it round, so that the lens is making big things small. And now use that process to take a super-complex design and etch it onto an infinitesimally small microchip. That is lithography, which has been the basis of microchip manufacture ever since Jay Lathrop at TI invented it in 1958. But as the chips have got smaller, the lithography process has got more and more difficult.

At the far limit of the technology is the Dutch company ASML, the only firm in the world to have mastered EUV lithography. This process involves the production of EUV light, which in turn involves

a tiny ball of tin measuring thirty millionths of a metre moving through a vacuum at a speed of around two hundred miles per hour. The tin is then struck twice with a laser, the first pulse to warm it up, the second to blast it into a plasma with a temperature around half a million degrees, many times hotter than the surface of the sun. This process of blasting tin is then repeated fifty thousand times per second to produce EUV light in the quantities necessary to fabricate chips.

The company that learned how to do this is an American firm called Cymer. Their process depended on a laser so powerful it produced too much heat unless it could be cooled with fans; but the fans ran so fast they burned out their bearings; so engineers invented a process for holding the fans in mid-air, suspended by magnets. The company that invented the new laser is a German firm distractingly called Trumpf. Its development took a decade. Each laser consists of 457,329 parts. The next stage in EUV was the manufacture of a new kind of mirror, made by the German company Zeiss, the smoothest mirror ever made: if it was the same size as Germany, its largest irregularity would be 0.1 millimetre. But the most complicated laser ever made and the smoothest mirror ever made are just two components of ASML’s lithography device. Look back over that chain: the Taiwanese company (TSMC) commissions the Dutch company (ASML) which commissions the US company (Cymer) which commissions the German company (Trumpf) and also the other German company (Zeiss). It is no wonder that ASML’s latest EUV device is ‘the most expensive mass-produced machine tool in history’.
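
That mirror claim is easier to feel after a little arithmetic. A sketch using rough assumptions of my own (Germany about 850 km across, an EUV mirror a few tens of centimetres across; neither figure comes from the book):

```python
# Scaling the 'size of Germany' comparison back down to an actual mirror.
germany_m = 850_000.0        # rough north-south extent of Germany, in metres (assumption)
mirror_m = 0.3               # rough diameter of an EUV projection mirror, in metres (assumption)
bump_on_germany_m = 0.0001   # the 0.1 mm irregularity quoted above

scale = germany_m / mirror_m
bump_on_mirror_m = bump_on_germany_m / scale

print(f"Largest irregularity on the real mirror: {bump_on_mirror_m * 1e12:.0f} picometres")
# Around 35 picometres, a few hundredths of a nanometre: smaller than a single
# silicon atom, and consistent with the sub-nanometre precision Zeiss advertises
# for these optics.
```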

At this point, the technological sublime and geopolitics merge. Chips are ubiquitous, but top-end chips are not: they are the product of a highly concentrated manufacturing process in which a tiny number of companies constitute an impassable global choke point. If you can’t work with ASML, you can’t make a high-end chip. If you can’t get your top-of-the-range chip made by TSMC, Samsung or Intel, there’s no point designing it, because nobody else can manufacture it.

This matters because high-end chips are indispensable in a range of industries, especially ones which are military or military-adjacent, and it matters particularly to China. We are used to the idea that everything is made in China, up to and including many products from well-known Western brands. In her 2007 book A Year without ‘Made in China’, the American journalist Sara Bongiorni recounts how she and her family tried to live without Chinese-made goods for a year and found that some items simply were not available from anywhere else. Bongiorni also made the point that her experiment would have been unthinkable if her children had been digitally connected teenagers instead of amenable toddlers. Everything is made in China: that’s a truism.

For high-end microchips, however, the truism isn’t true. China has to import powerful microchips. The numbers involved are substantial. For most of this century, China has spent more money on importing microchips than it has on importing oil. ‘China’s import of chips – $260 billion in 2017 … was far larger than Saudi Arabia’s export of oil or Germany’s export of cars. China spends more money buying chips each year than the entire global trade in aircraft.’ China is acutely aware of its dependence on the West in this area, and has been spending frantically to catch up. Miller describes the dawning consciousness of this fact as China’s ‘Sputnik moment’ – the point at which the superpower realises it has fallen behind and needs to catch up. That’s an interesting metaphor, because at the time of Sputnik, the US thought of itself as the global superpower; to say that China is having a Sputnik moment is to say that it thinks of itself as America did in the mid-1950s.

Miller writes at the end of his book that, ‘absent a major change in US export restrictions, the People’s Liberation Army will acquire much of the computing power it needs by simply buying it from Silicon Valley.’ In October 2022, with little notice before and not nearly enough attention afterwards, that ‘major change’ came. The Biden administration announced a ban on microchip exports to China, targeting both the US companies that deal with China, and any overseas companies that use US-made semiconductor technology. That means everyone, everywhere – a hugely broad ban, whose intention is to cripple the Chinese semiconductor industry. Trump talked a good game about trade war with China, but when it comes to intentionally damaging China’s strategic interest, nothing he did was within a country mile of Biden’s new policy.

This is the next big twist in the global story of the microchip. The chip ban has been described as a ‘declaration of economic war’. And perhaps not only economic war. The assumption in military circles is that AI is going to be crucial to the next wave of innovation in warfare. The AI revolution will depend on new chip technology. The second Cold War is going to be a military-technological contest just like the first one, and once again semiconductors are going to be central. We are starting to get glimpses of what that might look like, with the first arrivals of drone swarms on battlefields. Coming soon: unmanned vehicles, fire-and-forget missiles, ‘loitering munition systems’ and facial recognition assassination drones. Advanced chips are as crucial to the process of designing new weapons systems as to the weapons themselves, because the majority of testing for these systems is done on computers. Fingers crossed that all this helps with avoiding World War Three.

As for the technological dividend that will eventually devolve to the rest of us – who knows? The new technology will be everywhere, from energy to medicine to transportation, just as it was for the first six decades of the semiconductor. To take just one example, we’re starting to get a glimpse of what consumer-facing AI will look like thanks to the arrival of ChatGPT and its competitors. When I asked the latest incarnation of the chatbot to list the benefits of the technology, three of its four answers focused on communication between companies and the individual. I suspect that will mean, fairly imminently, customer service systems which replace phone-automation hell with chat services that are much better, most of the time, but offer no exit from their closed systems, are incapable of admitting error and won’t let you engage with a human being, ever. So it’ll be a lot better, except when it’s a lot worse. Let’s hope there will also be some cool consumer stuff, to help take our minds off the rest of it.
