The Thinking Machine: Jensen Huang, Nvidia and the World’s Most Coveted Microchip 
by Stephen Witt.
Bodley Head, 248 pp., £25, April, 978 1 84792 827 6
The Nvidia Way: Jensen Huang and the Making of a Tech Giant 
by Tae Kim.
Norton, 261 pp., £25, December 2024, 978 1 324 08671 0
Empire of AI: Inside the Reckless Race for Total Domination 
by Karen Hao.
Allen Lane, 482 pp., £25, May, 978 0 241 67892 3
Supremacy: AI, ChatGPT and the Race that Will Change the World 
by Parmy Olson.
Pan Macmillan, 319 pp., £10.99, July, 978 1 0350 3824 4
The Optimist: Sam Altman, OpenAI and the Race to Invent the Future 
by Keach Hagey.

The tulip bubble is the most famous financial bubble in history, but as historical examples go it is also, in one crucial respect, misleading. That’s because anyone can see the flagrant irrationality which was at work. At peak tulip madness in 1637, rare bulbs were so expensive that a single one was worth as much as a fancy canalside house in Amsterdam. You don’t have to be Warren Buffett to see that the disconnect between price and value was based on delusional thinking.

Most bubbles aren’t like that. Even the South Sea Bubble, the event which gave its name to financial bubbles, had an underlying rationale: who can deny that the expansion of global networks of trade and capital turned out to be a vitally important and vastly lucrative development – even if all the investors in the original bubble, including Isaac Newton, who realised it was a bubble but got caught up in the excitement anyway, lost their shirts? The historical pattern is typically that a big, genuine innovation is spotted on the horizon. Money floods in to take advantage. Too much money. The flow of capital is so great that it is impossible to allocate it correctly, and distinctions disappear between what is likely and what is impossible, what is prudent and what is reckless, what might happen and what never could. After the flood of money, the doubts; after the doubts, the crash; and after the crash, the gradual emergence of the phenomenon that got all the speculators so excited in the first place. It happened with the South Sea Bubble, with the many railway manias of the mid-19th century, with the electrification mania of fifty years later and with the dot-com bubble at the turn of this century.

That is where we are now with AI. In the deep historical past of 2018, Apple became the first public company in the world to have a market capitalisation of more than a trillion dollars. Today, each of the ten biggest companies in the world is worth more than $1 trillion. Only one of them, the Saudi oil monopoly, Aramco, has nothing to do with the future value of AI. The top company, Nvidia, is worth $4.45 trillion. Not by coincidence, Nvidia shares are the purest bet you can make on the impact of AI. The leading firms are lending money to one another in circular patterns, propping up turnover and valuations. Colossal amounts of money are pouring in. Is it a bubble? Of course it’s a bubble. The salient questions are how we got here, and what happens next.

How did we get here? That story is among other things a narrative about two men, who gratifyingly correspond to the two main character types of the tech age: academically overachieving immigrant (Elon Musk, Sergey Brin, Sundar Pichai, Satya Nadella) and US-born college dropout (Steve Jobs, Bill Gates, Mark Zuckerberg). Companies founded or run by such men are the first, second, third, fourth, fifth and seventh most valuable in the world. Their combined value is $20.94 trillion – one sixth of the entire world economy.

Let’s begin in medias res. In the spring of 1993, three nerds visited a lawyer in Silicon Valley with the intention of setting up a company to make computer chips. The men were Curtis Priem, Chris Malachowsky and the person they had chosen to be their CEO, Jensen Huang, a Taiwanese-born electrical engineer with a talent for management and business. Malachowsky and Priem, according to Stephen Witt’s Thinking Machine, had complementary skills – they were, respectively, a chip mechanic and an architect. They wanted to make a new kind of chip, optimised for a rapidly growing sector: video games. Their employer, the large chip company LSI Logic, didn’t like the idea, so the three men cooked up a business plan, working mainly in a branch of the 24-hour chain restaurant Denny’s that was accessorised with bullet holes from drive-by shootings. Huang didn’t think the new company was worth launching until they had a credible chance of making $50 million a year in revenue. Fiddling with spreadsheets over long sessions at Denny’s, he eventually made the numbers add up. The three amigos went to see Jim Gaither, a lawyer well known in the Valley. Gaither filled out the paperwork, with the company’s name left as NV, for New Venture. Malachowsky and Priem were entertained by that: they had been playing around with company names that suggested their chip would leave competitors sick with envy. The coincidence was too good to resist. They decided to call their company Nvision. When the lawyer checked, it turned out that Nvision was already taken. They chose a backup: Nvidia.

Good choice of CEO, good choice of name. A third of a century later, Huang is the longest-serving CEO in the industry and Nvidia is the most valuable company in the world. Nvidia’s share of global stock market value is historically unprecedented: its shares make up a greater part of global indices than the entire UK stock market.

Huang had a hard start in life. He arrived in the US in 1973 aged nine, small for his age and not speaking much English. His parents, Hokkien-speakers from Tainan who had emigrated to Bangkok, had attempted to teach him and his brothers English by making them learn ten words a day, chosen at random from the dictionary. They sent Huang to the Oneida Baptist Institute in Kentucky under the mistaken impression it was a posh boarding school. In fact, it was a reform school for unruly boys whom the regular US education system couldn’t handle. Huang’s academic abilities meant that he was put in a class with boys a year older. If you were designing a formula to make a child a target for bullying, you couldn’t do much better. On his first night, Huang’s roommate pulled up his shirt to show him the scars he had accumulated from knife wounds. The newcomer, who stayed at school during the holidays because he had nowhere else to go, was given the job of cleaning the toilets.

This might sound like a deprivation narrative. Huang doesn’t tell it that way. He taught his roommate to read, and his roommate taught him to do push-ups – a hundred a day. The bullies stopped trying to tip him off the rope bridge he had to cross on the way to school. Huang says it toughened him up and, according to Witt, in a commencement speech in 2020 he ‘said that his time at the school was one of the best things ever to happen to him’. After two years in Kentucky, Huang moved to Oregon, where his parents had now settled. He went to school and university and married there, before starting his career at the Silicon Valley microchip design company AMD. Many promotions and one job move later, he met Malachowsky and Priem at LSI.

The trio’s new venture was far from an overnight success. There were at least 35 companies competing to build a specialised chip for video games, and it was evident that most of them were going to fail. When Nvidia’s first chip, the NV1, bombed, it looked as if their company was going to be one of them. ‘We missed everything,’ Huang later said. ‘Every single decision we made was wrong.’ He laid off most of the company’s employees and bet the shop on the successful design of their next chip, the NV3. (The NV2 was cancelled before launch.) Rather than build the chip the traditional way – they couldn’t do that, because they would have run out of money before it was finished – they used an emulator, a machine designed to mimic chip designs in software rather than silicon, to test it virtually. When the first real NV3 chip arrived, there was a crucial test. If even one of the 3.5 million transistors on the chip was flawed, it would be dead on arrival and Nvidia would disappear. It wasn’t and it didn’t. ‘To this day we are the largest user of emulators in the world,’ Huang says.

By this point, in 1997, Huang had made two big bets: one on video games’ insatiable demand for better graphics, and one on the emulator. Those successful bets kept Nvidia alive, and growing. He would make three more. The first was on a type of computing known as parallel processing. A traditional computer chip, such as the one inside the laptop I’m using, runs with a Central Processing Unit, a CPU, which works through computations in sequence. As chips grew in power, the length and complexity of the computations grew too. But transistors had become so small that further gains in sequential speed were starting to run up against the laws of physics.

Parallel processing instead performs calculations not in sequence, but simultaneously. Rather than working through one huge calculation, it works through lots of small calculations at the same time. On YouTube, you can find the MythBusters, an excitable duo of American science-explainers, demonstrating the difference at an Nvidia conference in 2008 (Huang commissioned the demo). The MythBusters set up a robot gun to fire paintballs at a canvas. The first run works like a CPU: the robot fires a rapid sequence of blue paintballs, adjusting its aim after every shot to paint a smiley face. It takes about thirty seconds. Then they set up another robot gun, this time shooting 1100 paintballs simultaneously. The gun coughs and in a fraction of a second – eighty milliseconds, to be precise – on the canvas appears a paintball copy of the Mona Lisa. The instant Mona Lisa is a visual metaphor for the way the new chips worked: instead of huge calculations done in sequence, a vast number of short calculations done at the same time. Parallel processing.
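The contrast is easy to sketch in code. What follows is a toy illustration of mine, not anything from Nvidia: it ‘paints’ a canvas twice, first pixel by pixel in a sequential loop, the CPU way, and then with a single vectorised operation that computes every pixel at once, which is roughly what parallel execution looks like from the programmer’s side. NumPy’s array arithmetic stands in here for a GPU, and all the names are invented.

```python
import time
import numpy as np

HEIGHT, WIDTH = 1080, 1920  # one canvas of about two million pixels

def paint_sequential():
    # CPU-style: visit each pixel in turn, like the single paintball gun
    image = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)
    for y in range(HEIGHT):
        for x in range(WIDTH):
            image[y, x] = (x + y) % 256
    return image

def paint_parallel():
    # GPU-style: one operation computes every pixel 'at the same time'
    xs = np.arange(WIDTH)            # column coordinates
    ys = np.arange(HEIGHT)[:, None]  # row coordinates, as a column vector
    return ((xs + ys) % 256).astype(np.uint8)

start = time.perf_counter()
slow = paint_sequential()
print(f'sequential: {time.perf_counter() - start:.2f} seconds')

start = time.perf_counter()
fast = paint_parallel()
print(f'parallel:   {time.perf_counter() - start:.4f} seconds')

assert np.array_equal(slow, fast)  # same picture, two ways of painting it
```

On an ordinary laptop the sequential version takes a second or two and the vectorised one a few milliseconds: the same gap, in miniature, as between the smiley face and the instant Mona Lisa.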

The video games industry loved the new chips, and demanded an update every six months, to render the ever more complex visual environments inside their games. Keeping up with that appetite was demanding and expensive, but it took Nvidia to a leading position in the chip industry. In The Nvidia Way, Tae Kim describes Huang’s relentlessness in keeping ahead of the competition. ‘The number one feature of any product is the schedule,’ Huang said, marking a difference between engineering elegance and Nvidia’s emphasis on getting-it-done, getting-it-shipped. The company’s chips were by this point so powerful that it began to seem bizarre that their only use was in allowing people to go online and shoot one another in increasingly complex and well-rendered sci-fi settings. At this point, Huang made another of his bets. He set Nvidia to develop a new kind of chip architecture, which he gave the deliberately obscure name CUDA, an acronym of Compute Unified Device Architecture.

The term doesn’t really mean anything, which was part of the point – Huang didn’t want the competition to realise what Nvidia was doing. Its engineers were developing a new kind of architecture for a new kind of customer: ‘doctors, astronomers, geologists and other scientists – highly educated academic specialists who were skilled in specific domains, but who maybe didn’t know how to code at all’. In Witt’s metaphor, the CPU is like a kitchen knife, ‘a beautiful multipurpose tool that can make any kind of cut. It can julienne, batonnet, chop, slice, dice, or hack … but the knife can only ever chop one vegetable at a time.’ Nvidia’s processor, which the company was now calling a GPU, or Graphics Processing Unit, was more like a food processor: ‘loud, indelicate and power-intensive. It cannot chiffonade tarragon or score a crosshatch on a tube of calamari. But to mince a bunch of vegetables quickly, the GPU is the tool.’ The CUDA architecture took this tool and repurposed it for a new audience. In effect, gamers were paying for the chip development costs of the scientific users who Huang believed would show up. It was a version of ‘if you build it, they will come.’

They didn’t, or not in sufficient numbers to make CUDA a success. Demand failed to surge, and so did the company’s share price. There are many examples in the history of technology of an invention waiting for a ‘killer app’ – an application or function that suddenly gives the invention an irresistibly compelling purpose. The killer app for the PC, for instance, was the spreadsheet: overnight, users had a technology that let them experiment with numbers and parameters, tweaking a and b to see what happened to z. It’s no exaggeration to say that spreadsheets remade capitalism in the 1980s by making it easy to run multiple alternative business scenarios until you came up with one that made sense. Nvidia’s amazing new chips and their CUDA architecture were waiting for a killer app.
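The what-if loop is simple enough to mimic in a few lines of Python. The figures and names below are invented, but the habit of mind is the spreadsheet’s:

```python
def profit(units, price, unit_cost, fixed_costs):
    # one scenario: revenue minus variable and fixed costs
    return units * price - units * unit_cost - fixed_costs

# tweak a and b, watch z: rerun the model under different assumptions
for price in (9.99, 12.99, 14.99):
    for units in (10_000, 25_000):
        z = profit(units, price, unit_cost=4.0, fixed_costs=80_000)
        print(f'price ${price}, {units} units -> profit ${z:,.0f}')
```

Run it, change an assumption, run it again: that is scenario modelling, the activity that made the spreadsheet irresistible.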

Salvation arrived in the form of an unfashionable branch of computing called neural networks. This was a field dedicated to the idea that computers could copy the structure of the brain by creating artificial neurons and connecting them in networks. Early neural networks were trained on labelled datasets, where the answer for each image was known in advance. The network made a prediction, compared it with the correct label, and adjusted itself using an algorithm called backpropagation. The major breakthrough came when researchers learned how to train networks with many layers of artificial neurons – ‘deep learning’. These deep networks could detect increasingly complex patterns in data, which led to dramatic progress in image recognition and many other areas. A computer scientist at Google, for instance,

fed their deep learning net a random sampling of ten million still images taken from YouTube and let it decide which patterns occurred frequently enough for the net to ‘remember’ them. The model was exposed to so many videos of cats that it independently developed a composite image of a cat’s face without human intervention. From then on, it could reliably identify cats in images that were not part of its training set.
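The training cycle described above – predict, compare with the known label, adjust – can be made concrete. Here is a minimal sketch of a single artificial neuron learning by gradient descent, the error-correction step that backpropagation extends through many layers. The dataset is synthetic, every name is invented, and a real deep network stacks millions of such units:

```python
import numpy as np

rng = np.random.default_rng(0)

# a toy labelled dataset: the 'answer' for each input is known in advance
X = rng.normal(size=(200, 3))          # 200 inputs, 3 features each
y = (X.sum(axis=1) > 0).astype(float)  # label: 1 if the features sum positive

w = rng.normal(size=3)   # the neuron's adjustable weights
b = 0.0                  # and its bias
lr = 0.5                 # learning rate: how hard each correction is applied

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(100):
    pred = sigmoid(X @ w + b)   # 1. the network makes a prediction
    error = pred - y            # 2. compare it with the correct label
    # 3. adjust: push the error back into the weights (the gradient step)
    w -= lr * (X.T @ error) / len(y)
    b -= lr * error.mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f'training accuracy after 100 passes: {accuracy:.0%}')
```

After a hundred passes over the data the neuron classifies its training set almost perfectly – a miniature of the process that, at vastly greater scale, produced the composite cat.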

Three things came together: algorithms, datasets and hardware. Computer scientists had developed the first two. It was Nvidia’s chips that brought the third – because, as it happened, the parallel processing of these chips was perfectly adapted for the new El Dorado of deep learning. A neural net is, at bottom, a vast number of small calculations happening at the same time: exactly the workload a parallel chip is built for. These neural nets are the foundational technology for what was once called machine learning and is now generally referred to as AI. (Machine learning is a more accurate and helpful term, in my view, but that’s a topic for another day.)

The head scientist at Nvidia was a man called David Kirk. As he told Witt,

‘with parallel computing, it really took us a fair amount of convincing to talk Jensen into it … Same with CUDA. We really had to make the business case.’ But with AI, Huang experienced a Damascene epiphany. ‘He got it immediately, before anybody … He was the first to see what it could be. He really was the first.’

Huang reasoned that if neural nets could solve visual learning, they had the potential to solve everything else too. He sent out a company-wide email one Friday saying that Nvidia were no longer a graphics company. One colleague recalled: ‘By Monday morning, we were an AI company. Literally, it was that fast.’ That was in 2014. It was the fifth and most successful of Huang’s five bets, and the one that has made Nvidia the planet-bestriding colossus it is today.

Soon after this, if you were a nerd or nerd-adjacent, you started to hear about AI. Talking to people who knew more than I did about technology and economics, I would often ask questions along the lines of ‘What’s next?’ or ‘What’s the next big thing?’ and increasingly the answer that came back would involve AI. I have a particular memory of talking to an astute tech investor a few days after the Brexit vote, and asking him what he thought was going to happen. I’ve forgotten the details of his answer – we were drinking martinis – but the gist was AI advances in China. What struck me most was that I was asking about Brexit, but he regarded AI as so much more important that it didn’t even occur to him that Brexit was what I meant.

There was, however, a frustrating aspect to these conversations, and to pretty much everything I read about AI. People seemed convinced it was going to be a huge deal. But they were short on details. It was easier to feel the heat than to see the fire. That remained the case even after the high-profile triumph of an AI, AlphaGo, over the world Go champion, Lee Sedol. Games happen within a fixed set of parameters. Because of that, the idea of solving them through algorithms is not astonishing a priori. The closest glimpse I had of the new possibilities came in the unlikely setting of a hotel room in Kobe in November 2016. I was woken up by a text in Japanese. I opened Google Translate, hoping that I would get at least a rough and ready version of the message, and instead found a complete and clear warning that a powerful tsunami had occurred just off the Hyogo coast, right next to Kobe. The good news came at the bottom of the text: ‘This is a training message.’ That was my introduction to Google’s new neural-net-based translation, which by bizarre coincidence had launched in Japan that very day. It was the neural net that took my screenshot of the message and turned it from incomprehensible katakana characters into panic-inducing English. That was a vivid lesson in what neural nets could do – but it was also, for me at any rate, an isolated lesson. Daily life did not suddenly fill with new evidence for the power of AI.

What made me, and much of the rest of the world, wake up to the power and potential of the new technology was the launch, in November 2022, of ChatGPT. The people at OpenAI, the company that created ChatGPT, saw it as a low-key rollout of a new customer interface. Instead, it was the fastest-growing tech launch of all time. AI went from a niche interest to the top of the news. It has stayed there. This paradigm-shifting event brings us to our second protagonist, the co-founder and head of OpenAI, Sam Altman.

If Huang is modern tech overlord type one, the academically overachieving immigrant, Altman is type two, the American college dropout. He was born in 1985, the son of a dermatologist and a real-estate broker. Altman had a conventional clever kid’s childhood at the local posh school in St Louis, with the distinguishing feature that he came out as gay in his teens and spoke to a school assembly on the subject. He went from there to Stanford and caught the start-up bug, dropping out in his second year to found a location-based social app, Loopt. The name doesn’t mean anything, but at that point successful start-ups tended to have two Os in their name: Google, Yahoo, Facebook. (They still do: witness Goop, Noom, Zoopla, and my personal favourite, the ‘technology platform that connects the faith ecosystem’, Gloo. One day soon I plan to launch a nonsense-detecting start-up called Booloocks.)

More important than Loopt was the world to which it introduced Altman. His mentor was a British-American software mage called Paul Graham, who had written a famous programming textbook and then made his fortune selling a web company to Yahoo. In 2005, he and his wife, Jessica Livingston, set up a project called Y Combinator, based where they lived in Cambridge, Massachusetts. The idea was to offer funding, mentorship and support to start-ups. The target audience was superbright college kids, and the idea was that instead of doing some boring, CV-polishing internship over the summer, they could come to Y Combinator, which would pay them $6000 and put them through a start-up boot camp, with Graham and his circle of contacts giving them education, advice and networking opportunities.

Y Combinator has been a huge success, and many of the companies it launched are internet-age household names: Airbnb, Reddit, Stripe, Dropbox. The central lesson of the ‘incubator’, as Y Combinator called itself, was that the character and talent of the founder were more important than the specific idea they were working on. Example number one: Sam Altman. He applied to join the first batch of Y Combinator recruits. Graham tried to put him off for a year, on the grounds that, at nineteen, he was too young. Altman didn’t take no for an answer – a key personality trait. In that and in other respects, he made a big impact on Graham. ‘Within about three minutes of meeting him, I remember thinking, “Ah, so this is what Bill Gates must have been like when he was nineteen.”’ Graham once said: ‘Sam is extremely good at becoming powerful.’ In case that doesn’t make his view of Altman clear: ‘You could parachute him into an island full of cannibals and come back in five years and he’d be the king.’

Parmy Olson’s lively Supremacy is essentially positive about Altman, whereas Keach Hagey’s thorough and clear-headed The Optimist is more equivocal. Olson makes Graham’s remark sound light-hearted, joshing and essentially complimentary. Karen Hao’s much more sceptical Empire of AI makes Graham’s words illustrative of an unprincipled ambition so intense it has a tinge of sociopathy. This duality of perspectives runs through Altman’s story. At almost every point, it is possible to see his actions as benign, if sometimes ‘conflict averse’ in a manner that causes misunderstandings. It is also possible to see him as a much darker figure. In the case of that apparently ordinary childhood, there is his version, and then there is the version of his sister, Annie, who in 2021, under the influence of memories recovered through therapy, said on Twitter that she had ‘experienced sexual, physical, emotional, verbal, financial and technological abuse from my biological siblings, mostly Sam Altman and some from Jack Altman’. Altman’s mother told Hao that the allegations were ‘horrible, deeply heartbreaking and untrue’. No outsider can adjudicate that sad story. But the existence of radically different versions of the same events is a recurrent theme in Altman’s life.

This is true even of the founding of OpenAI. By 2014, Altman was king of the cannibal island. Loopt hadn’t taken off, but when Graham stepped down as head of Y Combinator he chose, to general shock, the unknown 28-year-old as his successor. Altman was already unfathomably rich, thanks to a venture capital fund he had set up, Hydrazine, which invested in the star graduates from the start-up incubator. To take just one example, he owned 2 per cent of the payment company Stripe, which at the time of writing is worth roughly $107 billion. He was on his own account no longer motivated by money, but by a longstanding ambition to make an impact in the non-digital world – and, to be fair, his record at Y Combinator bore that out.

Altman’s interest in technologies with consequences in the physical world led him to the subject of AI. Specifically, it led him to the idea that a rogue AI could prove an existential threat to humanity. This anxiety is widely held in certain tech circles. ‘Doomers’, as they’re known, talk about ‘p(doom)’, or the probability of humanity being destroyed by an AI superintelligence not aligned with human interests. Altman, influenced by these worries, emailed Elon Musk, a Doomer with ‘a more paranoid, pessimistic’ view than others in the industry (says Olson). The idea was that ‘someone other than Google’ should develop the technology, so that it ‘belongs to the world’. Musk thought it was ‘probably worth a conversation’. The result was a meeting at a Silicon Valley hotel, with attendees including Musk (who turned up an hour late), Altman, the prominent AI researchers Ilya Sutskever and Dario Amodei, and the star Stripe programmer Greg Brockman, who was in search of a new challenge. As a result of that meeting, Musk funded, and Altman helped to set up, OpenAI, whose purpose was to develop a safe version of superintelligent AI. The new company was to be a non-profit; its only function was ‘to advance digital intelligence in a way that is most likely to benefit humanity as a whole, unconstrained by the need to generate financial return’. The company would publish its research discoveries – hence the ‘open’.

That seems clear. It turned out not to be. Musk’s overriding obsession was the need to beat Google in the race to develop AI. He had developed one of his hyperfixations on Demis Hassabis, the British founder of the AI company DeepMind, which was bought by Google in 2014. To Musk, Hassabis was, as Hao puts it, ‘a supervillain who needed to be stopped’. (Hassabis won the Nobel Prize for chemistry last year, for solving the question of how proteins fold. But wait – maybe that’s exactly what a supervillain would do?) The way to stop Hassabis was to develop AI before he could, and the first stage of doing that was to attract talent. Altman was good at that, and OpenAI soon had several star researchers and programmers, including Sutskever – he was an especially important figure because the image recognition software he had helped to develop, AlexNet, had been one of the breakthroughs in the field of neural nets.

So far so good. The problem was that all this was expensive. Musk had thought that OpenAI needed $1 billion to play with to have a fighting chance of competing with Google. That turned out to be a big underestimate. Computer scientists were starting to find that size was fundamental in the training stage of creating AI – the point at which the data were fed into the algorithms and the neural net went to work. In civilian life, ‘compute’ is a verb. In AI, it is a noun, denoting the size of your computing power. Some AI programs have proved to be, once trained, relatively compact – the Chinese company DeepSeek, for instance, has a cutting-edge model that will work on an ordinary personal computer. I know a couple of people who run it on their laptops. But getting to that point, training the program, is another story, one in which the scale of your compute is all-important. In the tech world, it’s the ultimate example of a contest in which men fight for the right to say, ‘mine’s bigger.’

Google’s was bigger. OpenAI needed to scale up. Altman’s method for doing so was to cut a deal with Microsoft, in which the unloved software giant would give OpenAI $1 billion in funding in return for the exclusive use of OpenAI’s products in its own software and a share in any profits. To ensure that investors didn’t make an exploitative amount of money from the arrangement, returns were capped at a hundred times the initial investment. Microsoft stood to make a paltry $100 billion. Also, the deal would end if and when OpenAI developed Artificial General Intelligence, on the basis that existing forms of money would no longer have any value.

The deal was to be contained within a new, for-profit subsidiary of the parent non-profit. That might sound odd, but the set-up in which a for-profit company is owned by a non-profit entity isn’t unique. Denmark’s Novo Nordisk, for instance, maker of the magic weight loss drugs Ozempic and Wegovy, is majority owned by a non-profit foundation. There are other examples. What is bizarre about the OpenAI arrangement is that the non-profit and the for-profit have contradictory premises. What’s more important: developing ‘aligned’, benign AI, for the benefit of all humanity? Or making enormous amounts of money as quickly as possible by commercially exploiting your new technology? As for the ‘open’ bit, forget it – that aspect of the company’s mission was quietly dropped. Any discoveries OpenAI might make were now proprietary.

Musk was livid. He saw the Microsoft deal as a betrayal of the company’s vision and purpose. Not long afterwards, at the end of 2018, he rage-quit (as gamers call it); later, he announced that he was setting up a competitor, xAI. That company now owns what used to be Twitter and is training on its data. Musk’s alternative to Altman’s company is an AI trained on pornbots, Nazis and spam, with a heavy emphasis on parroting his own worldview. Lovely.

AI had turned into a race. One of the key components in the race was talent, where the company’s position was strong, and another was funding, which was a permanent struggle. To attract attention and therefore funding, the company needed stunts – high-profile events like DeepMind’s development of AlphaGo. Altman turned to multiplayer online video games, which, because of their fluidity, complexity and the unpredictability of human behaviour, are significantly harder for computers than finite rule-based games such as chess and Go. The chosen game was a multiplayer online contest called Defense of the Ancients 2, universally abbreviated to Dota 2. By 2017, OpenAI had developed a program that could beat a professional Dota player in a one-on-one contest. By 2019, it could play as a full team of five players and beat teams of five professionals. A remarkable achievement, with one catch: hardly anybody noticed or cared. The company’s next stunt, in 2019, was to announce that the second version of its showcase product, GPT-2, had such powerful potential for harm that the company had to put restrictions on its release. The software’s ability to generate text attracted a decent amount of attention in the tech world, without making a real dent in public consciousness.

That came on 30 November 2022, with the launch of a consumer interface to a recent version of the company’s software. The underlying software was GPT-3, a model trained with five hundred times more compute than GPT-2. The interface – basically a wrapper, or shop window, for the underlying model – was called ChatGPT. The program could do nothing that existing models could not already do, which is the reason nobody inside OpenAI was expecting what happened next. Within hours of what was supposed to be a low-key trial, the comments on Altman’s launch tweet were, in Hagey’s words, ‘increasingly rapturous screenshots of people asking the bot to do their homework, locate the clitoris and tell them the meaning of life’. In two months, ChatGPT reached a hundred million users, ‘making it the fastest-growing consumer tech product in history’.

This was the point at which everything blew up. The value of the world’s top ten companies is $25.6 trillion. Of that, $15.1 trillion has accumulated since 30 November 2022 and is directly linked to the AI boom. Nvidia’s technology was the first factor driving the explosion – its most powerful chip, the H200, is a must-have for ‘frontier’ AI developers. A single H200 retails for between $30,000 and $40,000, depending on the configuration. The company is worth more than eleven times what it was worth on the day ChatGPT launched.

After Nvidia’s chips, the second big factor driving the boom was the hype created by Sam Altman and OpenAI. Hang on, though – isn’t this the guy who was worried that AI might destroy humanity? The same guy who said, ‘AI will probably, most likely, lead to the end of the world’ and ‘the bad case is, like, lights-out for all of us.’ You know, the Doomer? Well, yes, and it’s not as if people didn’t notice. It was part of a pattern whereby Altman said different things to different people. On Friday, 17 November 2023, the board of OpenAI, accusing Altman of not being ‘consistently candid’ in his communications, but not giving any further detail, sacked him. It came as a shock. When an employee said that Altman’s departure might mean the death of the company, Helen Toner, a member of the board, said: ‘That would actually be consistent with the mission.’ True – though that’s not what OpenAI’s employees, many of whom stood to make an enormous amount of money from the commercialisation of their work, wanted to hear.

The pushback was immediate. Microsoft offered to hire Altman and anyone else from OpenAI who wanted to join him. More than seven hundred of the 770 OpenAI staff signed a petition calling for Altman’s reinstatement. On Tuesday, 21 November, four days after being sacked, Altman was reinstated and the board members who wanted to sack him were forced out. A new board was appointed, its members including the well-known ethical titan Larry Summers. Sutskever, the lead researcher at OpenAI and a board member, went on to resign and started a new company, Safe Superintelligence. Amodei, another lead researcher, had already left, questioning Altman’s commitment to AI safety: he founded the company Anthropic, also dedicated to safe AI. This means that three of the principals at the Silicon Valley meeting that led to the creation of OpenAI – Musk, Sutskever and Amodei – had fallen out with Altman. For each of them, the issue was whether he believed the things he had said about the importance of safety in developing AI. In the lawsuit that Musk eventually brought, he accused Altman of having mirrored his views in an attempt to gain his trust and his funding.

There’s that duality again. Good Sam, bad Sam. We keep encountering these moments in Altman’s story. In 2023, OpenAI approached Scarlett Johansson to provide the voice for a ChatGPT interface (she had voiced the AI operating system in Spike Jonze’s movie Her). She said no. When the software launched, its voice sounded a lot like Johansson’s. Altman celebrated the launch with a one-word tweet: ‘her’. (He is too advanced to use capital letters.) When Johansson objected, OpenAI put out a statement: ‘We cast the voice actor behind Sky’s voice before any outreach to Ms Johansson.’ But … according to Johansson, you were approaching her agent two days before the launch. Goofy or sinister? When the company launched an image creation app that would make images in the style of the great Japanese anime house Studio Ghibli, Altman talked about ‘the breadth of creative use cases’ made possible by the AI generation of images. But the maestro of Ghibli, Hayao Miyazaki, who in many people’s opinion (including mine) is the greatest living exponent of animation, has said that AI art is ‘an insult to life itself’ and added that ‘I would never wish to incorporate this technology into my work at all.’ Altman can’t possibly think that typing a few words into an image generator – ‘J.D. Vance kissing a frog in the style of Studio Ghibli’ – is a form of creativity. His advisers would certainly have told him of Miyazaki’s views, if he didn’t already know them. In saying what he did, was Altman displaying the boyish enthusiasm of a 40-year-old multi-billionaire – or were his remarks smirking, taunting trollery?

As for Altman’s embrace of Doomerism, there are two ways you can look at that, too. He might have believed it, once upon a time. But talking about the existential risk to humanity posed by AI is also, let’s face it, a wonderful marketing tool. This stuff is powerful – so powerful – too powerful – it might even kill us all! Roll up, roll up, and ask it to design a poster of the world’s cheeses in ascending order of strength – before it kills everybody! Our tech overlords like the idea of being Thomas Edison, genius inventor-businessman, but they often have more in common with P.T. Barnum, genius of marketing and hype. Altman could go toe-to-toe with Barnum, and I wouldn’t want to pick a winner.

In addition to being a superb marketing tool, Doomerism is also an excellent distraction from the real-world harm being done by AI right here, right now. One of Altman’s signature moves is to loudly and frequently call for the regulation of AI. But there is a distinction between AI safety, which is hypothetical, and AI harm, which is happening now. For one thing, much of the data on which AI models have been trained is stolen – including, as it happens, from me. The only reason I know is that my books are on a list of works that have been illegally used in training data. Many more people’s work has been stolen without any evidentiary trail to show it.

And then there is the tendency of AI models to, in Hagey’s words, ‘make things up, discriminate against women and minorities and produce toxic content’. A model trained on datasets that incorporate historic patterns of discrimination and bias will inevitably replicate those same patterns. The process of using human feedback to adjust and improve the models’ output is vividly described by Hao: it involves the extensive use of poorly paid foreign labour and is both exploitative on its own terms, and prone to introducing other forms of bias. (The over-correction to bias in AI output is what led to Google’s Gemini model serving users images of black women when asked to show typical popes or Vikings.) AI consumes unconscionable amounts of energy, much of it in the pursuit of obviously trivial output, and it’s not clear when this demand will slow. Sutskever has said: ‘I think that it’s fairly likely that it will not take too long for the entire surface of the Earth to become covered with data centres and power stations.’ That’s not an opponent of AI talking – it’s someone who is working as hard as he can to create the AI future.

So what happens next? The big question is what happens when the bubble pops, and what that means for the future of AI and, for that matter, humanity. Jeff Bezos has spoken of AI being an ‘industrial bubble’, resembling the huge capital investment soaked up in the creation of the railways, rather than a financial bubble, based on pure speculation, which leaves nothing behind it when it pops. That seems reasonable to me.

There are four main possibilities. The first is that AI is a giant nothingburger. Large Language Models – the currently dominant approach, thanks to OpenAI and its competitors – turn out to have insuperable limitations. People have noticed that the models don’t learn from input and have a tendency to ‘hallucinate’. (That word, by the way, is another piece of disguised salesmanship. Talk of ‘hallucinations’ distracts us from the fact that AIs get things wrong all the time. The implication is that the mistakes are a side effect of being sentient – because only sentient beings can hallucinate. AIs aren’t sentient, and can’t hallucinate, any more than a fridge or a toaster can. They also can’t lie, because that involves intention. What they can do is get things wrong.) Everyone gives up on AI and the whole story goes away. This seems to me to be the least likely scenario, because of the various impacts AI is already having.

Scenario number two: somebody builds a rogue superintelligence, which destroys humanity. Preventing this was, let’s not forget, the motive behind the creation of OpenAI. The apocalyptic scenario seems to me unlikely, for reasons linked to the issue of sentience. AIs can mimic intent, but they can’t possess it. So why would they bother to kill us? Again – a fridge can kill you (there’s a memorable death-by-fridge in a novel by A.S. Byatt) but it can’t do it on purpose.

Third scenario: AI leads to the ‘singularity’, the point at which computers become more intelligent than human beings; learn to self-program and self-improve; do so at speed and at scale; and lead humanity to a new era of, to use the latest hot term, abundance. Artificial general intelligence, or artificial superintelligence, creates a new era of cheap energy, drug discovery, desalination, an end to hunger, you name it. ‘Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony and the discovery of all of physics – will eventually become commonplace.’ That’s from an essay by Altman published last year, titled ‘The Intelligence Age’:

Here is one narrow way to look at human history: after thousands of years of compounding scientific discovery and technological progress, we have figured out how to melt sand, add some impurities, arrange it with astonishing precision at extraordinarily tiny scale into computer chips, run energy through it and end up with systems capable of creating increasingly capable artificial intelligence. This may turn out to be the most consequential fact about all of history so far.

Fourth scenario: AI turns out to be what Arvind Narayanan and Sayash Kapoor call a ‘normal technology’. It is an important invention, just as electricity or the internet are important, but it is not a radical discontinuity in the story of mankind. This is partly because computer intelligence is inherently limited and partly because of ‘bottlenecks’, human obstacles to the adoption of technology. Some things stay the same, and some change radically. Some jobs, especially entry-level white-collar jobs, are automated away. Back-end processes in logistics and suchlike become more efficient. Some forms of labour become more valuable and some less. There are breakthroughs in some fields, such as drug discovery. Other areas are largely untouched, and there are many areas in which AI is a strange combination of surprisingly useful and profoundly unreliable.

The last of those options – ‘normal technology’ – seems to me to be the most likely, not least because it is the version that is already, to some extent, here. Some forms of inequality are already being magnified by AI – the one between capital and labour, for a start. Young people are already noticing the impact of automation on starter-level employment. Freelance wages in some areas of the economy are already down. If you had to pick a single text to sum up the last few decades in political economy, it would be ‘for he that hath, to him shall be given.’ If I had to bet, it would be on the continuation of that trend. You know what, though? One of the fun things about AI is that, unlike pretty much every other area of politics and economics, we’re going to get a clear answer. ‘It’s hard to even imagine today what we will have discovered by 2035,’ Altman has written. By 2035, we’ll either be extinct, on the verge of unimaginable prosperity for all humanity or kind of the same, but more so. Pass the popcorn. Or wait until your robot butler can do it for you.
