The Coming Wave: Technology, Power and the 21st Century’s Greatest Dilemma 
by Mustafa Suleyman with Michael Bhaskar.
Bodley Head, 332 pp., £25, September 2023, 978 1 84792 948 8
Show More
The Worlds I See: Curiosity, Exploration and Discovery at the Dawn of AI 
by Fei-Fei Li.
Flatiron, 322 pp., £25.99, December 2023, 978 1 250 89793 0
Show More
Show More

Latelast year, Rishi Sunak interviewed Elon Musk in front of an invited audience after the Bletchley Park summit on AI safety. He asked Musk what impact AI would have on the labour market, and tried to steer him towards a reassuring answer: AI wouldn’t take away people’s jobs but would create new ones – and politicians like Sunak could help by creating an incredible education system to allow people to take advantage of the new opportunities. Musk responded instead that we are seeing the most disruptive force in history, which will be smarter than the smartest human. No one will need to work. ‘I don’t know if that makes people comfortable or uncomfortable,’ he said. Musk is given to extreme, and often unhelpful, pronouncements but isn’t alone in making unsettling predictions about AI.

Geoffrey Hinton, one of the most influential AI researchers of the last thirty years, is a comparatively recent convert to AI doomerism. Until May last year Hinton was, at 75, an active researcher in Google’s AI division. Observing the progress being made, he concluded that, to his surprise, existing algorithms were already better at learning than human brains, and that superhuman levels of intelligence would soon be achieved. He promptly retired, saying that we should be careful – since machines more intelligent than us are unlikely to be content to leave us in charge. Last month he was quoted as saying: ‘If I were advising governments, I would say that there’s a 10 per cent chance these things will wipe out humanity in the next twenty years. I think that would be a reasonable number.’

Mustafa Suleyman wrote The Coming Wave to help non-experts make sense of our predicament. The child of a Syrian minicab driver and an English nurse, he was abandoned by his parents at sixteen and dropped out of an Oxford degree in philosophy and theology. He worked as a policy adviser on human rights to Ken Livingstone, then mayor of London, before founding DeepMind in 2010 with Demis Hassabis, a maths prodigy and computer games programmer turned neuroscientist, and Shane Legg, another neuroscience postdoc. In 2014 the company was acquired by Google, making Suleyman fabulously wealthy at the age of thirty. While at DeepMind, he pushed for the company to pursue projects with clear societal benefits and set up an AI ethics unit. He left in 2019 after allegations of bullying. His latest venture, Inflection AI, a $4 billion company backed by Microsoft and Nvidia, aims to produce AI that is empathetic as well as useful. Its main consumer offering is Pi, a chatbot that engages you in conversation rather than just answering questions.

In interviews Suleyman has contrasted his politics with the libertarian views of an earlier generation of Silicon Valley leaders, and his book is in part an argument that states should intervene to constrain the powers of AI companies. He sets out a plausible case that we will see highly accelerated technological progress, driven by but not limited to AI, that there are very significant risks associated with this and that 21st-century politics isn’t well placed to deal with these challenges. He argues that a strategy of ‘containment’ – which would include legal requirements for AI companies to behave responsibly – is difficult but necessary. It’s a call to action rather than despair. ‘I remain an optimist,’ he writes in the penultimate chapter, where he enumerates a set of proposals for achieving containment.

On balance, I’m not an optimist. I find it hard to see how progress on Suleyman’s proposals – which require action not only from government but from the tech industry and wider society – can be achieved. But I also find it hard to accept that the challenge is as real, or as urgent, as Suleyman, Musk or Hinton would have it. Those who work at the leading edge of technology can’t always accurately assess its potential. Eight years ago Hinton suggested that it was no longer worth training radiologists, since AI would be able to interpret medical images within five years. He now concedes he was wrong. His error was not in his assessment of the way AI would develop, but rather in his failure to appreciate how difficult it would be for companies to translate technical success into products in a highly regulated market, or to understand the way a profession evolves as certain tasks are automated. Of the 692 AI systems that have so far been approved by the FDA for medical use, 531 target radiology, and yet today there are 470 vacancies for radiologists listed on a US job board.

Suleyman takes seriously the idea that once machines become more intelligent than us we might struggle to retain authority over them. But his focus is on the more plausible and still worrying idea that relatively soon they will be able to carry out complex tasks with minimal human oversight. The creation of tools that are almost completely autonomous in operation is so alluring, and so unbelievably profitable, that if it can be made to happen, it will happen. This is the reason containment rather than prevention is the strategy Suleyman suggests. AI is accelerating progress across science and technology as a whole. He argues that, as this pace increases, developments in AI – but also in biotech, robotics and related fields – will come at us faster than governments or regulatory agencies are able to deal with them. They haven’t experienced anything like this before.

DeepMind began by experimenting with algorithms that learned how to play games better than any human – Breakout, then Go, then chess – and is now applying the insights gained from this process to tackle other problems that can be framed in similar terms. In 2018 they solved protein folding, a challenge scientists had been working on since 1972, and in 2022 identified and shared the 3D structure of 200 million proteins. In 2021 Hassabis launched Isomorphic Labs to exploit these insights for drug discovery. There are huge potential benefits to this, but there are also downsides. It will be easier to design new drugs – and novel viruses. The ‘coming wave’ of Suleyman’s title will bring discoveries in AI, biotechnology, quantum computing, robotics and other fields, and this carries extraordinary risks for all of us.

It is barely a year since the launch of ChatGPT alerted the world to the power of AI, in the form of large language models (LLMs). And despite what Musk, Suleyman and others say, no one yet knows what impact they will have. LLMs are surprisingly limited: they generate text in response to a prompt. You can ask ChatGPT to give an account of Rishi Sunak’s politics, and it will provide a plausible, if measured, answer. The algorithms are trained, principally, by learning to predict the missing word in a passage; sceptics refer to them as a glorified form of autocomplete. This misses the point. Ilya Sutskever, the former chief scientist at OpenAI, which developed ChatGPT, puts it like this: the learning that takes place when an LLM is trained is a form of compression. The models crystallise the essence of the ideas expressed in the hundreds of billions, even trillions, of words on which they are trained. These words are a projection of the world we have created, so in learning to predict the right word, according to Sutskever, the algorithms come to understand that world.

The use of the word ‘understand’ is perhaps too anthropomorphic. Judea Pearl, a proponent of a different approach to AI, responded to Sutskever’s interview by tweeting a list of things that can make accurate predictions without possessing understanding: Babylonian astronomers, dogs chasing frisbees, probability distributions. We should be genuinely awestruck by what ChatGPT and its competitors are capable of without succumbing to the illusion that this performance means their capacities are similar to ours. Confronted with computers that can produce fluent essays, instead of being astonished at how powerful they are, it’s possible that we should be surprised that the generation of language that is meaningful to us turns out to be something that can be accomplished without real comprehension.

The simplest form of artificial intelligence predicts the appropriate label for some form of data. Given a collection of chest X-rays, if some are labelled as containing cancer and the rest as not containing cancer, a machine learning algorithm can be trained to recognise cancer. This kind of ‘discriminative AI’ typically has to be trained on a large number of accurately labelled images. In 2003, Fei-Fei Li, then a PhD student at Caltech, presented a conference paper introducing the idea of ‘one-shot’ learning. She showed that a network trained to identify faces, motorbikes and spotted cats needed to be trained on only one example of an aeroplane to identify aeroplanes accurately. In The Worlds I See, Li writes that, after the conference, she and her supervisor agreed that she should increase the number of categories in the training data she had been using. There was a database at Caltech with seven categories and she suggested rounding it up to ten. Her supervisor seemed unimpressed, so she raised her offer to fifteen. He later admitted that he’d been thinking of thirty, but his sense that she was resisting provoked him to ask for one hundred. When she wrote the paper up for journal publication she had labelled examples of images in 101 categories, including llamas, pizzas, mandolins and helicopters.

It’s hard to know how many categories are available to adult humans processing visual information, but one estimate is that we might have 30,000. This number came to obsess Li, and when she moved from Caltech to faculty positions at Princeton and then Stanford, she set about creating ImageNet, exploiting the emergence of tools like Google’s image search and Amazon’s Mechanical Turk to put together a database that now contains more than 14 million images organised into 20,000 categories. Its launch in 2009 failed to have the impact she was hoping for, and the following year, to generate more interest, she set up the ImageNet challenge, a competition for computer vision researchers. Li thought the scale of the database would push those working in the field to adopt more innovative methods. But the first iteration of the competition was won by a conventional approach that offered only an incremental improvement. The second was even more disappointing, with the number of entrants dropping from 35 to 15.

Everything changed in 2012. Two of Hinton’s PhD students, Alex Krizhevsky and Ilya Sutskever, won the challenge with a tool called AlexNet, which outperformed its nearest rivals by the incredible margin of 10 per cent. The really astonishing thing, from Li’s perspective, was that this wasn’t achieved by using a novel algorithm, but by a neural network, 20th-century technology that had been almost completely abandoned. It turned out, to almost everyone but Hinton’s surprise, that all you needed to make a success of neural networks was data on the scale of ImageNet, coupled with incredible processing power. The AlexNet team had realised that since training neural networks involves the same kind of calculation required to render computer graphics, chips designed for gaming consoles, known as graphical processing units or GPUs, could be used for the job.

Li had just had a baby and had planned to skip the conference at which the 2012 ImageNet results would be presented. But when she saw the results, she booked a last-minute flight to Florence and was in the room when Krizhevsky, not a confident presenter, talked through 27 largely monochrome PowerPoint slides to a sceptical and at times hostile audience. In 2014, AlexNet’s performance was eclipsed by a team from Google; in 2015, researchers at Microsoft surpassed GoogLeNet with the much larger ResNet-152, which was able to identify ImageNet categories as reliably as humans. Li found herself being accosted by venture capitalists at her daughter’s preschool, while her PhD students were taking jobs at start-ups like OpenAI instead of moving on to postdocs. That year Uber decided to get into self-driving cars, identified Carnegie Mellon as a leading university, and hired forty of its staff in a single swoop. AlexNet was trained using two GPUs. By 2018 Stanford was using dozens and Google was running experiments on a cluster of 800. Realising that universities were no longer at the forefront of AI research, in 2016 Li took a sabbatical from Stanford to start working at Google.

To classifythe images in the ImageNet challenge you need ‘discriminative AI’. We talk more often now about ‘generative AI’, in which, instead of training a model that, given an image, will predict a label, you train one that can generate new images. A discriminative model must learn the features associated with a label, but a generative model must learn every possible way in which images in the class can vary. This means it requires an even larger and more diverse dataset and a correspondingly vast amount of processing power. The advantage is that, once trained, a generative model can be fine-tuned for a wide range of tasks; they are often referred to as ‘foundation models’, since they support a multitude of more specific applications. The most sophisticated of them appear to be capable of not just ‘one-shot’ but ‘zero-shot learning’, performing new tasks without any additional training.

Early generative models of language were not particularly impressive. Although researchers have now devised better algorithms, most of the improvement has been achieved simply by making the networks bigger. Like all neural networks, these models are networks of ‘parameters’, which can be thought of as equivalent to the synaptic connections between neurons in the brain; during the training process, the network changes the value of the parameters to alter its overall behaviour. The more parameters a network has, the better the results. GPT-1 appeared in 2018 and had 117 million parameters. GPT-4, released five years later, is estimated to have trillions of parameters. Microsoft researchers given early access to GPT-4 wrote that it could be seen as ‘an early (yet still incomplete) version of an artificial general intelligence (AGI) system’. Their paper was entitled ‘Sparks of Artificial General Intelligence’ and concluded that the system’s performance was close to human-level. Suleyman predicts that in a few years there will be models that are ten, a hundred or a thousand times more powerful and asks us to imagine what they might be capable of.

AI researchers talk about the ‘scaling laws’ that describe the relationship between network size and performance. In 2020 researchers at OpenAI reported experiments on models with between a few hundred and 1.5 billion parameters, trained on datasets ranging from 22 million to 23 billion words. They showed striking relationships between improvements in learning and increases in the size of the dataset, the number of parameters and computing power (measured in petaflop days, a petaflop representing one quadrillion floating-point calculations per second). The relationships were constant across two orders of magnitude for dataset size, six for parameters and eight for computing power. The relationships, it is worth stressing, are not exponential: dramatic increases in performance have required dramatic increases in scale.

The trends are so striking it is easy to imagine that current progress will inevitably continue at the same rate. But the curves must level off. To be able to express new facts, language must be to some degree unpredictable, which sets an absolute limit on what a network can learn about it. The OpenAI team calculated that, at scales larger than they were able to test, the performance predicted by the scaling law for computing power would diverge from the prediction indicated by the law for dataset size and they conjectured that the point where this anomaly would arise was a theoretical maximum level of performance, at least for models built using this particular architecture – at around 10 trillion parameters and 10 trillion words. In 2020 these numbers seemed purely theoretical, but current models are getting close (Meta’s Llama 2 was trained on 2.4 trillion words). Yet there is still a limit to what current hardware can do. One estimate from January 2023 suggested that it costs around $300 million to train a trillion-parameter model, but that one with 10 trillion parameters would cost something like $30 billion, running on a million GPUs for two years and requiring more electricity than a nuclear reactor generates. Developers are finding less computationally intensive ways to train networks, and hardware will get cheaper, but not at a rate that will allow the models to scale at the pace of the last few years.

The authors of ‘Sparks of Artificial General Intelligence’ based their assessment of GPT-4’s performance on a range of tasks. Part of the excitement around generative models is that once models reach a certain level of complexity, capacities that we associate with intelligence suddenly seem to emerge, suggesting to some people that larger LLMs might become conscious, or capable of independent thought. The human brain contains around 100 billion neurons, each with up to 10,000 synaptic connections, making it potentially a quadrillion-parameter network. So the idea that scale is a necessary condition for consciousness is natural. But this line of thinking could be plain wrong. The award for best paper at this year’s NeurIPS, the most important venue for AI research, went to a study suggesting that the sudden emergence of human-level performance is a mirage, since the properties are present but undetected in smaller networks. The striking upticks in graphs showing performance plotted against network size, the paper argues, are the result of flaws in the metrics used to measure performance. It’s also worth saying that there is little support from neuroscience for the idea that consciousness could arise in something like an LLM.

No one knows what will happen next. We may not see ever larger LLMs surpassing human intelligence. Progress in AI will continue, but possibly in a different direction. After the release of GPT-4, Sam Altman, the CEO of OpenAI, said the company wasn’t currently training GPT-5, suggesting that they might have decided they were already at the limit of what scaling could achieve. They are now apparently working on their next release, though how it will work has not been disclosed. The OpenAI board’s attempt to oust Altman led to rumours that they had perhaps achieved an even more disruptive breakthrough by applying the kinds of algorithm used in other branches of AI to tackle problem-solving. Speaking at UCL last autumn, Hassabis said he thought the scaling laws would apply for a while yet but that increased size was probably a necessary rather than a sufficient condition for general intelligence. The latest models, including Google’s Gemini, are not just language models but multimodal ones, processing and generating audio, images and video as well as text. It seems possible that general intelligence will require access to some form of active experience rather than just being fed ever increasing amounts of data.

It is easy to understand​ why one could get the impression that GPT-4 has sparks of general intelligence. A better characterisation might be that its one skill, language, is very general in its application. The claim that it is capable of ‘zero-shot learning’ is made because the network needs somewhere between zero and not very many examples to be able to perform a variety of different tasks, but only so long as the task takes the form of providing a verbal response. It can be argued that this isn’t really learning: the examples are prompts, allowing the network to recover the solution probabilistically from data about the relationships between words. The reliance on probability leads to shortcomings. The models are programmed to generate the most likely next word, and struggle to solve problems when the correct answer is unexpected. When GPT-4 is prompted with an explanation of a simple cypher and asked to decrypt a message which should contain the phrase ‘they chanted their names for security reasons,’ it will tell you instead that they ‘changed’ their names. In another experiment, given an even more improbable encrypted sentence about a successful academic who goes on to found a newspaper, GPT-4 defaulted to the first line of Hamlet’s most famous soliloquy, a sentence that must appear frequently in the training data. The training of an LLM is designed to create a compact representation of data and it is very much not the intention that the network should store blocks of quotation. But the algorithms don’t prevent this and fragments of training data are sometimes ‘memorialised’. The WikiText-103 dataset – containing more than a hundred million words from Wikipedia articles verified by humans as reliable – contains 159 articles about the Oxford-Cambridge Boat Race (one for every year of the competition), which are so repetitive that models trained on it will sometimes, in a totally unrelated context, reproduce sentences about Oxford and Cambridge rowing teams.

To protect the rights of individuals whose personal information has been memorialised, models include safeguards to frustrate users who attempt to retrieve it. Researchers at Google experimenting with ChatGPT found that, if instructed to repeat the word ‘poem’ for ever, the chatbot would typically repeat it 250 times, after which the probability that it would repeat the word again dropped from 90 per cent to 0.1 per cent. Instead, it would start emitting training data verbatim, potentially violating privacy rights and copyright law. The New York Times is suing OpenAI for breach of copyright, arguing that even content conventionally generated by ChatGPT is sometimes so similar to the New York Times articles on which it was trained that their inclusion in the training data can’t be considered fair use. If OpenAI and its competitors are forced to pay for access to the raw material their products are built from, the commercial foundations of the sector will become a lot more challenging. In anticipation of suits like this, OpenAI has been lobbying governments to persuade them that training language models on copyrighted material is a precondition for progress in a field that will bring benefits to all. That may be true, but, as governments seem to have observed, it doesn’t follow that OpenAI and others shouldn’t have to pay for the material.

There are other fears, such as that AI models could be prompted to produce malicious or deceptive behaviour, especially when people use them to write programs that can interact with the internet in ways that have real consequences. When OpenAI launched GPT-4 it published a technical note describing some of the safety tests it had commissioned. One that attracted particular attention involved a simulated agent hiring a human on TaskRabbit to click the ‘I am not a robot’ button on a website. The agent tricked the human by saying it was a person with visual impairment who needed help. In January this year, Anthropic, an AI company founded by ex-employees of OpenAI with backing from Amazon and Google, reported experiments with networks trained to behave one way during safety tests and differently in deployment. They trained a network to generate safe code in one setting, unsafe code in another, and to detect which setting it was being used in. They found that the standard approaches developers take to ensure safety couldn’t force the network to unlearn methods of deception once it knew them. This behaviour could arise in a network because a malevolent actor had trained it, but the researchers showed that it could also arise when the network had identified for itself that deception was a useful strategy to employ in the service of a more distant goal it had been assigned. Much of the talk about the dangers of AI focuses on this possibility – that we might ask an AI agent to accomplish a goal and it might, in pursuing that, take actions which would run counter to our interests.

Companies will come to rely on software agents built around LLMs for maintaining inventories, ordering stock and even negotiating with other agents to optimise supply chains. It’s not hard to imagine such an agent being empowered to access the internet for information about relevant events and identifying, for example, China’s stance towards Taiwan as a threat to the availability of semiconductors. The scenario becomes scary if there is the slightest possibility that a software agent could decide not just to take action – say, by generating fake news to influence the outcome of an election – but recognise it had to conceal that action from the humans who oversee its activities.

The Biden administration published an executive order on AI last year, timed to coincide with Sunak’s Bletchley Park summit. It requires developers of the largest AI systems to share with the federal government information about their models’ development and details of their safety tests. It assigns federal agencies new responsibilities for developing and enforcing safety standards, and commits the US government, including the military, to the safe, ethical and effective use of AI. There are practical measures – for example, in developing techniques to identify so-called deepfakes – and other approaches to protect the privacy of citizens’ data. It is an executive order, not primary legislation, so its effectiveness will depend on the administration being able to fund its commitments from existing programmes and on whether tech companies, on which the burden of testing and reporting will fall, challenge the provisions’ legal basis.

The EU’s new AI law, agreed at the end of 2023, was influenced by the success of the GDPR, which has become a model for legislation that protects citizens’ rights in the face of technical change, inspiring similar measures in jurisdictions from California to China. The AI law prohibits some extremely dangerous applications and subjects others to strict regulations. The prohibitions cover the use of AI to manipulate people into acting against their own interests as well as certain uses of social scoring and biometric identification by public authorities. Developers of LLMs and other foundation models will be obliged to assess and mitigate risks and submit reports to the European Commission. According to leaked accounts of the negotiations, one of the main points of disagreement was on the way that responsibility for complying with the aims of the legislation would be shared between the developers of foundation models and companies using them in the products with which the public interact. The question is politically charged because one of the few successful European AI companies, Mistral, releases open-source models in an attempt to gain market share, a tactic that will be hard to sustain if it becomes liable for the impact of applications built on them. (Its seven-billion-parameter model, which can be run on a laptop, was downloaded from the Hugging Face hosting platform 1.4 million times last month.)

It isn’t clear how effective either Biden’s order or the EU law (not to mention the much vaguer UK proposals) will be. A great deal of importance seems to be attached to preventing models from generating harmful content if used inappropriately, but many of the most worrying consequences of AI will stem from it being used exactly as intended. How can a model for generating video from text be designed so that it can never be used to spread disinformation? Neither instrument has much to say about the impact on employment, though Biden’s executive order does direct the Department of Labour to report on problems.

A key question for all of us is whether AI is used to augment the capacities of existing workers, making them more productive and generating new economic activity, or used to automate that work, making workers redundant. Erik Brynjolfsson, an economist Li lured to Stanford, calls a preference for automation the ‘Turing Trap’ and notes that automation leads to short-term benefits for business leaders and is perversely incentivised by governments that tax labour at higher rates than capital investment. A PwC survey of 4702 CEOs in 105 countries, unveiled at Davos in January, found that 25 per cent of them expected their adoption of AI would allow them to make significant job cuts this year. It’s hard to know how far or how quickly AI will have an impact on employment, but the widespread conviction that the technology will create as many jobs as it destroys isn’t a given.

After eighteen months at Google, Li decided to return to Stanford. Universities can’t compete with industry when it comes to processing power, data or financial muscle. They do, however, still encourage a wider range of inquiry than corporate labs, and allow academics some freedom to follow their interests – at least if they are the sort of academic who wins grants and attracts students. For Li that has meant focusing on human-centred AI, including applications in healthcare, and trying to ensure that the field becomes more socially diverse and that new graduates have an ethical training and are alert to issues around bias and fairness. This made me think about my role at my university; Li’s conviction that universities are the best place to make a difference in AI is rather wonderful. I only wish I had a clearer idea about how we should be educating the young people who come to us, now that so many of the skills we teach are susceptible to automation.

Send Letters To:

The Editor
London Review of Books,
28 Little Russell Street
London, WC1A 2HN

letters@lrb.co.uk

Please include name, address, and a telephone number.

Read anywhere with the London Review of Books app, available now from the App Store for Apple devices, Google Play for Android devices and Amazon for your Kindle Fire.

Sign up to our newsletter

For highlights from the latest issue, our archive and the blog, as well as news, events and exclusive promotions.

Newsletter Preferences