The Shallows: How the Internet Is Changing the Way We Think, Read and Remember 
by Nicholas Carr.
Atlantic, 276 pp., £17.99, September 2010, 978 1 84887 225 7

‘I don’t own a computer, have no idea how to work one,’ Woody Allen told an interviewer recently. Most of us have come to find computers indispensable, but he manages to have a productive life without one. Are those of us with computers really better off?

There are two ways that computers might add to our wellbeing. First, they could do so indirectly, by increasing our ability to produce other goods and services. In this they have proved something of a disappointment. In the early 1970s, American businesses began to invest heavily in computer hardware and software, but for decades this enormous investment seemed to pay no dividends. As the economist Robert Solow put it in 1987, ‘You can see the computer age everywhere but in the productivity statistics.’ Perhaps too much time was wasted in training employees to use computers; perhaps the sorts of activity that computers make more efficient, like word processing, don’t really add all that much to productivity; perhaps information becomes less valuable when it’s more widely available. Whatever the case, it wasn’t until the late 1990s that some of the productivity gains promised by the computer-driven ‘new economy’ began to show up – in the United States, at any rate. So far, Europe appears to have missed out on them.

The other way computers could benefit us is more direct. They might make us smarter, or even happier. They promise to bring us such primary goods as pleasure, friendship, sex and knowledge. If some lotus-eating visionaries are to be believed, computers may even have a spiritual dimension: as they grow ever more powerful, they have the potential to become our ‘mind children’. At some point – the ‘singularity’ – in the not-so-distant future, we humans will merge with these silicon creatures, thereby transcending our biology and achieving immortality. It is all of this that Woody Allen is missing out on.

But there are also sceptics who maintain that computers are having the opposite effect on us: they are making us less happy, and perhaps even stupider. Among the first to raise this possibility was the American literary critic Sven Birkerts. In his book The Gutenberg Elegies (1994), Birkerts argued that the computer and other electronic media were destroying our capacity for ‘deep reading’. His writing students, thanks to their digital devices, had become mere skimmers and scanners and scrollers. They couldn’t lose themselves in a novel the way he could. This didn’t bode well, Birkerts thought, for the future of literary culture.

Suppose we found that computers are diminishing our capacity for certain pleasures, or making us worse off in other ways. Why couldn’t we simply spend less time in front of the screen and more time doing the things we used to do before computers came along – like burying our noses in novels? Well, it may be that computers are affecting us in a more insidious fashion than we realise. They may be reshaping our brains – and not for the better. That was the drift of ‘Is Google Making Us Stupid?’, a 2008 cover story by Nicholas Carr in the Atlantic. Carr, a technology writer and a former executive editor of the Harvard Business Review, has now elaborated his indictment of digital culture into a book, The Shallows.

Carr thinks that he was himself an unwitting victim of the computer’s mind-altering powers. Now in his early fifties, he describes his life as a ‘two-act play’, ‘Analogue Youth’ followed by ‘Digital Adulthood’. In 1986, five years out of college, he dismayed his wife by spending nearly all their savings on an early version of the Apple Mac. Soon afterwards, he says, he lost the ability to edit or revise on paper. Around 1990, he acquired a modem and an AOL subscription, which entitled him to spend five hours a week online sending email, visiting ‘chat rooms’ and reading old newspaper articles. It was around this time that the programmer Tim Berners-Lee wrote the code for the World Wide Web, which, in due course, Carr would be restlessly exploring with the aid of his new Netscape browser. ‘You know the rest of the story because it’s probably your story too,’ he writes.

Ever-faster chips. Ever quicker modems. DVDs and DVD burners. Gigabyte-sized hard drives. Yahoo and Amazon and eBay. MP3s. Streaming video. Broadband. Napster and Google. BlackBerrys and iPods. Wi-Fi networks. YouTube and Wikipedia. Blogging and microblogging. Smartphones, thumb drives, netbooks. Who could resist? Certainly not I.

It wasn’t until 2007, Carr says, that he had the epiphany that led to this book: ‘The very way my brain worked seemed to be changing.’

Lest we take him to be speaking metaphorically, Carr launches into a brief history of brain science, which culminates in a discussion of ‘neuroplasticity’: the idea that experience affects the structure of the brain. Scientific orthodoxy used to hold that the adult brain was fixed and immutable: experience could alter the strengths of the connections among its neurons, it was believed, but not its overall architecture. By the late 1960s, however, striking evidence of brain plasticity began to emerge. In one series of experiments, researchers cut nerves in the hands of monkeys, and then, using microelectrode probes, observed that the monkeys’ brains reorganised themselves to compensate for the peripheral damage. Later, tests on people who had lost an arm or a leg revealed something similar: the brain areas that used to receive sensory input from the lost limbs seemed to get taken over by circuits that register sensations from other parts of the body (which may account for the ‘phantom limb’ phenomenon). Signs of brain plasticity have been observed in healthy people, too. Violinists, for instance, tend to have larger cortical areas devoted to processing signals from their fingering hands than do non-violinists. And brain scans of London cab drivers taken in the 1990s revealed that they had larger than normal posterior hippocampuses – a part of the brain that stores spatial representations – and that the increase in size was proportional to the number of years they had been in the job.

The brain’s ability to change its own structure, as Carr sees it, is nothing less than ‘a loophole for free thought and free will’. But, he hastens to add, ‘bad habits can be ingrained in our neurons as easily as good ones.’ Indeed, neuroplasticity has been invoked to explain depression, tinnitus, pornography addiction and masochistic self-mutilation (this last is supposedly a result of pain pathways getting rewired to the brain’s pleasure centres). Once new neural circuits become established in our brains, they demand to be fed, and they can hijack brain areas devoted to valuable mental skills. Thus, Carr writes: ‘The possibility of intellectual decay is inherent in the malleability of our brains.’ And the internet ‘delivers precisely the kind of sensory and cognitive stimuli – repetitive, intensive, interactive, addictive – that have been shown to result in strong and rapid alterations in brain circuits and functions’. He quotes the brain scientist Michael Merzenich, a pioneer of neuroplasticity and the man behind the monkey experiments in the 1960s, to the effect that the brain can be ‘massively remodelled’ by exposure to the internet and online tools like Google. ‘THEIR HEAVY USE HAS NEUROLOGICAL CONSEQUENCES,’ Merzenich warns in caps – in a blog post, no less.

Many in the neuroscience community scoff at such claims. The brain is not ‘a blob of clay pounded into shape by experience’, Steven Pinker has insisted. Its wiring may change a bit when we learn a new fact or skill, but its basic cognitive architecture remains the same. And where is the evidence that using the internet can ‘massively remodel’ the brain? The only germane study that Carr is able to cite was undertaken in 2008 by Gary Small, a professor of psychiatry at UCLA. Small recruited a dozen experienced web surfers and a dozen novices, and scanned their brains while they did Google searches. Sure enough, the two groups showed different patterns of neural firing. The activity was broader in the experienced web surfers; in particular, they made heavy use of the dorsolateral prefrontal cortex, an area of the brain associated with decision-making and problem-solving. In the novices, by contrast, this area was largely quiet.

Is ‘broader’ the same as ‘worse’? Rather the opposite, one might think. As Carr admits, ‘the good news here is that web surfing, because it engages so many brain functions, may help keep older people’s minds sharp.’ Nor did the brain changes caused by web surfing seem to interfere with reading. When the researchers had the subjects read straight texts, there was no significant difference in brain activity between the computer veterans and the novices. And just how extensive was the alleged rewiring? When the UCLA researchers had the novices spend an hour a day surfing the web, it took only five days for their brain patterns to look like those of the veterans. ‘Five hours on the internet, and the naive subjects had already rewired their brains,’ Small concluded. Typically, though, brain changes that occur very quickly can also be reversed very quickly. If, for example, a normally sighted person is made to wear a blindfold, in a week’s time the visual centres of their brain will have been taken over to a significant degree by the tactile centres. (This was found in experiments on the learning of Braille.) But it takes only a single day after the blindfold is removed for brain function to snap back to normal.

If surfing the web stimulates problem-solving and decision-making areas of the brain, as the UCLA study indicated, are we entitled to conclude, pace Carr, that Google makes us smarter? That depends on what you mean by ‘smart’. Psychologists distinguish two broad types of intelligence. ‘Fluid’ intelligence is one’s ability to solve abstract problems, like logic puzzles. ‘Crystallised’ intelligence is one’s store of information about the world, including learned short cuts for making inferences about it. (As one might guess, fluid intelligence tends to decline with age, while the crystallised variety tends to increase, up to a point.) There is plenty of evidence that computers can stoke fluid intelligence. Ever played a video game? Maybe you should have. Video-gamers are better at paying attention to several things at once than non-players, and are better at ignoring irrelevant features of a problem. Very young children trained with video games have been shown to develop superior attention-management skills, scoring substantially higher than their untrained peers on some IQ tests. You can actually see the improvement on an EEG: four-year-olds trained on video games display patterns of activity in the attention-control parts of their brains that you’d normally expect to find in six-year-olds. Carr acknowledges the evidence that video games can enhance certain cognitive skills. But he insists that these skills ‘tend to involve lower-level, or more primitive, mental functions’. Those who are unfamiliar with video games might find that plausible, but a very different picture emerges from Steven Johnson’s book Everything Bad Is Good for You (2005). According to Johnson, the sophisticated video games of today (unlike the simplistic Pac-Man-style games of yesteryear) involve richly imagined worlds with their own hidden laws. To navigate such worlds, one must constantly frame and test hypotheses about their underlying logic. This is hardly a pastime that promotes mental flightiness. ‘The average video game takes about 40 hours to play,’ Johnson writes, ‘the complexity of the puzzles and objectives growing steadily over time as the game progresses.’

Even if computers can improve our fluid intelligence, perhaps they are inimical to crystallised intelligence – that is, to the acquisition of knowledge. This seems to be Carr’s fall-back position. ‘The net is making us smarter,’ he writes, ‘only if we define intelligence by the net’s own standards. If we take a broader and more traditional view of intelligence – if we think about the depth of our thought rather than just its speed – we have to come to a different and considerably darker conclusion.’ Why is the ‘buzzing’ brain of the computer user inferior to the ‘calm mind’ of the book reader? Because, Carr submits, a buzzing brain is an overloaded one. Our ability to acquire knowledge depends on information getting from our ‘working memory’ – the mind’s temporary scratch pad – into our long-term memory. The working memory contains what we are conscious of at a given moment; it is estimated to hold no more than about four items of information at a time, which quickly vanish if they are not refreshed. The working memory is thus the bottleneck in the learning process – or, to use Carr’s image, the ‘thimble’ by which we must fill up the ‘bathtub’ of our long-term memory. A book provides a ‘steady drip’ of information which, through sustained concentration, we can transfer by means of this thimble with little spillage. But on the web, Carr writes, ‘we face many information faucets, all going full blast. Our little thimble overflows as we rush from one faucet to the next,’ and what we end up with is ‘a jumble of drops from different faucets, not a continuous, coherent stream from one source.’

This is a seductive model, but the empirical support for Carr’s conclusion is both slim and equivocal. To begin with, there is evidence that web surfing can increase the capacity of working memory. And while some studies have indeed shown that ‘hypertexts’ impede retention – in a 2001 Canadian study, for instance, people who read a version of Elizabeth Bowen’s story ‘The Demon Lover’ festooned with clickable links took longer and reported more confusion about the plot than did those who read it in an old-fashioned ‘linear’ text – others have failed to substantiate this claim. No study has shown that internet use degrades the ability to learn from a book, though that doesn’t stop people feeling that this is so – one medical blogger quoted by Carr laments, ‘I can’t read War and Peace any more.’

The digerati are not impressed by such avowals. ‘No one reads War and Peace,’ responds Clay Shirky, a digital-media scholar at New York University. ‘The reading public has increasingly decided that Tolstoy’s sacred work isn’t actually worth the time it takes to read it.’ (Woody Allen solved that problem by taking a speed-reading course and then reading War and Peace in one sitting. ‘It was about Russia,’ he said afterwards.) On this view, the only reason we used to read big long novels before the advent of the internet was that we were living in an information-impoverished environment. Our ‘pleasure cycles’ are now tied to the web, the literary critic Sam Anderson claimed in a 2009 cover story in New York magazine, ‘In Defense of Distraction’. ‘It’s too late,’ he declared, ‘to just retreat to a quieter time.’

This sort of ‘outré posturing’ by intellectuals rankles with Carr since, he thinks, it enables ordinary people ‘to convince themselves that surfing the web is a suitable, even superior, substitute for deep reading and other forms of calm and attentive thought’. But Carr doesn’t do enough to dissuade us from this conclusion. He fails to clinch his case that the computer is making us stupider. Can he convince us that it is making us less happy? Suppose, like good Aristotelians, we equate happiness with human flourishing. One model for human flourishing is the pastoral ideal of quiet contemplation. It is this ideal, Carr submits, that is epitomised by ‘the pensive stillness of deep reading’. He gives us a brisk history of reading from the invention of the codex through to the Gutenberg revolution, and describes how its evolution gave rise to an ‘intellectual ethic’ – a set of normative assumptions about how the human mind works. ‘To read a book was to practise an unnatural process of thought, one that demanded sustained, unbroken attention to a single, static object,’ he writes. As written culture superseded oral culture, chains of reasoning became longer and more complex, but also clearer. Library architecture came to accommodate the novel habit of reading silently to oneself, as private carrels and cloisters were torn out and replaced with grand public rooms. And the miniaturisation of the book, hastened in 1501 when the Italian printer Aldus Manutius introduced the pocket-sized octavo format, brought reading out of libraries into everyday life. ‘As our ancestors imbued their minds with the discipline to follow a line of argument or narrative through a succession of printed pages, they became more contemplative, reflective and imaginative,’ Carr writes. The digital world, by contrast, promotes a very different model of human flourishing: an industrial model of hedonic efficiency, in which speed trumps depth and pensive stillness gives way to a cataract of sensation. ‘The net’s interactivity gives us powerful new tools for finding information, expressing ourselves and conversing with others,’ but it ‘also turns us into lab rats constantly pressing levers to get tiny pellets of social or intellectual nourishment.’

So which fits better with your ideal of eudaemonia, deep reading or power browsing? Should you set up house in Sleepy Hollow or next to the Information Superhighway? The solution, one might decide, is to opt for a bit of both. But Carr seems to think that it’s impossible to strike a balance. There is no stable equilibrium between analogue and digital. ‘The net commands our attention with far greater insistence than our television or radio or morning newspaper ever did,’ he writes. Once it has insidiously rewired our brains and altered our minds, we’re as good as lost. He quotes the novelist Benjamin Kunkel on this loss of autonomy: ‘We don’t feel as if we had freely chosen our online practices. We feel instead … that we are not distributing our attention as we intend or even like to.’

Near the end of the book, Carr describes his own attempt to emancipate himself from the nervous digital world and return to a Woody Allen-like condition of contemplative calm. He and his wife move from ‘a highly connected suburb of Boston’ to the mountains of Colorado, where there is no mobile phone reception. He cancels his Twitter account, suspends his Facebook membership, shuts down his blog, curtails his Skyping and instant messaging, and – ‘most important’ – resets his email so that it checks for new messages only once an hour instead of every minute. And, he confesses, he’s ‘already backsliding’. He can’t help finding the digital world ‘cool’, adding, ‘I’m not sure I could live without it.’

Perhaps what he needs are better strategies of self-control. Has he considered disconnecting his modem and Fedexing it to himself overnight, as some digital addicts say they have done? After all, as Steven Pinker noted a few months ago in the New York Times, ‘distraction is not a new phenomenon.’ Pinker scorns the notion that digital technologies pose a hazard to our intelligence or wellbeing. Aren’t the sciences doing well in the digital age, he asks, and aren’t philosophy, history and cultural criticism flourishing too? There is a reason the new media have caught on, Pinker observes: ‘Knowledge is increasing exponentially; human brainpower and waking hours are not.’ Without the internet, how can we possibly keep up with humanity’s ballooning intellectual output?

This raises a prospect that has exhilarated many of the digerati. Perhaps the internet can serve not merely as a supplement to memory, but as a replacement for it. ‘I’ve almost given up making an effort to remember anything,’ says Clive Thompson, a writer for Wired, ‘because I can instantly retrieve the information online.’ David Brooks, a New York Times columnist, writes: ‘I had thought that the magic of the information age was that it allowed us to know more, but then I realised the magic of the information age is that it allows us to know less. It provides us with external cognitive servants – silicon memory systems, collaborative online filters, consumer preference algorithms and networked knowledge. We can burden these servants and liberate ourselves.’

Books also serve as external memory-storage devices; that is why Socrates, in the Phaedrus, warned that the innovation of writing would lead to the atrophy of human memory. But books have expanded the reservoir of information and ideas, and, through the practice of attentive reading, have enriched the memory, not superseded it. The internet is different. Thanks to algorithmic search engines like Google, the whole universe of online information can be scanned in an instant. Not only do you not have to remember a fact, you don’t even have to remember where to look it up. In time, even the intermediary of a computer screen might prove unnecessary – why not implant a wireless Google connection right in the head? ‘Certainly,’ says Sergey Brin, one of the founders of Google, ‘if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.’

The idea that the machine might supplant Mnemosyne is abhorrent to Carr, and he devotes the most interesting portions of his book to combatting it. He gives a lucid account of the molecular basis of memory, and of the mechanisms by which the brain consolidates short-term memories into long-term ones. Biological memory, which is necessarily in ‘a perpetual state of renewal’, is in no way analogous to the storage of bits of data in static locations on a hard drive: that he makes quite plain. Yet he doesn’t answer the question that really concerns us: why is it better to knock information into your head than to get it off the web?

The system by which the brain stores and retrieves memories is, for all its glorious intricacy, a bit of a mess. That’s understandable: it’s the product of blind evolution, not of rational engineering. Unlike a computer, which assigns each bit of information a precise address in its data banks, human memory is organised contextually. Items are tied together in complex associative webs, and are retrieved by clues rather than by location. Ideally, the desired item just pops into your head (‘The founder of phenomenology? Husserl!’). If not, you try various clues, which may or may not work (‘The founder of phenomenology? Let’s see, starts with an “h” … Heidegger!’). Human memory has certain advantages over computer memory; for instance, it tends to give automatic priority to the most frequently needed items. But it’s fragile and unreliable. Unrehearsed memories soon sink into oblivion. And interference between items within associative webs causes confusion and leads to the formation of false memories.

The computer’s postcode memory system has no such vulnerabilities. Moreover, as the cognitive psychologist Gary Marcus has pointed out, it’s possible to have the benefits of contextual memory without the costs. ‘The proof is Google,’ Marcus writes. ‘Search engines start with an underlying substrate of postcode memory (the well-mapped information they can tap into) and build contextual memory on top. The postcode foundation guarantees reliability, while the context on top hints at which memories are most likely needed at a given moment.’ It’s a pity, Marcus adds, that evolution didn’t start with a memory system more like the computer’s.
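Marcus’s image – a reliable ‘postcode’ substrate with a contextual, cue-driven layer built on top – can be made concrete with a minimal sketch. What follows is purely illustrative: the class and method names are hypothetical, and nothing here is drawn from Marcus’s work or from any actual search engine’s design. Items sit at fixed addresses (the postcode layer), while an associative index maps cues to the addresses most likely to be wanted (the contextual layer).

```python
# A toy sketch of 'postcode plus contextual' memory (hypothetical design,
# not any real system): fixed addresses guarantee reliable retrieval,
# while an associative index supplies cue-based recall on top.

class HybridMemory:
    def __init__(self):
        self.store = {}   # postcode layer: address -> item, never degrades
        self.index = {}   # contextual layer: cue -> list of addresses

    def remember(self, address, item, cues):
        self.store[address] = item
        for cue in cues:
            self.index.setdefault(cue, []).append(address)

    def recall(self, cue):
        # The contextual layer suggests likely addresses; the postcode layer
        # returns each suggested item intact.
        return [self.store[a] for a in self.index.get(cue, [])]

memory = HybridMemory()
memory.remember("phil/001", "Husserl founded phenomenology",
                ["phenomenology", "founder"])
print(memory.recall("phenomenology"))  # ['Husserl founded phenomenology']
```

The point of the sketch is only that the two layers are separable: the cue-based lookup can be as fallible or as clever as you like, but whatever it suggests is retrieved without the decay and interference that afflict purely associative (biological) storage.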

Considering these advantages, why not outsource as much of our memory as possible to Google? Carr responds with a bit of rhetorical bluster. ‘The web’s connections are not our connections,’ he writes. ‘When we outsource our memory to a machine, we also outsource a very important part of our intellect and even our identity.’ Then he quotes William James, who declared in an 1892 lecture on memory: ‘The connecting is the thinking.’ And James was onto something: the role of memory in thinking, and in creativity. What do we really know about creativity? Very little. We know that creative genius is not the same thing as intelligence. In fact, beyond a certain minimum IQ threshold – about one standard deviation above average, or an IQ of 115 – there is no correlation at all between intelligence and creativity. We know that creativity is empirically correlated with mood-swing disorders. A couple of decades ago, Harvard researchers found that people showing ‘exceptional creativity’ – which they put at fewer than 1 per cent of the population – were more likely to suffer from manic-depression or to be near relatives of manic-depressives. As for the psychological mechanisms behind creative genius, those remain pretty much a mystery. About the only point generally agreed on is that, as Pinker put it, ‘Geniuses are wonks.’ They work hard; they immerse themselves in their genre.

Could this immersion have something to do with stocking the memory? As an instructive case of creative genius, consider the French mathematician Henri Poincaré, who died in 1912. Poincaré’s genius was distinctive in that it embraced nearly the whole of mathematics, from pure (number theory) to applied (celestial mechanics). Along with his German coeval David Hilbert, Poincaré was the last of the universalists. His powers of intuition enabled him to see deep connections between seemingly remote branches of mathematics. He virtually created the modern field of topology, framing the ‘Poincaré conjecture’ for future generations to grapple with, and he beat Einstein to the mathematics of special relativity. Unlike many geniuses, Poincaré was a man of great practical prowess; as a young engineer he conducted on-the-spot diagnoses of mining disasters. He was also a lovely prose stylist who wrote bestselling works on the philosophy of science; he is the only mathematician ever inducted into the literary section of the Institut de France. What makes Poincaré such a compelling case is that his breakthroughs tended to come in moments of sudden illumination. One of the most remarkable of these was described in his essay ‘Mathematical Creation’. Poincaré had been struggling for some weeks with a deep issue in pure mathematics when he was obliged, in his capacity as mine inspector, to make a geological excursion. ‘The changes of travel made me forget my mathematical work,’ he recounted.

Having reached Coutances, we entered an omnibus to go some place or other. At the moment I put my foot on the step the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience’s sake, I verified the result at my leisure.

How to account for the full-blown epiphany that struck Poincaré in the instant that his foot touched the step of the bus? His own conjecture was that it had arisen from unconscious activity in his memory. ‘The role of this unconscious work in mathematical invention appears to me incontestable,’ he wrote. ‘These sudden inspirations … never happen except after some days of voluntary effort which has appeared absolutely fruitless.’ The seemingly fruitless effort fills the memory banks with mathematical ideas – ideas that then become ‘mobilised atoms’ in the unconscious, arranging and rearranging themselves in endless combinations, until finally the ‘most beautiful’ of them makes it through a ‘delicate sieve’ into full consciousness, where it will then be refined and proved.

Poincaré was a modest man, not least about his memory, which he called ‘not bad’ in the essay. In fact, it was prodigious. ‘In retention and recall he exceeded even the fabulous Euler,’ one biographer declared. (Euler, the most prolific mathematician of all – the constant e takes his initial – was reputedly able to recite the Aeneid from memory.) Poincaré read with incredible speed, and his spatial memory was such that he could remember the exact page and line of a book where any particular statement had been made. His auditory memory was just as well developed, perhaps owing to his poor eyesight. In school, he was able to sit back and absorb lectures without taking notes despite being unable to see the blackboard.

It is the connection between memory and creativity, perhaps, which should make us most wary of the web. ‘As our use of the web makes it harder for us to lock information into our biological memory, we’re forced to rely more and more on the net’s capacious and easily searchable artificial memory,’ Carr observes. But conscious manipulation of externally stored information is not enough to yield the deepest of creative breakthroughs: this is what the example of Poincaré suggests. Human memory, unlike machine memory, is dynamic. Through some process we only crudely understand – Poincaré himself saw it as the collision and locking together of ideas into stable combinations – novel patterns are unconsciously detected, novel analogies discovered. And this is the process that Google, by seducing us into using it as a memory prosthesis, threatens to subvert.

It’s not that the web is making us less intelligent; if anything, the evidence suggests it sharpens more cognitive skills than it dulls. It’s not that the web is making us less happy, although there are certainly those who, like Carr, feel enslaved by its rhythms and cheated by the quality of its pleasures. It’s that the web may be an enemy of creativity. Which is why Woody Allen might be wise to avoid it altogether.

By the way, it is customary for reviewers of books like this to note, in a jocular aside, that they interrupted their writing labours many times to update their Facebook page, to fire off text messages, to check their email, to tweet and blog and amuse themselves on the internet trying to find images of cats that look like Hitler. Well, I’m not on Facebook and I don’t know how to tweet. I have an email account with AOL (‘America’s Oldest Luddites’), but there’s rarely anything in my inbox. I’ve never had an iPod or a BlackBerry. I’ve never had a mobile phone of any kind. Like Woody Allen, I’ve avoided the snares of the digital age. And I still can’t get anything done.
