Thinking, Fast and Slow 
by Daniel Kahneman.
Allen Lane, 499 pp., £25, November 2011, 978 1 84614 055 6

Habit, Samuel Beckett says in his essay on Proust, substitutes the ‘boredom of living’ for the ‘suffering of being’, and he has a point. Human existence is an acquired taste, and many of us get through it with the aid of what Vladimir in Waiting for Godot calls the ‘great deadener’. Blank simian rote – the round of feeding, grooming, ablution, slack-jawed vacancy – serves to block out tracts of time that might otherwise get colonised by anxious thought. And who wants that? Bertrand Russell said that people will do almost anything rather than think. Despite one’s best efforts, though, thoughts still sometimes come. Then, as Beckett says elsewhere, thinking can do proleptic duty, ensuring that rogue thoughts are repeated over and again, till they sink at last into the mud of oblivion.

Is thinking, pace Beckett, a good thing? A thoughtless response might be ‘yes’, but after a moment’s reflection it becomes plain that thinking is a human activity as prone to miscarry as Greek debt restructuring. Many endeavours go wrong not through lack of thought, but through our having the wrong thoughts. Human cogitative failure is a many-splendoured beast, which Daniel Kahneman has devoted his life to studying. Some goofs prove popular enough to put paid to any very sanguine view of evolutionary cognitive ascent. Humans are dab hands at some tasks, such as acquiring language and matching patterns. But we suck at others, including many that involve statistical inference. In the UK currently, the statistical likelihood of suffering serious injury from al-Qaida is many times lower than that of suffering a similar fate at the hands of one’s fridge. But few enter the kitchen cowed by the looming menace posed by their Smeg. This effect, which Kahneman likens to perceptual illusions such as the Müller-Lyer, can’t be sloughed off simply by realising that it is illusory – though some people, including some in government, don’t even get that far.

As with perceptual illusions, certain cognitive snafus seem immune to willed control. The base rate fallacy, targeted here by Kahneman, is another notorious instance. The fallacy lies in wrongly inferring from the fact that an investigative procedure has a certain statistical accuracy in relation to a defined group – for example, those who have a disease – that a positive result will be just as reliable when the procedure is applied to the entire population: in effect, it confuses the probability that someone with the disease tests positive with the probability that someone who tests positive has the disease. Suppose there is a test for a rare form of cancer. The test is pretty good, but less than perfectly accurate. When taken by those who will get the disease, in 99 cases out of a hundred it will be positive. But one test in a hundred will produce a ‘false negative’: it will give a clean bill of health to someone who will in fact get the cancer. Among those who don’t have the cancer, the test gives a correct negative diagnosis 99 times out of a hundred, and a false positive in 1 per cent of cases. Suppose you take the test, and it comes back positive: how worried should you be? The test might be said to be 99 per cent accurate; so does that mean there is a 99 per cent chance that you have the cancer? No. To draw that conclusion would be to commit the base rate fallacy. The arithmetic becomes clearer when the cancer is very rare indeed. Suppose it’s so rare that only one person in a million will get it, and that the total population is one hundred million. This means that of the hundred or so people who will get the cancer, some 99 should be correctly identified as having the disease. But if the whole population takes the test, the number of people mistakenly identified as having the cancer will be much larger: around 1 per cent of a hundred million, or one million. Of the two groups, the correct positives and the false positives, you are much more likely to be in the second group. If your test is positive, there’s about a one in ten thousand chance that you’re in the first group, those who will get the cancer. That’s worse than the rate for the population as a whole, but it’s a whole lot better than the 99 per cent chance of having the cancer you’d imagine if you succumbed to the base rate fallacy.
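
The arithmetic can be set out in a few lines. Here is a minimal sketch in Python, assuming the review’s hypothetical figures:

```python
# A sketch of the base rate calculation, using the review's figures:
# a cancer that strikes one person in a million, a population of one
# hundred million, and a test that errs 1 per cent of the time both ways.

population = 100_000_000
base_rate = 1 / 1_000_000      # one person in a million will get the cancer
sensitivity = 0.99             # P(positive | will get cancer)
specificity = 0.99             # P(negative | won't get cancer)

true_positives = population * base_rate * sensitivity               # ~99 people
false_positives = population * (1 - base_rate) * (1 - specificity)  # ~1,000,000

# The figure the fallacy gets wrong: P(cancer | positive test).
posterior = true_positives / (true_positives + false_positives)
print(f"P(cancer | positive) = {posterior:.6f}")   # ~0.000099
print(f"That is about 1 in {1/posterior:,.0f}")    # ~1 in 10,000
```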

In underlining the pervasiveness of the fallacy, Kahneman doesn’t spare himself: he recounts his own jittery road use in Israel during heightened alarms over bus bombers, despite the much higher probability of perishing in a run-of-the-mill car crash. If there is a surprise here, it is not so much that people unaware of the base rate fallacy misjudge what a positive result from a diagnostic test really signifies. It’s that even societies without much overt censorship amplify certain threats over others without statistical warrant. Some of this is down to snake-oil salesmen calling forth the ailments they pretend to cure, but they couldn’t pull it off if they weren’t exploiting a well-ingrained predisposition. Politicians seize on statistics in the hope that they will defuse opposition to their schemes: it would be uncharitable to David Blunkett to suppose that he really was as naive as he pretended to be in pronouncing biometric ID foolproof. The more disturbing fact is that he could say it and expect to be believed.

Given Kahneman’s title, one might expect a paean to deliberation, the cogitative counterpart to slow food, as against ‘quick and dirty’ heuristics. And so it proves, up to a point. But in fact, as he argues in this hugely enjoyable and engaging book, humans need both fast and slow thinking – or, more formally, cognitive Systems 1 and 2. The ‘fast’ System 1 presents us with conclusions before we even know we’re thinking. Unsurprisingly, it fails a lot of the time, but there is no disowning it. ‘Slow’ System 2 involves the active effort of trying to figure things out, and it’s this strenuous mental engagement – working out a long division, say, or trying to make sense of the tax system – that, as Russell observed, people will do much to avoid.

System 1 pervades one’s perception of the world. It presents judgments that the subject can’t help making: one simply sees anger in someone’s facial expression without wilfully striving to do so. Sometimes our perceptions are distorted even when our sensory apparatus is doing its job, as when we see a stick in water as bent, or experience frame-relative phenomena like the Doppler effect. Optical effects like those induced by the Müller-Lyer illusion and the Ames Room don’t arise from conscious thought: one just sees the equally long line as shorter, or the average-height person standing at one side of the room as a giant. Such phenomena are not confined to humans. Alex, a thirty-year-old African Grey parrot trained to compare the size of presented objects, reportedly saw the Müller-Lyer illusion in 32 out of 50 tests in which humans would see it.

The workings of System 1 can be seen in non-perceptual processing, too. A famous experiment conducted in the 1970s by Kahneman and his long-time collaborator, the late Amos Tversky, put the ‘sure thing’ preference to the test. Which of the following would you choose: (a) a sure-thing £200; or (b) a coin toss by which you win either nothing or £400? The expected-money outcomes are of course the same, given in each case by the sum of the pay-offs multiplied by their probabilities. But in tests, subjects show a marked preference for the sure thing, which can persist even when the second bet is improved so that the expected outcome is better than (a), say to a 50:50 chance of either nothing or £500. This might be thought to show that people have a general aversion to risk. However, Kahneman and Tversky’s work, which led to the formulation of Prospect Theory, indicated that people have asymmetrical preferences towards prospective gains and losses: they are risk-averse when it comes to gains, and risk-seeking when it comes to losses. Faced with a sure-fire loss of £200 and a 50:50 gamble between losing nothing and losing £400, people are more likely to chance their arm.
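
A minimal sketch of the expected-value arithmetic, using the sums quoted above (the names are illustrative):

```python
# Expected-money values for the review's gambles: pay-offs weighted by
# their probabilities.

def expected_value(prospect):
    return sum(prob * payoff for prob, payoff in prospect)

sure_thing = [(1.0, 200)]               # (a) £200 for certain
coin_toss  = [(0.5, 0), (0.5, 400)]     # (b) nothing or £400 on a coin toss
sweetened  = [(0.5, 0), (0.5, 500)]     # the improved gamble from the text

print(expected_value(sure_thing))   # 200.0
print(expected_value(coin_toss))    # 200.0 -- identical to the sure thing
print(expected_value(sweetened))    # 250.0 -- better, yet many still refuse it
```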

Quentin Crisp once said that a pessimist is someone who won’t get out of the bath to answer the phone. There’s nothing irrational about having risk-averse preferences towards gaining more of a good, while being risk-friendly towards losses; such patterns assume a certain benchmark, set for instance by one’s current holdings of the good. Investors often take profits when equity prices are still on the up-and-up, and choose not to sell in a bear market in the hope that losses will reverse themselves. Similar behaviour can be seen among mug punters who back losers in the first few races on the card, then take on bigger risks in later races in the hope of recouping their losses. As Kahneman and Tversky’s work suggests, someone whose losses to date were reimbursed by a philanthropist, or who took an amnesia-inducing drug that led him to think he was starting tabula rasa, would have a lower propensity to take risks. The subject’s assessment of the starting position influences whether she takes a prudent or a risk-seeking approach to a given schedule of risks. So, since people’s behaviour towards risk depends on whether they are facing gains or losses, where the initial benchmark sits affects their propensity to gamble. If one starts at a lower benchmark, what were prospective losses turn into gains, and conversely.
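
For the formally minded, here is a sketch of the sort of value function Kahneman and Tversky proposed in prospect theory; the functional form and parameters are the standard estimates from their later work, not figures given in the review:

```python
# A sketch of a prospect-theory value function. The parameters
# (alpha = 0.88, lam = 2.25) are the standard textbook estimates.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss x, measured from a reference point.

    Concave for gains (risk aversion), convex and steeper for losses
    (risk seeking and loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

print(round(value(200)))    # ~106: a £200 gain
print(round(value(-200)))   # ~-238: the same sum lost hurts over twice as much

# Shift the benchmark down by £400 and yesterday's £200 loss reads as a
# £200 gain -- which is why the reference point drives the behaviour.
print(round(value(-200 + 400)))   # ~106 again
```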

However, the news is worse than this, at least for those who hope that people’s revealed preferences can be framed rationally, i.e. ‘consistently’. As Kahneman and Tversky showed, people’s choices will vary markedly between two presentations of the same outcome. Take the following scenario. A flu epidemic will kill six hundred people if you do nothing. A tested vaccine will save two hundred people for sure but fail to save the other four hundred; or you can use an untested vaccine, with a one-third chance of saving six hundred people and a two-thirds chance of saving nobody. In experiments, most people (72 per cent) went for the sure thing. Now imagine a situation in which you have a choice between two policies, of which one will kill four hundred people for sure, while the other has a one-third chance of killing nobody and a two-thirds chance of killing six hundred people. This time most people gambled in the hope that everyone would avoid death. For reasons such as these, Kahneman concludes that a coherence view of rationality is ‘impossibly restrictive’.
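
It is worth checking that the two framings really do describe the same schedule of risks. A short sketch, recoding both as expected deaths among the six hundred at risk:

```python
# Both framings of the epidemic problem, recoded as expected deaths
# out of the 600 at risk, using the review's numbers.

def expected_deaths(prospect):
    return sum(prob * deaths for prob, deaths in prospect)

save_sure   = [(1.0, 400)]             # save 200 for certain -> 400 die
save_gamble = [(1/3, 0), (2/3, 600)]   # 1/3 chance all saved, 2/3 none
kill_sure   = [(1.0, 400)]             # 400 killed for certain
kill_gamble = [(1/3, 0), (2/3, 600)]   # 1/3 chance nobody dies, 2/3 all 600

for name, p in [("save sure", save_sure), ("save gamble", save_gamble),
                ("kill sure", kill_sure), ("kill gamble", kill_gamble)]:
    print(name, expected_deaths(p))    # all four come to 400.0
```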

One might think that these results are just another illustration of the asymmetrical view of gains and losses, since a prospective gain (saving a certain number of lives) elicits risk-aversion, while a prospective loss (killing people) prompts subjects to take a punt. But in this case the numbers have been engineered so that the options are not only the same within each scenario, but also between them: both involve the same number of expected deaths. So even working with a single baseline applicable in both cases, the large proportion of subjects who choose both to play safe in the former and to gamble in the latter seemingly adopt variable policies towards what amounts to the same schedule of risks. This spells a bad day for the project pioneered by John von Neumann and Oskar Morgenstern in the mid-20th century, of creating univocal preference functions for individual economic actors based on their revealed preferences. Someone who prefers one outcome to another when it’s presented in one way, but reverses that ranking when the same alternatives are described differently, can’t be given a coherent preference function.

If this is so, not much can be inferred about the wishes of consumers, or other economic actors, from actual market behaviour even under idealised conditions. All that can be inferred is that someone, faced with a certain choice, framed in a certain way, made a particular decision. Indeed, in many real-life situations even that inference may go too far. One of Kahneman’s more disconcerting results suggests that people’s valuations can readily be manipulated if before making their estimate they are primed with an irrelevant ‘anchoring’ number. For example, subjects asked to guess Gandhi’s age at his death produce a much higher estimate if they are primed with an absurdly high anchor (148, say) than if they are primed with an implausibly low one (such as 25). This remains true even if the experimenters make it clear to the subjects that the figures used to prime them have nothing to do with Gandhi’s actual age. Estate agents primed with a high but irrelevant anchor and then asked to assess property values come up with significantly higher figures than colleagues primed with a low anchor. So go-getting retailers wanting to make more money flogging their widgets are well advised to prime consumers with a high anchor.
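
One crude way to picture the mechanism is as insufficient adjustment: the estimate gets pulled part of the way towards the anchor. The toy model below is purely illustrative – the blending weight is invented, and the model is not Kahneman’s:

```python
# A purely illustrative toy model of anchoring as insufficient
# adjustment: the final estimate is pulled part of the way from an
# unanchored guess towards the anchor. The weight w is invented.

def anchored_estimate(unanchored_guess, anchor, w=0.3):
    return (1 - w) * unanchored_guess + w * anchor

guess = 70   # a plausible unanchored guess at Gandhi's age at death (he was 78)
print(anchored_estimate(guess, anchor=148))   # 93.4 -- dragged upwards
print(anchored_estimate(guess, anchor=25))    # 56.5 -- dragged downwards
```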

True believers in the market laud it as an efficient means of meeting people’s wants, and insist that people should therefore also want the market itself. This argument relies on the legitimating power of wants – a dogma that is seldom spelled out in political justifications of the market and, in the way of dogmas, doesn’t stand up well to scrutiny when it is. First, some wants rest on straightforward error: Bernard Williams gave the example of someone who wants to drink what’s in a glass, believing it to be gin, when in fact it is petrol. Other wants, while not resting on false beliefs, may be problematic from the viewpoint even of the person who has them. I may crave another drink, while also wanting to kick my gin-drinking habit. Or, addiction aside, I may be so effectively brainwashed by adverts for Gordon’s that it doesn’t even occur to me that I might be better off not wanting to knock it back. On top of all this, there is the measurement problem: how much does my urge for a slug of gin weigh against your desire for a new kidney?

The trouble isn’t just that when people cease to be authoritative about their own wants, some more or less paternalistic regime can then prevail; it’s that the authoritativeness of expressed wants tends to be undermined by the means needed to elicit them. As Kahneman and Tversky’s work indicates, different ways of framing choice draw out different expressed wants. Since the rankings are not consistent, they don’t warrant extrapolating from here and now to anywhere else. Someone trying to read off preference functions from observed behaviour has about as much chance as a sheep reading a barcode.

To take a real-life example: the local post office has now been annexed to the St Andrews branch of W.H. Smith, so one waits, usually for a long time, in a queue for stamps etc, alongside shelves groaning with Jedward biographies and other knock-down shlock. On my last visit one of the ‘special offers’ comprised a boxed set of five thrillers selling either at £9.95 per set, or £10 for two. The marginal purchase cost of the second set of five novels, identical to the first, was just 5p. Why didn’t Smith’s go the whole hog and make it a Buy One Get One Free, at a unit price of £10? It’s beside the point to say that bookshops face externalities in holding stock, or that publishers may offer Smith’s a premium for shifting copies in bulk. The question is why the merchants think that this pricing scheme is the best way to shift the books. After all, you’d need to be a fairly besotted fan of Robert Ludlum to see any point in buying two indistinguishable copies of The Bourne Ultimatum. The idea may be that, unlike cans of beans, books are unique and personal purchases; readers’ personal bond with an author may be shaken, even with Robert Ludlum, if the commodification gets too blatant. Or maybe buying two sets becomes more eligible by juxtaposition with the bad-deal £9.95 single set. With the £9.95/£10 pricing, punters can feel that they’re getting a bargain without thinking they’ve fallen, BOGOF-style, for the hard sell. One might think that Smith’s are imputing inconsistent preferences to their customers if they assume that people will buy two sets with the £9.95/£10 pricing structure, but not a BOGOF at £10. In any case Kahneman’s point holds: the notion of consistency as applied to consumers’ behaviour gets little traction because their behaviour can be moulded by framing effects.

This is where Kahneman’s work becomes quietly subversive. In one of his have-it-all-ways obiter dicta, Sartre said that freedom is what you do with what’s been done to you. Well, the market nowadays is what has been done to all of us. The bored post office savings-book queuer who buys the two boxes of pulp on a whim has certainly made a choice, of sorts. Faced with choices, people choose. But that does nothing to support the dogmatic assertion that markets are justified because they aggregate choices made by sovereign consumers. First-year political philosophy students are taught that it’s rational to ‘rat’ in the two-player one-shot prisoner’s dilemma, because ratting is dominant, i.e. it’s better to rat whatever the other guy does. But that only follows if the pay-offs facing the players are utilities, and utilities map choices that have already been made. The dilemma shows not that ratting is rational for humans (and so, by a further dodgy inference, ‘natural’ for us), but that ratting is what rats do.
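
The dominance argument can be stated mechanically; the pay-off numbers below are conventional textbook values, not the review’s:

```python
# Dominance in the one-shot prisoner's dilemma (higher pay-off is better).

# payoffs[(my_move, their_move)] = my pay-off
payoffs = {
    ("quiet", "quiet"): 3,   # both stay quiet
    ("quiet", "rat"):   0,   # I stay quiet, the other rats
    ("rat",   "quiet"): 5,   # I rat, the other stays quiet
    ("rat",   "rat"):   1,   # both rat
}

for their_move in ("quiet", "rat"):
    assert payoffs[("rat", their_move)] > payoffs[("quiet", their_move)]
print("Ratting dominates: it pays better whatever the other player does.")
```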

The skies darken towards the end of the book, where Kahneman turns to people’s thinking about life and its value. According to the ‘peak-end rule’, people rate a given past experience by averaging how they felt at the point of its greatest intensity with how they felt at the end. Kahneman describes an experiment in which subjects had to dunk a hand in a tub of water cold enough to cause discomfort short of excruciating pain. They were asked to endure this for a minute before withdrawing their hand. In a separate episode, the subjects were asked to keep their hand underwater for a minute and a half; during the last thirty seconds, without forewarning, some slightly hotter water was pumped in, which left the temperature still unpleasantly cold but just discernibly warmer (by about one degree Celsius). The subjects were then asked whether, if they had to choose, they would opt to repeat the first or the second experiment. Seemingly in line with the peak-end rule, 80 per cent of subjects chose the second, even though it differs from the first only in adding a further thirty seconds of pointless discomfort.
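
A sketch of the rule at work, with invented discomfort scores – only the shape of the two trials matters:

```python
# The peak-end rule applied to the cold-hand experiment. The second
# trial repeats the first, then adds thirty slightly less unpleasant seconds.

def peak_end(ratings):
    """Retrospective score: average of the worst moment and the last."""
    return (max(ratings) + ratings[-1]) / 2

def total_discomfort(ratings):
    """Moment-by-moment sum -- the 'now-for-now' measure."""
    return sum(ratings)

short_trial = [8] * 60              # 60 seconds of cold water, discomfort 8/10
long_trial  = [8] * 60 + [7] * 30   # the same, plus 30 marginally warmer seconds

print(peak_end(short_trial), peak_end(long_trial))                   # 8.0 vs 7.5
print(total_discomfort(short_trial), total_discomfort(long_trial))   # 480 vs 690
# The longer trial contains strictly more discomfort yet scores better on
# the peak-end measure -- matching the 80 per cent who chose to repeat it.
```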

People evaluate experiences by reference to salient moments, not as a sum of sensation over time. Kahneman sees this as a point of similarity between humans and rats. We and our fellow mammals care less about hedonic duration than intensity: a long period of chronic pain is preferred to a significantly shorter period of mild discomfort with peaks of acute pain. What matters is whether things improve by the end of the experience, even if that means only that things get marginally less bad. Admittedly, ‘things can only get better’ is a more potent political slogan than ‘things can only get marginally less bad.’ But maybe that’s merely because human beings are really utopian meliorists – a further illusion – rather than gradualists about the misery to which their lot condemns them.

It’s significant that these preferences work retrospectively. They reflect one’s valuation of subjective states not while they’re in full swing but when recollected later. Here Kahneman, having earlier avoided a corrective stance towards rationality, seems to change his tune. He thinks the yardstick to use in evaluating experiential states should be their felt quality at the time, their now-for-now value. He regards the results of the cold-hand experiment as a quirk that he puts down to the psychic misrule of System 1.

No doubt there’s no sound reason to ascribe more importance to one experiential state than another merely because it occurs later. But the now-for-now norm is open to objection too, and not only because even those happily free of the Protestant tic for psychic self-urtication may agree that present consumption should sometimes be deferred in favour of investment for the future. In fact, the full-on now-for-now view may be not just non-obvious, but even mildly touched. In a celebrated thought experiment, Robert Nozick aired the notion of an ‘experience machine’: think of a virtual reality suit that could recreate the felt quality of any experience for the wearer. Would one opt now to don the suit for life, having punched in one’s pet repertoire of zany thrills? Maybe on a now-for-now basis Nozick’s machine is a winner. Yet people tend not to like the idea. This preference seems not to express some rat-brain penchant for instant gratification. It suggests rather a calm passion that is engaged when people are given the opportunity to shape their experience in ways that make sense to them, and one way of making sense is to telescope experiences within a retrospective narrative frame.

The odd turnaround is that System 1, earlier glossed as the domain of snap and often wrong judgments, now appears to stand counter to the instant gratification of now-for-now. Kahneman sees the seemingly irrational peak-end rule as the work of System 1, but in the cold-hand experiment, the irrationality – if that is what it is – consists in a rejection of instant gratification in favour of a sequential narrative of modest improvement. While the peak-end rule seems to have irrational effects in that experiment, the experience machine shows that the now-for-now norm is questionable – and surely it’s System 2, the domain of slow thinking, that’s asking the questions. Would you rather tick over for eternity as a polyp, in the moderate bliss that marks the apex of its creaturely aspiration, or be granted a few decades of anxious thought as the liver of an examined life? Whether or not the existence of the polyp or satisfied pig appeals more than the examined life, the latter at least reflects on the basis of one’s actions, rather than just going with the creaturely flow. This weakens the impression that fast thinking stems from a unitary system; perhaps it is simply thinking that happens fast.

Fast thinking resorts to quick and dirty heuristics, such as available narratives or handy stereotypes. Kahneman pays particular heed to people’s unwillingness to explain phenomena via regression to the mean. Why do extremely intelligent women marry less intelligent men? Ask at a party and people will come up with any number of seemingly plausible, usually psychological explanations. And in a given case some of them may be right. But look at a normal distribution curve: it is, as Kahneman puts it, a ‘mathematical inevitability that highly intelligent women will be married to husbands who are on average less intelligent than they are’. It’s unromantic, it’s short on narrative interest, but there it is. Much informal and indeed formal education instils maladaptive patterns of inference. Kahneman’s book shows that we are stranger beasts than we like to believe – no doubt a further, and all but indelible, thumbprint left by fast thought.
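
The point lends itself to simulation. Assuming only that spouses’ scores are positively but imperfectly correlated (the correlation used here is an arbitrary choice), the arithmetic does the rest:

```python
# A simulation of the regression-to-the-mean point. The only assumption
# is that spouses' intelligence scores are positively but imperfectly
# correlated; rho = 0.4 is an arbitrary illustrative choice.

import random

random.seed(0)
rho, n = 0.4, 100_000
gaps = []
for _ in range(n):
    wife = random.gauss(0, 1)
    # husband's score: correlated with the wife's, plus independent noise
    husband = rho * wife + (1 - rho**2) ** 0.5 * random.gauss(0, 1)
    if wife > 2:   # restrict to highly intelligent wives (top ~2 per cent)
        gaps.append(wife - husband)

print(f"mean wife-minus-husband gap: {sum(gaps)/len(gaps):.2f} sd")
# Positive (~1.4 sd): the husbands regress towards the mean, a purely
# statistical effect requiring no psychological explanation.
```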


Letters

Vol. 34 No. 8 · 26 April 2012

Glen Newey, writing about Daniel Kahneman’s Thinking, Fast and Slow, doesn’t give a full enough explanation of the ‘sure thing’ principle (LRB, 22 March). It can perhaps be best understood in the context of Maurice Allais’s famous paradox. Allais asks us to choose between two options, say A and B, where A is an offer to receive £1 million for certain, and B is a lottery ticket that offers an 89 per cent chance of winning £1 million, a 10 per cent chance of winning £5 million and a 1 per cent chance of winning nothing. A and B thus share a ‘common consequence’ (or ‘sure thing’) – i.e. an 89 per cent chance of winning £1 million. According to standard economic theory, we should immediately detect this common consequence, and see that the only relevant part of A is an 11 per cent chance of £1 million and the only relevant part of B a 10 per cent chance of £5 million and a 1 per cent chance of nothing.

Allais then asks us to choose between C and D, where C offers an 89 per cent chance of nothing and an 11 per cent chance of £1 million, and D offers a 90 per cent chance of nothing and a 10 per cent chance of £5 million. The common consequence across C and D is an 89 per cent chance of nothing. Thus the relevant parts of A and C are identical, as are the relevant parts of B and D. That is, we should prefer A and C, or B and D, but never, as is often observed in practice, A and D.
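
Set out mechanically, with the letter’s figures, the decomposition looks like this (a sketch only):

```python
# The common-consequence arithmetic from Allais's paradox (amounts in pounds).

def expected_value(lottery):
    return sum(p * x for p, x in lottery)

A = [(1.00, 1_000_000)]
B = [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)]
C = [(0.89, 0), (0.11, 1_000_000)]
D = [(0.90, 0), (0.10, 5_000_000)]

print(expected_value(A), expected_value(B))   # 1,000,000 vs 1,390,000
print(expected_value(C), expected_value(D))   # 110,000 vs 500,000

# Strip the common consequence from the first pair (an 89% chance of £1m)
# and A reduces to an 11% chance of £1m (i.e. C), while B reduces to a 10%
# chance of £5m and a 1% chance of nothing (i.e. D). A consistent chooser
# therefore prefers A and C, or B and D -- never A and D.
```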

That people tend to prefer A over B can be attributed to the ‘certainty effect’, which can in turn partly be explained by loss aversion: when certainty is offered, people weight it heavily, far more than can be explained by standard concepts of risk attitude. More generally, the Allais paradox implies that we do not evaluate goods independently of one another (i.e. our valuation of A is influenced by B): actual choice is often at odds with the normative precepts of the dominant theories of rational choice. This can have huge implications across all realms of life, from the ordering of items on a supermarket shelf so as to maximise sales of particular products, to the composition of programmes that a state-sponsored TV channel may choose to transmit so as to maximise trust in its nightly news features.

Adam Oliver
London School of Economics
