The Undoing Project: A Friendship that Changed the World 
by Michael Lewis.
Allen Lane, 362 pp., £25, December 2016, 978 0 241 25473 8

In 1975, as Henry Kissinger was trying to negotiate a settlement to the Arab-Israeli War, he warned the Israeli government that a breakdown in the talks would bring catastrophe to the Middle East. The Israeli minister of foreign affairs, Yigal Allon, doubted this and convened a group of experts to investigate. It was led by Zvi Lanir, a political scientist and official at the Israeli foreign ministry, and Daniel Kahneman, who taught at the Hebrew University in Jerusalem, and had spent the war, along with his colleague Amos Tversky, in a unit of psychologists embedded with the IDF and tasked with studying troop morale in the Sinai. Lanir and Kahneman assigned a numerical probability to various consequences of a collapse in the talks: a new war with Syria and Egypt; the fall of the Saudi royal family; the Soviets threatening nuclear attack. They were surprised their study made so little impression. Officials, it turned out, didn’t consider a 10 per cent chance of all-out war with Syria very significant. ‘The understanding of numbers is so weak that they don’t communicate anything,’ Kahneman later said. ‘No one ever made a decision because of a number. They need a story.’

This realisation, as Michael Lewis recounts in The Undoing Project, his story of the intellectual partnership of Kahneman and Tversky, had an important effect on Kahneman’s work, and on his and Tversky’s efforts to overturn some of the basic assumptions of 20th-century social science: that the human mind has an intuitive grasp of statistics and probability, that people make rational choices on the basis of a clear understanding of the external world, and that errors in judgment are the result of emotions like fear or anger, not of reason itself. Kahneman’s conclusions, made famous in his unlikely bestseller Thinking, Fast and Slow (2011), have had a transformative effect on psychology, economics, and the relationship of social science to business and government. Most states, organisations like the European Union and World Bank, as well as private businesses, social media companies and management consultancies, deploy policies – ‘nudges’ – inspired by the work he and Tversky did.

Lewis’s other books have focused on professional sport and finance (two of them, Moneyball and The Big Short, were turned into films starring Brad Pitt), so the collaboration of two Israeli academics might seem an unlikely topic for him. He found out about their work after the economist Richard Thaler and the legal scholar Cass Sunstein suggested in a joint review of Moneyball that the story of Billy Beane, the general manager of the baseball team the Oakland Athletics, could be read as a case study in support of Kahneman and Tversky’s work: by trusting statistics over the ‘eye test’, Beane exploited inefficiencies in the market for baseball players caused by the cognitive errors of scouts and managers. Kahneman and Tversky provide a psychological key to the kinds of character Lewis is drawn to: underdogs who go against conventional wisdom and make a killing, whether it’s by choosing unathletic but effective baseball players, or by betting against the housing market at its height. They make good characters themselves: Kahneman – self-effacing, insecure and moody (‘like Woody Allen,’ as one colleague put it, ‘without the humour’) – grew up in Vichy France, in hiding. His father, a chemist, had been saved from deportation to a concentration camp by the intervention of his boss, the founder of L’Oréal cosmetics, who had been a prominent supporter of the fascists before switching his allegiance to the Resistance. In 1944, Kahneman’s father died of complications from diabetes, and two years later, the family left for Israel. Tversky – who’d been brought up in Haifa – was brash, outgoing and intimidating. Both men had military experience, but Tversky was the real soldier. He fought in the wars of 1956, 1967 and 1973, was a decorated platoon commander, and had, briefly, been the military governor of the West Bank city of Jericho. 
In the late 1970s, as their careers took off, both men left Israel for North America – Kahneman went to the University of British Columbia and Tversky to Stanford. Tversky was the first to become famous for the work they’d done together, and the tensions caused by this damaged their relationship, which until then had had a near-romantic intensity. In 1996, Tversky died, suddenly and young, of malignant melanoma; the 2002 Nobel Prize for economics was awarded to Kahneman for both men’s research.

Today Kahneman is better known than Tversky. But neither would have become famous without the other. Before the 1970s, Kahneman worked largely on problems of vision and attention, and on the antagonism between perception and taxing mental activities like memorisation. This had clear military applications: tank commanders and fighter pilots need to ignore extraneous perceptions and redirect focus quickly. Tversky’s work was more abstract, focusing on the mathematical formalisation of psychological principles, but his interest in the decision theories of the American psychologist Ward Edwards brought him into contact with Kahneman in 1969. According to Edwards the human mind was naturally Bayesian: it assessed probabilities more or less accurately according to a set of principles that allowed it to combine new information with existing views. This was the orthodoxy in the field. But Kahneman had found that errors of perception, such as optical illusions, were common. Why should human reason be different? Tversky had his own doubts about the mind’s natural grasp of probabilistic reasoning – in part because of the repeated failures of Israeli officials to predict the coming of war.

The starting point of their theory, Kahneman wrote in Thinking, Fast and Slow, is that the mind has two systems. The first is automatic and intuitive, and is used for such basic mental activities as perception of the external world, easy arithmetic and the recognition of simple words in one’s native language. With practice, System 1 can take charge of seemingly automatic physical activities like driving a car or riding a bicycle. System 2 governs the conscious and effortful mode of thinking we associate with our rational selves. It’s used for complex calculation, speaking a foreign language, assessing logical claims, memorisation and self-control. The two systems constantly interact. Most of the time we rely on System 1 to navigate a largely familiar world, and use System 2 in situations that call for active attention and assessment. When System 1 cannot easily comprehend a particular scenario, System 2 arrives to help: 2 + 2 can be completed by the former; 17 × 24 cannot.

System 1 has little grasp of logic or probability. It operates by association. When you hear the word banana, you think of yellow; when you hear the word table, you imagine four legs and a flat surface. This means you can be primed to react in particular ways: if you’re shown the word eat, for example, you’re more likely to complete the word ‘so_p’ with a ‘u’ than an ‘a’. System 1 strives for coherence, avoids doubt, and doesn’t worry about a lack of information. What matters is whether a story about the world holds together, not whether it’s logical. The mind, which hates randomness, looks for causes and agency everywhere. Some argue that this explains the seemingly universal human tendency to believe in the supernatural. This is a product of evolution: relying on a coherent and automatic understanding of the world is more conducive to survival than devoting energy to doubt and evaluation. Better to run when you see what looks like a predator than to stay put and calculate the odds it will catch you.

System 2 checks System 1. But it is lazy, and often avoids correcting the image of the world provided by System 1. System 1 yields only when System 2 is going at full tilt, which is exhausting (a fully engaged System 2, according to Kahneman, requires significant caloric expenditure, so that you’re more likely to lean on System 1 when you are tired or hungry). System 1 relies on heuristics and rules of thumb to assess the world, and System 2 too often endorses its conclusions, leading to all manner of mistakes. Most people, for example (including statisticians), reach strong conclusions on the basis of limited evidence. When they hear a story that appears to make sense, they forget a basic statistical truth: that small samples make for bad generalisations. Judgments can also be ‘anchored’ by arbitrary impressions. If you assign a random number to people and then ask them to make an estimate – about the average temperature, say, of Phoenix in the summer – those assigned a high number will make a higher estimate, and vice versa. The way we evaluate information is strongly influenced by the way it is presented, and we rely on stereotypes and available evidence, even if it is limited, which makes us prone to jump to conclusions. People buy earthquake insurance after there has been a bad quake, not because they have evaluated the likelihood that another will be along soon. This explains the power of terrorism: constant reminders of the threat make it come to mind more easily, and loom larger, than the likelihood of drowning in the bath (the probability of which is roughly the same as the probability of being killed in a terrorist attack). Underlying all these errors is the same basic fact: the mind prefers narratives to numbers.

Kahneman and Tversky’s research began to have a major impact outside psychology after the publication in 1979 of their work on ‘prospect theory’. There they argued that people were more concerned with change than with current states – with gaining and losing money rather than having a particular level of wealth. This violated the theory of ‘expected utility’, the idea that people act rationally in accordance with what they think will bring them the most future satisfaction. First developed in the 18th century by the Swiss mathematician Daniel Bernoulli, expected utility theory was formalised in the 1940s by the mathematician John von Neumann and the economist Oskar Morgenstern in their work on game theory. They devised a set of axioms for decision-making that transformed economics, psychology, political theory and US nuclear strategy.

If a person with £10 loses £5, and a person with £1 gains £4, expected utility theory says that they should experience the same level of happiness. All that matters is their absolute level of wealth: £5. But Kahneman and Tversky showed that the person who loses £5 will be significantly less happy than the person who gains £4. The change in wealth is what matters. It follows that people will be more likely to take a risk to avoid loss than they are to pursue gain. This was a violation of a basic assumption about rational decision-making at the heart of mainstream economics. It turns out that few people think like economists. They evaluate their choices according to where they start, not where they might end up, and much more than they like gaining, they hate losing.

This discovery seemed to explain various puzzles. Take the idea of sunk cost. The reason we are reluctant to quit a bad investment, but instead ‘throw good money after bad’, is that giving it up is experienced as a loss, even if it is the rational thing to do. One of the most influential early devotees of Kahneman and Tversky’s research was the economist Richard Thaler, who had long been interested in behaviour that economists tended to dismiss as irrational and thus irrelevant to their models. Thaler found Kahneman and Tversky’s work exciting because they had shown that such behaviour was the rule, not the exception. Almost no one acted like the rational, self-interested, and utility-maximising individual assumed in standard economic models. Thaler referred to this fictional person as an ‘Econ’, to be contrasted with a ‘Human’, who makes silly decisions and is easily distracted. What was the point of economics if it didn’t describe the world?

Behavioural economics proliferated in the 1980s and 1990s. Its early adopters included Robert Shiller, who won the Nobel Prize in 2013 for his research on asset price volatility and the psychology of bubbles. Economics and psychology are now closer than ever, with new fields like neuro-economics using fMRIs to map the way the brain processes economic decisions. Behavioural economics is also popular with certain public intellectuals who enjoy its counterintuitive truisms. Malcolm Gladwell’s books are littered with research from the behavioural sciences, and the authors of Freakonomics list Kahneman and Tversky among their inspirations. Facebook, Amazon and Google use this research when they draw your attention to certain links and products. Uber exploits the cognitive errors of its drivers to wring as much value out of their labour as possible.

In the 2000s Thaler and Sunstein, guided by Kahneman and Tversky’s ideas, attempted to create a new politics of what they called ‘libertarian paternalism’. In their hugely influential book Nudge (2008), they called for governments to take advantage of common cognitive biases in order to influence the public’s decision-making, though they were careful to stress that people should retain freedom of choice. This could be as simple as reframing an advertising campaign in order to take advantage of loss aversion: people are more likely to save electricity, for example, if you remind them they’re losing money if they don’t than if you tell them how to save it. Making people opt out of organ donation rather than opt in increases enrolment. In 2009 Obama appointed Sunstein head of the Office of Information and Regulatory Affairs, and in 2015 signed an executive order calling for the increased use of behavioural science across the federal government.

There are​ many criticisms of this kind of social engineering. There are obvious libertarian objections to the idea that the state should have licence to mess with your mind. Some see the cheap fixes of the ‘nudge’ as a complement to austerity: make people go to the gym now to save money on their healthcare later. In 2010 the coalition government in the UK established its own ‘Nudge unit’. Others see the excitement about ‘nudging’ as wildly out of proportion to what it can deliver: relatively small-scale interventions, like getting more people to sign up to a pension plan. It is the perfect embodiment of the technocratic liberalism that suddenly seems a relic of a bygone era.

Some of the pioneers of behavioural economics are uncomfortable with the fact that the policies they inspired are substituting for real economic governance: if you want people to buy less sugar or petrol, the best way is to increase the price of these goods by cutting subsidies for their production or by taxing them. And getting the poor to act more ‘rationally’ can be a way of avoiding redistributive policies. Kahneman admits that authoritarian leaders are well aware of how to exploit the biases he and Tversky studied. It’s easy to convince others of falsehoods merely by repeating them: when you’re exposed frequently to an idea, System 1 comes to feel at ease with it, and the lazy System 2 interrogates it less. ‘Familiarity is not easily distinguished from truth,’ Kahneman writes.

Kahneman would argue that simple formulas should be trusted over human judgment in most cases, and that expertise is wildly overrated. Pundits are terrible at forecasting political and economic events, and few stock pickers can beat the market. When humans make an evaluation, they can’t help being influenced by context, and a slight change of mood can have a dramatic effect: judges, for example, are much less likely to grant parole when they’re hungry than just after they’ve eaten. Better to entrust this kind of decision to an algorithm. Kahneman first tested this idea in the 1950s, when, in his early twenties, he was called up for military service and asked to redesign the IDF’s system for assessing recruits. Instead of grilling them in interviews, Kahneman suggested evaluating them according to short, standardised lists of characteristics (including ‘sociability’ and ‘masculine pride’), and by assessing their academic performance and employment history. The aim was to correct for the favourable or unfavourable first impressions that come from face-to-face conversations. The IDF still uses Kahneman’s method.

Computerised algorithms now make many of the decisions that keep capitalism humming and social order intact: determining credit scores and insurance premiums, assessing job candidates, and picking stocks. The Trump campaign, for all its folksy, anti-expertise talk, used the same kind of algorithmic method for assessing voter preference as Clinton’s did. Kahneman insists that algorithms prevent human biases from having discriminatory results – which is the reason some support their growing use in the criminal justice system. It’s now common in the US for decisions about sentencing, bail and parole to be made on the basis of computerised assessments of the likelihood that someone will reoffend. The calculations that decide one person is a risk while another is not aren’t open to public scrutiny, since they are usually managed by for-profit companies that claim this is proprietary information. But, in general, they work by making a risk assessment based on information about employment history, education level, gender and zip code. Coming from an urban neighbourhood places you in a higher risk bracket, as do prior arrests, even if they didn’t lead to convictions. The fact of your arrest then feeds into other calculations, leading to what the data scientist Cathy O’Neil calls a ‘death spiral of modelling’: you get a lower credit score, your CV is blacklisted by potential employers, you’re targeted by loan sharks, and more police are sent to your neighbourhood. Since markers of poverty and policing are used as proxies for risk, there’s little wonder these algorithms produce consistently worse scores for African Americans than whites. It turns out that the machines look a lot like the people who made them.

Send Letters To:

The Editor
London Review of Books,
28 Little Russell Street
London, WC1A 2HN

Please include name, address, and a telephone number.


Vol. 39 No. 17 · 7 September 2017

Jamie Martin discusses the influence of Kahneman and Tversky’s work concerning the use of heuristics in decision-making (LRB, 27 July). My own PhD research in the late 1970s focused on how women made decisions about childbearing in the light of quantified risks that the foetus might be affected with a chromosomal or Mendelian disease. (These were the days when amniocentesis was a limited test available to only a few.)

The standard thinking at the time was based on an expectation that women would make their decisions on a ‘rational’ basis: they would avoid becoming pregnant, or not, depending on whether the risk was ‘high’ or ‘low’. Clinicians despaired when women did not behave this way: the feeling was that they must be either uneducated about statistics or simply ignorant. I set out to explore how women actually made their choices.

What I learned from the women I interviewed was that all of them knew the probability of particular undesired outcomes, but that this didn’t determine their decisions. Rather, they created ‘scenarios’ for various outcomes, placed themselves in those scenarios (all of which were possible, however improbable some may have been), and made their childbearing decisions according to what they thought they could manage in any given case.

Kahneman and Tversky’s work not only gave my qualitative research a proper quantitative basis, but by introducing me and others to the ways that real people made decisions, helped bring a nuance to genetic counselling that I hope persists today. Sometimes being ‘irrational’ is the rational way to behave; and some logic is ‘fuzzy’.

Abby Lippman

Jamie Martin mentions that judges ‘are much less likely to grant parole when they’re hungry than just after they’ve eaten’. This example is often called on to do the heavy lifting in arguments against the reliability of rationality in decision-making. The hypothesis ‘hungry judge = harsh judge’ was introduced by Shai Danziger in his paper ‘Extraneous Factors in Judicial Decisions’ (2011). But the effect he observed was orders of magnitude too great to be plausibly explained by the hunger hypothesis. Effects this large, if they were real, would be repeated across all professions – pilots, surgeons and firefighters would be similarly affected. The reality is that the results Danziger recorded were probably caused by structural factors invisible to those not professionally involved. It may be, for example, that the court cases were ordered such that there was an apparent correlation. There’s no good evidence that bringing your judge elevenses will spare you a prison sentence.

Robin Garner
London SE5

