Irrationality: The Enemy Within 
by Stuart Sutherland.
Constable, 357 pp., £14.95, November 1992, ISBN 0 09 471220 4

Studies have shown repeatedly that children with bigger feet reason better than do those with smaller feet. Many of you have probably noticed this very strong correlation yourselves. Of course, there is no causal connection here. Children with bigger feet reason better because they’re older. Irrationality: The Enemy Within is about the mistakes, misconceptions, and unfounded assumptions that muddle decision-making in everyday life and in a wide variety of occupations. People fail to notice associations that are there, believe in causal connections that are non-existent, infer significance where there is only chance, remain immune to overwhelming evidence, and are over-responsive to dramatic incidents. One of the most appealing aspects of this book is that its grand pronouncements are few and its specific illustrations plentiful. The author does not attempt a general analysis of rationality. Hume’s notorious problem of induction, for example, is mentioned only to be dismissed, as are concerns about our ultimate ends or purposes. The book is, rather, a compendium of psychological studies and real-world instances whose central thesis is that most of us make critical mistakes in reasoning.

Stuart Sutherland teaches psychology at the University of Sussex, but his early career as a journalist may have sensitised him to human irrationality. Thus he begins his tour of the psychological literature on irrationality with a discussion of the so-called ‘availability error’, a phenomenon particularly widespread in the media. This disposition, first described by the psychologists Amos Tversky and Daniel Kahneman, is merely a strong tendency to make judgments or evaluations in light of the first thing that comes to mind (is ‘available’ to the mind). Are there more words with ‘r’ as a first letter or as a third letter? What about ‘k’? Most people incorrectly decide that there are more words with these letters in the first position than in the third position, since the former are more readily available to memory. Words such as ‘rich’, ‘real’ and ‘rambunctious’ are easier to recall (‘recall’ is another) than words such as ‘farce’, ‘street’ and ‘throw’. Another example involves a group of people asked by psychologists to memorise a collection of words that included four terms of praise – ‘adventurous’, ‘self-confident’, ‘independent’ and ‘persistent’. A second group was asked to memorise a similar list except that the four positive words were replaced by ‘reckless’, ‘conceited’, ‘aloof’ and ‘stubborn’. Both groups then moved on to an ostensibly different task: reading a somewhat ambiguous story about a young man, whom they were then asked to evaluate. The first group thought much more highly of the young man than the second did, presumably because the positive words they had just memorised were more available to them.
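To see how one might check the letter-position claim against an actual word list, here is a minimal sketch in Python (mine, not Sutherland’s). The dictionary path is an assumption – any one-word-per-line file will do – and a dictionary counts distinct words rather than the everyday frequencies Tversky and Kahneman had in mind.

```python
# Count words with 'r' or 'k' as first versus third letter; assumes a
# standard Unix word list at /usr/share/dict/words (an assumption --
# substitute any plain-text list of words, one per line).
from collections import Counter

counts = Counter()
with open("/usr/share/dict/words") as f:
    for line in f:
        word = line.strip().lower()
        if len(word) >= 3:
            for letter in "rk":
                if word[0] == letter:
                    counts[letter, "first"] += 1
                if word[2] == letter:
                    counts[letter, "third"] += 1

for letter in "rk":
    print(letter, "first:", counts[letter, "first"],
                  "third:", counts[letter, "third"])
```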

What makes something available and readily accessible? Material that is recently presented, emotionally evocative, dramatic and concrete is generally more available than that which is old, emotionally neutral, boring or abstract. In this sense a terrorist hijacking is much more worrisome than it should be. In the other direction, deaths from heart disease are so commonplace that they almost go unnoticed. It’s no surprise that advertisers, newspapermen and many others deliberately invite the availability error. Consider the bells and lights that accompany the smallest win in a casino, for example, and contrast this with the soundless invisibility that accompanies a loss. There are a number of corollaries to the ‘availability error’. Prominent among them is the ‘halo effect’, the inclination people have to judge a person in terms of one salient characteristic. Articles lacking a prestigious university affiliation have been submitted to scholarly journals, with the predictable consequence that they were rejected. Similar fates have befallen novels by famous authors submitted to publishers under a pseudonym. Much more chilling is Sutherland’s account of the psychologist Stanley Milgram’s classic experiments on obedience, in which perfectly ordinary people obey the experimenters and deliver what they think are near-fatal electric shocks to other participants. He also describes studies on conformity, especially in crowds. The de-individuating influence of mobs, the spread of hostile emotions through them, the desire to impress fellow in-group members, the breakdown of convention and related effects are all depressingly chronicled.

Even in organisations tamer than crowds, interaction among members tends to engender bias. Since they want to be valued by the group, members freely express opinions in line with what they perceive to be the group’s attitudes and suppress those that run counter to them. A prejudicial breeze soon develops. Leaders arise who are more extreme than the average member. They typically pick yes-men rather than more independent members and are deferred to by most others, particularly if the leaders can influence their careers. Other investigations suggest that groups are more likely to embark on risky undertakings, and to do so with more certitude, than are individuals. Such studies, especially the work of Muzafer Sherif on inducing group hatred, bring to mind the Biblical injunction (one of the few quoted by Bertrand Russell): ‘Follow not a multitude to do evil.’ Germane, and more hopeful, are results indicating that people who take an unpopular stand in public are much less likely to be swayed by conformist statements and actions than are those whose initial ideas are similar but only privately expressed.

Sutherland discusses at length errors and misapprehensions of a mathematical kind. For a compelling application of Bayes’s theorem in probability, imagine that you’ve taken a test for dread disease X, and your doctor has sombrely informed you that you’ve tested positive. The question is then: how depressed should you be? The answer depends on many factors, of course, but as the following calculations indicate, you should perhaps be cautiously optimistic. Suppose there is a test for disease X which is 99 per cent accurate in the following sense. If someone has X, the test will be positive 99 per cent of the time, and if he doesn’t have it, the test will be negative 99 per cent of the time. (For the sake of simplicity I assume the same percentage holds for both positive and negative tests. Some tests, including the standard Aids tests, are more reliable, but many, in particular the Pap test for cervical cancer, are much less so.) Suppose further that .1 per cent – one in 1000 people – actually have this rare disease. Let’s now assume that 100,000 tests for X are administered. Of these, how many are positive? On average, 100 of these 100,000 people (.1 per cent of 100,000) will have X, and so, since 99 per cent of these 100 will test positive, we will have, on average, 99 positive tests. Of the 99,900 healthy people, 1 per cent will test positive, resulting in a total of approximately 999 positive tests (1 per cent of 99,900 is 999). Thus, of the total of 1098 positive tests (999 + 99 = 1098), most (999) are false positives, and so the probability that you have X given that you tested positive is 99/1098, or only about 9 per cent – and this for a test which was assumed to be 99 per cent accurate! To reiterate: if you have X, the test will be positive 99 per cent of the time, but only 9 per cent of those with positive tests will have X.
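The whole calculation is a one-step application of Bayes’s theorem, and can be checked mechanically. A minimal sketch, re-doing the arithmetic above in Python:

```python
# Re-doing the worked example: 99 per cent sensitivity and specificity,
# and a prevalence of one in a thousand.
sensitivity = 0.99   # P(test positive | has X)
specificity = 0.99   # P(test negative | doesn't have X)
prevalence  = 0.001  # P(has X)

true_pos  = prevalence * sensitivity              # the 99 per 100,000
false_pos = (1 - prevalence) * (1 - specificity)  # the 999 per 100,000

# P(has X | tested positive), by Bayes's theorem:
print(true_pos / (true_pos + false_pos))  # ~0.09, i.e. about 9 per cent
```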

A different bit of mathematics with psychological implications is provided by the phenomenon of regression to the mean: the tendency for an extreme value of a random quantity whose values cluster around a mean to be followed by a value closer to that mean. Very intelligent people can be expected to have intelligent offspring, but their offspring will usually not be quite as intelligent. A comparable tendency toward the average holds for the children of very short parents, who are likely to be short, but not as short. People frequently attribute regression to the mean to something material, rather than to the behaviour of a random quantity dependent on many factors. Sutherland recounts the study of Tversky and Kahneman in which novice Israeli pilots were praised after good landings and berated after bumpy ones. The flight instructors then mistakenly attributed the pilots’ subsequent deterioration to the praise, and likewise the pilots’ subsequent improvement to the criticism. Both, however, were simply regressions to the more likely mean performance. The sequel to a great movie is often not as good as the original. The reason may not be the avarice of the movie industry in exploiting the first film’s popularity, but simply another instance of regression to the mean. A great year by an athlete will likely be followed by a less impressive year. The same can be said of the book after the bestseller or the album that follows the gold disc.
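Regression to the mean falls out of any model in which an observed performance is underlying ability plus independent luck. The simulation below is a sketch of mine, not anything from the book: select the top scorers on a first trial, and their average on a second trial drifts back toward the mean.

```python
import random

random.seed(0)
N = 100_000
ability = [random.gauss(0, 1) for _ in range(N)]
first   = [a + random.gauss(0, 1) for a in ability]  # ability + luck
second  = [a + random.gauss(0, 1) for a in ability]  # same ability, fresh luck

# Pick the top 1 per cent of first-trial performances...
cutoff = sorted(first)[-N // 100]
pairs = [(f, s) for f, s in zip(first, second) if f >= cutoff]

# ...and compare averages: the second trial regresses toward the mean of 0.
print(sum(f for f, _ in pairs) / len(pairs))  # extreme: well above 0
print(sum(s for _, s in pairs) / len(pairs))  # roughly half the first figure
```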

Another common chink in our rationality, known as the ‘anchoring effect’, is illustrated by the case of subjects who were asked to estimate the population of Turkey. Before answering, however, they were presented with a figure and asked whether the actual number was higher or lower. Of those first presented with the figure of five million, the average estimate was 17 million; of those first presented with the figure of 65 million, the average estimate was 35 million. As Tversky, Kahneman and others have established, people are ‘anchored’ to the original figure presented to them, and although they move in the correct direction from it, they are reluctant to move too far.

More generally, Irrationality amply demonstrates that the way situations are framed affects our choices within them. Surprisingly, even in mathematically equivalent cases, we’re often swayed by artful phrasing. In one investigation subjects generally chose to receive £45 with probability 20 per cent rather than £30 with probability 25 per cent. This is reasonable, since the average gain in the first case is £9 (20 per cent of £45), whereas the average gain in the second is only £7.50 (25 per cent of £30). What’s not so reasonable is that subjects generally made the opposite choice when their options were stated in terms of stages. The alternative framing: with probability 75 per cent the subject is eliminated at the first stage and receives nothing. If someone reaches the second stage, he or she has the option of receiving £30 for certain or £45 with probability 80 per cent. This is equivalent to a choice between £30 with probability 25 per cent (since 25 per cent is 100 per cent minus 75 per cent) and £45 with probability 20 per cent (since 80 per cent of 25 per cent is 20 per cent). In this case, however, the majority of subjects chose the seemingly safer £30 option, influenced apparently by the idea of certainty. Other scenarios, many of them matters of life and death, support the proposition that people are considerably more willing to take risks to avoid losses than they are in order to achieve gains.
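The equivalence of the two framings is a matter of multiplying out the probabilities, which the following sketch does with the figures above:

```python
# Direct framing:
ev_45 = 0.20 * 45   # £45 at 20 per cent: expected gain £9.00
ev_30 = 0.25 * 30   # £30 at 25 per cent: expected gain £7.50

# Staged framing: a 25 per cent chance of surviving stage one, then
# either £45 at 80 per cent or £30 for certain.
ev_45_staged = 0.25 * 0.80 * 45   # 0.20 * 45 = £9.00
ev_30_staged = 0.25 * 1.00 * 30   # 0.25 * 30 = £7.50

print(ev_45 == ev_45_staged, ev_30 == ev_30_staged)  # True True
```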

Although we may lack lucidity on these matters, we’re rarely short of confidence. If Yeats was right in writing, ‘The best lack all conviction, while the worst are full of passionate intensity,’ then investigations by B. Fischhoff and others on over-confidence suggest that most of us aren’t very admirable. (Interestingly, one of the few categories in which American maths students score well is self-confidence.) In addition to reasons of vanity and self-esteem, we’re cocksure of our decisions, actions and beliefs because we fail to look for counter-evidence, distort our memories and the existing evidence, pay no attention to alternative views and their consequences, and are seduced by our own explanatory schemes. An experiment is described in which the subjects were told of two firemen, one successful, one not. Half the subjects were told that the successful one was a risk-taker and the unsuccessful one was not; the other half were told just the opposite. Afterwards, they were informed that the firemen did not exist and that the experimenters had invented them. Oddly, they continued to be strongly influenced by whatever explanatory stories they had concocted for themselves. If they had been told that the risk-taking fireman was successful, they thought that prospective firemen should be chosen for their willingness to take risks; if not, then not. If asked to account for the connection between risk-taking or its absence and successful firefighting, the members of each group gave a cogent explanation consistent with the imaginary story originally told them.

Sutherland warns repeatedly that we tend to look only for confirmation of our ideas, seldom for disconfirmation. The psychologist Peter Wason presented subjects with four cards having the symbols A, D, 3 and 7 on one side and told them that each card had a number on one side and a letter on the other. He then asked which of the four cards needed to be turned over in order to test the rule: any card with an A on one side has a 3 on the other. Most subjects picked the A and 3 cards. The correct answer is the A and 7 cards. Think about it.
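For readers who want to check their answer, here is a brute-force sketch of mine (not the book’s). A card is worth turning over only if some possible hidden face could contradict the rule:

```python
import string

LETTERS = set(string.ascii_uppercase)
NUMBERS = set(range(10))

def worth_turning(visible):
    """Could the hidden face reveal a counter-example to the rule
    'any card with an A on one side has a 3 on the other'?"""
    if visible in LETTERS:
        # A letter card hides a number; only a visible A can be
        # falsified, by any hidden number other than 3.
        return visible == "A" and any(n != 3 for n in NUMBERS)
    # A number card hides a letter; a hidden A behind a non-3 breaks the rule.
    return visible != 3 and "A" in LETTERS

for card in ["A", "D", 3, 7]:
    print(card, worth_turning(card))
# A True, D False, 3 False, 7 True: turn over only the A and the 7.
```

The 3 card is a trap: whatever letter is behind it, the rule (which says nothing about what must be behind a 3) cannot be violated.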

In assigning causes, we’re also frequently irrational. We’re much more liable to attribute an event to an agent than to chance if it has momentous or emotional implications. One group of subjects was told that a man parked his car on an incline, after which it rolled down into a fire hydrant. The other group was told that the car rolled into a pedestrian. The members of the second group held the driver more responsible. And, as Nisbett, Ross and others have shown, we still tend to engage in the magical thinking that ‘like causes like’. The lungs of a fox will cure asthma. Fowl droppings will eliminate ring-worm, which they resemble. Fixation at the oral stage will lead to preoccupation with smoking, eating and kissing. Homeopathic nostrums, as well as more mainstream medical thought on cholesterol and Type A behaviour, are based on the same idea. A number of illustrations of these various studies are briefly discussed, from misguided military campaigns and wasteful government programmes to groundless claims for the paranormal. Particularly topical is the notion of a QALY – a quality-adjusted life year. It is a rough estimate of people’s judgments of how many years with a given disability are equivalent to one year of normal life. Supplemented with a little common sense, it’s a useful tool in deciding which medical measures are worthwhile. In a world of limited resources, compassion can easily slide into stupidity if a heart bypass, hip replacement and face-lift are performed on a 97-year-old cancer patient with Alzheimer’s. Stuart Sutherland attempts to make us less prone to such slides, and performs a signal service in collecting and organising these important studies in such an engagingly written book.
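The QALY arithmetic itself is nothing more than years multiplied by a judged quality weight. The sketch below is purely illustrative: the weights and costs are invented for the example, not taken from the review or the book.

```python
def qalys(extra_years: float, quality_weight: float) -> float:
    """Life-years gained, weighted by judged quality (1.0 = full health)."""
    return extra_years * quality_weight

# Hypothetical cost-effectiveness comparison, in QALYs per pound spent
# (all figures invented for illustration):
hip_replacement   = qalys(extra_years=10, quality_weight=0.8) / 5_000
late_stage_bypass = qalys(extra_years=1, quality_weight=0.2) / 20_000
print(hip_replacement, late_stage_bypass)  # 0.0016 versus 1e-05
```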


Letters

Vol. 15 No. 7 · 8 April 1993

John Allen Paulos’s odd non-review of the unfortunately-titled Irrationality: The Enemy Within by Stuart Sutherland (LRB, 11 March) did nothing to assuage my concerns about mathematicians’ imperialist meddling along the interesting borderlines between rational thought or behaviour and this ‘Other’ of ‘irrationality’, apparently to be reviled. Paulos mentioned a widely differing range of ‘examples’ in his piece, though with no discussion of what each contributed to identifying or understanding irrational thinking. He generally cited studies by others rather than discussing what Sutherland had to say about them – assuming they even came from his book. At times, it was quite unclear whether what we got was just Paulos telling us about a motley collection of studies (e.g. the Wason experiment on cards) that he knew about.

Paulos informed us that a sequel to a great movie being not as great as its original may ‘simply be another instance of regression to the mean’. Eh? Pretty implausible, I thought, but how to defend against such a claim? Does ‘regression to the mean’ explain anything – or merely explain away? What on earth could ‘the mean’ mean here, where there is at best a highly finite, underlying (and only potential) distribution (with the possible exception of the Rocky collection of films)?

The situation he reported with regard to the two different sums of money won with different probabilities shows a similar problem. The fact that mathematical comparison fails to distinguish the two scenarios may actually be a criticism of mathematics for not reflecting or attending to salient distinctions. The point with the stage scenario is surely that you can make the decision at stage one – and it depends on your gambling nature whether you take the then-certain £30 or continue on for a second risk. The situations are not equivalent to me! His discussion certainly suggests that probabilities are things of this world that can be known or discovered. But probability is not part of the material world. It is not observable. Everything that happens, happens with 100 per cent probability every time. And I am often not interested in average expected gain. I am interested in the actual outcome when I carry out an action – and probability has almost nothing to say to me about that singular occurrence, which has never happened before and never can again.

Consider the following situation. I have to decide whether or not to inoculate my child against whooping cough. I am told that the statistics about the vaccine causing brain damage are 1 in so many. I am also told that the statistics on child deaths from whooping cough are 1 in some other many. What do I do? My decision cannot be just to compare the two rates. That is to compare unalike things. My vaccination decision is now – and at the end of it I will either have a brain-damaged child or I won’t. The statistics on death from whooping cough only refer to a future possibility – once my child catches whooping cough. So I am trying to compare a present about-to-be-actual state with a possible future state. And these apparent probabilistic ‘facts’ fail to make important distinctions – the rates are not uniform geographically, nor across social class, to name but two. I can continue to make distinctions, until I get down to the actual circumstances of our life, even the genetic make-up of my child. Because that is what I am interested in – not average rates and likelihoods.

Paulos in passing mentions QALYs. On a TV programme a while ago, the sociologist inventor of QALY (the Quality-Adjusted Life Year) was interviewed and asked the pertinent question: is this a helpful way to think about the value of human life? In the telling subtitle to his book Computer Power and Human Reason: From Judgment to Calculation, Joseph Weizenbaum identifies precisely my major area of concern: namely, the way in which human judgments are being devalued and eventually ignored in preference to calculations – as if the latter were somehow preferable.

I end by offering a more interesting form of ‘irrational’ thinking, Gregory Bateson’s spectacular pseudo-syllogism:

Grass dies;
Men die;
Men are grass.

I think Bateson is onto something fundamental about how humans think creatively (and the implicit role of metaphor). But Bateson had no need of clothing such a style of creative argument (known pejoratively as ‘affirming the consequent’) in the virtuous trappings of ‘rationality’ and ‘logic’. There is much work to be done on the development of human rationality. I think it also worth exploring the concomitant fear of irrationality, which I felt lurking beneath the surface of Paulos’s piece. He seems to see rationality as objectified, with rules and procedures of its own – so I imagine that current discussions with regard to whose rationality, discourse, rules and procedures (e.g. Women’s Ways of Knowing by Belenky et al, or Gilligan’s In a Different Voice) will leave him cold.

I have not yet read Sutherland’s book. But if Paulos is giving an accurate reflection of its tenor and range of examples, it seems to offer a far from full account of thinking, which is certainly a far more interesting phenomenon than Paulos would have us believe. Richard Noss has written: ‘the belief that mathematical thinking is genuinely superior to practical thinking is deeply embedded in Western culture; it forms part of the ideology of what it means to think abstractly, perhaps even what it means to think.’ I claim that mathematics actually plays a far smaller and less significant part in rational thinking about the material world around us. What’s more, I bet Paulos believes people wouldn’t gamble if they understood probability theory a bit more.

David Pimm
Faculty of Mathematics,

John Allen Paulos writes: Mr Pimm raises the spectre of mindless cretins churning away at scientistic and inadequate algorithms and in the process oppressing us all. Unfortunately, this vision is at best tenuously related to the contents of Mr Sutherland’s fine book or my review of it. On a more positive note, he has discovered that any situation can be analysed in greater depth, with more attention to nuances and complications. This is a remarkable insight and he should be proud of himself.

Vol. 15 No. 6 · 25 March 1993

John Allen Paulos (LRB, 11 March) explains how psychologists can devise situations which lead unsuspecting subjects to make supposedly irrational choices. This must be entertaining for lovers of practical jokes, but it is not clear what it reveals about human rationality. We are told that, when asked whether ‘r’ occurs more often as the first or the third letter of a word, people answer on the basis of the words readily available to memory. Does this show that people are irrational, or merely that they don’t think hard enough about the question? How hard should a rational being try, when answering such a pointless question? Research into irrationality requires a ‘gold standard’ of rational behaviour. Much of the work Paulos surveys adopts a mathematical theory of rational decision-making which indicates the correct choice when all options and their outcomes are known. However, such a theory cannot tell us how seriously to take a problem, whether to mistrust the evidence or to stop and look for other solutions.

The Imperial Cancer Research Fund is developing computer systems to assist in medical decision-making. We have found that the models used by mathematically-minded decision-theorists are of little use in complex situations where evidence is ambiguous and data may be missing. Much better results are obtained from models based on understanding what people actually do when confronted with a dilemma.

Paul Taylor
Imperial Cancer Research Fund,
