Studies have shown repeatedly that children with bigger feet reason better than do those with smaller feet. Many of you have probably noticed this very strong correlation yourselves. Of course, there is no causal connection here. Children with bigger feet reason better because they’re older. Irrationality: The Enemy Within is about the mistakes, misconceptions, and unfounded assumptions that muddle decision-making in everyday life and in a wide variety of occupations. People fail to notice strong associations, believe in causal connections that are non-existent, infer significance where there is only chance, remain immune to overwhelming evidence, and over-respond to dramatic incidents. One of the most appealing aspects of this book is that its grand pronouncements are few and its specific illustrations plentiful. The author does not attempt a general analysis of rationality. Hume’s notorious problem of induction, for example, is mentioned only to be dismissed, as are concerns about our ultimate ends or purposes. The book is, rather, a compendium of psychological studies and real-world instances whose central thesis is that most of us make critical mistakes in reasoning.
Stuart Sutherland teaches psychology at the University of Sussex, but his early career as a journalist may have sensitised him to human irrationality. Thus he begins his tour of the psychological literature on irrationality with a discussion of the so-called ‘availability error’, a phenomenon particularly widespread in the media. This disposition, first described by the psychologists Amos Tversky and Daniel Kahneman, is merely a strong tendency to make judgments or evaluations in light of the first thing that comes to mind (is ‘available’ to the mind). Are there more words with ‘r’ as a first letter or as a third letter? What about ‘k’? Most people incorrectly decide that there are more words with these letters in the first position than in the third position, since words beginning with them are more readily available to memory. Words such as ‘rich’, ‘real’ and ‘rambunctious’ are easier to recall (‘recall’ is another) than words such as ‘farce’, ‘street’ and ‘throw’. Another example involves a group of people asked by psychologists to memorise a collection of words that included four terms of praise – ‘adventurous’, ‘self-confident’, ‘independent’ and ‘persistent’. A second group was asked to memorise a similar list except that the four positive words were replaced by ‘reckless’, ‘conceited’, ‘aloof’ and ‘stubborn’. Both groups then moved on to an ostensibly different task: reading a somewhat ambiguous story about a young man, whom they were then asked to evaluate. The first group thought much more highly of the young man than the second did, presumably because the positive words they had just memorised were more available to them.
What makes something available and readily accessible? Material that is recently presented, emotionally evocative, dramatic and concrete is generally more available than that which is old, emotionally neutral, boring or abstract. In this sense a terrorist hijacking is much more worrisome than it should be. In the other direction, deaths from heart disease are so commonplace that they almost go unnoticed. It’s no surprise that advertisers, newspapermen and many others deliberately invite the availability error. Consider the bells and lights that accompany the smallest win in a casino, for example, and contrast this with the soundless invisibility that accompanies a loss. There are a number of corollaries to the ‘availability error’. Prominent among them is the ‘halo effect’, the inclination people have to judge a person in terms of one salient characteristic. Articles have been resubmitted to scholarly journals stripped of the researcher’s prestigious university affiliation, with the predictable consequence that they’re rejected. Similar fates have befallen novels by famous authors submitted to publishers under a pseudonym. Much more chilling is Sutherland’s account of the psychologist Stanley Milgram’s classic experiments on obedience, in which perfectly ordinary people obey the experimenters and deliver what they think are near-fatal electric shocks to other participants. He also describes studies on conformity, especially in crowds. The de-individuating influence of mobs, the spread of hostile emotions through them, the desire to impress fellow in-group members, the breakdown of convention and related effects are all depressingly chronicled.
Even in organisations tamer than crowds, interaction among members tends to engender bias. Since they want to be valued by the group, members freely express opinions in line with what they perceive to be the group’s attitudes and suppress those that run counter to them. A prejudicial breeze soon develops. Leaders arise who are more extreme than the average member. They typically pick yes-men rather than more independent members and are deferred to by most others, particularly if the leaders can influence their careers. Other investigations suggest that groups are more likely to embark on risky undertakings and to do so with more certitude than are individuals. Such studies, especially the work of Muzafer Sherif on inducing group hatred, bring to mind the Biblical injunction (one of the few quoted by Bertrand Russell): ‘Follow not a multitude to do evil.’ Germane, and more hopeful, are results indicating that people who take an unpopular stand in public are much less likely to be swayed by conformist statements and actions than are those whose initial ideas are similar but only privately expressed.
Sutherland discusses at length errors and misapprehensions of a mathematical kind. For a compelling application of Bayes’s theorem in probability, imagine that you’ve taken a test for dread disease X, and your doctor has sombrely informed you that you’ve tested positive. The question is then: how depressed should you be? The answer depends on many factors, of course, but as the following calculations indicate, you should perhaps be cautiously optimistic. Suppose there is a test for disease X which is 99 per cent accurate in the following sense. If someone has X, the test will be positive 99 per cent of the time, and if he doesn’t have it, the test will be negative 99 per cent of the time. (For the sake of simplicity I assume the same percentage holds for both positive and negative tests. Some tests, including the standard Aids tests, are more reliable, but many, in particular the Pap test for cervical cancer, are much less so.) Suppose further that .1 per cent – one in 1000 people – actually have this rare disease. Let’s now assume that 100,000 tests for X are administered. Of these, how many are positive? On average, 100 of these 100,000 people (.1 per cent of 100,000) will have X, and so, since 99 per cent of these 100 will test positive, we will have, on average, 99 positive tests. Of the 99,900 healthy people, 1 per cent will test positive, resulting in a total of approximately 999 positive tests (1 per cent of 99,900 is 999). Thus, of the total of 1098 positive tests (999 + 99 = 1098), most (999) are false positives, and so the probability that you have X given that you tested positive is 99/1098, or only about 9 per cent, and this for a test which was assumed to be 99 per cent accurate! To reiterate, if you have X, the test will be positive 99 per cent of the time, but only 9 per cent of those with positive tests will have X.
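The arithmetic above can be checked in a few lines. This is only a sketch of the calculation with the review’s illustrative numbers; the variable names are mine:

```python
# Bayes's theorem by direct counting, with the figures from the example:
# a 99 per cent accurate test and a disease afflicting one person in 1000.
population = 100_000
prevalence = 0.001          # one in 1000 actually has disease X
sensitivity = 0.99          # P(test positive | has X)
false_positive_rate = 0.01  # P(test positive | healthy)

sick = population * prevalence                     # 100 people
true_positives = sick * sensitivity                # 99 positive tests
healthy = population - sick                        # 99,900 people
false_positives = healthy * false_positive_rate    # 999 positive tests

# Most positives come from the far larger healthy group.
p_sick_given_positive = true_positives / (true_positives + false_positives)
print(round(p_sick_given_positive, 3))  # 0.09 -- about 9 per cent
```

The surprise is entirely in the base rate: the 1 per cent error applied to 99,900 healthy people swamps the 99 true positives.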
A different bit of mathematics with psychological implications is provided by the phenomenon of regression to the mean: the tendency for an extreme value of a random quantity whose values cluster around an average or mean to be followed by a value closer to that mean. Very intelligent people can be expected to have intelligent offspring, but their offspring will usually not be quite as intelligent. A comparable tendency toward the average holds for the children of very short parents, who are likely to be short, but not as short. People frequently attribute regression to the mean to some substantive cause, rather than to the natural behaviour of a random quantity dependent on many factors. Sutherland recounts the study of Tversky and Kahneman in which novice Israeli pilots were praised after good landings and berated after bumpy ones. The flight instructors then mistakenly attributed the pilots’ subsequent deterioration to the praise, and likewise their subsequent improvement to the criticism. Both, however, were simply regressions to the more likely mean performance. The sequel to a great movie is often not as good as the original. The reason may not be the avarice of the movie industry in exploiting the first film’s popularity, but simply another instance of regression to the mean. A great year by an athlete will likely be followed by a less impressive year. The same can be said of the book after the bestseller or the album that follows the gold disc.
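A minimal simulation makes the flight instructors’ mistake vivid. The only assumption is that each landing is a fixed level of skill plus independent noise; the numbers are illustrative, not from the study, and nothing the instructor says enters the model at all:

```python
import random

random.seed(0)

def landing():
    # A landing score is constant skill (50) plus random execution noise.
    return 50 + random.gauss(0, 10)

# Pairs of consecutive landings by the same (statistically identical) pilot.
pairs = [(landing(), landing()) for _ in range(100_000)]

# After an unusually good landing the next is typically worse, and after
# an unusually bad one typically better -- pure regression to the mean.
after_good = [b for a, b in pairs if a > 65]
after_bad = [b for a, b in pairs if a < 35]
print(sum(after_good) / len(after_good))  # near 50, well below 65
print(sum(after_bad) / len(after_bad))    # near 50, well above 35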
Another common chink in our rationality, known as ‘anchoring effects’, is illustrated by the case of subjects who were asked to estimate the population of Turkey. Before answering, however, they were presented with a figure and asked whether the actual number was higher or lower. Of those who were first presented with the figure of five million, the average estimate was 17 million; of those first presented with a figure of 65 million, the average estimate was 35 million. As Tversky, Kahneman and others have established, people are ‘anchored’ to the original figure presented to them, and although they move in the correct direction from it, they are reluctant to move too far.
More generally, Irrationality amply demonstrates that the way situations are framed affects our choices within them. Surprisingly, even in mathematically equivalent cases, we’re often swayed by artful phrasing. In one investigation subjects generally chose to receive £45 with probability 20 per cent rather than receiving £30 with probability 25 per cent. This is reasonable since the average gain in the first case is £9 (20 per cent of 45), whereas the average gain in the second is only £7.50 (25 per cent of 30). What’s not so reasonable is that subjects generally made the opposite choice when their options were stated in terms of stages. The alternative framing: with a probability of 75 per cent the subject is eliminated at the first stage and receives nothing. If someone reaches the second stage, he or she has the option of receiving £30 for certain or £45 with probability 80 per cent. This is equivalent to a choice between £30 with probability 25 per cent (since 25 per cent is 100 per cent minus 75 per cent) or £45 with probability 20 per cent (since 80 per cent of 25 per cent is 20 per cent). In this case, however, the majority of subjects chose the seemingly safer £30 option, influenced apparently by the idea of certainty. Other scenarios, many of them matters of life and death, support the proposition that people are considerably more willing to take risks to avoid losses than they are in order to achieve gains.
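The equivalence of the two framings is nothing more than expected-value arithmetic, sketched here with the figures from the paragraph above:

```python
# Direct framing: a straight choice between two gambles.
ev_45 = 0.20 * 45   # expected gain £9.00
ev_30 = 0.25 * 30   # expected gain £7.50

# Staged framing: eliminated with probability 75 per cent at stage one;
# survivors choose £30 for certain or £45 with probability 80 per cent.
p_stage_two = 1 - 0.75
ev_30_staged = p_stage_two * 1.00 * 30   # 0.25 * 30 -> £7.50
ev_45_staged = p_stage_two * 0.80 * 45   # 0.20 * 45 -> £9.00

# The gambles are identical; only the wording differs.
print(ev_45, ev_45_staged, ev_30, ev_30_staged)
```

Subjects who prefer the £45 gamble in the first framing and the certain-looking £30 in the second are therefore reversing their preference over the very same pair of lotteries.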
Although we may lack lucidity on these matters, we’re rarely short of confidence. If Yeats was right in writing, ‘The best lack all conviction, while the worst are full of passionate intensity,’ then investigations by B. Fischhoff and others on over-confidence suggest that most of us aren’t very admirable. (Interestingly, one of the few categories in which American maths students score well is self-confidence.) In addition to reasons of vanity and self-esteem, we’re cocksure of our decisions, actions and beliefs because we fail to look for counter-evidence, distort our memories and the existing evidence, pay no attention to alternative views and their consequences and are seduced by our own explanatory schemes. An experiment is described in which the subjects were told of two firemen, one successful, one not. Half the subjects were told that the successful one was a risk-taker and the unsuccessful one was not. The other half of the subjects were told just the opposite. Afterwards, the subjects were informed that the firemen did not exist and that the experimenters had invented them. Oddly, they continued to be strongly influenced by whatever explanatory stories they had concocted for themselves. If they had been told that the risk-taking fireman was successful, they thought that prospective firemen should be chosen for their willingness to take risks; if not, then not. If asked to account for the connection between risk-taking or its absence and successful firefighting, the members of each group gave a cogent explanation consistent with the imaginary story originally told them.
Sutherland warns repeatedly that we tend to look only for confirmation of our ideas, seldom for disconfirmation. The psychologist Peter Wason presented subjects with four cards having the symbols A, D, 3 and 7 on one side and told them that each card had a number on one side and a letter on the other. He then asked which of the four cards needed to be turned over in order to test the rule: any card with an A on one side has a 3 on the other. Most subjects picked the A and 3 cards. The correct answer is the A and 7 cards. Think about it.
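Having thought about it, you can confirm the answer with a short brute-force check. The only assumption, stated in the puzzle, is that each card has a letter on one side and a number on the other; the encoding of cards as two-element sets is mine:

```python
# A card needs turning exactly when some possible hidden face would
# falsify the rule "any card with an A on one side has a 3 on the other".
visible = ['A', 'D', '3', '7']
letters, numbers = {'A', 'D'}, {'3', '7'}

def violates(card):
    # A card (the set of both its faces) breaks the rule iff it bears
    # an A without a 3.
    return 'A' in card and '3' not in card

must_turn = []
for face in visible:
    # The hidden side is whichever kind the visible side is not.
    possible_hidden = numbers if face in letters else letters
    if any(violates({face, hidden}) for hidden in possible_hidden):
        must_turn.append(face)

print(must_turn)  # ['A', '7']
```

The 3 card is the trap: whatever letter is on its back, a card showing a 3 can only confirm the rule, never refute it, whereas a hidden A behind the 7 would refute it at once.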
In assigning causes, we’re also frequently irrational. We’re much more liable to attribute an event to an agent rather than to chance if it has momentous or emotional implications. One group of subjects was told that a man parked his car on an incline after which it rolled down into a fire hydrant. The other group was told that the car rolled into a pedestrian. The members of the second group held the driver more responsible. And, as Nisbett, Ross and others have shown, we still tend to engage in magical thinking that ‘like causes like’. The lungs of a fox will cure asthma. Fowl droppings will eliminate ring-worm, which they resemble. Fixation at the oral stage will lead to preoccupation with smoking, eating and kissing. Homeopathic nostrums as well as more mainstream medical thought on cholesterol and Type A behaviour are based on the same idea. A number of real-world illustrations of these various studies are briefly discussed, from misguided military campaigns and wasteful government programmes to groundless claims for the paranormal. Particularly topical is the notion of a QALY – a quality-adjusted life year. It is a rough estimate of people’s judgments of how many years with a given disability are equivalent to one year of normal life. Supplemented with a little common sense, it’s a useful tool in deciding which medical measures are worthwhile. In a world of limited resources, compassion can easily slide into stupidity if a heart bypass, hip replacement and face-lift are performed on a 97-year-old cancer patient with Alzheimer’s. Stuart Sutherland attempts to make us less prone to such slides, and performs a signal service in collecting and organising these important studies in such an engagingly written book.