The world is a complex place. That is a truism, but perhaps complexity can be investigated rather than taken for granted. Think of the sorts of causal interaction one might regard as ‘complex’. In 2002, Avshalom Caspi and collaborators published a widely reported study in which they concluded that the degree to which abuse in childhood increases the likelihood that men will exhibit antisocial behaviour later in life is partly dependent on the presence of a gene that appears to control the activity of an enzyme called monoamine oxidase A. Or consider an article published in the British Journal of Cancer in February, which suggests ‘the intriguing possibility of a causal link between the molecular basis of breast cancer exemplified by p53 mutations and extreme deprivation’. Mutations in the p53 gene reduce the body’s ability to suppress tumours: what is ‘intriguing’ here is the thought that poverty might interact with the gene in question, thereby worsening the prognosis for economically deprived cancer patients. Some studies have suggested that socioeconomic inequality somehow causes poor health – even among the reasonably well-off in unequal societies. These assertions of causal links between very different sorts of things – poverty and genes, inequality and health – are often contested, but they aren’t especially unusual.
We see interactions across causal levels: poverty affects the activity of genes. We see probabilistic contingency: poverty does not guarantee a lower breast cancer survival rate, it merely makes it more likely. We see contextual contingency: the probabilistic boost that a particular gene gives to one’s chances of manifesting antisocial behaviour depends on how one was treated as a child. We see feedback loops across multiple causal levels: inequality is a population-wide phenomenon that affects the health of individuals within it. Health, in turn, affects economic productivity and alters the socioeconomic profile of the population. Finally, there are aspects of complexity not captured by these examples, such as ‘chaotic contingency’. Change the initial conditions of a system by a barely perceptible fraction and its fate is altered radically.
Sandra Mitchell’s Unsimple Truths – a translation of a book originally published in German as Komplexitäten: Warum wir erst anfangen, die Welt zu verstehen – attempts to trace these contours of complexity. Five of its six chapters focus on Mitchell’s own discipline, philosophy of science, revising traditional accounts of how the sciences, and the world they investigate, work. She builds towards a general vision of scientific practice she calls ‘integrative pluralism’, which recognises not only the complexity of the world but also the complexity of the sciences’ relationships to each other. Refreshingly, she also steps beyond the conventional boundaries of the philosophy of science, devoting a chapter to reflections on how policy-makers should deal with complexity. Some sciences have the luxury of investigating model systems, such as those friction-compensated surfaces on which the school physics student’s toy car can be subjected to an allegedly uniform acceleration. But the policy-maker needs to plan and intervene in the world as we find it, not treat it as an artificially controlled system. How can one plan in a world whose causal structures are multilayered, and whose future is so dependent on contingencies?
Traditional philosophy of science, as Mitchell sees it, suffers from two lifestyle conditions. First, it has an unhealthy diet: too many examples from physics. If we paid more attention to psychology, biology and so forth we would see the error of trying to squeeze all of scientific practice into a model built for one subdiscipline. Second, it doesn’t get out enough: it spends too much time in the armchair. Philosophers have tended to study how science works by putting together formal models which are overly reliant on logic and abstract reflection on imaginary examples, and insufficiently sensitive to the details of what scientists do and to the nature of real processes. Both of Mitchell’s remedies – move the focus away from physics, use real cases from science – are characteristic of much fashionable work by ‘naturalistic’ philosophers of science, who tend to regard science as ‘the best source of knowledge of nature’; the remedies are valuable, but their virtues can be exaggerated. Philosophy, too, is a complex business.
Mitchell sometimes overstates the degree to which traditionally minded philosophers have assumed that all sciences are, or should be, like physics. Reflecting on laws of nature, she writes: ‘The search for universal, exceptionless laws, for example, was taken by 19th-century British philosophers . . . to be the goal of scientific investigation, as they reflected on the enormous success of Isaac Newton’s laws of motion and universal gravitation.’ This is unfair on the likes of William Whewell, the 19th-century British philosopher who gave the name ‘palaetiological sciences’ to disciplines that seek to give explanations, cause by cause, for the changes that occur in the natural and social worlds over time. Whewell thought these narrative explanations for how things got to be the way they are were perfectly respectable, as were the sciences which dealt in them, in spite of the fact that they didn’t propose universal laws. Perhaps people forget about Whewell’s pluralism because the term ‘palaetiological’ hasn’t caught on (unlike ‘scientist’, another of his neologisms).
Mitchell’s main grumble is not so much with the claim that all sciences hunt for laws, but with the view that laws themselves must conform to the Newtonian paradigm. Many philosophers have argued that laws of nature are fundamental truths that characterise the universe in its spatial and temporal entirety. Laws tell us how all objects behave; indeed, some have believed that laws tell us how all objects must behave. On this view, disciplines such as evolutionary biology do not deal in laws, because they do not deal in exceptionless statements about what must be the case. Instead, they tell us about the structure of the ‘tree of life’ by investigating which species branched off from which and when. They tell us how the leopard got its spots, or how the bat developed a capacity for echolocation. They are sciences, but sciences that deal in the contingency of narrative history, not the necessity of timeless laws.
Mitchell disagrees with this traditional picture, but the disagreement sometimes looks like a scuffle over definitions rather than a demonstration of the value of attending to the realities of science in all their complexity. ‘Armchair’ philosophers of science such as David Lewis and David Armstrong, while holding to the claim that laws are spatially and temporally exceptionless, have tended to agree with Mitchell that even the most basic laws of physics are ‘contingent’. Roughly speaking, they mean by this that we can coherently imagine alternative possible universes in which the laws are different: gravity could have obeyed an inverse cube law instead of an inverse square law, for example. And Mitchell appears to agree with tradition in claiming that the principles governing the biological world are ‘more contingent and less universal, more ephemeral, less fundamental’ than the laws of physics. Examples abound: Mendel’s ‘laws’ are frequently violated; generalisations about the characteristic traits of biological species are statistical rules of thumb at best; natural selection works with the variation that happens to be available, with the result that there are no sure-fire rules telling us which adaptations will evolve to meet the demands of a given environment. Philosophers have typically taken this to show that biological generalisations may be valuable in prediction and explanation, but they are not true laws because they have exceptions. Mitchell takes it to show that we need to revise our account of what laws are: on her view, a law is a principle that can be used effectively in explanation and prediction, and a principle need not be exceptionless to fill this role. The disagreement, it seems, is only about labelling.
The substantive value of Mitchell’s views on laws starts to shine through, however, when one asks why, if laws could have been different, they happened to turn out the way they did. To learn that tigers have stripes is to learn about a reliably instantiated pattern in nature, albeit one that hasn’t always been true and admits of exceptions even now. One can ask why it is that tigers, by and large, are stripy. To enlarge the inquiry, why are species in general frequently the sorts of thing that permit these kinds of generalisations? Why is nature formed of what Darwin called ‘tolerably well-defined objects’, when – on the face of things – it could have been a thoroughly disordered mess, with no discernible species at all? As Mitchell puts it: ‘It is not sufficient to say that laws are contingent; one must detail what kinds of conditions they depend upon and how that dependency works.’ In giving answers to these sorts of questions, biologists (among others) can decisively answer Rutherford’s reported challenge: ‘In the world of science, there is physics. Everything else is just stamp-collecting.’ Mitchell shows us that the biologist who recognises the impossibility of finding ‘laws’ in the traditionalist’s sense need not settle for philately. They too can aim for, and frequently achieve, the goal of articulating and explaining robust generalisations about the natural world.
Mitchell doesn’t complain only about traditional accounts of laws, but also about traditional accounts of causation:
Traditional philosophical analyses of the cleanest, clearest cases of causal interactions, perhaps the first step in philosophical analyses of science, are difficult if not impossible to apply to the messy, murky causal relations that are displayed by genes and phenotypes, human interventions on the global climate, or multilevel, feedback-laden phenomena studied in modern psychiatry.
Whether this criticism is justified depends in part on what one means by ‘traditional’. For a generation or two, philosophy students have been taught about David Lewis’s ‘counterfactual’ theory of causation, published in 1973, and constructed, defended and revised more or less entirely from the armchairs of Lewis and his graduate students. The gist of Lewis’s view is almost absurdly simple: to say that an event c causes an effect e is just to say that had c not occurred, e would not have occurred. This allows for all sorts of causal complexity, and the complexities are sometimes obvious enough that we don’t need to leave our armchairs to recognise them. The question, for example, of what effect pulling a gun’s trigger has obviously depends on local context (whether there is anyone at the other end of the gun) and on starting conditions (whether the gun is loaded).
However, the ways in which some natural systems show complexity cannot be intuited from the armchair. Mitchell explains how genes get together to produce an adult organism. They form networks, which sometimes have a ‘robustness’ built into them. If a gene with some significant effect is removed from a robust system, the network reorganises itself, so that the normal developmental outcome occurs all the same, through a different causal pathway. (This means that you can’t always figure out the causal role of genes by using so-called ‘knockout’ techniques, which involve removing the genes in question and seeing what happens.)
At first glance, this clashes with Lewis’s counterfactual account: it appears that the gene in question has no causal impact on the trait. But Lewis understood perfectly well that his counterfactual account needed modification to encompass similar forms of robustness. Bill fires a gun and kills the president. Bob also has a gun trained on the president, and he is resolved to shoot if Bill’s nerve fails him. So had Bill not fired, the president would still have died. Lewis knew that a good account of causation should allow that Bill causes the death, and he tried to ensure that his counterfactual account could do this. Roughly speaking, his solution was to use the counterfactual account to identify elements of the causal process going from Bill’s firing to the death, then stitch these elements together in a causal chain. My point here isn’t to insist that Lewis’s solution is a good one, but that features of causation such as robustness were perfectly visible to armchair philosophers even before many of the phenomena of biological complexity had been discovered by scientists. Moreover, there are structural similarities between these armchair cases and cases of genetic robustness which suggest that any solution that works in one case is likely to work in the other.
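The shared structure of the Bill and Bob case and of genetic robustness can be put in miniature code (a toy sketch of my own, not Lewis’s formalism): the naive counterfactual test misclassifies the actual cause precisely because the backup guarantees the effect either way.

```python
# A toy model of the preemption case: Bill fires; Bob is a backup
# who fires only if Bill's nerve fails. The naive counterfactual test
# ("had c not occurred, e would not have occurred") wrongly rules that
# Bill's shot is not a cause, because Bob's backup ensures the effect
# whether or not Bill fires.

def president_dies(bill_fires: bool) -> bool:
    bob_fires = not bill_fires  # Bob shoots only if Bill does not
    return bill_fires or bob_fires

actual = president_dies(bill_fires=True)            # what actually happened
counterfactual = president_dies(bill_fires=False)   # remove Bill's shot

# Naive test: did removing the putative cause remove the effect?
naive_test_says_cause = actual and not counterfactual
print(actual, counterfactual, naive_test_says_cause)  # True True False
```

A robust gene network has the same shape: knock out the gene (Bill) and a compensating pathway (Bob) produces the normal developmental outcome all the same.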
These worries about how complexity fits with traditional philosophical views of causation, laws and so on have little purchase on the important matter of what decision-makers can do to handle complexity. This is where Mitchell’s book is at its best. Nearly all the systems we care about – the global climate, the human body, the international financial system – exhibit the various forms of complexity she dissects. Complexity is not synonymous with uncertainty – an expert who can assess precisely how a subtle constellation of causes will interact and unfold may come to reasonably certain conclusions about a given complex system – but the conditions that characterise complex systems will nearly always leave even attentive investigators highly unsure about how those systems work and which outcomes they will produce.
A typical reaction, displayed in many policy documents, is that when dealing with scientific uncertainty in relation to important systems, policy-makers should adopt a precautionary approach. At one level, this is undeniable: if a policy-maker is considering making an intervention in the international financial system (by restructuring banking institutions, say), the human body (by changing drug regulation) or the global climate (through ambitious geo-engineering), then she shouldn’t simply blunder in with the latest fashionable thinking. Then again, she shouldn’t refuse to act altogether for want of cast-iron evidence that the proposed course of action will work. Both unintentional vandalism and irresponsible dithering can lead to disaster.
Those who oppose precautionary thinking often argue that it becomes incoherent or dangerous when spelled out in detail. The problem is that precautionary thinking is supposed to help in situations of uncertainty; that is, in situations where we lack knowledge, or where our knowledge is imprecise. But since decisions under such conditions tend to have the potential for grave outcomes whichever option we choose, we need guidance on how to err on the side of caution. Imagine the situation of a telecommunications regulator, confronted with a poorly designed scientific study suggesting that mobile phones might ‘fry your brains’. Is the precautionary measure to encourage people to refrain from using their mobiles, or perhaps to deny children access to them, on the off-chance that the study is onto something? Or does this measure itself risk grave harm, on the grounds that children are at greater risk of abduction if they aren’t able to call home?
High-profile opponents of the precautionary principle, such as Barack Obama’s new regulation tsar, Cass Sunstein, have argued on the basis of these ‘paradoxes of precaution’ that the principle offers no guide for decision-making at all. Sunstein instead advocates a form of cost-benefit analysis as the best way to ensure that the potential costs and benefits of all courses of regulatory action – including inaction – are placed ‘on screen’. He also argues that cost-benefit analysis forces regulators to be explicit about the ways in which the different factors contributing to a complex decision are weighted: on his view, cost-benefit analysis is not only more effective than precautionary thinking in preventing potential disasters, but also makes for greater transparency in public deliberations about the setting of regulatory priorities.
Mitchell’s critique of cost-benefit analysis is a familiar one. It is suitable for well-understood systems, unfolding over short time periods, where we can assign probabilities with confidence. But the probability of a given outcome – financial profit, the extinction of species, an increase in sea levels, high blood pressure – in whatever system we are analysing will often vary significantly with small changes in the starting conditions, with our assumptions about the causal interactions within the system, and with variation in background conditions as the system evolves over long periods of time. Our estimates of these conditions will often be imprecise, or thoroughly conjectural, in spite of the apparent precision of the cost-benefit methodology.
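The fragility of such estimates can be seen in a toy calculation (the policies, payoffs and probabilities here are invented for illustration): a modest shift in one assumed probability flips which option a simple expected-value comparison recommends.

```python
# Toy cost-benefit comparison of two hypothetical interventions.
# Each has a payoff if things go well and a loss if they go badly;
# the expected value turns on the assumed probability of success.

def expected_value(p_success: float, gain: float, loss: float) -> float:
    return p_success * gain - (1 - p_success) * loss

def preferred_policy(p: float) -> str:
    # Policy A: modest gain, modest downside. Policy B: large gain, large downside.
    a = expected_value(p, gain=10, loss=5)
    b = expected_value(p, gain=40, loss=35)
    return "A" if a > b else "B"

print(preferred_policy(0.45))  # A
print(preferred_policy(0.55))  # B
```

Here the recommendation reverses as the assumed success probability crosses 0.5 – yet in a complex system our estimate of that probability may itself be conjectural to well beyond that margin.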
The question is how to turn uncertainty of this sort into trustworthy policy recommendations. Mitchell’s stance on these matters is not new – she explicitly acknowledges her debts, especially to Steven Popper, Robert Lempert and Steven Bankes’s work at the Rand Corporation on what they call ‘robust adaptive planning’ – but her way of justifying it is particularly crisp and compelling. Simple cost-benefit analysis will tend to collapse a rich understanding of the complexity of a system into a single set of all-things-considered probability estimates for its likely end-states. In so doing, Mitchell says, we mask our grasp of complexity, and replace it with a bland expression of uncertainty. We should not, of course, try to pretend that we know more about how these systems work than we really do. But once we do acknowledge complexity, two strategies become available. First, we can examine how our proposed interventions will fare under a range of different plausible scenarios for the unfolding of a complex system, picking the strategy which has a satisfactory outcome across the largest range of future scenarios. Second, we can assume that the world may not behave in a manner we expect it to, and therefore make sure that the strategy we choose can be undone or altered with reasonable ease. The end result should be a set of concrete recommendations that are thoroughly in accordance with precautionary thinking in remaining humble about our state of knowledge, while taking into account the full range of scientific evidence.
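The first of these strategies can be sketched in miniature (the strategies, scenarios and payoffs are invented for illustration, and real robust adaptive planning involves far richer scenario generation):

```python
# A minimal sketch in the spirit of 'robust adaptive planning':
# rather than optimising expected value under one set of probability
# estimates, pick the strategy that performs acceptably across the
# widest range of plausible future scenarios.

SATISFACTORY = 0  # threshold: any non-negative outcome counts as acceptable

# hypothetical payoff of each strategy under each plausible scenario
payoffs = {
    "intervene aggressively": {"boom": 9, "stagnation": -4, "collapse": -8},
    "intervene cautiously":   {"boom": 4, "stagnation": 1,  "collapse": -1},
    "do nothing":             {"boom": 2, "stagnation": -1, "collapse": -6},
}

def robustness(strategy: str) -> int:
    """Number of scenarios in which the strategy is satisfactory."""
    return sum(1 for v in payoffs[strategy].values() if v >= SATISFACTORY)

best = max(payoffs, key=robustness)
print(best)  # the cautious option is acceptable in the most scenarios
```

No probability weighting of the scenarios is required, which is the point: the choice is driven by breadth of acceptability, not by a single collapsed estimate.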
Planning in the face of uncertainty is an intricate matter, and Mitchell’s simplified presentation ignores some important complexities of the process. Rand’s ‘robust adaptive planning’ is a tool designed by an economist, a physicist and a computer scientist. Other scientific disciplines have made their own valuable contributions to the design of appropriate planning systems. Psychology is one. The question of how good a particular outcome would be, were it to arise, should be wholly independent of the question of how likely that outcome is. And yet it turns out that we tend to overestimate the likelihood of outcomes we favour, while underestimating the likelihood of outcomes we don’t want. This is known as ‘optimism bias’. And ‘affiliation bias’ results in (for example) the conclusions of studies on the effects of passive smoking varying according to the authors’ affiliation with the tobacco industry. Needless to say, these psychological results suggest that policy-makers need to be attentive to the institutional sources of the data they use. And this, in turn, underlines a long-standing theme of work among social scientists, who have claimed that broad public participation in risk planning may increase the quality of risk analysis. Mitchell’s stance on policy isn’t complete, but perhaps that is to be expected in a complex world.