Dennett’s Ark

P.N. Johnson-Laird

  • The Intentional Stance by Daniel Dennett
    MIT, 388 pp, £22.50, January 1988, ISBN 0 262 04093 X

When the single-celled organism paramecium bumps into an obstacle, it reverses the power beat of its cilia, backs away, and swims off in a different direction. How natural to suppose that this animalcule forms a representation of the world, determines that it is obstructed, and decides to set another course. When ‘Washoe’, the celebrated chimpanzee who was taught the American Sign Language for the deaf and dumb, saw a duck for the first time, she made the signs for water and bird. How natural to suppose that she knows how to use language creatively. When Mrs Thatcher tells us that making money is no sin, how natural to suppose that she knows what she is talking about. In all of these cases, we treat other living beings much as we treat our next-door neighbours (most of the time): we assume that they are rational agents with beliefs, desires, and mental representations of the world. We adopt what Dan Dennett, the distinguished American philosopher of mind, refers to as the ‘intentional stance’ towards them. His latest collection of papers is a series of ruminations on quite what we are doing.

At the core of our stance is the assumption that mental states exhibit intentionality. When philosophers talk about intentionality, however, they do not mean the capacity for deliberate purposeful action. They hark back instead to a distinction revived by the 19th-century Austrian philosopher, Franz Brentano: ideas, beliefs, and the other contents of the mind, are about things. Because boulders, trees and the furniture of the physical world lack this property of intentionality – they are not about anything – there appears to be an irreducible gulf between matter and mind. The concept of intentionality thus descends from Cartesian Dualism, and, like Dualism, it divides contemporary theorists into two main camps.

On the one hand, the Behaviourists and ‘eliminative materialists’ such as Paul and Patricia Churchland propose to throw intentionality out along with other alleged myths of folk psychology. They believe there are no beliefs. Or, as Dennett more tactfully remarks: ‘the theory they, um, espouse or champion has no room in its ontology for beliefs.’ On the other hand, the mentalists – philosophers such as Dretske and Fodor, and cognitive scientists from Craik to Marr – hold to the reality of representations and beliefs. They assume that there is a physical world in space and time, and that mental processes enable human beings to perceive that world, to have beliefs and feelings, and to be aware of themselves. These assumptions can be challenged, but, as William James observed, their discussion is called metaphysics and lies outside the scope of science. Unlike James, cognitive scientists aim to give a thoroughgoing computational account of how the brain constructs and uses representations of the world.

One can adopt, of course, other possible stances towards the intentional stance. Indeed, it is difficult to imagine a view about it that does not have a current adherent. Thus, perched precariously between eliminativists and mentalists is Dan Dennett, and, not surprisingly, the bulk of the essays in the book are devoted to expounding, extending, and defending his position.

We adopt, he says, a ‘physical stance’ towards the weather: we treat it as obeying the laws of physics. We adopt a ‘design stance’ towards a computer: we treat it as an object designed to fulfil certain functions. These attitudes enable us to predict at least some aspects of the behaviour of the weather and of computers. The same aim of prediction leads us to adopt the intentional stance: we treat an organism – sometimes, he says, even a machine such as a thermostat – as having beliefs relevant to its goals and as being rational. We adopt the intentional stance all the time, because it works: it enables us to make the right predictions, and none of the other stances would work so well. Sceptics may (and do) deny this fact, but here is a small-scale demonstration of how the intentional stance appears to be built into our modes of understanding.

Suppose I tell you: ‘When Alice found Bill in bed with Christine, she took the ring off her finger and flung it in his face.’ You can easily imagine the scene. Bill has been caught in flagrante delicto, and his wife, Alice, has thrown her wedding-ring at him in an angry gesture.

In fact, you are wrong. Christine is Bill’s wife, and it was she who threw her wedding-ring at him, because she knew that he had contrived their discovery by Alice out of an exhibitionistic impulse. You misinterpreted the pronoun ‘she’ in my original remark without giving its uncertain reference a second’s thought, and what made it possible to mislead you was your knowledge of sexual mores. (Other related, but less salient knowledge also enabled you to understand the true dénouement.) The intentional stance is indeed thoroughly embedded in the tacit processes of understanding, and, as cognitive scientists have shown, it yields many replicable phenomena of this sort.

To act as though human beings, animals, and even machines, have intentional states can be useful both in daily life and in advancing science. But Dennett argues that we should use the hypothesis in a purely instrumental and predictive way without necessarily believing that it is true. Thermostats obviously do not have beliefs – pace certain devotees of Artificial Intelligence. Paramecia do not have beliefs – their behaviour can be explained without postulating that they construct representations of the external world. Chimpanzees and politicians are more interesting candidates. But, according to Dennett, even human beings might not have fully intentional representations of the world; they might not have mental states corresponding to beliefs or desires. The existence of such states, he points out, certainly does not follow merely from the success of intentional attributions. Hence he differs from eliminativists by defending the intentional stance, and from cognitive scientists by denying its most obvious implication.

This position is so strange – an echo of pragmatism, perhaps – that one is bound to ask why anyone should have adopted it. My first thought was: just for the hell of it! I do sometimes feel as though some subtle congenital brain lesion – in my pineal gland, presumably – has rendered me insensible to the virtues of profound metaphysical doubt, but I have also wondered whether it isn’t a lesion that gives rise to such doubts in the first place. My first thought was reinforced by Dennett’s delightfully ironic advice, in the final essay, on how philosophers should reach their chosen view on a topic. It is best, he says, to develop an account that is memorably radical – something that differs from everyone else’s views and that is expressed with as few qualifications and concessions as possible. This insight into philosophical method perhaps explains why so much philosophy consists of guided tours, replete with maps, of where everyone stands in relation to some problem, as in Dennett’s own essay ‘comparing and contrasting’ the views of Quine, Sellars, Chisholm, Putnam, Davidson, Bennett, Fodor, Stich, and Dennett. It also explains why he says philosophers write too much – they can’t all be right – and why cynics say there is no need for wastepaper baskets in philosophy departments.

Dennett’s case against the full intentionality of representations is radical and memorable, but ultimately unconvincing. Its starting-point can be illustrated by those symbolic artefacts, such as books, pictures and computer programs, that seem themselves to be about things. Their intentionality is often said to be at best second-hand and to derive from us, the prime movers in the intentionality business. Dennett, however, argues that even our intentionality is second-hand. And what, you may ask, does it derive from? The answer, according to Dennett, is from selfish genes: we mean what they mean us to mean. Alas, something seems to have gone wrong here. Genes don’t have beliefs; they don’t reason; they don’t understand anything, let alone genetic theory. He admits as much, but then adroitly changes the subject.

One source of the trouble, and of Dennett’s doubts about intentionality, appears to be that perennial riddle: can machines think? Some say yes, because the brain is a machine. Some say no, because the brain isn’t a machine. Both parties often agree that computers don’t think because they lack the magic ingredient – being made out of flesh and blood, being connected to a body, having a rich social life, or whatever – that enables brains to think. For Dennett, the brain is a machine but machines do not really think: they have a derived intentionality. Hence the brain has only a derived intentionality.

What seems wrong here is the assumption that brains and machines are necessarily just syntactic devices. ‘A machine is a machine,’ Dennett writes, ‘and there is nothing about the construction or materials of any sub-variety that could permit it to transcend the limits of mechanism and eke out “real” semantics over and above its merely syntactical churning.’ Others who take the same view argue that cognitive science must ignore the external world and its relations to mental states. The discipline must adopt a sort of solipsism and worry only about internal syntactic matters. As Dennett observes, psychology viewed in this light isn’t really psychology at all: its aim ought to be to show how the brain implements intentionality. Unfortunately, his solution is that the brain merely mimics the operations of a semantic device. In other words, paramecium, chimp, and your next-door neighbours, are all rightfully in the same ark: none of them has access to real beliefs, desires, meanings. The only mistake is not to include thermostats in the list. They differ from us, says Dennett, only in degree – a matter of complexity rather than a qualitative distinction.

Dennett has another worry. When you adopt the intentional stance, you attribute to an entity beliefs that appear to be relevant to its interests. Beliefs, according to many philosophers, are sentence-like entities inscribed in a mental language in your head. You have a belief, say, about Mrs Thatcher’s concern for higher education, and so there is a sentence representing this belief stored in your mind. A hallmark of beliefs is that they may be about things that don’t exist – a phenomenon that seems trivial, but that produces horrendous problems in the logical analysis of sentences expressing beliefs. Not only can true assertions be made about false beliefs, or about beliefs concerning non-existent entities: they can even be made about beliefs founded on a semantic error, as in ‘Queen Victoria thought that transvestites were monks who have peculiar habits.’ What Dennett shows in convincing detail is that there is currently no viable analysis of beliefs, or expressions of beliefs, as sentence-like entities. If you treat beliefs in terms of what people say, he says, then you make the mistake of treating a head-cold as a large set of internally stored sneezes.

What else could beliefs be? There may be elements in the brain that serve as a set of core beliefs from which others are derived by inference. But, Dennett suggests, these core elements may turn out not to be real beliefs at all, let alone expressions in a language of thought. The information they contain might be wholly alien to us – a matter of the details of neuronal processes. Here he shows every sympathy with the eliminative materialists, who argue that once we identify the information-storing elements in the brain, they will be no longer recognisable as the everyday beliefs of folk psychology.

This claim, perhaps inadvertently, exposes a central problem in trying to eliminate the psychological level of description and in trying to abandon the view that mental states are intentional. When we examine bits of information shifting from one register to another in a micro-chip, they are no longer recognisable as the executions of, say, our favourite word-processing program. When we describe the program, should we therefore stop talking about deleting words, or moving paragraphs, and speak instead of shifting bits from register to register, and so on? To do so, of course, would be to lose the whole point of the exercise. The same conclusion can be drawn about the proposal to stop talking about beliefs in favour of neuronal states (if we ever discover the relevant ones). A computer program can be described at different levels – from a high-level description of what it computes down to its detailed implementation in a particular micro-chip. A description of a belief might, in principle, be similarly couched at different levels – from the conscious apprehension of its content down to its underlying neuronal processes. To argue that an explanation at the neuronal level will render the high-level account superfluous, vacuous, or even downright false, is to court self-refutation because, of course, the argument is itself a high-level statement. Likewise, to believe that the mind can be properly understood only at the neuronal level is like believing that the study of coins will reveal the truth about economics.

Let us grant provisionally that human beings have beliefs, and that beliefs are genuinely intentional, but not necessarily sentence-like entities in the head. How, if at all, might we reconcile such claims with Dennett’s thesis that the brain is a machine? Philosophers tend to think of brains as disembodied and cut off from the rest of the nervous system, and one of their favourite thought-experiments is to imagine the consequences of a brain kept alive in a vat. But it is a mistake to isolate the brain from the causal chain in which energy from the world triggers nerve impulses, from which in turn the brain constructs a representation of the world. Thus, the two-dimensional pattern of light falling on to the retina is used to construct an internal three-dimensional model of the objects in the scene from which the light has been reflected. This process of ‘optics in reverse’ is largely outside conscious awareness: for good evolutionary reasons, we seem to be in direct contact with the world, and are aware neither of process nor of representation.

Vision also depends, as the late David Marr emphasised, on a number of in-built constraints acquired during the course of evolution. Without them, it would be impossible because there are infinitely many different arrangements of objects that would produce the same pattern of light on the retina. (Analogous constraints may solve the much rehearsed philosophical problem of the indeterminacy of meaning.) The visual process has been modelled by programs that use the output of an electronic camera. When such a system enables a robot to avoid pitfalls and obstacles, it would take a radically lesioned sceptic to deny that the program’s internal representation of the world is genuinely intentional.

Dennett’s essays are vivid, witty and admirably provocative. Sometimes they give us different perspectives on the same underlying objects of thought, and sometimes these objects themselves undergo change. For those of us who are not professional philosophers, he helpfully appends short explanatory essays to the main papers. They summarise the issues and explain his most recent thoughts. There were times, I confess, when I longed to get beyond all the fiddle about brains in vats, twin earths, vending-machines, Chinese rooms, and the other paraphernalia beloved by philosophers. No doubt they are a useful tool in conceptual analysis, but I note that Dennett himself is increasingly caught up in empirical work in cognitive science.

What is the truth about intentionality? Dennett says that we make sense of one another by adopting the intentional stance, but we may not have true intentions. It could be that there are no intentional representations anywhere from people down to paramecia, from robots down to thermostats. Alternatively, it could be that they are ubiquitous – nothing but intentional states all the way down. Most likely both of these views are false. Paramecia and thermostats respond directly to physical properties of the world; they have no intentional representations. Other entities, such as mammals and robots, can represent the world, but they don’t know that they represent the world, and they have no abstract knowledge or beliefs about themselves. Human beings, however, do not merely have beliefs: they know that they have beliefs – despite the best efforts of certain philosophers to persuade them otherwise. As Descartes (or Woody Allen) might have said, ‘I know that I think, therefore I am an intentional system.’ This capacity for self-awareness distinguishes us qualitatively, I am happy to say, from thermostats. It depends on consciousness – Dennett’s other main philosophical preoccupation, but one which he deliberately eschews in the present book. We will just have to wait for his next one.