The problem of explaining consciousness is the joint property of philosophy, psychology and neurobiology, though there have been times when none of these fields much wanted it. In philosophy, the mood in the middle years of the 20th century was to deny or dissolve the problem: if we just talked about everything more clearly, Wittgenstein and Ryle believed, we’d see there was no issue. That moment having finally passed, say forty years ago, philosophy took the problem of consciousness as one of the three major challenges faced by anyone attempting a theory of the relation between mind and body. The others were the problem of ‘qualia’, explaining how the subjective feel of the mind could be a feature of a physical system; and ‘intentionality’, the fact that thoughts can be about things, and can represent objects and events, including those far removed from us. Consciousness was seen as an aspect of certain sophisticated forms of experience that have both a distinctive feel and a role in intelligent thought. Conscious thought might involve an ability to think about one’s own mental states, to perceive them as thoughts and feelings.
It wasn’t until the 1980s that scientists’ reluctance to talk about consciousness relented. Before then, if you brought the subject up, the result would be like the scene in a Western when someone says the wrong thing in a saloon and the piano suddenly stops playing. Consciousness was not a serious topic for science – it was too elusive, too much of a mess, yielding little but fruitless speculation. But in 1988, Bernard Baars put forward his ‘global workspace’ theory, that a system in the brain functions to integrate diverse sources of information for use in a slow, attentive style of thinking. We are conscious of whatever is currently in that workspace. In 1990, Francis Crick, working with Christof Koch, offered a somewhat different theory, focusing on consciousness in visual experience, and around the same time some groundbreaking experiments were undertaken by Nikos Logothetis, working with Jeffrey Schall and David Leopold. It had been known since the 19th century that if quite different images are shown to each of your eyes at the same time, your conscious experience doesn’t blend the two but flips between them. If one of your eyes is shown a face and the other is shown a house, your experience will be house, then face, then house, and so on. Monkeys were trained to use a lever to indicate which image they were seeing in just such a situation (a clever experimental feat in itself), and Logothetis and his colleagues recorded the activity of neurons in different parts of the monkeys’ brains to work out which were active when each image was being registered.
What they found was that in the ‘early’ stages of visual processing, the activity of neurons mostly reflected what was being presented to the eyes, but that deeper inside the brain were neurons whose firing was associated instead with the monkey’s report (via the lever) of what it was experiencing. In Consciousness and the Brain, Stanislas Dehaene calls this ‘the first glimpse of a neuronal correlate of conscious experience’. Dehaene is a neuroscientist with little time for philosophers. He trained in mathematics and psychology, holds a chair at the Collège de France, and runs a laboratory outside Paris. His message is that there has been enormous progress. With a little ingenuity, he claims, consciousness can now be studied routinely: we have several ‘signatures’ of conscious thought in the activity of the brain, and a theory, descended from Baars’s ideas, of what consciousness does for us, and why it exists.
In philosophy, meanwhile, many more people now work on the topic of consciousness, and the scope of the problem is seen much more broadly. What used to be called the problem of qualia, or the feel of the mental, is now often treated as just one facet of the problem of consciousness. In Thomas Nagel’s language, if there’s ‘something it’s like’ – something it feels like – to be you, then you are conscious. Consciousness is also, as before, sometimes seen as a special self-aware kind of thought. So today the literature often makes divisions between different senses of the term, distinguishing ‘phenomenal’ consciousness – the feel of experience – from senses that have to do with self-reflection and other cognitive phenomena.
This reorganisation was good in some ways, misleading in others. First, it was a problem that qualia were often thought of as if they were little things, atoms of experience: one quale, two qualia. This was a holdover from old philosophical theories of knowledge based on elementary ‘sense-data’ or ‘simple ideas’, dating from the time of Locke and Hume. People also liked to call qualia ‘raw feels’. Raw as opposed to cooked? The concept was a mess, as Daniel Dennett and others were witheringly effective in pointing out. But the notion of qualia, seen as separate from the idea of consciousness, did have one good feature: it naturally accommodated the idea that there might be a kind of feeling present in an organism that is less sophisticated than consciousness. This is the idea also sometimes captured in the word ‘sentience’, a good term but not one much used by philosophers or scientists. Take pain, for example: I wonder whether squid feel pain. I take this to mean: does damage feel like anything to a squid? Does it feel bad to them? This question would now be expressed by asking whether squid are ‘phenomenally conscious’ – which to me always sounds like it’s asking too much of the squid.
Dehaene thinks several notions of consciousness are OK, but that one is central. He calls this ‘conscious access’: we are conscious of a piece of information in our minds when it ‘enters our awareness’ and, usually, can be reported. He has no time for the broader concept of feeling: ‘The notion of a phenomenal consciousness that is distinct from conscious access is highly misleading and leads down a slippery slope to dualism.’
Psychology has long accommodated the idea that much of the processing we do to make initial sense of what we see and hear is unconscious. It would be easy to assume that this is just low-level processing – bookkeeping, number-crunching, long-term storage. If so, it’s no surprise that we don’t experience it. Dehaene, though, argues that much more than this is unconscious. Some of the sophisticated ways we respond to the meaning of what we hear, for example, can be entirely unconscious. Sequences of words can be flashed so quickly at a person that she has no idea they were there at all, yet sequences of words with incongruous meanings – ‘very happy war’ – will have been registered by the brain differently from combinations with more reasonable meanings – ‘not happy war’. We might reasonably have assumed that distinctions of this sort would have to be made in a somewhat reflective way, but that isn’t so. Consciousness, as Dehaene sees it, is not an inevitable accompaniment to perception or even intelligence, but something that accompanies a small subset of what’s in our minds.
Why is some information conscious while the rest is not? Dehaene handles this distinction with his own version of the workspace theory. Consciousness has specific tasks, specific things it’s good for, and these make evolutionary sense. First, consciousness comes into play when we are faced with novel tasks and problems, especially tasks we can handle only by bringing together a variety of information. Second, it enables us to handle time in ways that unconscious thought can’t. Take conditioning experiments. If an irritating puff of air to the eye is preceded by a tone, you will quickly learn to close your eyes when you hear the tone. If the tone is immediately before the puff, then the learning can be done unconsciously (and in rabbits, done with much of the brain removed). But if there’s a gap between tone and puff, then the conditioning will work only if you are conscious of the association between the two. Conscious thought is a special-purpose tool, not just the feeling of being a living organism dealing with the world. Consciousness also has a discrete, on-or-off character. An experience either ‘ignites’ the brain into conscious thought, or it doesn’t.
The progress that has been made on the basis of the science described in Dehaene’s book is remarkable. The British neuroscientist Adrian Owen uses brain-scanning technology to study people incapacitated by an accident or stroke. Clinicians had become familiar with locked-in syndrome, whereby a person is paralysed except for one tiny window of voluntary action, such as the capacity to move the eyelids, by which they can communicate with the outside world. This seemed distinct from the vegetative state, in which a patient is completely unresponsive; it was assumed that conscious activity had ceased entirely in such people. In 2006, Owen was able to show that in some cases that is not true, and that it was possible for them to communicate by sheer imagining. If such a person is asked to imagine playing tennis, or walking through their house, we can track when the action-related or space-related areas of their brain light up (action-related if they’re imagining tennis, space-related if the house), and this can be used as a way of allowing them to answer yes or no questions – a deliberate, non-routine act of the sort that is always conscious in uninjured people.
From here, as Dehaene sees it, the science of consciousness is just a matter of sorting out the details. He pushes aside the queries that might arise from thinking differently about these issues: scientific progress will overwhelm residual quibbles, as it did a century ago in the case of vitalism – the idea that life can’t be fully explained by materialist biology. I think, however, that there’s more to the situation than Dehaene allows. He has taught us a lot about one phenomenon, but next door to it there is another that also needs to be explained: subjective experience in a broader sense, the feel of our lives. It seems to be a matter Dehaene would have to dismiss, given his rejection of ‘the notion of a phenomenal consciousness that is distinct from conscious access’ on the grounds that it is ‘highly misleading and leads down a slippery slope to dualism’. To show that it need not, it’s worth looking more closely at two features of the phenomenon Dehaene describes as consciousness, both of which he is quite emphatic about.
First, he holds that the route by which brain states become conscious includes a ‘bottleneck’. We can only be conscious of one thing – or, more exactly, one topic – at a time. This is a surprising claim. To me, at least, it seems that there’s usually something that’s the focus of attention, while all sorts of other things lurk in the periphery – in the background, but experienced as there. We experience a scene, and within that scene, we can focus more on one thing and then more on another. That might seem uncontroversial, but Dehaene often writes as if there’s no periphery at all: ‘We never really process two unrelated items consciously at exactly the same moment.’
A ceiling fan is rattling slightly in the room in which I am reading Dehaene’s book. I think I can process the word ‘occupied’ – the last word I read before putting the book aside for a moment – while also taking in the rattle. Can I really? It does take a certain kind of settling-in, but there they are, the two of them. Maybe this is a case where the pair comprises one item? Some ways of counting items would render empty the claim that we experience only one thing at a time. I don’t think this is a problem in general, because the way we count items can depend on what task we are doing. But in this case, reading with the fan rattling in the background, I assume there’s no task at hand except that of retaining some sense of what’s going on.
Dehaene might say that such vague introspective ruminations count for nothing: there are experiments that bear on this. There are experiments, yes, but the experiments that I know of involve doing a task of some kind, a richer sort of engagement than the more passive and relaxed experiencing of word-plus-fan that I am talking about. If I had to count the clicks of the fan and also inspect the font that has been used to typeset the word ‘occupied’, I accept that I couldn’t do both tasks at once. Dehaene can insist that I do one of them at a time, with the aid of a ‘buffer’ that keeps one task waiting while I am working on the other. But if we’re talking about a more low-key kind of subjective experience, then things seem different, and the experiments I know of don’t show that my impression is wrong. I think the focus on what can be readily studied in the lab leads Dehaene to set aside – and occasionally to suppress – phenomena that are real but a bit more intractable.
The second aspect of his thinking that I want to draw attention to is related to the first. In part because of the one-item bottleneck, but also because of the tortuous route things take through our brains, Dehaene says there’s a delay between what happens and our experience of it. We think we experience events as they happen, but this is an illusion. Some delay is hardly a surprise: the experience can’t be exactly simultaneous; brain activity must take time. But Dehaene thinks that for things we consciously experience, the delay is long: about a third of a second. He thinks that as stimuli roll in to our minds, a lot of them are dealt with quickly but unconsciously, and a small subset – one item at a time – rather slowly becomes conscious.
I listen to an orchestra, with strings, brass, percussion and more. That the sound has different components may not be a problem for Dehaene – it might be seen as one item. But suppose that I single out the brass section for particular attention. Given what Dehaene says about delays, it should now sound as if it were a note or two behind the rest of the orchestra. My ear isn’t perfect, but to me it still seems in tune with the rest. If I had to do something with my experience of the brass – perform a task that involves ‘working memory’ – then I’m sure that delays would arise. But again, experiments that involve doing tasks – as most experiments inevitably will – need not tell us about other kinds of experience. Much of what Dehaene says in this area seems at odds with the simple idea that much of the time we experience a unified scene, with various things going on and eliciting our interest to different degrees. This is not merely an intuitive judgment; it is an idea central to Gerald Edelman’s neurobiological theory of consciousness, which is also in the ‘workspace’ tradition.
There is, no doubt, something real, a kind of conscious thought, that Dehaene is giving us a description of. But it does not exhaust the phenomena. We can detect another kind of experience in us, and it probably exists in other animals too. Take, once more, the case of pain. There is reasonably good evidence that fish can feel pain, and some invertebrates too, including hermit crabs and octopuses. Robert Elwood, of Queen’s University Belfast, studies pain in invertebrates by looking for behavioural responses that go beyond reflexes and simple aversion. For example, a hermit crab will abandon a valuable shell if it receives slight electric shocks. If the shell is a particularly good one (crabs being very real-estate conscious), it takes a larger shock to get them to leave. The crabs trade off competing goals (home v. comfort) in quite a sophisticated way. Pain is something we feel; it is a kind of subjective experience. It’s not a simple concomitant of tissue damage in all animals; there’s pretty good evidence that insects, unlike crabs, don’t feel pain. They carry on after severe body damage as if nothing had happened. Crabs and octopuses don’t just carry on, though: they groom and protect the wounded area. Pain is a form of subjective experience with a clear evolutionary rationale.
Dehaene might say at this point: of course pain is real, and a different kind of subjective experience from the one he is describing. A theory of consciousness does not have to be a theory of the feel of pain. That is what I’d say if I were him. But he might say instead that his theory is meant to be a complete theory of subjective experience – of all the kinds that are real. Animals without ‘workspace’ architecture in their brains have no conscious experience, and hence can’t feel pain. That reply must deal with the octopuses and the crabs, animals with nervous systems very different from ours. Perhaps workspaces can be achieved by other means? Or perhaps Dehaene would say that these animals can’t have subjective experience at all. His book does push away from notions of subjective experience other than his preferred one, and away from the broader notion of feeling or sentience that I think has to be part of the story.
Dehaene believes that if we follow this road we will be led to dualism, to views that posit a mysterious separation between mind and body. This is because we are being asked to take seriously ‘pure mental experience detached from any information-processing role’. But I’m not describing something detached from any information-processing role, just something that isn’t tied to the particular role that Dehaene has studied. It’s also true that some philosophers have been led to dualism, or in some cases to panpsychism, by going down this road, but we need not follow them. Making biological sense of sentience is the task we face.