Don’t bet the chicken coop

Jerry Fodor

  • Thinking about Consciousness by David Papineau
    Oxford, 280 pp, £25.00, April 2002, ISBN 0 19 924382 4

A note to Royall Tyler’s elegant new translation of Murasaki Shikibu’s The Tale of Genji explains that ‘Hahakigi (“broom tree”) is a plant from which brooms were indeed made and that had the poetic reputation of being visible from afar and of disappearing as one approached.’* Well, philosophers live in a thicket of such things; it is often very trying.

Consider, as an example, current philosophical discussions of consciousness. Lots of us think that, details aside, Lucretius had things about right. What there really is is atoms-and-the-void, and there’s really nothing else. True, the story about atoms is more complicated than Lucretius supposed; so, too, is the story about the void. But the underlying materialist intuition continues to be plausible; everything is the same sort of stuff as familiar, ontologically untendentious objects like rivers, rocks and stars. The whole world is that sort of stuff in its myriad configurations. So construed, materialism is a sound bet on a research programme: sooner or later, science will figure out what it is that everything is made of. The results from our first couple of millennia pursuing this programme have been pretty good. It now seems probable, for example, that not just rocks, rivers and stars, but also many animate things are material through and through. That’s most encouraging.

On the face of it, however, the problem of consciousness suggests that materialism can’t be true. ‘How can pain (which hurts so) possibly be the same thing as insensate molecules rushing around in nerve fibres?’ David Papineau asks. Good question; but one that seems, much like the broom tree, to slip away even as one tries to grasp it. Witness passages like this:

There is, of course, a rather different reason for doubting that the semantic power of ‘quasi-phenomenal reference’ can be found in the absence of genuine phenomenal subjectivity. For it is possible to doubt that the silicon ‘zombie’ would be a zombie to start with, on the grounds that the presence of appropriate representational properties may itself guarantee the presence of phenomenal subjectivity. This will follow if we adopt a ‘representational theory of consciousness’, according to which conscious properties are constituted by representational properties. On any such theory, a representational duplicate will necessarily be a conscious duplicate, which would mean that there is no possibility of a silicon doppelgänger who makes ‘quasi-phenomenal references’ and yet has no genuine subjectivity.

What on earth is that about? And where, oh where, has the broom tree gone? Papineau’s new book is a brave attempt to show that consciousness isn’t, after all, an intractable problem for materialists (‘Mystery – what mystery?’). But it strikes me as not convincing.

Here’s his view in a nutshell. There’s a ‘widespread conviction that materialism is false’; that, in particular, a conscious experience (say of a twinge, or of a shade of blue) is a different kind of thing entirely from the firing of a neuron; different, indeed, from anything at all that biochemistry is able to describe. Papineau calls this the ‘intuition of distinctness’. The main thesis of his book is that this intuition is ontologically unreliable. As things stand, he thinks, there’s no reason to doubt that materialism is right. Mental events (and states, and properties and so forth) are organised basic stuff, just like rocks and the rest.

There is, to be sure, a bona fide mind/body distinction in the offing, but Papineau doesn’t believe that it’s ontological. Though mental things are (by assumption) a species of material things, it doesn’t follow that mental concepts are a species of material concepts. There can, after all, be many ways of thinking about the same thing; the concept ‘water’ is distinct from the concept ‘H2O’, but water is H2O nonetheless. Similarly, perhaps, the concept of a brain state and the concept of a conscious (‘phenomenal’) state are different ways of representing the same configurations of matter. Papineau thinks that the intuition of mind/body distinctness is a sort of hypostatic illusion in which a difference between kinds of concept is mistaken for a distinction between kinds of thing. Point this out, and the intuition goes away.

Except that it doesn’t, and that’s worrying. In fact, the water/H2O case feels quite different from the conscious state/brain state case. In the former, one sees straight off how the identity claim in question might be true: being water might just consist in having the microstructure H2O. But, as Papineau is well aware, people who are thoroughly sensitised to the difference between conceptual and ontological distinctions nonetheless often persist in the intuition that the neurological and the phenomenological must be metaphysically different kinds. The problem isn’t really that with regard to consciousness there’s a ‘widespread conviction that materialism is false’. It’s rather that with regard to consciousness there’s a widespread intuition that materialism is unintelligible; and this intuition survives the recognition that conceptual dualism is compatible with metaphysical monism. ‘How can pain (which hurts so) possibly be the same thing as insensate molecules rushing around in nerve fibres?’ is a quite different kind of question from ‘why should I believe that pain is the same thing as insensate molecules rushing around in nerve fibres?’ The second kind of question arises about water and H2O, and the usual chemical evidence is germane. By contrast, the intuitive objection to mind/body materialism is that it raises questions of the first kind but doesn’t answer them. I admit to being moved by this objection.

In any case, Papineau has his task set out for him. He has to explain what the putative difference between material and phenomenal concepts is; and he has to explain why the intuition of distinctness persists even when attention has been drawn to this difference. For reasons now to be explored, I doubt that he succeeds in doing either.

There’s an obvious and unastounding answer to the question ‘how does the concept of a rock differ from the concept of a pain?’ Namely, that they apply to different things. The concept ‘rock’ applies to rocks; the concept ‘pain’ applies to pains, and no pain is a rock or vice versa. Fine so far. But Papineau can’t of course give that sort of answer to the question ‘how does the concept of a pain differ from the concept of a brain state?’ To be sure, ‘pain’ applies to pains and ‘brain state’ applies to brain states, but materialism has it that being a pain and being a brain state aren’t different things; if materialism is true, then the distinction between phenomenal concepts and material concepts is precisely not in what they apply to. So what does it consist in?

Papineau’s idea is that the difference has to do with the way that they relate to experience and perception. Abstracting from some interesting details, his story is a lot like Hume’s: phenomenal concepts are derived from perceptual experiences. (Hume says that they are ‘faint copies’ of experience; Papineau thinks this is a pretty good metaphor even if it’s not the literal truth.) Thus: ‘phenomenal concepts are . . . formed by entering some state of perceptual classification or re-creation into the frame provided by a general experience operator “the experience ——”. For example, we might apply this experience operator to a state of visually classifying something as red, and thereby form a term which refers to the phenomenal experience of seeing something red.’

Let’s forget about the ‘experience operator’; I doubt that the details matter. What I think matters is this: Papineau can’t draw the distinction between phenomenal and material concepts by appealing to the way they relate to perception, experience and the like, because ‘perceiving’, ‘experiencing’ and the like are themselves phenomenal concepts. To identify phenomenal concepts with the ones that bear such-and-such a relation to perception is to presuppose the very distinction that one is trying to explicate.

Here’s a way to see the point. Some years back, the idea of ‘subliminal perception’ caught the attention of the media. The thought was that advertisers might insert into movies a brief message that says, as it might be, ‘popcorn’. This would all go by very fast; too fast, in fact, for a viewer’s conscious notice. But his unconscious would nonetheless take heed, and he would soon find himself craving popcorn. The popcorn people would get very rich in consequence.

In the event, it didn’t work. It appears that if there is a phenomenon of unconscious priming, it lasts for only a fraction of a second and is therefore of no commercial value. Such a pity. My point, anyhow, is to emphasise the difference between this subliminal kind of perception and the paradigm, everyday cases: in both of them, mental states are affected by causal interactions with a stimulus. But, in paradigmatic perception, some alteration of consciousness (some experience, some phenomenology) is among the effects of the interaction. The moral is that, if Papineau wants to identify phenomenal concepts by their relation to perception, it must be conscious perception that he has in mind. But to explain phenomenal concepts by reference to examples that are themselves by assumption phenomenal would be to run in a circle that’s very small even by the lenient standards that philosophers observe. If, in short, there is a problem about how to distinguish phenomenal concepts from others, there is the same problem about how to distinguish perceptual concepts from others; e.g. how to distinguish really seeing a popcorn ad (a mental event that has phenomenal properties) from seeing one only subliminally (a mental event that doesn’t). Replacing the first problem with the second achieves an overall gain of no yardage.

If the distinction between phenomenal concepts and the others isn’t in what they refer to, it’s got to be drawn in some other way, perhaps in a way that doesn’t assume that the phenomenal and material refer to different kinds of thing. In which case, Papineau’s main point – that mind/body dualism is true of mind and body concepts but not of minds and bodies – might be right. So let’s pretend (very concessively) that we’re in possession of an unquestion-begging account of the difference between material and phenomenal concepts, and then ask whether the intuition of an ontological difference between minds and bodies can plausibly be viewed as a misapprehension of this conceptual distinction.

I’ve already mentioned the main objection to thinking that it can: namely, that in bona fide cases where different concepts refer to the same thing, no intuition of distinctness need be felt. There is, for example, no tendency to mistake the difference between the concepts ‘water’ and ‘H2O’ for a distinction between H2O and water; or, if there is, it disappears on brief reflection. So how is the assumption that ‘material’ and ‘phenomenal’ are different kinds of concept supposed to explain the intuition that minds and bodies are different kinds of thing? I should emphasise that Papineau is thoroughly aware that he has to answer this question; a lot of his book is his attempt to do so. But, once again, I find I’m not convinced.

There’s a difference between the water/H2O case and the mind state/brain state case. We can (plus or minus a bit) explain why something with the microstructure H2O has the properties that water does. If water is indeed H2O, it follows, given the rest of our science, that water is a liquid at room temperature, that it is transparent, that it is heavier than air, and (uncoincidentally) that it converts to hydrogen and oxygen under electrolysis. In that slightly tortuous sense, our science explains why water is H2O. But nothing of the sort holds in the mind/brain case, and that it doesn’t is a traditional source of philosophical puzzlement. For example, suppose a certain brain state is found, de facto, to be correlated with a certain phenomenal state: such-and-such going on in one’s visual cortex correlates with having an experience of a blue flash. Well, if it does then it does; there’s no arguing with induction. But one might wonder nonetheless why the brain state has that phenomenal counterpart rather than some other; just as one might wonder, in much the same tone of voice, why there’s a correlation between being H2O and being a liquid at room temperature rather than a correlation between being H2O and being a gas at room temperature. Now, as I’ve said, there is a rather convincing scientific answer to the second of these questions; but it’s hard even to imagine what an answer to the first one would be like. Psychophysical correlations (and, a fortiori, psychophysical identifications) appear to be mere surds. As Joseph Levine has rightly emphasised, there seems to be a sort of ‘explanatory gap’ between mind states and brain states; one without precedent in the rest of our science.

Papineau admits to the gap, but he thinks it’s no problem for his sort of materialism. His idea is that water/H2O is the wrong model for mind/body identifications. The right model would be identity statements like ‘Cicero is Tully.’ (This, he says, is because of certain features of the semantics of proper names. I won’t discuss this for fear of losing sight of the broom tree entirely. Philosophers have been worrying about proper names for the last thousand years or so, and the end is not in sight.) The point, anyhow, is that although it is possible to explain why water is H2O (see above) it is not possible to explain why Cicero is Tully. He just is him and that’s that. So there are, after all, cases where patently bona fide identifications result in an explanatory gap, and the fact that mind/body identifications do oughtn’t to discourage materialists. Thus Papineau.

That misses the point. If there’s no intuition of distinctness about Cicero’s being Tully, that’s not because ‘Cicero’ and ‘Tully’ are proper names. Rather, it’s because we antecedently understand how, in a given case, X and Y could turn out to be the same person. By contrast, there is a problem (so the intuition of distinctness tells us) about how X and Y could be the same thing if X is a brain state and Y a phenomenal state. The mind/brain case provokes an intuition of distinctness where the Cicero/Tully case does not. So we’re back where we started, only, as it were, one level up. Getting back to where you started, only one level up, is what analytic philosophers call ‘semantic ascent’. I’ve never known it to do any good, and it certainly obscures the broom tree.

Papineau thinks the intuition of distinctness is an illusion consequent on mistaking conceptual mind/body dualism for metaphysical mind/body dualism. He therefore needs to explain why calling attention to this distinction doesn’t make the illusion go away. The analogy with proper names doesn’t help, but he has another arrow in his quiver. To put it very roughly: phenomenal concepts are supposed to be some sort of copies of experience; they resemble what they refer to, according to Papineau, much as they do according to Hume. But brain-state concepts aren’t like that; you don’t use an image of your cortex to think about its states of excitation. Hence the intuition of distinctness; or the intuition that mind states can’t be brain states because brain-state concepts ‘leave out’ something phenomenal that phenomenal concepts don’t. That’s a rough sketch of Papineau’s story, but it will do for present purposes.

I don’t think it’s plausible. Even if you grant Papineau that a phenomenal concept is some kind of imitation of an experience (which probably it isn’t), that fact can’t be principled. It must be at least possible that a kind of mind should denote phenomenal states with, for example, a word rather than an image. Consider a kind of mind that thinks in English. Papineau has to predict that if there were such a mind, it wouldn’t have a mind/body problem; it wouldn’t have the intuition of distinctness that keeps materialists with our kinds of mind awake at night. Well, maybe he’s right but I wouldn’t advise betting the farm on it; I wouldn’t even advise betting the chicken coop. The intuition of distinctness may be confused, but it surely isn’t just an artefact of a quirk of our contingent psychology.

More damaging still: I can think of a circle as ‘a plane figure with a bounding edge (circumference), all points on which are equidistant from a fixed point’. That’s the way my dictionary thinks of it. Or, I suppose, I can think of it Hume-wise, via a mental image of a circle. So Papineau must hold, consonant with his general treatment of phenomenal concepts. I thus have one kind of concept of a circle that recruits some phenomenology, and another kind that doesn’t; just as, according to Papineau, I have one kind of concept of a brain state that recruits some phenomenology and another that doesn’t. But (having stumbled my way through high-school geometry) I am utterly convinced that a circle is a plane figure with a bounding edge (circumference), all points on which are equidistant from a fixed point. I don’t intuit so much as a smidgen of distinctness between the two. There isn’t a broom tree anywhere in sight. The moral is that whatever makes one feel that phenomenal concepts capture something that brain-state concepts ‘leave out’, it can’t be that the former image what they refer to but the latter don’t.

Where does all that leave us? Here, I think: it’s understandable that philosophers should feel the pull of materialism. Materialism is overwhelmingly the ontology with the best track record so far. But conscious psychological states are a problem that shows no sign of going away. Maybe phenomenal states are material after all, but it’s literally the case that nobody has the slightest idea how that could be so. And because nobody does, it’s also literally the case that nobody has the slightest idea what kind of empirical research might throw light on the issue. There is available, these days, lots of costly, high-tech equipment that allows an investigation of where the brain lights up when its owner has a phenomenal experience. Neuroscientists are in love with these gadgets; playing with MRI machines is as good a way as any of contriving not to think about the broom tree. But the hard problem isn’t where consciousness happens; everybody knows it’s got to be somewhere north of the neck. Rather, just as Papineau says, the hard problem is how pain (which hurts so) could possibly be the same thing as insensate molecules rushing around in nerve fibres. The answer, as of this writing, is that nobody has a clue.