Putnam’s Change of Mind

Ian Hacking

  • Representation and Reality by Hilary Putnam
    MIT, 136 pp, £14.95, September 1988, ISBN 0 262 16108 7
  • Mental Content by Colin McGinn
    Blackwell, 218 pp, £25.00, January 1989, ISBN 0 631 16369 7

Big issues and little issues: among established working philosophers there is none more gifted at making us think anew about both than Hilary Putnam. His latest book is motivated by large considerations, most of its arguments are driven by small ones, and its topic is deliberately restricted to something middle-sized: the brain, the mind and the computer program. At the end he soars and contemplates all of metaphysics and epistemology. The book will mostly be read as Putnam’s denunciation of his former philosophical psychology, to which he gave the name ‘functionalism’. I’ll try to explain that, but first a glance at what Putnam calls, when he soars, his ‘approach/avoidance’ relation to a family of large ideas.

The thoughts which he wants to avoid, but which tug at him still, are (his list): The truth about the world is independent of what we think about it. What we say is true when it corresponds to the world. Anything with content that we affirm must be either true or false. The truths that we do know should be thought of as fragments of the one true complete description of reality. The fourth of these theses, which he calls Uniqueness, has been less noticed in recent philosophy than the other three, but it may be what most attracted Putnam in the first place. After a decade and more in which pragmatists, relativists and other kinds of anti-realist have been so busy maligning the three theses of Independence, Correspondence and Bivalence, it is good to have the fourth in focus.

On his last page Putnam emphasises his final ground for rejecting the most general version of his earlier attempts to formulate a philosophy of the human mind: ‘The project assumes from the outset that there is a single system (“the organisms and their physical environment”) which contains all the objects that anyone could refer to.’ This has been a great and perhaps painful discovery for him. Peter Strawson spoke for the opposite style of thinking when he wrote over thirty years ago in Individuals that ‘the idea of an “exhaustive description” is in fact quite meaningless in general, although sense may be given to it in a particular context of discourse.’ He was writing critically about Leibniz, whose credo Putnam shared: ‘the way to solve philosophical problems is to construct a better scientific picture of the world,’ as Putnam himself put it. Putnam has recanted. Hence his recent books use capital letters, italics, exclamation-marks, to shout that the idea of a theory of everything doesn’t make much sense. The immediate application is mind. There can’t be one universal theory of the human mind and its place in nature.

We say things like this: She was recalling with amusement how dreadful last night’s movie was, when she heard the telephone ring in the next room. She was about to go and answer when she remembered that it was probably the local organiser of her political party wanting her to canvass. She had already let him down once; she couldn’t bear to confess her disloyalty, so she decided to sit there and let the phone ring.

What was happening? For the past couple of centuries much philosophical psychology has tried to employ a general form of description that speaks of ‘mental states’ (by which are usually meant not amusement and embarrassment, but beliefs and desires) and mental events (hearing, remembering, deciding), all of which result in action or inaction. Since we take for granted that minds and brains have everything to do with each other, mental states and events are supposed to be related to states and events in the brain. How? The answers commonly canvassed are quite dismal.

First there is the neurological dump-the-mind movement. Yes, we do describe people as I have just done, and explain choices and actions accordingly. But all that is ‘folk psychology’. Auguste Comte said that we pass from earlier eras of magic and theology to positive science: nowadays the kindly word is not ‘magic’ but ‘folk’, as in music and medicine. Oddly, what is usually dismissed (or defended) as folk psychology is a sophisticated early positivist description of the mind in terms of mental states. (A footnote of Sir William Hamilton, 1837: ‘The term State has, more especially of late years, and principally by Necessitarian philosophers, been applied to all modifications of mind, indifferently.’) I admire the directness of dumping the mind and proceeding to what Patricia Churchland calls Neurophilosophy, the title of her influential book on the subject. Doing so does not prevent us stick-in-the-muds from talking, in our folksy way, about being too embarrassed to pick up the phone. But Putnam wanted to understand the brain and the embarrassment under one global theory. He wanted everything: dumping the old-fashioned was not his forte. Our talk of beliefs and embarrassment is not a mistake, and Putnam had no desire, either in physics or in psychology, to conclude that commonplace descriptions (in terms of tables or beliefs) are false, while scientific accounts of what is ‘really’ there (swarms of atoms, neural nets) are true.

An inevitable Unitarian idea is that mental states are identical to brain states, and mental events to events in the brain. One believes it is the party organiser calling, one is embarrassed, one decides not to answer the telephone: the belief and the embarrassment just are states of the brain, on this view, and the decision an event in the brain.

There are two versions of this idea. One is bold and preposterous, the other feeble and merely verbal. The strong version says that there is a certain type of brain state, such that all and only those people who have a certain belief about who’s calling, or experience a certain sort of embarrassment, are in a brain state of precisely the same type as our disaffected party member. Each brain state of that type just is a mental state of the corresponding type. Likewise, there is a certain type of brain event that goes on in the brain of anyone who decides not to answer the telephone, and each brain event of that type is a mental event of that type. The weak version of ‘mind-brain identity theory’ says that there is no general identity between types of brain events and types of mind events. One person’s belief about the phone call is identical to a state of her brain. If you have the same belief about a caller, that is identical to a state of your brain, but the state in her brain may not be of the same type as the state in your brain. Each mental state in each person is identical to a brain state in that person, but the identities are entirely idiosyncratic. No theory could ever systematically correlate brain states and minds.

Some philosophers have devoted skill, ingenuity and sophistry to try to make the strong identity claim coherent or the weak identity thesis interesting. Others have been more moved by the reflections of a Wittgenstein, urging that the problems and solutions are misguided, a misconception of the mental life, doubting the very idea of a mental state, at any rate as a concept for explaining thought and behaviour. Putnam, however, remains firmly in the tradition of mental states. He made the most plausible proposals ever advanced for improving on the thesis that mind-states are brain-states. He directly linked mental states and the brain, not by asserting their identity, but by inserting a new layer between mind and brain.

His functionalism made an analogy with a computer program. In any industrialised part of the world where you have to compute your own tax bill (VAT or income tax or whatever), your neighbourhood software stores will sell you a program that instructs you to enter your income and expenses, and then you print out the tax due. Competing programs use different routines, and people who do it themselves with pencil, paper and hand calculator do something different again. Different electronic events occur in different makes of personal computer. In one sense, quite different things are going on – one in this make of computer, another in that, a third on paper or in my head. But insofar as all are correct procedures for computing tax due, all are equivalent. There is a single abstract representation of the different calculations, which can be realised by different machines or by old-fashioned clerks.

Putnam’s old idea was: and that’s how it is with mind and brain. ‘According to functionalism, the behaviour of, say, a computing machine is not explained by the physics and chemistry of the computing machine. It is explained by the machine’s program ... Similarly I believe that the psychological properties of human beings are not physical and chemical properties of human beings, although they may be realised by physical and chemical properties of human beings.’ Believing that, considering that, and so forth, are ‘simply “computational states” of the brain ... Our psychology is to be described as the software of [the brain conceived of as] a computer.’

Putnam’s new book is about his change of mind: ‘Functionalism argued that mental states cannot simply be physical-chemical states, although they are emergent from and supervenient on physical-chemical states; I shall now argue that mental states also cannot be computational states, or computational cum physical states ... although they may be emergent from and may be supervenient upon our computational states.’ ‘Identity theory’ identified mental states and brain states. Functionalism inserted a layer supervenient upon brain states – namely, computational states. Is this really the way to go, to put another layer in the sandwich? Never has a supervenience store had it so good.

Putnam advances a number of arguments to explain his change of view; the denial of the Uniqueness assumption mentioned above comes only at the end. Much of the action relies on the fact that the states on which Putnam is keen are specified by sentences: ‘believing that it is the party organiser on the line,’ ‘fearing that he’ll ask me to canvass again,’ ‘wishing that he’d stop calling.’ Functionalism held that these were computational states. Hence there had to be a representation internal to the program of the meaning of the sentence: ‘the party organiser is on the line again.’ That would ride well with much cognitive science, which, with various types of caution, proposes that we do represent what sentences mean ‘in the head’. No long noun occurs more often in expositions of that discipline than ‘representation’.

It was essential to Putnam’s original functionalism that representations of meaning were completely specified within the program itself: again, talking popularly, in the mind or in the head. That is where Putnam now balks. His shorthand form of conclusion is ‘Meanings aren’t in the head’; in longhand, in italics, ‘Meaning is interactional. The environment itself plays a role in determining what a speaker’s words, or a community’s words, refer to.’

How does the environment enter into meaning? Putnam starts with water and an argument which has become quite famous (it is included in the most recent popular anthology of paradoxes, William Poundstone’s Labyrinths of Reason). He holds that when I say something using the word ‘water’, I am among other things referring to a liquid with the composition H2O. He has a theory of meaning, elaborated elsewhere, according to which one element of the meaning of a word such as ‘water’ is the very stuff it denotes – namely, H2O. In the sentence ‘she saw a well of water; and she went and filled the bottle with water, and gave the lad drink,’ we use the word ‘water’ to mean just what the authors of the King James version of Genesis meant, and indeed what Hagar meant by a word in her language. The meaning of the word has not changed; the denotation H2O is part of its meaning; it follows from Putnam’s theory that this was an element in the meaning of the word ‘water’ in 1611, although the translators had no idea of hydrogen or oxygen.

Hence the meaning of the word ‘water’ depends upon facts about the liquid that may not be known to people who use the word. A tall tale brings the point home. Imagine a planet as like Earth as you please, except that the liquid out there that looks, tastes and so forth like water has an entirely distinct chemical constitution, as well-known nowadays to its inhabitants as H2O is to us. The two compounds could not be told apart by any means available in our or their 1611. By hypothesis, everything going on in the heads of everybody on both planets (omitting the chemical reactions which are no part of their thinking) is identical. Yet according to Putnam’s theory of meaning and reference, the meaning of the word ‘water’ used by our Biblical translators differs from that used by their counterparts. So meanings can’t be in the head.

This argument relies on the fine details of Putnam’s theory of meaning about words denoting kinds of substances such as water. Putnam moves right along, however, and many more readers will assent to the conclusion, that meanings aren’t in the head but are ‘interactional’, than need find the argument absolutely compelling.

Colin McGinn is one philosopher who does wholeheartedly accept the argument for water. He begins the book with it, and Earth’s twin planet is never far away. But he is more cautious or perhaps just more detailed than Putnam in seeing how far the argument goes. He calls Putnam’s conclusion externalist: mental states are characterised, in part, by items in the environment, not just by whatever is internal to a person. He distinguishes a strong and a weak version of the idea. The strong one requires that there be some ‘interaction’ between the person and the item, the weak one only that the item should exist. (In the strong version, content depends on the person’s passage through the world, says McGinn; in the weak one, it depends only on the kind of world through which the person passes. That should give a sense of the strange blend of metaphor and logic that fuels studies like this one.)

Because I’m sceptical that ‘mental state’ is a very good category, I have some resistance to a book about the content of such states. Worse, one half of the book, its first chapter, presents itself as a theory about individuating content. She’s embarrassed by the call, she does not want to canvass, she decides not to pick up the phone – what’s to individuate? The interest of the book lies in its other two quarters. These owe much to two warmly acknowledged voices, one from the past, one very much from the present.

The present voice is Ruth Millikan, whose book Language, Thought and Other Biological Categories is being much read. She wants to drag philosophers away from their enthusiasm for sentences, logic machines and semantics, to regard thinking and talking and exchanging ideas as among our activities as living creatures. In evolutionary biology we have a fairly clear idea of the purposes served by species-specific activities, and we should use that teleological way of thinking to understand language and thought. It is plain at a simplistic level how this can fit into McGinn’s (weak) externalist philosophy of mental content. The objects that serve to individuate mental content have a role precisely because our kinds of thought have evolved in order to relate to the environment. But evidently much work needs to be done, to take us past this first simplistic reflection.

The voice from the past is Kenneth Craik, who in 1943 had printed a slim volume (less than a quarter inch thick), The Nature of Explanation, and whom McGinn considers the precursor of the ‘cognitive revolution’ of the Sixties. The attraction for McGinn is that Craik did not take the sentential tack, and suppose that states of affairs and thoughts about them are represented by the mind by sentence-like operators. Instead they are represented by brain processes that are analogues to ones in nature (this is a general form of metaphor: a digital watch represents time sententially, while the movement of the hand of a clock is an analogue to the passage of time). McGinn imagines that processes of modelling in the brain arose for various purposes in the evolution of our species, and then, in detail, were further modified in each of us as we learned how to get around and communicate. Such models may be transformed into symbolic representation: that is done for a purpose but is not of their essence. McGinn alludes to Philip Johnson-Laird’s Mental Models, as contemporary cognitive science trying to design the brain more along the lines of an analogue computer, and less along the lines of a digital computer. Just as Putnam’s functionalism imagined a science, McGinn is here imagining a science – saying what it would be like. He is stating a form of what the knowledge would be, not saying what we know or should find out. But even the imagining is still, I think, rather thin.

The book does, however, usefully supplement Putnam’s. McGinn has hardly a word for functionalism, and in certain ways starts where Putnam leaves off. He accepts versions of externalism: this just is Putnam’s ‘interactional’ theory. He urges a teleological/biological and analogical-modelling road to follow when thinking about mental states. Putnam did not yield up functionalism so easily. Instead of pointing to future roads, he is intent on closing down some roads from the past. His book is a cumulative bundle of considerations against any possible future functionalism. Well-worn arguments from synonymy and indeterminacy of translation are levelled at the possibility of identifying computational states with mental states. He tries out the thought that the real (though supervenient) functional states are externalised – part mind, part environment, part interaction.

Always he conceives of the functionalist project as trying to tell the whole story about the whole of the world. In the end, he is moved by the conviction with which I began – there can be no way of declaring what objects are in the world, nor, even if we did, could anything prevent us from thinking about that totality and constructing new ways of thinking about them. Maybe the argument is more with Putnam (author of functionalism) than with functionalism. Putnam wanted functionalism to give the one story about the place of the mind in nature. Functionalism was enormously successful, in that many cognitive scientists use functionalism as an off-duty philosophy of mind – what they say to themselves they are doing when they’re not doing it. I think they have a much more piecemeal approach, and did not experience Putnam’s lust for a mono-science of everything. Putnam is still quite happy to talk of computational states as bearing some relation to mental states, and at the same time being dependent on but not identical to physico-chemical states of the brain. Thus although his book is presented as a retraction of functionalism, it may be less disturbing to cognitive scientists than they would expect it to be. On the other hand, someone who is dubious about the philosophical psychology of mental states will find this a much less radical book than Putnam imagines.