Machines with a Point of View
- Minds and Mechanisms: Philosophical Psychology and Computational Models by Margaret Boden
Harvester, 311 pp, £20.00, October 1981, ISBN 0 7108 0005 3
Margaret Boden’s somewhat breathless book sings the praises of the new ‘computational’ models in psychology and of what she rightly calls ‘the computational metaphor’. A feature of her writing is the making of (what seem to be) strong claims followed or prefaced by judicious disclaimers. When she is functioning in what I am tempted to call her disclaimer mode, she warns us against various kinds of intellectual ‘imperialism’ that AI people engage in at times (‘AI’ is the acronym for Artificial Intelligence – computer simulation of ‘intelligent’ behaviour), urges a Popperian stance of trying to falsify strong claims, and even denies that computers could in principle be conscious or have intentional states (in a non-metaphorical sense). But when she is functioning in her selling mode we get claims like this: ‘We now have machines with a point of view of their own, machines with a subjective model (representation) of the world and their own actions in it by means of which they deliberate more or less carefully about what they should do and what they should not have done and why. The insidiously dehumanising effects of mechanism can thus be counteracted in a scientifically acceptable manner. By providing a richer image of machines that suggests how it is that subjectivity, purpose and freedom can characterise parts of the material world, current science helps us to understand how human values can be embodied in a mechanistic universe.’
How machines can have ‘a point of view of their own’ if no psychological predicate literally applies to them is never faced; if the reader is like me, he must come away from this book with the conjecture that the disclaimers are something Dr Boden believes only in her study (as Hume believed in sceptical arguments only in his study), and that the strong claims about computing machines having ‘a point of view of their own’ and ‘deliberating more or less carefully about what they should do and what they should not have done and why’ are what she thinks whenever she leaves her study.
What she is selling is the use of computational processes (of the kind involved in computer simulations of chess-playing, object classification, pattern recognition etc) as models in psychology. Her psychological knowledge is impressive, and she does an excellent job of making a prima facie case for the relevance of such models, even in kinds of psychology (e.g. Freudian and Piagetian) where one might be inclined to suppose that they would be unlikely to shed light. (We must remember, as she herself points out, that these models are, at present, metaphors: whether they can be cashed as useful psychological theories only time will tell.) It is when she claims to ‘help us to understand how human values can be embodied in a mechanistic universe’ that the writing and the argumentation become thin.
Part of the problem lies in the vagueness of the notion of ‘computational process’. In Dr Boden’s usage, the term covers not only computer programs, but apparently also specifications for programs – informal descriptions which can be realised on digital computers by innumerable different programs. In addition, programs and specifications for programs are generally described by her in a rich intentional language. This is, indeed, customary in computer science (‘this program can tell cubes from other geometrical shapes,’ a programmer might say); programmers seem to think of their programs as being run by little men inside the machine who understand the significance of the programs just as the programmer does. But the very fact that so much ascription of perceptions, conceptualisations, and even purposes, is part of the computational metaphor makes it difficult to assess just how much insight we are getting into how human values and intentionality are embodied in a mechanistic universe. The suspicion is that we pack in just as much intentionality – meaning, reference, truth-content – as we pack out.
Philosophers who take the case for a computational theory of the mind seriously (whether or not they ultimately find it convincing) have come to be worried about certain sorts of difficulties with that case. The worrying question is not whether a computational description of the mind or brain can shed light on thoughts and ‘mental events’ considered causally, as events to be described in terms of their causal connections with each other, with sensory inputs and with motor outputs. It is quite conceivable that it can. The question is whether it can shed light on how thoughts can be bearers of truth value and meanings. And cognitive scientists (not just philosophers, but some psychologists and AI types as well) have become increasingly aware of just how difficult this question is.
Truth value is a problem (as are all of the normative properties of thought – justification, validity etc.) because there is no reason to think that a causal account of thought, in whatever terms, can distinguish true (or justified, or inductively or deductively valid) thoughts from false ones (from unjustified ones, from invalid ones). If the machine prints CUBE when the shape we showed it was not a cube, we say the machine ‘made a mistake’. But the machine followed its program: it is just that from our point of view the result was ‘wrong’. There is no intrinsic rightness or wrongness in anything the machine ‘prints out’. But then if we are just computing machines evolved by nature, how can there be any rightness or wrongness in anything we print out? Dr Boden’s answer is so puzzling that I quote it in full:
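The CUBE example can be made concrete with a toy sketch (entirely hypothetical; the function, the six-faces rule, and the shape record are mine, not Boden's). The point is that the program follows its rule faultlessly; "mistake" is a label we supply from outside when the output diverges from our standard.

```python
def classify(shape):
    """Toy rule: call anything with six faces a CUBE.
    The machine follows this rule perfectly even when we would
    call the answer wrong."""
    return "CUBE" if shape["faces"] == 6 else "NOT-CUBE"

# A rectangular box: six faces, but not all edges equal.
box = {"name": "rectangular box", "faces": 6, "all_edges_equal": False}

print(classify(box))  # the program prints CUBE, exactly as its rule dictates

# Only relative to OUR standard (a cube must also have equal edges) is
# that output "wrong"; nothing in the machine's execution marks it as
# an error.
our_standard = box["faces"] == 6 and box["all_edges_equal"]
print("mistake by our lights:", classify(box) == "CUBE" and not our_standard)
```

Nothing in the run distinguishes the "wrong" answer from a "right" one: the wrongness lives in our interpretation of the symbols, which is precisely the reviewer's point.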
This way of putting it reminds us that human beings and animals have intrinsic interests, and an intrinsic range of significances, which comprise the inescapable bedrock of their intentionality. No human significance, however bizarre, can arise except in the context of these underlying interests – which is why personality theorists such as Freud or McDougall have tried in differing ways to relate ‘unnatural’ motives or sentiments, such as religious asceticism or scholarship, to the basic biological givens of humanity. (Ethical and political theories, too, must somehow take account of the basic facts of human nature, as I argue in my discussion of optimism in Chapter 13.) Because computers are deliberately engineered to function as artificial embodiments of computational or representational systems (programs) whose significance is assigned by human beings, any ‘interests’ we may choose [sic] to ascribe to them are not intrinsic to their nature, but parasitic on our own. It is this which leads me to say in several essays that psychological predicates could not be ascribed in a literal sense to any imaginable computer, even if its program were so rich in computational power as to be a much closer analogue of human motivation and thought than any existing program is.
Dr Boden appears to be saying that meanings (truth criteria) are ‘hard-wired’ into our brains by evolution. This is one of those suggestions that has only the appearance of making sense. There aren’t any fixed standards of what is true or false (the same for all, independent of culture) built into our brains. Dr Boden is giving the appearance of having considered a serious philosophical problem when all she is really prepared to do is wave a hand in the general direction of biology.
Even apart from the problem of accounting for the fact that thought has truth value, there is also the problem of accounting for semantic relations, e.g. synonymy. The notion ‘cube’ may be expressed by infinitely many different ‘representations’, functioning in different ‘programs’. It is precisely the fact that we can speak of different symbols, in different languages, in different systems of knowledge and belief (different ‘procedures’, to employ the ‘computational metaphor’), as having the same meaning that gives the notion ‘meaning’ its life. Without a notion of synonymy (or, more broadly, of ‘translation’) one cannot speak of sense – only of symbol. My own view is that a computational criterion of synonymy itself, a master program for ‘translation’ between programs, is impossible (partly because translation depends upon society and the environment, not just on what is ‘inside’ the mind or machine). Be that as it may, the issues of what ‘content’ is, and of whether ‘sameness of content’ is a computational notion, are today at the centre of discussion in philosophy of psychology. They are not even mentioned in Dr Boden’s book.
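The point about synonymy can be illustrated with a minimal sketch (my own hypothetical example, not from the book): two structurally unlike programs that express the notion 'cube' in different "representations". For positive dimensions they agree on every input, yet no inspection of the symbols alone certifies that they "mean the same"; that judgment requires the very notion of sameness of content that the computational metaphor does not itself supply.

```python
def is_cube_a(s):
    # Representation A: a cube is a box whose three dimensions are equal.
    x, y, z = s
    return x == y == z

def is_cube_b(s):
    # Representation B: a cube is a box whose volume equals the cube of
    # its longest edge (equivalent to A for positive dimensions, since
    # x*y*z <= max**3 with equality only when all three edges are maximal).
    x, y, z = s
    return x * y * z == max(x, y, z) ** 3

# The two programs coincide in extension, but as symbol systems they are
# quite different; calling them "synonymous" is a judgment made from
# outside either program.
for s in [(2, 2, 2), (2, 2, 3), (1, 4, 4)]:
    assert is_cube_a(s) == is_cube_b(s)
```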
From the first page of her book, she speaks of computational processes as manipulating ‘representations’ or ‘models’. That metaphor and analogy are essential to explanation (as she argues in one of the papers in this book) is not a sufficient defence of this practice. Metaphors and analogies are a help in grasping complex causal explanations (as when we think of a gas as a collection of colliding billiard balls). But the problem in the philosophy of mind is not how better to visualise a complex causal theory, but how to explain the very possibility of intentionality, of truth values and meanings, in a ‘mechanistic universe’. The metaphorical application of the notions at issue to machines does not, of itself, enable us to answer this question – not even if the machines happen to be our own brains. For Kant already knew that the study of thought as something in nature, as something with causes and effects, is, in principle, an ordinary empirical study. It is how thought can have non-causal properties that is the puzzle. And this is what all the talk about machines with ‘a point of view of their own’ misses.