No one interested in the spread of ideas can have failed to notice the influence that the computer is exerting not only on our habits of life but also on our ways of thought. Twenty years ago the computer was something of a joke – to those, at least, who prided themselves on their innumeracy: in the last few years it has become a threat not only to economic stability and the balance of terror but even to human dignity itself. Are we about to be displaced by the Ultra-Intelligent Machine, contemptuously indifferent to human fate – or is the Frankenstein image the product of an inflamed imagination, curable by a healthy dose of realism and clear thought?
In The Computer and the Mind Philip Johnson-Laird, one of our leading cognitive psychologists, escorts the educated layman through the fantastic landscape of ‘cognitive science’ – the modern science of the mind, and of possible minds. The idea that there might be laws of thought transcending our intelligence is rooted in antiquity; Aristotelian logic was long regarded as a statement of such laws, and in modern times the mathematical logician feels free to ignore the feebleness of human reason. As for how people actually think, it is the philosopher, the psychologist and the physiologist to whom one naturally turns for enlightenment about the mind and the brain, though the information they are able to provide is often less than satisfying to the curious inquirer. But there exists a type of mind (if you can call it that) that we really do understand, through and through, because it is our own invention, or rather the invention of Alan Turing and John von Neumann. The now famous Universal Turing Machine was an imaginary device that could rearrange any row of 1s and 0s in accordance with any finite set of rules supplied to it; the von Neumann computer is its electronic incarnation, whether as a BBC micro or a Cray supercomputer. Turing realised that the computer poses a fascinating challenge to the engineer: could a computer be programmed to think, or at least to give a convincing appearance of doing so? The latter question now seems a little dated, though the former still provokes philosophers to uncontrolled frenzy. Johnson-Laird waves them aside, and turns to the much more interesting question: can people compute?
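The rule-table picture of Turing's machine is easily made concrete. The following few lines (my own illustration, not the book's) simulate a machine that scans a tape of 1s and 0s and rewrites it according to a finite set of rules; the example rule set simply flips every bit until it reaches a blank:

```python
def run_turing_machine(tape, rules, state, halt="halt"):
    """Simulate a one-tape Turing machine.
    `rules` maps (state, symbol) -> (new_symbol, move, new_state)."""
    tape, head = list(tape), 0
    while state != halt:
        if head == len(tape):        # extend the tape with blanks as needed
            tape.append("_")
        symbol = tape[head]
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape)

# Example rule set: flip every bit, moving right, halting at the blank.
FLIP = {
    ("flip", "0"): ("1", "R", "flip"),
    ("flip", "1"): ("0", "R", "flip"),
    ("flip", "_"): ("_", "R", "halt"),
}
```

Running it on the tape "1011_" yields "0100_". The point of the Universal machine is that the rule table itself can be written on the tape – which is exactly what a stored-program computer does.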
The Computer and the Mind is much more than just an answer to this question: it is an admirably well-balanced introduction to contemporary thinking about people’s mental and physical capacities. Occasional disagreements with the author only add spice to one’s enjoyment of the book as a whole. It is divided into six main sections (each of which comprises several chapters): computation and the mind; vision; learning, memory and action; cogitation; communication; and the conscious and the unconscious mind. The first chapter places the rise of cognitive psychology in a historical perspective, and the second declares the author’s decision to discuss the mind in computational terms: ‘What do mental processes process? The answer, of course, is a vast number of perceptions, ideas, images, beliefs, hypotheses, thoughts and memories. One of the tenets of cognitive science is that all these entities are mental representations or symbols of one sort or another.’ One is not immediately sure whether the use of words ‘symbol’ and ‘representation’ is an act of philosophical provocation or a creative extension of terms borrowed from the theory and practice of computation: a bit of both, I suspect. One might argue that a wisp of thought should only count as a representation or a symbol if someone – in this case, the thinker – has deliberately adopted it to represent something. The issue is, however, something of a red herring: the point of substance is that the central nervous system is an information processor of almost incredible complexity, and that in order to understand the mental processes it supports we are going to need the most sophisticated notions of modern information technology – and more. So let us begin, Johnson-Laird suggests, by thinking how one might design the simplest of homing robots, and see if that gives us any idea of the difficulty of such an elementary task and the computational resources needed to carry it out. 
This he proceeds to do, as neatly as you please: but in case you should feel like trying, it should be said that the task of designing even an unintelligent robot is much harder than it looks, as any engineer will testify.
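For the reader who does feel like trying, a toy control loop for such a robot – assuming away nearly everything that makes the engineering hard (the robot knows its own position and heading exactly, the floor is empty, the motors are perfect) – might look like this:

```python
import math

def homing_step(x, y, heading, home=(0.0, 0.0), max_turn=0.3, speed=0.1):
    """One step of a toy homing controller: turn a bounded amount
    toward home, then move forward a little."""
    bearing = math.atan2(home[1] - y, home[0] - x)
    # signed difference between desired bearing and current heading
    error = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    heading += max(-max_turn, min(max_turn, error))
    return x + speed * math.cos(heading), y + speed * math.sin(heading), heading

# Start five units from home, facing the wrong way, and iterate.
x, y, heading = 3.0, 4.0, 0.0
for _ in range(200):
    x, y, heading = homing_step(x, y, heading)
```

Everything this sketch presupposes – knowing where you are, sensing where home is – is precisely what a real robot must extract from noisy sensors, and that is where the difficulty lies.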
No area of cognitive science has made more rapid progress in recent years than the study of vision – partly because of the military importance of mechanical contraptions that can see where they are going. The late David Marr, whose posthumous book Vision generated an upheaval in the subject, insisted that vision be regarded as an ongoing process of computation, which begins with the optical images formed at the back of each eye and culminates in a spatio-temporal world model, useful for picking things up and getting about. A visually competent robot would have to be controlled by a program that ‘made explicit’ such essential information as the robot’s own position in its visible world: Marr asserted that the human visual system must likewise adopt representations of the world that ‘make explicit’ items of information at successively higher levels of description. Even if Marr’s detailed ideas, ably expounded by Johnson-Laird, should not survive in their present form, it is now generally admitted that the computational problem of reconstructing the visual world would, as he claimed, be impossible without some reliable preconceptions about objects and their appearances. Johnson-Laird is understandably respectful of the memory of a remarkable young scientist, but the general reader may well prefer the light and air of the chapter on ‘Scenes, Shapes and Images’ to the somewhat oppressive orthodoxy of the preceding two chapters on vision.
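What ‘making explicit’ means at the lowest level can be gestured at in a few lines. In this sketch (my own crude illustration – Marr's actual proposal used zero-crossings of a Laplacian-of-Gaussian filter, at several scales) a step change in brightness, merely implicit in the grey-levels, is converted into an explicit symbol: an edge at such-and-such a place.

```python
def edge_map(image, threshold=3):
    """From grey-levels to a crude 'primal sketch': mark each pixel where
    brightness jumps by at least `threshold` between horizontal neighbours."""
    edges = []
    for row in image:
        edges.append([abs(row[i + 1] - row[i]) >= threshold
                      for i in range(len(row) - 1)] + [False])
    return edges

# A dark region meeting a light one: the boundary becomes explicit.
picture = [[0, 0, 9, 9],
           [0, 0, 9, 9]]
```

Each higher level of Marr's scheme – edges to surfaces, surfaces to objects – makes explicit what the level below left implicit.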
Part Three, on learning, memory and action, is admirably even-handed in its coverage of current ideas, but the author might be suspected of leaning over backwards not to offend the vast majority of psychologists who prefer elegant phraseology and box diagrams to more rigorous computational theories. The trouble with rigorous theories is that they are as easy to demolish as they are difficult to construct, while there is a certain security in the incurable imprecision of ordinary language. The chapter on action and the control of movement is an excellent introduction to the study of motor skills, though, as Johnson-Laird makes plain, we have only the dimmest idea of the computations that mediate between seeing something and picking it up.
I would guess that Part Four was the section the author most enjoyed writing, because most of the ideas in it, and the evidence supporting them, are his own. Cogitation is a mixture of imagination, speculation, deduction and induction. Johnson-Laird has made systematic observations on how people actually draw inferences, and on the relation between inductive reasoning and the more primitive processes of generalisation and specialisation. In this area conventional psychology and artificial intelligence have come unusually close together, and Johnson-Laird is able to argue his general case with special cogency. The chapter on creation, with its ‘Darwinian’ and ‘Lamarckian’ modes of creativity, I am not so sure about, but it does provide a slot for a firsthand account of a jazz bass-line generating program, with examples of its output. In this business one or two actual results are worth volumes of promissory verbiage.
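Johnson-Laird's program is not, of course, reproduced here; but the flavour of a ‘Darwinian’ generate-under-constraints procedure can be suggested in a few lines. Everything in this sketch – the scale, the constraints on interval and range – is my own toy illustration, not the book's method:

```python
import random

def bass_line(length=8, seed=None):
    """Generate a toy walking bass line: arbitrary steps through a scale
    ('Darwinian' generation), constrained to small intervals and a
    one-octave range (the selective filter)."""
    rng = random.Random(seed)
    scale = [0, 2, 3, 5, 7, 9, 10]                        # degrees within an octave
    notes = [scale[0]]                                    # begin on the root
    position = 0                                          # index into the scale
    for _ in range(length - 1):
        position += rng.choice([-2, -1, 1, 2])            # step or small leap
        position = max(0, min(len(scale) - 1, position))  # stay within range
        notes.append(scale[position])
    return notes
```

Unconstrained generation followed by ruthless constraint is the essence of the ‘Darwinian’ mode; a ‘Lamarckian’ generator would instead build the constraints into the generation itself.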
Part Five, on communication, covers language, speech, grammar and meaning, after a precautionary reference to bees and monkeys. (Why are theoretical linguists so infernally sensitive about the linguistic uniqueness of homo sapiens?) Speech and hearing are specially remarkable human skills which we are quite unable to confer on computers. This is at least partly because we have only the dimmest idea how sounds are represented in the mind. Hearing begins, as Johnson-Laird explains, with the analysis of the incoming waves by the cochlea, but how on earth they are subsequently sorted into speech sounds, musical notes and noises-off – or a combination of all three – is entirely obscure. (I am surprised that Johnson-Laird should have given page-room to Roy Patterson’s extraordinary theory of musical intervals – a subject that has unhinged the judgment of many an amateur mathematician. But science abhors a vacuum, and when a question has no satisfactory answer, there is an almost irresistible temptation to offer an unsatisfactory one instead.)
The next two chapters, on grammar and on meaning, bear the stamp of real authority. There is a beautifully succinct account of the guiding ideas in Chomsky’s Syntactic Structures: 1. People speak and understand a language because they have a knowledge of its grammar. 2. A theory of the grammar of a natural language must specify the rules in an entirely explicit way so that the set of sentences that they characterise can be determined without the theorists exercising intuition or guesswork. 3. A theory of language must explain how children acquire the grammars of their native tongues. To paraphrase freely (with apologies to those concerned): 1. People don’t just imitate each other like parrots. 2. If you think your linguistic theory is watertight, try programming it. 3. There must be something special about human languages for children to be able to pick them up. The second of these assertions is a welcome reminder that the study of the mind is not mere hand-waving in a fog but a tough intellectual discipline in which there are right answers and wrong answers. If mental processes are computations, then we should, up to a point, be able to model them on the computer, and the output (in the case of natural language processing) will be either sense or nonsense, according to the quality of our theories.
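The force of the second point – that an explicit grammar leaves nothing to intuition – is easily demonstrated. The toy grammar below (a fragment of my own devising, not an example from Chomsky or Johnson-Laird) is explicit enough for a program to decide, without any guesswork, which strings of words it characterises:

```python
# A fully explicit phrase-structure grammar:
# S -> NP VP,  NP -> Det N,  VP -> V NP, plus a small lexicon.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"]],
    "N":   [["dog"], ["cat"]],
    "V":   [["chased"]],
}

def derives(symbol, words):
    """True if `symbol` can expand to exactly the word sequence `words`."""
    if symbol not in GRAMMAR:                 # a terminal word
        return words == [symbol]
    for production in GRAMMAR[symbol]:
        # try every way of dividing `words` among the production's symbols
        for split in splits(words, len(production)):
            if all(derives(s, part) for s, part in zip(production, split)):
                return True
    return False

def splits(words, n):
    """Yield every division of `words` into n consecutive (possibly empty) parts."""
    if n == 1:
        yield [words]
        return
    for i in range(len(words) + 1):
        for rest in splits(words[i:], n - 1):
            yield [words[:i]] + rest
```

Asked about ‘the dog chased the cat’, the program says yes; asked about ‘chased the dog the cat’, it says no – and no intuition was consulted at any point.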
In the last section of this admirably written and stimulating book Johnson-Laird kicks over the traces of formal theorising and, riding roughshod over Wittgenstein’s warning, ‘whereof one cannot speak, thereof one must be silent,’ plunges headlong into the problems of consciousness and the emotions. From the first of these sallies he emerges more or less unscathed, with the proposal that to be conscious – in the sense of self-aware – a computing machine must have an operating system that incorporates a model of itself. One would scarcely expect a firm answer to the question whether the possession of such an operating system would necessarily confer consciousness on the machine. The reader will doubtless examine his own intuitions in the matter, and will also form his own opinions on the following chapter, which is something of a throwback to old-style psychological philosophising. It is based in part on joint work with Keith Oatley, and its main message is that emotions are the subjective manifestation of biological needs, so that insofar as a robot might have needs, perhaps it could have feelings. There are, in fact, several other theories (if you can call them that) of the emotions on the market at the moment: among these my own preference is for that of Ortony, which at least makes interesting distinctions between different emotions for which we normally use the same word. But I would be even happier to leave such matters, for the time being, to the poets and the novelists.
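Johnson-Laird's proposal about self-awareness can at least be given a skeletal form in code – which is not, of course, to claim that the skeleton is conscious. In this toy of my own devising, the machine's control loop has access to a model of its own workings, and can therefore answer questions about what it is about to do without actually doing it:

```python
class SelfModellingMachine:
    """A toy gesture at the proposal: a machine whose operating
    system incorporates a model of itself."""

    def __init__(self):
        self.state = "idle"
        # The self-model: the machine's own transition table,
        # available to the machine for inspection.
        self.model = {"idle": "working", "working": "resting", "resting": "idle"}

    def step(self):
        """Actually advance to the next state."""
        self.state = self.model[self.state]

    def predict_self(self, n=1):
        """Consult the self-model to predict the machine's own state
        n steps ahead, without changing state."""
        s = self.state
        for _ in range(n):
            s = self.model[s]
        return s
```

Whether the possession of such a self-model would confer anything deserving the name of consciousness is exactly the question on which one would not expect a firm answer.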