Mental Models: Towards a Cognitive Science of Language, Inference and Consciousness 
by P.N. Johnson-Laird.
Cambridge, 513 pp., £27.50, August 1983, 0 521 24123 5

Cognition has become fashionable. Half a dozen academic disciplines are currently scrambling to establish ownership. The philosophers, who got there first, are being jostled by empiricists, but are relaxing none of their traditional claims. To psychologists, cognition is a ‘problem’ that only they can analyse experimentally. Social anthropologists feel that their concern for culture gives them some special claim to cognition. Computer scientists who create artificially intelligent systems now freely define cognition in their own likeness. Special cognitive intuitions about language provide basic data for linguists. Even neuroscientists now speak of cognitive brain processes. All put forward their favourite entitlements.

Until very recently each discipline proceeded as though its enterprise were the only one of consequence. Each knows that the others exist, but has little appreciation of potential conflicts or redundancies. The subject has become a village where every door bears a sign reading ‘COGNITION: Main Entrance’. Innocents wandering into town are inevitably confused by the different messages echoing from every side.

A new idea has lately invaded this community, an idea of federation and co-operation. It is called Cognitive Science, in the singular, and it embraces all those parts of the contending disciplines that have serious claims on the subject. So far, computer scientists have taken the lead, but cognitive psychologists and theoretical linguists are not far behind. Evidence that this federation might succeed has now taken shape in P.N. Johnson-Laird’s new book, Mental Models, which is surely the most authoritative guide to this intellectual territory now available. Its origin was a series of seven talks given at Stanford University early in 1980, but in the present version many explanations of the technical notions it draws upon have been added. Nowhere else have so many of the seminal ideas of this new field been pulled together and their relations explored. Not that there have not been excellent treatments of the individual fields – books on artificial intelligence, on cognitive psychology, on theoretical linguistics, on the philosophy of mind – at all levels of sophistication. What has been lacking, and what Johnson-Laird has come surprisingly close to providing, is a book that moves freely across territorial boundaries, a book addressed to everyone who has a sincere interest in how the mind works.

Taken for granted in most of the recent fuss over cognition is a questionable analytic assumption that those aspects of mental life concerned with knowing can be painlessly detached from those concerned with feeling or willing. This assumption seems to have originated in the 18th century’s tripartite division of the mind into cognitive, emotive and conative faculties, but in the 19th century it became little more than a manner of speaking as theories of the association of ideas replaced theories of mental faculties. At first glance, the reappearance of cognition on the scientific landscape of the 20th century might seem to be a regressive step. But not so. Today the separation of cognition from other mental phenomena is seen as a necessary idealisation, similar to ignoring wind resistance while working out the laws of gravitation. Only time will tell whether the idealisation is a good one, but meanwhile many cognitive scientists take comfort from the existence of that most purely cognitive of all systems – the modern, high-speed, stored-program, digital computer.

The paradox implicit in this situation – in mentalism revitalised by advances in mechanism – has been noted more than once. But it is a new brand of mentalism. What the computer provides that has completely redefined the study of cognition is, first, a language (or family of languages) in which very detailed theories of cognitive functioning can be stated, and, second, a general type of mechanism whereby the implications of those theories can be demonstrated. Working out the consequences of this redefinition is a critical assignment for the new science of cognition.

The most immediate consequence is that henceforth it must be possible to formulate cognitive theories as computer programs. More precisely, they must be formulable as ‘effective procedures’ for performing mental computations, theoretical descriptions from which programs for particular computers might be constructed. ‘If the long promised Newtonian revolution in the study of cognition is to occur,’ Johnson-Laird remarks, ‘then qualitative explanations will have to be abandoned in place of effective procedures.’

Johnson-Laird came to this enterprise from psychology, where his initial research interest was the experimental psychology of logical reasoning; that interest still shapes his views of other cognitive functions. As a serious scientist, however, he was determined to pursue his problem wherever it led. Thinking and talking are so intimately connected that he soon began exploring the psychology of language, which in turn demands a knowledge of theoretical linguistics. Moreover, psychology, logic and linguistics have all interacted vigorously with computer science in recent years: Johnson-Laird could hardly avoid adding that field to his list of proficiencies. Since he thinks clearly and writes fine expository prose, he is remarkably well prepared for the task.

His starting-point is a suggestion put forward forty years ago by Kenneth Craik, the brilliant Cambridge psychologist whose life was cut tragically short in 1945. ‘My hypothesis,’ Craik wrote, ‘is that thought models, or parallels, reality – that its essential feature is ... symbolism.’ Anyone who takes Craik’s hypothesis seriously assumes that the work of cognitive science must be to construct models of a system that constructs models of reality. Johnson-Laird illustrates this programme with loving care for syllogistic reasoning. Syllogisms are deductions based on two premises. Although no longer consequential for modern logic, the syllogism retains an attraction for psychologists, in part as a handy source of problems to bedevil people with, but principally because some syllogisms are so much harder to understand than others.

The hope is that if you could explain why syllogisms of the form

All A are B.

All B are C.

Therefore ... ?

are so much easier than those of the form

All B are A.

No C is B.

Therefore ... ?

and if you could order those of intermediate difficulty in between, you would have grasped something deeply significant about the workings of the human mind. (For those who, like Johnson-Laird’s subjects, find the second syllogism baffling, the valid conclusion has the form: Some A are not C.)

Everyone agrees that solving a syllogistic problem involves creating some mental representation of the premises from which a conclusion can be formulated. If asked for details, most people would start drawing Euler’s circles – mentally at first, on paper if pressed. Such diagrams are not an unmixed blessing, however. As Johnson-Laird notes, a single premise can be represented by several different Euler diagrams. Some A are B, for example, requires four separate and distinct diagrams: A and B overlapping, A inside B, B inside A, and A and B coincident. Two such premises together can easily call for consideration of more than a dozen pairs of diagrams.

A psychological theory, therefore, might try to take advantage of this combinatorial aspect of Eulerian representations. Can cognitive difficulty be predicted from diagrammatic complexity? Is it possible that the most difficult syllogisms are those that require a thinker to deal with the greatest number of combinations of diagrams? If so, then our psychological model of the syllogistic reasoner would be a system that builds Eulerian models of inclusion relations.
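
The arithmetic behind this conjecture is easy to make explicit. In the sketch below (a rough illustration in Python, not anything taken from the book), only the four diagrams for 'Some X are Y' come from the passage above; the inventories for the other premise forms are the standard ones, supplied on my own authority:

EULER_DIAGRAMS = {
    "All X are Y":      ["X inside Y", "X and Y coincident"],
    "Some X are Y":     ["X and Y overlapping", "X inside Y",
                         "Y inside X", "X and Y coincident"],
    "No X is Y":        ["X and Y disjoint"],
    "Some X are not Y": ["X and Y overlapping", "Y inside X",
                         "X and Y disjoint"],
}

def diagram_pairs(mood1, mood2):
    """A syllogism obliges the reasoner to pair every diagram consistent
    with its first premise with every diagram consistent with its second."""
    return len(EULER_DIAGRAMS[mood1]) * len(EULER_DIAGRAMS[mood2])

print(diagram_pairs("All X are Y", "All X are Y"))    # 4 pairs
print(diagram_pairs("Some X are Y", "Some X are Y"))  # 16 pairs: the 'more than a dozen' noted above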

That conjecture turns out to be a good try, but not quite right. Johnson-Laird proposes an alternative form of representation that avoids much of the combinatorial complexity and fits the fine grain of his experimental data on people's syllogistic performances more closely.

Johnson-Laird’s proposal can be simply illustrated. When given the premise, ‘All the artists are beekeepers,’ the thinker summons up mentally a troupe of actors who play the roles of beekeeping artists:

artist = beekeeper

artist = beekeeper

Lurking just offstage, ready to be called if needed, are some non-artistic beekeepers:

                (beekeeper)

Given a second premise, ‘All beekeepers are chemists,’ the procedure is repeated: all actors playing beekeepers are instructed to act like chemists as well, and non-beekeeping chemists lurk offstage:

artist = beekeeper = chemist

artist = beekeeper = chemist

       (beekeeper) = (chemist)

                (chemist)

From an inspection of this tableau, a thinker can easily conclude that everyone who is acting the part of an artist is also acting the part of a chemist. In this manner an average person can arrive at a valid conclusion in total ignorance of all rules of logical inference.
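
The procedure invites a computational caricature. In the Python sketch below (the data structures and names are mine, offered purely for illustration, not Johnson-Laird's program), each actor is a set of the roles it plays, the parenthesised, merely possible actors are marked optional, and the conclusion is read off by inspecting the finished tableau:

def all_are(model, x, y):
    """Apply the premise 'All X are Y': every actor playing X also plays Y,
    and an optional Y who is not an X waits offstage."""
    for roles, _ in model:
        if x in roles:
            roles.add(y)
    model.append(({y}, True))          # the parenthesised token

def conclude_all(model, x, y):
    """Does every definite (non-optional) actor playing X also play Y?"""
    xs = [roles for roles, optional in model if x in roles and not optional]
    return bool(xs) and all(y in roles for roles in xs)

model = [({"artist"}, False), ({"artist"}, False)]   # a couple of definite artists
all_are(model, "artist", "beekeeper")                # All the artists are beekeepers
all_are(model, "beekeeper", "chemist")               # All beekeepers are chemists
print(conclude_all(model, "artist", "chemist"))      # True: all the artists are chemists

A fuller procedure would go on to look for alternative models in which the tentative conclusion fails; a single tableau suggests a conclusion but does not by itself guarantee one.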

This way of representing propositions is a test case for what Johnson-Laird means by a mental model. He uses it to illustrate several important generalisations. He shows, for example, how individual differences in ability to reason syllogistically can be accounted for. He shows how the ability to solve such problems develops during childhood and concludes that we have no innate capacity for logic. He shows that these mental models are not limited to syllogistic reasoning, but can support other kinds of inference. And he describes an effective procedure for constructing and manipulating these mental models, thus acknowledging his methodological commitment to cognitive science. And while all that is unfolding, he manages to dispose of various alternative hypotheses proposed by rival theorists.

From this launching-pad Johnson-Laird takes off into free flight through meaning, grammar, discourse and consciousness. It is unlikely that his positions on these baffling topics will persuade everyone, but he hews to a consistent line, and, what newcomers to the study of cognition will certainly appreciate, he lays out the basic ideas that motivate other theorists as well as those that motivate him. Where else can you find intelligible introductions to computability and recursive functions, possible worlds and set theoretic semantics, transformational generative grammar and parsing algorithms, all in one book and all germane to cognitive psychology?

Johnson-Laird’s central preoccupation is with comprehension, with what it means to understand something. Syllogisms are just a handy way to explore the processes of comprehension – he assumes that the kind of model-building he finds in logical reasoning also occurs in understanding other linguistic constructions. So he approaches semantics, the study of meaning, as a study of how language is related to mental models of the world.

Psychologists generally suppose that meanings are in the mind, but this common-sense assumption has been challenged by realist philosophers. To know the meaning of ‘gold’, for example, is to know what the word refers to. But centuries of scientific research have been required to discover the real meaning of ‘gold’. Since most people are not competent to determine whether some bit of yellow metal is or is not true gold, it makes no sense to claim that the real meaning of ‘gold’ is something they have in their minds. And if that objection holds for ‘gold’, it must hold for other words as well.

By taking meaning to be a relation between a word and a mental model, rather than between a word and the real world, Johnson-Laird tries to sidestep this controversy between Realism and Psychologism. Out of the great pudding of enigmas that semanticists worry about he pulls a different kind of plum. For example, how do people understand ‘I saw the Azores flying the Atlantic’? Out of context, the ambiguity of this sentence is obvious, but in the normal context of a description of the speaker’s travels, the absurd reading that has the Azores flying would never reach conscious awareness. Such selectivity is explained, Johnson-Laird argues, if you think of comprehension as a process of constructing a mental model of the discourse. Only those models that can be embedded in the model corresponding to the world will be taken as true.

It is only fair to say that such claims will not inspire universal agreement. Whether or not contextual knowledge can suppress the retrieval of inappropriate senses is at present a highly controversial question for experimental psychologists; there is evidence on both sides. Moreover, the assumption that people have a general model of the world, while it may be true, is hardly the kind of simplifying assumption that one expects from a scientific theory. If by ‘a mental model of the world’ one means the whole of an individual’s personal knowledge, then that would seem to be the thing to be explained, not the thing to be postulated. How vast amounts of information can be stored and organised for rapid retrieval is surely one of the central problems of cognitive science. If Johnson-Laird is right, however, discovering the rules people use for constructing mental models becomes a prime target for future research.

For example, in order to explore how mental models of a discourse might be constructed, Johnson-Laird works out a computer program to interpret spatial relations. Given:

A is on the right of B.

C is in front of B.

D is on the left of C.

The program constructs an appropriate spatial representation and uses it to respond ‘False’ to, say, ‘A is on the left of D.’ The exercise serves to highlight several general problems involved in constructing mental models, problems that also arise in comprehending other, more nutritious forms of discourse.
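
It is easy to imagine, at least in caricature, how such a program might proceed. The Python sketch below is my own, not the program described: each newly mentioned object is placed at coordinates offset from one already in the model, and the query is answered by comparing coordinates.

positions = {}        # object -> (x, y); larger x is further right, larger y further forward

OFFSETS = {
    "right of":    (1, 0),
    "left of":     (-1, 0),
    "in front of": (0, 1),
    "behind":      (0, -1),
}

def place(obj, relation, ref):
    """Place obj relative to ref; a reference object not yet in the model starts at the origin."""
    x, y = positions.setdefault(ref, (0, 0))
    dx, dy = OFFSETS[relation]
    positions[obj] = (x + dx, y + dy)

def left_of(a, b):
    return positions[a][0] < positions[b][0]

place("A", "right of", "B")             # A is on the right of B
place("C", "in front of", "B")          # C is in front of B
place("D", "left of", "C")              # D is on the left of C
print(left_of("A", "D"))                # False: in this model A lies to the right of D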

For example, how is a decision reached to start constructing a new model? Presumably, if an assertion refers to anything already represented in the current model of the discourse, its information will be added to that model: otherwise a new model is begun. If an assertion relates things in hitherto separate models, the two models will be integrated. If an assertion concerns only things that are already represented, then the truth of the assertion in the current model will be determined. And so on. Step by step, Johnson-Laird moves toward an effective procedure for constructing mental models of a discourse.
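
Reduced to a caricature in which each model is no more than the set of things it mentions, that bookkeeping might look like the following (again an illustration of the general idea, in Python of my own devising, not Johnson-Laird's procedure):

def incorporate(models, referents):
    """Decide what to do with an assertion about the given referents."""
    touched = [m for m in models if m & referents]
    if not touched:                       # nothing yet known about these referents:
        models.append(set(referents))     # begin a new model
        return "new model"
    if len(touched) > 1:                  # the assertion bridges separate models:
        merged = set().union(*touched) | referents
        models[:] = [m for m in models if m not in touched] + [merged]
        return "models integrated"
    model = touched[0]
    if referents <= model:                # everything already represented:
        return "verify the assertion against this model"
    model |= referents                    # otherwise add the new information
    return "model extended"

models = []
print(incorporate(models, {"A", "B"}))    # new model
print(incorporate(models, {"C", "D"}))    # new model
print(incorporate(models, {"B", "C"}))    # models integrated
print(incorporate(models, {"A", "D"}))    # verify the assertion against this model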

It is hardly surprising that many subjective states and processes can be adequately described in terms of mental models, but as Johnson-Laird realises full well, a theory of mental models cannot claim explanatory adequacy until the set of all possible mental models can be characterised in some principled way. As a beginning, Johnson-Laird offers a list of constraints that mental models must satisfy. The list helps to clarify the difference between his mental models and various other kinds of representation that have been proposed – propositions, images, prototypes, schemata, or whatever – but it is clear that much remains to be done.

Hiding somewhere under, beside or behind the present interest in cognition is an incurable fascination with consciousness. What is it? Who needs it? What is it good for? What kind of a neurological organ secretes it? What would answers to such questions look like?

Cognitive science is currently concerned with information-processing, which can be done by unconscious machines as well as by conscious human beings. Cognitive science has accepted no special responsibility to explain consciousness: if any science has that responsibility, it is psychology. But it is difficult to believe that if cognitive science really solved the problems it has set for itself, we would not be much closer to a theory of consciousness. For example, if the construction of mental models were understood, it would be natural to consider systems that constructed models of themselves. Would such a system be anything more than a robot that carried around its own blueprint? Or might it be endowed with some degree of self-awareness?

The contents of consciousness are limited: thoughts flow serially, one at a time, and each seems to be the product of much implicit work we have no direct awareness of. The modern conception of thinking as computation has solidly supported the idea that even the simplest mental acts – recognising a face, understanding a word – are achieved by the elaborate processing of information, processing whose complexity eludes any conscious, introspective report. The brain that must perform these elaborate computations, however, consists of relatively sluggish components; the only way we can at present imagine how it could achieve the necessary speed is through parallel computations.

Whereas man-made computers are serial systems, performing one computation at a time at enormous speeds, there are many reasons to believe that the living brain is a parallel system: many computations are performed simultaneously, but each at a modest rate. Although parallel computation confers speed and flexibility, the obvious problem is co-ordination. In order to make the system work, therefore, a high-level, serial operating system is required. Johnson-Laird speculates that ‘the contents of consciousness are the current values of the parameters governing the high-level computations of the operating system.’

He does not try to duck the implication that a conscious computer could be built if we could assemble many, many parallel components to handle the input and output of information, and could equip it with a model of itself and a high-level processing system. His comments are worth quoting.

Any scientific theory of the mind has to treat it as an automaton. This is in no way demeaning or dehumanising, but a direct consequence of the computability of scientific theories. Above all, it is entirely consistent with the view that people are responsible agents. It is a proposition, however, that may lead us in time to revise our concept of computation.

One begins to get an appreciation of what a theory of consciousness might look like if we had one. It is interesting that this formulation is not purely cognitive: intentions and the will are clearly a part of it, and feelings and emotions are not excluded. The journey through cognitive science has wound finally home to psychology.

Johnson-Laird is not one of those psychologists who insist that your mind is not really what you think it is. Indeed, he stays so close to the folk model of the mind that some colleagues will surely criticise his conservatism. But the computational theory of mind is so radical a departure from traditional ideas that a little conservatism can be useful, a welcome aid in integrating the new with the familiar.

Nor is Johnson-Laird given to methodolatry, insistent that facts are suspect until confirmed by whatever method of investigation is currently fashionable. In this book, experimental data are freely supplemented by developmental or clinical data, and theories are drawn from all corners of cognitive science. Here again his critics will be at him, since no one can be simultaneously at the forefront of all the different realms he covers. But if cognitive science is to be more than an empty promise, this is the way it will have to be done.
