Consciousness Explained 
by Daniel Dennett.
Little, Brown, 514 pp., $27.95, October 1991, 0 316 18065 3

For more than forty years, starting with the publication of Ryle’s very influential The Concept of Mind in 1949, some of the best of the analytic philosophers have devoted themselves to the question of whether we can find a satisfactory substitute for what Ryle sneeringly called ‘the ghost in the machine’ – Descartes’s picture of human beings as divided into a material body and an immaterial mind. Philosophy of mind is one of the few clear instances of intellectual progress which analytic philosophy has to its credit. If one reads the contributions of post-Rylean anti-Cartesians in chronological order – Wilfrid Sellars, J.J.C. Smart, David Armstrong, Hilary Putnam, Jerry Fodor, Donald Davidson, Ruth Millikan, Patricia and Paul Churchland – one gets a clear sense of a developing consensus. There is increasing agreement about which moves will and won’t work, which strategies are dead and which still alive. Bad questions have been gradually set aside and better ones posed. Discussion has become steadily more sophisticated.

Until now, however, no book in this area has matched the freshness, originality and power of Ryle’s. The Concept of Mind was a brilliant attempt to undo what Descartes had done and substitute a view of human beings more like Aristotle’s. Aristotle, Ryle thought, had sensibly seen that to talk about someone’s mind is to talk about features of her intellect and character, not to talk about something incorporeal mysteriously lodged within her. Ryle wrote with wit and bravura, in prose which was accessible to non-philosophers. His book toppled settled philosophical convictions like ninepins, and was rightly taken as the showpiece of post-war British philosophy.

Daniel Dennett, who was one of Ryle’s students, has spent the last twenty-five years writing about the topic – consciousness – which has always been the great obstacle to the acceptance of Ryle’s non-Cartesian account of mind. Most of this work, however, has been in the form of articles, each of them incisive and vigorous, yet hard to fit together into a coherent doctrine. Now, however, he has brought off a remarkable feat: he has written a book with all the boldness, originality and panache of The Concept of Mind.

Consciousness Explained is a triumph of perspicuous organisation, as well as a fine piece of philosophical writing. If one were asked which contemporary philosopher crafts the best English prose, Dennett would be one of the first to come to mind. Like The Concept of Mind, this book can be read with genuine pleasure by a non-philosopher; anyone who picks it up will be swept up in the excitement of Dennett’s project. Even a reader who has never looked into a psychology book, or taken an interest in computers, will find herself absorbed in summaries of complicated psychological and physiological experiments, and in brief but clear accounts of curious computer programs. Dennett integrates all these into the development of his ‘Multiple Drafts’ model of consciousness with great skill, and moves smoothly back and forth between laboratory results and philosophical generalisations. Whereas The Concept of Mind was essentially the product of one man’s armchair reflections, Consciousness Explained is the upshot of active co-operation, of two decades’ worth of conferring, arguing and consensus-building by cognitive scientists – by the psychologists, linguists and philosophers who have been thinking about the resemblances and differences between computers and people.

The greatest difference between the two books, however, is methodological. Ryle thought that ‘conceptual analysis’ (the sort of thing which, in Ryle’s day, analytic philosophers were supposed to be skilled at – detecting the ‘real’ meaning, or the ‘real’ grammar, of linguistic expressions) sufficed to dethrone Descartes. Dennett realises that nothing will do that job except a brand-new description of ourselves, one whose use will gradually dissipate our tendency to build Cartesian presuppositions into our questions about the mind. The difference between Ryle and Dennett is the difference between saying that, since Descartes bamboozled us, we should get back to good old common sense (and to Aristotle, who embodied it), and saying something less bluff and more nuanced: that Descartes did a reasonably good job, a job which needed to be done. He developed a non-Aristotelian description of humans which harmonised with the corpuscularian, anti-Aristotelian tenor of 17th-century science. It was a description which worked well for some purposes. But it also produced a lot of philosophical headaches, and a lot of weird theories. We can avoid these headaches and that weirdness if we can come up with a new way of talking about our distinctively human abilities, one which will improve on both Aristotle’s and Descartes’s.

The heart of Dennett’s book is his attempt to persuade us that we can safely give up questions which presuppose what he calls ‘the Cartesian Theatre’ – presuppose that ‘I’ names a sort of observer of the events which are ‘present to consciousness’. As Dennett says, ‘the idea of a special centre in the brain’ – the relic of Descartes’s suggestion that mind and body interface at the pineal gland – ‘is the most tenacious idea bedevilling our attempts to think about consciousness.’ It seems as natural to contemporary psychologists as it did to Descartes to suggest that, ‘however difficult it might be to determine in practice the precise location of the Continental Divide in the brain’, still there must be ‘a highest point, a turning-point’, a place in between the afferent and the efferent nerves where a quasi-person sits. This quasi-person is the real, incorporeal you – the you that might, conceivably, survive even after the brain rots.

When Hume pointed out that no such quasi-person, no referent for the term ‘I’, was introspectible, Kant rejoined that the fact that ‘the “I think” can accompany all my representations’ showed that there was a ‘transcendental ego’ for which neither empirical psychology nor physiology could possibly account. German idealism, the great bulwark which the 19th century erected against scientific naturalism, owed its existence to Kant’s success in thus rehabilitating the ghost Descartes had postulated. The conviction which idealists shared with Husserlian phenomenologists – that consciousness must for ever remain opaque to natural science – has been the basis of an enormous amount of bad philosophy from Kant’s day to this. Many people have claimed that human dignity would somehow be compromised if psycho-physiology should ever succeed in de-transcendentalising the ego – if we should ever be revealed to be ‘mere machines’. Many others have claimed that we must drastically revise our self-image now that we have realised that we are just Turing machines made out of protoplasm.

Dennett’s claim is that, thanks in part to the development of parallel distributed processing, a way of programming computers which ‘blazes the first remotely plausible trails of unification in the huge terra incognita lying between the mind sciences and the brain sciences’, we are now in a position to tell a plausible story about how the brain does what it does, one which saves all the psychological phenomena without making any reference to the spectator in the Cartesian Theatre. The discovery of the double helix, followed by a lot of further concrete details about how amino-acid chains get built up, left us less inclined to talk about the ‘insoluble mystery of the nature of life’; as Dennett puts it, ‘the recursive intricacies of the reproductive machinery of DNA make élan vital about as interesting as Superman’s dread kryptonite.’ Analogously, Dennett thinks, the model of consciousness suggested by recent work in cognitive science – the one he calls ‘the Multiple Drafts model’ – will leave us less inclined to speak of ‘the nature’ of consciousness, and less inclined to ask: ‘how can consciousness “emerge” from the “merely physical”?’ Dennett thinks that meditation on the difference between the conscious and the non-conscious is as little relevant to our sense of what is possible and important for human beings as is meditation on the difference between living and non-living matter.

‘The fundamental implication of the Multiple Drafts model,’ Dennett says, is that if ‘one wants to settle on some moment of processing in the brain as the moment of consciousness, this has to be arbitrary. One can always “draw a line” in the stream of processes in the brain, but there are no functional differences that could motivate declaring all prior stages and revisions to be unconscious or preconscious adjustments, and all subsequent emendations to the content (as revealed by recollection) to be post-experiential memory contamination. The distinction lapses in close quarters.’ Just as the distinction between living and non-living matter is of little relevance for the microbiologist tweezing strands of DNA apart and putting them back together, so the distinction between what the subject is conscious of at any given instant and what she is oblivious to is of little relevance for the psychologist trying to explain, for example, so-called ‘phi’ phenomena.

Dennett’s neatest example of these phenomena is what happens when the psychologist flashes two small spots of light, the first red and the second green, against a dark background in rapid succession. The subject of the experiment reports that she saw a spot moving to the left and changing its colour from red to green near the mid-point of its (illusory) passage. Dennett comments: ‘Unless there is “precognition” in the brain (an extravagant hypothesis we will postpone indefinitely), the illusory content, red-switching-to-green-in-mid-course, cannot be created until after some identification of the second, green spot occurs in the brain. But if the second spot is already “in conscious experience”, wouldn’t it be too late to interpose the illusory content between the conscious experience of the red spot and the conscious experience of the green spot? How does the brain accomplish this sleight of hand?’ Dennett goes on to say that people who are still ‘locked in the Cartesian Theatre’ will insist that since ‘the (illusory) discrimination of red-turning-to-green has to be accomplished after the discrimination of the green spot,’ it must be the case that ‘your consciousness of the whole event must be delayed until after the green spot is (unconsciously?) perceived.’

To help you escape from the images, and the inclination to ask bad questions, which make up our belief in the Cartesian Theatre, Dennett suggests that you consider an ‘Orwellian’ account of what the brain is up to, as an alternative to the ‘Stalinesque’ one just suggested (in which the censor holds up transmission until a more acceptable, properly edited, version is available for release). A Stalinesque editor, as Dennett puts it, splices some extra frames into the film before it is sent on to the Theatre whose sole spectator is the ‘conscious subject’. An Orwellian editor, on the other hand, ‘notices that the unvarnished history in this instance doesn’t make enough sense, so he interprets the brute events, red-followed-by-green, by making up a narrative about the intervening passage, complete with mid-course colour change, and installs this history ... in the memory library for future reference ... You say and believe that you saw the illusory motion and colour change, but that is really a memory hallucination, not an accurate recollection of your original consciousness.’ In the Stalinesque case some extra processing is done before you become conscious of both spots, in the Orwellian case afterwards. How do you choose between these two explanations of the phi phenomenon?

You don’t, and you shouldn’t try – because nothing could ever resolve the question. You would only try to resolve it if you were convinced of the existence of a ‘subject of consciousness’ – of the reality of the ghost postulated by Descartes and redescribed by Kant. If you wave this ghost aside, if you stop asking about ‘the centre of consciousness’ and just think of people as Ryle wanted you to (as organisms with special abilities which they owe to special physiologies), you will probably agree with Dennett that since ‘all the evidence there is or could be fails to distinguish between the Orwellian and Stalinesque theories of conscious experience,’ there is no difference between these theories. We can just forget the question of ‘what was present to consciousness at that instant’ and instead say, as Dennett does, that ‘the observer’s point of view’ is ‘spatiotemporally smeared all over the brain’.

As Dennett is at pains to point out, the big question his book raises is whether this claim that there is no difference disregards ‘brute facts of the matter unreachable by science’ – brute facts about ‘our immediate conscious experience’. Most of his book is devoted to persuading you that your previous conviction that there are such facts is a result of your having been brought up on a set of images which you can now discard. In the splendidly honest final paragraph of his last chapter Dennett answers the hostile question ‘Are you explaining consciousness, or just explaining it away?’ by saying: ‘All I have done, really, is to replace one family of metaphors and images with another, trading in the Theatre, the Witness, the Central Meaner, the Figment, for Software, Virtual Machines, Multiple Drafts, a Pandemonium of Homunculi. It’s just a war of metaphors, you say – but metaphors are not “just” metaphors; metaphors are the tools of thought. No one can think about consciousness without them, so it is important to equip yourself with the best set of tools available.’

This is a very refreshing way to end a book – an ending which could have been written only by an exceptionally self-critical and fair-minded philosopher. Most of us cannot resist saying, or at least suggesting, that the topic on which we have written our book has at last been gotten right: that our book succeeded in taking the reader from appearance to reality, from myth to truth. That, for example, is what Ryle claimed to have done. In contrast, Dennett has the sense and the modesty to see that ‘appearance’ and ‘myth’ are not relevant notions in philosophy. The best we philosophers can do is to say: try doing it my way; try describing what goes on in my terms; perhaps you may find that I have given you better tools with which to achieve your ends.

The philosophers who will most dislike Dennett’s book – those who will be most convinced that he has merely explained consciousness away – are those who believe that some of the things which Dennett calls ‘old metaphors’ incarnate self-evident truths. As regular readers of these pages know, philosophers like Colin McGinn, Thomas Nagel and John Searle have no use for Dennett. All of them would agree that he has failed to respect ‘the brute facts of immediate conscious experience’. All of them would sympathise with Roger Penrose’s conviction, as put forward in The Emperor’s New Mind, that computers can’t feel like anything, whereas we can, and that consciousness is not dependent upon the possession of a language in which to formulate beliefs. So they will find absurd Dennett’s claim that ‘what distinguishes a conscious state from a non-conscious state’ is ‘the straightforward property of having a higher-order accompanying thought that is about the state in question’. They will claim that Descartes’s metaphors are faithful, as Dennett’s are not, to intuitions which are prior to our choice of vocabulary. In particular, Nagel would heartily agree with Penrose that ‘a child can see’ that ‘the conscious cannot work like a computer.’

Dennett has effectively criticised Penrose’s book insofar as its argument depends upon inferring from ‘algorithms cannot do everything’ to ‘an algorithmically functioning mind or brain cannot do what we can do.’ As Dennett puts it, ‘in-principle limits of algorithms are simply irrelevant in a biological setting.’ But this criticism does not touch the intuitive convictions which lie behind Penrose’s book – convictions which lead Penrose, Nagel and the child to agree that there is such a thing as phenomenology, in the sense of right answers to the question ‘What is immediately given to consciousness?’, where ‘given’ is construed in such a way that the subject herself may be quite wrong about what is given to her consciousness. In reply to such critics, Dennett protests ‘There seems to be phenomenology ... But it does not follow from this undeniable, universally attested fact that there really is phenomenology.’ Children have been wrong on such matters before, and so have philosophers.

The conflict of intuitions between Dennett and his distinguished and implacable opponents cannot be resolved by any direct process of argumentation. Dennett does all he can when he gives us dozens of examples of how his new metaphors can be put to use saving the phenomena. As with all really influential and durable philosophy books, this book’s power lies not in propounding a nice, clean, sharp, conclusive argument, but in its details: in the elegant way in which Dennett’s novel metaphors allow him to handle all sorts of odd and paradoxical little facts, and to undermine all sorts of seemingly self-evident truths.

The reader of Consciousness Explained will find a wealth of ideas which I have only been able to hint at. In particular, I have not mentioned what may prove to be the most seminal part of the book: the chapters called ‘The Evolution of Consciousness’ and ‘How words do things with us’. There Dennett puts forward a suggestion which I hope may provide the theme of his next book: the idea that consciousness is not built into human anatomy but is a cultural development. He puts this point in terms of Richard Dawkins’s notion of ‘memes’. Dawkins defines a meme as a kind of replicator – analogous to a gene or a crystal. This kind of replicator is a ‘pattern of information that can thrive only in brains or the artificially-manufactured products of brains – books, computers and so on’. Dennett follows up by saying: ‘The haven all memes depend on reaching is the human mind, but a human mind is itself an artifact created when memes restructure a human brain in order to make it a better habitat for memes.’

This suggestion is exciting because it blurs the distinction between ‘natural’ and ‘human’ sciences; it undermines the traditional notion that nature provides consciousness, and culture then provides the language in which consciousness can find expression. It also undermines the traditional Cartesian assumption that consciousness is an all-or-nothing affair – like a light bulb in being either on or off. It leads to a more Hegelian view, according to which consciousness, like language, is a product of social relations, and has an intensity proportional to the complexity of those relations. Such a view allows for borderline cases of consciousness, in the same way as current microbiological notions let us countenance a fuzzy and shifting boundary between the organic and the inorganic.

This view chimes with Dennett’s suggestion that the sort of programming which can turn a brain into a conscious mind is analogous to the sort of software which can turn a piece of hardware into one. But what sort of software is that? Not, Dennett would argue, the sort beloved by the so-called ‘MIT school’ of cognitive science, the school led by Jerry Fodor, which favours the ‘computational’ kind of computer program that attempts to mimic inferences between English sentences. The claim that that sort of program, when run on a machine, is a case of consciousness, or even of thought, has always been dubious. For, as Searle and others have said, the thinking has all been done in advance by the programmer. There seems no better reason to attribute consciousness to that sort of machine than to attribute musical sensitivity to a tape cassette or a vivid imagination to a movie screen.

Rather, the software in question is the ‘connectionist’ sort. The big difference between the MIT, computational way of processing information and the connectionist way is that the former involves a kind of ‘language of thought’, a set of representations of features of the environment which can be paired with ordinary English-language descriptions of such features. In the case of a computational model, it makes sense to ask questions like ‘How does the computer represent the table? The star? The colour red? The over-all outline of my face?’ (The answer to the last question might be, as it was for the robot played by Yul Brynner in the movie Westworld, ‘it represents it as a pattern of little squares of different shades of darkness.’)

Such questions, however, don’t make sense for a connectionistically-programmed computer. When you do connectionist programming you do not draw up a flow chart which breaks the processing up into steps analogous to those which a human being would go through in order to solve the same problem. You do not tell the computer what will count as necessary and sufficient conditions for the solution of the problem in hand. Rather, you tell it, so to speak, to ‘blunder around for a while until we tell you that you are doing something we approve of’ – just as you do when you are bringing up a baby. The baby doesn’t know what would count as satisfying you until it has blundered into a pattern of behaviour that you approve of, and neither does this sort of computer. Neither the baby nor this kind of computer is ‘logical’ in its approach to problems, even though the computer’s behaviour (and the baby’s) could be predicted by somebody who had an algorithm for predicting the way in which electrical charges would be redistributed inside the computer (or the baby’s brain). Being predictable by reference to an algorithm, we now realise, has nothing to do with being ‘logical’ – but it has taken thirty years of work in cognitive science to bring this point home to us. Computers running connectionist programs are much less like Mr Spock, and much more like us, than computers running traditional computational programs.
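To make the contrast concrete, here is a minimal sketch, in Python, of the ‘blunder around until approved’ style of training just described. Nothing in it comes from Dennett’s book or from any particular connectionist system: the tiny network, the task (learning the logical function XOR) and every name in it are illustrative assumptions. The point is only that the programmer supplies approval or disapproval of outputs, never a flow chart of intermediate steps.

import math
import random

# A tiny network: two inputs, two hidden units, one output unit.
# Everything here is an illustrative assumption, not anything from Dennett.

EXAMPLES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # the XOR task

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(weights, inputs):
    """Compute the network's output for a pair of inputs."""
    w = weights
    h1 = sigmoid(w[0] * inputs[0] + w[1] * inputs[1] + w[2])
    h2 = sigmoid(w[3] * inputs[0] + w[4] * inputs[1] + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def disapproval(weights):
    """External feedback: how far the outputs are from what we approve of."""
    return sum((forward(weights, x) - target) ** 2 for x, target in EXAMPLES)

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(9)]
error = disapproval(weights)

# Blunder around: nudge one weight at random, keep the change only if
# the external judge approves (i.e. the error on the exemplars goes down).
for step in range(50000):
    candidate = list(weights)
    i = random.randrange(len(candidate))
    candidate[i] += random.gauss(0, 0.3)
    candidate_error = disapproval(candidate)
    if candidate_error < error:
        weights, error = candidate, candidate_error

# See how close the blundering got; as with babies, success is not guaranteed.
for inputs, target in EXAMPLES:
    print(inputs, target, round(forward(weights, inputs), 2))

Whether this particular bit of blundering succeeds depends, as with the baby, on luck and persistence; it is the shape of the procedure, not any guarantee of success, that distinguishes it from a computational program.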

Dennett is suggesting that we see a baby’s brain being gradually restructured by the surrounding culture on the model of a computer being gradually restructured by running connectionist software. The same hardware description will apply to the computer, and the same neuro-anatomical description to the brain, throughout this process. But the functional description, the one that explains how things are getting done, will keep changing. Dennett says that he likens human consciousness to culturally-provided programming rather than to biologically-provided hardware because ‘some important and otherwise extremely puzzling features of consciousness get illuminating explanations on the hypothesis that human consciousness is 1. too recent an innovation to be hard-wired into the innate machinery, 2. largely a product of cultural evolution that gets imparted to brains in early training, and 3. its successful installation is determined by myriads of micro-settings in the plasticity of the brain, which means that its functionally important features are very likely to be invisible to neuro-anatomical scrutiny in spite of the extreme salience of its effects.’ Dennett is confident that, since we in fact have enough micro-settings in the brain to permit it to run analogues to the sort of ‘parallel distributed processing’ programs which connectionist cognitive scientists are now running on computers, we have no need to long for breakthroughs at the sub-quark level, the sorts of breakthrough of which Penrose dreams.

Penrose unfortunately brushed aside this new kind of programming as irrelevant to the argument about the possibility of artificial consciousness. Dennett makes the relevance clear, and shows just how the connectionist model of the mind – though still (like all computer programs) algorithmic and thus, if you like, ‘mechanical’ – is something quite different from the standard computational model which has been made familiar by Fodor, Chomsky and others. The criticisms which have, for decades, been levelled at that model by Hubert Dreyfus – e.g. that no such model has given us a computer ‘with the flexibility of a six-month-old child’ – do not, as Dreyfus himself has pointed out in his recent writings, apply to the connectionist model. Flexibility, the ability to muddle through, is just what connectionistically-programmed computers are good at.

Dennett, however, invokes the connectionist model without going into much detail about just what parallel distributed processing is. Anyone sufficiently intrigued by his book to want a better understanding of this model will get what she wants from Andy Clark’s very clear, helpful and sensible Microcognition: Philosophy, Cognitive Science and Parallel Distributed Processing.* Whereas a lot of books on connectionist mind-modelling tell you more than you want to know about programming details, Clark (a philosopher himself) tells you just enough to let you understand why Dennett, Dreyfus, the Churchlands and many other contemporary philosophers find the recent development of parallel distributed processing so fascinating and enlightening.

In particular, Clark explains why connectionist programming is unperspicuous in a way that computational programming is not – why the connectionistically-programmed computer’s performance of its task has no clear relation to the underlying algorithm which causes the performance, and why you usually have no idea how to ‘debug’ such a program when it does not do what you hoped. Further, Clark helps us see why the fact that this sort of programming produces no ‘program states’ which are analogous to beliefs or desires should not lead us to conclude that ‘from a truly scientific point of view’ beliefs and desires are unreal – any more than the fact that there are no microstructural states within the skin which are analogous to the skin’s tint or texture makes the latter features ‘scientifically unreal’. Like Dennett’s, Clark’s book is a remarkably clear guide for non-philosophers to issues which are at the cutting edge of research in cognitive science. My hunch is that the line of thought which Dennett and Clark exemplify will provide one of the rare examples of philosophers offering quite specific and concrete help, as opposed to atmospheric encouragement, to several scientific disciplines. Usually philosophy, as Hegel put it, ‘paints its gray on gray’ only when some other area of culture has done all the hard work. But for once it may be different. Psychologists and computer scientists may find that the philosophers have actually broken some exciting paths, rather than simply doing their usual tidying-up job after the scientific bulldozers have knocked down still more of our previously unquestioned assumptions.

If I am right in this optimistic view, however, we philosophers should not use the occasion to sit back and preen ourselves on our habits of ‘rigorous thought’, or our grasp of ‘powerful analytic techniques’. Dennett’s success is not a matter of rigour, nor an application of teachable techniques. Rather, it is due to Dennett’s combining an idiosyncratic mastery of metaphor with some homespun virtues: the energy to do a lot of reading around in widely diverse areas, the willingness to hear his opponents out, and the patience to give their arguments honest and sustained consideration.


Letters

Vol. 13 No. 24 · 19 December 1991

So Richard Rorty has joined Daniel Dennett (LRB, 21 November) on the connectionism-will-solve-all-our-philosophical-problems bandwagon. But like the similar enthusiasm for AI (Artificial Intelligence), this fervour is more a fashion created by what people think connectionism promises than the product of a detailed understanding of what it can and cannot do. In the beginning AI promised much, but over time it has become apparent, as Rorty correctly observes, that it can only deliver what its programmers have already programmed into it. Rorty and Dennett enthuse over connectionism because it appears to be free of this problem. They are wrong. Connectionism suffers from a similar, but undiscussed, difficulty known as the training problem.

Essentially, connectionism concerns networks made up of interconnected units (including ‘hidden’ units), each with adjustable ‘thresholds’, which learn generalisations by being trained on exemplars with error correction. In somewhat simplified terms, networks learn by being given inputs which the network converts into outputs. After each output, the thresholds of the network’s units are adjusted according to whether the output was correct or not. The training problem concerns where the network gets the information needed for this error-correction process. The problem is that error correction makes the training of a network dependent upon some kind of external assistance to tell it whether its output was the correct response to the input it was given: without this knowledge it cannot tell whether it has made an error and so cannot be trained. So while networks do not need programmers, they do need some kind of external help.
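To see the point in miniature, here is a sketch, in Python, of a single threshold unit trained by the classic perceptron error-correction rule. The rule, the task (logical AND) and all the names below are assumptions made for illustration – the letter commits to no particular rule – but the crucial feature is general: the adjustment can only be made because an external teacher supplies the correct answer for each input.

# A single threshold unit trained by the perceptron error-correction rule
# (an illustrative assumption; no particular rule is specified above).
# The crucial line is the computation of `error`: without an external
# teacher supplying `target`, no error can be computed and no adjustment made.

def output(weights, threshold, inputs):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

def train_step(weights, threshold, inputs, target, rate=0.1):
    """Adjust weights and threshold according to whether the output was correct."""
    error = target - output(weights, threshold, inputs)  # needs external feedback
    weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    threshold = threshold - rate * error
    return weights, threshold

# Teaching the unit logical AND, with the correct answers supplied from outside.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, threshold = [0.0, 0.0], 0.0
for _ in range(25):
    for inputs, target in examples:
        weights, threshold = train_step(weights, threshold, inputs, target)

print([output(weights, threshold, x) for x, _ in examples])  # prints [0, 0, 0, 1]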

Rorty and Dennett focus upon the wonderful generalisations networks can learn. They ignore where the external help needed to train networks might come from. But for many cognitive skills there is no easy way networks in the brain can obtain the error-correction feedback needed to train them. The problem is recursive: if there were a process in the brain which could provide this information, then its own development would depend in turn upon some further process. For some cognitive processes, like reading, there may be ways out. Connectionist models of reading successfully account for human reading performance, and it is likely that the error correction needed to train them comes in part from a separate kind of process which sounds out written words from their spelling (hence the difficulty in learning to read encountered by children with problems over sounding out words). However, there is no reason to assume consciousness is going to have such an easy way out of the training problem (and certainly Dennett has not provided one). It may be that even with connectionist models we will still find a need for God. Consciousness may have to remain unexplained for a bit longer.

John Skoyles
Psychology Department,

If ‘what distinguishes a conscious state from a non-conscious state’ is, as Dennett claims, the former’s having ‘a higher-order accompanying thought that is about the state in question’, then what should we call the supposedly ‘non-conscious’ states experienced by animals aware of the presence of food, say, to distinguish those states from the non-conscious states that rocks and machinery and the like apparently experience throughout their existence? More important, what do we call that which experiences either a conscious or a non-conscious state of awareness if not a ‘consciousness’? And how can a Cartesian be wrong in supposing this ‘experiencing mechanism’ to be significantly different from its contents?

On a more mundane plane, I’d also like to know why Rorty disruptively goes against established usage concerning the generic third-person pronoun. Is it only to signal her sympathy for the neurotically hyper-offendable, or does she have some intelligent reason for it?

Bob Grumman
Port Charlotte, Florida

Vol. 14 No. 1 · 9 January 1992

It’s a while since I’ve come across an example of that fine old genre, the Glowing Review that is also a narrative of mastery. But it’s made a major comeback in Richard Rorty’s piece on Dennett’s Consciousness Explained (LRB, 21 November 1991). Who better than Rorty to stage it, with its all-star dramatis personae: Aristotle, Descartes, Hegel, Hume, Kant, Orwell, Stalin, Ryle – with Roger Penrose, John Searle and Thomas Nagel appropriately designated walk-on roles. And it’s stirring stuff. Dennett, student of a great master who ‘toppled settled philosophical convictions like ninepins’ in a work that ‘was rightly taken as the show-piece of post-war British philosophy’, has pulled off a remarkable feat. He has produced a work to equal it in boldness, originality and panache, full of ‘seminal’ arguments, which also display ‘an idiosyncratic mastery of metaphor’ etc, etc.

Well, this is all fine and, for those who like that sort of thing, it is most definitely the sort of thing they like. I don’t mind a good Heldensage myself if I’m in the mood, as a matter of fact, but that’s strictly provided they leave me – that is, ‘she’ – out of it. I do object to being inscribed in the text when the roles for women are so lousy. What do I get here? ‘Aristotle, Ryle thought, had sensibly seen that to talk about someone’s mind is to talk about the features of her intellect and character …’ Penrose & Co agree that there are right answers to the question ‘“What is immediately given to consciousness?”, where “given” is construed in such a way that the subject herself may be quite wrong about what is given to her consciousness’. ‘Even a reader who has never looked into a psychology book, or taken an interest in computers, will find herself absorbed in summaries of complicated psychological and physiological experiments, and in brief but clear accounts of curious computer programs.’ ‘Anyone sufficiently intrigued by this book to want a better understanding of this model will get what she wants from Andy Clark’s clear, helpful …’

After Aristotle has given my mind the once-over and some of his distant descendants have surmised that I am wrong in the head, I get to be a B-stream reader who is to be offered brief, clear and helpful explanations about what goes on in the realms of seminal thought. Gee, thanks.

Jane Goodall
Bolton Point, New South Wales

Richard Rorty is obviously of the opinion that Daniel Dennett has shouted ‘fire’ in a crowded Cartesian Theatre. If so, many will be expected to make a beeline to the nearest available exit. But on reflection (whatever that may be), Dennett is not convinced there is a Cartesian Theatre – no theatre, no exit – perhaps no fire either. In any event, I suspect John Searle and Thomas Nagel, regardless of where they are now seated, will indeed be hanging around for the second feature.

Peter Kalac
Red Bank, New Jersey

Vol. 14 No. 4 · 27 February 1992

Richard Rorty’s review of Dennett’s Consciousness Explained was lucid and helpful (LRB, 21 November 1991). I wish to refer those interested in its themes to Edward Hundert’s Philosophy, Psychiatry and Neuroscience, where similar ideas are developed.

Leston Havens
Harvard Medical School

