Maybe, some day, we’ll have serious and well-confirmed theories about how minds work; theories that actually explain interesting things. Historians of science will then be able to consider psychology as just another episode in the long struggle. If so, I bet they’ll be struck by how often in 20th-century behavioural science methodological nuttiness got in the way. Why, they’ll wonder, did psychology feel compelled to embark on its investigations by tying one hand behind its back and using the other to shoot itself in the foot? Didn’t problems about the mind seem hard enough to bear without adding a freight of procedural inhibitions?
If a Lion Could Talk invites that sort of grouchy and tendentious speculation. Stephen Budiansky wants to know whether animals think. Quite sensibly, he has gone to what science there is for what answers it affords, and the book is a report for a lay audience. Well researched and easy to read, it’s a success by the standards of this kind of enterprise. But, in Budiansky’s book as among the scientists themselves, methodological nuttiness is pervasive. I’ll come back to this presently.
Whether animals can think is a hard question, and not just because the relevant experimental data are exiguous and anecdotal observations arguably unreliable. For one thing, it’s a question that isn’t easy to keep in focus; it’s forever getting mixed up with whether animals’ thinking is ‘conscious’, or ‘self-conscious’, or whether it’s ‘symbolic’, or ‘semantic’, or whether it’s ‘like ours’. Such conflations are practically irresistible, and Budiansky commits them regularly.
Budiansky is inclined to think that being conscious has something to do with having a capacity for ‘higher order’ thoughts (thoughts about one’s thoughts); and that these, in turn, are possible only for animals that are able to talk; hence, de facto, only for us. Both theses have, to be sure, a provenance in the philosophy of mind; but neither is a foregone conclusion, and one ought to hedge a lot on claims that presuppose them. I don’t know how ‘smart’ my cat is; I’m awfully afraid not very. Certainly I doubt that he ever thinks that he’s thinking that I’ve just stepped on his tail. But I don’t doubt at all that he finds having his tail stepped on painful, so when I do, I make it a point to apologise. Since being stepped on clearly pains my cat, I suppose it follows that he’s conscious, at least from time to time. If so, then his being conscious maybe doesn’t need much higher order thinking after all. Perhaps, indeed, thought and consciousness are largely independent in both directions. It’s been the burden of quite a lot of psychology since Freud that much of our thinking goes on without our noticing. In any case, it isn’t clear how being able to talk could create thought or consciousness since, to put it mildly, it isn’t clear how a creature that can’t think and isn’t conscious could learn a language. Budiansky doesn’t tell us, and neither do the philosophers he relies on.
I remark in passing that the animal rights movement is a target of some of Budiansky’s polemics. He claims it has overstated the case for the intelligence of animals in the course of arguing their moral status. Well, perhaps it has, but I wouldn’t have thought that it needed to. Surely what matters to whether it’s all right for me to step on the cat’s tail is primarily whether it hurts him, not what he thinks about it: still less whether he thinks about what he thinks about it. Budiansky writes that ‘our ability to have thoughts about our experiences turns emotions into something far greater and sometimes far worse than mere pain.’ Well, maybe, but in my experience, mere pain is quite bad enough. I like mine with lots of anaesthesia, or not at all. So, I imagine, does the cat.
One thing, then, that makes the problem whether animals think so hard is that it’s tangled, maybe inextricably, with other questions that one doesn’t know what to say about, like how thinking relates to talking and feeling. A second is that whether an animal can think may depend a lot on what it is trying to think about. Budiansky’s alertness to this point is perhaps what is best in his book. It seems increasingly clear that a lot of our intellectual capacity is, as one says these days, ‘modularised’; it recruits processes that are dedicated to solving problems in quite restricted domains. For example, it may well be that the (unconscious) procedures that mediate our very impressive ability to recognise faces at a glance aren’t similarly available for the visual identification of other sorts of thing. Psychologists don’t agree about how much of our mind is modular; estimates run from none of it to all of it. Budiansky thinks that the right view is probably somewhere in the middle, and so do I. If that’s right, it opens the possibility that many animal minds are as smart as ours, or smarter, when the animals are doing their proprietary things. Bees and pigeons are notoriously not very bright overall, but they’re impressively good at navigation; likewise bats at catching bugs in the dark, and rats at solving tunnel mazes. Presumably, in each such case, the problems the animals are good at are ones their minds were designed to solve. It appears that evolution (or whatever) has put quite sophisticated mathematical machinery at their disposal for these specialised purposes.
Maybe, therefore, the difference between animal minds and ours has less to do with the quality, or even the quantity, of the available intelligence, than with the range of problems on which it can be brought to bear. Perhaps what’s special about us is that we’ve somehow got good, not just at doing the things that we’re designed for, like face recognition, but at lots of other things as well. Maybe, even, it’s our being able to talk that somehow frees our intellects from the innate domain specificity to which the intelligence of animals appears to be largely restricted. And maybe that does have something to do with consciousness. It’s notable that the best examples of our modularised cognitive processes are ones we aren’t aware of; whereas the obvious example of our conscious cognition is explicit, linear thinking-things-out. If, then, animal minds are modular more or less through and through, it may follow that animals are conscious of a lot less of their mental lives than we are of ours.
I don’t think such speculations are merely frivolous, or even that they are merely philosophical. But the questions they raise are very hard, and it’s permissible to doubt that we even know the right way to ask them. That being so, you might have thought that the sensible, civilised way to write a book about animal minds would be to lay out the geography of as many of these hard questions as one is able to, review such of the empirical evidence as is available, endorse such tentative conclusions as the case permits, and then close very gently. Budiansky, however, has chosen to write a sort of philippic. Animals don’t think; at least they don’t think consciously; at least they don’t think consciously and verbally; anyhow, they don’t think the way we do. Whatever it is that goes on in the animal mind ‘almost certainly’ has ‘nothing whatever to do with conscious thought as we [sic] know it’. Whence, one wonders, so much vehemence on a matter about which so little is known?
In fact, it’s pretty clear whence. Early in his discussion, Budiansky quotes a methodological principle called ‘Lloyd Morgan’s Canon’ which, like practically everybody in the behavioural science business, he enthusiastically endorses. ‘In no case may we interpret an action as the outcome of the exercise of a higher psychical faculty, if it can be interpreted as the outcome of the exercise of one which stands lower in the psychological scale.’ It is, Budiansky adds, ‘not merely unnecessary to invoke reasoning by analogy or other conscious thought processes’ – note the characteristic slip from ‘higher psychical faculties’ to ‘conscious’ thought processes – ‘to explain animal behaviour; it could often be shown on further investigation that such explanations were actually wrong.’ It is, I think, primarily his passion for Morgan’s Canon that turns Budiansky’s book into an obsessive hunt for flaws in arguments or experiments that suggest that animals might be smart. Much of the profession is in this rut. Clever Hans (a horse that seemed to be able to count but couldn’t) has become ‘a staple cautionary tale in animal behaviour research’. Which is to say: ‘We were taken in by Clever Hans, but by God we won’t make that mistake again.’ There’s an analogy from the Other Culture, where criticism, having been traumatically wrong to dislike the Fauves, now feels obliged not to dislike anything.
In fact, what’s so good about Morgan’s Canon? Certainly, we don’t want to endorse explanations of animal behaviour (or of anything else) that are ‘actually wrong’. But is there an a priori argument which, given the choice, would favour a theory that attributed a cognitive capacity to a psychically lower kind of process at the expense of one that attributed it to a ‘psychically higher’ one? This kind of question is important to scientific practice. For, although Budiansky hungers for a psychology of ‘how do we know for sure’ – that’s to say, for critical experiments that make a knockdown case either for animal thought or against it – science doesn’t often work that way, and behavioural science least of all. Usually, the case for an empirical theory is made bit by bit, out of a mix of methodological and experimental arguments, of which the former are rarely apodictic and the latter could always do with more controls. It’s generally the overall weight of considerations, not the conclusiveness of any one of them, that shapes scientific opinion. This is a less Olympian way of finding things out than one might like, but it’s about all we’ve got.
So, to repeat, why should Morgan’s Canon weigh so much? Why doesn’t Fodor’s Pop Gun tip the scales equally in the opposite direction? To wit: in no case may we interpret an action as the outcome of the exercise of a lower psychical faculty if it can be interpreted as the outcome of the exercise of one which stands higher in the psychological scale. As a matter of fact, in the history of psychology, violations of both these principles have frequently led scientists to adopt theories that later proved to be false. Thus, Clever Hans notwithstanding, it has turned out that a lot of animal behaviour once routinely assumed to be the product of ‘simple association’ actually relies on powerful (though probably modular) computing. The doctrine, once practically universal among psychologists, that the behaviour of human infants is mostly reflexive also turned out to be comprehensively wrong. Much of the last twenty years’ research on infant cognition had to be devoted to repairing the damage Morgan’s Canon did.
Maybe, when we’ve finally achieved a respectable psychology, its historians will decide that adhering to Morgan’s methodology caused behavioural scientists less grief than adopting mine would have done. Yet I’m not convinced that you could make that case on the evidence so far. I would say that the two methodological imperatives are both, and equally, silly. A priori, there is nothing to choose between the hypothesis that a certain behaviour has a ‘higher’ sort of mental aetiology and the hypothesis that it doesn’t. If you want to know which is true, you just have to go and look. I am therefore interested in why, while Morgan’s Canon is universally treated with great solemnity, mine is merely sniggered at.
It’s easy to get confused into thinking that Morgan’s Canon must be right because it is just a special case of the injunction to prefer the simplest theory among the candidates that are compatible with the data. Well, maybe it would be if it were, but it’s not. Here is how the confusion comes about. Step one is to take for granted that the ‘higher’ of two ‘psychical faculties’ is ipso facto the one that exploits the more powerful kinds of computation. (So, for example, solving problems by applying rules counts as a higher psychical faculty than solving them by rote because, whereas no problem can be solved by rote that can’t be solved by rule, some problems can be solved by rule that can’t be solved by rote.) Step two is to take for granted that, all else being equal, the simpler of two theories about the mental aetiology of a behaviour is the one that attributes it to the lower psychical faculty. If one supposes (as no doubt one should) that simplicity is a virtue, then Morgan’s Canon follows from these assumptions as a matter of logic.
But something has gone wrong here, surely. The game of tick-tack-toe has an entirely rote solution; one could simply memorise all the possible games: barring lapses of attention and the like, a player who did so would never lose. Yet that’s not the way I play tick-tack-toe. Nor would I consider programming a machine to play that way, since there are rule-based solutions that work equally well but take fewer computations to run and less memory to store. That being so, how could it follow from any general principle of scientific inference (such as the injunction to ‘prefer the simplest theory’) that, if I come across a friend (or a machine) playing tick-tack-toe, I’m required a priori to privilege the theory that he, she (or it) does so by rote?
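The arithmetic here can be made concrete. The sketch below (my illustration; nothing of the kind appears in Budiansky’s book) simply enumerates every possible game of tick-tack-toe by brute force. The number it prints is the size of the table a pure rote player would need to memorise; a rule-based player, by contrast, gets by on a handful of principles (take a win, block a threat, prefer the centre) and a few lines of logic.

```python
# Enumerate all possible tick-tack-toe games: each game is played out
# move by move until someone wins or the board is full.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for i, j, k in WIN_LINES:
        if board[i] and board[i] == board[j] == board[k]:
            return board[i]
    return None

def count_games(board=('',) * 9, player='X'):
    """Count every distinct complete game from this position."""
    if winner(board) or all(board):   # game over: a win or a full board
        return 1
    total = 0
    for i in range(9):
        if not board[i]:              # try each legal move in turn
            next_board = board[:i] + (player,) + board[i + 1:]
            total += count_games(next_board, 'O' if player == 'X' else 'X')
    return total

print(count_games())  # 255168 distinct games the rote player must memorise
```

A quarter of a million memorised games against a few rules: the point is precisely that computational ‘height’ and mechanical complexity come apart.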
Simplicity privileges the explanation that postulates the least elaborate causal mechanisms required to ‘save the appearances’. It therefore abhors Rube Goldberg machines. But Morgan’s Canon doesn’t follow from that, since it isn’t true that the ‘higher’ psychical faculties (the kinds that can perform the most powerful computations), are ipso facto the ones whose implementations are mechanically most complex. Nothing can compute with greater power than a Turing machine; but, as the computer scientist Joseph Weizenbaum once pointed out, all you need to make one of those is two kinds of pebble and an endless roll of lavatory paper. It sounds plausible, but it’s not true, that the simplicity of psychological theories varies with the computational power of the faculties that they postulate. Morgan just got this wrong.
Like so many behavioural scientists (especially cognitive neuropsychologists) Budiansky takes it for granted that ‘associative learning’ provides the anchor for Morgan’s Canon. Association is supposed to be intrinsically the simplest thesis about the mental aetiology of a behavioural capacity, and thus to provide the null hypothesis that an argument for animal minds would have to refute. Again and again, Budiansky’s tactic is to suggest about what seems to be an exercise of animal intelligence that it ‘is arguably nothing more than the product of associative learning’. Morgan’s Canon and ‘associative learning’ have taken in one another’s washing this way all through the history of modern psychology. But the jig is now up. Nobody knows whether there is such a thing as associative learning; or if there is, what it is, or whether it is simple, or what (if any) facts about cognition it could explain. Also, since nobody knows what ‘simple associative learning’ is, who knows what kind of neural circuits its implementation would require? Budiansky apparently thinks ‘neural networks’ might do the trick, but here his views are probably not consistent. He himself cites lots of examples where animal cognition appears to recruit mechanisms (for instance, timers and updatable memories in the case of navigation by dead reckoning) which, though of no great intrinsic complexity, are hard even to approximate with neural nets.
I guess there’s a moral: if you want to know how minds work, animal or otherwise, appeals to across-the-board methodological canons won’t get you far. Since none of the theories is methodologically privileged, there’s no short way with either the conceptual or the empirical issues; as I said before, you’ll simply have to go and look. Budiansky’s book isn’t a bad place to start. It’s full of interesting stuff and, for better or worse, it mirrors a lot of the muddles in the field. Read it, by all means; but, for heaven’s sake, don’t believe it.