How can meat think? What kind of thing, or process, might thinking and problem-solving be, such that physical stuff, nicely organised, can make it happen? More generally, how does order spontaneously arise in a physical universe? And what kinds of conceptual bridge link mathematics, physics and the biology and chemistry of life and mind? The interests of Alan Turing were remarkably various. In the 21 essays gathered here by Christof Teuscher, there is the mathematical biologist in search of new explanations of the emergence of patterns in nature; the proto-connectionist investigating neurally-inspired models of learning and cognition; the code-breaker, whose wartime contributions were crucial to the Allies’ success; and the fledgling roboticist, whose ideas concerning machines’ use of active learning to develop human-like intelligence are increasingly influential. There is also the putative father of so-called hypercomputation, or computation that soars beyond the (well-defined) limits of Turing’s own formal account of computability. There is some hot controversy in the book over the technical possibility of hypercomputation, and the (shaky) historical case for Turing’s own interest in such alternative models. Finally, there is Turing the man: gay, criminalised, condemned to receive injections of oestrogen, probably suicidal, and treated as a security risk by the countries whose freedom he helped to ensure.
Born in London in 1912, Turing read mathematics at King’s College, Cambridge, and became a fellow of the college in 1935, aged 22. There he invented what is now the standard abstract description of a digital computing device. Not usually a real piece of equipment, the Turing Machine offers a way of thinking about mechanical problem-solving. It comprises a (potentially infinite) paper tape and a simple processing unit. The processing unit has a finite set of internal states, is capable of recognising a finite set of symbols, and can write and erase symbols on the tape according to a set of instructions. In short, the Turing Machine is a device capable of reading and writing a limited set of symbols, in ways determined by a set of rules, onto a static but alterable medium. The idea allowed Turing to give a precise definition of the class of algorithmically solvable problems. The word ‘algorithm’, derived from the name of the ninth-century Muslim mathematician al-Khwarizmi, refers not to a problem but to a procedure: a problem is algorithmically solvable if it yields to the thought-free application of rules, that is, to the mechanical application of a set of instructions to some initial encoding of the problem. Turing argued that this class of problems is precisely the class that a Turing Machine can solve. Not all intelligible problems fall into this class, but if you can show that a Turing Machine can solve a problem, you can at least be assured that the problem is mechanically soluble and that there is no need for any real understanding to intervene in the process.
Turing then showed that if a problem was soluble by one such machine, it could be solved by a Universal Turing Machine. This super-machine could take the specification of some other machine as input, and be programmed to act like that machine. Thus, if a problem is algorithmically soluble at all, it is soluble by a suitably programmed Universal Turing Machine. It is now widely agreed that these abstract Turing Machine descriptions capture all the essential properties of real digital computing devices. What the Turing Machine tells us is that the behaviour of devices falling into this class will always turn out to be no more fundamentally intelligent than, say, water running down a slope.
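The anatomy just described, a finite rule table driving a head over an unbounded tape, is easy to make concrete. Below is a minimal sketch in Python; the binary-increment machine and all the names in it are my own illustration, not Turing’s notation. Note that the rule table is passed to the simulator as ordinary data, which is the germ of the universal machine: one fixed program that, handed another machine’s specification, behaves as that machine would.

```python
# A minimal Turing Machine simulator. The rule table maps
# (current state, symbol under head) -> (symbol to write, head move, next state).
# This particular table increments a binary number; '_' marks a blank cell.
rules = {
    ('carry', '1'): ('0', 'L', 'carry'),   # 1 plus carry: write 0, keep carrying
    ('carry', '0'): ('1', 'L', 'halt'),    # 0 plus carry: write 1, done
    ('carry', '_'): ('1', 'L', 'halt'),    # ran off the left end: new leading 1
}

def run(rules, tape_str, state, head):
    """Run a machine (given as a rule table) on a tape, head starting at `head`."""
    tape = dict(enumerate(tape_str))       # a dict stands in for the unbounded tape
    while state != 'halt':
        symbol = tape.get(head, '_')
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    return ''.join(tape[i] for i in sorted(tape)).strip('_')

print(run(rules, '1011', 'carry', 3))      # binary 11 + 1 -> '1100'
```

Swapping in a different rule table yields a different machine, with no change to the simulator: the program has become data, which is precisely the move the Universal Turing Machine formalises.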
It seemed clear to Turing, as it does to most neuroscientists and cognitive scientists today, that this is the kind of thinking needed to discharge the mystery of mind. If mind is to be explained, rather than simply accepted as a property of matter – neural tissue – we need to understand how mindless (and consciousness-less) stuff, nicely organised, makes for mindful behaviour and conscious awareness. Turing didn’t have much to say about consciousness, though he did say plenty about intelligent behaviour. One popular way of depicting what Turing did is to say that he showed us how rationality itself can be, in the sense just described, mechanically possible. This is the view favoured by, for example, Jerry Fodor. The story goes like this. Formal logic shows us that we can preserve truth simply by attending to form, not meaning. If we keep to certain rules we will never infer a falsehood from true premises, even if we have no idea what either the premises or the conclusions are about. Turing, as Fodor likes to remind us, effectively showed that for all such formally specifiable routines, a well-programmed machine could replace a human being. At which point, the mere assertion of materialist faith (‘We are just physical devices’) morphs into an investigation of reason-governed behaviour. The idea, enshrined in the research programme of classical Artificial Intelligence, was that reason could be mechanically explained in terms of computational processes performed on symbols.
At this point, the idea of using form to do duty for meaning, bolstered by the formal clout of Turing’s early work, was converted into a more specific, and more restrictive, vision of natural intelligence itself as consisting in the internal reading, writing and transposing of symbols ‘written’ in neural code. This was the view that found full expression in early work in Artificial Intelligence. Considered as a just-so story exposing the mechanical possibility of reason-guided thought, this early approach had a satisfying directness. It explained reasonable or sensible thought-transitions (‘they enjoyed the meal, so they probably left a tip’; ‘it’s raining, so I’ll take an umbrella’) by supposing that each thought has an inner symbolic form, and that these forms share aspects of the structure of thoughts as expressed in words. As a result, form-sensitive processes and encodings of potentially relevant information can regulate processes of inference in ways that respect the rational relations between thoughts.
The trouble is that all this is built on an uncomfortably narrow view of real-world rationality. Even if we grant that in some way or other, just as Turing believed, sensitivity to form, not meaning, is at the root of all intelligent behaviour, and that the limits of Universal Turing Machine computability are the limits of computability itself, the question of how the activity of real brains supports intelligent behaviour remains wide open. The idea of symbol processing makes for the easy explanation of simple inferences (from ‘it’s raining’ to ‘I’ll take an umbrella’), but fails to do justice to other classes of mental ability, including most forms of skilled behaviour (from driving a car to composing an essay), emotion, motor control and learning.
Moreover, as Fodor is the first to admit, the resulting (call it Fodor-Turing) model of real-world rational inference is uncomfortably local. It’s great for explaining when you might decide to use an umbrella, but not so good when more globally sensitive inference is required: cases, for example, in which the best explanation for some event might be hidden anywhere in the knowledge base of a system, a knowledge base so large that it can’t be searched exhaustively. Since even the decision to use an umbrella in the rain is potentially sensitive to countervailing information coming from anywhere in the knowledge base, we are left with a model of mechanical rationality which has depressingly little to say about most forms of genuine but non-deductive reasoning. In the end, the Fodor-Turing model of mechanical reason works best, as Fodor allows, in the domain of ‘informationally encapsulated systems’ – typically, perceptual systems that process a restricted range of input signals but are insensitive to ‘top-down’, or knowledge-driven, inference. Hardly the seat of reason.
There are lots of things we might try in an attempt to remedy these shortfalls: the use of ever more subtle symbolic descriptive languages, or neurally-inspired (‘connectionist’) architectures, or the study of ‘infant’ robots with a capacity for learning and action. But it’s important to recognise that the reasoning that inspired Turing’s own formalisations was that of clerks or civil servants, armed with books of rules and well supplied with pens, paper and erasers. In his design for the Automatic Computing Engine, Turing says as much, noting that ‘the class of problems capable of solution by the machine’ are ‘those problems which can be solved by human clerical labour, working to fixed rules, and without understanding’. In a lesser-known manuscript of 1948 entitled ‘Intelligent Machinery’, he comments that ‘a man provided with paper, pencil and rubber, and subject to strict discipline, is in effect a universal machine.’
The Universal Turing Machine is thus a model not of what goes on inside the individual brain, but of the kinds of symbol-based problem-solving that people typically perform in the real world: the process of breaking a problem up into parts, applying rules, inscribing the results onto a static external medium (paper), and repeating the procedure again and again. It is not (and, as Andrew Wells’s revealing essay in this collection shows, it does not seem likely that Turing ever intended it to be) a perspicuous image of the operation of the brain itself.
On the one hand, then, we shouldn’t be fooled into mistaking the basic apparatus of the Turing Machine for an explanation (at any useful level of abstraction) of the way biological brains support rational thought. On the other hand, we should be alert to the importance, for any such account, of human-constructed symbols and stable, non-biological media: speech, pen, paper and all the artefacts that aid the activity of the biological brain.
Put another way, the project of understanding how human reason is mechanically possible is frequently misconstrued as being that of understanding what is special about the human brain. No doubt there is something special about our brains. But understanding our peculiar qualities as reasoners, thinkers and knowers of our environment requires a broader perspective, an understanding of the contributions of biology, culture and technology, as well as the dense, reciprocal patterns of influence between them. Most human cognitive achievement probably flows from the combination of fluid, highly interactive, fundamentally unstable inner processes with anchor points in stable external symbolic systems, orchestrated by agreed social practices.
If he were alive today, my guess is that Turing’s interest in learning and the possibility of baby robots would combine with his later interest in morphogenesis – that is, in biological growth and the origins of form and pattern. He would remind us that the baby brain is not a fixed computational resource awaiting subtle tuning by a lifetime’s learning, but rather a remarkably plastic part of a complex evolving ecology of growth and change. The 390 grammes of brain we are born with has grown, by the time we reach 15, to 1315 grammes. During that period, neural structures and connectivity develop, supporting increasingly sophisticated capacities of perception, action and symbol manipulation. This combination of continuous learning in a densely structured context with long-term physical growth and change affords a range of cognitive and computational opportunities whose nature and potency we have yet fully to appreciate, let alone reproduce in an artificial system. To track and model it will require technical and conceptual innovation on a near-cosmic scale. Just the kind of siren call Alan Turing could never resist.