Science is currently poised for its assault on the last two great peaks of ignorance. Having struggled with immense labour across the foothills of physics and biology, it has set up camp at the foot of cosmology and consciousness. Which will be the first to go? Will we understand the origin of the universe before we understand consciousness, or will consciousness be conquered first? If the problems are truly fundamental, they will be found to be united, and the assault on either will facilitate the assault on the other. As we unravel the simplicity of the web that makes up the material world and see how it emerged from nothing, so we can expect to expose the workings of that apotheosis of complexity, the response of the brain to its input and its own output, the response we call consciousness.
Yet mankind still remains oddly sensitive when confronted with the nakedness of understanding. Cosmologists strip off the shrouds that surround the creation and embarrass the religious, who would still prefer to see the event fully dressed in their obfuscations. Neurophysiologists – we need a better term, one with the power of connotation of ‘cosmologist’ – are faced by even sterner looks as they pry into our intellectual private parts, for some still suppose that, with spirit abolished from the cosmos at large, there is still a lair for it inside the labyrinthine complexity of the space between our ears. Strictures against exploring that inner space have come from left and right. Any research that challenges the immaterial nature of the soul, and the mental grip that the religious seek to exert, has for centuries been repaid with, at the least, the threat of death. There is also the underlying fear – who dare even now discuss it freely? – that discoveries about the intellect will have calamitous social consequences and justify social oppression. Let us stay uncomprehending, cry the faint-hearted. Changeux wonders – for the sake of wondering, not because he thinks so – whether it would be best to ‘de-cerebrate’ the social order and hide away our most splendid and dangerous organ. I find it extraordinary that in the Eighties a publisher should still think it necessary to tread cautiously, and to cotton-wool a book with remarks that it presents a ‘radical and controversial hypothesis: that there is no “mind” in man, nothing psychic, but rather only neurons, synapses, electricity, and chemistry’. Sadly there remain people who think differently. Had the book presumed otherwise, it would not have been worth the death of a tree. As it is, it develops the view cogently and usefully, and the benighted will share with me a thrill if they trouble to read it carefully. For them, it will be a thrill of fear.
Science, however, is daily becoming more capable of confronting these questions and elucidating them with its clever, careful, cautious public prying. But how do we get inside our own heads? There are two important stages, and they should not be confused. One is to enter the head physically, and to identify the structures we find there and their modes of action. The second is to seek to discover what it ‘actually’ means to be me. The first is the mechanistic phase of exploration, the one to which Changeux largely confines his account. That is wise, on two grounds. One is that we as yet know next to nothing of the second. The other is more important, is buried in Changeux’s thoughts, and is one with which I have great sympathy. It is that, maybe, mechanism is all. Maybe once we discover all there is to know about mechanism, the epiphenomenon of consciousness will be seen to be no more than that. Understanding mechanism is perhaps all there is to understanding function, consciousness being no more than function.
The brain is a puzzle, and the kinds of thought we entertain when thinking casually about it are essentially those adopted through the ages. It even took time for the realisation to grow that fancy lay not in the heart but in the head. The Mesopotamians, the Hebrews and Homer identified thought with the heart, while the brain, a mushy grey-cream mass to the casual eye, was regarded as an inconveniently heavy and ill-placed gland. Plato and Galen regarded the head as the seat of rationality, but Aristotle, that object lesson in the danger of armchair brains, revived the Homeric view, and for centuries taught us to think of the brain as no more than an elaborate cooling-plant for the organism.
The emergence of our modern views can be traced back to the readiness of the prison authorities in Alexandria to lend their condemned criminals for vivisection. Herophilus was effectively state executioner, but with the scalpel, not the sword. From his work he inferred the distinction between the cerebellum and the spinal cord, identified the cavities of the brain, and established that its cortex is extensively folded, that nerves are different from blood vessels, and that they originate in the head and not the heart. He and his immediate successors also distinguished two types of nerve – what we now call motor and sensory nerves – running between hand and brain. No longer was there any doubt that the head could communicate with the far-flung corners of the body.
The Church Fathers, particularly Nemesius, Bishop of Emesa, and St Augustine, contributed a mode of thought that still remains useful in modern discussion of the brain: they identified certain regions of the brain as allied with particular functions. Although simplistic, theirs was an attitude that at least provided a framework for understanding, and in a more elaborate form their regionalisation of the brain is used today, even though its meaning is becoming ever more sophisticated, delocalised and abstract.
At the beginning of the 19th century, Franz Joseph Gall struck a more modern attitude with his rejection of simple introspection as a reliable route to knowledge of the brain, and thought that he had found an experimental path inwards. He opened the way to some kind of progress by rejecting the concept of a rational soul, together with the dualism of Descartes, and regarded man as composed of a great number of moral and intellectual faculties. Lacking direct routes into the brain with which to localise these hypothetical centres, he had recourse to the information carried to its surface, much as geologists do today when seeking to discover the internal structure of the earth, and founded phrenology. Studying the brain in this way is akin to seeking to discover the mechanism of a very complex supercomputer by feeling its shape from the outside. Some information can be obtained, and certain gross functions may be localised, but we really must get inside the brain, and identify the details of its circuitry, if we are to make much progress. Unlucky Gall: his materialistic approach led to his being prohibited from teaching in Austria, to his persecution by the Church, and to his exile to Paris, where he provoked the animosity of Napoleon.
Although largely fanciful, Gall’s phrenology opened up the way to experiments. They led to the correct localisation of certain functions by observing the consequences of accidental lesions, and gradually permitted maps to be drawn up of areas correlated with certain activities, such as language. Now our techniques are much more powerful, and using positron emission tomography we can observe the regions of the brain that become more active during different kinds of tasks. We can look at the inside of people’s heads as they listen to a mystery story on the radio and see how it differs from the activity there when they listen to a concerto. Yes, the images are crude, for they are coarsely grained and averaged over the time it takes to make the scan: but can we do all this now and not expect to do more in the future? Changeux wonders whether it is possible to find a trace, an engram, that persists between two evocations of a mental image. If one can be found, then we shall have an extraordinary microscope into the organisation of thought, and will be able to discover to what extent memories and perceptions are global, rather than local, outcomes of activity.
Alongside the development of our understanding of the general lay-out of the brain’s architecture there developed our understanding of its anatomical atoms. A boiled, meninges-free brain was first subjected to a magnifying glass by Marcello Malpighi in 1685. More detailed understanding of the microstructure came with the development of microscopy and what we take to be its current apotheosis, electron microscopy. The nerve cell first appeared in the scientific literature in 1824. Then its single axon leaving the cell body was found, and finally its multiple branching dendrites.
That picture was established by 1865, and like all good pictures it raised further questions. In particular, was the brain a continuous network, a neuronal net, or was it composed of a huge collection of contiguous but physically independent cells? The neuronal-net theory was still sufficiently alive in 1933 for Ramón y Cajal to consider it necessary to publish an authoritative review to lay it to rest. Then, in 1955, came the final proof of the contiguous-neuron picture, for electron microscopy showed conclusively the presence of a gap, about 20-millionths of a millimetre wide, between the end of one neuron and the membrane of another. The synapse had at last been seen. Now there were two major problems: how the neurons were organised, and how they communicated.
Our understanding of the organisation of neurons is still very tentative, but some general principles are beginning to emerge. Most fundamentally, gone are any hopes of discovering types of neuron or mechanism in the cortex that are specific to humans and absent from other species. Our special abilities are a consequence of the organisation of enhanced aggregates of standard types of cell, not of cell types peculiar to ourselves. Moreover, the number of neurons per unit surface area of the cortex – about a hundred and forty-six thousand per square millimetre – is very similar in all mammalian species, so that at that level, too, we are not special.
The struggle to disentangle our 30 billion cortical neurons, each with about ten thousand synaptic connections, depends on drawing a chemical Ariadne’s thread through the network of interconnections. This is becoming possible through the agency of the homely-sounding horseradish peroxidase, and we are beginning to trace the routes that some impulses take. The complex message we read has a simple, common component: non-linearity. That is, messages out from the cortex are largely channelled back into the cortex, so that a result of the processing goes back in to contribute to the input. Axons from the cortex in one region feed back into the cortex in another, and also feed down into the thalamus – itself an entry point to the cortex. Only some axons go back out into the spinal column and lead to the action of motor responses and hence to perceptible behaviour.
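The scale of that struggle is worth pausing over. Taking the figures quoted above at face value:

```python
# Back-of-the-envelope count of cortical wiring, using the figures
# quoted in the text (30 billion neurons, ~10,000 synapses each).
neurons = 30e9                  # cortical neurons
synapses_per_neuron = 1e4       # synaptic connections per neuron
total_synapses = neurons * synapses_per_neuron
print(f"about {total_synapses:.0e} synaptic connections to disentangle")  # about 3e+14
```

Some three hundred million million threads for Ariadne to follow, which is why tracing classes of connection, rather than individual ones, is the only plausible strategy.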
With all that feedback, with input affecting input and output affecting output, is it any wonder that our response to stimuli is so complex? More importantly, does this suggest that there is no hope of comprehending the activity of the cortex? I am inclined to the view that it does not. I suspect that we will in due course find classes of connection, and avoid having to trace individual ones. Moreover, the situation is strongly reminiscent of the newly emerging science of complexity theory, in which it is acknowledged that the outcome of a process may be unpredictable even though its input and its rules of transformation are known. Complexity theory deals with processes of the kind made familiar by the computer game Life, which is also highly non-linear yet intrinsically simple. Just as the evolution of an arbitrary pattern in Life is in general unpredictable, so may be the thought or behaviour that stems from the simple, non-linear transformations that go on in our cortex. Our privacy may be invaded, but our creativity will remain unpredictable.
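The whole of Life's rules fit in a dozen lines, which is what makes its unpredictability so striking. A minimal sketch, assuming the standard Conway rules (the glider pattern below is my example, not Changeux's):

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Life: live is a set of (x, y) cells."""
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next step if it has 3 neighbours, or 2 and is already alive
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The 'glider': five cells whose pattern reappears every four generations,
# shifted one cell diagonally -- motion nowhere mentioned in the rules.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = life_step(g)
print(g == {(x + 1, y + 1) for x, y in glider})  # → True
```

Knowing the rule perfectly tells you nothing, in general, about where an arbitrary pattern will be a thousand generations hence; the only way to find out is to run it.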
Science, having discovered the synapse, has also now largely unravelled the mechanism of nerve propagation. In the 17th and 18th centuries nerve propagation was thought of as perhaps either pneumatic or hydraulic (it is a common feature of an age to ascribe its engineering to nature). Then, with electricity in the 19th century came the realisation that nerve transmission was electrical. But is it? In short, not really, and certainly not if we take electricity to mean transmission of electrons. As in so many things (or so we arrogantly but perhaps justifiably assume), enlightenment came in the 20th century, and it came with two major discoveries.
During this century we have managed to get inside neurons and can now observe their function from within. Hodgkin and Huxley established that the universal mechanism of transmission along nerve fibres was ionic, and showed that it depended on the existence of two kinds of channel in the nerve membrane: one permeable to sodium ions (positively-charged atoms of sodium) and the other to potassium. The transmission of a nerve impulse was then (although at the time they did not have the language) a soliton, like the Severn bore, a solitary one-peaked wave of changing voltage as the sodium channels opened and the sodium ions flooded in.
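The Hodgkin-Huxley description can be made concrete in a few lines. The following is a minimal numerical sketch, assuming the classic squid-axon parameter set and a crude Euler integration; it is an illustration of the mechanism, not a research-grade model:

```python
import math

# Classic Hodgkin-Huxley squid-axon parameters (mV, ms, mS/cm^2, uF/cm^2)
C_m = 1.0
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

# Voltage-dependent opening/closing rates of the gates
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, dt=0.01, t_max=50.0):
    """Membrane voltage under a constant injected current (Euler steps)."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # resting state
    trace = []
    for _ in range(int(t_max / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)   # sodium floods in
        I_K  = g_K * n**4 * (V - E_K)         # potassium flows out
        I_L  = g_L * (V - E_L)                # leak
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        trace.append(V)
    return trace

trace = simulate()
print(f"peak voltage: {max(trace):.1f} mV")  # spikes overshoot well above 0 mV
```

The solitary wave appears of its own accord: a modest injected current drives the voltage over threshold, the sodium channels open regeneratively, and the single-peaked spike runs its course before the potassium channels restore the resting state.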
Hodgkin and Huxley effectively ‘electrified’ the neuronal network and allowed us to express it quantitatively: however, it must never be forgotten that an electric circuit is only an analogy, and that the actual behaviour depends on ions rushing through a membrane. The analogy is close, and the network turns out to have many of the properties that we expect of electric circuits, including the ability to act as an oscillator. The sea slug Aplysia is, like those Alexandrian prisoners, particularly amenable to studies of this kind, for it has a highly dispersed nervous system, with no brain or spinal column, but just a series of ganglia around its body, each containing a few thousand neurons, some of them so large as to be almost visible to the naked eye. Whole circuits can be discerned, and pulsing, oscillating behaviour observed in them, as in the circuits of a radio or TV. These are chemical oscillators, and, as if by chance, chemists have been developing their own theories of such behaviour just in time to see that they are needed to help us understand how we think. The common feature of the oscillating cells is that they seem to depend on only two types of ion channel: one for potassium and the other for calcium.
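A standard minimal description of just such a two-channel (calcium-in, potassium-out) oscillator is the Morris-Lecar model, originally fitted to barnacle muscle. The sketch below, with a conventional textbook parameter set, is my illustration rather than Changeux's example:

```python
import math

# Conventional Morris-Lecar parameters (mV, ms, mS/cm^2); the drive
# current I_ext is chosen inside the oscillatory window of this regime.
C = 20.0
g_Ca, g_K, g_L = 4.4, 8.0, 2.0
V_Ca, V_K, V_L = 120.0, -84.0, -60.0
V1, V2, V3, V4 = -1.2, 18.0, 2.0, 30.0
phi, I_ext = 0.04, 120.0

def m_inf(V): return 0.5 * (1.0 + math.tanh((V - V1) / V2))  # fast Ca gate
def w_inf(V): return 0.5 * (1.0 + math.tanh((V - V3) / V4))  # slow K gate
def tau_w(V): return 1.0 / math.cosh((V - V3) / (2.0 * V4))

def simulate(dt=0.05, t_max=1000.0):
    V, w = -60.0, 0.0
    vs = []
    for _ in range(int(t_max / dt)):
        I_Ca = g_Ca * m_inf(V) * (V - V_Ca)  # calcium flows in
        I_K  = g_K * w * (V - V_K)           # potassium flows out
        I_L  = g_L * (V - V_L)
        V += dt * (I_ext - I_Ca - I_K - I_L) / C
        w += dt * phi * (w_inf(V) - w) / tau_w(V)
        vs.append(V)
    return vs

vs = simulate()
print(f"voltage range: {min(vs):.0f} to {max(vs):.0f} mV")
```

In the oscillatory regime the voltage does not settle to rest but swings rhythmically, just as the pulsing observed in the sea slug's ganglia does, with only those two channel types doing the work.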
Transmission between neurons is also becoming better and better understood. The school that took the transmission to be electrical (that is, by ion transfer) was long locked in combat with the school that favoured chemical transmission. Now, in a happy compromise, we know that both types occur. Ionic transmission occurs across some tightly bonded synapses involved in the oscillatory, electrical-circuit behaviour observed in the global time-keeping of closely coupled cells: here the neurons are almost plugged together into rapidly responsive circuits. Much more flexible and varied, however, are the synapses that make connections between neurons by way of chemical substances. These neurotransmitters, which include substances like acetylcholine, are held in little bags (vesicles) at the pre-synaptic membrane, are released when the appropriate signal arrives down the axon by the sodium-potassium ion mechanism, and flood into the synaptic cleft. They drift across to the post-synaptic membrane, where they are recognised as keys are recognised by locks: they fit into protein molecules, distort their shape, open an ion channel, and fire another sodium-potassium signal down the dendrite. The fate of that signal is to be integrated with other signals coming into the cell along similar routes, until the entire assembly of messages induces the cell to fire a signal down its own axon.
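That integrate-to-threshold picture is easy to caricature in code. The sketch below is a toy of my own devising, not Changeux's model: a leaky cell that sums weighted synaptic inputs each step and fires when their accumulated effect crosses a threshold.

```python
import random

random.seed(1)

THRESHOLD = 1.0     # firing threshold (arbitrary units)
LEAK = 0.9          # per-step decay of the membrane potential
N_SYNAPSES = 10
# Hypothetical synaptic strengths for the toy cell
weights = [random.uniform(0.05, 0.3) for _ in range(N_SYNAPSES)]

def step(potential, spikes_in):
    """Integrate one step of presynaptic input; return (new_potential, fired)."""
    potential = potential * LEAK + sum(w for w, s in zip(weights, spikes_in) if s)
    if potential >= THRESHOLD:
        return 0.0, True    # fire down the axon and reset
    return potential, False

potential, fired_count = 0.0, 0
for t in range(1000):
    # Random presynaptic activity standing in for the assembly of messages
    spikes_in = [random.random() < 0.2 for _ in range(N_SYNAPSES)]
    potential, fired = step(potential, spikes_in)
    fired_count += fired

print(f"output spikes in 1000 steps: {fired_count}")
```

The cell's output is not a copy of any one input but a function of their whole weighted, decaying sum, which is the point of the integration described above.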
Many neurotransmitters are known, and they occur in different concentrations in different regions of the brain. They raise one question and answer another. The question they raise is this: with so many modes of communication between neurons, will we ever be able to model their action using the simple monomodal electronic transmission of current computers? I see no deep difficulty here, for we can always contrive an electronic model of complex multimodal neurotransmission, and should not think ourselves thwarted by the fact that nature found it easier to contrive multimodal, chemical processes (selective pressures not having the resources necessary to develop a perhaps initially more cumbersome electronic mode). The question they answer is the point Changeux emphasises: no transmitter, no receptor and no ion channel have been found that are specific to humans. The mechanisms of neurotransmission are uniform throughout the peripheral and central nervous systems, and are the same in the cortex of a human as in the electric organ of a torpedo fish. There are no special cells in humans, and within cells there are no special mechanisms. Human cerebration depends on the same components as the sea slug’s physical movement.
Having almost reached the bedrock of consciousness, we must return to the surface manifestations, which Changeux does by exploring what it means to have mental images. Here his thoughts are diametrically opposed to those of Bergson, who wrote that ‘the nervous system has nothing in the way of an apparatus to make or even to prepare representations’. Changeux holds, and adduces evidence, that, on the contrary, the human brain contains representations of the outside world in the anatomical organisation of its cortex, and that it is also capable of building representations of those representations and using them in computations. That is, he adopts a structuralist point of view, in which mental images are at once topological dispositions of neurophysiological activity and dynamical entities. He treats mental objects as global physical entities characteristic of a transient but presumably sustained activity in large populations of neurons in several specific cortical areas. Storage, and memory, then become a kind of deep Darwinism, in which only those transient, global and perhaps non-local neuronal activities are retained that survive selection by comparison with physical reality as imported by the senses.
Here the richness of the organisation of the brain becomes allied again with the problem of complexity theory. For how do brains grow and yet turn out to be all so similar? Why can you and I have similar thoughts? Changeux also explores the question of complexity in his chapter on epigenesis, where briefly I part company with his exposition. By ‘complexity’ Changeux appears to mean ‘an order of which one does not know the code’. I think it means exactly the opposite. I think that it means an order of which one does know the code, but cannot predict the outcome of an application of that code.
That said, I think Changeux comes back onto the tracks as he examines epigenesis by the selective stabilisation of neurons. Epigenesis is the emergence of neuronal networks that operate at the level of whole groups of neurons, allowing only those groups to survive that correspond to the signals received through the senses – and, I would add, that lead to self-consistency in the signal-processing that occurs in the network. The principal evidence here seems to be that the maturing of individuals is accompanied by a regression in the number of synapses. It appears that the genes allow an exuberance in the number and variety of synapses, but that only the viable connections are burned in by experience and learning. Learning is death to the unexercised. This may correspond to the imposition of the transformational rules of one language on an individual’s deep linguistic structure, and may even account for the difficulty an older individual has in learning a second language, for the alternative synaptic network has been burned away by the initial learning process. We become conscious of the world by burning out the synapses that lead to mental images inconsistent with the world outside. Maybe, once those inconsistencies have gone, we can dream internally yet profitably.
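The selective-stabilisation idea can be caricatured in a few lines. The sketch below is my own toy, not Changeux's model: an exuberant surplus of random connections, of which only those joining cells exercised together survive.

```python
import random

random.seed(0)

N = 40                                    # neurons in the toy network
# Exuberant initial wiring: each ordered pair connected with probability 0.5
synapses = {(i, j) for i in range(N) for j in range(N)
            if i != j and random.random() < 0.5}
initial_count = len(synapses)

# 'Experience': only a subset of cells is ever co-activated by the senses
active = set(random.sample(range(N), 10))

# Stabilise synapses whose two cells are exercised together; the
# unexercised connections regress -- learning is death to the unexercised
synapses = {(i, j) for (i, j) in synapses if i in active and j in active}

print(f"{initial_count} synapses before experience, {len(synapses)} after")
```

The surviving network is a small, self-consistent remnant of the initial exuberance, which is the sense in which maturation is accompanied by a regression in the number of synapses.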
The book is a joy. It is written in an engaging, stylish way that captures attention at every twist of the complex but convincing argument. It is a slow but rewarding read, and the author has been well served by the lucidity of his translator.