Brain science has historically been populated by corpse diggers, pig torturers, vivisectionists, logicians, entertainers, quacks, and quiet careful people who spend a great deal of time with fruit flies. Indeed it hasn’t always been taken as read that the brain was responsible for much at all. Thinking and feeling were most often attributed to the heart, though the second-century physician Galen had other ideas. He set up a travelling show in which he demonstrated that if you squeeze the heart of a live pig, the animal keeps whimpering; if you squeeze the brain, it becomes unconscious; once un-squeezed, consciousness is regained. (Galen was kinder to monkeys: he said vivisection shouldn’t be done on them because it was too distressing to look at their faces.) But his idea that the brain was a control centre had little lasting influence. Evidence and experiment were not yet fashionable modes of pursuing truth. After the work of the 16th-century Flemish anatomist Andreas Vesalius, the belief that the brain controlled movement and thinking began to take hold. It wasn’t that Vesalius’s anatomy studies were so novel, but his mastery of the new (to Europe) technology of printing brought to the attention of many something that had been only sporadically known: there were no remarkable structural differences between the brains of humans and those of goats, cows, cats or birds. When Descartes, a hundred years later, looked at the brain of an ape and the brain of a human, he saw only one notable anatomical difference: a pine-cone shaped gland, the size of a grain of rice. That, he decided, must be the seat of the soul. His brain research wasn’t published during his lifetime. He knew it would cause him too much trouble.
In the 19th-century equivalent of Galen’s travelling shows, a curious public looked on as electricity was applied to the bodies of dead animals. They twitched terrifyingly. (Mary Shelley attended such a demonstration before writing Frankenstein.) At some demonstrations the heads of decapitated dogs convulsed. At others, cicadas sang. The effects of electricity on human corpses were also demonstrated, or performed. Fraudsters took their turn in the brain carnival culture: one claimed to have made a live cat out of voltaic batteries. Electricity, the thinking went, was the mechanism by which the brain did its work. But no one had shown that electricity travelled down a nerve, or explained how an electrical charge might be generated without external stimulation. Some thinkers, even some scientists, held that there could be no natural explanation for the human mind. The British naturalist Alfred Russel Wallace argued that evolution could never make a human mind – even though he was a strong supporter of evolution. He came to his conclusion after attending a séance, during which the medium floated above the table. He took this as evidence of supernatural forces.
The faltering anabasis towards knowledge is all the more uncertain when the knowledge pursued is knowledge of the self. But the little tricks our minds play on us have proved to be fine doors into the way our minds work. Why, asked the 19th-century German physician Hermann von Helmholtz, do we see coloured patterns when we press on our closed eyes? Why do amputees still feel their missing limbs? Why is there a blind spot in our vision, and how is it that we don’t perceive it? The brain, he argued, doesn’t simply record or transcribe perceptions. It draws inferences, it interprets. When it receives signals along neuronal pathways that were once connected to a limb, it presumes (incorrectly) that the limb is still there. When it lacks information, it fills in the gap – as it does in a blind spot, which corresponds to the part of the retina where the optic nerve attaches, leaving no room for photoreceptors. But what is ‘it’ and how do these interpretations take place?
In The Idea of the Brain, Matthew Cobb topples some common fallacies. He says the use of the terms ‘left brain’ and ‘right brain’ to refer to thinking styles is ‘profoundly mistaken’. The description of ‘mirror neurons’ – motor cortex neurons that fire not only when a monkey carries out a particular action but also when it sees another monkey doing the same thing – as mind-reading cells, or as demonstrating the existence of an element of empathy, is described by Cobb as untrue. Same for the old idea – a beautiful one – that our inner brain is ‘reptilian’ and ancient and responsible for our most base behaviour, and that childhood nightmares of monsters may be traces of ancestral encounters with dinosaurs. (Like the idea that the heart is the centre of emotion, this seems to me a poetically precise description of a real phenomenon.) The connection of dopamine to addictive behaviour – Cobb cites a Facebook founder saying the site was meant to be addictive, to ‘give you a little dopamine hit’ – is ‘nonsense’ and ‘neurobollocks’.
According to Cobb, our knowledge of the brain (and the nervous system, the brain’s communication network to the body) is at once tremendous and vanishingly slight. Researchers at 29 separate institutions have spent years making a detailed, synapse-level map of the neural connections in the brain of a single maggot. We are nearing the point of really understanding the nervous system of the stomach of a crustacean – but we aren’t there yet. At the same time, technologies exist that allow paralysed patients to move robotic arms with their thoughts. It feels at once like the year 1900 and the year 3000.
Chance has played a large role in producing clues to the mysteries of mind. In 1943, for example, a Swiss chemist took a dose of a drug he was developing to help with breathing. When Albert Hofmann cycled home afterwards, he saw kaleidoscopic swirls. Later, he felt very anxious. That was LSD. The study of brain chemistry was off to a colourful start. Around the same time, a French military surgeon was working with a drug company to develop an antihistamine. The drug turned out to be only a weak antihistamine, but it had an unexpected and welcome side effect: it cleared mania and psychosis. This was Thorazine. An early story associated with it tells of a patient who had been psychotic for years, but then, after a few weeks of medication, was able to leave the hospital and dine with his doctors. It was noted in trials of another drug, intended to treat tuberculosis, that it left the patients in a much sunnier mood (though it failed to treat their TB). This was iproniazid, a drug from a class known as MAO inhibitors, the antecedents to the SSRIs and SNRIs widely prescribed today. There are many such stories.
These drugs must at first have appeared miraculous. And they illuminated, to a degree, the role of neurochemicals in the brain. Yet, as therapies, each of them later proved seriously problematic. Iproniazid causes liver damage; Thorazine sedates patients, and has a particularly bad side effect called tardive dyskinesia, in which one loses control of bodily movements. LSD is making a therapeutic return, but that may well have more to do with the fact that people find it fun. A full understanding of the way these drugs work remains elusive. Scientists can describe the receptors they interact with, but that is a narrow, albeit substantive, knowledge.
Cobb isn’t convinced by what is called ‘the chemical imbalance theory of depression’: he points out that we still don’t understand, for example, why mood symptoms tend to be alleviated only after an SSRI has been taken for weeks, while anxiety lifts more quickly. What does this suggest about the role serotonin depletion plays? Cobb doesn’t think we know nothing of neurochemistry; he just emphasises how modest our knowledge is. He quotes Thomas Insel, the former head of the National Institute of Mental Health, who said after thirteen years in the job: ‘I don’t think we moved the needle in reducing suicide, reducing hospitalisations, improving recovery for the tens of millions of people who have mental illness.’
Cobb’s intention isn’t to diminish the insights of science, or to cast doubt on its stochastic approach. Rather he aims to describe the field accurately. ‘Researchers like myself,’ he writes,
working on systems far removed from mental health disorders, can recognise this immense gap between our fundamental knowledge of how the human brain works and the distant prospect of any effective treatment, and then turn back to the day job (in my case, studying the maggot nose). Things are not so easy for physicians faced with someone in desperate need, or, above all, for patients or their families (I am a member of such a family).
That’s quite a pair of parentheses. They echo the mixture of reserve and attentiveness evident throughout the book.
Descartes thought the brain functioned as a system of hydraulics, much like the statues he saw in the gardens of Saint-Germain-en-Laye. Later thinkers also saw in the brain what they saw around them: electricity, or magnetism, or Boolean logic, or Bayesian logic, or computers that question us about our mothers. Androids do dream of electric sheep. What’s remarkable is that our body of knowledge grows out of these dreams, albeit erratically.
The more recent history of stem cell research in relation to brain injury offers an example of stumbling yet real progress – though not the progress that was expected. A stem cell is a ‘pluripotent’ cell – a cell that can develop into any kind of cell. Stem cells in our bone marrow regenerate blood cells every ninety days; a liver can similarly regenerate and heal itself. But, until recently, scientists didn’t think stem cells existed in the brain. In retrospect this is curious, because the existence of such cells had been demonstrated in 1965. That work wasn’t taken seriously for decades: it wasn’t until 2001 that it was repeated and confirmed.
Then stem cells became all the rage. And also a subject of controversy, since human stem cells were sometimes harvested from embryos. But that fork in the road was bypassed: a 2006 discovery by Shinya Yamanaka of Kyoto University means that stem cells can now be generated from mature cells harvested from skin, hair and blood. Still, the ‘magic’ of stem cells has been overpromised. Headlines proclaimed that they would heal stroke victims, treat Parkinson’s and prevent ageing.
In The Future of Brain Repair, the developmental neurobiologist Jack Price tells a modified conversion story, from confident sceptic to measured optimist, even as he offers his work as a counterpoint to overhopeful news stories. He notes that very few therapies for brain disorders have appeared in the past few decades, despite George H.W. Bush having declared the 1990s the Decade of the Brain, and Obama having set up a Brain Initiative in 2013. Price, like Cobb, emphasises how modest our knowledge of the brain remains.
Early stem cell research seemed to offer hope of treating brain injuries resulting from strokes, or of helping with neuronal loss in Parkinson’s patients. A number of studies in animals showed that when stem cells were injected into areas of brain injury, the animals showed marked improvement. The measures for tracking this were reliable, if mildly comical to a layperson – a rat’s ease in removing Sellotape from a paw, for example. Physicians wanted to move quickly to human trials, and the animal studies provided some good arguments for doing so, even if none of the techniques had yet been standardised, not even what kinds of cell to inject, or how many.
Several clinical trials for stroke patients were undertaken. The idea was that an injection of stem cells would regenerate damaged areas. But the procedure was invasive and expensive, and patients’ symptoms did not improve. Later evidence showed that almost no new neurons had been generated in any of the trials. But why had the treatment seemed to work so well in animals? It turned out, on closer inspection, that almost no new cells had been generated in the animals’ brains either. The hypothesis for the underlying improvement – cell regeneration – had been wrong. The improvement had probably been the result of secretions from the stem cells – what’s known as the paracrine effect. The studies were a failure therapeutically, but not scientifically. Something about the brain’s potential repair system had been revealed.
Similar stem cell treatments for people with Parkinson’s also failed, but they did show that even the few new cells that were generated quickly fell prey to the Lewy body degeneration that is the marker of the disease. Again, the treatment failed, but something about Parkinson’s was learned. The rapid degeneration of new brain tissue led some researchers to change their thinking about the disease mechanism. One hypothesis is that Parkinson’s may be caused by a misfolded protein that spreads the way a prion does. Prions are responsible for mad cow disease, among other things. Price is cautiously optimistic that therapies can be developed that will use stem cells to trigger the brain’s own repair mechanisms. If that comes to pass, stem cell therapies will have worked, but not in the way that was expected.
Neuroscientists, wisely, tend to be keen to point out the limitations of their own methods of inquiry. One tool simultaneously glorified and mocked is the functional MRI, which gives researchers an image of parts of the brain where blood flow is enhanced during a particular task or mood. Cobb notes that more than a hundred thousand scientific papers have been written about fMRIs in the past thirty years or so, even as many researchers stress that these studies give little detail and are prone to errors of interpretation. To demonstrate this, a team of scientists led by Craig Bennett conducted an fMRI study on a dead salmon. The fish was shown a series of photographs of people displaying different emotions and scanned to see which areas ‘lit up’ and with which emotions. There were findings! This isn’t to discredit all fMRI work, but it does remind me of the story of the German mathematician Michael Stifel, a friend of Martin Luther, who decoded the name Pope Leo X to discover that it represented 666. Stifel received death threats, and Luther hid him, but also asked him to give up on the numerology.
Another charming but deflating experiment was done by two researchers who analysed an MOS 6507 processor, as found in the 1970s and 1980s computers that ran video games like Donkey Kong and Space Invaders. Eric Jonas and Konrad Kording studied the chip using the same methods they used to study the brain. They simulated all the connections on the chip in their computer, just as they might simulate the connections among neurons. Then they studied the equivalent of ‘lesions’, to get a sense of what each part of the chip was responsible for. (Often when we say a function, such as speech, is associated with a certain area in the brain, this comes from looking at the deficits in patients who have lesions in that part of the brain.) Though certain deletions prevented Donkey Kong from loading, that didn’t mean they were the loci of Donkey Kong, just that they were a necessary part. In the end, they concluded that their techniques did not produce ‘a meaningful understanding’ of how the chip was organised or how it worked.
Not all neuroscientists are careful sceptics, fond of detailed lab work. One made the front page of the New York Times in 1965 by implanting electrodes in the brain of a bull and using a radio transmitter to stop the bull charging when it was just feet away from him. The performance was never written up as a scientific study: it didn’t show anything neuroscientists didn’t already know. But magic shows and neuroscience have gone hand in hand since Galen’s performances with the vivisected pigs. Livewired, by the neuroscientist David Eagleman, comes closer than either of the other books to this tradition of popular entertainment. One of his main goals is to emphasise that brains are formed in large part by engagement with the world – by what a person sees, smells, experiences, trips on. And this relates to a brain’s plasticity, its ability to adapt to its circumstances: a young person who loses vision will become adept at ‘seeing’ a room through enhanced hearing. Much of this is familiar, including the fact that youthful brains are more ‘plastic’ than older ones. Eagleman has a dozen analogies to illustrate this, down to Mila Kunis’s accent.
His explanation of the way parts of the brain can, in essence, switch from one task to another uses the story of Tokyo’s military engineers, who after the Second World War applied their expertise to cars and electronics, leading to Japan’s emergence as a consumer power. Eagleman describes technologies that in essence translate visual information into taps on skin, so that a blind person can in some sense ‘see’, as the part of the brain devoted to receiving these skin signals adapts and learns to make sense of the new input. Eagleman asks us to think of the brain as a Mr Potato Head: we can put the ears where the lips are and so on. Sometimes his analogies are amusing and illuminating; sometimes they are irritating and imprecise.
I suspect there is a larger audience for Eagleman’s variety of neuroscience writing than for the more detail-oriented writing of Cobb or Price. Eagleman’s book is crowded with extreme and often sad medical stories, like the one about a child who is still able to function after the removal of half of his brain (for intractable seizures), or another about a child who, locked away from the world for the first seven years of her life, was unable to develop basic language skills. But when reading Eagleman, I missed the hedging and counterpointing voice of Cobb especially. There was a moment early on, reading Livewired, when it occurred to me that Eagleman would be a better sci-fi writer than a neuroscience writer – and then he mentioned being a consultant on the TV series Westworld. That information helped me read the rest of his book in a (somewhat) better spirit.
We don’t know as much as we like to think we know about the brain – but that overconfidence is characteristic of the way our brains work. One interesting set of clinical observations comes from ‘split brain’ experiments, conducted on brains in which the corpus callosum – the bridging tissue that connects the left and right halves – has been cut. Such experiments were carried out on animals in the 1950s. Then, in 1962, a man with intractable epilepsy, known as WJ, consented to having his own corpus callosum cut, in an attempt to curtail his seizures. ‘Even if it doesn’t help my seizures,’ he said, ‘if you learn something it will be more worthwhile than anything I’ve been able to do for years.’ After the surgery, the researcher, Michael Gazzaniga, placed a box in WJ’s right visual field – an area ‘seen’ by the left side of the brain. ‘Do you see a box?’ he asked. WJ said yes. But when the box was placed in the left visual field, an area ‘seen’ by the right side of the brain, WJ said he saw nothing. Was this just an instance of what’s known as hemispatial neglect?
Speech production is primarily governed by the left side of the brain. In later experiments Gazzaniga repeated the procedure, but instead of asking WJ to say whether he saw the box, he asked him to point to it – a gesture that could be controlled by the right side of the brain. WJ then pointed at the box he had previously said he couldn’t see. The speech-producing left side of his brain had no awareness of the box, even as the hand controlled by the box-aware right side pointed at it. The right side of his brain was knowing and doing things without the left side’s awareness.
But things got even spookier. When experimenters showed a split-brain patient a card in the left visual field that said ‘stand’, the patient would stand; but when asked why he was standing, the patient would make up an excuse rather than say he didn’t know why. The patient wasn’t lying so much as confabulating. Similarly, when a card showing a nude woman – it was the 1960s – was placed in the left visual field of another patient, he laughed, and when asked why, said: ‘That’s a funny machine you have there.’ His brain wouldn’t accept that it was doing something mysterious to itself. The patient didn’t just tell the researchers a story, he told himself one. The split brain, as far as we can tell, doesn’t know when it’s papering over the gaps. And neither do we.