Cognitive neuroscience, a broad field of mind–brain research, took off in the 1980s after languishing in the doldrums for decades. An important catalyst of this advance was the invention of noninvasive scanners for scrutinising the grey stuff. The CT (computed tomography) machine could peer into a living brain without destroying what it probed. Scanners have been extraordinarily helpful in the diagnosis of lesions and tumours, as well as in assisting attempts to find correlations between specific brain locations and a host of mental activities.
The late 1980s and 1990s (dubbed the Decade of the Brain in the United States) saw rapid advances in psychopharmacology, boosted by designer molecules, improved computer modelling of neurones, and the diversion of government funding into neuroscience after the fall of the Berlin Wall. The pharmaceutical industry, which had much to gain from the proposition, had already noted that some 350 billion dollars were being lost by the US economy every year because of mental illnesses, especially depression. The hypothesis that depression was caused by lowered levels of a natural biochemical called serotonin in the synapses of the brain led to a slew of antidotes – among them Prozac, which quickly became a staggering commercial success.
In parallel with these developments there emerged a number of philosophically minded cognitive neuroscientists, such as Francis Crick, co-discoverer of the structure of DNA. A latecomer to neuroscience, which he described as the science of the future, Crick appealed to the mechanics of brain function to explain the phenomenon of consciousness in his book The Astonishing Hypothesis: The Scientific Search for the Soul (1994). He noted that when an animal, such as a cat, fixes its attention on an object, such as a mouse, its neurones oscillate at a rate of 40 hertz. How such oscillations relate to the subjective experience of smelling a rose, or getting the point of a joke, seemed to him beside the point. But then it is routine among reductionist expositors of the mind–brain relationship to confuse necessary correlations between experience and the ‘wet stuff’ (as they like to call brain cells) with explanations sufficient to answer all those vexing, fundamental questions about the human mind.
Against this background Professor Vilayanur S Ramachandran’s new book, The Tell-Tale Brain, should carry a major intellectual health warning. By training he is a physician, and he acquired considerable neurological expertise before embarking on neuroscientifically informed studies of ‘human nature’. He is the author of the acclaimed Phantoms in the Brain and delivered the Reith Lectures in 2003. He belongs, loosely, to the genre of case-study literature that runs from the late nineteenth-century French neurologist Jean-Martin Charcot to Oliver Sacks. Yet whereas Sacks and his antecedents employed their case histories to reveal the persistence, despite a devastating lesion or trauma, of an individual’s unique personality, Ramachandran has a much bigger aim. He is seeking in groups of unusual brain conditions – such as autism, synaesthesia, and apraxia (the inability to carry out purposeful movements) – general explanations for human mental capacities: imagination, free will, artistic endeavour, the ability to make metaphors, and, of course, consciousness.
There is a queasy, almost comic, mixture of grandiosity and self-deprecation in Ramachandran’s approach. Like a neuroscientific Tommy Cooper, he makes prodigious claims only to discard them. ‘As heady as our progress has been,’ he comments typically, ‘we need to stay completely honest with ourselves and acknowledge that we have only discovered a tiny fraction of what there is to know about the human brain.’ Yet category confusions between necessary and sufficient causes abound. Take his section on freedom of the will. From the outset he fails to define free will in a way that would satisfy the requirements of any self-respecting philosopher of mind, let alone match the rich descriptions of art and literature. Free will, he asserts, involves envisaging ‘alternate courses of action’, as well as their consequences, and the ability to ‘withhold the action’. That would be well within the capacity of most laptops. Then comes the brain-based correlative: ‘The upper gyrus branching from the left inferior parietal lobule,’ he writes, ‘is very much involved in this ability to create a dynamic internal image of anticipated actions.’ Why? Because damage to this brain region, he goes on, results in apraxia. Were he pressed, I suspect that Ramachandran would be the first to admit that apraxia does not stand in anything like meaningful opposition to freedom of the will. Yet he asserts that a form of apraxia, ‘alien-hand syndrome’, underscores ‘the important role of the anterior cingulate in free will, transforming a philosophical problem into a neurological one’.
Having dismissed philosophical and analytical studies of ‘consciousness’ as being too abstract and untestable, he concludes that ‘neuroscience and neurology provide us with a new and unique opportunity to understand the structure and function of the self, not only from the outside by observing behavior, but also from studying the inner workings of the brain’. And that, precisely, is my quarrel with him. To correlate the anterior cingulate with the ‘inner’ phenomenon of the self is to commit the most glaring category error of all. Just because a part of the brain is ‘inside’ my skull does not mean that it is ‘inner’ in the sense that my conscious self, looking out at the world, is ‘inner’. As has been pointed out by countless philosophers of mind (and indeed by Ramachandran himself in less hubristic mood), no amount of factual, objective information about the neurophysiological basis of colour perception will enable a person born blind to know what it is to experience the redness of a red rose.
Knowledge of the workings of the brain will continue to offer benefits in neurological diagnosis, and interesting relationships between the brain and the mind may well be discovered. But to believe, as does Ramachandran, that brain studies have brought us to the point of departure for ‘the greatest adventure humankind has ever embarked on’ seems to me a form of misperception comparable, in intellectual and cultural terms, to that of Oliver Sacks’s unfortunate patient who had difficulty distinguishing his wife from a hat.