The Future of the Brain: An Excerpt
‘Better Brains’ shouted the front cover of a special edition of Scientific American in 2003, and the titles of the articles inside formed a dream prospectus for the future: ‘Ultimate self-improvement’; ‘New hope for brain repair’; ‘The quest for a smart pill’; ‘Mind-reading machines’; ‘Brain stimulators’; ‘Genes of the psyche’; ‘Taming stress’. These, it seems, are the promises offered by the new brain sciences, bidding strongly to overtake genetics as the Next Big Scientific Thing. The phrases trip lightly off the tongue, or shout to us from lurid book covers. There is to be a ‘post-human future’ in which ‘tomorrow’s people’ will be what another author describes as ‘neurochemical selves’. But just what is being sold here? How might these promissory notes be cashed? Is a golden ‘neurocentric age’ of human happiness ‘beyond therapy’ about to dawn? So many past scientific promises – from clean nuclear power to genetic engineering – have turned out to be so perilously close to snake oil that one is entitled to be just a little skeptical. And if these slogans do become practical technologies, what then? What becomes of our self-conception as humans with agency, with the freedom to shape our own lives? What new powers might accrue to the state, to the military, to the pharmaceutical industry, yet further to intervene in, to control our lives?
I am a neuroscientist. That is, I study how the brain works. I do this because, like every other neuroscientist, I believe that learning ‘how brains work’ in terms of the properties of their molecules, cells and systems will also help us understand something about how minds work. This, to me, is one of the most interesting and important questions that a scientist – or indeed any other searcher after truth – can ask. Yet what I and my fellow neuroscientists discover provides more than mere passive knowledge of the world. Increasingly, as the Scientific American headlines suggest, this knowledge offers the prospect of sophisticated technologies for predicting, changing and controlling minds. My purpose in this book is to explore just how far the neurosciences’ increasing capacity to explain the brain brings in its train the power to mend, modulate and manipulate the mind.
Of course, for many neuroscientists, to ask how the brain works is equivalent to asking how the mind works, because they take almost for granted that the human mind is somehow embodied within the 1500 grams of densely packed cells and connections that constitute the brain. The opening sentences of a book are no place to prejudge that question, which has concerned not merely science but philosophy, religion and poetry for millennia, and I will come back to it in due course. For now, let me continue unpacking what it means to be a neuroscientist. In particular, I am interested in one of the most intriguing, important and mysterious aspects of how minds work: how we humans learn and remember – or, to be more precise, what are the processes that occur in our brains that enable learning and memory to occur. To approach this problem, I use a variety of techniques: because non-human animal brains work in much the same way as our own, I am able to work with experimental animals to analyze the molecular and cellular processes that occur when they learn some new skill or task, but I also use one of the extraordinary new imaging techniques to provide a window into the human brain – including my own – when we are actively learning or recalling.
It is this range – from the properties of specific molecules in a small number of cells to the electrical and magnetic behavior of hundreds of millions of cells; from observing individual cells down a microscope to observing the behavior of animals confronted with new challenges – that constitutes the neurosciences. It is also what makes them a relatively new research discipline. Researchers have studied brain and behavior from the beginnings of recorded science, but until recently it was left to chemists to analyze the molecules, physiologists to observe the properties of ensembles of cells, and psychologists to interpret the behavior of living animals. The possibility – even the hope – of putting the entire jigsaw together only began to emerge towards the end of the last century.
In response, the US government designated the 1990s as The Decade of the Brain. Some four years later and rather reluctantly, the Europeans declared their own decade, which therefore is coming to its end as I write these words. Formal designations apart, the huge expansion of the neurosciences which has taken place over recent years has led many to suggest that the first ten years of this new century should be claimed as The Decade of the Mind. Capitalizing on the scale and technological success of the Human Genome Project, understanding – even decoding – the complex interconnected web between the languages of brain and those of mind has come to be seen as science’s final frontier. With its hundred billion nerve cells, with their hundred trillion interconnections, the human brain is the most complex phenomenon in the known universe – always, of course, excepting the interaction of some six billion such brains and their owners within the socio-technological culture of our planetary ecosystem!
The global scale of the research effort now put into the neurosciences, primarily in the US, but closely followed by Europe and Japan, has turned them from classical ‘little sciences’ into a major industry engaging large teams of researchers, involving billions of dollars from government – including its military wing – and the pharmaceutical industry. The consequence is that what were once disparate fields – anatomy, physiology, molecular biology, genetics and behavior – are now all embraced within ‘neurobiology’. However, its ambitions have reached still further, into the historically disputed terrain between biology, psychology and philosophy; hence the more all-embracing phrase: ‘the neurosciences’. The plural is important. Although the thirty thousand or so researchers who convene each year at the vast American Society for Neuroscience meetings, held in rotation in the largest conference centers that the US can offer, all study the same object – the brain, its functions and dysfunctions – they still do so at many different levels and with many different paradigms, problematics and techniques.
Inputs into the neurosciences come from genetics – the identification of genes associated both with normal mental functions, such as learning and memory, and with the dysfunctions that accompany conditions such as depression, schizophrenia and Alzheimer’s disease. From physics and engineering come the new windows into the brain offered by the imaging systems: PET, fMRI, MEG and others – acronyms which conceal powerful machines offering insights into the dynamic electrical flux through which the living brain conducts its millisecond by millisecond business. From the information sciences come claims to be able to model computational brain processes – even to mimic them in the artificial world of the computer.
Small wonder then that, almost drunk on the extraordinary power of these new technologies, neuroscientists have begun to lay claim to that final terra incognita, the nature of consciousness itself. Literally dozens of – mainly speculative – books with titles permuting the term ‘consciousness’ have appeared over the last decade; there is a Journal of Consciousness Studies, and Tucson, Arizona hosts regular ‘consciousness conferences’. I remain skeptical. This book is definitely not about offering some dramatic new ‘theory of consciousness’, although that particular ghost in the machine is bound to recur through the text. Indeed, I will try to explain why I think that as neuroscientists we don’t have anything very useful to say about that particular Big C, and why therefore, as Wittgenstein said many years ago, we would do better to keep silent.
The very idea of a ‘consciousness conference’ implies that there is some agreement about how such an explanation of consciousness should be framed – or indeed what the word even means – but there is not. The rapid expansion of the neurosciences has produced an almost unimaginable wealth of data, facts, experimental findings, at every level from the submolecular to that of the brain as a whole. The problem, which concerns me greatly, is how to weld together this mass into a coherent brain theory. For the brain is full of paradoxes. It is simultaneously a fixed structure and a set of dynamic, partly coherent and partly independent processes. Properties – ‘functions’ – are simultaneously localized and delocalized, embedded in small clusters of cells or in aspects of the working of the system as a whole. Of some of these clusters, and their molecular specialisms, we have partial understanding. Of how they relate to the larger neural scene, we are still often only at the handwaving stage.
Naming ourselves neuroscientists doesn’t of itself help us bring our partial insights together, to generate some Grand Unified Theory. Anatomists, imaging individual neurons at magnifications of half a million or more, and molecular biologists locating specific molecules within these cells see the brain as a complex wiring diagram in which experience is encoded in terms of altering specific pathways and interconnections. Electrophysiologists and brain imagers see what, at the beginning of the last century, in the early years of neurobiology, Charles Sherrington described as ‘an enchanted loom’ of dynamic ever-changing electrical ripples. Neuroendocrinologists see brain functions as continuously being modified by currents of hormones, from steroids to adrenaline – the neuromodulators that flow gently past each individual neuron, tickling its receptors into paroxysms of activity. How can all these different perspectives be welded into one coherent whole, even before any attempt is made to relate the ‘objectivity’ of the neuroscience laboratory to the day-to-day lived texture of our subjective experience? Way beyond the end of the Decade of the Brain, and halfway through the putative Decade of the Mind, we are still data-rich and theory-poor.
However, our knowledges, fragmented as they are, are still formidable. Knowledge, of course, as Francis Bacon pointed out at the birth of Western science, is power. Just as with the new genetics, so the neurosciences are not merely about acquiring knowledge of brain and mind processes but about being able to act upon them – neuroscience and neurotechnology are indissolubly linked. This is why developments occurring within the neurosciences cannot be seen as isolated from the socio-economic context in which they are being developed, and in which searches for genetic or pharmacological fixes to individual problems dominate.
It is clear that the weight of human suffering associated with damage or malfunction of mind and brain is enormous. In the ageing populations of Western industrial societies, Alzheimer’s disease, a seemingly irreversible loss of brain cells and mental function, is an increasing burden. There are likely to be a million or so sufferers from Alzheimer’s in the UK by 2020. Certain variants of particular genes are now known to be risk factors for the disease, along with a variety of environmental hazards; treatment is at best palliative. Huntington’s disease is much rarer, and a consequence of a single gene abnormality; Parkinson’s is more common, and now the focus of efforts to alleviate it by various forms of genetic therapy.
Whilst such diseases and disorders are associated with relatively unambiguous neurological and neurochemical signs, there is a much more diffuse and troubling area of concern. Consider the world-wide epidemic of depression, identified by the World Health Organization (WHO) as the major health hazard of this century, in the moderation – though scarcely cure – of which vast tonnages of psychotropic drugs are manufactured and consumed each year. Prozac is the best known, but only one of several such agents designed to interact with the neurotransmitter serotonin. Questions of why this dramatic rise in the diagnosis of depression is occurring are rarely asked – perhaps for fear that the answer would reveal a malaise not in the individual but in the social and psychic order. Instead, the emphasis is overwhelmingly on what is going on within a person’s brain and body. Where drug treatments have hitherto been empirical, neurogeneticists are offering to identify specific genes which might precipitate the condition, and in combination with the pharmaceutical industry to design tailor-made (‘rational’) drugs to fit any specific individual – so called psychopharmacogenetics.
However, the claims of the neurotechnologies go far further. The reductionist fervor within which they are being created argues that a huge variety of social and personal ills are attributable to brain malfunctions, themselves a consequence of faulty genes. The authoritative US-based Diagnostic and Statistical Manual now includes as disease categories ‘oppositional defiant disorder’, ‘disruptive behavior disorder’ and, most notoriously, ‘Attention Deficit Hyperactivity Disorder’ (ADHD), which is supposed to affect up to 10 per cent of young children (mainly boys). The ‘disorder’ is characterized by poor school performance and inability to concentrate in class or to be controlled by parents, and is supposed to be a consequence of disorderly brain function associated with another neurotransmitter, dopamine. The prescribed treatment is an amphetamine-like drug called Ritalin, and there is an increasing world-wide epidemic of its use. Untreated children are said to be more at risk of becoming criminals, and there is an expanding literature on ‘the genetics of criminal and anti-social behaviour’. Is this an appropriate medical/psychiatric approach to an individual problem, or a cheap fix to avoid the necessity of questioning schools, parents and the broader social context of education?
The neurogenetic-industrial complex thus becomes ever more powerful. Undeterred by the way that molecular biologists, confronted with the outputs from the Human Genome Project, are beginning to row back from genetic determinist claims, psychometricians and behaviour geneticists, sometimes in combination and sometimes in competition with evolutionary psychologists, are claiming genetic roots to areas of human belief, intentions and actions long assumed to lie outside biological explanation. Not merely such long runners as intelligence, addiction and aggression, but even political tendency, religiosity and likelihood of mid-life divorce are being removed from the province of social and/or personal psychological explanation into the province of biology. With such removal comes the offer to treat, to manipulate, to control. Back in the 1930s, Aldous Huxley’s prescient Brave New World offered a universal panacea, a drug called Soma that removed all existential pain. Today’s Brave New World will have a multitude of designer psychotropics, available either by consumer choice (so called ‘smart’ drugs to enhance cognition) or by state prescription (Ritalin for behavior control).
These are the emerging neurotechnologies, crude at present but becoming steadily more refined. Their development and use within the social context of contemporary industrial society presents as powerful a set of medical, ethical, legal and social dilemmas as does that of the new genetics, and we need to begin to come to terms with them sooner rather than later. To take just a few practical examples: if smart drugs are developed (‘brain steroids’ as they have been called), what are the implications of people using them to pass competitive examinations? Should people genetically at risk from Alzheimer’s disease be given lifelong ‘neuroprotective’ drugs? If diagnosing children with ADHD really does also predict later criminal behavior, should they be drugged with Ritalin or some related drug throughout their childhood? And if their criminal predisposition could be identified by brain imaging, should preventative steps be taken in advance of anyone actually committing a crime?
More fundamentally, what effect do the developing neurosciences and neurotechnologies have on our sense of individual responsibility, of personhood? How far will they affect legal and ethical systems and administration of justice? How will the rapid growth of human-brain/machine interfacing – a combination of neuroscience and informatics (cyborgery) – change how we live and think? These are not esoteric or science fiction questions; we aren’t talking about some fantasy human cloning far into the future, but prospects and problems which will become increasingly sharply present for us and our children within the next ten to twenty years. Thus yet another hybrid word is finding its way into current discussions: neuroethics.