The Human Brain Project risks becoming a missed opportunity

Image
Concept of a network of neurons in the human brain.

The brain is much on our minds at the moment. David Cameron is advocating a step-change in dementia research, brain-computer interfaces promise new solutions to paralysis, and the ongoing plight of Michael Schumacher has reminded us of the terrifying consequences of traumatic brain injury. Articles in scholarly journals and in the media are decorated with magical images of the living brain, like the one shown below, to illuminate these stories. Yet, when asked, most neuroscientists will say we still know very little about how the brain works, or how to fix it when it goes wrong.

Image
A diffusion tensor image showing some of the main pathways along which brain connections are organized.

The €1.2bn Human Brain Project (HBP) is supposed to change all this. Funded by the European Commission, the HBP brings together more than 80 research institutes in a ten-year endeavour to unravel the mysteries of the brain, and to emulate its powers in new technologies. Following examples like the Human Genome Project and the Large Hadron Collider (where Higgs’ elusive boson was finally found), the idea is that a very large investment will deliver very significant results. But now a large contingent of prominent European neuroscientists are rebelling against the HBP, claiming that its approach is doomed to fail and will undermine European neuroscience for decades to come.

Stepping back from the fuss, it’s worth asking whether the aims of the HBP really make sense. Sequencing the genome and looking for the Higgs boson were both major challenges, but in these cases the scientific community agreed on the objectives, and on what would constitute success. There is no similar consensus among neuroscientists.

It is often said that the adult human brain is the most complex object in the universe. It contains about 90 billion neurons and a thousand times more connections, so that if you counted one connection each second it would take about three million years to finish. The challenge for neuroscience is to understand how this vast, complex, and always changing network gives rise to our sensations, perceptions, thoughts, actions, beliefs, desires, our sense of self and of others, our emotions and moods, and all else that guides our behaviour and populates our mental life, in health and in disease. No single breakthrough could ever mark success across such a wide range of important problems.
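
As a quick sanity check on that arithmetic, here is a back-of-the-envelope sketch in Python, using the rough figures just quoted:

```python
# Rough check of the 'three million years' claim, using the figures above.
neurons = 90e9                          # ~90 billion neurons
connections = neurons * 1000            # 'a thousand times more connections'
seconds_per_year = 60 * 60 * 24 * 365
print(connections / seconds_per_year / 1e6)  # ~2.85 million years
```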

The central pillar of the HBP approach is to build computational simulations of the brain. Befitting the huge investment, these simulations would be of unprecedented size and detail, and would allow brain scientists to integrate their individual findings into a collective resource. What distinguishes the HBP – besides the money – is its aggressively ‘bottom up’ approach: the vision is that by taking care of the neurons, the big things – thoughts, perceptions, beliefs, and the like – will take care of themselves. As such, the HBP does not set out to test any specific hypothesis or collection of hypotheses, marking another departure from standard scientific practice.

Could this work? Certainly, modern neuroscience is generating an accelerating data deluge demanding new technologies for visualisation and analysis. This is the ‘big data’ challenge now common in many settings. It is also clear that better pictures of the brain’s wiring diagram (the ‘connectome’) will be essential as we move ahead. On the other hand, more detailed simulations don’t inevitably lead to better understanding. Strikingly, we don’t fully understand the brain of the tiny worm Caenorhabditis elegans even though it has only 302 neurons and its wiring diagram is known exactly. More generally, a key ability in science is to abstract away from the specifics to see more clearly what underlying principles are at work. In the limit, a perfectly accurate model of the brain may become as difficult to understand as the brain itself, as Borges long ago noted when describing the tragic uselessness of the perfectly detailed map.

Image
Jorge Luis Borges at Harvard University, 1967/8

Neuroscience is, and should remain, a broad church. Understanding the brain does not reduce to simulating the collective behaviour of all its minuscule parts, however interesting a part of the final story this might become. Understanding the brain means grasping complex interactions cross-linking many different levels of description, from neurons to brain regions to individuals to societies. It means complementing bottom-up simulations with new theories describing what the brain is actually doing, when its neurons are buzzing merrily away. It means designing elegant experiments that reveal how the mind constructs its reality, without always worrying about the neuronal hardware underneath. Sometimes, it means aiming directly for new treatments for devastating neurological and psychiatric conditions like coma, paralysis, dementia, and depression.

Put this way, neuroscience has enormous potential to benefit society, well deserving of high profile and large-scale support. It would be a great shame if the Human Brain Project, through its singular emphasis on massive computer simulation, ends up as a lightning rod for dissatisfaction with ‘big science’ rather than fostering a new and powerfully productive picture of the biological basis of the mind.

This article first appeared online in The Guardian on July 8 2014.  It appeared in print in the July 9 edition, on page 30 (comment section).

Post publication notes:

The HBP leadership have published a response to the open letter here. I didn’t find it very convincing. There have been a plethora of other commentaries on the HBP, as it comes up to its first review.  I can’t provide an exhaustive list but I particularly liked Gary Marcus’ piece in the New York Times (July 11). There was also trenchant criticism in the editorial pages of Nature.  Paul Verschure has a nice TED talk addressing some of the challenges facing big data, encompassing the HBP.

The importance of being Eugene: What (not) passing the Turing test really means

Image
Eugene Goostman, chatbot.

Could you tell the difference between a non-native-English-speaking 13-year-old Ukrainian boy and a computer program? On Saturday, at the Royal Society, one in three human judges was fooled. So, it has been widely reported, the iconic Turing Test has been passed and a brave new era of Artificial Intelligence (AI) begins.

Not so fast. While this event marks a modest improvement in the abilities of so-called ‘chatbots’ to engage fluently with humans, real AI requires much more.

Here’s what happened. At a competition held in central London, thirty judges (including politician Lord Sharkey, computer scientist Kevin Warwick, and Red Dwarf actor Robert Llewellyn) interacted with ‘Eugene Goostman’ in a series of five-minute text-only exchanges. As a result, 33% of the judges (reports do not yet say which, though tweets implicate Llewellyn) were persuaded that ‘Goostman’ was real. The other 67%  were not. It turns out that ‘Eugene Goostman’ is not a teenager from Odessa, but a computer program, a ‘chatbot’ created by computer engineers Vladimir Veselov and Eugene Demchenko. According to his creators, ‘Goostman’ was ‘born’ in 2001, owns a pet guinea pig, and has a gynaecologist father.

The Turing Test, devised by computer science pioneer and codebreaker Alan Turing, was proposed as a practical alternative to the philosophically challenging and possibly absurd question, “can machines think?” In one popular interpretation, a human judge interacts with two players – a human and a machine – and must decide which is which. A candidate machine passes the test when the judge consistently fails to distinguish the one from the other. Interactions are limited to exchanges of strings of text, to make the competition fair (more on this later; it’s also worth noting that Turing’s original idea was more complex than this, but let’s press on). While there have been many previous attempts and prior claims about passing the test, the Goostman-bot arguably outperformed its predecessors, leading Warwick to noisily proclaim “We are therefore proud to declare that Alan Turing’s Test was passed for the first time on Saturday”.

Image
Alan Turing’s seminal 1950 paper

This is a major overstatement which does grave disservice to the field of AI. While Goostman may represent progress of a sort – for instance this year’s competition did not place any particular restrictions on conversation topics – some context is badly needed.

An immediate concern is that Goostman is gaming the system. By imitating a non-native speaker, the chatbot can make its clumsy English expected rather than unusual. Hence its reaction to winning the prize: “I feel about beating the Turing test in quite convenient way”. And its assumed age of thirteen lowers expectations about satisfactory responses to questions. As Veselov put it “Thirteen years old is not too old to know everything and not too young to know nothing.” While Veselov’s strategy is cunning, it also shows that the Turing test is as much a test of the judges’ abilities to make suitable inferences, and to ask probing questions, as it is of the capabilities of intelligent machinery.

More importantly, fooling 33% of judges over five-minute sessions was never the standard intended by Alan Turing for passing his test – it was merely his prediction about how computers might fare within about 50 years of his proposal. (In this, as in much else, he was not far wrong: the original Turing test was described in 1950.) A more natural criterion, as emphasized by the cognitive scientist Stevan Harnad, is for a machine to be consistently indistinguishable from human counterparts over extended periods of time – in other words, to have the generic performance capacity of a real human being. This more stringent benchmark is still a long way off.

Perhaps the most significant limitation exposed by Goostman is the assumption that ‘intelligence’ can be instantiated in the disembodied exchange of short passages of text. On one hand this restriction is needed to enable interesting comparisons between humans and machines in the first place. On the other, it simply underlines that intelligent behaviour is intimately grounded in the tight couplings and blurry boundaries separating and joining brains, bodies, and environments. If Saturday’s judges had seen Goostman, or even an advanced robotic avatar voicing its responses, there would have been no question of any confusion. Indeed, robots that are today physically most similar to humans tend to elicit sensations like anxiety and revulsion, not camaraderie. This is the ‘uncanny valley’ – a term coined by robotics professor Masahiro Mori in 1970 (with a nod to Freud) and exemplified by the ‘geminoids’ built by Hiroshi Ishiguro.

Image
Hiroshi Ishiguro and his geminoid.  Another imitation game.

A growing appreciation of the importance of embodied, embedded intelligence explains why nobody is claiming that human-like robots are among us, or are in any sense imminent. Critics of AI consistently point to the notable absence of intelligent robots capable of fluent interactions with people, or even with mugs of tea. In a recent blog post I argued that new developments in AI are increasingly motivated by the near-forgotten discipline of cybernetics, which held that prediction and control were at the heart of intelligent behaviour – not barefaced imitation as in Turing’s test (and, from a different angle, in Ishiguro’s geminoids). While these emerging cybernetics-inspired approaches hold great promise (and are attracting the interest of tech giants like Google) there is still plenty to be done.

These ideas have two main implications for AI. The first is that true AI necessarily involves robotics. Intelligent systems are systems that flexibly and adaptively interact with complex, dynamic, and often social environments. Reducing intelligence to short context-free text-based conversations misses the target by a country mile. The second is that true AI should focus not only on the outcome (i.e., whether a machine or robot behaves indistinguishably from a human or other animal) but also on the process by which the outcome is attained. This is why considerable attention within AI has always been paid to understanding, and simulating, how real brains work, and how real bodies behave.

Image
How the leopard got its spots: Turing’s chemical basis of morphogenesis.

Turing of course did much more than propose an interesting but ultimately unsatisfactory (and often misinterpreted) intelligence test. He laid the foundations for modern computer science, he saved untold lives through his prowess in code breaking, and he refused to be cowed by the deep prejudices against homosexuality prevalent in his time, losing his own life in the bargain. He was also a pioneer in theoretical biology: his work in morphogenesis showed how simple interactions could give rise to complex patterns during animal development. And he was a central figure in the emerging field of cybernetics, where he recognized the deep importance of embodied and embedded cognition. The Turing of 1950 might not recognize much of today’s technology, but he would not have been fooled by Goostman.

[Postscript: while Warwick and co. have been very reluctant to release the transcript of Goostman’s 2014 performance, this recent Guardian piece has some choice dialogue from 2012, where Goostman polled at 28%, not far off Saturday’s 33%. This piece was updated on June 12 following a helpful dialogue with Aaron Sloman.]

Darwin’s Neuroscientist: Gerald M. Edelman, 1929-2014

Image
Dr. Gerald M. Edelman, 1929-2014.

“The brain is wider than the sky.
For, put them side by side,
The one the other will include,
With ease, and you beside.”

Dr. Gerald M. Edelman often used these lines from Emily Dickinson to introduce the deep mysteries of neuroscience and consciousness. Dr. Edelman (it was always ‘Dr.’), who has died in La Jolla, aged 84, was without doubt a scientific great. He was a Nobel laureate at the age of 43, a pioneer in immunology, embryology, molecular biology, and neuroscience, a shrewd political operator, and a Renaissance man of striking erudition who displayed a masterful knowledge of science, music, literature, and the visual arts, and who at one time could have become a concert violinist. A compelling raconteur, he quoted Woody Allen and Jascha Heifetz as readily as Linus Pauling and Ludwig Wittgenstein, and loved telling a good Jewish joke just as much as explaining the principles of neuronal selection. And he was my mentor from the time I arrived as a freshly minted Ph.D. at The Neurosciences Institute in San Diego, back in 2001. His influence in biology and the neurosciences is inestimable. While his loss marks the end of an era, his legacy is sure to continue.

Gerald Maurice Edelman was born in Ozone Park, New York City, in 1929, to parents Edward and Anna. He trained in medicine at the University of Pennsylvania, graduating cum laude in 1954. After an internship at the Massachusetts General Hospital and three years in the US Army Medical Corps in France, Edelman entered the doctoral program at Rockefeller University, New York. Staying at Rockefeller after his Ph.D., he became Associate Dean and Vincent Astor Distinguished Professor, and in 1981 he founded The Neurosciences Institute (NSI). In 1992 the NSI moved lock, stock, and barrel into new purpose-built laboratories in La Jolla, California, where Edelman continued as Director for more than twenty years. A dedicated man, he continued working at the NSI until a week before he died.

In 1972 Edelman won the Nobel Prize in Physiology or Medicine (shared with Rodney Porter, who had worked independently) for showing how antibodies can recognize an almost infinite range of invading antigens. Edelman’s insight, the principles of which resonate throughout his entire career, was based on variation and selection: antibodies undergo a process of ‘evolution within the body’ in order to match novel antigens. Crucially, he performed definitive experiments on the chemical structure of antibodies to support his idea [1].

Image
Dr. Edelman at Rockefeller University in 1972, explaining his model of gamma globulin.

Edelman then moved into embryology, discovering an important class of proteins known as ‘cell adhesion molecules’ [2]. Though this, too, was a major contribution, it was the biological basis of mind and consciousness – one of the ‘dark areas’ of science, where mystery reigned – that drew his attention for the rest of his long career. Over more than three decades Edelman developed his theory of neuronal group selection, also known as ‘neural Darwinism’, which again took principles of variation and selection, but here applied them to brain development and dynamics [3-7]. The theory is rich and still underappreciated. At its heart is the realization that the brain is very different from a computer: as he put it, brains don’t work with ‘logic and a clock’. Instead, Edelman emphasized the rampantly ‘re-entrant’ connectivity of the brain, with massively parallel bidirectional connections linking most brain regions. Uncovering the implications of re-entry remains a profound challenge today.

Image
The campus of The Neurosciences Institute in La Jolla, California.

Edelman was convinced that scientific breakthroughs require both sharp minds and inspiring environments. The NSI was founded as a monastery of science, supporting a small cadre of experimental and theoretical neuroscientists and enabling them to work on ambitious goals free from the immediate pressures of research funding and paper publication. This at least was the model, and Edelman struggled heroically to maintain its reality in the face of increasing financial pressures and the shifting landscape of academia. That he was able to succeed for so long attests to his political nous and focused determination as well as his intellectual prowess. I remember vividly the ritual lunches that exemplified life at the NSI. The entire scientific staff ate together at noon every day (except Fridays), at tables seemingly designed to hold just enough people so that the only common topic could be neuroscience; Edelman, of course, held court at one table, brainstorming and story-telling in equal measure. The NSI itself is a striking building, housing not only experimental laboratories but also a concert-grade auditorium. Science and art were, for Edelman, two manifestations of a fundamental urge towards creativity and beauty.

Edelman did not always take the easiest path through academic life. Among many rivalries, he enjoyed lively clashes with fellow Nobel laureate Francis Crick who, like Edelman himself, had turned his attention to the brain after resolving a central problem in a different area of biology. Crick once infamously referred to neural Darwinism as ‘neural Edelmanism’ [8], a criticism which nowadays seems less forceful as attention within the neurosciences increasingly focuses on neuronal population dynamics (just before his death in 2004, Crick met with Edelman and they put aside any remaining feelings of enmity). In 2003 both men published influential papers setting out their respective ideas on consciousness [9, 10]; these papers put the neuroscience of consciousness at last, and for good, back on the agenda.

The biological basis of consciousness had been central to Edelman’s scientific agenda from the late 1980s. Consciousness had long been considered beyond the reach of science; Edelman was at the forefront of its rehabilitation as a serious subject within biology. His approach was from the outset more subtle and sophisticated than those of his contemporaries. Rather than simply looking for ‘neural correlates of consciousness’ – brain areas or types of activity that happen to co-exist with conscious states – Edelman wanted to naturalize phenomenology itself. That is, he tried to establish formal mappings between phenomenological properties of conscious experience and homologous properties of neural dynamics. In short, this meant coming up with explanations rather than mere correlations, the idea being that such an approach would demystify the dualistic schism between ‘mind’ and ‘matter’ first invoked by Descartes. This approach was first outlined in his book The Remembered Present [5] and later amplified in A Universe of Consciousness, a work co-authored with Giulio Tononi [11]. It was this approach to consciousness that first drew me to the NSI and to Edelman, and I was not disappointed. These ideas, and the work they enabled, will continue to shape and define consciousness science for years to come.

My own memories of Edelman revolve entirely around life at the NSI. It was immediately obvious that he was not a distant boss who might leave his minions to get on with their research in isolation. He was generous with his time. I saw him almost every working day, and many discussions lasted long beyond their allotted duration. His dedication to detail sometimes took the breath away. On one occasion, while working on a paper together [12], I had fallen into the habit of giving him a hard copy of my latest effort each Friday evening. One Monday morning I noticed the appearance of a thick sheaf of papers on my desk. Over the weekend Edelman had cut and pasted – with scissors and glue, not Microsoft Word – paragraphs, sentences, and individual words, to almost entirely rewrite my tentative text. Needless to say, it was much improved.

The abiding memory of anyone who has spent time with Dr. Edelman is, however, not the scientific accomplishments, nor the achievements encompassed by the NSI, but instead the impression of an uncommon intellect moving more quickly and ranging more widely than seemed possible. The New York Times put it this way in a 2004 profile:

“Out of free-floating riffs, vaudevillian jokes, recollections, citations and patient explanations, out of the excited explosions of example and counterexample, associations develop, mental terrain is reordered, and ever grander patterns emerge.”

Dr. Edelman will long be remembered for his remarkably diverse scientific contributions, his strength of character, erudition, integrity, and humour, and for the warmth and dedication he showed to those fortunate enough to share his vision. He is survived by his wife, Maxine, and three children: David, Eric, and Judith.

Anil Seth
Professor of Cognitive and Computational Neuroscience
Co-Director, Sackler Centre for Consciousness Science
University of Sussex

This article has been republished in Frontiers in Consciousness Research. doi: 10.3389/fpsyg.2014.00896

References

1 Edelman, G.M., Benacerraf, B., Ovary, Z., and Poulik, M.D. (1961) Structural differences among antibodies of different specificities. Proc Natl Acad Sci U S A 47, 1751-1758
2 Edelman, G.M. (1983) Cell adhesion molecules. Science 219, 450-457
3 Edelman, G.M. and Gally, J. (2001) Degeneracy and complexity in biological systems. Proc. Natl. Acad. Sci. USA 98, 13763-13768
4 Edelman, G.M. (1993) Neural Darwinism: selection and reentrant signaling in higher brain function. Neuron 10, 115-125
5 Edelman, G.M. (1989) The remembered present. Basic Books
6 Edelman, G.M. (1987) Neural Darwinism: The Theory of Neuronal Group Selection. Basic Books, Inc.
7 Edelman, G.M. (1978) Group selection and phasic re-entrant signalling: a theory of higher brain function. In The Mindful Brain (Edelman, G.M. and Mountcastle, V.B., eds), MIT Press
8 Crick, F. (1989) Neural edelmanism. Trends Neurosci 12, 240-248
9 Edelman, G.M. (2003) Naturalizing consciousness: a theoretical framework. Proc Natl Acad Sci U S A 100, 5520-5524
10 Crick, F. and Koch, C. (2003) A framework for consciousness. Nature Neuroscience 6, 119-126
11 Edelman, G.M. and Tononi, G. (2000) A universe of consciousness : how matter becomes imagination. Basic Books
12 Seth, A.K., Izhikevich, E.M., Reeke, G.N., and Edelman, G.M. (2006) Theories and measures of consciousness: An extended framework. Proc Natl Acad Sci U S A 103, 10799-10804

How does the brain fit into the skull?

Image

Announcing a new paper co-authored with David Samu and Thomas Nowotny, published yesterday in the open-access journal PLoS Computational Biology.

Influence of Wiring Cost on the Large-Scale Architecture of Human Cortical Connectivity

Macroscopic regions in the grey matter of the human brain are intricately connected by white-matter pathways, forming an extremely complex network. Analysing this brain network may provide insights into how anatomy enables brain function and, ultimately, cognition and consciousness. Various important principles of organization have indeed been consistently identified in the brain’s structural connectivity, such as a small-world and modular architecture. However, it is currently unclear which of these principles are functionally relevant, and which are merely the consequence of more basic constraints of the brain, such as its three-dimensional spatial embedding into the limited volume of the skull or the high metabolic cost of long-range connections. In this paper, we model which aspects of the structural organization of the brain are affected by its wiring constraints by assessing how far these aspects are preserved in brain-like networks with varying spatial wiring constraints. We find that all investigated features of brain organization also appear in spatially constrained networks, but we also discover that several of the features are more pronounced in the brain than its wiring constraints alone would necessitate. These findings suggest the functional relevance of the ‘over-expressed’ properties of brain architecture.
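
For a flavour of the kind of analysis involved, here is a minimal Python sketch using networkx. It compares clustering and path length against a degree-matched random benchmark – the standard recipe for quantifying small-world structure. The toy graph is a stand-in for illustration, not the human connectivity data analysed in the paper:

```python
# Sketch: quantifying small-world structure against a degree-matched
# random benchmark. The graph below is a toy stand-in.
import networkx as nx

G = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=1)
R = nx.random_reference(G, niter=5, seed=1)   # degree-preserving randomization

C, C_rand = nx.average_clustering(G), nx.average_clustering(R)
L, L_rand = (nx.average_shortest_path_length(G),
             nx.average_shortest_path_length(R))

# Small-world: clustering much higher than random, path length comparable
# (sigma > 1).
sigma = (C / C_rand) / (L / L_rand)
print(f"clustering ratio {C / C_rand:.2f}, "
      f"path-length ratio {L / L_rand:.2f}, sigma {sigma:.2f}")
```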

New: Image from this paper featured as MRC biomedical image of the day on April 29th 2014!

Image

The amoral molecule

Image

The cuddle drug, the trust hormone, the moral molecule: oxytocin (OXT) has been called all these things and more. You can buy nasal sprays of the stuff online on the promise that some judicious squirting will make people trust you more. In a recent book, neuroscientist-cum-economist Paul Zak goes the whole hog, saying that if we only let ourselves be guided by this “moral molecule”, prosperity and social harmony will certainly ensue.

Behind this outlandish and rather ridiculous claim lies some fascinating science. The story starts with the discovery that injecting female virgin rats with OXT triggers maternal instincts, and that these same instincts in mother rats are suppressed when OXT is blocked.  Then came the finding of different levels of OXT receptors in two closely related species of vole. The male prairie vole, having high levels, is monogamous and helps look after its little vole-lets.  Male meadow voles, with many fewer receptors, are aggressive loners who move from one female to the next without regard for their offspring. What’s more, genetically manipulating meadow voles to express OXT receptors turns them into monogamous prairie-vole-a-likes. These early rodent studies showed that OXT plays an important and previously unsuspected role in social behaviour.

Studies of oxytocin and social cognition really took off about ten years ago when Paul Zak, Ernst Fehr, and colleagues began manipulating OXT levels in human volunteers while they played a variety of economic and ‘moral’ games in the laboratory.  These studies showed that OXT, usually administered by a few intranasal puffs, could make people more trusting, generous, cooperative, and empathetic.

For example, in the so-called ‘ultimatum game’ one player (the proposer) is given £10 and offers a proportion of it to a second player (the responder) who has to decide whether or not to accept. If the responder accepts, both players get their share; if not, neither gets anything. Since these are one-off encounters, rational analysis says that the responder should accept any non-zero proposal, since something is better than nothing. In practice what happens is that offers below about £3 are often rejected, presumably because the desire to punish ‘unfair’ offers outweighs the allure of a small reward. Strikingly, a few whiffs of OXT make proposers more generous, by almost 50% in some cases. And the same thing happens in other similar situations, like the ‘trust game’: OXT seems to make people more cooperative and pro-social.
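
The logic of the game is simple enough to sketch in a few lines of Python. This is a toy illustration only; the £3 rejection threshold and the ~50% generosity boost are just the figures quoted above, not parameters from any particular study:

```python
def ultimatum_round(offer, pot=10.0, rejection_threshold=3.0):
    """Responder accepts offers at or above a 'fairness' threshold;
    below it, both players walk away with nothing."""
    if offer >= rejection_threshold:
        return pot - offer, offer     # (proposer payoff, responder payoff)
    return 0.0, 0.0

print(ultimatum_round(1.0))           # (0.0, 0.0): 'unfair' offer punished
print(ultimatum_round(4.0))           # (6.0, 4.0): fair offer accepted

baseline_offer = 3.0
oxt_offer = baseline_offer * 1.5      # illustrative ~50% boost under OXT
print(ultimatum_round(oxt_offer))     # (5.5, 4.5)
```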

Even more exciting are recent findings that OXT can help reduce negative experiences and promote social interactions in conditions like autism and schizophrenia. In part this could be due to OXT’s general ability to reduce anxiety, but there’s likely more to the story than this. It could also be that OXT enhances the ability to ‘read’ emotional expressions, perhaps by increasing their salience. Although clinical trials have so far been inconclusive there is at least some hope for new OXT-based pharmacological treatments (though not cures) for these sometimes devastating conditions.

These discoveries are eye-opening and apparently very hopeful. What’s not to like?

Image

The main thing not to like is the idea that there could be such a simple relationship between socially-conditioned phenomena like trust and morality, and the machinations of a single molecule. The evolutionary biologist Leslie Orgel said it well with his ‘third rule’: “Biology is more complicated than you imagine, even when you take Orgel’s third rule into account”. Sure enough, the emerging scientific story says things are far from simple.

Carsten de Dreu of the University of Amsterdam has published a series of important studies showing that whether oxytocin has a prosocial effect, or an antisocial effect, seems to depend critically on who the interactions are between. In one study, OXT was found to increase generosity within a participant’s ingroup (i.e., among participants judged as similar) but to actually decrease it for interactions with outgroup members.  Another study produced even more dramatic results: here, OXT infusion led volunteers to adopt more derogatory attitudes to outgroup members, even when ingroup and outgroup compositions were determined arbitrarily. OXT can even increase social conformity, as shown in a recent study in which volunteers were divided into two groups and had to judge the attractiveness of arbitrary shapes.

All this should make us look very suspiciously at claims that OXT is any kind of ‘moral molecule’. So where do we go from here? A crucial next step is to try to understand how the complex interplay between OXT and behaviour is mediated by the brain. Work in this area has already begun: the research on autism, for example, has shown that OXT infusion leads autistic brains to better differentiate between emotional and non-emotional stimuli. This work complements emerging social neuroscience studies showing how social stereotypes can affect even very basic perceptual processes. In one example, current studies in our lab are indicating that outgroup faces (e.g., Moroccans for Caucasian Dutch subjects) are literally harder to see than ingroup faces.

Neuroscience has come in for a lot of recent criticism for reductionist ‘explanations’ in which complex cognitive phenomena are identified with activity in this-or-that brain region.  Following this pattern, talk of ‘moral molecules’ is, like crime in multi-storey car-parks, wrong on so many levels. There are no moral molecules, only moral people (and maybe moral societies).  But let’s not allow this kind of over-reaching to blind us to the progress being made when sufficient attention is paid to the complex hierarchical interactions linking molecules to minds.  Neuroscience is wonderfully exciting and has enormous potential for human betterment.  It’s just not the whole story.

This piece is based on a talk given at Brighton’s Catalyst Club as part of the 2014 Brighton Science Festival.

Accurate metacognition for visual sensory memory

Image

I’m co-author on a new paper in Psychological Science – a collaboration between the Sackler Centre (me and Adam Barrett) and the University of Amsterdam (where I am a Visiting Professor). The new study addresses the continuing debate about whether the apparent rich content of our visual sensory scenes is somehow an illusion, as suggested by experiments like change blindness. Here, we provide evidence in the opposite direction by showing that metacognition (literally, cognition about cognition) is equivalent for different kinds of visual memory, including visual ‘sensory’ memory, which reflects brief, unattended stimuli. The results indicate that our subjective impression of seeing more than we can attend to is not an illusion, but is an accurate reflection of the richness of visual perception.

Accurate Metacognition for Visual Sensory Memory Representations.

The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition – the degree of knowledge that subjects have about the correctness of their decisions – for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.
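
For the curious, one standard way to quantify metacognition from data like these is the type-2 ROC area: the probability that a randomly chosen correct trial attracts higher confidence than a randomly chosen incorrect one. The sketch below uses simulated data and a generic estimator, which may differ in detail from the exact analysis used in the paper:

```python
# Sketch: type-2 AUROC, a generic measure of metacognitive accuracy.
import numpy as np

def type2_auroc(correct, confidence):
    """P(confidence on a correct trial > confidence on an incorrect trial),
    counting ties as half. 0.5 = no metacognition; 1.0 = perfect."""
    correct = np.asarray(correct, dtype=bool)
    confidence = np.asarray(confidence, dtype=float)
    hit, miss = confidence[correct], confidence[~correct]
    greater = (hit[:, None] > miss[None, :]).mean()
    ties = (hit[:, None] == miss[None, :]).mean()
    return greater + 0.5 * ties

rng = np.random.default_rng(0)
correct = rng.random(200) < 0.75                 # a 75%-accurate observer
confidence = np.where(correct,
                      rng.normal(3, 1, 200),     # more confident when right
                      rng.normal(2, 1, 200))
print(f"type-2 AUROC ~ {type2_auroc(correct, confidence):.2f}")  # > 0.5
```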

The 30 Second Brain

Image

This week I’d like to highlight my new book, 30 Second Brain,  published by Icon Books on March 6th.  It is widely available in both the UK and the USA.  To whet your appetite here is a slightly amended version of the Introduction.

[New Scientist have just reviewed the book]

Understanding how the brain works is one of our greatest scientific quests. The challenge is quite different from other frontiers in science. Unlike the bizarre world of the very small, in which quantum-mechanical particles can exist and not-exist at the same time, or the mind-boggling expanses of time and space conjured up in astronomy, the human brain is in one sense an everyday object: it is about the size and shape of a cauliflower, weighs about 1.5 kilograms, and has a texture like tofu. It is the complexity of the brain that makes it so remarkable and difficult to fathom. There are so many connections in the average adult human brain that if you counted one each second, it would take you over 3 million years to finish.

Faced with such a daunting prospect it might seem as well to give up and do some gardening instead.  But the brain cannot be ignored.  As we live longer, more and more of us are suffering  – or will suffer – from neurodegenerative conditions like Alzheimer’s disease and dementia, and the incidence of psychiatric illnesses like depression and schizophrenia is also on the rise. Better treatments for these conditions depend on a better understanding of the brain’s intricate networks.

More fundamentally, the brain draws us in because the brain defines who we are.  It is much more than just a machine to think with. Hippocrates, the father of western medicine, recognized this long ago:  “Men ought to know that from nothing else but the brain come joys, delights, laughter and jests, and sorrows, griefs, despondency, and lamentations.” Much more recently Francis Crick – one of the major biologists of our time  – echoed the same idea: “You, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules”.  And, perhaps less controversially but just as important, the brain is also responsible for the way we perceive the world and how we behave within it. So to understand the operation of the brain is to understand our own selves and our place in society and in nature, and by doing so to follow in the hallowed footsteps of giants like Copernicus and Darwin.

But how to begin? From humble beginnings, neuroscience is now a vast enterprise involving scientists from many different disciplines and almost every country in the world. The annual meeting of the ‘Society for Neuroscience’ attracts more than twenty thousand (and sometimes more than thirty thousand!) brain scientists each year, all intent on talking about their own specific discoveries and finding out what’s new. No single person – however capacious their brain – could possibly keep track of such an enormous and fast-moving field. Fortunately, as in any area of science, underlying all this complexity are some key ideas to help us get by. Here’s where this book can help.

Within the pages of this book, leading neuroscientists will take you on a tour of fifty of the most exciting ideas in modern brain science, using simple plain English. To start with, in ‘Building the brain’ we will learn about the basic components and design of the brain, and trace its history from birth (and before!), and over evolution. ‘Brainy theories’ will introduce some of the most promising ideas about how the brain’s many billions of nerve cells (neurons) might work together. The next chapter will show how new technologies are providing astonishing advances in our ability to map the brain and decipher its activity in time and space. Then in ‘Consciousness’ we tackle the big question raised by Hippocrates and Crick, namely the still-mysterious relation between the brain and conscious experience – how does the buzzing of neurons transform into the subjective experience of being you, here, now, reading these words? Although the brain basis of consciousness happens to be my own particular research interest, much of the brain’s work is done below its radar – think of the delicate orchestration of muscles involved in picking up a cup, or in walking across the room. So in the next chapter we will explore how the brain enables perception, action, cognition, and emotion, both with and without consciousness. Finally, nothing – of course – ever stays the same. In the last chapter – ‘The changing brain’ – we will explore some very recent ideas about how the brain changes its structure and function throughout life, in both health and in disease.

Each of the 50 ideas is condensed into a concise, accessible and engaging ’30 second neuroscience’.  To get the main message across there is also a ‘3 second brainwave’, and a ‘3 minute brainstorm’ provides some extra food for thought on each topic. There are helpful glossaries summarizing the most important terms used in each chapter, as well as biographies of key scientists who helped make neuroscience what it is today.  Above all, I hope to convey that the science of the brain is just getting into its stride. These are exciting times and it’s time to put the old grey matter through its paces.

Update 29.04.14.  Foreign editions now arriving!

Image

All watched over by search engines of loving grace

Image

Google’s shopping spree has continued with the purchase of the British artificial intelligence (AI) start-up DeepMind, acquired for an eye-watering £400M ($650M).  This is Google’s 8th biggest acquisition in its history, and the latest in a string of purchases in AI and robotics. Boston Dynamics, an American company famous for building agile robots capable of scaling walls and running over rough terrain (see BigDog here), was mopped up in 2013. And there is no sign that Google is finished yet. Should we be excited or should we be afraid?

Probably both. AI and robotics have long promised brave new worlds of helpful robots (think Wall-E) and omniscient artificial intelligences (think HAL), which remain conspicuously absent. Undoubtedly, the combined resources of Google’s in-house skills and its new acquisitions will drive progress in both these areas. Experts have accordingly fretted about military robotics and speculated about how DeepMind might help us make better lasagne. But perhaps something bigger is going on, something with roots extending back to the middle of the last century and the now forgotten discipline of cybernetics.

The founders of cybernetics included some of the leading lights of the age: John von Neumann (designer of the digital computer), Alan Turing, the British roboticist Grey Walter, and even people like the psychiatrist R.D. Laing and the anthropologist Margaret Mead. They were led by the brilliant and eccentric figures of Norbert Wiener and Warren McCulloch in the USA, and Ross Ashby in the UK. The fundamental idea of cybernetics was to consider biological systems as machines. The aim was not to build artificial intelligence per se, but rather to understand how machines could appear to have goals and act with purpose, and how complex systems could be controlled by feedback. Although the brain was the primary focus, cybernetic ideas were applied much more broadly – to economics, ecology, even management science. Yet cybernetics faded from view as the digital computer took centre stage, and has remained hidden in the shadows ever since. Well, almost hidden.

One of the most important innovations of 1940s cybernetics was the neural network: the idea that logical operations could be implemented in networks of brain-cell-like elements wired up in particular ways. Neural networks lay dormant, like the rest of cybernetics, until being rediscovered in the 1980s as the basis of powerful new ‘machine learning’ algorithms capable of extracting meaningful patterns from large quantities of data. DeepMind’s technologies are based on just these principles, and indeed some of their algorithms originate in the pioneering neural network research of Geoffrey Hinton (another Brit), whose company DNN Research was also recently bought by Google and who is now a Google Distinguished Researcher.
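
The original idea is still easy to appreciate: a single threshold unit, given suitable weights, implements a logical operation. Below is a minimal McCulloch-Pitts-style sketch in Python (for illustration only – nothing to do with DeepMind’s actual algorithms):

```python
# A McCulloch-Pitts-style threshold unit: brain-cell-like elements,
# suitably wired, implement logic.
def neuron(inputs, weights, threshold):
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

AND = lambda a, b: neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    neuron([a],    [-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}")
print("NOT:", NOT(0), NOT(1))
```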

What sets Hinton and DeepMind apart is that their algorithms reflect an increasingly prominent theory about brain function. (DeepMind’s founder, the ex-chess-prodigy and computer games maestro Demis Hassabis, set up his company shortly after taking a Ph.D. in cognitive neuroscience.) This theory, which came from cybernetics, says that the brain’s neural networks achieve perception, learning, and behaviour through repeated application of a single principle: predictive control. Put simply, the brain learns about the statistics of its sensory inputs, and about how these statistics change in response to its own actions. In this way, the brain can build a model of its world (which includes its own body) and figure out how to control its environment in order to achieve specific goals. What’s more, exactly the same principle can be used to develop robust and agile robotics, as seen in BigDog and its friends.
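
As a toy illustration of that principle, here is an agent that learns how its actions change its sensory input and then uses the learned model to steer towards a goal. The dynamics and parameters below are invented purely for illustration:

```python
# Toy predictive control: learn a model of action -> sensation, use it to act.
import numpy as np

rng = np.random.default_rng(1)
true_gain = 2.0          # how the world actually responds to an action
est_gain = 0.1           # the agent's (initially poor) internal model
state, goal = 0.0, 5.0

for _ in range(200):
    # Act: choose the action the model predicts will reach the goal.
    action = (goal - state) / max(est_gain, 1e-6)
    predicted = state + est_gain * action
    # Sense: the world responds according to its true (noisy) dynamics.
    state += true_gain * action + rng.normal(0, 0.1)
    # Learn: shrink the prediction error (normalized LMS update).
    est_gain += 0.5 * (state - predicted) * action / (1e-6 + action ** 2)

print(f"learned gain ~{est_gain:.2f} (true {true_gain}); "
      f"state ~{state:.2f} (goal {goal})")
```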

Put all this together and the cybernetic ideal of exploiting deep similarities between biological entities and machines resurfaces. These similarities go far beyond superficial (and faulty) assertions that brains are computers; rather, they recognize that prediction and control lie at the very heart of both effective technologies and successful biological systems. This means that Google’s activity in AI and robotics should not be considered separately, but instead as part of a larger view of how technology and nature interact: Google’s deep mind has deep roots.

What might this mean for you and me? Many of the original cyberneticians held out a utopian prospect of a new harmony between people and computers, well captured by Richard Brautigan’s 1967 poem – All Watched Over By Machines of Loving Grace – and recently re-examined in Adam Curtis’ powerful though breathless documentary of the same name. As Curtis argued, these original cybernetic dreams were dashed against the complex realities of the real world. Will things be different now that Google is in charge? One thing that is certain is that the simple idea of a ‘search engine’ will seem increasingly antiquated. As the data deluge of our modern world accelerates, the concept of ‘search’ will become inseparable from ideas of prediction and control. This really is both scary and exciting.

The limpid subtle peace of the ecstatic brain

Image

In Dostoevsky’s “The Idiot”, Prince Myshkin experiences repeated epileptic seizures accompanied by “an incredible hitherto unsuspected feeling of bliss and appeasement”, so that “All my problems, doubts and worries resolved themselves in a limpid subtle peace, with a feeling of understanding and awareness of the ‘Supreme Principle of life’”. Such ‘ecstatic epileptic seizures’ have been described many times since (usually with less lyricism), but only now is the brain basis of these supremely meaningful experiences becoming clear, thanks to remarkable new studies by Fabienne Picard and her colleagues at the University of Geneva.

Ecstatic seizures, besides being highly pleasurable, involve a constellation of other symptoms including an increased vividness of sensory perceptions, heightened feelings of self-awareness – of being “present” in the world – a feeling of time standing still, and an apparent clarity of mind where all things seem suddenly to make perfect sense. For some people this clarity involves a realization that a ‘higher power’ (or Supreme Principle) is responsible, though for atheists such beliefs usually recede once the seizure has passed.

In the brain, epilepsy is an electrical storm. Waves of synchronized electrical activity spread through the cortex, usually emanating from one or more specific regions where the local neural wiring may have gone awry.  While epilepsy can often be treated by medicines, in some instances surgery to remove the offending chunk of brain tissue is the only option. In these cases it is now becoming common to insert electrodes directly into the brains of surgical candidates, to better localize the ‘epileptic focus’ and to check that its removal would not cause severe impairments, like the loss of language or movement.  And herein lie some remarkable new opportunities.

Recently, Dr. Picard used just this method to record brain activity from a 23-year-old woman who has experienced ecstatic seizures since the age of 12. Picard found that her seizures involved electrical brain-storms centred on a particular region called the ‘anterior insula cortex’.  The key new finding was that electrical stimulation of this region, using the same electrodes, directly elicited ecstatic feelings – the first time this has been seen. These new data provide important support for previous brain-imaging studies which have shown increased blood flow to the anterior insula in other patients during similar episodes.

The anterior insula (named from the Latin for ‘island’) is a particularly fascinating lump of brain tissue. We have long known that it is involved in how we perceive the internal state of our body, and that these perceptions underlie emotional experiences. More recent evidence suggests that the subjective sensation of the passing of time depends on insular activity. It also seems to be the place where perceptions of the outside world are integrated with perceptions of our body, perhaps supporting basic forms of self-consciousness and underpinning how we experience our relation to the world. Strikingly, abnormal activity of the insula is associated with pathological anxiety (the opposite of ecstatic ‘certainty’) and with symptoms of depersonalization and derealisation, where the self and world are drained of subjective reality (the opposite of ecstatic perceptual vividness and enhanced self-awareness). Anatomically the anterior insula is among the most highly developed brain regions in humans when compared to other animals, and it even houses a special kind of ‘Von Economo’ neuron. These and other findings are motivating new research, including experiments here at the Sackler Centre for Consciousness Science, which aim to further illuminate the role of the insula in weaving the fabric of our experienced self. The finding that electrical stimulation of the insula can lead to ecstatic experiences and enhanced self-awareness provides an important advance in this direction.

Picard’s work brings renewed scientific attention to the richness of human experience, the positive as well as the negative, the spiritual as well as the mundane. The finding that ecstatic experiences can be induced by direct brain stimulation may seem both fascinating and troubling, but taking a scientific approach does not imply reducing these phenomena to the buzzing of neurons. Quite the opposite: our sense of wonder should be increased by perceiving connections between the peaks and troughs of our emotional lives and the intricate neural conversations on which they, at least partly, depend.