Would you have an IQ-boosting microchip implanted in your brain if you had the chance? What if everyone else around you did? Imagine your work colleagues outperforming you, and your friends having conversations you can’t quite follow. Would you upgrade your brain then? Should you?
It sounds like science fiction, but it’s not such an outlandish idea. Earlier this year, an announcement from tech entrepreneur Elon Musk’s company Neuralink caught the attention of the world’s media.
A range of different ways to link brain signals and computers – brain-computer interfaces, or BCIs – already exist. But Neuralink has refined the technology using impressively small, super-thin, flexible micro-electrodes, which enable a tiny device implanted in the brain to read (and potentially write) neural signals. The company has trialled this in monkeys, and is seeking to trial it in humans.
So far, research has focused on the many possible medical applications for BCIs, but Neuralink also wants to create a device that can be used by healthy people for brain improvement. Cognitive enhancement could become the Botox of the future.
But although a tuned-up brain could expand human possibilities, some experts are already warning of the dangers that may lie ahead. Brain enhancement of healthy individuals is not yet possible, but Dr Davide Valeriani is one expert who thinks that it could become an option within his lifetime.
“All big companies are interested in jumping into brain-computer interfaces,” explains Valeriani, a postdoctoral researcher in BCIs at Harvard Medical School in the US.
He lists Amazon, Facebook and Microsoft, as well as agencies such as the US military. “If big companies work on this then we can push the research. They have more resources.”
As well as the technical challenges associated with implanting a chip in the brain, Valeriani points out that there are other, more intangible problems to solve.
The benefits of BCIs in helping individuals who are paralysed or brain-damaged are clear to see, but the advantages for healthy individuals would have to be extra special to outweigh the risks of the invasive surgery, and overcome a range of ethical dilemmas.
Potential problems include the possibility of ‘brain-hacking’ (a person or agency somehow taking control of the chip or accessing data), and ethical compromises if the technology is used experimentally in countries with a poor human rights record.
In addition, the animal experiments needed to develop the technology are one thing when viewed in the context of helping people with paraplegia, and quite another when viewed in the context of souping up the brain.
So what are the potential benefits? Why would we want electrodes implanted in our brains? Valeriani is working on improving decision-making, especially decisions that might have a large negative impact if we get them wrong, for example a doctor misdiagnosing a medical problem, or a soldier making a bad choice in a military situation.
“What I see as the advantage of BCIs is that you keep the human in the loop,” says Valeriani. Rather than handing over all our decision-making to artificial intelligence (AI), BCIs could assist us with our dilemmas, helping to modulate and correct our inherent biases and blind spots.
“If a human is assisted by a BCI and then there is another completely independent machine that makes a decision based on the same information, you can merge the two decisions together, and we showed recently that they work better together than each of them alone,” says Valeriani.
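Valeriani’s published method isn’t reproduced here, but the core idea of merging a human decision with an independent machine decision can be sketched as a simple confidence-weighted vote. All names, labels and numbers below are illustrative, not taken from his research:

```python
# A minimal sketch (not Valeriani's actual algorithm): fusing a human's
# decision, weighted by a BCI-derived confidence estimate, with an
# independent machine classifier's decision.

def fuse_decisions(human_choice, human_confidence,
                   machine_choice, machine_confidence):
    """Confidence-weighted vote between a human and a machine decision.

    Choices are labels (e.g. 'present' / 'absent');
    confidences are values in the range [0, 1].
    """
    scores = {}
    scores[human_choice] = scores.get(human_choice, 0.0) + human_confidence
    scores[machine_choice] = scores.get(machine_choice, 0.0) + machine_confidence
    return max(scores, key=scores.get)

# When the two agree, the fused decision follows them both;
# when they disagree, the more confident source wins.
print(fuse_decisions("present", 0.8, "absent", 0.6))  # → present
print(fuse_decisions("present", 0.4, "absent", 0.7))  # → absent
```

The point of the fusion is exactly what Valeriani describes: neither the human nor the machine is handed sole authority, and the combined decision can outperform either source alone.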
Research is also advancing BCI-assisted communication. “BCIs are not reading thoughts,” says Valeriani, “they’re looking for patterns.” Computers can be trained to recognise patterns of brain activity that occur when, for example, we’re thinking about a certain object, or willing a particular body part to move.
It’s this technology that allows people to move prosthetic limbs with their thoughts. Getting better at this pattern recognition might eventually allow us to identify the specific contents of people’s thoughts, opening up a whole world of possibilities such as telepathic communication, updating our Facebook statuses with our minds, or driving our cars by thought.
We’re far from the required resolution yet, but the potential is there.
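The pattern-matching Valeriani describes can be illustrated with a toy example. Real BCI pipelines process noisy multi-channel recordings, but the principle – learn the average activity pattern for each intended action, then label new activity by which average it most resembles – can be shown with a nearest-centroid classifier on entirely synthetic data (all patterns and labels below are made up):

```python
# A toy illustration, not a real BCI pipeline: recognising simulated
# 'brain activity' feature vectors with a nearest-centroid classifier.
# All training data is synthetic noise around two invented templates.
import random

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(pattern, centroids):
    """Return the label whose centroid is closest (Euclidean) to the pattern."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(pattern, centroids[label]))

random.seed(0)
# Invented templates standing in for 'imagine moving left hand'
# vs 'imagine moving right hand'.
templates = {"left": [1.0, 0.2, 0.8], "right": [0.1, 0.9, 0.3]}
training = {
    label: [[x + random.gauss(0, 0.1) for x in tpl] for _ in range(20)]
    for label, tpl in templates.items()
}
centroids = {label: centroid(vs) for label, vs in training.items()}

# A new noisy 'left'-like pattern is recognised as 'left'.
print(classify([0.95, 0.25, 0.75], centroids))  # → left
```

The gap between this sketch and a working BCI is, of course, the hard part: real neural signals are high-dimensional, noisy and vary from person to person, which is why the article notes we are far from the required resolution.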
Another possibility is computer-based memory extension. We already treat our computers and smartphones as a kind of memory aid, using them to store our work, photos, calendars and conversations.
What if BCIs could one day increase the amount of memory that is instantly available to our brain, allowing us to store memories of everything we’ve ever experienced, and never forget a face or a name?
Last year, researchers led by Dr Robert Hampson at the Wake Forest School of Medicine, North Carolina, successfully improved people’s short-term memory by directly stimulating brain cells in their hippocampus – an area of the brain involved in memory.
The scientists recorded the pattern of brain cell activity during remembering, and then used the same pattern to stimulate the cells while a memory task was being carried out – increasing performance by over 35 per cent.
The participants in this experiment were epilepsy patients who were already having electrodes implanted in order to monitor their seizures, but the scientists are hoping to develop this technology to help with dementia, and it may one day find its way into BCIs for healthy individuals, too.
The tech behind Neuralink, Elon Musk’s brain-reading machine
Neuralink’s new ‘N1’ sensor fits into a case measuring 8mm in diameter and 4mm in height.
All of the components of the Neuralink are stacked inside the case and hermetically sealed.
Each sensor is connected to 1,024 flexible, thread-like electrodes capable of reading and writing to nerve cells (neurons) in the brain. Each thread is about a tenth of the width of a human hair.
The flexible electrodes are individually inserted into the brain’s outer layer (cortex) through an 8mm-wide hole in the skull, using a high-precision surgical robot with a 24-micrometre needle (one micrometre = one-thousandth of a millimetre).
The sensors are inserted through the same hole, with the skin being closed up over them. Up to 10 sensors could be implanted, meaning as many as 10,000 electrodes.
The sensors are connected to an induction coil beneath the skin behind the ear, via thin wires tunnelled under the scalp.
The induction coil connects through the skin to a wearable device called ‘The Link’, which sits behind the ear, powering the implanted sensors and relaying their signals to external devices via Bluetooth.
Although the technology isn’t there yet, Valeriani thinks that a removable device would be a better option for healthy individuals, so that it could be kept outside the body and switched off when necessary. “So if we don’t want to use it, we don’t have to… we’d be able to separate ‘what is me’ from ‘what is technology’.”
The question ‘what is me?’ takes BCIs into the realm of philosophy. Dr Susan Schneider, a philosopher and cognitive scientist at the University of Connecticut, is interested in the links between future technology, the mind and the self.
“Imagine walking into a mind design centre of the future, like a cosmetic neurology centre, and seeing a menu in front of you with all these enhancements,” she says. She imagines being able to reach the meditative states of a Zen master, or gain the musical abilities of Mozart – or even sculpt your personality.
“I understand the pull of all of this,” says Schneider. “But if you decide you’re going to purchase a bunch of these enhancements … is the person who emerges truly you?” She thinks there will come a point when a person replaces so much of their brain with artificial components that they’ve actually killed themselves without realising it.
This riffs on classic thought experiments. How much of our brain do we need to keep in order to be the same person? If we suddenly lose our memories, does this mean we’re not us? What about if a brain injury affects our personality? What makes me ‘me’?
University of Sussex neuroscientist Prof Anil Seth likes to think about the potential problems around BCIs in terms of a ‘worry budget’. As we only have so much worry to go round, he argues, we should spend it on more immediate concerns related to our use of technology, such as social media algorithms, which influence what we see online and therefore our behaviour.
In the realm of decision-making, he thinks we need to consider, now, who will be responsible if future AI helps us to make a bad decision.
He’s not so worried about AI becoming conscious, or about us accidentally modifying ourselves out of existence. “I’m not sure I’d worry too much specifically about no longer being the same person,” he says. “We are always changing who we are, even if we do not perceive this.”
Seth is also concerned about equality. “We can get caught up in the technological and scientific excitement, but equality of access is important. We could start to see people who are developing and purchasing this stuff pull apart from the rest of the population. That’s something that ought to keep people up at night.”
Whatever we choose to spend our personal worry budgets on, BCI technology is advancing fast. Some transhumanists (who think technology should be used to enhance the human condition) have already implanted microchips in their bodies to act as door keys or credit cards.
The technology to implant microchips in the brain, or perhaps a non-invasive, removable alternative, is not inconceivably far away.
Whatever happens next, Valeriani, Schneider and Seth agree that we need to keep the ethical and philosophical dilemmas in mind as the technology evolves.
The answer to whether we should upgrade our brain relates to bigger ‘shoulds’ about fairness, responsibility and who we consider ourselves to be. Perhaps it also relates to what sort of people we want to be. Is it enough to want to be upgraded? Or should we be upgrading our social and ethical ambitions instead?
This article first appeared in the September 2019 issue of BBC Science Focus Magazine.
Brain-computer interfaces: the story so far
The first brain-computer interface (BCI) is created by Jacques Vidal at the University of California, Los Angeles. He uses non-invasive electroencephalogram (EEG) recordings of brain activity to communicate with a computer.
Researchers in the former Yugoslavia use EEG brain signals to control a physical object for the first time – issuing commands to a robot by simply opening and shutting their eyes.
A 100-electrode device called the Utah Array is invented by Richard A Normann. It can be implanted in the brain to stimulate brain cells, or to record their output to electronic circuitry.
Deep brain stimulation, which involves implanting electrodes into the brain, is approved by the US Food and Drug Administration for the treatment of the tremors of Parkinson’s disease.
Researchers at Duke University, North Carolina, develop a BCI that can decode brain activity in monkeys and reproduce the monkeys’ arm movements in a robot.
Matthew Nagle becomes the first person to control an artificial hand using thought. Paralysed from the neck down, Nagle also uses the brain-reading technology – developed by Massachusetts-based company Cyberkinetics – to play games, operate a TV and access emails.
Another paralysed man, Nathan Copeland, is the first to be given a sense of touch through a mind-controlled robotic arm, thanks to a BCI developed at the University of Pittsburgh that stimulates the sensory region of the brain.
Elon Musk unveils Neuralink’s plans for its advanced BCI technology, which involves using a specially built surgical robot to insert thousands of flexible, thread-like electrodes into the brain.