A brain-machine interface is just a way of understanding the brain's signals and converting them into something we can use to manipulate our surroundings, such as a mobile phone or computer. In theory, the same interface could also be used to feed signals back into the brain, effectively augmenting our brain's capabilities. Now, there are plenty of examples of this functionality in science fiction, but is this something we could realistically do? Well, the answer is yes, absolutely, but it won't be easy.
How the Brain Works
So the human brain is made up of approximately 86 billion neurons. Neurons are just one of many different types of cells in your brain.
The basic function of a neuron is to send and receive information. Neurons come in many different types, but they generally have three parts in common: dendrites, which receive a signal; a cell body called a soma, which computes the signal; and an axon, which sends a signal out either to other neurons or to other specialised types of cells.
The signal that is received and sent through a neuron is actually electrical. All of our thoughts, feelings, emotions, and actions are computed from these electrical signals within the brain.
Neurons are connected to one another via something called a synapse. At one side of a synapse is an axon, and at the other is a dendrite. When an electrical signal is sent down an axon, it stimulates the axon to release chemicals referred to as neurotransmitters, effectively converting the electrical energy from the axon into chemical energy. The neurotransmitters travel across the synapse and open channels in the dendrite, causing an influx of ions. This changes the potential across the membrane and converts the chemical energy back into electrical energy within the dendrite. Don't worry too much about the details; just know that this process transfers the electrical signal from one neuron to another.
If a neuron receives a strong enough signal from its dendrites, then it fires something called an action potential, passing the signal on to the next neuron. An action potential is known as an all-or-nothing response: if the stimulus does not produce a large enough response, then an action potential will not be fired, and the signal will stop there. However, if the stimulation threshold is crossed, then an action potential will be fired, propagating the signal further.
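This threshold behaviour can be sketched with a toy leaky integrate-and-fire model. The parameters here are illustrative, not physiological:

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron.

    The membrane potential accumulates incoming stimuli and leaks over
    time; an action potential (spike) fires only when the potential
    crosses the threshold -- the all-or-nothing response.
    """
    potential = 0.0
    spikes = []
    for stimulus in inputs:
        potential = potential * leak + stimulus  # integrate with leak
        if potential >= threshold:
            spikes.append(1)   # threshold crossed: fire
            potential = 0.0    # reset after the action potential
        else:
            spikes.append(0)   # sub-threshold: signal stops here
    return spikes

# Weak inputs leak away and never fire; stronger input crosses the threshold.
print(integrate_and_fire([0.2, 0.2, 0.2]))  # → [0, 0, 0]
print(integrate_and_fire([0.6, 0.6]))       # → [0, 1]
```

Notice there is no "half spike": the output is either a full action potential or nothing, which is exactly the all-or-nothing property described above.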
This is a simplified view of how the neurons in your brain work, and it is the basis of how we are able to see, think, move, and feel; basically everything you do is driven by these electrical signals.
Now because these are electrical signals, if we place electrodes next to these axons, we’re able to detect when an action potential is fired, and this is what Neuralink is taking advantage of.
Now, we are a long way away from creating a brain-machine interface for the whole brain. So rather than attempting that, Neuralink is focusing on specific areas.
Particular areas of the brain are associated with different tasks. So for example, our occipital lobe is associated with visual processing, whereas our frontal cortex is associated with higher cognitive functions such as memory, emotions, and impulse control.
There is an interesting story about how we first discovered this, involving a man called Phineas Gage.
Phineas Gage was a foreman of a crew working on the railroad in Cavendish, Vermont in 1848, when tragedy struck. As he was using a tamping iron to pack explosive powder into a hole, the powder detonated. The tamping iron shot skyward, penetrated Gage's left cheek, ripped into his brain and exited through his skull, landing several dozen feet away.
Incredibly, Phineas survived this incident and was left only blinded in his left eye. This alone may have afforded him some degree of celebrity, but Phineas' name has been etched into the history books due to the observations of Dr John Martyn Harlow in the months following the incident. Gage's friends described him as "no longer Gage". They found the balance between his "intellectual faculties and animal propensities" almost non-existent. He could not stick to plans, uttered "the grossest profanity" and showed "little deference for his fellows." From this, considering Phineas had lost much of his frontal cortex in the incident, it was presumed that this area of the brain may be responsible for higher cognitive functions such as memory, emotions, and impulse control.
Now, I must say, for me that's a bit of a stretch; after having had half my face blown off by an iron rod, I might be a little grumpy and impulsive too. But subsequent to this incident, many studies have found the same association, and it is this principle that formed the basis for the horrific practice of the frontal lobotomy, a topic I won't cover, but if you've ever read One Flew Over the Cuckoo's Nest or watched Shutter Island, you'll know what I'm talking about.
Anyway, enough of the Family Guy-style segue. Back to areas of the brain. The particular area Neuralink is interested in is known as the primary motor cortex, a strip of the frontal lobe just in front of the central sulcus. This area is responsible for the motor functions of our body, so when you go to move your arm, the movement originates from this area of the brain.
Neuralink is designing an implant that detects these electrical signals in the brain, with the intention of allowing you to control a computer or mobile device using only your thoughts.
The implant, which the company calls the Link, is made up of two parts. The first part is a small, sealed implantable device that transmits the detected signals. The signals themselves are picked up by tiny, micron-scale threads containing hundreds of electrodes.
The threads are inserted into areas of the brain that control movement, i.e. the primary motor cortex. Each thread contains many electrodes which detect the electrical impulses within the axons of neurons and relay these back to the Link transmitter.
The device is placed entirely inside the head, so no part is exposed to charge it. The device is therefore charged using an inductive charger, allowing it to be charged from outside the body like modern smartphones are.
The threads themselves are so fine, and need to be placed with such accuracy, that they can't be inserted by hand, even the steady hand of a neurosurgeon. So the team at Neuralink is building a robot, based on a prototype initially developed at the University of California, to allow for the accurate and efficient insertion of the threads.
How Neuralink Differs
So it’s important to note that Neuralink’s BMI is not the first interface ever to be made. Neuralink’s technology builds on decades of BMI research in academic labs, including several ongoing studies with human participants.
There are currently only a few approved devices for recording or stimulating from the human brain, including devices for deep brain stimulation (DBS), which can treat neurological disorders such as Parkinson’s Disease, and devices for the detection and disruption of seizures.
These are designed to modulate the activity of whole brain areas, rather than transferring information to and from the brain. Therefore, they only have a small number of electrodes (fewer than 10) and are much larger than the threads in the Neuralink device. For example, DBS leads have only 4–8 electrodes and are about 800 times larger than the Link's threads.
There are other devices being used in clinical trials for BMI movement control or sensory restoration. However, none of these devices has more than a few hundred electrodes, and they are all either placed on the surface of the brain or in fixed arrays of single rigid electrodes. The Link is being designed with an order of magnitude more electrodes and with flexible threads that are individually placed to avoid blood vessels and to best cover the brain region of interest.
The whole point of Neuralink’s device is to shift the industry ahead by an order of magnitude, providing a level of accuracy and functionality so far unseen with this type of device.
What stage are they at?
So the implant has not yet been tested in humans, but it has been implanted into a pig. One pig, called Gertrude, was showcased at a Neuralink event presented by Elon Musk.
In this video, we can see Gertrude, who has had the device implanted and the electrodes placed in a region of the brain that detects sensory information from the snout. This is a particularly large area of the pig's brain, as a pig's snout is exceptionally sensitive to stimuli.
We can see the white spots above, which are the spikes from the individual electrodes, while the blue area shows when a large enough signal is detected to warrant a response from the device.
This video shows Neuralink has created an implant device that can deliver brain recordings to a computer in real-time while the brain’s owner is moving around and interacting with the world.
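The idea of flagging a response only when a large enough signal is detected can be sketched as a simple threshold-crossing spike detector. This is an illustrative toy, not Neuralink's actual algorithm; real systems use adaptive thresholds and waveform matching:

```python
def detect_spikes(trace, threshold):
    """Return the indices where a recorded voltage trace crosses the
    threshold on a rising edge -- a minimal sketch of spike detection."""
    spikes = []
    for i in range(1, len(trace)):
        # Rising edge: previous sample below the threshold, current at or above.
        if trace[i - 1] < threshold <= trace[i]:
            spikes.append(i)
    return spikes

# A toy voltage trace with two clear spikes poking above the noise floor.
trace = [0.1, 0.2, 1.5, 0.3, 0.1, 2.0, 0.2]
print(detect_spikes(trace, threshold=1.0))  # → [2, 5]
```

Requiring a rising edge, rather than just any sample above the threshold, means each spike is counted once even if it stays above the threshold for several samples.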
That’s a pretty big step forward, and it’s definitely an element that has been missing from the research on brain-computer interfaces thus far. While some other wireless brain implants exist, they require major surgeries to implant and are typically either bulky or limited in where in the brain they can be placed.
There is a lot of research on how to decode data from the brain and the readings generated from more traditional brain-monitoring devices, but we don’t have good ways to collect that data.
So if Neuralink can get this device into humans, and it works, that would be hugely exciting for researchers.
The Future of Neuralink Technology
But what is the future of Neuralink technology? In the short term, we can expect to see such technology revolutionise the treatment of many debilitating neurological conditions. But of course, what we all want to know is: when will I be able to download knowledge directly into my brain, Matrix-style?
Well, unfortunately, no time soon I’m afraid.
Elon Musk, though, has far more ambitious goals, discussing his dream of ultimately merging human and artificial intelligence (AI). This is certainly not trivial, and we are leaps and bounds away from it being a reality.
Today, most brain-machine interfaces use an approach called “biomimetic” decoding. First, brain activity is recorded while the user imagines various actions such as moving their arm left or right. Once we know which brain cells prefer different directions, we can “decode” subsequent movements by tallying their action potentials like votes. This is exactly what an app developed by Neuralink does, in the hope of training their model around the responses within the individual’s brain.
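The vote-tallying idea can be sketched as a classic population-vector decoder. Everything below is hypothetical: the cells, their preferred directions, and the spike counts are made up for illustration:

```python
import math

def decode_direction(spike_counts, preferred_angles):
    """Population-vector decoding: each cell 'votes' for its preferred
    movement direction (in radians), weighted by how many action
    potentials it fired; the vector sum gives the decoded direction."""
    x = sum(n * math.cos(a) for n, a in zip(spike_counts, preferred_angles))
    y = sum(n * math.sin(a) for n, a in zip(spike_counts, preferred_angles))
    return math.atan2(y, x)

# Three hypothetical cells tuned to right (0°), up (90°), and left (180°).
prefs = [0.0, math.pi / 2, math.pi]

# The 'up' cell fires most, and the left/right votes cancel out,
# so the decoded direction comes out at roughly 90 degrees.
angle = decode_direction([2, 10, 2], prefs)
print(math.degrees(angle))
```

Calibration, in this picture, is just the process of estimating each cell's preferred angle from recordings made while the user imagines known movements.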
This approach works adequately for simple movements, but can it ever generalise to more complex mental processes? Even if Neuralink could sample enough of the 86 billion neurons in my brain, how many different thoughts would I first have to think to calibrate a useful mind-reading device, and how long would that take? Does my brain activity even sound the same each time I think the same thought? And when I think of, say, creating a human settlement on Mars, does my brain sound anything like Elon’s?
The evidence so far suggests not. Our brains have a property called plasticity: no two brains are the same, and they are moulded by external factors and change over time. This means that trying to map individuals' responses onto some sort of generalised brain API would be incredibly difficult, due to the variability within each individual.
Some researchers hope that AI can sidestep these problems, in the same way it has helped computers to understand speech. Perhaps, given enough data, AI could learn to understand the signals from anyone's brain. However, unlike thoughts, language evolved for communication with others, so different speakers share common rules such as grammar and syntax.
While the large-scale anatomy of different brains is similar, at the level of individual brain cells, we are all unique. Recently, neuroscientists have started exploring intermediate scales, searching for structure in the activity patterns of large groups of cells. Perhaps, in future, we will uncover a set of universal rules for thought processes that will simplify the task of mind reading. But the state of our current understanding offers no guarantees.
Alternatively, we might exploit the brain’s own intelligence. Perhaps we should think of brain-machine interfaces as tools that we have to master, like learning to ride a bike. When people are shown a real-time display of the signal from individual cells in their own brain, they can often learn to increase or decrease that activity through a process called neurofeedback. So perhaps creating a brain-machine interface will be like any skill, taking time to master, with some people perhaps more attuned to it than others.
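As a toy illustration of that feedback loop (all parameters here are hypothetical), the displayed gap between the current and target activity can drive a gradual adjustment, much as a learner corrects towards what the display shows:

```python
def neurofeedback_session(initial_rate, target_rate, learning_rate=0.3, steps=20):
    """Toy neurofeedback loop: the error between the current and target
    firing rate is 'displayed' to the user, who nudges their activity a
    fraction of the way towards the target on each step."""
    rate = initial_rate
    history = [rate]
    for _ in range(steps):
        feedback = target_rate - rate      # what the real-time display shows
        rate += learning_rate * feedback   # the user's learned adjustment
        history.append(rate)
    return history

# Starting at 5 Hz with a 20 Hz target, the rate converges over the session.
history = neurofeedback_session(initial_rate=5.0, target_rate=20.0)
print(history[0], history[-1])
```

The `learning_rate` stands in for how quickly a given person picks up the skill, which is the sense in which some people may be more attuned to it than others.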
Now, that only covers using our brains to affect machinery. When it comes to influencing, rather than reading, the brain, in an Inception-like process, the challenges are orders of magnitude greater.
Electrical stimulation will activate many neurons around each electrode. But cells with different roles are mixed together within the brain, so it is hard to produce a meaningful experience. For example, stimulating visual areas of the brain may allow blind people to perceive flashes of light, but we are still far from reproducing even simple visual scenes.
But if the last decade has shown us anything, it's that Elon Musk is not a man you want to bet against. The work Neuralink is doing has incredible potential and is already providing insights that were previously impossible to record with other BMI devices.
But the brain does not yield its secrets easily, and I think a true bidirectional brain-machine interface on this scale is still decades away. Neuralink, though, is doing an exciting job of bringing it closer to reality.