Engineering a Brain-Computer Interface


Many ideas that were purely the stuff of science fiction have started to become a reality since the turn of the millennium: flying cars, “replicators,” even the tricorder—Star Trek’s multipurpose medical device. Developments in the last few years add another technology to that list: brain-computer interfaces (BCIs).

BCIs connect computers and electronic devices to the brain’s electrically charged neural pathways, allowing the devices and the brain to communicate with each other by sending electrical signals back and forth. Establishing this connection lets the human brain control devices simply by thinking. Those devices can also send information to the brain, which is interpreted as sensory inputs.

A BCI could allow a human to control a robotic arm in exactly the same way they control their other limbs, or transmit data about the color of an object to the parts of the brain that interpret color, restoring sight to those with visual impairments. These are just the most obvious examples. BCIs could also enable such fanciful sci-fi concepts as telepathy, telekinesis, cognitive enhancement and perhaps even the merging of human and artificial intelligence.

It’s an exciting field for engineering, and one that’s been a long time coming.

A Brief History of BCIs

The term ‘brain-computer interface’ dates all the way back to the 1970s, when research into the concept first began at the University of California, Los Angeles. It wasn’t until the 1980s that reports first emerged of the capacity to control a robot arm using an electroencephalogram (EEG), the same technology doctors use to monitor brain activity in their patients.

Years of experimentation with the technology using animals led to the first tests of rudimentary neuroprosthetic devices on humans in the 1990s.

Behavioral setup and control loops for an early BCI, consisting of the data acquisition system, the computer running multiple linear models in real time, the robot arm equipped with a gripper, and the visual display. The pole was equipped with a gripping force transducer. Robot position was translated into cursor position on the screen, and feedback of the gripping force was provided by changing the cursor size. (Image courtesy of Carmena et al. 2005)
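The visual feedback loop the caption describes can be sketched in a few lines. The function below is purely illustrative (the screen dimensions, force range and cursor radii are assumptions, not values from Carmena et al.): robot position maps to cursor position, and gripping force maps to cursor size.

```python
def feedback(robot_xy, grip_force, screen=(800, 600), workspace=1.0,
             min_radius=5.0, max_radius=40.0, max_force=10.0):
    """Map robot state to on-screen feedback (illustrative values only).

    robot_xy   -- robot position in workspace units, centered at (0, 0)
    grip_force -- gripping force reading from the transducer
    Returns (cursor_x, cursor_y), cursor_radius.
    """
    # Translate robot position into cursor position on the screen.
    x = robot_xy[0] / workspace * screen[0] / 2 + screen[0] / 2
    y = robot_xy[1] / workspace * screen[1] / 2 + screen[1] / 2
    # Translate gripping force into cursor size.
    frac = min(max(grip_force / max_force, 0.0), 1.0)
    radius = min_radius + frac * (max_radius - min_radius)
    return (x, y), radius
```

With these assumed parameters, a robot at the center of its workspace with no grip force produces a small cursor at the center of the screen, and a full-strength grip grows the cursor to its maximum size.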

Until recently, the technology behind BCIs remained extremely unwieldy because the devices had to be hardwired to massive computer mainframes. And while BCIs are still a bit awkward due to the size of the equipment, advances in wireless technology and computing power have made them a much less cumbersome technology.

Some astounding innovations in this area of research have happened just in the past few years, including a project by engineering and computer science students at the University of Florida that saw them create an interface that lets people pilot a small aerial drone with their thoughts and research at the University of Washington that had participants playing video games via direct brain stimulation.

With companies like Elon Musk’s Neuralink and Bryan Johnson’s Kernel working to accelerate BCI research, the technology has nowhere to go but up. However, there are many engineering challenges to overcome along the way.

 

Engineering and the Brain

(Image courtesy of Kernel.)

Perhaps the most significant engineering challenge facing those researching BCIs is the human brain itself. Weighing an average of three pounds, the human brain contains 80 to 100 billion neurons, each of which is connected chemically and electrically with approximately 10,000 others via synapses, forming the most complex object in the known universe.

With between 100 trillion and 1,000 trillion synapses, our brains have more neural interconnections than there are stars in the entire Milky Way.

Thus, the first challenge is understanding which of these trillions of connections are most suitable as a point of contact for a computer interface. This means we also need to overcome the current limitations in electrode arrays for the implants to interact with brain tissue. A typical electrode array consists of 16-96 electrodes, and only 20 to 30 percent of those draw any meaningful information from the brain when they’re implanted.

A cap holds electrodes in place while recording an EEG. (Image courtesy of Wikimedia Commons.)

Although EEG signals have been used with some success in the past, they’re prone to interference, since they measure only tiny voltage potentials. As a result, the process of reading brain signals with current technology tends to be like a “bad phone connection,” filled with lots of static.
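That “bad phone connection” is easy to simulate. The sketch below uses made-up but plausibly scaled numbers (a 10-microvolt, 10 Hz rhythm buried in noise several times larger) and a deliberately crude FFT band-pass filter; real EEG pipelines are far more sophisticated, but the principle of pulling a tiny voltage out of static is the same.

```python
import numpy as np

# Illustrative numbers only: a 10 Hz "alpha" rhythm of ~10 microvolts
# buried in broadband noise roughly three times larger in amplitude.
rng = np.random.default_rng(0)
fs = 250                          # sample rate in Hz, typical for EEG
t = np.arange(0, 2.0, 1 / fs)     # two seconds of data
signal = 10e-6 * np.sin(2 * np.pi * 10 * t)   # the rhythm we want
noise = 30e-6 * rng.standard_normal(t.size)   # the "static"
raw = signal + noise

# Crude band-pass via FFT: zero out everything outside 8-12 Hz.
spectrum = np.fft.rfft(raw)
freqs = np.fft.rfftfreq(raw.size, 1 / fs)
spectrum[(freqs < 8) | (freqs > 12)] = 0
filtered = np.fft.irfft(spectrum, n=raw.size)

# The filtered trace tracks the true rhythm far better than the raw one.
corr_raw = np.corrcoef(raw, signal)[0, 1]
corr_filt = np.corrcoef(filtered, signal)[0, 1]
```

Running this, the raw recording correlates only weakly with the underlying rhythm, while the filtered version recovers it quite well, which is why so much of the engineering effort in noninvasive BCIs goes into signal processing rather than the electrodes themselves.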

Add to that the fact that EEGs only pick up the brain’s electrical functions, completely ignoring all the important neurochemical processes, and you’ve got a technology that’s far from perfect.

 

Engineering a Brain-Machine Interface

While it’s currently impossible to build a machine that comes even close to the complexity of the human brain, the interface between brains and machines need not be quite so advanced. However, anything connected to the brain will need to have some capacity to learn and adapt to its inputs.

As biomedical engineering student Subash Padmanaban writes on Quora, the process of mapping neural patterns is non-linear and non-stationary, so the computers used in devices such as BCI prosthetics need complex, adaptable learning algorithms that can capture the dynamics of the relationship between input and output.
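A toy version of that adaptive idea can be sketched in a few lines. Everything here is simulated and illustrative (the channel count, learning rate and “tuning” are assumptions, and real decoders must cope with non-stationarity that this noiseless example lacks): an online linear decoder updates its weights on every sample of simulated firing rates, so its velocity predictions improve as it adapts.

```python
import numpy as np

# Simulated setup: 32 recording channels, a hidden "true" tuning that
# relates each channel's firing rate to an intended 2-D cursor velocity.
rng = np.random.default_rng(42)
n_channels = 32
true_tuning = rng.standard_normal((n_channels, 2))

W = np.zeros((n_channels, 2))   # decoder weights, learned online
lr = 0.01                       # learning rate

errors = []
for step in range(2000):
    rates = rng.standard_normal(n_channels)   # simulated firing rates
    target = rates @ true_tuning              # intended velocity
    predicted = rates @ W                     # decoder's current guess
    err = predicted - target
    W -= lr * np.outer(rates, err)            # stochastic gradient step
    errors.append(float(np.mean(err ** 2)))

# The decoder's error should shrink as it adapts to the incoming data.
early = np.mean(errors[:100])
late = np.mean(errors[-100:])
```

The error over the first hundred samples is large, and over the last hundred it is close to zero: the decoder has learned the mapping from activity to movement purely from the stream of examples, which is the essential behavior any BCI learning algorithm needs, however much more elaborate the real ones are.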

Even though BCI devices have come a long way from the days when they were chained to massive mainframe computers, they’re still not where they need to be to become fully integrated into everyday life. Some BCIs still need to be directly wired to equipment, while others are wireless but need a full-size computer nearby.

Current experiments with BCIs, which tend to focus largely on improving the lives of people with disabilities, still tend to produce movements that are significantly slower, less precise and less complex than those of a fully functioning human body. They also require surgically implanting electrodes into the brain. Not exactly an outpatient procedure.

Work is also being done in non-invasive BCIs, which generally use EEG recordings taken from the scalp and amplified using sophisticated algorithms to do things like control cursors, wheelchairs, robotic arms and drones. But these tend to take place under controlled laboratory conditions and so are still largely impractical for real-world uses.

That being said, research in creating BCIs to help people with disabilities has been making some significant progress. University of Washington researchers at the National Science Foundation Center for Sensorimotor Neural Engineering are using direct brain stimulation to provide sensory feedback through artificial electrical signals, enabling patients to control the movement of prosthetics by opening and closing their hands. Meanwhile, researchers at the University of Minnesota are working on a non-invasive, EEG-based BCI that lets people control a computer cursor via a cap on their head, which in turn moves a robotic arm.

 

Enter Neuralink and Kernel

Elon Musk has his fingers in just about every emerging technology pie, and BCIs are no exception. Launched in 2016, Musk’s company Neuralink has almost no information on its website aside from job postings, including the following engineering positions:

  • Microfabrication Engineer (MEMS and Sensors)
  • Mechatronics Engineer
  • Medical Device Engineer
  • Analog and Mixed-Signal Engineer
  • Software Engineer
  • Biomedical Engineer
  • Hardware System Integration Engineer
  • Microelectronics Packaging Engineer

The company’s stated mission is to develop ultra-high-bandwidth brain-machine interfaces to connect humans and computers. Musk has also made public statements regarding the possibility of developing a “neural lace” (the common sci-fi term for a BCI) made up of many small modular units.

According to The Verge, Musk said recently at the World Government Summit in Dubai that the key to successfully creating a BCI lies in speed:

“Over time I think we will probably see a closer merger of biological science and digital intelligence,” he said. “It’s mostly about bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output.”

According to the website Wait But Why, Neuralink is working on creating “micron-sized devices,” which will initially help people with brain injuries. “We are aiming to bring something to market that helps with certain severe brain injuries, (stroke, cancer lesion, congenital) in about four years,” Musk told the site, adding that the timeline for people without a disability to use this sort of technology is about eight to 10 years.

Musk has a reputation for whipping up enthusiasm when it comes to technology, but other companies are also on track to create viable BCIs.

Founded by venture capitalist Bryan Johnson, Kernel is also on its way towards developing the BCIs of the future. The company’s stated goal is to build neural interfaces to treat disease and dysfunction, illuminate the mechanisms of intelligence and extend cognition.

Kernel and Neuralink are clearly on very similar trajectories, as illustrated by this statement from Johnson on the company’s website:

“Machines of all kinds can help us along the way, but our vision is one in which we humans maintain and expand our authorial power. The advanced intelligence of tomorrow is a collaboration between the natural and the artificial. United, unheard of possibilities abound.”

A noninvasive, electroencephalography-based brain-computer interface enables direct brain-computer communication for training. (Image courtesy of U.S. Army Research Laboratory.)

No matter who leads the way in BCI technology, we’re bound to see significant progress in the near future despite the engineering challenges of transmitting, processing and interpreting information between brains and computers. That’s why the National Academy of Engineering includes “reverse engineering the brain” on its list of 14 Grand Challenges for Engineering in the 21st Century.

For more up-and-coming revolutionary technology, learn about Artificial Intelligence and Engineering.
Source: Engineering a Brain-Computer Interface > ENGINEERING.com