Research heralds better and bidirectional brain-computer interfaces

A pair of studies, one from Stanford and one from the University of Geneva, exemplify the speed at which brain-computer interfaces are advancing. And while you won’t be using one in place of a mouse and keyboard any time soon, even in its nascent form the tech may prove transformative for the disabled.

First comes work from Stanford: an improved microelectrode array and computer system that allows paralyzed users to type using an on-screen cursor.


The “baby-aspirin-sized” array has 100 electrodes and can monitor individual neurons. It plugs into the motor cortex, and users imagine moving a limb in the direction they want the cursor to move; with minimal training, some were outperforming existing systems and typing dozens of characters per minute without assistance.
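
To make the decoding step concrete, here is a toy sketch of the general idea: a linear readout that turns per-electrode firing rates into a 2D cursor velocity. Everything in it, the weights, the 20 ms update loop, the fake Poisson “neural data,” is invented for illustration; the study’s actual decoder is considerably more sophisticated.

```python
import numpy as np

# Toy illustration only: a linear readout from a 100-electrode array to a
# 2D cursor velocity. The weights, time step and simulated spike counts
# are all hypothetical, not parameters from the Stanford study.

rng = np.random.default_rng(0)

N_ELECTRODES = 100  # the array described above has 100 electrodes

# Hypothetical decoding weights; in practice these would be fit during a
# calibration session in which the user imagines moving in known directions.
W = rng.normal(scale=0.1, size=(2, N_ELECTRODES))

def decode_velocity(firing_rates: np.ndarray) -> np.ndarray:
    """Map one time bin of firing rates (spikes/s per electrode)
    to a 2D cursor velocity (vx, vy)."""
    return W @ firing_rates

# Simulated update loop: integrate decoded velocity into a cursor position.
cursor = np.zeros(2)
for _ in range(50):
    rates = rng.poisson(lam=10.0, size=N_ELECTRODES)  # stand-in neural data
    cursor += 0.02 * decode_velocity(rates)           # dt = 20 ms
print("cursor position:", cursor)
```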

“This study reports the highest speed and accuracy, by a factor of three, over what’s been shown before,” said Krishna Shenoy, a Stanford engineering professor and co-author of the report, in a news release.

The hope, naturally, is to improve the rate at which paralysis-stricken people can communicate, as well as simplify the setup process. But being able to easily and accurately move a cursor around a screen means interacting with an ordinary computer would be much easier as well. So in addition to simply typing, people could easily navigate the web, play games, and so on.

“This is really a safety study, a feasibility study,” said professor and co-author Jaimie Henderson, in a Stanford video. “But I’m very confident that in the not too distant future we’ll have systems that are deployable and able to provide help for people with paralysis.”

Their research was published yesterday in the journal eLife.

On the more theoretical side of things, a team at the University of Geneva has created a mechanism that could lead to brain-computer interfaces that not only let people control cursors or limbs, but let those objects send feedback to the brain.

[Image: the illustration that actually accompanied the University of Geneva release]

The trouble is that BCIs tend to read information from the brain and let the person confirm the resultant actions (moving a bionic arm or selecting a letter on a screen) visually. But the physical sensation of having a limb includes positional data as well, what we call proprioception: how bent a joint is, how high a hand is raised. Some work has been done to create this kind of feedback, but this research shows a simpler process — in mice, anyway.

The researchers used light-based microscopy to watch a set of cells in a mouse’s brain. When the mouse activated one particular neuron the researchers had chosen (it didn’t control anything, but one can imagine it turning on a light or moving an arm), the mouse received a reward and a pulse of feedback in its sensory cortex, also delivered by light, which the cells had been modified to be sensitive to.

Basically the test showed that the mouse was able to associate the artificial sensation with activation of that specific neuron the researchers had chosen: a primitive but functional feedback loop.
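
As a minimal simulation, that loop looks something like the sketch below: watch one chosen neuron, and whenever its activity crosses a threshold, deliver a reward plus a feedback pulse. The imaging signal, the threshold and the function names are all stand-ins rather than details from the paper.

```python
import numpy as np

# Minimal simulation of the closed loop described above; all numbers and
# names are illustrative, not the study's actual parameters.

rng = np.random.default_rng(1)

THRESHOLD = 5.0  # hypothetical activity level that counts as "using" the neuron

def read_target_neuron() -> float:
    """Stand-in for imaging the chosen neuron's activity, e.g. a
    fluorescence signal from light-based microscopy."""
    return rng.exponential(scale=2.0)

def deliver_reward() -> None:
    print("reward delivered")

def stimulate_sensory_cortex() -> None:
    # In the study this was a light pulse to cells engineered to respond
    # to light; here it is just a placeholder.
    print("feedback pulse delivered")

for t in range(20):
    if read_target_neuron() > THRESHOLD:
        deliver_reward()
        stimulate_sensory_cortex()
```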

Humans won’t be getting gene therapy to make their neurons photosensitive, of course; this research is even more preliminary than Stanford’s. But it suggests that the basic setup works and could be adapted for use by people. Hook up a handful of these feedback loops and you could give a rough picture of an artificial limb’s position even when the user’s eyes are closed.
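
A hypothetical sketch of what “a handful of these feedback loops” might mean in practice: give each joint of an artificial limb its own feedback channel and encode how bent the joint is as that channel’s stimulation intensity. The joint names and the linear encoding below are assumptions for illustration only.

```python
JOINTS = ["shoulder", "elbow", "wrist"]  # hypothetical feedback channels

def encode_proprioception(angles_deg: dict[str, float]) -> dict[str, float]:
    """Map each joint angle (0-180 degrees) to a stimulation intensity
    between 0 and 1 on that joint's dedicated feedback channel."""
    return {joint: min(max(angle / 180.0, 0.0), 1.0)
            for joint, angle in angles_deg.items()}

# Example: a half-bent elbow produces mid-strength feedback on its channel.
print(encode_proprioception({"shoulder": 30.0, "elbow": 90.0, "wrist": 10.0}))
```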

The paper was published today in the journal Neuron.