Brain Implant Lets Man With Paralysis Turn His Thoughts Into Actual Words

A man unable to move or speak can now generate words and sentences on a computer using only his thoughts. The advance comes from a Facebook-funded UCSF study, known as Project Steno, which decodes attempted speech from a paralyzed, speech-impaired patient into words on a screen.

“This is the first time someone just naturally trying to say words could be decoded into words just from brain activity,” said Dr David Moses, lead author of a study published Wednesday in the New England Journal of Medicine. “Hopefully, this is the proof of principle for direct speech control of a communication device, using intended attempted speech as the control signal by someone who cannot speak, who is paralyzed.”

Brain-computer interfaces (BCIs) have produced a string of striking results in recent years, including Stanford research that turns imagined handwriting into on-screen text. The UCSF system, however, works more like a translator, decoding the signals in the man’s brain that once controlled his vocal tract.

Dr Edward Chang performed the implantation on a man who was paralyzed by a stroke at the age of 20. With the help of the neuroprosthesis, an array of implanted electrodes that picks up the brain activity behind his attempts to speak, he was able to answer a series of questions. The man is currently limited to a vocabulary of just 50 words and produces sentences in real time at a rate of about 15 words per minute, far slower than natural speech.
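The article does not spell out the decoding pipeline, but systems like this commonly classify short windows of neural activity into words from the fixed vocabulary and then smooth the sequence with word-transition probabilities. The toy Python sketch below illustrates only that general idea; the vocabulary, the `word_probabilities` placeholder classifier, the feature shapes, and the uniform bigram language model are all illustrative assumptions, not the study’s actual implementation.

```python
# Toy sketch (not the study's code): decode a sequence of neural-feature
# windows into words from a small fixed vocabulary by combining per-window
# word probabilities with bigram transition probabilities (Viterbi search).
import numpy as np

VOCAB = ["i", "am", "very", "good", "thirsty"]  # stand-in for the 50-word set

def word_probabilities(features: np.ndarray) -> np.ndarray:
    """Placeholder classifier: map one window of neural features to a
    probability distribution over VOCAB. A real system would use a model
    trained on the patient's recorded brain activity."""
    W = np.random.default_rng(0).normal(size=(features.shape[0], len(VOCAB)))
    logits = features @ W
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def decode(windows, bigram):
    """Viterbi decoding: best word sequence given per-window word
    probabilities and bigram word-transition probabilities."""
    emis = np.array([word_probabilities(w) for w in windows])  # (T, V)
    log_emis = np.log(emis + 1e-12)
    log_trans = np.log(bigram + 1e-12)
    score = log_emis[0].copy()
    back = np.zeros((len(windows), len(VOCAB)), dtype=int)
    for t in range(1, len(windows)):
        cand = score[:, None] + log_trans + log_emis[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    # Trace the highest-scoring path backwards.
    path = [int(score.argmax())]
    for t in range(len(windows) - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [VOCAB[i] for i in reversed(path)]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    windows = [rng.normal(size=16) for _ in range(4)]            # fake feature windows
    bigram = np.full((len(VOCAB), len(VOCAB)), 1 / len(VOCAB))   # uniform language model
    print(" ".join(decode(windows, bigram)))
```

With real data, the language model would favour likely word orders, which is one way such a system can keep error rates manageable with a restricted vocabulary.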

For instance, when the patient is asked “How are you today?”, his response appears on the screen word by word: “I am very good.”

“This tells us that it’s possible,” says Edward Chang, a neurosurgeon at the University of California, San Francisco. “I think there’s a huge runway to make this better over time.”

The team aims to push well beyond what it has already achieved. It also remains unclear how much of the speech recognition derives from recorded patterns of brain activity, from attempts at vocalization, or from both.

Moses is careful to distinguish the work from mind-reading: the system senses brain activity only when a person deliberately attempts a specific behaviour, such as speaking. He notes that the UCSF team has not yet been able to achieve these results with non-invasive neural interfaces. Elon Musk’s Neuralink promises wireless transmission of data from brain-implanted electrodes for future research, but so far it has only been demonstrated in a monkey.

Facebook, meanwhile, has shifted its interest from head-worn brain-computer interfaces for future VR/AR headsets to wrist-worn devices, and has announced that it will make its head-mounted prototypes available to research projects.

A Facebook representative confirmed via email: “Aspects of the optical head-mounted work will be applicable to our EMG research at the wrist. We will continue to use optical BCI as a research tool to build better wrist-based sensor models and algorithms. While we will continue to leverage these prototypes in our research, we are no longer developing a head-mounted optical BCI device to sense speech production. That’s one reason why we will be sharing our head-mounted hardware prototypes with other researchers, who can apply our innovation to other use cases.”

Consumer-targeted neural input technology is still in its early stages, and devices with non-invasive head-worn or wrist-worn sensors remain far less precise than implanted electrodes.
