In the new study, the Stanford team wanted to know whether neurons in the motor cortex also carry useful information about speech movements. That is, could they detect how “subject T12” was trying to move her mouth, tongue, and vocal cords as she attempted to speak?
These are small, subtle movements, and according to Sabes, a big discovery is that just a few neurons carry enough information for a computer program to predict, with good accuracy, what words the patient was trying to say. Shenoy’s team translated that information into text on a computer screen, where the patient’s words appeared and were then spoken aloud by the computer.
The new result builds on previous work by Edward Chang of the University of California, San Francisco, who has written that speech involves some of the most complicated movements people make. We squeeze out air, add vibrations that make it audible, and shape it into words with our mouth, lips, and tongue. To make the “f” sound, you place your upper teeth on your lower lip and push air out: just one of the dozens of mouth movements it takes to speak.
One way forward
Chang previously used electrodes placed on top of the brain to enable a volunteer to talk through a computer, but in their preprint, the Stanford researchers say their system is more accurate and three to four times faster.
“Our results show a viable path forward to restoring communication to people with paralysis at conversational rates,” the researchers, including Shenoy and neurosurgeon Jaimie Henderson, wrote.
David Moses, who works with Chang’s team at UCSF, says the current work is reaching “impressive new performance benchmarks.” But even as records continue to be broken, he says, “it will become increasingly important to demonstrate stable and reliable performance over multi-year time scales.” Any commercial brain implant could have a hard time getting past regulators, especially if it deteriorates over time or if the accuracy of its recordings declines.
The way forward is likely to include both more advanced implants and closer integration with artificial intelligence.