A Wireless Brain-Machine Interface for Real-Time Speech Synthesis
Issue Date
2009-12-09
Author
Guenther, Frank H.
Brumberg, Jonathan S.
Wright, E. Joseph
Nieto-Castanon, Alfonso
Tourville, Jason A.
Panko, Mikhail
Law, Robert
Siebert, Steven A.
Bartels, Jess L.
Andreasen, Dinal S.
Ehirim, Princewill
Mao, Hui
Kennedy, Philip R.
Publisher
Public Library of Science
Type
Article
Article Version
Scholarly/refereed, publisher version
Rights
Copyright: © 2009 Brumberg et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Abstract
Background
Brain-machine interfaces (BMIs) involving electrodes implanted into the human cerebral cortex have recently been developed in an attempt to restore function to profoundly paralyzed individuals. Current BMIs for restoring communication can provide important capabilities via a typing process, but unfortunately they are only capable of slow communication rates. In the current study we use a novel approach to speech restoration in which we decode continuous auditory parameters for a real-time speech synthesizer from neuronal activity in motor cortex during attempted speech.
Methodology/Principal Findings
Neural signals recorded by a Neurotrophic Electrode implanted in a speech-related region of the left precentral gyrus of a human volunteer suffering from locked-in syndrome, characterized by near-total paralysis with spared cognition, were transmitted wirelessly across the scalp and used to drive a speech synthesizer. A Kalman filter-based decoder translated the neural signals generated during attempted speech into continuous parameters for controlling a synthesizer that provided immediate (within 50 ms) auditory feedback of the decoded sound. Accuracy of the volunteer's vowel productions with the synthesizer improved quickly with practice, with a 25% improvement in average hit rate (from 45% to 70%) and a 46% decrease in average endpoint error from the first to the last block of a three-vowel task.
Conclusions/Significance
Our results support the feasibility of neural prostheses that may have the potential to provide near-conversational synthetic speech output for individuals with severely impaired speech motor control. They also provide an initial glimpse into the functional properties of neurons in speech motor cortical areas.
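The Kalman filter decoding described above can be illustrated with a minimal sketch. This is not the study's fitted model: the dimensions, matrices, and the choice of a two-dimensional formant state (F1, F2) driven by four neural channels are all illustrative assumptions; the random-walk transition model and tuning matrix `H` are placeholders standing in for parameters that would be estimated from the volunteer's recorded activity.

```python
import numpy as np

# Hypothetical dimensions: a 2-D formant state (F1, F2) decoded from
# 4 neural firing-rate channels. All matrices below are illustrative
# placeholders, not values from the published study.
N_STATE = 2   # synthesizer control state (F1, F2)
N_OBS = 4     # neural observation channels

A = np.eye(N_STATE)                                          # state transition (random-walk assumption)
H = np.random.default_rng(0).normal(size=(N_OBS, N_STATE))   # assumed linear tuning model
Q = 0.01 * np.eye(N_STATE)                                   # process noise covariance
R = 0.10 * np.eye(N_OBS)                                     # observation noise covariance

def kalman_step(x, P, z):
    """One predict/update cycle: prior state x, covariance P, observation z."""
    # Predict forward under the transition model
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the new neural observation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(N_STATE) - K @ H) @ P_pred
    return x_new, P_new

# One decode step on a synthetic, noiseless observation of a target state
target = np.array([0.5, -0.3])
x, P = np.zeros(N_STATE), np.eye(N_STATE)
x, P = kalman_step(x, P, H @ target)
```

In a real-time loop, each `kalman_step` output would be sent to the synthesizer as the next frame of control parameters, producing the continuous auditory feedback described above.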
Description
This is the published version, also available here: http://dx.doi.org/10.1371/journal.pone.0008218.
Citation
Guenther FH, Brumberg JS, Wright EJ, Nieto-Castanon A, Tourville JA, et al. (2009) A Wireless Brain-Machine Interface for Real-Time Speech Synthesis. PLoS ONE 4(12): e8218. http://dx.doi.org/10.1371/journal.pone.0008218.
Items in KU ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.