Published May 13, 2024
Journal Article | Open Access

Representation of internal speech by single neurons in human supramarginal gyrus

Abstract

Speech brain–machine interfaces (BMIs) translate brain signals into words or audio outputs, enabling communication for people who have lost the ability to speak through disease or injury. While important advances in vocalized, attempted and mimed speech decoding have been achieved, results for internal speech decoding are sparse and have yet to reach high functionality. Notably, it is still unclear from which brain areas internal speech can be decoded. Here, two participants with tetraplegia, implanted with microelectrode arrays in the supramarginal gyrus (SMG) and primary somatosensory cortex (S1), performed internal and vocalized speech of six words and two pseudowords. In both participants, we found significant neural representation of internal and vocalized speech at the single-neuron and population level in the SMG. From recorded population activity in the SMG, the internally spoken and vocalized words were significantly decodable. In an offline analysis, we achieved average decoding accuracies of 55% and 24% for the two participants, respectively (chance level, 12.5%); during an online internal speech BMI task, we averaged 79% and 23% accuracy, respectively. Evidence of shared neural representations between internal speech, word reading and vocalized speech processes was found in participant 1. The SMG represented words as well as pseudowords, providing evidence for phonetic encoding. Furthermore, our decoder achieved high classification accuracy with multiple internal speech strategies (auditory imagination/visual imagination). Activity in S1 was modulated by vocalized but not internal speech in both participants, suggesting that no articulatory movements of the vocal tract occurred during internal speech production. This work represents a proof of concept for a high-performance internal speech BMI.
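The chance level quoted above follows from the eight-class design: six words plus two pseudowords give 1/8 = 12.5%. As a minimal illustrative sketch (not the authors' analysis code; the trial count below is assumed for illustration), a one-sided binomial test can check whether a decoder's accuracy exceeds this chance level:

    from scipy.stats import binomtest

    n_classes = 6 + 2        # six words + two pseudowords
    chance = 1 / n_classes   # 0.125, i.e. the 12.5% chance level

    n_trials = 160                       # assumed trial count, for illustration only
    n_correct = round(0.55 * n_trials)   # 55% offline accuracy reported in the abstract

    # One-sided test: is the observed accuracy significantly above chance?
    result = binomtest(n_correct, n_trials, p=chance, alternative="greater")
    print(f"chance = {chance:.1%}, accuracy = {n_correct / n_trials:.1%}, "
          f"p = {result.pvalue:.2e}")

With accuracies as far above chance as those reported, such a test yields a vanishingly small p-value even for modest trial counts.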

Copyright and License

© The Author(s) 2024. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Acknowledgement

We thank L. Bashford and I. Rosenthal for helpful discussions and data collection. We thank our study participants for their dedication to the study that made this work possible. This research was supported by the NIH National Institute of Neurological Disorders and Stroke grants U01NS098975 and U01NS123127 (S.K.W., D.A.B., K.P., C.L. and R.A.A.) and by the T&C Chen Brain-Machine Interface Center (S.K.W., D.A.B. and R.A.A.). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the paper.

Contributions

S.K.W., D.A.B. and R.A.A. designed the study. S.K.W. and D.A.B. developed the experimental tasks and collected the data. S.K.W. analysed the results and generated the figures. S.K.W., D.A.B. and R.A.A. interpreted the results and wrote the paper. K.P. coordinated regulatory requirements of clinical trials. C.L. and B.L. performed the surgery to implant the recording arrays.

Data Availability

The data supporting the findings of this study are openly available via Zenodo at https://doi.org/10.5281/zenodo.10697024 (ref. 65). Source data are provided with this paper.
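The DOI above resolves to Zenodo record 10697024, whose file listing can also be retrieved programmatically. Below is a minimal sketch using the public Zenodo REST API; the record ID is derived from the DOI, and the response field names reflect the current Zenodo API as an assumption, not anything specified by the paper:

    import requests

    record_id = "10697024"  # from the DOI 10.5281/zenodo.10697024
    resp = requests.get(f"https://zenodo.org/api/records/{record_id}", timeout=30)
    resp.raise_for_status()

    # Print name, size and checksum of each deposited file
    for entry in resp.json().get("files", []):
        print(entry.get("key"), entry.get("size"), entry.get("checksum"))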

Supplementary Information

Supplementary Figs. 1–5
Supplementary Video 1
Source Data Fig. 3
Source Data Fig. 4
Source Data Fig. 5
Source Data Fig. 6

Code Availability

The custom code developed for this study is openly available via Zenodo at https://doi.org/10.5281/zenodo.10697024 (ref. 65).

Conflict of Interest

The authors declare no competing interests.

Files

Total size: 790.3 MB

s41562-024-01867-y.pdf

MD5 checksum                             Size
md5:e62c796c49f916d1dc0366f04399b063     2.3 MB
md5:f4ec257eec8305039199f117b81e5b44     1.6 MB
md5:b7c9834c2ab1251b86fa6c12ff2d16ec     28.3 MB
md5:8c7a3524af8aa54e08e5b9333978e52f     12.6 kB
md5:aed9b6726ad349aa5a7570a07b89602a     9.1 kB
md5:0313c2336e4627e9da3f5fad1ef357a1     32.8 kB
md5:960859e0d38fa7490a3a8fdd8c2d7348     33.7 kB
md5:14e2b8b766a471cbe3e87a0b955c4352     758.1 MB
md5:0cd39e1d1adf8f317cc3c95ac7125195     12.8 kB
md5:24a70518d717ff62c29db4d4cd08f937     15.0 kB
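The MD5 checksums above can be used to verify that a download arrived intact. A minimal sketch in Python follows; note that the listing does not preserve which checksum belongs to which file name, so pairing the article PDF with the 2.3 MB entry here is an assumption:

    import hashlib

    def md5_of(path, chunk_size=1 << 20):
        """Hash the file in 1 MiB chunks so large downloads fit in memory."""
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Assumed pairing: the article PDF and the 2.3 MB checksum entry above
    expected = "e62c796c49f916d1dc0366f04399b063"
    assert md5_of("s41562-024-01867-y.pdf") == expected, "checksum mismatch"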

Additional details

Created: May 22, 2024
Modified: May 22, 2024