1
Issa MF, Khan I, Ruzzoli M, Molinaro N, Lizarazu M. On the speech envelope in the cortical tracking of speech. Neuroimage 2024; 297:120675. PMID: 38885886. DOI: 10.1016/j.neuroimage.2024.120675.
Abstract
The synchronization between the speech envelope and neural activity in auditory regions, referred to as cortical tracking of speech (CTS), plays a key role in speech processing. The method selected for extracting the envelope is a crucial step in CTS measurement, and the absence of a consensus on best practices among the various methods can influence analysis outcomes and interpretation. Here, we systematically compare five standard envelope extraction methods (the absolute value of the Hilbert transform (absHilbert), gammatone filterbanks, a heuristic approach, the Bark scale, and vocalic energy), analyzing their impact on CTS. We present performance metrics for each method based on recordings of brain activity from participants listening to speech in clear and noisy conditions, using intracranial EEG, MEG, and EEG data. As expected, we observed significant CTS in temporal brain regions below 10 Hz across all datasets, regardless of the extraction method. In general, the gammatone filterbank approach consistently outperformed the other methods. These results can help scientists in the field make informed decisions about the optimal analysis for extracting CTS, contributing to a better understanding of the neuronal mechanisms implicated in CTS.
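The absHilbert method named above can be sketched in a few lines. This is a generic illustration: the 10 Hz low-pass cutoff and 4th-order Butterworth filter are common choices in this literature, not parameters taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def abs_hilbert_envelope(signal, fs, cutoff_hz=10.0):
    """Broadband envelope as |analytic signal|, then low-pass filtered.

    A minimal sketch of the absHilbert approach; cutoff and filter order
    are illustrative assumptions, not the paper's settings.
    """
    env = np.abs(hilbert(signal))            # instantaneous amplitude
    b, a = butter(4, cutoff_hz / (fs / 2))   # low-pass at cutoff_hz
    return filtfilt(b, a, env)               # zero-phase filtering

# Synthetic check: a 200 Hz carrier amplitude-modulated at 4 Hz.
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
modulator = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)
env = abs_hilbert_envelope(modulator * np.sin(2 * np.pi * 200 * t), fs)
```

Away from filter edge effects, the recovered envelope closely follows the 4 Hz modulator; this is the quantity that would then be related to band-limited neural activity.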
Affiliation(s)
- Mohamed F Issa
- BCBL, Basque Center on Cognition, Brain and Language, San Sebastian, Spain; Department of Scientific Computing, Faculty of Computers and Artificial Intelligence, Benha University, Benha, Egypt
- Izhar Khan
- BCBL, Basque Center on Cognition, Brain and Language, San Sebastian, Spain
- Manuela Ruzzoli
- BCBL, Basque Center on Cognition, Brain and Language, San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Nicola Molinaro
- BCBL, Basque Center on Cognition, Brain and Language, San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Mikel Lizarazu
- BCBL, Basque Center on Cognition, Brain and Language, San Sebastian, Spain
2
Kries J, De Clercq P, Gillis M, Vanthornhout J, Lemmens R, Francart T, Vandermosten M. Exploring neural tracking of acoustic and linguistic speech representations in individuals with post-stroke aphasia. Hum Brain Mapp 2024; 45:e26676. PMID: 38798131. PMCID: PMC11128780. DOI: 10.1002/hbm.26676.
Abstract
Aphasia is a communication disorder that affects processing of language at different levels (e.g., acoustic, phonological, semantic). Recording brain activity via electroencephalography (EEG) while people listen to a continuous story makes it possible to analyze brain responses to acoustic and linguistic properties of speech. When the neural activity aligns with these speech properties, it is referred to as neural tracking. Even though measuring neural tracking of speech may present an interesting approach to studying aphasia in an ecologically valid way, it has not yet been investigated in individuals with stroke-induced aphasia. Here, we explored processing of acoustic and linguistic speech representations in individuals with aphasia in the chronic phase after stroke and in age-matched healthy controls. We found decreased neural tracking of acoustic speech representations (envelope and envelope onsets) in individuals with aphasia. In addition, word surprisal displayed decreased amplitudes in individuals with aphasia around 195 ms over frontal electrodes, although this effect was not corrected for multiple comparisons. These results show that there is potential to capture language-processing impairments in individuals with aphasia by measuring neural tracking of continuous speech. However, more research is needed to validate these results. Nonetheless, this exploratory study shows that neural tracking of naturalistic, continuous speech presents a powerful approach to studying aphasia.
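Neural tracking is usually quantified with encoding models (temporal response functions); as a rough illustration of the underlying idea, the sketch below uses a simpler stand-in, the peak lagged correlation between envelope and neural signal. All signals, lags, and noise levels here are invented for the demo.

```python
import numpy as np

def tracking_score(envelope, neural, fs, max_lag_ms=250):
    """Peak Pearson correlation between a speech envelope and a neural
    signal over plausible neural response lags (0..max_lag_ms).

    A simplified stand-in for the TRF/encoding-model analyses used in
    the neural-tracking literature; for illustration only.
    """
    max_lag = int(fs * max_lag_ms / 1000)
    best = -1.0
    for lag in range(max_lag + 1):
        e = envelope[: envelope.size - lag] if lag else envelope
        n = neural[lag:]
        best = max(best, np.corrcoef(e, n)[0, 1])
    return best

# Synthetic demo: a "neural" signal that echoes the envelope at a 120 ms lag.
rng = np.random.default_rng(0)
fs = 100
envelope = rng.standard_normal(10 * fs)
neural = np.roll(envelope, 12) + 0.5 * rng.standard_normal(10 * fs)
score = tracking_score(envelope, neural, fs)
```

Reduced tracking, as reported for the aphasia group, would show up in such a scheme as systematically lower scores than in controls.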
Affiliation(s)
- Jill Kries
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Department of Psychology, Stanford University, Stanford, California, USA
- Pieter De Clercq
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Marlies Gillis
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Jonas Vanthornhout
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Robin Lemmens
- Experimental Neurology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Laboratory of Neurobiology, VIB-KU Leuven Center for Brain and Disease Research, Leuven, Belgium
- Department of Neurology, University Hospitals Leuven, Leuven, Belgium
- Tom Francart
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Maaike Vandermosten
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
3
Casilio M, Kasdan AV, Schneck SM, Entrup JL, Levy DF, Crouch K, Wilson SM. Situating word deafness within aphasia recovery: A case report. Cortex 2024; 173:96-119. PMID: 38387377. PMCID: PMC11073474. DOI: 10.1016/j.cortex.2023.12.012.
Abstract
Word deafness is a rare neurological disorder often observed following bilateral damage to superior temporal cortex and canonically defined as an auditory modality-specific deficit in word comprehension. The extent to which word deafness is dissociable from aphasia remains unclear given its heterogeneous presentation, and some have consequently posited that word deafness instead represents a stage in recovery from aphasia, in which auditory and linguistic processing are affected to varying degrees and improve at differing rates. Here, we report the case of an individual (Mr. C) with bilateral temporal lobe lesions whose presentation evolved from a severe aphasia to an atypical form of word deafness, where auditory linguistic processing was impaired at the sentence level and beyond. We first reconstructed in detail Mr. C's stroke recovery through medical record review and supplemental interviewing. Then, using behavioral testing and multimodal neuroimaging, we documented a predominant auditory linguistic deficit in sentence and narrative comprehension, with markedly reduced behavioral performance and absent brain activation in the language network in the spoken modality exclusively. In contrast, Mr. C displayed near-unimpaired behavioral performance and robust brain activations in the language network for the linguistic processing of words, irrespective of modality. We argue that these findings not only support the view of word deafness as a stage in aphasia recovery but also further substantiate the important role of left superior temporal cortex in auditory linguistic processing.
Affiliation(s)
- Anna V Kasdan
- Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, TN, USA
- Deborah F Levy
- Vanderbilt University Medical Center, Nashville, TN, USA
- Kelly Crouch
- Vanderbilt University Medical Center, Nashville, TN, USA
- Stephen M Wilson
- Vanderbilt University Medical Center, Nashville, TN, USA; School of Health and Rehabilitation Sciences, University of Queensland, Brisbane, QLD, Australia
4
Martin KC, DeMarco AT, Dyslin SM, Turkeltaub PE. Rapid auditory and phonemic processing relies on the left planum temporale. Research Square 2024 (preprint):rs.3.rs-4189759. PMID: 38645022. PMCID: PMC11030499. DOI: 10.21203/rs.3.rs-4189759/v1.
Abstract
After initial bilateral acoustic processing of the speech signal, much of the subsequent language processing is left-lateralized. The reason for this lateralization remains an open question. Prevailing hypotheses describe a left-hemisphere (LH) advantage for rapidly unfolding information, such as the segmental (e.g., phonetic and phonemic) components of speech. Here we investigated whether and where damage to the LH predicted impaired performance in judging the directionality of frequency-modulated (FM) sweep stimuli that changed within short (25 ms) or longer (250 ms) temporal windows. Performance was significantly lower for stroke survivors (n = 50; 18 female) than for controls (n = 61; 34 female) on FM sweep judgments, particularly on the short sweeps. Support vector regression lesion-symptom mapping (SVR-LSM) revealed that part of the left planum temporale (PT) was related to worse performance in judging the short FM sweeps, controlling for performance on the long sweeps. We then investigated whether damage to this particular area related to diminished performance on two levels of linguistic processing that theoretically depend on rapid auditory processing: stop consonant identification and pseudoword repetition. We separated stroke participants into subgroups based on whether their LH lesion included the part of the left PT that related to diminished short-sweep judgments. Participants with PT lesions (PT lesion+, n = 24) performed significantly worse than those without (PT lesion-, n = 26) on stop consonant identification and pseudoword repetition, controlling for lesion size and hearing ability. Interestingly, PT lesions impacted pseudoword repetition more than real-word repetition (PT lesion-by-repetition trial type interaction), which is of interest because pseudowords rely solely on sound perception and sequencing, whereas words can also draw on lexical-semantic knowledge. We conclude that the left PT is a critical region for processing auditory information in short temporal windows, and that it may also be an essential transfer point in auditory-to-linguistic processing.
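FM sweep stimuli of this kind can be illustrated with a linear chirp. The 25 ms and 250 ms windows come from the abstract; the frequency range and sampling rate below are arbitrary placeholders, not the study's values.

```python
import numpy as np
from scipy.signal import chirp, hilbert

FS = 16_000  # sampling rate; an arbitrary choice for illustration

def fm_sweep(duration_s, f0_hz, f1_hz, fs=FS):
    """Linear FM sweep moving from f0_hz to f1_hz over duration_s seconds."""
    t = np.arange(0, duration_s, 1 / fs)
    return chirp(t, f0=f0_hz, t1=duration_s, f1=f1_hz, method="linear")

def mean_inst_freq(x, fs=FS):
    """Mean instantaneous frequency from the analytic-signal phase."""
    phase = np.unwrap(np.angle(hilbert(x)))
    return np.mean(np.diff(phase)) * fs / (2 * np.pi)

short_up = fm_sweep(0.025, 500, 1500)   # 25 ms rising sweep
long_up = fm_sweep(0.250, 500, 1500)    # 250 ms rising sweep
```

Judging directionality amounts to deciding whether the instantaneous frequency rises or falls across the window; the task is harder in the 25 ms case because the same frequency excursion must be resolved in a tenth of the time.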
Affiliation(s)
- Andrew T DeMarco
- Georgetown University Medical Center, MedStar National Rehabilitation Hospital
- Peter E Turkeltaub
- Georgetown University Medical Center, MedStar National Rehabilitation Hospital
5
Regev TI, Kim HS, Chen X, Affourtit J, Schipper AE, Bergen L, Mahowald K, Fedorenko E. High-level language brain regions process sublexical regularities. Cereb Cortex 2024; 34:bhae077. PMID: 38494886. PMCID: PMC11486690. DOI: 10.1093/cercor/bhae077.
Abstract
A network of left frontal and temporal brain regions supports language processing. This "core" language network stores our knowledge of words and constructions as well as constraints on how those combine to form sentences. However, our linguistic knowledge additionally includes information about phonemes and how they combine to form phonemic clusters, syllables, and words. Are phoneme combinatorics also represented in these language regions? Across five functional magnetic resonance imaging experiments, we investigated the sensitivity of high-level language processing brain regions to sublexical linguistic regularities by examining responses to diverse nonwords, i.e., sequences of phonemes that do not constitute real words (e.g., punes, silory, flope). We establish robust responses in the language network to visually (experiment 1a, n = 605) and auditorily (experiments 1b, n = 12, and 1c, n = 13) presented nonwords. In experiment 2 (n = 16), we find stronger responses to nonwords that are better formed, i.e., ones that obey the phoneme-combinatorial constraints of English. Finally, in experiment 3 (n = 14), we provide suggestive evidence that the responses in experiments 1 and 2 are not due to the activation of real words that share some phonology with the nonwords. The results suggest that sublexical regularities are stored and processed within the same fronto-temporal network that supports lexical and syntactic processes.
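"Well-formedness" in this sense can be approximated computationally. Below is a toy illustration, a letter-bigram model trained on a small invented lexicon; actual phonotactic measures in such work operate over phoneme transcriptions and large corpora, so this is only a sketch of the idea.

```python
import math
from collections import Counter

def train_bigrams(words):
    """Count letter bigrams, with '#' marking word boundaries."""
    counts = Counter()
    for w in words:
        padded = f"#{w}#"
        counts.update(zip(padded, padded[1:]))
    return counts

def wellformedness(word, counts):
    """Mean log(count + 1) over a word's bigrams: higher = more lexicon-like."""
    padded = f"#{word}#"
    bigrams = list(zip(padded, padded[1:]))
    return sum(math.log(counts[b] + 1) for b in bigrams) / len(bigrams)

# A tiny invented lexicon standing in for English training data.
lexicon = ["prune", "tune", "dune", "pine", "puns", "notes",
           "silly", "story", "glory", "slope", "flop", "rope", "hope"]
counts = train_bigrams(lexicon)
```

Under this toy model, an English-like nonword such as punes (from the abstract's examples) scores above a phonotactically illegal string such as tlfgk, mirroring the well-formedness contrast manipulated in experiment 2.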
Affiliation(s)
- Tamar I Regev
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Hee So Kim
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Xuanyi Chen
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Cognitive Sciences, Rice University, Houston, TX 77005, United States
- Josef Affourtit
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Abigail E Schipper
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- Leon Bergen
- Department of Linguistics, University of California San Diego, San Diego, CA 92093, United States
- Kyle Mahowald
- Department of Linguistics, University of Texas at Austin, Austin, TX 78712, United States
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- The Harvard Program in Speech and Hearing Bioscience and Technology, Boston, MA 02115, United States