1. Federici A, Fantoni M, Pavani F, Handjaras G, Bednaya E, Martinelli A, Berto M, Trabalzini F, Ricciardi E, Nava E, Orzan E, Bianchi B, Bottari D. Resilience and vulnerability of neural speech tracking after hearing restoration. Commun Biol 2025; 8:343. PMID: 40025189; PMCID: PMC11873316; DOI: 10.1038/s42003-025-07788-4.
Abstract
The role of early auditory experience in the development of neural speech tracking remains an open question. To address this issue, we measured neural speech tracking in children whose hearing was restored with cochlear implants (CIs), who either had or lacked functional hearing during their first year of life, as well as in hearing controls (HC). Neural tracking in children with CIs is unaffected by the absence of perinatal auditory experience: CI users and HC exhibit a similar neural tracking magnitude at short timescales of brain activity. However, neural tracking is delayed in CI users, and its timing depends on the age of hearing restoration. Conversely, at longer timescales, speech tracking is dampened in participants using CIs, thereby accounting for their speech comprehension deficits. These findings highlight the resilience of sensory processing in speech tracking while also demonstrating the vulnerability of higher-level processing to the lack of early auditory experience.
Affiliation(s)
- Marta Fantoni
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Francesco Pavani
- Centro Interdipartimentale Mente/Cervello-CIMEC, University of Trento, Trento, Italy
- Centro Interuniversitario di Ricerca "Cognizione Linguaggio e Sordità"-CIRCLeS, University of Trento, Trento, Italy
- Evgenia Bednaya
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Alice Martinelli
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- IRCCS Fondazione Stella Maris, Pisa, Italy
- Martina Berto
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Franco Trabalzini
- IRCCS Meyer, Azienda Ospedaliero-Universitaria Meyer, Firenze, Italy
- Elena Nava
- University of Milano-Bicocca, Milano, Italy
- Eva Orzan
- IRCCS Materno Infantile Burlo Garofolo, Trieste, Italy
- Benedetta Bianchi
- IRCCS Meyer, Azienda Ospedaliero-Universitaria Meyer, Firenze, Italy
- Davide Bottari
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy

2. Chen YP, Neff P, Leske S, Wong DDE, Peter N, Obleser J, Kleinjung T, Dimitrijevic A, Dalal SS, Weisz N. Cochlear implantation in adults with acquired single-sided deafness improves cortical processing and comprehension of speech presented to the non-implanted ear: a longitudinal EEG study. Brain Commun 2025; 7:fcaf001. PMID: 39816191; PMCID: PMC11733687; DOI: 10.1093/braincomms/fcaf001.
Abstract
Previous studies have established that individuals who receive a cochlear implant (CI) to treat single-sided deafness experience improved speech processing after implantation. However, it is not clear how each ear separately contributes to improvements in speech perception over time at the behavioural and neural level. In this longitudinal EEG study with four time points, we measured neural activity in response to various temporally and spectrally degraded spoken words presented monaurally to the CI and non-CI ears (5 left and 5 right ears) in 10 single-sided CI users and 10 age- and sex-matched individuals with normal hearing. Subjective comprehension ratings for each word were also recorded. Data from single-sided CI participants were collected before implantation and at 3, 6 and 12 months after implantation. We conducted a time-resolved representational similarity analysis on the EEG data to quantify whether and how neural patterns became more similar to those of normal-hearing individuals. At 6 months after implantation, speech comprehension ratings for the degraded words had improved for both ears. Notably, the improvement was more pronounced for the non-CI ears than for the CI ears. Furthermore, the enhancement in the non-CI ears was paralleled by increased similarity to the neural representational patterns of the normal-hearing control group. The maximum of this effect coincided with the peak decoding accuracy for spoken-word comprehension (600-1200 ms after stimulus onset). The present data demonstrate that cortical processing gradually normalizes within months after implantation for speech presented to the non-CI ear. The CI enables the deaf ear to provide afferent input, which, according to our results, complements the input of the non-CI ear and gradually improves its function. These novel findings underscore the feasibility of tracking neural recovery after auditory input restoration using advanced multivariate analysis methods, such as representational similarity analysis.
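The time-resolved representational similarity analysis (RSA) described in this abstract can be illustrated with a minimal sketch: build a representational dissimilarity matrix (RDM) per sliding time window for each group and correlate the two. The window length, step size, and array names below are illustrative assumptions, not the authors' pipeline.

```python
# Minimal time-resolved RSA sketch (assumed shapes: conditions x channels x samples).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(data, t0, win):
    """Condensed representational dissimilarity matrix (correlation distance)
    over one time window."""
    patterns = data[:, :, t0:t0 + win].reshape(data.shape[0], -1)
    return pdist(patterns, metric="correlation")

def time_resolved_similarity(eeg_ci, eeg_nh, win=25, step=5):
    """Spearman correlation between CI and normal-hearing RDMs per window."""
    sims = []
    for t0 in range(0, eeg_ci.shape[2] - win, step):
        rho, _ = spearmanr(rdm(eeg_ci, t0, win), rdm(eeg_nh, t0, win))
        sims.append(rho)
    return np.array(sims)

# synthetic demo: 12 word conditions, 32 channels, 300 samples
rng = np.random.default_rng(0)
ci = rng.normal(size=(12, 32, 300))
nh = rng.normal(size=(12, 32, 300))
print(time_resolved_similarity(ci, nh).shape)  # one similarity value per window
```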
Affiliation(s)
- Ya-Ping Chen
- Centre for Cognitive Neuroscience, University of Salzburg, 5020 Salzburg, Austria
- Department of Psychology, University of Salzburg, 5020 Salzburg, Austria
- Patrick Neff
- Centre for Cognitive Neuroscience, University of Salzburg, 5020 Salzburg, Austria
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Zurich, University of Zurich, 8091 Zurich, Switzerland
- Department of Psychiatry and Psychotherapy, University of Regensburg, 93053 Regensburg, Germany
- Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, 1202 Geneva, Switzerland
- Sabine Leske
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, 0313 Oslo, Norway
- Department of Musicology, University of Oslo, 0313 Oslo, Norway
- Department of Neuropsychology, Helgeland Hospital, 8657 Mosjøen, Norway
- Department of Psychology, Universität Konstanz, 78457 Konstanz, Germany
- Daniel D E Wong
- Department of Psychology, Universität Konstanz, 78457 Konstanz, Germany
- Nicole Peter
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Zurich, University of Zurich, 8091 Zurich, Switzerland
- Jonas Obleser
- Center of Brain, Behavior, and Metabolism, University of Lübeck, 23562 Lübeck, Germany
- Department of Psychology, University of Lübeck, 23562 Lübeck, Germany
- Tobias Kleinjung
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Zurich, University of Zurich, 8091 Zurich, Switzerland
- Andrew Dimitrijevic
- Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, ON M5S 3H2, Canada
- Sarang S Dalal
- Department of Psychology, Universität Konstanz, 78457 Konstanz, Germany
- Department of Clinical Medicine, Center of Functionally Integrative Neuroscience, Aarhus University, 8200 Aarhus, Denmark
- Nathan Weisz
- Centre for Cognitive Neuroscience, University of Salzburg, 5020 Salzburg, Austria
- Department of Psychology, University of Salzburg, 5020 Salzburg, Austria
- Neuroscience Institute, Christian Doppler University Hospital, Paracelsus Medical University, 5020 Salzburg, Austria

3. Aldag N, Nogueira W. Phoneme-related potentials recorded from normal hearing listeners and cochlear implant users in a selective attention paradigm to continuous speech. Hear Res 2024; 454:109136. PMID: 39532054; DOI: 10.1016/j.heares.2024.109136.
Abstract
Cochlear implants (CIs) can restore the ability to understand speech in patients with profound sensorineural hearing loss. At present, it is not fully understood how cochlear implant users perceive speech and how the electric hearing provided by a CI differs from acoustic hearing. Phoneme-related potentials characterize neural responses to individual instances of phonemes extracted from continuous speech. This retrospective study investigated phoneme-related potentials in CI users in a selective attention paradigm. Responses were compared between normal-hearing listeners and CI users, and between attended and unattended conditions. Differences between phoneme categories were compared, and a classifier was trained to predict the phoneme category from the neural representation. The phoneme-related potentials of CI users were similar to those obtained in normal-hearing listeners for early responses (<100 ms) but not for later responses (>100 ms), where peaks were smaller or absent. Attention enhanced the response, whereas latency was mostly unaffected by attention. The temporal morphology of the response was influenced by the phonetic features of the stimulus, allowing classification of the phoneme category based on the phoneme-related potentials. There is a clinical need for methods that can rapidly and objectively assess the speech understanding performance of cochlear implant users. Phoneme-related potentials may provide such a link between the acoustic and neural representations of phonemes. They may also reveal the challenges of individual subjects and thus inform patient-specific auditory training, rehabilitation programs or the fitting of cochlear implant parameters.
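As a rough illustration of the classification step described here (not the authors' pipeline), one can epoch the continuous EEG at phoneme onsets and decode the phoneme category from the evoked responses. All names, the classifier choice, and the parameters below are assumptions.

```python
# Sketch: epoch EEG at phoneme onsets, then classify the phoneme category.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def epoch_at_onsets(eeg, onsets_s, fs, tmin=-0.1, tmax=0.5):
    """Cut epochs (events x channels x samples) around each onset (in seconds)."""
    i0, i1 = int(tmin * fs), int(tmax * fs)
    return np.stack([eeg[:, int(t * fs) + i0:int(t * fs) + i1] for t in onsets_s])

# synthetic demo: 2 min of 32-channel EEG at 128 Hz with 200 labeled phonemes
fs = 128
rng = np.random.default_rng(1)
eeg = rng.normal(size=(32, fs * 120))
onsets = rng.uniform(1, 118, size=200)   # phoneme onset times (s)
labels = rng.integers(0, 4, size=200)    # four phoneme categories

X = epoch_at_onsets(eeg, onsets, fs).reshape(200, -1)  # flatten channels x time
acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")  # ~chance on pure noise
```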
Affiliation(s)
- Nina Aldag
- Department of Otolaryngology, Hannover Medical School and Cluster of Excellence "Hearing4all", Hanover, Germany
- Waldo Nogueira
- Department of Otolaryngology, Hannover Medical School and Cluster of Excellence "Hearing4all", Hanover, Germany

4. Mertel K, Dimitrijevic A, Thaut M. Can Music Enhance Working Memory and Speech in Noise Perception in Cochlear Implant Users? Design Protocol for a Randomized Controlled Behavioral and Electrophysiological Study. Audiol Res 2024; 14:611-624. PMID: 39051196; PMCID: PMC11270222; DOI: 10.3390/audiolres14040052.
Abstract
BACKGROUND A cochlear implant (CI) enables deaf people to understand speech, but due to technical restrictions, users face great limitations in noisy conditions. Music training has been shown to augment the shared auditory and cognitive neural networks for processing speech and music and to improve auditory-motor coupling, which benefits speech perception in noisy listening conditions. These are promising prerequisites for studying multi-modal neurologic music training (NMT) for speech-in-noise (SIN) perception in adult CI users. Furthermore, a better understanding of the neurophysiological correlates of performing working memory (WM) and SIN tasks after multi-modal music training may provide clinicians with a better understanding of optimal rehabilitation. METHODS Within 3 months, 81 post-lingually deafened adult CI recipients will undergo electrophysiological recordings and four weeks of neurologic music therapy multi-modal training, randomly assigned to one of three training focuses (pitch, rhythm, or timbre). Pre- and post-tests will analyze behavioral outcomes and apply a novel electrophysiological measurement approach that includes neural tracking of speech and alpha oscillation modulations during the sentence-final-word-identification-and-recall test (SWIR-EEG). EXPECTED OUTCOME Short-term multi-modal music training will enhance WM and SIN performance in post-lingually deafened adult CI recipients and will be reflected in greater neural tracking and alpha oscillation modulations in prefrontal areas. Prospectively, the outcomes could contribute to understanding the relationship between cognitive functioning and SIN perception beyond the technical limits of the CI, and targeted clinical application of music training could significantly improve SIN perception and positively impact quality of life for post-lingually deafened adult CI users.
Affiliation(s)
- Kathrin Mertel
- Music and Health Research Collaboratory (MaHRC), University of Toronto, Toronto, ON M5S 1C5, Canada
- Andrew Dimitrijevic
- Sunnybrook Cochlear Implant Program, Sunnybrook Hospital, Toronto, ON M4N 3M5, Canada
- Michael Thaut
- Music and Health Research Collaboratory (MaHRC), University of Toronto, Toronto, ON M5S 1C5, Canada

5. Asaadi AH, Amiri SH, Bosaghzadeh A, Ebrahimpour R. Effects and prediction of cognitive load on encoding model of brain response to auditory and linguistic stimuli in educational multimedia. Sci Rep 2024; 14:9133. PMID: 38644370; PMCID: PMC11033259; DOI: 10.1038/s41598-024-59411-x.
Abstract
Multimedia is extensively used for educational purposes. However, certain types of multimedia lack proper design, which can impose a cognitive load on the user. It is therefore essential to predict cognitive load and understand how it impairs brain functioning. Participants watched a version of educational multimedia that applied Mayer's principles, followed by a version that did not, while their electroencephalography (EEG) was recorded. Subsequently, they participated in a post-test and completed a self-reported cognitive load questionnaire. The audio envelope and word frequency were extracted from the multimedia, and temporal response functions (TRFs) were obtained using a linear encoding model. The behavioral data differed between the two conditions, as did the TRFs of the two multimedia versions: we observed changes in the amplitude and latencies of both early and late components. In addition, the amplitude and latencies of TRF components correlated with the behavioral data. Cognitive load decreased participants' attention to the multimedia, and semantic processing of words occurred with a delay and smaller amplitude. Hence, encoding models provide insight into the temporal and spatial mapping of cognitive load activity, which could help detect and reduce cognitive load in environments such as educational multimedia or simulators.
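The linear encoding (forward TRF) model mentioned here follows a standard recipe: regress the EEG onto time-lagged copies of a stimulus feature. Below is a minimal ridge-regression sketch with an assumed envelope feature; the lag range and regularization value are illustrative, not the authors' settings.

```python
# Sketch of a forward TRF: stimulus feature -> one EEG channel, via ridge regression.
import numpy as np

def lagged_design(stim, lags):
    """One column per time lag of the stimulus feature (wrap-around edges ignored)."""
    return np.stack([np.roll(stim, lag) for lag in lags], axis=1)

def fit_trf(stim, eeg_chan, fs, tmin=-0.1, tmax=0.6, alpha=1.0):
    """Ridge solution w = (X'X + aI)^-1 X'y; returns lag axis (s) and TRF weights."""
    lags = np.arange(int(tmin * fs), int(tmax * fs))
    X = lagged_design(stim, lags)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(len(lags)), X.T @ eeg_chan)
    return lags / fs, w

# synthetic demo: an envelope driving one EEG channel with a 100-ms delay
fs = 64
rng = np.random.default_rng(2)
env = np.abs(rng.normal(size=fs * 120))
eeg = np.roll(env, int(0.1 * fs)) + rng.normal(size=env.size)
t, trf = fit_trf(env, eeg, fs)
print(t[np.argmax(trf)])  # TRF peak recovered near 0.1 s
```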
Affiliation(s)
- Amir Hosein Asaadi
- Department of Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Islamic Republic of Iran
- Institute for Research in Fundamental Sciences (IPM), School of Cognitive Sciences, Tehran, Iran
- S Hamid Amiri
- Department of Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Islamic Republic of Iran
- Alireza Bosaghzadeh
- Department of Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Islamic Republic of Iran
- Reza Ebrahimpour
- Center for Cognitive Science, Institute for Convergence Science and Technology (ICST), Sharif University of Technology, P.O. Box 14588-89694, Tehran, Iran

6. Choi I, Gander PE, Berger JI, Woo J, Choy MH, Hong J, Colby S, McMurray B, Griffiths TD. Spectral Grouping of Electrically Encoded Sound Predicts Speech-in-Noise Performance in Cochlear Implantees. J Assoc Res Otolaryngol 2023; 24:607-617. PMID: 38062284; PMCID: PMC10752853; DOI: 10.1007/s10162-023-00918-x.
Abstract
OBJECTIVES Cochlear implant (CI) users exhibit large variability in understanding speech in noise. Past work in CI users found that spectral and temporal resolution correlate with speech-in-noise ability, but a large portion of variance remains unexplained. Recent work on normal-hearing listeners showed that the ability to group temporally and spectrally coherent tones in a complex auditory scene predicts speech-in-noise ability independently of the audiogram, highlighting a central mechanism for auditory scene analysis that contributes to speech-in-noise perception. The current study examined whether this auditory grouping ability also contributes to speech-in-noise understanding in CI users. DESIGN Forty-seven post-lingually deafened CI users were tested with psychophysical measures of spectral and temporal resolution, a stochastic figure-ground task that depends on detecting a figure by grouping multiple fixed-frequency elements against a random background, and a sentence-in-noise measure. Multiple linear regression was used to predict sentence-in-noise performance from the other tasks. RESULTS No collinearity was found among the predictor variables. All three predictors (spectral and temporal resolution plus the figure-ground task) contributed significantly to the multiple linear regression model, indicating that auditory grouping ability in a complex auditory scene explains a further proportion of variance in CI users' speech-in-noise performance that was not explained by spectral and temporal resolution. CONCLUSION Measures of cross-frequency grouping reflect an auditory cognitive mechanism that determines speech-in-noise understanding independently of cochlear function. Such measures are easily implemented clinically as predictors of CI success and suggest potential strategies for rehabilitation based on training with non-speech stimuli.
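The regression logic of this design is straightforward to sketch: three predictors, a collinearity check, then an ordinary least-squares fit. The data below are synthetic placeholders; only the sample size matches the abstract.

```python
# Sketch: multiple linear regression with a variance-inflation-factor (VIF) check.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 47                                   # sample size reported in the abstract
X = rng.normal(size=(n, 3))              # spectral, temporal, figure-ground (placeholders)
y = X @ np.array([0.4, 0.3, 0.5]) + rng.normal(scale=0.5, size=n)

Xc = sm.add_constant(X)
vif = [variance_inflation_factor(Xc, i) for i in range(1, Xc.shape[1])]
print("VIFs (values near 1 indicate no collinearity):", np.round(vif, 2))

model = sm.OLS(y, Xc).fit()
print(model.summary())                   # per-predictor coefficients and p-values
```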
Affiliation(s)
- Inyong Choi
- Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr., Iowa City, IA, 52242, USA
- Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Phillip E Gander
- Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Department of Radiology, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Joel I Berger
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Jihwan Woo
- Department of Biomedical Engineering, University of Ulsan, Ulsan, Republic of Korea
- Matthew H Choy
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE1 7RU, UK
- Jean Hong
- Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Sarah Colby
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, 52242, USA
- Bob McMurray
- Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr., Iowa City, IA, 52242, USA
- Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, 52242, USA
- Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE1 7RU, UK

7. Wilroth J, Bernhardsson B, Heskebeck F, Skoglund MA, Bergeling C, Alickovic E. Improving EEG-based decoding of the locus of auditory attention through domain adaptation. J Neural Eng 2023; 20:066022. PMID: 37988748; DOI: 10.1088/1741-2552/ad0e7b.
Abstract
Objective. This paper presents a novel domain adaptation (DA) framework to enhance the accuracy of electroencephalography (EEG)-based auditory attention classification, specifically for classifying the direction (left or right) of attended speech. The framework aims to improve performance for subjects with initially low classification accuracy, overcoming challenges posed by instrumental and human factors. Limited dataset size, variations in EEG data quality due to factors such as noise, electrode misplacement or subject differences, and the need for generalization across trials, conditions and subjects necessitate the use of DA methods. By leveraging DA methods, the framework can learn from one EEG dataset and adapt to another, potentially resulting in more reliable and robust classification models. Approach. This paper investigates a DA method, based on parallel transport, for addressing the auditory attention classification problem. The EEG data utilized in this study originate from an experiment in which subjects were instructed to selectively attend to one of two spatially separated voices presented simultaneously. Main results. A significant improvement in classification accuracy was observed when poor data from one subject were transported to the domain of good data from different subjects, as compared to the baseline. The mean classification accuracy for subjects with poor data increased from 45.84% to 67.92%. The highest classification accuracy achieved for a single subject reached 83.33%, a substantial increase from the baseline accuracy of 43.33%. Significance. These findings demonstrate the improved classification performance achieved through the implementation of DA methods, bringing us a step closer to leveraging EEG in neuro-steered hearing devices.
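Parallel transport for EEG domain adaptation is commonly formulated on symmetric positive-definite (SPD) covariance matrices of the trials. The sketch below shows one common formulation from the Riemannian-geometry literature, not necessarily the authors' exact method; the use of the arithmetic mean and the identity target are simplifying assumptions.

```python
# Hedged sketch: parallel-transport SPD covariance matrices between domains.
import numpy as np
from scipy.linalg import fractional_matrix_power

def transport(covs, m_source, m_target):
    """Move SPD matrices whose mean is m_source toward the region around m_target."""
    E = fractional_matrix_power(m_target @ np.linalg.inv(m_source), 0.5)
    E = np.real(E)  # product has positive eigenvalues; drop numerical residue
    return np.array([E @ C @ E.T for C in covs])

# synthetic demo: 20 trials of 8-channel EEG, 256 samples each
rng = np.random.default_rng(3)
trials = rng.normal(size=(20, 8, 256))
covs = np.einsum("tcs,tds->tcd", trials, trials) / 256  # per-trial covariance
m_src = covs.mean(axis=0)                # arithmetic mean as a simple stand-in
covs_moved = transport(covs, m_src, np.eye(8))  # recenter toward the identity
print(covs_moved.shape)
```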
Affiliation(s)
- Johanna Wilroth
- Department of Electrical Engineering, Linkoping University, Linkoping, Sweden
- Bo Bernhardsson
- Department of Automatic Control, Lund University, Lund, Sweden
- Frida Heskebeck
- Department of Automatic Control, Lund University, Lund, Sweden
- Martin A Skoglund
- Department of Electrical Engineering, Linkoping University, Linkoping, Sweden
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Carolina Bergeling
- Department of Mathematics and Natural Sciences, Blekinge Institute of Technology, Karlskrona, Sweden
- Emina Alickovic
- Department of Electrical Engineering, Linkoping University, Linkoping, Sweden
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark

8. Van Hirtum T, Somers B, Dieudonné B, Verschueren E, Wouters J, Francart T. Neural envelope tracking predicts speech intelligibility and hearing aid benefit in children with hearing loss. Hear Res 2023; 439:108893. PMID: 37806102; DOI: 10.1016/j.heares.2023.108893.
Abstract
Early assessment of hearing aid benefit is crucial, as the extent to which hearing aids provide audible speech information predicts speech and language outcomes. A growing body of research has proposed neural envelope tracking as an objective measure of speech intelligibility, particularly for individuals unable to provide reliable behavioral feedback. However, its potential for evaluating speech intelligibility and hearing aid benefit in children with hearing loss remains unexplored. In this study, we investigated neural envelope tracking in children with permanent hearing loss through two separate experiments. EEG data were recorded while children listened to age-appropriate stories (Experiment 1) or an animated movie (Experiment 2) under aided and unaided conditions (using personal hearing aids) at multiple stimulus intensities. Neural envelope tracking was evaluated using a linear decoder reconstructing the speech envelope from the EEG in the delta band (0.5-4 Hz). Additionally, we calculated temporal response functions (TRFs) to investigate the spatio-temporal dynamics of the response. In both experiments, neural tracking increased with increasing stimulus intensity, but only in the unaided condition. In the aided condition, neural tracking remained stable across a wide range of intensities, as long as speech intelligibility was maintained. Similarly, TRF amplitudes increased with increasing stimulus intensity in the unaided condition, while in the aided condition significant differences were found in TRF latency rather than TRF amplitude. This suggests that decreasing stimulus intensity does not necessarily impact neural tracking. Furthermore, the use of personal hearing aids significantly enhanced neural envelope tracking, particularly in challenging speech conditions that would be inaudible when unaided. Finally, we found a strong correlation between neural envelope tracking and behaviorally measured speech intelligibility for both narrated stories (Experiment 1) and movie stimuli (Experiment 2). Altogether, these findings indicate that neural envelope tracking could be a valuable tool for predicting speech intelligibility benefits derived from personal hearing aids in hearing-impaired children. Incorporating narrated stories or engaging movies expands the accessibility of these methods even in clinical settings, offering new avenues for using objective speech measures to guide pediatric audiology decision-making.
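The backward (decoding) model used here can be sketched as ridge regression from time-lagged EEG to the speech envelope, with the correlation between reconstructed and true envelopes as the tracking score. Array names, the lag range, and the ridge parameter below are assumptions for illustration, not the study's settings.

```python
# Sketch: linear backward model reconstructing the speech envelope from EEG.
import numpy as np

def build_lagged_eeg(eeg, max_lag):
    """Stack EEG samples from t .. t+max_lag-1 as predictors of envelope(t)."""
    n_ch, n_t = eeg.shape
    X = np.zeros((n_t - max_lag, n_ch * max_lag))
    for lag in range(max_lag):
        X[:, lag * n_ch:(lag + 1) * n_ch] = eeg[:, lag:n_t - max_lag + lag].T
    return X

def decode_envelope(eeg, envelope, max_lag=16, alpha=100.0):
    """Ridge-regularized decoder; returns reconstruction accuracy (Pearson r)."""
    X = build_lagged_eeg(eeg, max_lag)
    y = envelope[:X.shape[0]]
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
    return np.corrcoef(X @ w, y)[0, 1]

# synthetic demo: 16-channel EEG that weakly carries the envelope
rng = np.random.default_rng(4)
env = np.abs(rng.normal(size=4096))
eeg = 0.3 * env + rng.normal(size=(16, 4096))
print(f"neural tracking score r = {decode_envelope(eeg, env):.2f}")
```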
Affiliation(s)
- Tilde Van Hirtum
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, 3000 Leuven, Belgium
- Ben Somers
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, 3000 Leuven, Belgium
- Benjamin Dieudonné
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, 3000 Leuven, Belgium
- Eline Verschueren
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, 3000 Leuven, Belgium
- Jan Wouters
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, 3000 Leuven, Belgium
- Tom Francart
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, 3000 Leuven, Belgium

9. Dolhopiatenko H, Nogueira W. Selective attention decoding in bimodal cochlear implant users. Front Neurosci 2023; 16:1057605. PMID: 36711138; PMCID: PMC9874229; DOI: 10.3389/fnins.2022.1057605.
Abstract
The growing group of cochlear implant (CI) users includes subjects with preserved acoustic hearing on the side opposite to the CI. Using both listening sides results in improved speech perception in comparison to listening with one side alone; however, large variability in the measured benefit is observed. It is possible that this variability is associated with the integration of speech across the electric and acoustic stimulation modalities. However, there is a lack of established methods to assess speech integration between electric and acoustic stimulation and, consequently, to adequately program the devices. Moreover, existing methods do not provide information about the underlying physiological mechanisms of this integration or are based on simple stimuli that are difficult to relate to speech integration. Electroencephalography (EEG) to continuous speech is promising as an objective measure of speech perception; however, its application with CIs is challenging because it is influenced by the electrical artifact introduced by these devices. For this reason, the main goal of this work was to investigate a possible electrophysiological measure of speech integration between electric and acoustic stimulation in bimodal CI users. For this purpose, a selective attention decoding paradigm was designed and validated in bimodal CI users. The study included behavioral and electrophysiological measures. The behavioral measure consisted of a speech understanding test in which subjects repeated words from a target speaker in the presence of a competing voice, listening with the CI side (CIS) only, with the acoustic side (AS) only, or with both listening sides (CIS+AS). Electrophysiological measures included cortical auditory evoked potentials (CAEPs) and selective attention decoding through EEG. CAEPs were recorded to broadband stimuli to confirm the feasibility of recording cortical responses with the CIS only, AS only, and CIS+AS listening modes. In the selective attention decoding paradigm, a co-located target and a competing speech stream were presented using the three listening modes. The main hypothesis was that selective attention can be decoded in CI users despite the presence of the CI electrical artifact; if selective attention decoding improves when electric and acoustic stimulation are combined, relative to electric stimulation alone, the hypothesis can be confirmed. No significant difference in behavioral speech understanding was found between the CIS+AS and AS-only listening modes, mainly due to a ceiling effect observed in these two conditions. The main finding of the current study is that selective attention can be decoded in CI users even when continuous artifact is present. Moreover, an amplitude reduction of the forward transfer response function (TRF) of selective attention decoding was observed when listening with CIS+AS compared to AS only. Further studies are required to validate selective attention decoding as an electrophysiological measure of electric-acoustic speech integration.
10. Xiu B, Paul BT, Chen JM, Le TN, Lin VY, Dimitrijevic A. Neural responses to naturalistic audiovisual speech are related to listening demand in cochlear implant users. Front Hum Neurosci 2022; 16:1043499. DOI: 10.3389/fnhum.2022.1043499.
Abstract
There is a weak relationship between clinical and self-reported speech perception outcomes in cochlear implant (CI) listeners. Such poor correspondence may be due to differences between clinical and "real-world" listening environments and stimuli. Speech in the real world is often accompanied by visual cues and background environmental noise, and is generally in a conversational context, all factors that could affect listening demand. Thus, our objectives were to determine whether brain responses to naturalistic speech could index speech perception and listening demand in CI users. Accordingly, we recorded high-density electroencephalogram (EEG) while CI users listened to and watched a naturalistic stimulus (the television show "The Office"). We used continuous EEG to quantify "speech neural tracking" (i.e., TRFs, temporal response functions) to the show's soundtrack and 8-12 Hz (alpha) brain rhythms commonly related to listening effort. Background noise at three different signal-to-noise ratios (SNRs), +5, +10, and +15 dB, was presented to vary the difficulty of following the television show, mimicking a natural noisy environment. The task also included an audio-only (no video) condition. After each condition, participants subjectively rated listening demand and the degree of words and conversations they felt they understood. Fifteen CI users reported progressively higher listening demand and fewer words and conversations understood with increasing background noise. Listening demand and conversation understanding in the audio-only condition were comparable to those of the highest noise condition (+5 dB). Increasing background noise affected speech neural tracking at a group level, in addition to eliciting strong individual differences. Mixed-effects modeling showed that listening demand and conversation understanding were correlated with early cortical speech tracking, such that high demand and low conversation understanding occurred with lower-amplitude TRFs. In the high noise condition, greater listening demand was negatively correlated with parietal alpha power, where higher demand was related to lower alpha power. No significant correlations were observed between TRF/alpha and clinical speech perception scores. These results are similar to previous findings showing little relationship between clinical speech perception and quality of life in CI users. However, physiological responses to complex natural speech may provide an objective measure of aspects of quality-of-life measures like self-perceived listening demand.
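The 8-12 Hz (alpha) power measure referred to here can be computed, for example, with Welch's method; the channel count, sampling rate, and band edges below are illustrative assumptions rather than the study's exact settings.

```python
# Sketch: mean 8-12 Hz (alpha) power per EEG channel via Welch's method.
import numpy as np
from scipy.signal import welch

def alpha_power(eeg, fs):
    """Mean alpha-band power per channel (eeg: channels x samples)."""
    f, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    band = (f >= 8) & (f <= 12)
    return psd[:, band].mean(axis=1)

fs = 250
eeg = np.random.default_rng(5).normal(size=(64, fs * 60))  # synthetic 1-min recording
print(alpha_power(eeg, fs).shape)  # one alpha-power value per channel
```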
11. Recording EEG in Cochlear Implant Users: Guidelines for Experimental Design and Data Analysis for Optimizing Signal Quality and Minimizing Artifacts. J Neurosci Methods 2022; 375:109592. PMID: 35367234; DOI: 10.1016/j.jneumeth.2022.109592.
Abstract
Cochlear implants (CIs) are neural prostheses that can restore hearing in individuals with severe to profound hearing loss. Although CIs significantly improve quality of life, clinical outcomes remain highly variable. An important part of this variability is explained by the brain reorganization that follows cochlear implantation. Therefore, clinicians and researchers are seeking objective measurements to investigate post-implantation brain plasticity. Electroencephalography (EEG) is a promising technique because it is objective, non-invasive, and implant-compatible, but it is nonetheless susceptible to massive artifacts generated by the prosthesis's electrical activity. CI artifacts can blur and distort brain responses; thus, it is crucial to develop reliable techniques to remove them from EEG recordings. Despite the numerous artifact removal techniques used in previous studies, there is a paucity of documentation and consensus on the optimal EEG procedures for reducing these artifacts. Herein, through a comprehensive review process, we provide guidelines for designing an EEG-CI experiment that minimizes the effect of the artifact. We offer technical guidance for recording an accurate neural response from CI users, discuss the current challenges in detecting and removing CI-induced artifacts from a recorded signal, and provide recommendations to better appraise and report EEG-CI findings.
12. Nogueira W, Dolhopiatenko H. Predicting speech intelligibility from a selective attention decoding paradigm in cochlear implant users. J Neural Eng 2022; 19. PMID: 35234663; DOI: 10.1088/1741-2552/ac599f.
Abstract
OBJECTIVES Electroencephalography (EEG) can be used to decode selective attention in cochlear implant (CI) users. This work investigates whether selective attention to an attended speech source in the presence of a concurrent speech source can predict speech understanding in CI users. APPROACH CI users were instructed to attend to one of two speech streams while EEG was recorded. Both speech streams were presented to the same ear at different signal-to-interference ratios (SIRs). The envelope of the to-be-attended speech was reconstructed from the EEG using decoders trained with regularized least squares. The correlation coefficient between the reconstructed envelope and the attended (ρ_A(SIR)) or the unattended (ρ_U(SIR)) speech stream was computed at each SIR. Additionally, we computed the difference correlation coefficient at the same SIR (ρ_Diff = ρ_A(SIR) - ρ_U(SIR)) and at the opposite SIR (ρ_DiffOpp = ρ_A(SIR) - ρ_U(-SIR)). ρ_Diff compares the attended and unattended correlation coefficients for speech sources presented at different presentation levels, depending on SIR. In contrast, ρ_DiffOpp compares the attended and unattended correlation coefficients for speech sources presented at the same presentation level, irrespective of SIR. MAIN RESULTS Selective attention decoding in CI users is possible even if both speech streams are presented monaurally. A significant effect of SIR on ρ_A(SIR), ρ_Diff and ρ_DiffOpp, but not on ρ_U(SIR), was observed. Finally, the results show a significant correlation between speech understanding performance and ρ_A(SIR), as well as with ρ_U(SIR), across subjects. Moreover, ρ_DiffOpp, which is less affected by the CI artifact, also correlated significantly with speech understanding. SIGNIFICANCE Selective attention decoding in CI users is possible; however, care must be taken with the CI artifact and with the speech material used to train the decoders. These results are important for the future development of objective speech understanding measures for CI users.
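A minimal sketch of the correlation metrics defined in this abstract (ρ_A, ρ_U and ρ_Diff); for ρ_DiffOpp one would take the unattended correlation from the opposite-SIR condition. The arrays below are hypothetical placeholders, not the study's data.

```python
# Sketch: attended/unattended tracking correlations and their difference.
import numpy as np

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def attention_metrics(recon, env_att, env_unatt):
    rho_a = corr(recon, env_att)        # rho_A: tracking of the attended stream
    rho_u = corr(recon, env_unatt)      # rho_U: tracking of the unattended stream
    return rho_a, rho_u, rho_a - rho_u  # rho_Diff > 0 -> attention decoded

# synthetic demo: the reconstruction resembles the attended envelope more
rng = np.random.default_rng(6)
att = np.abs(rng.normal(size=2048))
unatt = np.abs(rng.normal(size=2048))
recon = 0.5 * att + 0.1 * unatt + rng.normal(size=2048)
print(attention_metrics(recon, att, unatt))
```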
Affiliation(s)
- Waldo Nogueira
- Department of Otolaryngology and Cluster of Excellence "Hearing4all", Hannover Medical School, Karl-Wiechert-Allee 3, Hannover, Niedersachsen, 30625, Germany
- Hanna Dolhopiatenko
- Department of Otolaryngology and Cluster of Excellence "Hearing4all", Hannover Medical School, Karl-Wiechert-Allee 3, Hannover, Niedersachsen, 30625, Germany

13. Aldag N, Büchner A, Lenarz T, Nogueira W. Towards decoding selective attention through cochlear implant electrodes as sensors in subjects with contralateral acoustic hearing. J Neural Eng 2022; 19. DOI: 10.1088/1741-2552/ac4de6.
Abstract
Objectives: Focusing attention on one speaker in a situation with multiple background speakers or noise is referred to as auditory selective attention. Decoding selective attention is an interesting line of research with respect to future brain-guided hearing aids or cochlear implants (CIs) that are designed to adaptively adjust sound processing through cortical feedback loops. This study investigates the feasibility of using the electrodes and backward telemetry of a CI to record electroencephalography (EEG). Approach: The study population included 6 normal-hearing (NH) listeners and 5 CI users with contralateral acoustic hearing. Cortical auditory evoked potentials (CAEPs) and selective attention were recorded using state-of-the-art high-density scalp EEG and, in the case of CI users, also using two CI electrodes as sensors in combination with the backward telemetry system of these devices (iEEG). Main results: In the selective attention paradigm with multi-channel scalp EEG, the mean decoding accuracy across subjects was 94.8% for NH listeners and 94.6% for CI users. With single-channel scalp EEG the accuracy dropped but remained above chance level in 8 to 9 of 11 subjects, depending on the electrode montage. With the single-channel iEEG, selective attention decoding could only be analyzed in 2 of 5 CI users because of data loss in the other 3 subjects; in these 2 CI users, decoding accuracy was above chance level. Significance: This study shows that single-channel EEG is suitable for auditory selective attention decoding, even though it reduces decoding quality compared with a multi-channel approach. CI-based iEEG can be used for recording CAEPs and decoding selective attention. However, the study also points out the need for further technical development of the CI backward telemetry regarding long-term recordings and optimal sensor positions.
14. Lee JH, Shim H, Gantz B, Choi I. Strength of Attentional Modulation on Cortical Auditory Evoked Responses Correlates with Speech-in-Noise Performance in Bimodal Cochlear Implant Users. Trends Hear 2022; 26:23312165221141143. PMID: 36464791; PMCID: PMC9726851; DOI: 10.1177/23312165221141143.
Abstract
Auditory selective attention is a crucial top-down cognitive mechanism for understanding speech in noise. Cochlear implant (CI) users display great variability in speech-in-noise performance that is not easily explained by peripheral auditory profile or demographic factors. Thus, it is imperative to understand whether auditory cognitive processes such as selective attention explain such variability. The present study addressed this question directly by quantifying attentional modulation of cortical auditory responses during an attention task and comparing its individual differences with speech-in-noise performance. In our attention experiment, participants with CIs were given a pre-stimulus visual cue that directed their attention to one of two speech streams and were asked to detect a deviant syllable in the target stream. The two speech streams consisted of a female voice saying "Up" five times at an 800-ms interval and a male voice saying "Down" four times at a 1-s interval. The onset of each syllable elicited distinct event-related potentials (ERPs). At each syllable onset, the difference in ERP amplitude between the two attentional conditions (attended - ignored) was computed; this amplitude difference served as a proxy for attentional modulation strength. Our group-level analysis showed that ERP amplitudes were greater when a syllable was attended than when it was ignored, showing that attention modulated cortical auditory responses. Moreover, the strength of attentional modulation correlated significantly with speech-in-noise performance. These results suggest that attentional modulation of cortical auditory responses may provide a neural marker for predicting CI users' success in clinical tests of speech-in-noise listening.
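The attentional-modulation index described here reduces to a difference of trial-averaged ERPs. Below is a minimal sketch with synthetic epochs; the sampling rate and the analysis window are assumptions for illustration.

```python
# Sketch: attentional modulation = attended ERP minus ignored ERP.
import numpy as np

rng = np.random.default_rng(7)
attended = rng.normal(size=(100, 32, 200))   # trials x channels x samples
ignored = rng.normal(size=(100, 32, 200))

erp_att = attended.mean(axis=0)              # trial-averaged ERP, attended
erp_ign = ignored.mean(axis=0)               # trial-averaged ERP, ignored
modulation = erp_att - erp_ign               # attended - ignored, channel x time

# summarize in an assumed N1-like window (samples 80-130 at an assumed 1 kHz)
strength = np.abs(modulation[:, 80:130]).mean()
print(f"attentional modulation strength: {strength:.3f}")
```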
Affiliation(s)
- Jae-Hee Lee
- Dept. Communication Sciences and Disorders, University of Iowa, Iowa City, IA, 52242, USA
- Dept. Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Hwan Shim
- Dept. Electrical and Computer Engineering Technology, Rochester Institute of Technology, Rochester, NY, 14623, USA
- Bruce Gantz
- Dept. Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Inyong Choi
- Dept. Communication Sciences and Disorders, University of Iowa, Iowa City, IA, 52242, USA
- Dept. Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA

15. Palana J, Schwartz S, Tager-Flusberg H. Evaluating the Use of Cortical Entrainment to Measure Atypical Speech Processing: A Systematic Review. Neurosci Biobehav Rev 2021; 133:104506. PMID: 34942267; DOI: 10.1016/j.neubiorev.2021.12.029.
Abstract
BACKGROUND Cortical entrainment has emerged as promising means for measuring continuous speech processing in young, neurotypical adults. However, its utility for capturing atypical speech processing has not been systematically reviewed. OBJECTIVES Synthesize evidence regarding the merit of measuring cortical entrainment to capture atypical speech processing and recommend avenues for future research. METHOD We systematically reviewed publications investigating entrainment to continuous speech in populations with auditory processing differences. RESULTS In the 25 publications reviewed, most studies were conducted on older and/or hearing-impaired adults, for whom slow-wave entrainment to speech was often heightened compared to controls. Research conducted on populations with neurodevelopmental disorders, in whom slow-wave entrainment was often reduced, was less common. Across publications, findings highlighted associations between cortical entrainment and speech processing performance differences. CONCLUSIONS Measures of cortical entrainment offer useful means of capturing speech processing differences and future research should leverage them more extensively when studying populations with neurodevelopmental disorders.
Affiliation(s)
- Joseph Palana
- Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA, 02215, USA
- Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Harvard Medical School, Boston Children's Hospital, 1 Autumn Street, Boston, MA, 02215, USA
- Sophie Schwartz
- Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA, 02215, USA
- Helen Tager-Flusberg
- Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA, 02215, USA

16. Huet MP, Micheyl C, Parizet E, Gaudrain E. Behavioral Account of Attended Stream Enhances Neural Tracking. Front Neurosci 2021; 15:674112. PMID: 34966252; PMCID: PMC8710602; DOI: 10.3389/fnins.2021.674112.
Abstract
During the past decade, several studies have identified electroencephalographic (EEG) correlates of selective auditory attention to speech. In these studies, typically, listeners are instructed to focus on one of two concurrent speech streams (the "target"), while ignoring the other (the "masker"). EEG signals are recorded while participants perform this task and are subsequently analyzed to recover the attended stream. An assumption often made in these studies is that the participant's attention can remain focused on the target throughout the test. To check this assumption, and to assess when a participant's attention in a concurrent speech listening task was directed toward the target, the masker, or neither, we designed a behavioral listen-then-recall task (the Long-SWoRD test). After listening to two simultaneous short stories, participants had to identify, on a computer screen, keywords from the target story randomly interspersed among words from the masker story and words from neither story. To modulate task difficulty, and hence the likelihood of attentional switches, masker stories were originally uttered by the same talker as the target stories; the masker voice parameters were then manipulated to parametrically control the similarity of the two streams, from clearly dissimilar to almost identical. While participants listened to the stories, EEG signals were measured and subsequently analyzed using a temporal response function (TRF) model to reconstruct the speech stimuli. Responses in the behavioral recall task were used to infer, retrospectively, when attention was directed toward the target, the masker, or neither. During the model-training phase, the results of these behavioral-data-driven inferences were used as inputs to the model in addition to the EEG signals, to determine whether this additional information would improve stimulus reconstruction accuracy, relative to the performance of models trained under the assumption that the listener's attention was unwaveringly focused on the target. Results from 21 participants show that information regarding the actual, as opposed to assumed, attentional focus can be used advantageously during model training to enhance the subsequent (test-phase) accuracy of EEG-based auditory stimulus reconstruction. This is especially the case in challenging listening situations, where participants' attention is less likely to remain focused entirely on the target talker. In situations where the two competing voices are clearly distinct and easily separated perceptually, the assumption that listeners are able to stay focused on the target is reasonable. The behavioral recall protocol introduced here provides experimenters with a means to behaviorally track fluctuations in auditory selective attention, including in combined behavioral/neurophysiological studies.
Affiliation(s)
- Moïra-Phoebé Huet
- Laboratoire Vibrations Acoustique, Institut National des Sciences Appliquées de Lyon, Université de Lyon, Villeurbanne, France
- CNRS UMR 5292, INSERM U1028, Auditory Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center, Lyon, France
- Etienne Parizet
- Laboratoire Vibrations Acoustique, Institut National des Sciences Appliquées de Lyon, Université de Lyon, Villeurbanne, France
- Etienne Gaudrain
- CNRS UMR 5292, INSERM U1028, Auditory Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center, Lyon, France
- Department of Otorhinolaryngology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands

17. Ihara AS, Matsumoto A, Ojima S, Katayama J, Nakamura K, Yokota Y, Watanabe H, Naruse Y. Prediction of Second Language Proficiency Based on Electroencephalographic Signals Measured While Listening to Natural Speech. Front Hum Neurosci 2021; 15:665809. PMID: 34335208; PMCID: PMC8322447; DOI: 10.3389/fnhum.2021.665809.
Abstract
This study had two goals: to clarify the relationship between electroencephalographic (EEG) features estimated while non-native speakers listened to a second language (L2) and their L2 proficiency as determined by a conventional paper test, and to provide a predictive model for L2 proficiency based on EEG features. We measured EEG signals from 205 native Japanese speakers, who varied widely in English proficiency, while they listened to natural speech in English. Following the EEG measurement, they completed a conventional English listening test for Japanese speakers. We estimated multivariate temporal response functions separately for word class, speech rate, word position, and parts of speech. We found significant negative correlations between listening score and 17 EEG features, including the peak latency of early components (corresponding to N1 and P2) for both open- and closed-class words and the peak latency and amplitude of a late component (corresponding to N400) for open-class words. On the basis of the EEG features, we generated a predictive model of Japanese speakers' English listening proficiency. The correlation coefficient between the true and predicted listening scores was 0.51. Our results suggest that L2 or foreign-language ability can be assessed using neural signatures measured while listening to natural speech, without the need for a conventional paper test.
Affiliation(s)
- Aya S Ihara
- National Institute of Information and Communications Technology, and Osaka University, Kobe, Japan
- Atsushi Matsumoto
- National Institute of Information and Communications Technology, and Osaka University, Kobe, Japan
- Shiro Ojima
- Department of English, College of Education, Yokohama National University, Yokohama, Japan
- Jun'ichi Katayama
- Department of Psychological Science, and Center for Applied Psychological Science (CAPS), Kwansei Gakuin University, Nishinomiya, Japan
- Yusuke Yokota
- National Institute of Information and Communications Technology, and Osaka University, Kobe, Japan
- Hiroki Watanabe
- National Institute of Information and Communications Technology, and Osaka University, Kobe, Japan
- Yasushi Naruse
- National Institute of Information and Communications Technology, and Osaka University, Kobe, Japan

18. Paul BT, Chen J, Le T, Lin V, Dimitrijevic A. Cortical alpha oscillations in cochlear implant users reflect subjective listening effort during speech-in-noise perception. PLoS One 2021; 16:e0254162. PMID: 34242290; PMCID: PMC8270138; DOI: 10.1371/journal.pone.0254162.
Abstract
Listening to speech in noise is effortful for individuals with hearing loss, even if they have received a hearing prosthesis such as a hearing aid or cochlear implant (CI). At present, little is known about the neural functions that support listening effort. One form of neural activity that has been suggested to reflect listening effort is the power of 8-12 Hz (alpha) oscillations measured by electroencephalography (EEG). Alpha power has been associated with effortful listening in two cortical regions, the left inferior frontal gyrus (IFG) and the parietal cortex, but these relationships have not been examined in the same listeners. Further, few studies have investigated neural correlates of effort in individuals with cochlear implants. Here we tested 16 CI users in a novel effort-focused speech-in-noise listening paradigm and confirmed a relationship between alpha power and self-reported effort ratings in parietal regions, but not in left IFG. The parietal relationship was not linear but quadratic: alpha power was comparatively lower when effort ratings were at the top and bottom of the effort scale, and higher when effort ratings were in the middle of the scale. Results are discussed in terms of the cognitive systems engaged in difficult listening situations and the implications for clinical translation.
Affiliation(s)
- Brandon T. Paul
- Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario, Canada
- Otolaryngology—Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Joseph Chen
- Otolaryngology—Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Faculty of Medicine, Otolaryngology—Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Trung Le
- Otolaryngology—Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Faculty of Medicine, Otolaryngology—Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Vincent Lin
- Otolaryngology—Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Faculty of Medicine, Otolaryngology—Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Andrew Dimitrijevic
- Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario, Canada
- Otolaryngology—Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Faculty of Medicine, Otolaryngology—Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada

19. Effects of long-term unilateral cochlear implant use on large-scale network synchronization in adolescents. Hear Res 2021; 409:108308. PMID: 34343851; DOI: 10.1016/j.heares.2021.108308.
Abstract
Unilateral cochlear implantation (CI) limits deafness-related changes in the auditory pathways but promotes abnormal cortical preference for the stimulated ear and leaves the opposite ear with little protection from auditory deprivation. In the present study, time-frequency analyses of event-related potentials elicited by stimuli presented to each ear were used to determine the effects of unilateral CI use on cortical synchrony. CI-elicited activity in 34 adolescents (15.4 ± 1.9 years of age) who had listened with unilateral CIs for most of their lives prior to bilateral implantation was compared to responses elicited by a 500 Hz tone burst in normal-hearing peers. Phase-locking values between 4 and 60 Hz were calculated for the 171 pairs of 19 cephalic recording electrodes. Ear-specific results were found in the normal-hearing group: higher synchronization in low-frequency bands (theta and alpha) from left-ear stimulation in the right hemisphere, and more high-frequency (gamma-band) activity from right-ear stimulation in the left hemisphere. In the CI group, increased phase synchronization in the theta and beta frequencies, with bursts of gamma activity, was elicited by the experienced right CI between frontal, temporal, and parietal cortical regions in both hemispheres, consistent with increased recruitment of cortical areas involved in attention and higher-order processes, potentially to support unilateral listening. By contrast, activity was globally desynchronized in response to initial stimulation of the naïve left ear, suggesting decoupling of these pathways from the cortical hearing network. These data reveal asymmetric auditory development promoted by unilateral CI use, resulting in an abnormally mature neural network.
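The phase-locking analysis described above (171 pairs from 19 cephalic electrodes, 4-60 Hz) follows a standard recipe that can be sketched as follows. This is a minimal sketch under assumed parameters, not the study's code: each channel is band-limited, instantaneous phase is extracted with the Hilbert transform, and the phase-locking value (PLV) for each electrode pair is the magnitude of the trial-averaged unit phase-difference vector.

```python
# Minimal PLV sketch (synthetic data, assumed parameters, not the study's
# pipeline). The theta band (4-8 Hz) stands in for one band of the 4-60 Hz
# range analysed in the study.
import numpy as np
from itertools import combinations
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                                    # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
eeg = rng.standard_normal((40, 19, int(fs)))  # trials x 19 electrodes x time

# Band-pass into theta, then extract instantaneous phase per channel.
b, a = butter(4, [4, 8], btype="bandpass", fs=fs)
phase = np.angle(hilbert(filtfilt(b, a, eeg, axis=-1), axis=-1))

pairs = list(combinations(range(19), 2))      # 171 pairs, as in the study
plv = np.zeros(len(pairs))
for k, (i, j) in enumerate(pairs):
    dphi = phase[:, i, :] - phase[:, j, :]
    # PLV: magnitude of the trial-averaged unit phase-difference vector,
    # then averaged over time points.
    plv[k] = np.abs(np.exp(1j * dphi).mean(axis=0)).mean()

print(f"{len(pairs)} pairs, mean theta PLV = {plv.mean():.3f}")
```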
Collapse
|
20
|
Kraus F, Tune S, Ruhe A, Obleser J, Wöstmann M. Unilateral Acoustic Degradation Delays Attentional Separation of Competing Speech. Trends Hear 2021; 25:23312165211013242. [PMID: 34184964 PMCID: PMC8246482 DOI: 10.1177/23312165211013242] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
Hearing loss is often asymmetric, such that hearing thresholds differ substantially between the two ears; the extreme case of such asymmetric hearing is single-sided deafness. A unilateral cochlear implant (CI) on the more severely impaired ear is an effective treatment to restore hearing. The interactive effects of unilateral acoustic degradation and spatial attention to one sound source in multitalker situations are at present unclear. Here, we simulated some features of listening with a unilateral CI in young, normal-hearing listeners (N = 22) who were presented with 8-band noise-vocoded speech to one ear and intact speech to the other. Neural responses were recorded in the electroencephalogram to obtain the spectrotemporal response function to speech. Listeners made more mistakes when answering questions about vocoded (vs. intact) attended speech. At the neural level, we asked how unilateral acoustic degradation would impact the attention-induced amplification of neural tracking of target versus distracting speech. Interestingly, unilateral degradation did not per se reduce the attention-induced amplification but instead delayed it in time: speech encoding accuracy, modelled on the basis of the spectrotemporal response function, was significantly enhanced for attended versus ignored intact speech at earlier neural response latencies (before ~250 ms). This attentional enhancement was not absent but delayed for vocoded speech. These findings suggest that attentional selection of unilateral, degraded speech is feasible but induces delayed neural separation of competing speech, which might explain the listening challenges experienced by unilateral CI users.
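The 8-band noise vocoding used above to simulate CI-like input follows a standard construction, sketched below. Band edges, filter order, and the toy input are assumptions, not the authors' exact settings: the signal is split into band-pass channels, each channel's Hilbert envelope is extracted, and the envelopes modulate band-limited noise carriers.

```python
# Minimal noise-vocoder sketch (assumed parameters, not the authors' exact
# settings): envelope-modulated noise carriers in 8 log-spaced bands.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(x, fs, n_bands=8, f_lo=100.0, f_hi=7000.0):
    rng = np.random.default_rng(2)
    # Logarithmically spaced band edges (f_hi kept below Nyquist).
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        band = filtfilt(b, a, x)
        env = np.abs(hilbert(band))           # channel envelope
        noise = filtfilt(b, a, rng.standard_normal(len(x)))
        out += env * noise                    # envelope-modulated noise carrier
    return out

# Toy input: an amplitude-modulated tone standing in for speech.
fs = 16000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 3 * t))
vocoded = noise_vocode(signal, fs)
```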
Collapse
Affiliation(s)
- Frauke Kraus
- Department of Psychology, University of Lübeck, Lübeck, Germany
| | - Sarah Tune
- Department of Psychology, University of Lübeck, Lübeck, Germany
| | - Anna Ruhe
- Department of Psychology, University of Lübeck, Lübeck, Germany
| | - Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany
| | - Malte Wöstmann
- Department of Psychology, University of Lübeck, Lübeck, Germany
| |
Collapse
|
21
|
EEG-based diagnostics of the auditory system using cochlear implant electrodes as sensors. Sci Rep 2021; 11:5383. [PMID: 33686155 PMCID: PMC7940426 DOI: 10.1038/s41598-021-84829-y] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2020] [Accepted: 02/18/2021] [Indexed: 01/31/2023] Open
Abstract
The cochlear implant is one of the most successful medical prostheses, allowing deaf and severely hearing-impaired persons to hear again by electrically stimulating the auditory nerve. A trained audiologist adjusts the stimulation settings for good speech understanding, known as "fitting" the implant. This process is based on subjective feedback from the user, making it time-consuming and challenging, especially in paediatric or communication-impaired populations. Furthermore, fittings only happen during infrequent sessions at a clinic, and therefore cannot take into account variable factors that affect the user's hearing, such as physiological changes and different listening environments. Objective audiometry, in which brain responses evoked by auditory stimulation are collected and analysed, removes the need for active patient participation. However, recording of brain responses still requires expensive equipment that is cumbersome to use. An elegant solution is to record the neural signals using the implant itself. We demonstrate for the first time the recording of continuous electroencephalographic (EEG) signals from the implanted intracochlear electrode array in human subjects, using auditory evoked potentials originating from different brain regions. This was done using a temporary recording set-up with a percutaneous connector used for research purposes. Furthermore, we show that the response morphologies and amplitudes depend crucially on the recording electrode configuration. The integration of an EEG system into cochlear implants paves the way towards chronic neuro-monitoring of hearing-impaired patients in their everyday environment, and neuro-steered hearing prostheses, which can autonomously adjust their output based on neural feedback.
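The objective audiometry described above rests on stimulus-locked averaging of evoked responses, which the following minimal sketch illustrates with synthetic data (all parameters are assumptions): averaging n epochs attenuates uncorrelated background activity by roughly a factor of sqrt(n), which is what makes evoked potentials recoverable from ongoing EEG, whether recorded from scalp electrodes or from the implant itself.

```python
# Minimal sketch (synthetic data, assumed parameters) of stimulus-locked
# averaging, the core step in recovering auditory evoked potentials.
import numpy as np

fs = 1000                                    # assumed sampling rate (Hz)
n_trials, n_samp = 300, 500                  # 300 epochs of 500 ms
rng = np.random.default_rng(3)

t = np.arange(n_samp) / fs
evoked = 2e-6 * np.exp(-((t - 0.1) / 0.02) ** 2)   # toy response at 100 ms (V)
noise = 20e-6 * rng.standard_normal((n_trials, n_samp))
epochs = evoked + noise                      # single-trial recordings

# Averaging n trials reduces uncorrelated noise by ~sqrt(n).
avg = epochs.mean(axis=0)
gain = noise.std() / (avg - evoked).std()
print(f"noise reduction ~= {gain:.1f} (theory: {np.sqrt(n_trials):.1f})")
```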
Collapse
|
22
|
Abstract
Speech processing in the human brain is grounded in non-specific auditory processing shared with the general mammalian brain, but relies on human-specific adaptations for processing speech and language. For this reason, many recent neurophysiological investigations of speech processing have turned to the human brain, with an emphasis on continuous speech. Substantial progress has been made using the phenomenon of "neural speech tracking", in which neurophysiological responses time-lock to the rhythm of auditory (and other) features in continuous speech. One broad category of investigations concerns the extent to which speech-tracking measures are related to speech intelligibility, which has clinical applications in addition to its scientific importance. Recent investigations have also focused on disentangling the different neural processes that contribute to speech tracking. The two lines of research are closely related, since processing stages throughout auditory cortex, in addition to subcortical processing and higher-order and attentional processes, all contribute to speech comprehension.
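The neural speech tracking surveyed above is commonly quantified with a linear temporal response function (TRF) mapping a time-lagged speech envelope to EEG. The sketch below is a minimal illustration on synthetic data, not any specific study's pipeline: the TRF is estimated by ridge regression and tracking is scored as the correlation between predicted and recorded EEG.

```python
# Minimal TRF sketch (synthetic data and parameters): ridge regression from
# a lagged speech envelope to EEG, scored by prediction-response correlation.
import numpy as np

fs = 100                                     # assumed envelope/EEG rate (Hz)
n = 60 * fs                                  # 60 s of data
rng = np.random.default_rng(4)
env = np.abs(rng.standard_normal(n))         # stand-in for a speech envelope

lags = np.arange(0, 40)                      # 0-400 ms of causal lags
# Design matrix of lagged envelopes (np.roll wraps at the edges; fine here).
X = np.stack([np.roll(env, L) for L in lags], axis=1)
true_trf = np.exp(-lags / 10.0)              # toy response kernel
eeg = X @ true_trf + 5 * rng.standard_normal(n)

# Ridge regression: w = (X'X + lambda*I)^-1 X'y
lam = 1e2
w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
pred = X @ w
tracking = np.corrcoef(pred, eeg)[0, 1]
print(f"envelope tracking r = {tracking:.2f}")
```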
Collapse
Affiliation(s)
- Christian Brodbeck
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, U.S.A
| | - Jonathan Z. Simon
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, U.S.A
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland 20742, U.S.A
- Department of Biology, University of Maryland, College Park, Maryland 20742, U.S.A
| |
Collapse
|