1. Mohammadi Y, Graversen C, Manresa JB, Østergaard J, Andersen OK. Effects of Background Noise and Linguistic Violations on Frontal Theta Oscillations During Effortful Listening. Ear Hear 2024; 45:721-729. PMID: 38287477. DOI: 10.1097/aud.0000000000001464.
Abstract
OBJECTIVES Background noise and linguistic violations have been shown to increase listening effort. The present study aims to examine the effects of the interaction between background noise and linguistic violations on subjective listening effort and frontal theta oscillations during effortful listening. DESIGN Thirty-two normal-hearing listeners participated in this study. The linguistic violation was operationalized as sentences versus random words (strings). Behavioral and electroencephalography data were collected while participants listened to sentences and strings in background noise at different signal-to-noise ratios (SNRs; -9, -6, -3, and 0 dB), maintained them in memory for about 3 s in the presence of background noise, and then chose the correct sequence of words from a base matrix of words. RESULTS Results showed interaction effects of SNR and speech type on effort ratings. Although strings were inherently more effortful than sentences, decreasing the SNR from 0 to -9 dB (in 3 dB steps) increased effort ratings more for sentences than for strings at each step, suggesting that noise has a more pronounced effect on sentence processing than on string processing at low SNRs. Results also showed a significant interaction between SNR and speech type on frontal theta event-related synchronization during the retention interval. This interaction indicated that strings exhibited higher frontal theta event-related synchronization than sentences at an SNR of 0 dB, suggesting increased verbal working memory demand for strings under challenging listening conditions. CONCLUSIONS The study demonstrated that the interplay between linguistic violation and background noise shapes perceived effort and cognitive load during speech comprehension under challenging listening conditions. The differential impact of noise on processing sentences versus strings highlights the influential role of context and cognitive resource allocation in the processing of speech.
Affiliation(s)
- Yousef Mohammadi: Department of Health Science and Technology, Integrative Neuroscience, Aalborg University, Aalborg, Denmark
- Carina Graversen: Department of Health Science and Technology, Integrative Neuroscience, Aalborg University, Aalborg, Denmark; Department of Health Science and Technology, Center for Neuroplasticity and Pain, Aalborg University, Aalborg, Denmark
- José Biurrun Manresa: Department of Health Science and Technology, Center for Neuroplasticity and Pain, Aalborg University, Aalborg, Denmark; Institute for Research and Development in Bioengineering and Bioinformatics, National Scientific and Technical Research Council (CONICET) - National University of Entre Ríos (UNER), Oro Verde, Argentina
- Jan Østergaard: Department of Electronic Systems, Aalborg University, Aalborg, Denmark
- Ole Kæseler Andersen: Department of Health Science and Technology, Integrative Neuroscience, Aalborg University, Aalborg, Denmark; Department of Health Science and Technology, Center for Neuroplasticity and Pain, Aalborg University, Aalborg, Denmark
2. Inguscio BMS, Cartocci G, Sciaraffa N, Nicastri M, Giallini I, Aricò P, Greco A, Babiloni F, Mancini P. Two are better than one: Differences in cortical EEG patterns during auditory and visual verbal working memory processing between Unilateral and Bilateral Cochlear Implanted children. Hear Res 2024; 446:109007. PMID: 38608331. DOI: 10.1016/j.heares.2024.109007.
Abstract
Despite the proven effectiveness of the cochlear implant (CI) in restoring hearing to deaf or hard-of-hearing (DHH) children, extreme variability in verbal working memory (VWM) abilities is observed to date in both unilateral and bilateral CI user children (CIs). Although clinical experience has long noted deficits in this fundamental executive function in CIs, their cause is still unknown. Here, we set out to investigate differences in brain functioning regarding the impact of monaural and binaural listening in CIs compared with normal-hearing (NH) peers during a three-level-difficulty n-back task undertaken in two sensory modalities (auditory and visual). The objective of this pioneering study was to identify electroencephalographic (EEG) marker pattern differences in visual and auditory VWM performance in CIs compared to NH peers, as well as possible differences between unilateral cochlear implant (UCI) and bilateral cochlear implant (BCI) users. The main results revealed differences in the theta and gamma EEG bands. Compared with hearing controls and BCIs, UCIs showed hypoactivation of theta in the frontal area during the most complex condition of the auditory task, and this activation correlated with VWM performance. Theta hypoactivation was also observed in UCIs in the left hemisphere when compared to BCIs, and gamma hypoactivation in UCIs compared to both BCIs and NHs. For the latter two groups, a correlation was found between left-hemispheric gamma oscillation and performance in the audio task. These findings, discussed in the light of recent research, suggest that a unilateral CI is deficient in supporting auditory VWM in DHH children, whereas a bilateral CI would allow the DHH child to approach the VWM benchmark of NH children. The present study suggests the possible effectiveness of EEG in supporting, through a targeted approach, the diagnosis and rehabilitation of VWM in DHH children.
Affiliation(s)
- Bianca Maria Serena Inguscio: Department of Molecular Medicine, Sapienza University of Rome, Viale Regina Elena 291, Rome 00161, Italy; BrainSigns Srl, Via Tirso, 14, Rome 00198, Italy
- Giulia Cartocci: Department of Molecular Medicine, Sapienza University of Rome, Viale Regina Elena 291, Rome 00161, Italy; BrainSigns Srl, Via Tirso, 14, Rome 00198, Italy
- Maria Nicastri: Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, Rome 00161, Italy
- Ilaria Giallini: Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, Rome 00161, Italy
- Pietro Aricò: Department of Molecular Medicine, Sapienza University of Rome, Viale Regina Elena 291, Rome 00161, Italy; BrainSigns Srl, Via Tirso, 14, Rome 00198, Italy; Department of Computer, Control, and Management Engineering "Antonio Ruberti", Sapienza University of Rome, Via Ariosto 125, Rome 00185, Italy
- Antonio Greco: Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, Rome 00161, Italy
- Fabio Babiloni: Department of Molecular Medicine, Sapienza University of Rome, Viale Regina Elena 291, Rome 00161, Italy; BrainSigns Srl, Via Tirso, 14, Rome 00198, Italy; Department of Computer Science, Hangzhou Dianzi University, Xiasha Higher Education Zone, Hangzhou 310018, China
- Patrizia Mancini: Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, Rome 00161, Italy
3. McGarrigle R, Knight S, Rakusen L, Mattys S. Mood shapes the impact of reward on perceived fatigue from listening. Q J Exp Psychol (Hove) 2024:17470218241242260. PMID: 38485525. DOI: 10.1177/17470218241242260.
Abstract
Knowledge of the underlying mechanisms of effortful listening could help to reduce cases of social withdrawal and mitigate fatigue, especially in older adults. However, the relationship between transient effort and longer term fatigue is likely to be more complex than originally thought. Here, we manipulated the presence/absence of monetary reward to examine the role of motivation and mood state in governing changes in perceived effort and fatigue from listening. In an online study, 185 participants were randomly assigned to either a "reward" (n = 91) or "no-reward" (n = 94) group and completed a dichotic listening task along with a series of questionnaires assessing changes over time in perceived effort, mood, and fatigue. Effort ratings were higher overall in the reward group, yet fatigue ratings in that group showed a shallower linear increase over time. Mediation analysis revealed an indirect effect of reward on fatigue ratings via perceived mood state; reward induced a more positive mood state which was associated with reduced fatigue. These results suggest that: (1) listening conditions rated as more "effortful" may be less fatiguing if the effort is deemed worthwhile, and (2) alterations to one's mood state represent a potential mechanism by which fatigue may be elicited during unrewarding listening situations.
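The mediation logic reported above (reward lifts mood, and mood in turn predicts lower fatigue) follows the standard indirect-effect (a*b) framework. The sketch below is a generic percentile-bootstrap illustration on simulated data; the variable names and effect sizes are hypothetical, and it is not the authors' actual model or software.

```python
import numpy as np

def indirect_effect(x, m, y, n_boot=500, seed=0):
    """Percentile-bootstrap estimate of the indirect (a*b) mediation path:
    x -> m (path a), then m -> y controlling for x (path b)."""
    rng = np.random.default_rng(seed)
    n = len(x)

    def ab(idx):
        xs, ms, ys = x[idx], m[idx], y[idx]
        # path a: regress the mediator on the predictor (with intercept)
        a = np.linalg.lstsq(np.c_[np.ones(n), xs], ms, rcond=None)[0][1]
        # path b: regress the outcome on the mediator, controlling for x
        b = np.linalg.lstsq(np.c_[np.ones(n), xs, ms], ys, rcond=None)[0][2]
        return a * b

    est = ab(np.arange(n))
    boots = np.array([ab(rng.integers(0, n, n)) for _ in range(n_boot)])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return est, (lo, hi)

# simulated data in the direction reported above: reward lifts mood,
# and a better mood predicts lower fatigue
rng = np.random.default_rng(1)
reward = rng.integers(0, 2, 400).astype(float)
mood = reward + 0.3 * rng.standard_normal(400)
fatigue = -mood + 0.3 * rng.standard_normal(400)
est, (lo, hi) = indirect_effect(reward, mood, fatigue)
```

A bootstrap confidence interval that excludes zero is the usual evidence for a reliable indirect effect; dedicated tools (e.g., structural equation modeling packages) add covariates and robust errors on top of this core idea.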
Affiliation(s)
- Sarah Knight: Department of Psychology, University of York, York, UK
- Sven Mattys: Department of Psychology, University of York, York, UK
4. Brilliant, Yaar-Soffer Y, Herrmann CS, Henkin Y, Kral A. Theta and alpha oscillatory signatures of auditory sensory and cognitive loads during complex listening. Neuroimage 2024; 289:120546. PMID: 38387743. DOI: 10.1016/j.neuroimage.2024.120546.
Abstract
The neuronal signatures of sensory and cognitive load provide access to brain activities related to complex listening situations. Sensory and cognitive loads are typically reflected in measures like response time (RT) and event-related potential (ERP) components. It is difficult, however, to distinguish the underlying brain processes from these measures alone. In this study, along with RT and ERP analysis, we performed time-frequency analysis and source localization of oscillatory activity in participants performing two different auditory tasks with varying degrees of complexity, and related them to sensory and cognitive load. We studied neuronal oscillatory activity in the periods both before the behavioral response (pre-response) and after it (post-response). Robust oscillatory activities were found in both periods and were differentially affected by sensory and cognitive load. Oscillatory activity under sensory load was characterized by a decrease in pre-response (early) theta activity and increased alpha activity. Oscillatory activity under cognitive load was characterized by increased theta activity, mainly in the post-response (late) period. Furthermore, source localization revealed specific brain regions responsible for processing these loads, such as the temporal and frontal lobes, the cingulate cortex, and the precuneus. The results provide evidence that in complex listening situations, the brain processes sensory and cognitive loads differently. These neural processes have specific oscillatory signatures and are long-lasting, extending beyond the behavioral response.
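Theta and alpha activity of the kind analyzed above is commonly quantified as band-limited spectral power. A minimal sketch on a synthetic single channel, assuming the conventional band edges (theta 4-7 Hz, alpha 8-13 Hz) and Welch's method rather than the authors' actual time-frequency pipeline:

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def band_power(signal, fs, band):
    """Integrate the Welch power spectral density over a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return trapezoid(psd[mask], freqs[mask])

# synthetic one-channel "EEG": a 6 Hz (theta-band) rhythm buried in noise
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
theta_power = band_power(signal, fs, (4, 7))   # dominated by the 6 Hz rhythm
alpha_power = band_power(signal, fs, (8, 13))  # essentially the noise floor
```

Event-related analyses like those in the study then compare such band power across time windows (e.g., pre- versus post-response) rather than over the whole recording.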
Affiliation(s)
- Brilliant: Department of Experimental Otology, Hannover Medical School, 30625 Hannover, Germany
- Y Yaar-Soffer: Department of Communication Disorder, Tel Aviv University, 5262657 Tel Aviv, Israel; Hearing, Speech and Language Center, Sheba Medical Center, 5265601 Tel Hashomer, Israel
- C S Herrmann: Experimental Psychology Division, University of Oldenburg, 26111 Oldenburg, Germany
- Y Henkin: Department of Communication Disorder, Tel Aviv University, 5262657 Tel Aviv, Israel; Hearing, Speech and Language Center, Sheba Medical Center, 5265601 Tel Hashomer, Israel
- A Kral: Department of Experimental Otology, Hannover Medical School, 30625 Hannover, Germany
5. Slugocki C, Kuk F, Korhonen P. Alpha-Band Dynamics of Hearing Aid Wearers Performing the Repeat-Recall Test (RRT). Trends Hear 2024; 28:23312165231222098. PMID: 38549287. PMCID: PMC10981257. DOI: 10.1177/23312165231222098.
Abstract
This study measured electroencephalographic activity in the alpha band, often associated with task difficulty, to physiologically validate self-reported effort ratings from older hearing-impaired listeners performing the Repeat-Recall Test (RRT), an integrative multipart assessment of speech-in-noise performance, context use, and auditory working memory. Following a single-blind within-subjects design, 16 older listeners (mean age = 71 years, SD = 13, 9 female) with a moderate-to-severe degree of bilateral sensorineural hearing loss performed the RRT while wearing hearing aids at four fixed signal-to-noise ratios (SNRs) of -5, 0, 5, and 10 dB. Performance and subjective ratings of listening effort were assessed for complementary versions of the RRT materials with high/low availability of semantic context. Listeners were also tested with a version of the RRT that omitted the memory (i.e., recall) component. As expected, results showed alpha power to decrease significantly with increasing SNR from 0 through 10 dB. When tested with high context sentences, alpha was significantly higher in conditions where listeners had to recall the sentence materials compared to conditions where the recall requirement was omitted. When tested with low context sentences, alpha power was relatively high irrespective of the memory component. Within subjects, alpha power was related to listening effort ratings collected across the different RRT conditions. Overall, these results suggest that the multipart demands of the RRT modulate both neural and behavioral measures of listening effort in directions consistent with the expected/designed difficulty of the RRT conditions.
Affiliation(s)
- Christopher Slugocki: Office of Research in Clinical Amplification (ORCA-USA), WS Audiology, Lisle, IL, USA
- Francis Kuk: Office of Research in Clinical Amplification (ORCA-USA), WS Audiology, Lisle, IL, USA
- Petri Korhonen: Office of Research in Clinical Amplification (ORCA-USA), WS Audiology, Lisle, IL, USA
6. Huizeling E, Alday PM, Peeters D, Hagoort P. Combining EEG and 3D-eye-tracking to study the prediction of upcoming speech in naturalistic virtual environments: A proof of principle. Neuropsychologia 2023; 191:108730. PMID: 37939871. DOI: 10.1016/j.neuropsychologia.2023.108730.
Abstract
EEG and eye-tracking provide complementary information when investigating language comprehension. Evidence that speech processing may be facilitated by speech prediction comes from the observation that a listener's eye gaze moves towards a referent before it is mentioned if the remainder of the spoken sentence is predictable. However, changes to the trajectory of anticipatory fixations could result from a change in prediction or from an attention shift. Conversely, N400 amplitudes and concurrent spectral power provide information about the ease of word processing at the moment the word is perceived. In a proof-of-principle investigation, we combined EEG and eye-tracking to study linguistic prediction in naturalistic, virtual environments. We observed increased processing, reflected in theta-band power, either during verb processing (when the verb was predictive of the noun) or during noun processing (when the verb was not predictive of the noun). Alpha power was higher in response to the predictive verb and to unpredictable nouns. We replicated typical effects of noun congruence, but not predictability, on the N400 in response to the noun. Thus, the rich visual context that accompanied speech in virtual reality influenced language processing compared to previous reports; the visual context may have facilitated the processing of unpredictable nouns. Finally, anticipatory fixations were predictive of spectral power during noun processing, and the length of time spent fixating the target could be predicted by spectral power at verb onset, conditional on the object having been fixated. Overall, we show that combining EEG and eye-tracking provides a promising new method to answer novel research questions about the prediction of upcoming linguistic input, for example regarding the role of extralinguistic cues in prediction during language comprehension.
Affiliation(s)
- Eleanor Huizeling: Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- David Peeters: Department of Communication and Cognition, TiCC, Tilburg University, Tilburg, the Netherlands
- Peter Hagoort: Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands; Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands
7. Mohammadi Y, Østergaard J, Graversen C, Andersen OK, Biurrun Manresa J. Validity and reliability of self-reported and neural measures of listening effort. Eur J Neurosci 2023; 58:4357-4370. PMID: 37984406. DOI: 10.1111/ejn.16187.
Abstract
Listening effort can be defined as a measure of the cognitive resources used by listeners to perform a listening task. Various methods have been proposed to measure this effort, yet their reliability remains unestablished, a crucial step before their application in research or clinical settings. This study encompassed 32 participants undertaking speech-in-noise tasks across two sessions, approximately a week apart. They listened to sentences and word lists at varying signal-to-noise ratios (SNRs; -9, -6, -3, and 0 dB) and then retained them for roughly 3 s. We evaluated the test-retest reliability of self-reported effort ratings and of theta (4-7 Hz) and alpha (8-13 Hz) oscillatory power, previously suggested as neural markers of listening effort. Additionally, we examined the reliability of correct word percentages. Both relative and absolute reliability were assessed using intraclass correlation coefficients (ICC) and Bland-Altman analysis. We also computed the standard error of measurement (SEM) and the smallest detectable change (SDC). Our findings indicated heightened frontal midline theta power for word lists compared to sentences during the retention phase at high SNRs (0 dB, -3 dB), likely indicating a greater memory load for word lists. We observed an impact of SNR on alpha power in the right central region during the listening phase and on frontal theta power during the retention phase for sentences. Overall, the reliability analysis demonstrated acceptable between-session reliability for correct words and effort ratings. However, the neural measures (frontal midline theta power and right central alpha power) displayed substantial variability, even though group-level outcomes appeared consistent across sessions.
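The reliability metrics named above (ICC, SEM, SDC) can all be derived from a subjects-by-sessions score matrix. A minimal sketch on simulated test-retest data; the specific ICC form (two-way random effects, absolute agreement, single measurement, i.e., ICC(2,1)) and the error-variance-based SEM are common conventions assumed here, not details taken from the paper:

```python
import numpy as np

def icc_sem_sdc(data):
    """ICC(2,1), SEM, and smallest detectable change (SDC95) from a
    (n_subjects, k_sessions) array of scores."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-session means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # sessions
    sse = ((data - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # residual error
    icc = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    sem = np.sqrt(mse)              # SEM from error variance (an assumption)
    sdc = 1.96 * np.sqrt(2) * sem   # 95% smallest detectable change
    return icc, sem, sdc

# simulated test-retest scores: 30 listeners, two sessions a week apart
rng = np.random.default_rng(3)
base = rng.normal(50, 10, 30)          # stable between-subject differences
data = base[:, None] + rng.normal(0, 1, (30, 2))   # session-to-session error
icc, sem, sdc = icc_sem_sdc(data)
```

With large between-subject variance and small session error, ICC approaches 1; the SDC gives the change a single listener must exceed to be considered real rather than measurement noise.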
Affiliation(s)
- Yousef Mohammadi: Integrative Neuroscience, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Jan Østergaard: Department of Electronic Systems, Aalborg University, Aalborg, Denmark
- Carina Graversen: Integrative Neuroscience, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark; Center for Neuroplasticity and Pain (CNAP), Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Ole Kaeseler Andersen: Integrative Neuroscience, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark; Center for Neuroplasticity and Pain (CNAP), Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- José Biurrun Manresa: Center for Neuroplasticity and Pain (CNAP), Department of Health Science and Technology, Aalborg University, Aalborg, Denmark; Institute for Research and Development in Bioengineering and Bioinformatics (IBB), CONICET-UNER, Oro Verde, Argentina
8. Philips C, Jacquemin L, Lammers MJW, Mertens G, Gilles A, Vanderveken OM, Van Rompaey V. Listening effort and fatigue among cochlear implant users: a scoping review. Front Neurol 2023; 14:1278508. PMID: 38020642. PMCID: PMC10656682. DOI: 10.3389/fneur.2023.1278508.
Abstract
Introduction In challenging listening situations, speech perception with a cochlear implant (CI) remains demanding and requires high levels of listening effort, which can lead to increased levels of listening-related fatigue. The body of literature on these topics grows as the number of CI users rises. This scoping review aims to provide an overview of the existing literature on listening effort, fatigue, and listening-related fatigue among CI users and of the measurement techniques used to evaluate them. Methods The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement was used to conduct the scoping review. The search was performed on PubMed, Scopus, and Web of Science to identify all relevant studies. Results In total, 24 studies were included; they suggest that CI users experience higher levels of listening effort than normal-hearing controls, as measured with scales, questionnaires, and electroencephalogram measurements. However, dual-task paradigms did not reveal any difference in listening effort between the two groups. Uncertainty exists regarding differences in listening effort between unilateral, bilateral, and bimodal CI users with bilateral hearing loss due to ambiguous results. Only five studies were eligible for the research question on fatigue and listening-related fatigue, and studies using objective measurement methods were lacking. Discussion This scoping review highlights the necessity for additional research on these topics. Moreover, there is a need for guidelines on how listening effort, fatigue, and listening-related fatigue should be measured to allow for comparable study results and to support optimal rehabilitation strategies.
Affiliation(s)
- Cato Philips: Experimental Laboratory of Translational Neurosciences and Dento-Otolaryngology, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium; Department of Otorhinolaryngology/Head and Neck Surgery, Antwerp University Hospital, Antwerp, Belgium
- Laure Jacquemin: Experimental Laboratory of Translational Neurosciences and Dento-Otolaryngology, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium; Department of Otorhinolaryngology/Head and Neck Surgery, Antwerp University Hospital, Antwerp, Belgium
- Marc J. W. Lammers: Experimental Laboratory of Translational Neurosciences and Dento-Otolaryngology, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium; Department of Otorhinolaryngology/Head and Neck Surgery, Antwerp University Hospital, Antwerp, Belgium
- Griet Mertens: Experimental Laboratory of Translational Neurosciences and Dento-Otolaryngology, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium; Department of Otorhinolaryngology/Head and Neck Surgery, Antwerp University Hospital, Antwerp, Belgium
- Annick Gilles: Experimental Laboratory of Translational Neurosciences and Dento-Otolaryngology, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium; Department of Otorhinolaryngology/Head and Neck Surgery, Antwerp University Hospital, Antwerp, Belgium; Department of Education, Health and Social Work, University College Ghent, Ghent, Belgium
- Olivier M. Vanderveken: Experimental Laboratory of Translational Neurosciences and Dento-Otolaryngology, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium; Department of Otorhinolaryngology/Head and Neck Surgery, Antwerp University Hospital, Antwerp, Belgium
- Vincent Van Rompaey: Experimental Laboratory of Translational Neurosciences and Dento-Otolaryngology, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium; Department of Otorhinolaryngology/Head and Neck Surgery, Antwerp University Hospital, Antwerp, Belgium
9. Eqlimi E, Bockstael A, Schönwiesner M, Talsma D, Botteldooren D. Time course of EEG complexity reflects attentional engagement during listening to speech in noise. Eur J Neurosci 2023; 58:4043-4069. PMID: 37814423. DOI: 10.1111/ejn.16159.
Abstract
Auditory distractions are recognized to considerably challenge the quality of information encoding during speech comprehension. This study explores electroencephalography (EEG) microstate dynamics in ecologically valid, noisy settings, aiming to uncover how these auditory distractions influence the process of information encoding during speech comprehension. We examined three listening scenarios: (1) speech perception with background noise (LA), (2) focused attention on the background noise (BA), and (3) intentional disregard of the background noise (BUA). Our findings showed that microstate complexity and unpredictability increased when attention was directed towards speech compared with the tasks without speech (LA > BA and BUA). Notably, the time elapsed between recurrences of microstates increased significantly in LA compared with both BA and BUA. This suggests that coping with background noise during speech comprehension demands more sustained cognitive effort. Additionally, a two-stage time course was observed for both microstate complexity and the alpha-to-theta power ratio: both measures started at a lower level in the early epochs, gradually increased, and eventually reached a steady level in the later epochs. The findings suggest that the initial stage is primarily driven by sensory processes and information gathering, while the second stage involves higher-level cognitive engagement, including mnemonic binding and memory encoding.
Affiliation(s)
- Ehsan Eqlimi: WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium
- Annelies Bockstael: WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium
- Durk Talsma: Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Dick Botteldooren: WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium
10. Kestens K, Van Yper L, Degeest S, Keppler H. The P300 Auditory Evoked Potential: A Physiological Measure of the Engagement of Cognitive Systems Contributing to Listening Effort? Ear Hear 2023; 44:1389-1403. PMID: 37287098. DOI: 10.1097/aud.0000000000001381.
Abstract
OBJECTIVES This study aimed to explore the potential of the P300 (P3b) as a physiological measure of the engagement of cognitive systems contributing to listening effort. DESIGN Nineteen right-handed young adults (mean age: 24.79 years) and 20 right-handed older adults (mean age: 58.90 years) with age-appropriate hearing were included. The P300 was recorded at Fz, Cz, and Pz using a two-stimulus oddball paradigm with the Flemish monosyllabic numbers "one" and "three" as standard and deviant stimuli, respectively. This oddball paradigm was conducted in three listening conditions varying in listening demand: one quiet and two noisy listening conditions (+4 and -2 dB signal-to-noise ratio [SNR]). In each listening condition, physiological, behavioral, and subjective tests of listening effort were administered. P300 amplitude and latency served as a potential physiological measure of the engagement of cognitive systems contributing to listening effort. In addition, the mean reaction time to respond to the deviant stimuli was used as a behavioral listening effort measurement. Last, subjective listening effort was assessed through a visual analog scale. To assess the effects of listening condition and age group on each of these measures, linear mixed models were conducted. Correlation coefficients were calculated to determine the relationship between the physiological, behavioral, and subjective measures. RESULTS P300 amplitude and latency, mean reaction time, and subjective scores significantly increased as the listening condition became more taxing. Moreover, a significant group effect was found for all physiological, behavioral, and subjective measures, favoring young adults. Last, no clear relationships between the physiological, behavioral, and subjective measures were found. CONCLUSIONS The P300 was considered a physiological measure of the engagement of cognitive systems contributing to listening effort. Because advancing age is associated with hearing loss and cognitive decline, more research is needed on the effects of all these variables on the P300 to further explore its usefulness as a listening effort measurement for research and clinical purposes.
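The core of any oddball ERP measurement like the P300 recording described above is epoching, baseline correction, and trial averaging. The sketch below runs on synthetic data; the sampling rate, epoch window, and simulated deflection are illustrative assumptions, not the study's recording parameters.

```python
import numpy as np

def erp_average(eeg, event_samples, fs, tmin=-0.1, tmax=0.6):
    """Cut epochs around stimulus onsets, subtract each epoch's pre-stimulus
    baseline mean, and average across trials (a minimal ERP pipeline)."""
    pre, post = int(round(-tmin * fs)), int(round(tmax * fs))
    epochs = []
    for s in event_samples:
        ep = eeg[s - pre : s + post].astype(float)
        epochs.append(ep - ep[:pre].mean())    # baseline correction
    return np.mean(epochs, axis=0)

# synthetic recording: every deviant evokes a positive deflection peaking
# ~300 ms after onset (a stand-in for the P300)
fs = 200
rng = np.random.default_rng(0)
eeg = 0.2 * rng.standard_normal(60 * fs)
deviants = np.arange(2 * fs, 50 * fs, 2 * fs)       # one deviant every 2 s
bump = np.exp(-0.5 * (np.arange(-20, 21) / 8.0) ** 2)
for s in deviants:
    eeg[s + 40 : s + 81] += bump                    # centered 300 ms post-onset
erp = erp_average(eeg, deviants, fs)                # peak near epoch sample 80
```

Amplitude and latency measures are then read off the averaged deviant waveform (here, the maximum and its timing), typically within a predefined post-stimulus window.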
Affiliation(s)
- Katrien Kestens: Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Lindsey Van Yper: Department of Linguistics, The Australian Hearing Hub, Macquarie University, Sydney, Australia; Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
- Sofie Degeest: Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Hannah Keppler: Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium; Department of Oto-rhino-laryngology, Ghent University Hospital, Ghent, Belgium
11. An H, Lee J, Suh MW, Lim Y. Neural correlation of speech envelope tracking for background noise in normal hearing. Front Neurosci 2023; 17:1268591. PMID: 37916182. PMCID: PMC10616241. DOI: 10.3389/fnins.2023.1268591.
Abstract
Everyday speech communication often occurs in environments with background noise, and the impact of noise on speech recognition can vary depending on factors such as noise type, noise intensity, and the listener's hearing ability. However, the extent to which the neural mechanisms of speech understanding are influenced by different types and levels of noise remains unknown. This study aims to investigate whether individuals exhibit distinct neural responses and attention strategies depending on noise conditions. We recorded electroencephalography (EEG) data from 20 participants with normal hearing (13 males) and evaluated both neural tracking of speech envelopes and behavioral performance in speech understanding in the presence of varying types of background noise. Participants engaged in an EEG experiment consisting of two separate sessions. The first session involved listening to a 12-min story presented binaurally without any background noise. In the second session, speech understanding was measured using matrix sentences presented under speech-shaped noise (SSN) and story-noise background conditions at noise levels corresponding to the sentence recognition score (SRS). We observed differences in neural envelope correlation depending on noise type but not on noise level. Interestingly, the impact of noise type on the variation in envelope tracking was more pronounced among participants with higher speech perception scores, while those with lower scores exhibited similar envelope correlations regardless of the noise condition. The findings suggest that even individuals with normal hearing may adopt different strategies to understand speech in challenging listening environments, depending on the type of noise.
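Neural envelope tracking of the kind measured above is often summarized as a correlation between the speech amplitude envelope and the EEG at a neural lag. A single-lag sketch on simulated signals; real studies typically fit encoder/decoder models across many lags and channels, and the 100 ms lag here is an assumption:

```python
import numpy as np
from scipy.signal import hilbert

def envelope_tracking_r(audio, eeg, fs, lag_ms=100):
    """Pearson correlation between the speech amplitude envelope (magnitude
    of the analytic signal) and an EEG channel shifted by a fixed lag."""
    env = np.abs(hilbert(audio))
    lag = int(fs * lag_ms / 1000)
    return np.corrcoef(env[:-lag], eeg[lag:])[0, 1]

# simulated pair: broadband noise modulated at 4 Hz, and an "EEG" channel
# that follows the envelope with a 100 ms delay plus measurement noise
fs = 100
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
audio = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * rng.standard_normal(t.size)
lag = 10
env = np.abs(hilbert(audio))
eeg = np.concatenate([np.zeros(lag), env[:-lag]]) + 0.1 * rng.standard_normal(t.size)
r = envelope_tracking_r(audio, eeg, fs)
```

Comparing such correlations across noise conditions (e.g., SSN versus story noise) is the basic logic behind the envelope-tracking contrasts reported in the abstract.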
Affiliation(s)
- HyunJung An
- Center for Intelligent and Interactive Robotics, Korea Institute of Science and Technology, Seoul, Republic of Korea
- JeeWon Lee
- Center for Intelligent and Interactive Robotics, Korea Institute of Science and Technology, Seoul, Republic of Korea
- Department of Electronic and Electrical Engineering, Ewha Womans University, Seoul, Republic of Korea
- Myung-Whan Suh
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul, Republic of Korea
- Yoonseob Lim
- Center for Intelligent and Interactive Robotics, Korea Institute of Science and Technology, Seoul, Republic of Korea
- Department of HY-KIST Bio-convergence, Hanyang University, Seoul, Republic of Korea
12
Kallioinen P, Olofsson JK, von Mentzer CN. Semantic processing in children with Cochlear Implants: A review of current N400 studies and recommendations for future research. Biol Psychol 2023; 182:108655. [PMID: 37541539 DOI: 10.1016/j.biopsycho.2023.108655] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Revised: 07/28/2023] [Accepted: 08/01/2023] [Indexed: 08/06/2023]
Abstract
Deaf and hard of hearing children with cochlear implants (CIs) often display impaired spoken language skills. While a large number of studies have investigated brain responses to sounds in this population, relatively few have focused on semantic processing. Here we summarize and discuss the findings of four studies of the N400, a cortical response that reflects semantic processing, in children with CIs. A study with auditory target stimuli found N400 effects at delayed latencies at 12 months after implantation, but at 18 and 24 months after implantation the effects had typical latencies. In studies with visual target stimuli, N400 effects were larger than or similar to those of controls in children with CIs, despite lower semantic abilities. We propose that in children with CIs, the observed large N400 effect reflects a stronger reliance on top-down predictions relative to bottom-up language processing. Recent behavioral studies of children and adults with CIs suggest that top-down processing is a common compensatory strategy, but one with distinct limitations, such as being effortful. A majority of the studies have small sample sizes (N < 20), and only responses to image targets were studied repeatedly in similar paradigms. This precludes strong conclusions. We give suggestions for future research and ways to overcome the scarcity of participants, including extending research to children with conventional hearing aids, an understudied group.
Affiliation(s)
- Petter Kallioinen
- Department of Linguistics, Stockholm University, Stockholm, Sweden; Lund University Cognitive Science, Lund University, Lund, Sweden
- Jonas K Olofsson
- Department of Psychology, Stockholm University, Stockholm, Sweden
13
Cartocci G, Inguscio BMS, Giorgi A, Vozzi A, Leone CA, Grassia R, Di Nardo W, Di Cesare T, Fetoni AR, Freni F, Ciodaro F, Galletti F, Albera R, Canale A, Piccioni LO, Babiloni F. Music in noise recognition: An EEG study of listening effort in cochlear implant users and normal hearing controls. PLoS One 2023; 18:e0288461. [PMID: 37561758 PMCID: PMC10414671 DOI: 10.1371/journal.pone.0288461] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Accepted: 06/27/2023] [Indexed: 08/12/2023] Open
Abstract
Despite the plethora of studies investigating listening effort and the amount of research concerning music perception by cochlear implant (CI) users, the influence of background noise on music processing has never been investigated. Because listening effort is typically assessed with a speech-in-noise recognition task, the aim of the present study was to investigate listening effort during an emotional categorization task on musical pieces with different levels of background noise. Listening effort was investigated, in addition to participants' ratings and performances, using EEG features known to be involved in this phenomenon, that is, alpha activity in parietal areas and in the left inferior frontal gyrus (IFG), which includes Broca's area. Results showed that CI users performed worse than normal hearing (NH) controls in recognizing the emotional content of the stimuli. Furthermore, when the alpha activity in the signal to noise ratio (SNR) 5 and SNR 10 conditions was corrected by subtracting the activity in the Quiet condition (ideally removing the emotional content of the music and isolating the difficulty level due to the SNRs), CI users showed higher levels of parietal alpha activity and of activity in the right-hemisphere homologue of the left IFG (F8 EEG channel) than NH controls. Finally, the results provide a novel suggestion that F8 is particularly sensitive to SNR-related listening effort in music.
Affiliation(s)
- Giulia Cartocci
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns ltd, Rome, Italy
- Andrea Giorgi
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns ltd, Rome, Italy
- Carlo Antonio Leone
- Department of Otolaryngology Head-Neck Surgery, Monaldi Hospital, Naples, Italy
- Rosa Grassia
- Department of Otolaryngology Head-Neck Surgery, Monaldi Hospital, Naples, Italy
- Walter Di Nardo
- Institute of Otorhinolaryngology, Catholic University of Sacred Heart, Fondazione Policlinico "A Gemelli," IRCCS, Rome, Italy
- Tiziana Di Cesare
- Institute of Otorhinolaryngology, Catholic University of Sacred Heart, Fondazione Policlinico "A Gemelli," IRCCS, Rome, Italy
- Anna Rita Fetoni
- Institute of Otorhinolaryngology, Catholic University of Sacred Heart, Fondazione Policlinico "A Gemelli," IRCCS, Rome, Italy
- Francesco Freni
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Francesco Ciodaro
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Francesco Galletti
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Roberto Albera
- Department of Surgical Sciences, University of Turin, Turin, Italy
- Andrea Canale
- Department of Surgical Sciences, University of Turin, Turin, Italy
- Lucia Oriella Piccioni
- Department of Otolaryngology-Head and Neck Surgery, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Fabio Babiloni
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns ltd, Rome, Italy
14
Mohammadi Y, Graversen C, Østergaard J, Andersen OK, Reichenbach T. Phase-locking of Neural Activity to the Envelope of Speech in the Delta Frequency Band Reflects Differences between Word Lists and Sentences. J Cogn Neurosci 2023; 35:1301-1311. [PMID: 37379482 DOI: 10.1162/jocn_a_02016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/30/2023]
Abstract
The envelope of a speech signal is tracked by neural activity in the cerebral cortex. The cortical tracking occurs mainly in two frequency bands, theta (4-8 Hz) and delta (1-4 Hz). Tracking in the faster theta band has been mostly associated with lower-level acoustic processing, such as the parsing of syllables, whereas the slower tracking in the delta band relates to higher-level linguistic information of words and word sequences. However, much regarding the more specific association between cortical tracking and acoustic as well as linguistic processing remains to be uncovered. Here, we recorded EEG responses to both meaningful sentences and random word lists at different signal-to-noise ratios (SNRs) that led to different levels of speech comprehension as well as listening effort. We then related the neural signals to the acoustic stimuli by computing the phase-locking value (PLV) between the EEG recordings and the speech envelope. We found that the PLV in the delta band increases with increasing SNR for sentences but not for the random word lists, showing that the PLV in this frequency band reflects linguistic information. When attempting to disentangle the effects of SNR, speech comprehension, and listening effort, we observed a trend that the PLV in the delta band might reflect listening effort rather than the other two variables, although the effect was not statistically significant. In summary, our study shows that the PLV in the delta band reflects linguistic information and might be related to listening effort.
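The phase-locking value used here is the magnitude of the average phase difference between the band-limited EEG and the band-limited speech envelope. A minimal sketch on synthetic signals (the FFT-based analytic signal and the test signals are illustrative, not the authors' exact pipeline):

```python
import numpy as np

def bandpass_analytic(x, fs, lo, hi):
    """Band-limit x to [lo, hi] Hz and return its analytic signal,
    built in the frequency domain by keeping only the positive band."""
    n = len(x)
    X = np.fft.fft(x)
    freqs = np.fft.fftfreq(n, 1 / fs)
    keep = (freqs >= lo) & (freqs <= hi)   # positive frequencies only
    Xa = np.zeros_like(X)
    Xa[keep] = 2 * X[keep]                 # doubling yields the analytic signal
    return np.fft.ifft(Xa)

def plv(x, y, fs, lo=1.0, hi=4.0):
    """Phase-locking value between two signals in a band (delta by default)."""
    px = np.angle(bandpass_analytic(x, fs, lo, hi))
    py = np.angle(bandpass_analytic(y, fs, lo, hi))
    return np.abs(np.mean(np.exp(1j * (px - py))))

# Synthetic demo: a shared 2 Hz component with a fixed phase lag
rng = np.random.default_rng(1)
fs = 128
t = np.arange(0, 30, 1 / fs)
env = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.standard_normal(len(t))
eeg = np.sin(2 * np.pi * 2 * t - 0.8) + 0.1 * rng.standard_normal(len(t))

locked = plv(eeg, env, fs)                               # phase-locked pair
random_plv = plv(rng.standard_normal(len(t)), env, fs)   # unrelated signal
```

A constant phase lag between the signals still yields a PLV near 1, which is why the measure indexes consistent tracking rather than zero-lag similarity.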
15
Villard S, Perrachione TK, Lim SJ, Alam A, Kidd G. Energetic and informational masking place dissociable demands on listening effort: Evidence from simultaneous electroencephalography and pupillometry. J Acoust Soc Am 2023; 154:1152-1167. [PMID: 37610284 PMCID: PMC10449482 DOI: 10.1121/10.0020539] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 07/09/2023] [Accepted: 07/14/2023] [Indexed: 08/24/2023]
Abstract
The task of processing speech masked by concurrent speech/noise can pose a substantial challenge to listeners. However, performance on such tasks may not directly reflect the amount of listening effort they elicit. Changes in pupil size and neural oscillatory power in the alpha range (8-12 Hz) are prominent neurophysiological signals known to reflect listening effort; however, measurements obtained through these two approaches are rarely correlated, suggesting that they may respond differently depending on the specific cognitive demands (and, by extension, the specific type of effort) elicited by specific tasks. This study aimed to compare changes in pupil size and alpha power elicited by different types of auditory maskers (highly confusable intelligible speech maskers, speech-envelope-modulated speech-shaped noise, and unmodulated speech-shaped noise maskers) in young, normal-hearing listeners. Within each condition, the target-to-masker ratio was set at the participant's individually estimated 75% correct point on the psychometric function. The speech masking condition elicited a significantly greater increase in pupil size than either of the noise masking conditions, whereas the unmodulated noise masking condition elicited a significantly greater increase in alpha oscillatory power than the speech masking condition, suggesting that the effort needed to solve these respective tasks may have different neural origins.
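The alpha-power measure referenced above is typically quantified as the change in 8 to 12 Hz power during the task relative to a baseline interval. A minimal single-channel sketch on synthetic data (function names, parameters, and signals are illustrative assumptions, not the study's pipeline):

```python
import numpy as np

def bandpower(x, fs, lo, hi):
    """Mean power of x in the [lo, hi] Hz band via the periodogram."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

def alpha_change(task, baseline, fs):
    """Alpha (8-12 Hz) power change from baseline, in percent;
    positive values correspond to the power increase discussed above."""
    p_task = bandpower(task, fs, 8, 12)
    p_base = bandpower(baseline, fs, 8, 12)
    return 100 * (p_task - p_base) / p_base

# Synthetic demo: the "task" epoch carries an added 10 Hz oscillation
rng = np.random.default_rng(4)
fs, n = 256, 1024
t = np.arange(n) / fs
baseline = rng.standard_normal(n)
task = rng.standard_normal(n) + 0.5 * np.sin(2 * np.pi * 10 * t)

ers = alpha_change(task, baseline, fs)   # clearly positive here
```

In practice this would be averaged over trials and parietal channels, and Welch-style segment averaging would replace the single periodogram.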
Affiliation(s)
- Sarah Villard
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Tyler K Perrachione
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Sung-Joo Lim
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Ayesha Alam
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Gerald Kidd
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
16
Shatzer HE, Russo FA. Brightening the Study of Listening Effort with Functional Near-Infrared Spectroscopy: A Scoping Review. Semin Hear 2023; 44:188-210. [PMID: 37122884 PMCID: PMC10147513 DOI: 10.1055/s-0043-1766105] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/09/2023] Open
Abstract
Listening effort is a long-standing area of interest in auditory cognitive neuroscience. Prior research has used multiple techniques to shed light on the neurophysiological mechanisms underlying listening during challenging conditions. Functional near-infrared spectroscopy (fNIRS) is growing in popularity as a tool for cognitive neuroscience research, and its recent advances offer many potential advantages over other neuroimaging modalities for research related to listening effort. This review introduces the basic science of fNIRS and its uses for auditory cognitive neuroscience. We also discuss its application in recently published studies on listening effort and consider future opportunities for studying effortful listening with fNIRS. After reading this article, the learner will know how fNIRS works and be able to summarize its uses for listening effort research. The learner will also be able to apply this knowledge toward the generation of future research in this area.
Affiliation(s)
- Hannah E. Shatzer
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Frank A. Russo
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
17
McGarrigle R, Mattys S. Sensory-Processing Sensitivity Predicts Fatigue From Listening, But Not Perceived Effort, in Young and Older Adults. J Speech Lang Hear Res 2023; 66:444-460. [PMID: 36657070 PMCID: PMC10023191 DOI: 10.1044/2022_jslhr-22-00374] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/24/2022] [Revised: 09/16/2022] [Accepted: 10/18/2022] [Indexed: 06/17/2023]
Abstract
PURPOSE Listening-related fatigue is a potential negative consequence of challenges experienced during everyday listening and may disproportionately affect older adults. Contrary to expectation, we recently found that increased reports of listening-related fatigue were associated with better performance on a dichotic listening task. However, this link was found only in individuals who reported heightened sensitivity to a variety of physical, social, and emotional stimuli (i.e., increased "sensory-processing sensitivity" [SPS]). This study examined whether perceived effort may underlie the link between performance and fatigue. METHOD Two hundred six young adults, aged 18-30 years (Experiment 1), and 122 older adults, aged 60-80 years (Experiment 2), performed a dichotic listening task and were administered a series of questionnaires including the NASA Task Load Index of perceived effort, the Vanderbilt Fatigue Scale (measuring daily life listening-related fatigue), and the Highly Sensitive Person Scale (measuring SPS). Both experiments were completed online. RESULTS SPS predicted listening-related fatigue, but perceived effort during the listening task was not associated with SPS or listening-related fatigue in either age group. We were also unable to replicate the interaction between dichotic listening performance and SPS in either group. Exploratory analyses revealed contrasting effects of age; older adults found the dichotic listening task more effortful but indicated lower overall fatigue. CONCLUSIONS These findings suggest that SPS is a better predictor of listening-related fatigue than performance or effort ratings on a dichotic listening task. SPS may be an important factor in determining an individual's likelihood of experiencing listening-related fatigue irrespective of hearing or cognitive ability. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21893013.
Affiliation(s)
- Ronan McGarrigle
- Department of Psychology, University of Bradford, United Kingdom
- Department of Psychology, University of York, United Kingdom
- Sven Mattys
- Department of Psychology, University of York, United Kingdom
18
Beckers L, Tromp N, Philips B, Mylanus E, Huinck W. Exploring neurocognitive factors and brain activation in adult cochlear implant recipients associated with speech perception outcomes-A scoping review. Front Neurosci 2023; 17:1046669. [PMID: 36816114 PMCID: PMC9932917 DOI: 10.3389/fnins.2023.1046669] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2022] [Accepted: 01/05/2023] [Indexed: 02/05/2023] Open
Abstract
Background Cochlear implants (CIs) are considered an effective treatment for severe-to-profound sensorineural hearing loss. However, speech perception outcomes are highly variable among adult CI recipients. Top-down neurocognitive factors have been hypothesized to contribute to this variation, which is currently only partly explained by biological and audiological factors. Studies investigating this use varying methods and observe varying outcomes, and their relevance has yet to be evaluated in a review. Gathering and structuring this evidence in this scoping review provides a clear overview of where this research line currently stands, with the aim of guiding future research. Objective To understand to which extent different neurocognitive factors influence speech perception in adult CI users with a postlingual onset of hearing loss, by systematically reviewing the literature. Methods A systematic scoping review was performed according to the PRISMA guidelines. Studies investigating the influence of one or more neurocognitive factors on speech perception post-implantation were included. Word and sentence perception in quiet and noise were included as speech perception outcome metrics, and six key neurocognitive domains, as defined by the DSM-5, were covered during the literature search (protocol in open science registries: 10.17605/OSF.IO/Z3G7W; searches in June 2020 and April 2022). Results From 5,668 retrieved articles, 54 articles were included and grouped into three categories by the measures used to relate to speech perception outcomes: (1) nineteen studies investigating brain activation, (2) thirty-one investigating performance on cognitive tests, and (3) eighteen investigating linguistic skills. Conclusion The use of cognitive functions (recruiting the frontal cortex), the use of visual cues (recruiting the occipital cortex), and a temporal cortex still available for language processing are beneficial for adult CI users.
Cognitive assessments indicate that performance on non-verbal intelligence tasks correlated positively with speech perception outcomes. Performance on auditory or visual working memory, learning, memory, and vocabulary tasks was unrelated to speech perception outcomes, and performance on the Stroop task was unrelated to word perception in quiet. However, there are still many uncertainties regarding the explanation of inconsistent results between papers, and more comprehensive studies are needed, e.g., including different assessment times or combining neuroimaging and behavioral measures. Systematic review registration https://doi.org/10.17605/OSF.IO/Z3G7W.
Affiliation(s)
- Loes Beckers
- Cochlear Ltd., Mechelen, Belgium
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- Nikki Tromp
- Cochlear Ltd., Mechelen, Belgium
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- Emmanuel Mylanus
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- Wendy Huinck
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
19
Shields C, Sladen M, Bruce IA, Kluk K, Nichani J. Exploring the Correlations Between Measures of Listening Effort in Adults and Children: A Systematic Review with Narrative Synthesis. Trends Hear 2023; 27:23312165221137116. [PMID: 36636020 PMCID: PMC9982391 DOI: 10.1177/23312165221137116] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/14/2023] Open
Abstract
Listening effort (LE) describes the cognitive resources needed to process an auditory message. Our understanding of this notion remains in its infancy, hindering our ability to appreciate how it affects individuals with hearing impairment. Despite the myriad of proposed measurement tools, a validated method remains elusive. This is complicated by the seeming lack of association between tools demonstrated via correlational analyses. This review systematically examines the literature on correlational analyses between different measures of LE. Five databases were used: PubMed, Cochrane, EMBASE, PsycINFO, and CINAHL. The quality of the evidence was assessed using the GRADE criteria and the risk of bias with the ROBINS-I/GRADE tools. Each statistically significant analysis was classified using an approved system for medical correlations. The final analyses included 48 papers, equating to 274 correlational analyses, of which 99 reached statistical significance (36.1%). Within these results, the most prevalent classifications were poor or fair. Moreover, when moderate or very strong correlations were observed, they tended to be dependent on experimental conditions. The quality of evidence was graded as very low. These results show that measures of LE are poorly correlated and support the multidimensional concept of LE. The lack of association may be explained by considering where each measure operates along the effort perception pathway. Moreover, the fragility of significant correlations to specific conditions further diminishes the hope of finding an all-encompassing tool. Therefore, it may be prudent to focus on capturing the consequences of LE rather than the notion itself.
Affiliation(s)
- Callum Shields
- ENT department, Royal Manchester Children's Hospital, Manchester, UK
- University of Manchester, Manchester, UK
- Mark Sladen
- ENT department, Royal Manchester Children's Hospital, Manchester, UK
- Jaya Nichani
- ENT department, Royal Manchester Children's Hospital, Manchester, UK
20
Dolhopiatenko H, Nogueira W. Selective attention decoding in bimodal cochlear implant users. Front Neurosci 2023; 16:1057605. [PMID: 36711138 PMCID: PMC9874229 DOI: 10.3389/fnins.2022.1057605] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2022] [Accepted: 12/20/2022] [Indexed: 01/12/2023] Open
Abstract
The growing group of cochlear implant (CI) users includes subjects with preserved acoustic hearing on the side opposite to the CI. The use of both listening sides results in improved speech perception in comparison to listening with one side alone. However, large variability in the measured benefit is observed. It is possible that this variability is associated with the integration of speech across the electric and acoustic stimulation modalities. However, there is a lack of established methods to assess speech integration between electric and acoustic stimulation and, consequently, to adequately program the devices. Moreover, existing methods do not provide information about the underlying physiological mechanisms of this integration or are based on simple stimuli that are difficult to relate to speech integration. Electroencephalography (EEG) to continuous speech is promising as an objective measure of speech perception; however, its application in CIs is challenging because it is influenced by the electrical artifact introduced by these devices. For this reason, the main goal of this work is to investigate a possible electrophysiological measure of speech integration between electric and acoustic stimulation in bimodal CI users. For this purpose, a selective attention decoding paradigm was designed and validated in bimodal CI users. The current study included behavioral and electrophysiological measures. The behavioral measure consisted of a speech understanding test, where subjects repeated words from a target speaker in the presence of a competing voice, listening with the CI side (CIS) only, with the acoustic side (AS) only, or with both listening sides (CIS+AS). Electrophysiological measures included cortical auditory evoked potentials (CAEPs) and selective attention decoding through EEG. CAEPs were recorded to broadband stimuli to confirm the feasibility of recording cortical responses with the CIS only, AS only, and CIS+AS listening modes.
In the selective attention decoding paradigm, a co-located target and a competing speech stream were presented to the subjects using the three listening modes (CIS only, AS only, and CIS+AS). The main hypothesis of the current study is that selective attention can be decoded in CI users despite the presence of the CI electrical artifact. If selective attention decoding improves when electric and acoustic stimulation are combined, relative to electric stimulation alone, the hypothesis can be confirmed. No significant difference in behavioral speech understanding was found between listening with CIS+AS and with AS only, mainly due to the ceiling effect observed with these two listening modes. The main finding of the current study is that selective attention can be decoded in CI users even in the presence of continuous electrical artifact. Moreover, an amplitude reduction of the forward temporal response function (TRF) of selective attention decoding was observed when listening with CIS+AS compared to AS only. Further studies are required to validate selective attention decoding as an electrophysiological measure of electric-acoustic speech integration.
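Selective attention decoding of the kind described above is commonly implemented as a backward model: a ridge-regression decoder is trained to reconstruct the speech envelope from time-lagged EEG, and the attended stream is taken to be the one whose envelope correlates best with the reconstruction. A simplified sketch on synthetic data (the design matrix, regularization value, and all signals here are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

def lagged(eeg, n_lags):
    """Design matrix of time-lagged copies of each EEG channel."""
    n, ch = eeg.shape
    X = np.zeros((n, ch * n_lags))
    for lag in range(n_lags):
        X[lag:, ch * lag: ch * (lag + 1)] = eeg[: n - lag]
    return X

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression weights."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Synthetic two-speaker scenario: EEG weakly carries the attended envelope
rng = np.random.default_rng(2)
n, ch, n_lags = 4000, 8, 5
attended = np.convolve(rng.random(n), np.ones(8) / 8, "same")
ignored = np.convolve(rng.random(n), np.ones(8) / 8, "same")
eeg = attended[:, None] * (2 * rng.random(ch)) + rng.standard_normal((n, ch))

X = lagged(eeg, n_lags)
w = ridge_fit(X, attended)          # train the backward decoder
recon = X @ w                       # reconstructed envelope
r_att = np.corrcoef(recon, attended)[0, 1]
r_ign = np.corrcoef(recon, ignored)[0, 1]
decoded_attended = r_att > r_ign    # attention decision
```

A real analysis would cross-validate the decoder across trials and subjects rather than evaluating on the training data as this toy example does.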
21
Zhang M, Siegle GJ. Linking Affective and Hearing Sciences-Affective Audiology. Trends Hear 2023; 27:23312165231208377. [PMID: 37904515 PMCID: PMC10619363 DOI: 10.1177/23312165231208377] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2023] [Revised: 09/22/2023] [Accepted: 10/01/2023] [Indexed: 11/01/2023] Open
Abstract
A growing number of health-related sciences, including audiology, have increasingly recognized the importance of affective phenomena. However, in audiology, affective phenomena are mostly studied as a consequence of hearing status. This review first addresses anatomical and functional bidirectional connections between auditory and affective systems that support a reciprocal affect-hearing relationship. We then postulate, by focusing on four practical examples (hearing public campaigns, hearing intervention uptake, thorough hearing evaluation, and tinnitus), that some important challenges in audiology are likely affect-related and that potential solutions could be developed by inspiration from affective science advances. We continue by introducing useful resources from affective science that could help audiology professionals learn about the wide range of affective constructs and integrate them into hearing research and clinical practice in structured and applicable ways. Six important considerations for good quality affective audiology research are summarized. We conclude that it is worthwhile and feasible to explore the explanatory power of emotions, feelings, motivations, attitudes, moods, and other affective processes in depth when trying to understand and predict how people with hearing difficulties perceive, react, and adapt to their environment.
Affiliation(s)
- Min Zhang
- Shanghai Key Laboratory of Clinical Geriatric Medicine, Huadong Hospital, Fudan University, Shanghai, China
- Greg J. Siegle
- Department of Psychiatry, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
22
Moinuddin KA, Havugimana F, Al-Fahad R, Bidelman GM, Yeasin M. Unraveling Spatial-Spectral Dynamics of Speech Categorization Speed Using Convolutional Neural Networks. Brain Sci 2022; 13:75. [PMID: 36672055 PMCID: PMC9856675 DOI: 10.3390/brainsci13010075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Revised: 12/22/2022] [Accepted: 12/24/2022] [Indexed: 12/31/2022] Open
Abstract
The process of categorizing sounds into distinct phonetic categories is known as categorical perception (CP). Response times (RTs) provide a measure of perceptual difficulty during labeling decisions (i.e., categorization). The RT is quasi-stochastic in nature due to individuality and variations in perceptual tasks. To identify the source of RT variation in CP, we built models to decode the brain regions and frequency bands driving fast, medium, and slow response decision speeds. In particular, we implemented a parameter-optimized convolutional neural network (CNN) to classify listeners' behavioral RTs from their neural EEG data. We adopted visual interpretation of model responses using Guided-GradCAM to identify spatial-spectral correlates of RT. Our framework includes (but is not limited to): (i) a data augmentation technique designed to reduce noise and control the overall variance of the EEG dataset; (ii) bandpower topomaps to learn the spatial-spectral representation using the CNN; (iii) large-scale Bayesian hyperparameter optimization to find the best-performing CNN model; (iv) ANOVA and post hoc analysis on Guided-GradCAM activation values to measure the effect of neural regions and frequency bands on behavioral responses. Using this framework, we observe that α-β (10-20 Hz) activity over left frontal, right prefrontal/frontal, and right cerebellar regions is correlated with RT variation. Our results indicate that attention, template matching, temporal prediction of acoustics, motor control, and decision uncertainty are the most probable factors in RT variation.
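Step (ii) above, learning from bandpower topomaps, starts from a per-channel, per-band log-power feature matrix that is then arranged over the sensor layout. A simplified sketch of that feature-extraction stage on synthetic data (the band edges are conventional; the channel count, function names, and signals are illustrative, not the paper's implementation):

```python
import numpy as np

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz

def band_features(epoch, fs, bands=BANDS):
    """Per-channel log bandpower: an (n_channels, n_bands) feature
    matrix that a topomap layer or CNN input stage would consume."""
    n_ch, n = epoch.shape
    freqs = np.fft.rfftfreq(n, 1 / fs)
    psd = np.abs(np.fft.rfft(epoch, axis=1)) ** 2 / n
    feats = np.empty((n_ch, len(bands)))
    for j, (lo, hi) in enumerate(bands.values()):
        sel = (freqs >= lo) & (freqs < hi)
        feats[:, j] = np.log(psd[:, sel].mean(axis=1))
    return feats

# Synthetic 32-channel epoch with an alpha oscillation on channel 5
rng = np.random.default_rng(3)
fs, n = 256, 512
epoch = rng.standard_normal((32, n))
epoch[5] += 2 * np.sin(2 * np.pi * 10 * np.arange(n) / fs)

feats = band_features(epoch, fs)   # channel 5 stands out in the alpha column
```

In the full pipeline these per-channel values would be interpolated onto a 2-D scalp grid to form the topomap images the CNN is trained on.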
Affiliation(s)
- Felix Havugimana
- Department of EECE, University of Memphis, Memphis, TN 38152, USA
- Rakib Al-Fahad
- Department of EECE, University of Memphis, Memphis, TN 38152, USA
- Gavin M. Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN 47408, USA
- Mohammed Yeasin
- Department of EECE, University of Memphis, Memphis, TN 38152, USA
23
Torppa R, Kuuluvainen S, Lipsanen J. The development of cortical processing of speech differs between children with cochlear implants and normal hearing and changes with parental singing. Front Neurosci 2022; 16:976767. [PMID: 36507354 PMCID: PMC9731313 DOI: 10.3389/fnins.2022.976767] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2022] [Accepted: 11/04/2022] [Indexed: 11/21/2022] Open
Abstract
Objective The aim of the present study was to investigate speech processing development in children with normal hearing (NH) and with cochlear implants (CIs) using a multifeature event-related potential (ERP) paradigm. Singing is associated with enhanced attention and speech perception. Therefore, its connection to ERPs was investigated in the CI group. Methods The paradigm included five change types in a pseudoword: two easy-to-detect (duration, gap) and three difficult-to-detect (vowel, pitch, intensity) changes with CIs. The positive mismatch responses (pMMR), mismatch negativity (MMN), P3a, and late differentiating negativity (LDN) responses of preschoolers (below 6 years 9 months) and schoolchildren (above 6 years 9 months) with NH or CIs at two time points (T1, T2) were investigated with linear mixed modeling (LMM). For the CI group, the association between singing at home and ERP development was modeled with LMM. Results Overall, responses elicited by the easy- and difficult-to-detect changes differed between the CI and NH groups. Compared to the NH group, the CI group had smaller MMNs to vowel duration changes and gaps, larger P3a responses to gaps, and larger pMMRs and smaller LDNs to vowel identity changes. Preschoolers had smaller P3a responses and larger LDNs to gaps, and larger pMMRs to vowel identity changes, than schoolchildren. In addition, the pMMRs to gaps increased from T1 to T2 in preschoolers. More parental singing in the CI group was associated with increasing pMMR amplitudes, and less parental singing with decreasing P3a amplitudes, from T1 to T2. Conclusion The multifeature paradigm is suitable for assessing cortical speech processing development in children. In children with CIs, cortical discrimination is often reflected in pMMR and P3a responses, and in children with NH in MMN and LDN responses.
Moreover, the cortical speech discrimination of children with CIs develops late, and over time and age, their speech sound change processing changes as does the processing of children with NH. Importantly, multisensory activities such as parental singing can lead to improvement in the discrimination and attention shifting toward speech changes in children with CIs. These novel results should be taken into account in future research and rehabilitation.
Affiliation(s)
- Ritva Torppa
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Centre of Excellence in Music, Mind, Body and Brain, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Soila Kuuluvainen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Department of Digital Humanities, Faculty of Arts, University of Helsinki, Helsinki, Finland
- Jari Lipsanen
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland

24
Xiu B, Paul BT, Chen JM, Le TN, Lin VY, Dimitrijevic A. Neural responses to naturalistic audiovisual speech are related to listening demand in cochlear implant users. Front Hum Neurosci 2022; 16:1043499. [DOI: 10.3389/fnhum.2022.1043499] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Accepted: 10/21/2022] [Indexed: 11/09/2022] Open
Abstract
There is a weak relationship between clinical and self-reported speech perception outcomes in cochlear implant (CI) listeners. Such poor correspondence may be due to differences between clinical and “real-world” listening environments and stimuli. Speech in the real world is often accompanied by visual cues and background environmental noise, and generally occurs in a conversational context, all factors that could affect listening demand. Thus, our objectives were to determine whether brain responses to naturalistic speech could index speech perception and listening demand in CI users. Accordingly, we recorded high-density electroencephalogram (EEG) while CI users listened to/watched a naturalistic stimulus (the television show “The Office”). We used continuous EEG to quantify “speech neural tracking” (i.e., temporal response functions, TRFs) to the show’s soundtrack and 8–12 Hz (alpha) brain rhythms commonly related to listening effort. Background noise at three different signal-to-noise ratios (SNRs), +5, +10, and +15 dB, was presented to vary the difficulty of following the television show, mimicking a natural noisy environment. The task also included an audio-only (no video) condition. After each condition, participants subjectively rated listening demand and the degree of words and conversations they felt they understood. Fifteen CI users reported progressively higher listening demand and fewer understood words and conversations with increasing background noise. Listening demand and conversation understanding in the audio-only condition were comparable to those in the highest noise condition (+5 dB). Increasing background noise affected speech neural tracking at a group level, in addition to eliciting strong individual differences. Mixed-effects modeling showed that listening demand and conversation understanding were correlated with early cortical speech tracking, such that high demand and low conversation understanding occurred with lower-amplitude TRFs. In the high-noise condition, greater listening demand was negatively correlated with parietal alpha power, where higher demand was related to lower alpha power. No significant correlations were observed between TRF/alpha measures and clinical speech perception scores. These results are similar to previous findings showing little relationship between clinical speech perception and quality of life in CI users. However, physiological responses to complex natural speech may provide an objective measure of aspects of quality-of-life measures such as self-perceived listening demand.
25
Wheeler HJ, Hatch DR, Moody-Antonio SA, Nie Y. Music and Speech Perception in Prelingually Deafened Young Listeners With Cochlear Implants: A Preliminary Study Using Sung Speech. J Speech Lang Hear Res 2022; 65:3951-3965. [PMID: 36179251 DOI: 10.1044/2022_jslhr-21-00271] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
PURPOSE In the context of music and speech perception, this study aimed to assess the effect of variation in one of two auditory attributes (pitch contour and timbre) on the perception of the other in prelingually deafened young cochlear implant (CI) users, and the relationship between pitch contour perception and two cognitive functions of interest. METHOD Nine prelingually deafened CI users, aged 8.75-22.17 years, completed four tasks: a melodic contour identification (MCI) task using stimuli of piano notes or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note); a speech perception task identifying matrix-styled sentences naturally intonated or sung with a fixed pitch (same pitch for each word) or a mixed pitch (different pitches for each word); a forward digit span test indexing auditory short-term memory (STM); and the matrices section of the Kaufman Brief Intelligence Test-Second Edition indexing nonverbal IQ. RESULTS MCI was significantly poorer in the mixed-timbre condition. Speech perception was significantly poorer in the fixed- and mixed-pitch conditions than in the naturally intonated condition. Auditory STM correlated positively with MCI at 2- and 3-semitone note spacings. Relative to their normal-hearing peers from a related study using the same stimuli and tasks, the CI participants showed comparable MCI at 2- or 3-semitone note spacing and a comparable level of significant decrement in speech perception across the three pitch contour conditions. CONCLUSION Findings suggest that prelingually deafened CI users show trends similar to those of normal-hearing peers for the effect of variation in pitch contour or timbre on the perception of the other, and that cognitive functions may underlie these outcomes to some extent, at least for the perception of pitch contour. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21217937.
Affiliation(s)
- Harley J Wheeler
- Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA
- Debora R Hatch
- Department of Otolaryngology, Eastern Virginia Medical School, Norfolk
- Yingjiu Nie
- Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA

26
Impact of Effortful Word Recognition on Supportive Neural Systems Measured by Alpha and Theta Power. Ear Hear 2022; 43:1549-1562. [DOI: 10.1097/aud.0000000000001211] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
27
Shahsavari Baboukani P, Graversen C, Alickovic E, Østergaard J. Speech to noise ratio improvement induces nonlinear parietal phase synchrony in hearing aid users. Front Neurosci 2022; 16:932959. [PMID: 36017182 PMCID: PMC9396236 DOI: 10.3389/fnins.2022.932959] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2022] [Accepted: 06/29/2022] [Indexed: 11/13/2022] Open
Abstract
Objectives Comprehension of speech in adverse listening conditions is challenging for hearing-impaired (HI) individuals. Noise reduction (NR) schemes in hearing aids (HAs) have demonstrated the capability to help HI individuals overcome these challenges. The objective of this study was to investigate the effect of NR processing (inactive, where the NR feature was switched off, vs. active, where the NR feature was switched on) on correlates of listening effort across two background noise levels [+3 dB signal-to-noise ratio (SNR) and +8 dB SNR] by using a phase synchrony analysis of electroencephalogram (EEG) signals. Design The EEG was recorded while 22 HI participants fitted with HAs performed a continuous speech-in-noise (SiN) task in the presence of background noise and a competing talker. The phase synchrony within eight regions of interest (ROIs) and four conventional EEG bands was computed by using a multivariate phase synchrony measure. Results The results demonstrated that the activation of NR in HAs affects the EEG phase synchrony in the parietal ROI at low SNR differently than at high SNR. The relationship between conditions of the listening task and phase synchrony in the parietal ROI was nonlinear. Conclusion We showed that the activation of NR schemes in HAs can nonlinearly reduce correlates of listening effort as estimated by EEG-based phase synchrony. We contend that investigation of the phase synchrony within ROIs can reflect the effects of HAs in HI individuals in ecological listening conditions.
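The study above uses a multivariate phase synchrony measure across ROIs; as a much simpler illustration of the underlying idea, the bivariate phase-locking value (PLV) can be computed from band-pass filtered signals via the Hilbert transform. This is a minimal sketch on synthetic signals, not the authors' measure or data; the band, filter order, and signals are all assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band):
    """Phase-locking value between two signals within a frequency band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x)))          # instantaneous phase of x
    phy = np.angle(hilbert(filtfilt(b, a, y)))          # instantaneous phase of y
    return np.abs(np.mean(np.exp(1j * (phx - phy))))    # 1 = perfect locking, ~0 = none

fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)        # 10 Hz (alpha)
y = np.sin(2 * np.pi * 10 * t + 0.5) + 0.1 * rng.standard_normal(t.size)  # fixed phase lag
z = rng.standard_normal(t.size)                                           # unrelated noise

print(round(plv(x, y, fs, (8, 12)), 2))  # consistent phase relation: near 1
print(plv(x, z, fs, (8, 12)))            # drifting phase difference: much lower
```

A constant phase lag (as between x and y) still yields a PLV near 1; synchrony measures of this family quantify the *consistency* of the phase relation, not zero-lag alignment.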
Affiliation(s)
- Payam Shahsavari Baboukani
- Department of Electronic Systems, Aalborg University, Aalborg, Denmark
- *Correspondence: Payam Shahsavari Baboukani
- Carina Graversen
- Integrative Neuroscience, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Department of Health Science and Technology, Center for Neuroplasticity and Pain (CNAP), Aalborg University, Aalborg, Denmark
- Emina Alickovic
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Jan Østergaard
- Department of Electronic Systems, Aalborg University, Aalborg, Denmark

28
Zinszer BD, Yuan Q, Zhang Z, Chandrasekaran B, Guo T. Continuous speech tracking in bilinguals reflects adaptation to both language and noise. BRAIN AND LANGUAGE 2022; 230:105128. [PMID: 35537247 DOI: 10.1016/j.bandl.2022.105128] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Revised: 04/13/2022] [Accepted: 04/21/2022] [Indexed: 06/14/2023]
Abstract
Listeners regularly comprehend continuous speech despite noisy conditions. Previous studies show that neural tracking of speech degrades under noise, predicts comprehension, and increases for non-native listeners. We test the hypothesis that listeners similarly increase tracking for both L2 and noisy L1 speech, after adjusting for comprehension. Twenty-four Chinese-English bilinguals underwent EEG while listening to one hour of an audiobook, mixed with three levels of noise, in Mandarin and English and answered comprehension questions. We estimated tracking of the speech envelope in EEG for each one-minute segment using the multivariate temporal response function (mTRF). Contrary to our prediction, L2 tracking was significantly lower than L1, while L1 tracking significantly increased with noise maskers without reducing comprehension. However, greater L2 proficiency was positively associated with greater L2 tracking. We discuss how studies of speech envelope tracking using noise and bilingualism might be reconciled through a focus on exerted rather than demanded effort.
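The mTRF referenced above is, at its core, a regularized linear filter mapping the speech envelope to the EEG at a range of time lags. The toy sketch below fits such a forward model with ridge regression on fully simulated data (the sampling rate, lag range, kernel shape, and regularization constant are illustrative assumptions, not the study's parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 64, 64 * 60                       # one minute of data at 64 Hz
env = np.abs(rng.standard_normal(n))      # stand-in speech envelope
lags = np.arange(0, 16)                   # 0-250 ms of lags at 64 Hz
true_trf = np.exp(-lags / 4.0)            # simulated brain response kernel
eeg = np.convolve(env, true_trf)[:n] + 0.5 * rng.standard_normal(n)  # "EEG" = env * kernel + noise

# Lagged design matrix: column L holds the envelope delayed by L samples
X = np.column_stack([np.roll(env, L) for L in lags])
X[:lags.max(), :] = 0                     # discard samples that wrapped around

# Ridge regression estimate of the TRF: (X'X + lam*I)^-1 X'y
lam = 1.0
trf = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)

# "Neural tracking" score: correlation between predicted and recorded EEG
pred = X @ trf
r = np.corrcoef(pred, eeg)[0, 1]
print(round(r, 2))
```

In practice, toolboxes fit this per EEG channel, choose the regularization by cross-validation, and evaluate the correlation on held-out data; the backward (decoding) direction reconstructs the envelope from many channels instead.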
Affiliation(s)
- Qiming Yuan
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, China
- Zhaoqi Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, China
- Taomei Guo
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, China

29
Hunter CR. Listening Over Time: Single-Trial Tonic and Phasic Oscillatory Alpha-and Theta-Band Indicators of Listening-Related Fatigue. Front Neurosci 2022; 16:915349. [PMID: 35720726 PMCID: PMC9198355 DOI: 10.3389/fnins.2022.915349] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 05/10/2022] [Indexed: 11/13/2022] Open
Abstract
Objectives Listening effort engages cognitive resources to support speech understanding in adverse listening conditions, and leads to fatigue over the longer term for people with hearing loss. Direct, neural measures of listening-related fatigue have not been developed. Here, event-related or phasic changes in alpha and theta oscillatory power during listening were used as measures of listening effort, and longer-term or tonic changes over the course of the listening task were assessed as measures of listening-related fatigue. In addition, influences of self-reported fatigue and degree of hearing loss on tonic changes in oscillatory power were examined. Design Participants were middle-aged adults (age 37–65 years; n = 12) with age-appropriate hearing. Sentences were presented in a background of multi-talker babble at a range of signal-to-noise ratios (SNRs) varying around the 80 percent threshold of individual listeners. Single-trial oscillatory power during both sentence and baseline intervals was analyzed with linear mixed-effect models that included as predictors trial number, SNR, subjective fatigue, and hearing loss. Results Alpha and theta power in both sentence presentation and baseline intervals increased as a function of trial, indicating listening-related fatigue. Further, tonic power increases across trials were affected by hearing loss and/or subjective fatigue, particularly in the alpha-band. Phasic changes in alpha and theta power generally tracked with SNR, with decreased alpha power and increased theta power at less favorable SNRs. However, for the alpha-band, the linear effect of SNR emerged only at later trials. Conclusion Tonic increases in oscillatory power in alpha- and theta-bands over the course of a listening task may be biomarkers for the development of listening-related fatigue. 
In addition, alpha-band power as an index of listening-related fatigue may be sensitive to individual differences attributable to level of hearing loss and the subjective experience of listening-related fatigue. Finally, phasic effects of SNR on alpha power emerged only after a period of listening, suggesting that this measure of listening effort could depend on the development of listening-related fatigue.
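The tonic/phasic distinction above can be illustrated with a toy computation: estimate per-trial band power, then test for a linear trend across trials as a crude stand-in for the study's single-trial mixed-effects approach. The data here are fully synthetic (an alpha rhythm whose amplitude grows across trials), and the rates and trial counts are arbitrary assumptions:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)
fs, n_trials, n_samp = 250, 60, 750   # 60 trials of 3 s at 250 Hz
# Synthetic EEG: 10 Hz (alpha) amplitude grows slowly across trials ("tonic" drift)
trials = np.array([
    (1 + 0.02 * k) * np.sin(2 * np.pi * 10 * np.arange(n_samp) / fs)
    + rng.standard_normal(n_samp)
    for k in range(n_trials)
])

def band_power(x, fs, lo, hi):
    """Mean Welch power spectral density within [lo, hi] Hz."""
    f, pxx = welch(x, fs=fs, nperseg=256)
    return pxx[(f >= lo) & (f <= hi)].mean()

alpha = np.array([band_power(tr, fs, 8, 12) for tr in trials])
slope = np.polyfit(np.arange(n_trials), alpha, 1)[0]  # tonic trend across trials
print(slope > 0)
```

A positive slope across trials would correspond to the tonic power increase interpreted as listening-related fatigue; phasic effects would instead be modeled within trials, e.g., sentence interval relative to baseline.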
Affiliation(s)
- Cynthia R Hunter
- Speech Perception, Cognition, and Hearing Laboratory, Department of Speech-Language-Hearing: Sciences and Disorders, The University of Kansas, Lawrence, KS, United States

30
Hauswald A, Keitel A, Chen Y, Rösch S, Weisz N. Degradation levels of continuous speech affect neural speech tracking and alpha power differently. Eur J Neurosci 2022; 55:3288-3302. [PMID: 32687616 PMCID: PMC9540197 DOI: 10.1111/ejn.14912] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2020] [Revised: 07/12/2020] [Accepted: 07/13/2020] [Indexed: 11/26/2022]
Abstract
Making sense of a poor auditory signal can pose a challenge. Previous attempts to quantify speech intelligibility in neural terms have usually focused on one of two measures, namely low-frequency speech-brain synchronization or alpha power modulations. However, reports have been mixed concerning the modulation of these measures, an issue aggravated by the fact that they have normally been studied separately. We present two MEG studies analyzing both measures. In study 1, participants listened to unimodal auditory speech with three different levels of degradation (original, 7-channel and 3-channel vocoding). Intelligibility declined with declining clarity, but speech was still intelligible to some extent even for the lowest clarity level (3-channel vocoding). Low-frequency (1-7 Hz) speech tracking suggested a U-shaped relationship with strongest effects for the medium-degraded speech (7-channel) in bilateral auditory and left frontal regions. To follow up on this finding, we implemented three additional vocoding levels (5-channel, 2-channel and 1-channel) in a second MEG study. Using this wider range of degradation, the speech-brain synchronization showed a similar pattern as in study 1, but further showed that when speech becomes unintelligible, synchronization declines again. The relationship differed for alpha power, which continued to decrease across vocoding levels reaching a floor effect for 5-channel vocoding. Predicting subjective intelligibility based on models either combining both measures or each measure alone showed superiority of the combined model. Our findings underline that speech tracking and alpha power are modified differently by the degree of degradation of continuous speech but together contribute to the subjective speech understanding.
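The channel vocoding used above degrades speech by splitting it into frequency bands, extracting each band's envelope, and re-imposing that envelope on band-limited noise; fewer channels mean coarser spectral detail. A minimal noise-vocoder sketch follows (the band edges, filter order, and frequency range are illustrative assumptions, not the study's settings):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels, f_lo=100.0, f_hi=7000.0):
    """Replace each band's temporal fine structure with envelope-modulated noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(x))
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))          # band envelope
        carrier = sosfiltfilt(sos, noise)    # band-limited noise carrier
        out += env * carrier
    return out

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)           # stand-in for a speech signal
voc3 = noise_vocode(tone, fs, 3)             # heavy degradation (3 channels)
voc7 = noise_vocode(tone, fs, 7)             # milder degradation (7 channels)
print(voc3.shape == tone.shape, voc7.shape == tone.shape)
```

With one channel, only the broadband envelope survives; intelligibility typically rises with channel count, which is what lets the studies above sweep degradation from unintelligible (1-channel) to near-clear speech.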
Affiliation(s)
- Anne Hauswald
- Center of Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria
- Anne Keitel
- Psychology, School of Social Sciences, University of Dundee, Dundee, UK
- Centre for Cognitive Neuroimaging, University of Glasgow, Glasgow, UK
- Ya‐Ping Chen
- Center of Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria
- Sebastian Rösch
- Department of Otorhinolaryngology, Paracelsus Medical University, Salzburg, Austria
- Nathan Weisz
- Center of Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria

31
Gray R, Sarampalis A, Başkent D, Harding EE. Working-Memory, Alpha-Theta Oscillations and Musical Training in Older Age: Research Perspectives for Speech-on-speech Perception. Front Aging Neurosci 2022; 14:806439. [PMID: 35645774 PMCID: PMC9131017 DOI: 10.3389/fnagi.2022.806439] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Accepted: 03/24/2022] [Indexed: 12/18/2022] Open
Abstract
During the normal course of aging, perception of speech-on-speech or “cocktail party” speech and use of working memory (WM) abilities change. Musical training, which is a complex activity that integrates multiple sensory modalities and higher-order cognitive functions, reportedly benefits both WM performance and speech-on-speech perception in older adults. This mini-review explores the relationship between musical training, WM and speech-on-speech perception in older age (> 65 years) through the lens of the Ease of Language Understanding (ELU) model. Linking neural-oscillation literature associating speech-on-speech perception and WM with alpha-theta oscillatory activity, we propose that two stages of speech-on-speech processing in the ELU are underpinned by WM-related alpha-theta oscillatory activity, and that effects of musical training on speech-on-speech perception may be reflected in these frequency bands among older adults.
Affiliation(s)
- Ryan Gray
- Department of Experimental Psychology, University of Groningen, Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Department of Psychology, Centre for Applied Behavioural Sciences, School of Social Sciences, Heriot-Watt University, Edinburgh, United Kingdom
- Anastasios Sarampalis
- Department of Experimental Psychology, University of Groningen, Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Deniz Başkent
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Department of Otorhinolaryngology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Eleanor E. Harding
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Department of Otorhinolaryngology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- *Correspondence: Eleanor E. Harding

32
Gillis M, Decruy L, Vanthornhout J, Francart T. Hearing loss is associated with delayed neural responses to continuous speech. Eur J Neurosci 2022; 55:1671-1690. [PMID: 35263814 DOI: 10.1111/ejn.15644] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Revised: 02/21/2022] [Accepted: 02/23/2022] [Indexed: 11/28/2022]
Abstract
We investigated the impact of hearing loss on the neural processing of speech. Using a forward modeling approach, we compared the neural responses to continuous speech of 14 adults with sensorineural hearing loss with those of age-matched normal-hearing peers. Compared to their normal-hearing peers, hearing-impaired listeners had increased neural tracking and delayed neural responses to continuous speech in quiet. The latency also increased with the degree of hearing loss. As speech understanding decreased, neural tracking decreased in both populations; however, a significantly different trend was observed for the latency of the neural responses. For normal-hearing listeners, the latency increased with increasing background noise level. However, for hearing-impaired listeners, this increase was not observed. Our results support the idea that the neural response latency indicates the efficiency of neural speech processing: more or different brain regions are involved in processing speech, which causes longer communication pathways in the brain. These longer communication pathways hamper the information integration among these brain regions, reflected in longer processing times. Altogether, this suggests decreased neural speech processing efficiency in hearing-impaired listeners, as more time and more or different brain regions are required to process speech. Our results suggest that this reduction in neural speech processing efficiency occurs gradually as hearing deteriorates. From our results, it is apparent that sound amplification does not solve hearing loss. Even when listening to speech in silence at a comfortable loudness, hearing-impaired listeners process speech less efficiently.
Affiliation(s)
- Marlies Gillis
- KU Leuven, Department of Neurosciences, ExpORL, Leuven, Belgium
- Lien Decruy
- Institute for Systems Research, University of Maryland, College Park, MD, USA
- Tom Francart
- KU Leuven, Department of Neurosciences, ExpORL, Leuven, Belgium

33
Corcoran AW, Perera R, Koroma M, Kouider S, Hohwy J, Andrillon T. Expectations boost the reconstruction of auditory features from electrophysiological responses to noisy speech. Cereb Cortex 2022; 33:691-708. [PMID: 35253871 PMCID: PMC9890472 DOI: 10.1093/cercor/bhac094] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2021] [Revised: 02/11/2022] [Accepted: 02/12/2022] [Indexed: 02/04/2023] Open
Abstract
Online speech processing imposes significant computational demands on the listening brain, the underlying mechanisms of which remain poorly understood. Here, we exploit the perceptual "pop-out" phenomenon (i.e. the dramatic improvement of speech intelligibility after receiving information about speech content) to investigate the neurophysiological effects of prior expectations on degraded speech comprehension. We recorded electroencephalography (EEG) and pupillometry from 21 adults while they rated the clarity of noise-vocoded and sine-wave synthesized sentences. Pop-out was reliably elicited following visual presentation of the corresponding written sentence, but not following incongruent or neutral text. Pop-out was associated with improved reconstruction of the acoustic stimulus envelope from low-frequency EEG activity, implying that improvements in perceptual clarity were mediated via top-down signals that enhanced the quality of cortical speech representations. Spectral analysis further revealed that pop-out was accompanied by a reduction in theta-band power, consistent with predictive coding accounts of acoustic filling-in and incremental sentence processing. Moreover, delta-band power, alpha-band power, and pupil diameter were all increased following the provision of any written sentence information, irrespective of content. Together, these findings reveal distinctive profiles of neurophysiological activity that differentiate the content-specific processes associated with degraded speech comprehension from the context-specific processes invoked under adverse listening conditions.
Affiliation(s)
- Andrew W Corcoran
- Corresponding author: Room E672, 20 Chancellors Walk, Clayton, VIC 3800, Australia.
- Ricardo Perera
- Cognition & Philosophy Laboratory, School of Philosophical, Historical, and International Studies, Monash University, Melbourne, VIC 3800, Australia
- Matthieu Koroma
- Brain and Consciousness Group (ENS, EHESS, CNRS), Département d’Études Cognitives, École Normale Supérieure-PSL Research University, Paris 75005, France
- Sid Kouider
- Brain and Consciousness Group (ENS, EHESS, CNRS), Département d’Études Cognitives, École Normale Supérieure-PSL Research University, Paris 75005, France
- Jakob Hohwy
- Cognition & Philosophy Laboratory, School of Philosophical, Historical, and International Studies, Monash University, Melbourne, VIC 3800, Australia; Monash Centre for Consciousness & Contemplative Studies, Monash University, Melbourne, VIC 3800, Australia
- Thomas Andrillon
- Monash Centre for Consciousness & Contemplative Studies, Monash University, Melbourne, VIC 3800, Australia; Paris Brain Institute, Sorbonne Université, Inserm-CNRS, Paris 75013, France

34
Nogueira W, Dolhopiatenko H. Predicting speech intelligibility from a selective attention decoding paradigm in cochlear implant users. J Neural Eng 2022; 19. [PMID: 35234663 DOI: 10.1088/1741-2552/ac599f] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2021] [Accepted: 03/01/2022] [Indexed: 11/12/2022]
Abstract
OBJECTIVES Electroencephalography (EEG) can be used to decode selective attention in cochlear implant (CI) users. This work investigates whether selective attention to an attended speech source in the presence of a concurrent speech source can predict speech understanding in CI users. APPROACH CI users were instructed to attend to one of two speech streams while EEG was recorded. Both speech streams were presented to the same ear at different signal-to-interference ratios (SIRs). Speech envelope reconstruction of the to-be-attended speech from EEG was obtained by training decoders using regularized least squares. The correlation coefficient between the reconstructed speech and the attended (ρ_A(SIR)) or the unattended (ρ_U(SIR)) speech stream at each SIR was computed. Additionally, we computed the difference correlation coefficient at the same SIR (ρ_Diff = ρ_A(SIR) - ρ_U(SIR)) and at the opposite SIR (ρ_DiffOpp = ρ_A(SIR) - ρ_U(-SIR)). ρ_Diff compares the attended and unattended correlation coefficients for speech sources presented at different presentation levels, depending on SIR. In contrast, ρ_DiffOpp compares the attended and unattended correlation coefficients for speech sources presented at the same presentation level, irrespective of SIR. MAIN RESULTS Selective attention decoding in CI users is possible even if both speech streams are presented monaurally. A significant effect of SIR on ρ_A(SIR), ρ_Diff, and ρ_DiffOpp, but not on ρ_U(SIR), was observed. Finally, the results show a significant correlation between speech understanding performance and ρ_A(SIR), as well as with ρ_U(SIR), across subjects. Moreover, ρ_DiffOpp, which is less affected by the CI artifact, also demonstrated a significant correlation with speech understanding. SIGNIFICANCE Selective attention decoding in CI users is possible; however, care needs to be taken with the CI artifact and the speech material used to train the decoders. These results are important for the future development of objective speech understanding measures for CI users.
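The decoding logic described above (a backward decoder fit by regularized least squares, then attended vs. unattended correlations) can be sketched on synthetic data. Everything below is a toy stand-in: the channel count, mixing weights, noise level, and regularization constant are assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n = 64, 64 * 120                     # two minutes at 64 Hz
att = np.abs(rng.standard_normal(n))     # attended speech envelope
unatt = np.abs(rng.standard_normal(n))   # unattended speech envelope

# Toy 8-channel "EEG": driven more strongly by the attended stream
w = rng.standard_normal(8)
eeg = np.outer(att, w) + 0.4 * np.outer(unatt, w) + rng.standard_normal((n, 8))

# Backward decoder via regularized least squares, trained on the first half
half = n // 2
Xtr, Xte = eeg[:half], eeg[half:]
lam = 1e2
dec = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(8), Xtr.T @ att[:half])

# Correlate the held-out reconstruction with each stream (rho_A vs. rho_U)
recon = Xte @ dec
rho_a = np.corrcoef(recon, att[half:])[0, 1]
rho_u = np.corrcoef(recon, unatt[half:])[0, 1]
rho_diff = rho_a - rho_u                 # positive when the attended stream wins
print(rho_a > rho_u)
```

Real pipelines additionally use time-lagged EEG features, cross-validated regularization, and artifact handling (especially important with CI stimulation artifacts, as the abstract notes), but the attended-minus-unattended correlation logic is the same.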
Affiliation(s)
- Waldo Nogueira
- Department of Otolaryngology and Cluster of Excellence "Hearing4all", Hannover Medical School, Karl-Wiechert Allee 3, Hannover, Niedersachsen, 30625, Germany
- Hanna Dolhopiatenko
- Department of Otolaryngology and Cluster of Excellence "Hearing4all", Hannover Medical School, Karl-Wiechert Allee 3, Hannover, Niedersachsen, 30625, Germany

35
Cieśla K, Wolak T, Lorens A, Mentzel M, Skarżyński H, Amedi A. Effects of training and using an audio-tactile sensory substitution device on speech-in-noise understanding. Sci Rep 2022; 12:3206. [PMID: 35217676 PMCID: PMC8881456 DOI: 10.1038/s41598-022-06855-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Accepted: 01/28/2022] [Indexed: 11/09/2022] Open
Abstract
Understanding speech in background noise is challenging. Wearing face masks, as imposed during the COVID-19 pandemic, makes it even harder. We developed a multi-sensory setup, including a sensory substitution device (SSD) that can deliver speech simultaneously through audition and as vibrations on the fingertips. The vibrations correspond to low frequencies extracted from the speech input. We trained two groups of non-native English speakers to understand distorted speech in noise. After a short session (30-45 min) of repeating sentences, with or without concurrent matching vibrations, we showed a comparable mean group improvement of 14-16 dB in Speech Reception Threshold (SRT) in two test conditions, i.e., when participants were asked to repeat sentences from hearing alone and when matching vibrations on the fingertips were present. This is a very strong effect, considering that a 10 dB difference corresponds to a doubling of perceived loudness. The number of sentence repetitions needed to complete both types of training was comparable. Meanwhile, the mean group SNR for the audio-tactile training (14.7 ± 8.7) was significantly lower (harder) than for the auditory training (23.9 ± 11.8), indicating a potential facilitating effect of the added vibrations. In addition, both before and after training, most of the participants (70-80%) showed better performance (by 4-6 dB on average) in speech-in-noise understanding when the audio sentences were accompanied by matching vibrations. This is the same magnitude of multisensory benefit that we reported, with no training at all, in our previous study using the same experimental procedures. After training, performance in this test condition was also best in both groups (SRT ~ 2 dB). The least significant effect of both training types was found in the third test condition, i.e., when participants repeated sentences accompanied by non-matching tactile vibrations; performance in this condition was also poorest after training. The results indicate that both types of training may remove some level of difficulty in sound perception, which might enable more proper use of speech inputs delivered via vibrotactile stimulation. We discuss the implications of these novel findings with respect to basic science. In particular, we show that even in adulthood, i.e., long after the classical "critical periods" of development have passed, a new pairing between a certain computation (here, speech processing) and an atypical sensory modality (here, touch) can be established and trained, and that this process can be rapid and intuitive. We further present possible applications of our training program and the SSD for auditory rehabilitation in patients with hearing (and sight) deficits, as well as for healthy individuals in suboptimal acoustic situations.
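As a rough illustration of the loudness rule of thumb invoked in this abstract (a 10 dB level change corresponding to roughly a doubling of perceived loudness), the following sketch is not from the cited study; the function name and the 2^(ΔdB/10) approximation are illustrative assumptions:

```python
def loudness_ratio(delta_db: float) -> float:
    """Approximate perceived-loudness ratio for a level change of
    `delta_db` decibels, using the rule of thumb that every +10 dB
    roughly doubles perceived loudness."""
    return 2.0 ** (delta_db / 10.0)

# On this scale, the reported 14-16 dB SRT improvement corresponds to
# roughly a 2.6x-3x perceptual change.
```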
Affiliation(s)
- K Cieśla
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel; World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- T Wolak
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- A Lorens
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- M Mentzel
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel
- H Skarżyński
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- A Amedi
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel

36
Lin Y, Tsao Y, Hsieh PJ. Neural correlates of individual differences in predicting ambiguous sounds comprehension level. Neuroimage 2022; 251:119012. [DOI: 10.1016/j.neuroimage.2022.119012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Revised: 01/28/2022] [Accepted: 02/16/2022] [Indexed: 11/16/2022] Open
37
Perea Pérez F, Hartley DEH, Kitterick PT, Wiggins IM. Perceived Listening Difficulties of Adult Cochlear-Implant Users Under Measures Introduced to Combat the Spread of COVID-19. Trends Hear 2022; 26:23312165221087011. [PMID: 35440245 PMCID: PMC9024163 DOI: 10.1177/23312165221087011] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Abstract
Following the outbreak of the COVID-19 pandemic, public-health measures introduced to stem the spread of the disease caused profound changes to patterns of daily-life communication. This paper presents the results of an online survey conducted to document adult cochlear-implant (CI) users’ perceived listening difficulties under four communication scenarios commonly experienced during the pandemic, specifically when talking: with someone wearing a facemask, under social/physical distancing guidelines, via telephone, and via video call. Results from ninety-four respondents indicated that people considered their in-person listening experiences in some common everyday scenarios to have been significantly worsened by the introduction of mask-wearing and physical distancing. Participants reported experiencing an array of listening difficulties, including reduced speech intelligibility and increased listening effort, which resulted in many people actively avoiding certain communication scenarios at least some of the time. Participants also found listening effortful during remote communication, which became rapidly more prevalent following the outbreak of the pandemic. Potential solutions identified by participants to ease the burden of everyday listening with a CI may have applicability beyond the context of the COVID-19 pandemic. Specifically, the results emphasized the importance of visual cues, including lipreading and live speech-to-text transcriptions, to improve in-person and remote communication for people with a CI.
Affiliation(s)
- Francisca Perea Pérez
- National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, UK; Hearing Sciences, Division of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK
- Douglas E H Hartley
- National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, UK; Hearing Sciences, Division of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK; Nottingham University Hospitals NHS Trust, Nottingham, UK
- Pádraig T Kitterick
- Hearing Sciences, Division of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK; National Acoustic Laboratories, Sydney, Australia
- Ian M Wiggins
- National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, UK; Hearing Sciences, Division of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK

38
Shields C, Willis H, Nichani J, Sladen M, Kluk-de Kort K. Listening effort: WHAT is it, HOW is it measured and WHY is it important? Cochlear Implants Int 2021; 23:114-117. [PMID: 34844525 DOI: 10.1080/14670100.2021.1992941] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- C Shields
- Department of Paediatric Otolaryngology, Royal Manchester Children's Hospital, Manchester Foundation Trust, Manchester, UK; Division of Human Communication, Development & Hearing, University of Manchester, Manchester, UK
- H Willis
- Independent Stress Management Consultant and CI User, Reading, UK
- J Nichani
- Department of Paediatric Otolaryngology, Royal Manchester Children's Hospital, Manchester Foundation Trust, Manchester, UK
- M Sladen
- Department of Paediatric Otolaryngology, Royal Manchester Children's Hospital, Manchester Foundation Trust, Manchester, UK
- K Kluk-de Kort
- Division of Human Communication, Development & Hearing, University of Manchester, Manchester, UK

39
Hearing Aid Noise Reduction Lowers the Sustained Listening Effort During Continuous Speech in Noise: A Combined Pupillometry and EEG Study. Ear Hear 2021; 42:1590-1601. [PMID: 33950865 DOI: 10.1097/aud.0000000000001050] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The investigation of auditory cognitive processes recently moved from strictly controlled, trial-based paradigms toward the presentation of continuous speech. This also allows the investigation of listening effort on larger time scales (i.e., sustained listening effort). Here, we investigated the modulation of sustained listening effort by a noise reduction algorithm as applied in hearing aids in a listening scenario with noisy continuous speech. The investigated directional noise reduction algorithm mainly suppresses noise from the background. DESIGN We recorded the pupil size and the EEG in 22 participants with hearing loss who listened to audio news clips in the presence of background multi-talker babble noise. We estimated how noise reduction (off, on) and signal-to-noise ratio (SNR; +3 dB, +8 dB) affect pupil size and the power in the parietal EEG alpha band (i.e., parietal alpha power), as well as behavioral performance. RESULTS Our results show that noise reduction reduces pupil size, while there was no significant main effect of SNR. Importantly, we found an interaction of SNR and noise reduction, suggesting that noise reduction reduces pupil size predominantly at the lower SNR. Parietal alpha power showed a similar yet nonsignificant pattern, with increased power under easier conditions. In line with the participants' reports that one of the two presented talkers was more intelligible, we found a reduced pupil size, increased parietal alpha power, and better performance when people listened to the more intelligible talker. CONCLUSIONS We show that the modulation of sustained listening effort (e.g., by hearing aid noise reduction), as indicated by pupil size and parietal alpha power, can be studied under more ecologically valid conditions. Based mainly on pupil size, we demonstrate that hearing aid noise reduction lowers sustained listening effort. Our study approximates real-world listening scenarios and evaluates the benefit of the signal processing found in modern hearing aids.
40
Defenderfer J, Forbes S, Wijeakumar S, Hedrick M, Plyler P, Buss AT. Frontotemporal activation differs between perception of simulated cochlear implant speech and speech in background noise: An image-based fNIRS study. Neuroimage 2021; 240:118385. [PMID: 34256138 PMCID: PMC8503862 DOI: 10.1016/j.neuroimage.2021.118385] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Revised: 06/10/2021] [Accepted: 07/09/2021] [Indexed: 10/27/2022] Open
Abstract
In this study, we used functional near-infrared spectroscopy (fNIRS) to investigate neural responses in normal-hearing adults as a function of speech recognition accuracy, intelligibility of the speech stimulus, and the manner in which speech is distorted. Participants listened to sentences and reported aloud what they heard. Speech quality was distorted artificially by vocoding (simulated cochlear implant speech) or naturally by adding background noise. Each type of distortion included high- and low-intelligibility conditions. Sentences in quiet were used as a baseline comparison. fNIRS data were analyzed using a newly developed image reconstruction approach. First, elevated cortical responses in the middle temporal gyrus (MTG) and middle frontal gyrus (MFG) were associated with speech recognition during the low-intelligibility conditions. Second, activation in the MTG was associated with recognition of vocoded speech with low intelligibility, whereas MFG activity was largely driven by recognition of speech in background noise, suggesting that the cortical response varies as a function of distortion type. Lastly, an accuracy effect in the MFG demonstrated significantly higher activation during correct relative to incorrect perception of speech. These results suggest that normal-hearing adults (i.e., untrained listeners of vocoded stimuli) do not exploit the same attentional mechanisms of the frontal cortex used to resolve naturally degraded speech and may instead rely on segmental and phonetic analyses in the temporal lobe to discriminate vocoded speech.
Affiliation(s)
- Jessica Defenderfer
- Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States
- Samuel Forbes
- Psychology, University of East Anglia, Norwich, England
- Mark Hedrick
- Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States
- Patrick Plyler
- Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States
- Aaron T Buss
- Psychology, University of Tennessee, Knoxville, TN, United States

41
Reduced Semantic Context and Signal-to-Noise Ratio Increase Listening Effort As Measured Using Functional Near-Infrared Spectroscopy. Ear Hear 2021; 43:836-848. [PMID: 34623112 DOI: 10.1097/aud.0000000000001137] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
OBJECTIVES Understanding speech-in-noise can be highly effortful. Decreasing the signal-to-noise ratio (SNR) of speech increases listening effort, but it is relatively unclear if decreasing the level of semantic context does as well. The current study used functional near-infrared spectroscopy to evaluate two primary hypotheses: (1) listening effort (operationalized as oxygenation of the left lateral PFC) increases as the SNR decreases and (2) listening effort increases as context decreases. DESIGN Twenty-eight younger adults with normal hearing completed the Revised Speech Perception in Noise Test, in which they listened to sentences and reported the final word. These sentences either had an easy SNR (+4 dB) or a hard SNR (-2 dB), and were either low in semantic context (e.g., "Tom could have thought about the sport") or high in context (e.g., "She had to vacuum the rug"). PFC oxygenation was measured throughout using functional near-infrared spectroscopy. RESULTS Accuracy on the Revised Speech Perception in Noise Test was worse when the SNR was hard than when it was easy, and worse for sentences low in semantic context than high in context. Similarly, oxygenation across the entire PFC (including the left lateral PFC) was greater when the SNR was hard, and left lateral PFC oxygenation was greater when context was low. CONCLUSIONS These results suggest that activation of the left lateral PFC (interpreted here as reflecting listening effort) increases to compensate for acoustic and linguistic challenges. This may reflect the increased engagement of domain-general and domain-specific processes subserved by the dorsolateral prefrontal cortex (e.g., cognitive control) and inferior frontal gyrus (e.g., predicting the sensory consequences of articulatory gestures), respectively.
42
Devaraju DS, Kemp A, Eddins DA, Shrivastav R, Chandrasekaran B, Hampton Wray A. Effects of Task Demands on Neural Correlates of Acoustic and Semantic Processing in Challenging Listening Conditions. J Speech Lang Hear Res 2021; 64:3697-3706. [PMID: 34403278 DOI: 10.1044/2021_jslhr-21-00006] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Purpose Listeners shift their listening strategies between lower-level acoustic information and higher-level semantic information to maximize speech intelligibility in challenging listening conditions. Although increasing task demands via acoustic degradation modulates lexical-semantic processing, the neural mechanisms underlying different listening strategies are unclear. The current study examined the extent to which encoding of lower-level acoustic cues is modulated by task demand and associated with lexical-semantic processes. Method Electroencephalography was acquired while participants listened to sentences in the presence of four-talker babble that contained either higher- or lower-probability final words. Task difficulty was modulated by the time available to process responses. Cortical tracking of speech, a neural correlate of acoustic temporal envelope processing, was estimated using temporal response functions. Results Task difficulty did not affect cortical tracking of the temporal envelope of speech under challenging listening conditions. Neural indices of lexical-semantic processing (N400 amplitudes) were larger with increased task difficulty. No correlations were observed between cortical tracking of the temporal envelope of speech and lexical-semantic processes, even after controlling for the effect of individualized signal-to-noise ratios. Conclusions Cortical tracking of the temporal envelope of speech and semantic processing are differentially influenced by task difficulty. While increased task demands modulated higher-level semantic processing, cortical tracking of the temporal envelope of speech may be influenced by task difficulty primarily when demand is manipulated in terms of the acoustic properties of the stimulus, consistent with an emerging perspective in speech perception.
Affiliation(s)
- Dhatri S Devaraju
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- Amy Kemp
- Department of Communication Sciences and Special Education, University of Georgia, Athens
- David A Eddins
- Department of Communication Sciences & Disorders, University of South Florida, Tampa
- Amanda Hampton Wray
- Department of Communication Science and Disorders, University of Pittsburgh, PA

43
Souffi S, Nodal FR, Bajo VM, Edeline JM. When and How Does the Auditory Cortex Influence Subcortical Auditory Structures? New Insights About the Roles of Descending Cortical Projections. Front Neurosci 2021; 15:690223. [PMID: 34413722 PMCID: PMC8369261 DOI: 10.3389/fnins.2021.690223] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Accepted: 06/14/2021] [Indexed: 12/28/2022] Open
Abstract
For decades, the corticofugal descending projections have been anatomically well described, but their functional role remains a puzzling question. In this review, we first describe the contributions of neuronal networks from the cochlear nucleus to the primary and secondary auditory cortex in representing communication sounds under various types of degraded acoustic conditions. In such situations, the discrimination abilities of collicular and thalamic neurons are clearly better than those of cortical neurons, although the latter remain only modestly affected by degraded acoustic conditions. Second, we report the functional effects of activating or inactivating corticofugal projections on the properties of subcortical neurons. In general, modest effects have been observed in anesthetized animals and in awake, passively listening animals. In contrast, in behavioral tasks involving challenging conditions, performance was severely reduced by removing or transiently silencing the corticofugal descending projections. This suggests that the discriminative abilities of subcortical neurons may be sufficient in many acoustic situations; it is only in particularly challenging situations, whether due to task difficulty and/or degraded acoustic conditions, that the corticofugal descending connections bring additional abilities. Here, we propose that it is both the top-down influences from the prefrontal cortex and those from the neuromodulatory systems that allow the cortical descending projections to impact behavioral performance by reshaping the functional circuitry of subcortical structures. We propose potential scenarios to explain how, and under which circumstances, these projections impact subcortical processing and behavioral responses.
Affiliation(s)
- Samira Souffi
- Department of Integrative and Computational Neurosciences, Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR CNRS 9197, Paris-Saclay University, Orsay, France
- Fernando R Nodal
- Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, United Kingdom
- Victoria M Bajo
- Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, United Kingdom
- Jean-Marc Edeline
- Department of Integrative and Computational Neurosciences, Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR CNRS 9197, Paris-Saclay University, Orsay, France

44
Verschueren E, Vanthornhout J, Francart T. The Effect of Stimulus Choice on an EEG-Based Objective Measure of Speech Intelligibility. Ear Hear 2021; 41:1586-1597. [PMID: 33136634 DOI: 10.1097/aud.0000000000000875] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
OBJECTIVES Recently, an objective measure of speech intelligibility (SI), based on brain responses derived from the electroencephalogram (EEG), has been developed using isolated Matrix sentences as a stimulus. We investigated whether this objective measure of SI can also be used with natural speech as a stimulus, as this would be beneficial for clinical applications. DESIGN We recorded the EEG in 19 normal-hearing participants while they listened to two types of stimuli: Matrix sentences and a natural story. Each stimulus was presented at different levels of SI by adding speech-weighted noise. SI was assessed in two ways for both stimuli: (1) behaviorally and (2) objectively, by reconstructing the speech envelope from the EEG using a linear decoder and correlating it with the acoustic envelope. We also calculated temporal response functions (TRFs) to investigate the temporal characteristics of the brain responses in the EEG channels covering different brain areas. RESULTS For both stimulus types, the correlation between the speech envelope and the reconstructed envelope increased with increasing SI. In addition, correlations were higher for the natural story than for the Matrix sentences. Similar to the linear decoder analysis, TRF amplitudes increased with increasing SI for both stimuli. Remarkably, although SI remained unchanged between the no-noise and +2.5 dB SNR conditions, neural speech processing was affected by the addition of this small amount of noise: TRF amplitudes across the entire scalp decreased between 0 and 150 ms, while amplitudes between 150 and 200 ms increased in the presence of noise. TRF latency changes as a function of SI appeared to be stimulus specific: the latency of the prominent negative peak in the early responses (50 to 300 ms) increased with increasing SI for the Matrix sentences, but remained unchanged for the natural story. CONCLUSIONS These results show (1) the feasibility of natural speech as a stimulus for the objective measure of SI; (2) that neural tracking of speech is enhanced using a natural story compared to Matrix sentences; and (3) that noise and the stimulus type can change the temporal characteristics of the brain responses. These results might reflect the integration of incoming acoustic features and top-down information, suggesting that the choice of stimulus has to be considered based on the intended purpose of the measurement.
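The backward-model analysis described in this abstract (reconstructing the speech envelope from the EEG with a linear decoder and correlating it with the acoustic envelope) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the ridge regularization value, lag range, and function names are assumptions.

```python
import numpy as np

def lagged_design(eeg: np.ndarray, max_lag: int) -> np.ndarray:
    """Stack copies of each EEG channel shifted 0..max_lag samples into
    the future, so the decoder can use EEG that follows the stimulus
    (neural responses lag the acoustics)."""
    n_samples, n_chan = eeg.shape
    X = np.zeros((n_samples, n_chan * (max_lag + 1)))
    for lag in range(max_lag + 1):
        X[:n_samples - lag, lag * n_chan:(lag + 1) * n_chan] = eeg[lag:]
    return X

def reconstruct_envelope(eeg, envelope, max_lag=16, ridge=0.1):
    """Fit a ridge-regularized linear decoder mapping lagged EEG to the
    speech envelope; return the reconstruction and its Pearson
    correlation with the true envelope (the objective SI measure)."""
    X = lagged_design(eeg, max_lag)
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ envelope)
    recon = X @ w
    return recon, np.corrcoef(recon, envelope)[0, 1]
```

With synthetic data in which one channel is simply a delayed copy of the envelope, the decoder recovers a near-perfect reconstruction; with real EEG, correlations are far lower and rise with intelligibility, as the study reports.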
Affiliation(s)
- Eline Verschueren
- Research Group Experimental Oto-rhino-laryngology (ExpORL), Department of Neurosciences, KU Leuven-University of Leuven, Leuven, Belgium

45
Prince P, Paul BT, Chen J, Le T, Lin V, Dimitrijevic A. Neural correlates of visual stimulus encoding and verbal working memory differ between cochlear implant users and normal-hearing controls. Eur J Neurosci 2021; 54:5016-5037. [PMID: 34146363 PMCID: PMC8457219 DOI: 10.1111/ejn.15365] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Revised: 06/10/2021] [Accepted: 06/14/2021] [Indexed: 11/29/2022]
Abstract
A common concern for individuals with severe-to-profound hearing loss fitted with cochlear implants (CIs) is difficulty following conversations in noisy environments. Recent work has suggested that these difficulties are related to individual differences in brain function, including verbal working memory and the degree of cross-modal reorganization of auditory areas for visual processing. However, the neural basis for these relationships is not fully understood. Here, we investigated neural correlates of visual verbal working memory and sensory plasticity in 14 CI users and age-matched normal-hearing (NH) controls. While we recorded the high-density electroencephalogram (EEG), participants completed a modified Sternberg visual working memory task in which sets of letters and numbers were presented visually and then recalled at a later time. Results suggested that CI users had working memory performance comparable to that of NH controls. However, CI users had more pronounced neural activity during visual stimulus encoding, including stronger visual-evoked activity in auditory and visual cortices, larger modulations of neural oscillations, and increased frontotemporal connectivity. In contrast, during memory retention of the characters, CI users had descriptively weaker neural oscillations and significantly lower frontotemporal connectivity. We interpret the differences in neural correlates of visual stimulus processing in CI users through the lens of cross-modal and intramodal plasticity.
Affiliation(s)
- Priyanka Prince
- Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario, Canada; Department of Physiology, University of Toronto, Toronto, Ontario, Canada
- Brandon T Paul
- Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario, Canada; Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Department of Psychology, Ryerson University, Toronto, Ontario, Canada
- Joseph Chen
- Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Trung Le
- Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Vincent Lin
- Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Andrew Dimitrijevic
- Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario, Canada; Department of Physiology, University of Toronto, Toronto, Ontario, Canada; Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada

46
Paul BT, Chen J, Le T, Lin V, Dimitrijevic A. Cortical alpha oscillations in cochlear implant users reflect subjective listening effort during speech-in-noise perception. PLoS One 2021; 16:e0254162. [PMID: 34242290 PMCID: PMC8270138 DOI: 10.1371/journal.pone.0254162] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2020] [Accepted: 06/22/2021] [Indexed: 12/12/2022] Open
Abstract
Listening to speech in noise is effortful for individuals with hearing loss, even if they have received a hearing prosthesis such as a hearing aid or cochlear implant (CI). At present, little is known about the neural functions that support listening effort. One form of neural activity that has been suggested to reflect listening effort is the power of 8–12 Hz (alpha) oscillations measured by electroencephalography (EEG). Alpha power in two cortical regions has been associated with effortful listening, the left inferior frontal gyrus (IFG) and parietal cortex, but these relationships have not been examined in the same listeners. Further, few studies have investigated neural correlates of effort in individuals with cochlear implants. Here we tested 16 CI users in a novel effort-focused speech-in-noise listening paradigm, and confirmed a relationship between alpha power and self-reported effort ratings in parietal regions, but not the left IFG. The parietal relationship was not linear but quadratic, with alpha power comparatively lower when effort ratings were at the top and bottom of the effort scale, and higher when effort ratings were in the middle of the scale. Results are discussed in terms of the cognitive systems engaged in difficult listening situations and the implications for clinical translation.
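Alpha-band (8-12 Hz) power of the kind related to effort ratings here is commonly estimated by integrating a Welch power spectral density over the band. A minimal sketch, not the authors' code; the function name and window length are assumptions:

```python
import numpy as np
from scipy.signal import welch

def band_power(signal: np.ndarray, fs: float, band=(8.0, 12.0)) -> float:
    """Estimate power in a frequency band (default: 8-12 Hz alpha) for
    one EEG channel by integrating a Welch power spectral density."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))  # 2 s windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))
```

In practice such a value would be computed per channel and per trial, then related to self-reported effort as in the quadratic fit the study describes.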
Affiliation(s)
- Brandon T. Paul
- Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario, Canada; Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Joseph Chen
- Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Trung Le
- Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Vincent Lin
- Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Andrew Dimitrijevic
- Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario, Canada; Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada

47
Effects of long-term unilateral cochlear implant use on large-scale network synchronization in adolescents. Hear Res 2021; 409:108308. [PMID: 34343851 DOI: 10.1016/j.heares.2021.108308] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Revised: 06/25/2021] [Accepted: 06/29/2021] [Indexed: 11/20/2022]
Abstract
Unilateral cochlear implantation limits deafness-related changes in the auditory pathways but promotes abnormal cortical preference for the stimulated ear and leaves the opposite ear with little protection from auditory deprivation. In the present study, time-frequency analyses of event-related potentials elicited by stimuli presented to each ear were used to determine the effects of unilateral cochlear implant (CI) use on cortical synchrony. CI-elicited activity in 34 adolescents (15.4 ± 1.9 years of age) who had listened with unilateral CIs for most of their lives prior to bilateral implantation was compared to responses elicited by a 500 Hz tone burst in normal-hearing peers. Phase-locking values between 4 and 60 Hz were calculated for 171 pairs of the 19 cephalic recording electrodes. Ear-specific results were found in the normal-hearing group: higher synchronization in low-frequency bands (theta and alpha) from left-ear stimulation in the right hemisphere, and more high-frequency activity (gamma band) from right-ear stimulation in the left hemisphere. In the CI group, increased phase synchronization in the theta and beta frequencies, with bursts of gamma activity, was elicited by the experienced right CI between frontal, temporal, and parietal cortical regions in both hemispheres, consistent with increased recruitment of cortical areas involved in attention and higher-order processes, potentially to support unilateral listening. By contrast, activity was globally desynchronized in response to initial stimulation of the naïve left ear, suggesting decoupling of these pathways from the cortical hearing network. These data reveal asymmetric auditory development promoted by unilateral CI use, resulting in an abnormally mature neural network.
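The phase-locking value (PLV) used in the electrode-pair analysis above is the magnitude of the mean unit phase-difference vector. A minimal sketch, computed here across time for a pair of narrowband signals (PLV is also often computed across trials at each time point), with instantaneous phase taken from the analytic (Hilbert) signal:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x: np.ndarray, y: np.ndarray) -> float:
    """Phase-locking value between two narrowband signals: magnitude of
    the mean unit phase-difference vector (1 = perfectly phase locked,
    0 = uniformly random phase relation)."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))
```

Applied to all pairs of 19 electrodes, this yields the 171 pairwise values per frequency band analyzed in the study.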
|
48
|
Wisniewski MG, Zakrzewski AC, Bell DR, Wheeler M. EEG power spectral dynamics associated with listening in adverse conditions. Psychophysiology 2021; 58:e13877. [PMID: 34161612 DOI: 10.1111/psyp.13877] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2020] [Revised: 05/15/2021] [Accepted: 05/17/2021] [Indexed: 01/08/2023]
Abstract
Adverse listening conditions increase the demand on cognitive resources needed for speech comprehension. In an exploratory study, we aimed to identify independent power spectral features in the EEG useful for studying the cognitive processes involved in this effortful listening. Listeners performed the coordinate response measure task with a single-talker masker at a 0-dB signal-to-noise ratio. Sounds were left unfiltered or degraded with low-pass filtering. Independent component analysis (ICA) was used to identify independent components (ICs) in the EEG data, the power spectral dynamics of which were then analyzed. Frontal midline theta, left frontal, right frontal, left mu, right mu, left temporal, parietal, left occipital, central occipital, and right occipital clusters of ICs were identified. All IC clusters showed some significant listening-related changes in their power spectrum. This included sustained theta enhancements, gamma enhancements, alpha enhancements, alpha suppression, beta enhancements, and mu rhythm suppression. Several of these effects were absent or negligible using traditional channel analyses. Comparison of filtered to unfiltered speech revealed a stronger alpha suppression in the parietal and central occipital clusters of ICs for the filtered speech condition. This not only replicates recent findings showing greater alpha suppression as listening difficulty increases but also suggests that such alpha-band effects can stem from multiple cortical sources. We lay out the advantages of the ICA approach over the restrictive analyses that have been used as of late in the study of listening effort. We also make suggestions for moving into hypothesis-driven studies regarding the power spectral features that were revealed.
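The ICA approach advocated here decomposes multichannel EEG into statistically independent components and then analyzes each component's power spectrum, rather than analyzing raw channels. A toy sketch of that idea (not the authors' code; the simulated "EEG", mixing matrix, and frequencies are assumptions):

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import FastICA

# Toy "EEG": two latent rhythms (theta-like 6 Hz, alpha-like 10 Hz)
# linearly mixed into 4 channels with a little sensor noise.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(0, 10, 1 / fs)
sources = np.c_[np.sin(2 * np.pi * 6 * t), np.sin(2 * np.pi * 10 * t)]
mixing = rng.normal(size=(2, 4))
channels = sources @ mixing + 0.05 * rng.normal(size=(len(t), 4))

# Unmix into independent components, then inspect each IC's spectrum.
ica = FastICA(n_components=2, random_state=0)
ics = ica.fit_transform(channels)
peaks = []
for k in range(ics.shape[1]):
    f, pxx = welch(ics[:, k], fs=fs, nperseg=fs * 2)
    peaks.append(f[np.argmax(pxx)])
print("IC peak frequencies (Hz):", sorted(peaks))
```

Channel-level spectra mix both rhythms at every sensor; the ICs recover them separately, which is why effects that cancel or blur at the channel level can emerge in IC clusters.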
Affiliation(s)
- Matthew G Wisniewski
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, USA
- Destiny R Bell
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, USA
- Michelle Wheeler
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, USA
|
49
|
Riha C, Güntensperger D, Oschwald J, Kleinjung T, Meyer M. Application of Latent Growth Curve modeling to predict individual trajectories during neurofeedback treatment for tinnitus. Prog Brain Res 2021; 263:109-136. [PMID: 34243885 DOI: 10.1016/bs.pbr.2021.04.013] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Tinnitus is a heterogeneous phenomenon indexed by various EEG oscillatory profiles. Applying neurofeedback (NFB) with the aim of changing these oscillatory patterns not only provides help for those who suffer from the phantom percept but also a promising foundation from which to probe influential factors. The reliable attribution of influential factors that potentially predict oscillatory changes during the course of NFB training may lead to the identification of subgroups of individuals that are more or less responsive to NFB training. The present study investigated oscillatory trajectories of delta (3-4 Hz) and individual alpha (8.5-12 Hz) across 15 NFB training sessions within a Latent Growth Curve framework. First, we found the desired enhancement of alpha, while delta was stable throughout the NFB training. Individual differences in tinnitus-specific variables and in general and health-related quality-of-life predictors were largely unrelated to oscillatory change prior to and across the training. Only the baseline predictors age and sex were clearly related to slow-wave delta, particularly so for older female individuals, who showed higher delta power values from the start. Second, we confirmed a hierarchical cross-frequency association between the two frequency bands, although in directions opposite to those anticipated in tinnitus. The establishment of individually tailored NFB protocols would boost this therapy's effectiveness in the treatment of tinnitus. In our analysis, we propose a conceptual groundwork toward this goal of developing more targeted treatment.
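A latent growth curve model estimates a per-subject intercept and slope over the 15 sessions plus group-level means of both. A simplified two-stage approximation (an illustrative stand-in, not the authors' model, which fits both levels jointly; all simulated values are assumptions) fits each subject's linear trend and then summarizes the slopes:

```python
import numpy as np

# Simulate 20 subjects' alpha power across 15 NFB sessions: random
# intercepts, random slopes around a hypothetical mean growth of 0.05
# per session, plus session-level noise.
rng = np.random.default_rng(1)
sessions = np.arange(15)
n_subjects = 20
true_slope = 0.05
alpha = (1.0 + rng.normal(0, 0.2, n_subjects)[:, None]
         + (true_slope + rng.normal(0, 0.01, n_subjects))[:, None] * sessions
         + rng.normal(0, 0.05, (n_subjects, len(sessions))))

# Stage 1: per-subject linear fit; stage 2: group-level mean slope.
slopes = np.array([np.polyfit(sessions, subj, 1)[0] for subj in alpha])
mean_slope = slopes.mean()
print(f"mean growth per session: {mean_slope:.3f}")
```

In the full model, covariates such as age or sex enter as predictors of the latent intercept and slope, which is how the study tests whether such variables explain individual trajectories.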
Affiliation(s)
- Constanze Riha
- Chair of Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland; Research Priority Program "ESIT-European School of Interdisciplinary Tinnitus Research", Zurich, Switzerland
- Dominik Güntensperger
- Chair of Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland
- Jessica Oschwald
- University Research Priority Program "Dynamics of Healthy Aging", University of Zurich, Zurich, Switzerland
- Tobias Kleinjung
- Department of Otorhinolaryngology, University Hospital Zurich, Zurich, Switzerland
- Martin Meyer
- Chair of Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland
|
50
|
Adult Users of the Oticon Medical Neuro Cochlear Implant System Benefit from Beamforming in the High Frequencies. Audiol Res 2021; 11:179-191. [PMID: 33923595 PMCID: PMC8167646 DOI: 10.3390/audiolres11020016] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Revised: 04/09/2021] [Accepted: 04/13/2021] [Indexed: 11/17/2022] Open
Abstract
The Oticon Medical Neuro cochlear implant system includes the modes Opti Omni and Speech Omni, the latter providing beamforming (i.e., directional selectivity) in the high frequencies. Two studies compared sentence identification scores of adult cochlear implant users with Opti Omni and Speech Omni. In Study 1, a double-blind longitudinal crossover study, 12 new users trialed Opti Omni or Speech Omni (random allocation) for three months, and their sentence identification in quiet and in noise (+10 dB signal-to-noise ratio) with the trialed mode was measured. The same procedure was repeated for the second mode. In Study 2, a single-blind study, 11 experienced users performed a speech identification task in quiet and at signal-to-noise ratios ranging from -3 to +18 dB with Opti Omni and Speech Omni. The Study 1 scores in quiet and in noise were significantly better with Speech Omni than with Opti Omni. The Study 2 scores were significantly better with Speech Omni than with Opti Omni at +6 and +9 dB signal-to-noise ratios. Beamforming in the high frequencies, as implemented in Speech Omni, leads to improved speech identification at the medium levels of background noise where cochlear implant users spend most of their day.
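Speech-in-noise testing at fixed signal-to-noise ratios, as in both studies above, amounts to scaling the masker so that the speech-to-noise power ratio hits a target value before mixing. A generic sketch (a standard construction, not the studies' test software; the signals are placeholders):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so that 10*log10(P_speech / P_noise) == snr_db, then mix."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

# Placeholder "speech" (a tone) and Gaussian noise, mixed at +6 dB SNR.
rng = np.random.default_rng(0)
fs = 8000
speech = np.sin(2 * np.pi * 200 * np.arange(fs) / fs)
mixed = mix_at_snr(speech, rng.normal(0, 1, fs), 6.0)

# Recover the scaled noise and verify the realized SNR.
scaled_noise = mixed - speech
realized = 10 * np.log10(np.mean(speech ** 2) / np.mean(scaled_noise ** 2))
print(f"realized SNR: {realized:.1f} dB")
```

Sweeping `snr_db` over the tested values (e.g., -3 to +18 dB in Study 2) produces the stimulus conditions for a psychometric comparison of the two modes.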
|