1
Shim L, Kim J, Kim G, Lee HJ. Ear-specific neuroplasticity for sound localization in individuals with single-sided deafness. Hear Res 2025;459:109207. [PMID: 39933256] [DOI: 10.1016/j.heares.2025.109207]
Abstract
Studies on cortical plasticity in individuals with single-sided deafness (SSD) show increased activity in the auditory cortex ipsilateral to the hearing ear, impacting auditory localization and rehabilitation outcomes. However, the direct relationship between neuroplastic changes and binaural processing in SSD remains unclear, as does the specificity of plasticity to the affected ear. In this study, two groups of SSD patients (left [Lt] SSD, 17; right [Rt] SSD, 18) of postlingual onset and 13 normal-hearing controls (NC) underwent fMRI during an auditory localization task. The NC group also wore earplugs to simulate acute monaural hearing. We compared the cortical networks involved in auditory localization and conducted correlation analyses to identify neural activity associated with SSD duration and localization performance. The response laterality in the auditory cortex was analyzed and compared across groups. Results indicated that extended SSD modulates auditory cortical response in the right primary auditory cortex. The posterior superior temporal gyrus and cingulo-opercular network were linked to improved localization performance. Findings suggest that cortical attentional resources are crucial for auditory spatial behavior in SSD, especially when the left ear is impaired.
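The abstract reports analyzing response laterality in the auditory cortex. The study's exact metric is not given here; a conventional choice is the laterality index, sketched below with hypothetical activation values.

```python
def laterality_index(left, right):
    """Conventional laterality index LI = (L - R) / (L + R).
    +1 = fully left-lateralized, -1 = fully right-lateralized,
    0 = bilateral. Inputs are mean activation magnitudes
    (non-negative, not both zero)."""
    return (left - right) / (left + right)

# Hypothetical mean auditory-cortex responses (arbitrary units).
print(f"LI = {laterality_index(left=1.8, right=1.2):.2f}")  # LI = 0.20
```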
Affiliation(s)
- Leeseul Shim
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Gyeonggi-do 14068, Republic of Korea; Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Sacred Heart Hospital, Anyang-si, Gyeonggi-do, Republic of Korea
- Jahee Kim
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon-si, Gangwon-do 24252, Republic of Korea
- Gibbeum Kim
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Gyeonggi-do 14068, Republic of Korea
- Hyo-Jeong Lee
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Gyeonggi-do 14068, Republic of Korea; Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Sacred Heart Hospital, Anyang-si, Gyeonggi-do, Republic of Korea; Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon-si, Gangwon-do 24252, Republic of Korea.
2
Burleson AM, Souza PE. The time course of cognitive effort during disrupted speech. Q J Exp Psychol (Hove) 2025:17470218251316797. [PMID: 39840813] [DOI: 10.1177/17470218251316797]
Abstract
Listeners often find themselves in scenarios where speech is disrupted, misperceived, or otherwise difficult to recognise. In these situations, many individuals report exerting additional effort to understand speech, even when repairing speech may be difficult or impossible. This investigation aimed to characterise cognitive effort across time during both sentence listening and a post-sentence retention interval by observing the pupillary response of participants with normal to borderline-normal hearing in response to two interrupted speech conditions: sentences interrupted by gaps of silence or bursts of noise. The pupillary response serves as a measure of the cumulative resources devoted to task completion. Both interruption conditions resulted in significantly greater levels of pupil dilation than the uninterrupted speech condition. Just prior to the end of a sentence, trials periodically interrupted by bursts of noise elicited greater pupil dilation than trials in the silent-interrupted condition. Compared to the uninterrupted condition, both interruption conditions resulted in increased dilation after sentence end but before repetition, possibly reflecting sustained processing demands. Understanding pupil dilation as a marker of cognitive effort is important for clinicians and researchers when assessing the additional effort exerted by listeners with hearing loss who may use cochlear implants or hearing aids. Even when successful perceptual repair is unlikely, listeners may continue to exert increased effort when processing misperceived speech, which could cause them to miss upcoming speech or may contribute to heightened listening fatigue.
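Because the abstract treats the pupillary response as a cumulative index of effort across the sentence and retention interval, a minimal sketch of the usual first preprocessing step, baseline-correcting each trial's pupil trace, may help. The sampling rate, window lengths, and synthetic data below are assumptions, not details from the paper.

```python
import numpy as np

def baseline_corrected_dilation(pupil_trace, fs, baseline_s=1.0):
    """Subtract the mean pupil size in a pre-stimulus baseline window
    from every sample, yielding evoked dilation in the same units.

    pupil_trace : 1-D array of pupil diameters for one trial, whose
                  first `baseline_s` seconds precede stimulus onset.
    fs          : sampling rate in Hz.
    """
    n_base = int(baseline_s * fs)
    baseline = np.mean(pupil_trace[:n_base])
    return pupil_trace - baseline

# Hypothetical trial: 1 s baseline + 4 s of listening, sampled at 60 Hz.
fs = 60
t = np.arange(5 * fs) / fs
trial = 3.0 + 0.2 * np.exp(-(t - 2.5) ** 2)   # synthetic dilation peak
evoked = baseline_corrected_dilation(trial, fs)
print(f"peak dilation: {evoked.max():.3f} (arbitrary units)")
```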
Affiliation(s)
- Andrew M Burleson
- Hearing Aid Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Emerging Auditory Research Laboratory, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Pamela E Souza
- Hearing Aid Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
3
Zekveld AA, Kramer SE, Heslenfeld DJ, Versfeld NJ, Vriend C. Hearing Impairment: Reduced Pupil Dilation Response and Frontal Activation During Degraded Speech Perception. J Speech Lang Hear Res 2024;67:4549-4566. [PMID: 39392910] [DOI: 10.1044/2024_jslhr-24-00017]
Abstract
PURPOSE A relevant aspect of listening is the effort required during speech processing, which can be assessed by pupillometry. Here, we assessed the pupil dilation response of normal-hearing (NH) and hard of hearing (HH) individuals during listening to clear sentences and masked or degraded sentences. We combined this assessment with functional magnetic resonance imaging (fMRI) to investigate the neural correlates of the pupil dilation response. METHOD Seventeen NH participants (mean age = 46 years) were compared to 17 HH participants (mean age = 45 years) who were individually matched in age and educational level. Participants repeated sentences that were presented clearly, that were distorted, or that were masked. The sentence intelligibility level of masked and distorted sentences was 50% correct. Silent baseline trials were presented as well. Performance measures, pupil dilation responses, and fMRI data were acquired. RESULTS HH individuals had overall poorer speech reception than the NH participants, but not for noise-vocoded speech. In addition, an interaction effect was observed with smaller pupil dilation responses in HH than in NH listeners for the degraded speech conditions. Hearing impairment was associated with higher activation across conditions in the left superior temporal gyrus, as compared to the silent baseline. However, the region of interest analysis indicated lower activation during degraded speech relative to clear speech in bilateral frontal regions and the insular cortex, for HH compared to NH listeners. Hearing impairment was also associated with a weaker relation between the pupil response and activation in the right inferior frontal gyrus. Overall, degraded speech evoked higher frontal activation than clear speech. CONCLUSION Brain areas associated with attentional and cognitive-control processes may be increasingly recruited when speech is degraded and are related to the pupil dilation response, but this relationship is weaker in HH listeners. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.27162135.
Affiliation(s)
- Adriana A Zekveld
- Otolaryngology-Head and Neck Surgery, Amsterdam UMC location Vrije Universiteit Amsterdam, the Netherlands
- Amsterdam Public Health Research Institute, the Netherlands
- Institute of Psychology, Leiden University, the Netherlands
- Sophia E Kramer
- Otolaryngology-Head and Neck Surgery, Amsterdam UMC location Vrije Universiteit Amsterdam, the Netherlands
- Amsterdam Public Health Research Institute, the Netherlands
- Dirk J Heslenfeld
- Faculty of Behavioural and Movement Sciences, Experimental and Applied Psychology, VU University, Amsterdam, the Netherlands
- Niek J Versfeld
- Otolaryngology-Head and Neck Surgery, Amsterdam UMC location Vrije Universiteit Amsterdam, the Netherlands
- Amsterdam Public Health Research Institute, the Netherlands
- Chris Vriend
- Department of Psychiatry and Department of Anatomy and Neuroscience, Amsterdam UMC, Vrije Universiteit Amsterdam, the Netherlands
- Brain Imaging, Amsterdam Neuroscience, the Netherlands
4
Perron M, Vuong V, Grassi MW, Imran A, Alain C. Engagement of the speech motor system in challenging speech perception: Activation likelihood estimation meta-analyses. Hum Brain Mapp 2024;45:e70023. [PMID: 39268584] [PMCID: PMC11393483] [DOI: 10.1002/hbm.70023]
Abstract
The relationship between speech production and perception is a topic of ongoing debate. Some argue that there is little interaction between the two, while others claim they share representations and processes. One perspective suggests increased recruitment of the speech motor system in demanding listening situations to facilitate perception. However, uncertainties persist regarding the specific regions involved and the listening conditions influencing its engagement. This study used activation likelihood estimation in coordinate-based meta-analyses to investigate the neural overlap between speech production and three speech perception conditions: speech-in-noise, spectrally degraded speech and linguistically complex speech. Neural overlap was observed in the left frontal, insular and temporal regions. Key nodes included the left frontal operculum (FOC), left posterior lateral part of the inferior frontal gyrus (IFG), left planum temporale (PT), and left pre-supplementary motor area (pre-SMA). The left IFG activation was consistently observed during linguistic processing, suggesting sensitivity to the linguistic content of speech. In comparison, the left pre-SMA activation was observed when processing degraded and noisy signals, indicating sensitivity to signal quality. Activation of the left PT and FOC was noted in all conditions, with the posterior FOC area overlapping across conditions. Our meta-analysis reveals context-independent (FOC, PT) and context-dependent (pre-SMA, posterior lateral IFG) regions within the speech motor system during challenging speech perception. These regions could contribute to sensorimotor integration and executive cognitive control for perception and production.
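For readers unfamiliar with activation likelihood estimation, the core computation blurs each reported focus into a modeled activation (MA) map and combines maps with a probabilistic union. A toy sketch follows; the kernel width, grid, and foci are hypothetical, and real ALE implementations use sample-size-dependent kernels and permutation-based thresholding.

```python
import numpy as np

def ale_map(foci, grid_shape, sigma_mm=5.0, voxel_mm=2.0):
    """Toy activation likelihood estimation (ALE) on a 3-D grid.

    Each focus is blurred into a modeled activation (MA) map with a
    Gaussian kernel; maps are combined with the ALE union rule
    ALE = 1 - prod(1 - MA_i), so overlap across experiments is rewarded.
    """
    grid = np.indices(grid_shape).reshape(3, -1).T * voxel_mm  # voxel centers in mm
    ale = np.zeros(grid.shape[0])
    for focus in foci:
        d2 = np.sum((grid - focus) ** 2, axis=1)
        ma = np.exp(-d2 / (2 * sigma_mm ** 2))    # peak scaled to 1 for simplicity
        ale = 1 - (1 - ale) * (1 - ma)            # probabilistic union
    return ale.reshape(grid_shape)

# Two hypothetical nearby foci (mm coordinates) from different experiments.
ale = ale_map(foci=[np.array([20, 20, 20]), np.array([24, 20, 20])],
              grid_shape=(32, 32, 32))
print("max ALE value:", float(ale.max()))
```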
Affiliation(s)
- Maxime Perron
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Veronica Vuong
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Institute of Medical Sciences, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Music and Health Science Research Collaboratory, Faculty of Music, University of Toronto, Toronto, Ontario, Canada
- Madison W Grassi
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Ashna Imran
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Claude Alain
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Institute of Medical Sciences, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Music and Health Science Research Collaboratory, Faculty of Music, University of Toronto, Toronto, Ontario, Canada
5
Herrera C, Whittle N, Leek MR, Brodbeck C, Lee G, Barcenas C, Barnes S, Holshouser B, Yi A, Venezia JH. Cortical networks for recognition of speech with simultaneous talkers. Hear Res 2023;437:108856. [PMID: 37531847] [DOI: 10.1016/j.heares.2023.108856]
Abstract
The relative contributions of superior temporal vs. inferior frontal and parietal networks to recognition of speech in a background of competing speech remain unclear, although the contributions themselves are well established. Here, we use fMRI with spectrotemporal modulation transfer function (ST-MTF) modeling to examine the speech information represented in temporal vs. frontoparietal networks for two speech recognition tasks with and without a competing talker. Specifically, 31 listeners completed two versions of a three-alternative forced choice competing speech task: "Unison" and "Competing", in which a female (target) and a male (competing) talker uttered identical or different phrases, respectively. Spectrotemporal modulation filtering (i.e., acoustic distortion) was applied to the two-talker mixtures and ST-MTF models were generated to predict brain activation from differences in spectrotemporal-modulation distortion on each trial. Three cortical networks were identified based on differential patterns of ST-MTF predictions and the resultant ST-MTF weights across conditions (Unison, Competing): a bilateral superior temporal (S-T) network, a frontoparietal (F-P) network, and a network distributed across cortical midline regions and the angular gyrus (M-AG). The S-T network and the M-AG network responded primarily to spectrotemporal cues associated with speech intelligibility, regardless of condition, but the S-T network responded to a greater range of temporal modulations suggesting a more acoustically driven response. The F-P network responded to the absence of intelligibility-related cues in both conditions, but also to the absence (presence) of target-talker (competing-talker) vocal pitch in the Competing condition, suggesting a generalized response to signal degradation. Task performance was best predicted by activation in the S-T and F-P networks, but in opposite directions (S-T: more activation = better performance; F-P: vice versa). Moreover, S-T network predictions were entirely ST-MTF mediated while F-P network predictions were ST-MTF mediated only in the Unison condition, suggesting an influence from non-acoustic sources (e.g., informational masking) in the Competing condition. Activation in the M-AG network was weakly positively correlated with performance and this relation was entirely superseded by those in the S-T and F-P networks. Regarding contributions to speech recognition, we conclude: (a) superior temporal regions play a bottom-up, perceptual role that is not qualitatively dependent on the presence of competing speech; (b) frontoparietal regions play a top-down role that is modulated by competing speech and scales with listening effort; and (c) performance ultimately relies on dynamic interactions between these networks, with ancillary contributions from networks not involved in speech processing per se (e.g., the M-AG network).
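The spectrotemporal modulation filtering described here can be pictured as masking the 2-D Fourier transform of a spectrogram. The sketch below low-passes only the temporal-modulation axis of a synthetic spectrogram; the study filtered both spectral and temporal modulation dimensions, and all parameters and data here are illustrative, not the authors' pipeline.

```python
import numpy as np

def st_modulation_lowpass(spec, max_temporal_mod_hz, frame_rate_hz):
    """Crude spectrotemporal modulation filter: take the 2-D FFT of a
    (frequency x time) spectrogram, zero temporal-modulation components
    above a cutoff, and invert. The symmetric mask preserves Hermitian
    symmetry, so the inverse transform is (numerically) real."""
    F = np.fft.fft2(spec)
    mod_axis = np.fft.fftfreq(spec.shape[1], d=1.0 / frame_rate_hz)  # in Hz
    F[:, np.abs(mod_axis) > max_temporal_mod_hz] = 0
    return np.real(np.fft.ifft2(F))

rng = np.random.default_rng(3)
spec = rng.random((64, 200))          # hypothetical 64 channels x 200 frames
filtered = st_modulation_lowpass(spec, max_temporal_mod_hz=8, frame_rate_hz=100)
print(filtered.shape)                 # (64, 200), temporal modulations <= 8 Hz
```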
Affiliation(s)
- Nicole Whittle
- VA Loma Linda Healthcare System, Loma Linda, CA, United States
- Marjorie R Leek
- VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States
- Grace Lee
- Loma Linda University, Loma Linda, CA, United States
- Samuel Barnes
- Loma Linda University, Loma Linda, CA, United States
- Alex Yi
- VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States
- Jonathan H Venezia
- VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States.
6
Han JH, Lee J, Lee HJ. The effect of noise on the cortical activity patterns of speech processing in adults with single-sided deafness. Front Neurol 2023;14:1054105. [PMID: 37006498] [PMCID: PMC10060629] [DOI: 10.3389/fneur.2023.1054105]
Abstract
The most common complaint in people with single-sided deafness (SSD) is difficulty in understanding speech in a noisy environment. Moreover, the neural mechanism of speech-in-noise (SiN) perception in SSD individuals is still poorly understood. In this study, we measured the cortical activity in SSD participants during a SiN task to compare with a speech-in-quiet (SiQ) task. Dipole source analysis revealed left hemispheric dominance in both the left- and right-sided SSD groups. In contrast to SiN listening, this hemispheric difference was not found during SiQ listening in either group. In addition, cortical activation in the right-sided SSD individuals was independent of the location of sound, whereas activation sites in the left-sided SSD group were altered by the sound location. Examining the neural-behavioral relationship revealed that N1 activation is associated with the duration of deafness and the SiN perception ability of individuals with SSD. Our findings indicate that SiN listening is processed differently in the brains of left and right SSD individuals.
Affiliation(s)
- Ji-Hye Han
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Republic of Korea
- Jihyun Lee
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Republic of Korea
- Hyo-Jeong Lee
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Republic of Korea
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon, Republic of Korea
7
Sherafati A, Dwyer N, Bajracharya A, Hassanpour MS, Eggebrecht AT, Firszt JB, Culver JP, Peelle JE. Prefrontal cortex supports speech perception in listeners with cochlear implants. eLife 2022;11:e75323. [PMID: 35666138] [PMCID: PMC9225001] [DOI: 10.7554/elife.75323]
Abstract
Cochlear implants are neuroprosthetic devices that can restore hearing in people with severe to profound hearing loss by electrically stimulating the auditory nerve. Because of physical limitations on the precision of this stimulation, the acoustic information delivered by a cochlear implant does not convey the same level of acoustic detail as that conveyed by normal hearing. As a result, speech understanding in listeners with cochlear implants is typically poorer and more effortful than in listeners with normal hearing. The brain networks supporting speech understanding in listeners with cochlear implants are not well understood, partly due to difficulties obtaining functional neuroimaging data in this population. In the current study, we assessed the brain regions supporting spoken word understanding in adult listeners with right unilateral cochlear implants (n=20) and matched controls (n=18) using high-density diffuse optical tomography (HD-DOT), a quiet and non-invasive imaging modality with spatial resolution comparable to that of functional MRI. We found that while listening to spoken words in quiet, listeners with cochlear implants showed greater activity in the left prefrontal cortex than listeners with normal hearing, specifically in a region engaged in a separate spatial working memory task. These results suggest that listeners with cochlear implants require greater cognitive processing during speech understanding than listeners with normal hearing, supported by compensatory recruitment of the left prefrontal cortex.
Affiliation(s)
- Arefeh Sherafati
- Department of Radiology, Washington University in St. Louis, St. Louis, United States
- Noel Dwyer
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Aahana Bajracharya
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Adam T Eggebrecht
- Department of Radiology, Washington University in St. Louis, St. Louis, United States
- Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, United States
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, United States
- Division of Biology and Biomedical Sciences, Washington University in St. Louis, St. Louis, United States
- Jill B Firszt
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Joseph P Culver
- Department of Radiology, Washington University in St. Louis, St. Louis, United States
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, United States
- Division of Biology and Biomedical Sciences, Washington University in St. Louis, St. Louis, United States
- Department of Physics, Washington University in St. Louis, St. Louis, United States
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
8
Vaden KI, Teubner-Rhodes S, Ahlstrom JB, Dubno JR, Eckert MA. Evidence for cortical adjustments to perceptual decision criteria during word recognition in noise. Neuroimage 2022;253:119042. [PMID: 35259524] [PMCID: PMC9082296] [DOI: 10.1016/j.neuroimage.2022.119042]
Abstract
Extensive increases in cingulo-opercular frontal activity are typically observed during speech recognition in noise tasks. This elevated activity has been linked to a word recognition benefit on the next trial, termed "adaptive control," but how this effect might be implemented has been unclear. The established link between perceptual decision making and cingulo-opercular function may provide an explanation for how those regions benefit subsequent word recognition. In this case, processes that support recognition such as raising or lowering the decision criteria for more accurate or faster recognition may be adjusted to optimize performance on the next trial. The current neuroimaging study tested the hypothesis that pre-stimulus cingulo-opercular activity reflects criterion adjustments that determine how much information to collect for word recognition on subsequent trials. Participants included middle-aged and older adults (N = 30; age = 58.3 ± 8.8 years, mean ± SD) with normal hearing or mild sensorineural hearing loss. During a sparse fMRI experiment, words were presented in multitalker babble at +3 dB or +10 dB signal-to-noise ratio (SNR), which participants were instructed to repeat aloud. Word recognition was significantly poorer with increasing participant age and lower SNR compared to higher SNR conditions. A perceptual decision-making model was used to characterize processing differences based on task response latency distributions. The model showed that significantly less sensory evidence was collected (i.e., lower criteria) for lower compared to higher SNR trials. Replicating earlier observations, pre-stimulus cingulo-opercular activity was significantly predictive of correct recognition on a subsequent trial. Individual differences showed that participants with higher criteria also benefitted the most from pre-stimulus activity. Moreover, trial-level criteria changes were significantly linked to higher versus lower pre-stimulus activity. These results suggest cingulo-opercular cortex contributes to criteria adjustments to optimize speech recognition task performance.
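The abstract does not name the specific perceptual decision-making model, but the criterion logic can be illustrated with a generic drift-diffusion-style accumulator: lowering the decision boundary means less sensory evidence is collected, trading accuracy for speed. All parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_trial(drift, boundary, dt=0.001, noise=1.0, max_t=5.0):
    """Accumulate noisy evidence until +/-boundary is crossed.
    Returns (response time in s, correct?). A lower boundary means
    less evidence is collected: faster but noisier decisions."""
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, x >= boundary

for boundary in (0.8, 1.6):   # hypothetical low vs. high decision criterion
    trials = [diffusion_trial(drift=1.0, boundary=boundary) for _ in range(500)]
    rts = np.array([t for t, _ in trials])
    acc = np.mean([c for _, c in trials])
    print(f"boundary={boundary}: mean RT={rts.mean():.2f}s, accuracy={acc:.2f}")
```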
Affiliation(s)
- Kenneth I. Vaden
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Susan Teubner-Rhodes
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Department of Psychological Sciences, 226 Thach Hall, Auburn University, AL 36849-9027, United States
- Jayne B. Ahlstrom
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Judy R. Dubno
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Mark A. Eckert
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
9
Al-Zubaidi A, Bräuer S, Holdgraf CR, Schepers IM, Rieger JW. OUP accepted manuscript. Cereb Cortex Commun 2022;3:tgac007. [PMID: 35281216] [PMCID: PMC8914075] [DOI: 10.1093/texcom/tgac007]
Affiliation(s)
- Arkan Al-Zubaidi
- Applied Neurocognitive Psychology Lab and Cluster of Excellence Hearing4all, Oldenburg University, Oldenburg, Germany
- Research Center Neurosensory Science, Oldenburg University, 26129 Oldenburg, Germany
- Susann Bräuer
- Applied Neurocognitive Psychology Lab and Cluster of Excellence Hearing4all, Oldenburg University, Oldenburg, Germany
- Chris R Holdgraf
- Department of Statistics, UC Berkeley, Berkeley, CA 94720, USA
- International Interactive Computing Collaboration
- Inga M Schepers
- Applied Neurocognitive Psychology Lab and Cluster of Excellence Hearing4all, Oldenburg University, Oldenburg, Germany
- Jochem W Rieger
- Department of Psychology, Faculty VI, Oldenburg University, 26129 Oldenburg, Germany
10
Eckert MA, Teubner-Rhodes S, Vaden KI, Ahlstrom JB, McClaskey CM, Dubno JR. Unique patterns of hearing loss and cognition in older adults' neural responses to cues for speech recognition difficulty. Brain Struct Funct 2022;227:203-218. [PMID: 34632538] [PMCID: PMC9044122] [DOI: 10.1007/s00429-021-02398-2]
Abstract
Older adults with hearing loss experience significant difficulties understanding speech in noise, perhaps due in part to limited benefit from supporting executive functions that enable the use of environmental cues signaling changes in listening conditions. Here we examined the degree to which 41 older adults (60.56-86.25 years) exhibited cortical responses to informative listening difficulty cues that communicated the listening difficulty for each trial compared to neutral cues that were uninformative of listening difficulty. Word recognition was significantly higher for informative compared to uninformative cues in a +10 dB signal-to-noise ratio (SNR) condition, and response latencies were significantly shorter for informative cues in the +10 dB SNR and the more-challenging +2 dB SNR conditions. Informative cues were associated with elevated blood oxygenation level-dependent contrast in visual and parietal cortex. A cue-SNR interaction effect was observed in the cingulo-opercular (CO) network, such that activity only differed between SNR conditions when an informative cue was presented. That is, participants used the informative cues to prepare for changes in listening difficulty from one trial to the next. This cue-SNR interaction effect was driven by older adults with more low-frequency hearing loss and was not observed for those with more high-frequency hearing loss, poorer set-shifting task performance, and lower frontal operculum gray matter volume. These results suggest that proactive strategies for engaging CO adaptive control may be important for older adults with high-frequency hearing loss to optimize speech recognition in changing and challenging listening conditions.
Affiliation(s)
- Mark A Eckert
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC, 29425-5500, USA.
- Kenneth I Vaden
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC, 29425-5500, USA
- Jayne B Ahlstrom
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC, 29425-5500, USA
- Carolyn M McClaskey
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC, 29425-5500, USA
- Judy R Dubno
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC, 29425-5500, USA
11
Winn MB, Teece KH. Listening Effort Is Not the Same as Speech Intelligibility Score. Trends Hear 2021;25:23312165211027688. [DOI: 10.1177/23312165211027688]
Abstract
Listening effort is a valuable and important notion to measure because it is among the primary complaints of people with hearing loss. It is tempting and intuitive to accept speech intelligibility scores as a proxy for listening effort, but this link is likely oversimplified and lacks actionable explanatory power. This study was conducted to explain the mechanisms of listening effort that are not captured by intelligibility scores, using sentence-repetition tasks where specific kinds of mistakes were prospectively planned or analyzed retrospectively. Effort was measured as changes in pupil size in 20 listeners with normal hearing and 19 listeners with cochlear implants. Experiment 1 demonstrates that mental correction of misperceived words increases effort even when responses are correct. Experiment 2 shows that for incorrect responses, listening effort is not a function of the proportion of words correct but is rather driven by the types of errors, position of errors within a sentence, and the need to resolve ambiguity, reflecting how easily the listener can make sense of a perception. A simple taxonomy of error types is provided that is both intuitive and consistent with data from these two experiments. The diversity of errors in these experiments implies that speech perception tasks can be designed prospectively to elicit the mistakes that are more closely linked with effort. Although mental corrective action and number of mistakes can scale together in many experiments, it is possible to dissociate them to advance toward a more explanatory (rather than correlational) account of listening effort.
Affiliation(s)
- Matthew B. Winn
- University of Minnesota, Twin Cities, 164 Pillsbury Dr SE, Minneapolis, MN 55455, United States
12
Mechtenberg H, Xie X, Myers EB. Sentence predictability modulates cortical response to phonetic ambiguity. Brain Lang 2021;218:104959. [PMID: 33930722] [PMCID: PMC8513138] [DOI: 10.1016/j.bandl.2021.104959]
Abstract
Phonetic categories have undefined edges, such that individual tokens that belong to different speech sound categories may occupy the same region in acoustic space. In continuous speech, there are multiple sources of top-down information (e.g., lexical, semantic) that help to resolve the identity of an ambiguous phoneme. Of interest is how these top-down constraints interact with ambiguity at the phonetic level. In the current fMRI study, participants passively listened to sentences that varied in semantic predictability and in the amount of naturally-occurring phonetic competition. The left middle frontal gyrus, angular gyrus, and anterior inferior frontal gyrus were sensitive to both semantic predictability and the degree of phonetic competition. Notably, greater phonetic competition within non-predictive contexts resulted in a negatively-graded neural response. We suggest that uncertainty at the phonetic-acoustic level interacts with uncertainty at the semantic level, perhaps due to a failure of the network to construct a coherent meaning.
Affiliation(s)
- Hannah Mechtenberg
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, Mansfield, CT 06269, USA.
- Xin Xie
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA.
- Emily B Myers
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, Mansfield, CT 06269, USA; Department of Psychological Sciences, University of Connecticut, Storrs, Mansfield, CT 06269, USA.
13
Cohen N, Ben-Yakov A, Weber J, Edelson MG, Paz R, Dudai Y. Prestimulus Activity in the Cingulo-Opercular Network Predicts Memory for Naturalistic Episodic Experience. Cereb Cortex 2021;30:1902-1913. [PMID: 31740917] [DOI: 10.1093/cercor/bhz212]
Abstract
Human memory is strongly influenced by brain states occurring before an event, yet we know little about the underlying mechanisms. We found that activity in the cingulo-opercular network (including bilateral anterior insula [aI] and anterior prefrontal cortex [aPFC]) seconds before an event begins can predict whether this event will subsequently be remembered. We then tested how activity in the cingulo-opercular network shapes memory performance. Our findings indicate that prestimulus cingulo-opercular activity affects memory performance by opposingly modulating subsequent activity in two sets of regions previously linked to encoding and retrieval of episodic information. Specifically, higher prestimulus cingulo-opercular activity was associated with a subsequent increase in activity in temporal regions previously linked to encoding and with a subsequent reduction in activity within a set of regions thought to play a role in retrieval and self-referential processing. Together, these findings suggest that prestimulus attentional states modulate memory for real-life events by enhancing encoding and possibly by dampening interference from competing memory substrates.
Affiliation(s)
- Noga Cohen
- Department of Special Education and The Edmond J. Safra Brain Research Center for the Study of Learning Disabilities, University of Haifa, Haifa 3498838, Israel
- Aya Ben-Yakov
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, CB2 2EF, UK
- Jochen Weber
- Department of Psychology, Columbia University, New York, NY, 10027, USA
- Micah G Edelson
- Department of Economics, University of Zurich, Zürich, CH-8032, Switzerland
- Rony Paz
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, 76100, Israel
- Yadin Dudai
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, 76100, Israel
14
Choi HG, Hong SK, Lee HJ, Chang J. Acute Alcohol Intake Deteriorates Hearing Thresholds and Speech Perception in Noise. Audiol Neurootol 2020;26:218-225. [PMID: 33341812] [DOI: 10.1159/000510694]
Abstract
OBJECTIVES The hearing process involves complex peripheral and central auditory pathways and could be influenced by various situations or medications. To date, very little is known about the effects of alcohol on auditory performance. The purpose of the present study was to evaluate how acute alcohol administration affects various aspects of hearing performance in human subjects, from the auditory perceptive threshold to the speech-in-noise task, which is cognitively demanding. METHODS A total of 43 healthy volunteers were recruited, and each of the participants received calculated amounts of alcohol according to their body weight and sex with a targeted blood alcohol content level of 0.05% using the Widmark formula. Hearing was tested in alcohol-free conditions (no alcohol intake within the previous 24 h) and acute alcohol conditions. A test battery composed of pure-tone audiometry, speech reception threshold (SRT), word recognition score (WRS), distortion product otoacoustic emission (DPOAE), gaps-in-noise (GIN) test, and Korean matrix sentence test (testing speech perception in noise) was performed in the 2 conditions. RESULTS Acute alcohol intake elevated pure-tone hearing thresholds and SRT but did not affect WRS. Both otoacoustic emissions recorded with DPOAE and the temporal resolution measured with the GIN test were not influenced by alcohol intake. The hearing performance in a noisy environment in both easy (-2 dB signal-to-noise ratio [SNR]) and difficult (-8 dB SNR) conditions was decreased by alcohol. CONCLUSIONS Acute alcohol elevated auditory perceptive thresholds and affected performance in complex and difficult auditory tasks rather than simple tasks.
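The Widmark formula mentioned here relates ethanol dose to blood alcohol content as A = C · r · W. A sketch with textbook distribution factors follows; the study's exact dosing constants are not given in the abstract.

```python
def widmark_dose_g(weight_kg, sex, target_bac_percent=0.05):
    """Grams of ethanol needed to reach a target blood alcohol content
    via the classic Widmark formula A = C * r * W.

    C is the target BAC converted to g/kg (0.05% w/v ~= 0.5 g/kg,
    treating blood density as ~1), r is the Widmark distribution
    factor (textbook values: ~0.68 for men, ~0.55 for women), and W
    is body weight in kg. These constants are standard approximations,
    not necessarily the study's exact protocol."""
    r = 0.68 if sex == "male" else 0.55
    c_g_per_kg = target_bac_percent * 10      # 0.05 % w/v -> 0.5 g/kg
    return c_g_per_kg * r * weight_kg

dose = widmark_dose_g(weight_kg=70, sex="male")
print(f"ethanol dose: {dose:.1f} g "
      f"(~{dose / (0.789 * 0.4):.0f} mL of a 40% ABV spirit)")
```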
Affiliation(s)
- Hyo Geun Choi
- Department of Otorhinolaryngology-Head & Neck Surgery, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea
- Sung Kwang Hong
- Department of Otorhinolaryngology-Head & Neck Surgery, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea
- Hyo-Jeong Lee
- Department of Otorhinolaryngology-Head & Neck Surgery, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea
- Jiwon Chang
- Department of Otorhinolaryngology-Head & Neck Surgery, Hallym University College of Medicine, Chuncheon, Republic of Korea
15
Rysop AU, Schmitt LM, Obleser J, Hartwigsen G. Neural modelling of the semantic predictability gain under challenging listening conditions. Hum Brain Mapp 2020;42:110-127. [PMID: 32959939] [PMCID: PMC7721236] [DOI: 10.1002/hbm.25208]
Abstract
When speech intelligibility is reduced, listeners exploit constraints posed by semantic context to facilitate comprehension. The left angular gyrus (AG) has been argued to drive this semantic predictability gain. Taking a network perspective, we ask how the connectivity within language-specific and domain-general networks flexibly adapts to the predictability and intelligibility of speech. During continuous functional magnetic resonance imaging (fMRI), participants repeated sentences, which varied in semantic predictability of the final word and in acoustic intelligibility. At the neural level, highly predictable sentences led to stronger activation of left-hemispheric semantic regions including subregions of the AG (PGa, PGp) and posterior middle temporal gyrus when speech became more intelligible. The behavioural predictability gain of single participants mapped onto the same regions but was complemented by increased activity in frontal and medial regions. Effective connectivity from PGa to PGp increased for more intelligible sentences. In contrast, inhibitory influence from pre-supplementary motor area to left insula was strongest when predictability and intelligibility of sentences were either lowest or highest. This interactive effect was negatively correlated with the behavioural predictability gain. Together, these results suggest that successful comprehension in noisy listening conditions relies on an interplay of semantic regions and concurrent inhibition of cognitive control regions when semantic cues are available.
Affiliation(s)
- Anna Uta Rysop
- Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Lea-Maria Schmitt
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany
- Gesa Hartwigsen
- Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
16
Kawata NYS, Hashimoto T, Kawashima R. Neural mechanisms underlying concurrent listening of simultaneous speech. Brain Res 2020;1738:146821. [PMID: 32259518] [DOI: 10.1016/j.brainres.2020.146821]
Abstract
Can we identify what two people are saying at the same time? Although it is difficult to perfectly repeat two or more simultaneous messages, listeners can report information from both speakers. In a concurrent/divided listening task, enhanced attention and segregation of speech may be required rather than selection and suppression. However, the neural mechanisms of concurrent listening to multi-speaker speech have yet to be clarified. The present study utilized functional magnetic resonance imaging to examine the neural responses of healthy young adults listening to concurrent male and female speakers in an attempt to reveal the mechanism of concurrent listening. After practice and multiple trials testing concurrent listening, 31 participants achieved performance comparable with that of selective listening. Furthermore, compared to selective listening, concurrent listening induced greater activation in the anterior cingulate cortex, bilateral anterior insula, frontoparietal regions, and the periaqueductal gray region. In addition to the salience network for multi-speaker listening, attentional modulation and enhanced segregation of these signals could be used to achieve successful concurrent listening. These results indicate the presence of a potential mechanism by which one can listen to two voices with enhanced attention to saliency signals.
Affiliation(s)
- Natasha Yuriko Santos Kawata
- Department of Functional Brain Imaging, Institute of Development, Aging and Cancer (IDAC), Tohoku University, Japan
- Teruo Hashimoto
- Division of Developmental Cognitive Neuroscience, Institute of Development, Aging and Cancer (IDAC), Tohoku University, Japan.
- Ryuta Kawashima
- Department of Functional Brain Imaging, Institute of Development, Aging and Cancer (IDAC), Tohoku University, Japan; Division of Developmental Cognitive Neuroscience, Institute of Development, Aging and Cancer (IDAC), Tohoku University, Japan
17
Vaden KI, Eckert MA, Dubno JR, Harris KC. Cingulo-opercular adaptive control for younger and older adults during a challenging gap detection task. J Neurosci Res 2020;98:680-691. [PMID: 31385349] [PMCID: PMC7000297] [DOI: 10.1002/jnr.24506]
Abstract
Cingulo-opercular activity is hypothesized to reflect an adaptive control function that optimizes task performance through adjustments in attention and behavior, and outcome monitoring. While auditory perceptual task performance appears to benefit from elevated activity in cingulo-opercular regions of frontal cortex before stimuli are presented, this association appears reduced for older adults compared to younger adults. However, adaptive control function may be limited by difficult task conditions for older adults. An fMRI study was used to characterize adaptive control differences while 15 younger (average age = 24 years) and 15 older adults (average age = 68 years) performed a gap detection in noise task designed to limit age-related differences. During the fMRI study, participants listened to a noise recording and indicated with a button-press whether it contained a gap. Stimuli were presented between sparse fMRI scans (TR = 8.6 s) and BOLD measurements were collected during separate listening and behavioral response intervals. Age-related performance differences were limited by presenting gaps in noise with durations calibrated at or above each participant's detection threshold. Cingulo-opercular BOLD increased significantly throughout listening and behavioral response intervals, relative to a resting baseline. Correct behavioral responses were significantly more likely on trials with elevated pre-stimulus cingulo-opercular BOLD, consistent with an adaptive control framework. Cingulo-opercular adaptive control estimates appeared higher for participants with better gap sensitivity and lower response bias, irrespective of age, which suggests that this mechanism can benefit performance across the lifespan under conditions that limit age-related performance differences.
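Calibrating gap durations to each listener's detection threshold is commonly done with an adaptive track. The abstract does not describe the study's procedure, so the following is a generic 2-down/1-up staircase run against a simulated listener, with all parameters hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def two_down_one_up(start_ms=20.0, step=1.25, n_trials=60, true_threshold_ms=6.0):
    """2-down/1-up adaptive staircase for a gap-detection threshold:
    two consecutive correct responses shrink the gap (divide by `step`),
    one error grows it. This rule converges near the 70.7%-correct point.
    The simulated listener and all parameter values are hypothetical."""
    gap, correct_run, track = start_ms, 0, []
    for _ in range(n_trials):
        # Simulated listener: detection probability rises with gap duration.
        p = 1 / (1 + np.exp(-(gap - true_threshold_ms)))
        if rng.random() < p:
            correct_run += 1
            if correct_run == 2:
                gap, correct_run = gap / step, 0
        else:
            gap, correct_run = gap * step, 0
        track.append(gap)
    return np.mean(track[-20:])   # mean of late trials estimates the threshold

print(f"estimated gap threshold: {two_down_one_up():.1f} ms")
```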
Affiliation(s)
- Kenneth I Vaden
- Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
- Mark A Eckert
- Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
- Judy R Dubno
- Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
- Kelly C Harris
- Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
18
Francis AL, Love J. Listening effort: Are we measuring cognition or affect, or both? Wiley Interdiscip Rev Cogn Sci 2019;11:e1514. [PMID: 31381275] [DOI: 10.1002/wcs.1514]
Abstract
Listening effort is increasingly recognized as a factor in communication, particularly for and with nonnative speakers, for the elderly, for individuals with hearing impairment and/or for those working in noise. However, as highlighted by McGarrigle et al., International Journal of Audiology, 2014, 53, 433-445, the term "listening effort" encompasses a wide variety of concepts, including the engagement and control of multiple possibly distinct neural systems for information processing, and the affective response to the expenditure of those resources in a given context. Thus, experimental or clinical methods intended to objectively quantify listening effort may ultimately reflect a complex interaction between the operations of one or more of those information processing systems, and/or the affective and motivational response to the demand on those systems. Here we examine theoretical, behavioral, and psychophysiological factors related to resolving the question of what we are measuring, and why, when we measure "listening effort." This article is categorized under: Linguistics > Language in Mind and Brain; Psychology > Theory and Methods; Psychology > Attention; Psychology > Emotion and Motivation.
Affiliation(s)
- Alexander L Francis
- Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, Indiana
- Jordan Love
- Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, Indiana
19
Peelle JE. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior. Ear Hear 2019;39:204-214. [PMID: 28938250] [PMCID: PMC5821557] [DOI: 10.1097/aud.0000000000000494]
Abstract
Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners' abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication.
Affiliation(s)
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in Saint Louis, Saint Louis, Missouri, USA
20
Gennari SP, Millman RE, Hymers M, Mattys SL. Anterior paracingulate and cingulate cortex mediates the effects of cognitive load on speech sound discrimination. Neuroimage 2018;178:735-743. [DOI: 10.1016/j.neuroimage.2018.06.035]
21
Differences in Hearing Acuity among "Normal-Hearing" Young Adults Modulate the Neural Basis for Speech Comprehension. eNeuro 2018;5:ENEURO.0263-17.2018. [PMID: 29911176] [PMCID: PMC6001266] [DOI: 10.1523/eneuro.0263-17.2018]
Abstract
In this paper, we investigate how subtle differences in hearing acuity affect the neural systems supporting speech processing in young adults. Auditory sentence comprehension requires perceiving a complex acoustic signal and performing linguistic operations to extract the correct meaning. We used functional MRI to monitor human brain activity while adults aged 18–41 years listened to spoken sentences. The sentences varied in their level of syntactic processing demands, containing either a subject-relative or object-relative center-embedded clause. All participants self-reported normal hearing, confirmed by audiometric testing, with some variation within a clinically normal range. We found that participants showed activity related to sentence processing in a left-lateralized frontotemporal network. Although accuracy was generally high, participants still made some errors, which were associated with increased activity in bilateral cingulo-opercular and frontoparietal attention networks. A whole-brain regression analysis revealed that activity in a right anterior middle frontal gyrus (aMFG) component of the frontoparietal attention network was related to individual differences in hearing acuity, such that listeners with poorer hearing showed greater recruitment of this region when successfully understanding a sentence. The activity in the right aMFG for listeners with poor hearing did not differ as a function of sentence type, suggesting a general mechanism that is independent of linguistic processing demands. Our results suggest that even modest variations in hearing ability impact the systems supporting auditory speech comprehension, and that auditory sentence comprehension entails the coordination of a left perisylvian network that is sensitive to linguistic variation with an executive attention network that responds to acoustic challenge.
22
Koeritzer MA, Rogers CS, Van Engen KJ, Peelle JE. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences. J Speech Lang Hear Res 2018;61:740-751. [PMID: 29450493] [PMCID: PMC5963044] [DOI: 10.1044/2017_jslhr-h-17-0077]
Abstract
PURPOSE The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. METHOD We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously. RESULTS Recognition memory (indexed by d') was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise. CONCLUSIONS Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences. SUPPLEMENTAL MATERIALS https://doi.org/10.23641/asha.5848059.
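Recognition memory here is indexed by d', computed from hit and false-alarm rates on the old/new judgments. A minimal sketch with hypothetical counts, using a standard correction so extreme rates do not produce infinite z-scores:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Recognition sensitivity d' = z(hit rate) - z(false-alarm rate),
    with the log-linear correction (add 0.5 to each count, 1 to each
    total) so rates of 0 or 1 stay finite. Counts are hypothetical."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g., 40 old and 40 new sentences in one listening condition
print(f"d' = {d_prime(hits=32, misses=8, false_alarms=10, correct_rejections=30):.2f}")
```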
Affiliation(s)
- Margaret A Koeritzer
- Program in Audiology and Communication Sciences, Washington University in St. Louis, MO
- Chad S Rogers
- Department of Otolaryngology, Washington University in St. Louis, MO
- Kristin J Van Engen
- Department of Psychological and Brain Sciences and Program in Linguistics, Washington University in St. Louis, MO
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, MO
23
Godwin D, Ji A, Kandala S, Mamah D. Functional Connectivity of Cognitive Brain Networks in Schizophrenia during a Working Memory Task. Front Psychiatry 2017;8:294. [PMID: 29312020] [PMCID: PMC5743938] [DOI: 10.3389/fpsyt.2017.00294]
Abstract
Task-based connectivity studies facilitate the understanding of how the brain functions during cognition, which is commonly impaired in schizophrenia (SZ). Our aim was to investigate functional connectivity during a working memory task in SZ. We hypothesized that the task-negative (default mode) network and the cognitive control (frontoparietal) network would show dysconnectivity. Twenty-five SZ patient and 31 healthy control scans were collected using the customized 3T Siemens Skyra MRI scanner previously used to collect data for the Human Connectome Project. Blood oxygen level-dependent signal during the 0-back and 2-back conditions was extracted within a network-based parcellation scheme. Average functional connectivity was assessed within five brain networks: frontoparietal (FPN), default mode (DMN), cingulo-opercular (CON), dorsal attention (DAN), and ventral attention network, as well as between the DMN or FPN and other networks. For within-FPN connectivity, there was a significant interaction between n-back condition and group (p = 0.015), with decreased connectivity at 0-back in SZ subjects compared to controls. FPN-to-DMN connectivity also showed a significant condition × group effect (p = 0.003), with decreased connectivity at 0-back in SZ. Across groups, connectivity within the CON and DAN was increased during the 2-back condition, while DMN connectivity with either CON or DAN was decreased during the 2-back condition. Our findings support the role of the FPN, CON, and DAN in working memory and indicate that the pattern of FPN functional connectivity differs between SZ patients and control subjects during the course of a working memory task.
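Average within- and between-network functional connectivity of the kind computed here is typically the mean pairwise Pearson correlation of regional BOLD time series. A minimal sketch with random data and hypothetical region labels:

```python
import numpy as np

def network_connectivity(ts, labels, net_a, net_b=None):
    """Mean pairwise Pearson correlation of BOLD time series
    (regions x time) within one network, or between two networks.
    Region labels and data are hypothetical."""
    r = np.corrcoef(ts)                        # region-by-region correlations
    a = np.flatnonzero(labels == net_a)
    if net_b is None:                          # within-network: upper triangle
        pairs = r[np.ix_(a, a)][np.triu_indices(len(a), k=1)]
    else:                                      # between-network: all cross pairs
        b = np.flatnonzero(labels == net_b)
        pairs = r[np.ix_(a, b)].ravel()
    return pairs.mean()

rng = np.random.default_rng(2)
ts = rng.standard_normal((8, 200))             # 8 regions, 200 volumes
labels = np.array(["FPN"] * 4 + ["DMN"] * 4)
print("within-FPN:", round(network_connectivity(ts, labels, "FPN"), 3))
print("FPN-DMN:", round(network_connectivity(ts, labels, "FPN", "DMN"), 3))
```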
Affiliation(s)
- Douglass Godwin
- Department of Psychiatry, Washington University School of Medicine, St. Louis, MO, United States
- Andrew Ji
- Department of Psychiatry, Washington University School of Medicine, St. Louis, MO, United States
- Sridhar Kandala
- Department of Psychiatry, Washington University School of Medicine, St. Louis, MO, United States
- Daniel Mamah
- Department of Psychiatry, Washington University School of Medicine, St. Louis, MO, United States