1. Rysop AU, Williams KA, Schmitt LM, Meinzer M, Obleser J, Hartwigsen G. Aging modulates large-scale neural network interactions during speech comprehension. Neurobiol Aging 2025; 150:109-121. PMID: 40088622. DOI: 10.1016/j.neurobiolaging.2025.02.005.
Abstract
Speech comprehension in noisy environments constitutes a critical challenge in everyday life and affects people of all ages. This challenging listening situation can be alleviated by using semantic context to predict upcoming words (i.e., the predictability gain), a process associated with the domain-specific semantic network. When no such context can be used, speech comprehension in challenging listening conditions relies on cognitive control functions, underpinned by domain-general networks. Most previous studies focused on regional activity of pre-selected cortical regions or networks in healthy young listeners. Thus, it remains unclear how domain-specific and domain-general networks interact during speech comprehension in noise and how this may change across the lifespan. Here, we used correlational psychophysiological interaction (cPPI) to investigate functional network interactions during sentence comprehension under noisy conditions with varying predictability in healthy young and older listeners. Relative to young listeners, older adults showed increased task-related activity in several domain-general networks but reduced between-network connectivity. Across groups, higher predictability was associated with increased positive coupling between semantic and attention networks and increased negative coupling between semantic and control networks. These results highlight the complex interplay between the semantic network and several domain-general networks underlying the predictability gain. The observed differences in connectivity profiles with age inform the current debate on whether age-related changes in neural activity and functional connectivity reflect compensation or dedifferentiation.
Affiliation(s)
- Anna Uta Rysop
- Department of Neurology, University Medicine Greifswald, Greifswald, Germany; Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, Leipzig 04103, Germany.
- Kathleen Anne Williams
- Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, Leipzig 04103, Germany; Wilhelm Wundt Institute for Psychology, Leipzig University, Germany
- Lea-Maria Schmitt
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Kapittelweg 29, Nijmegen 6525 EN, the Netherlands
- Marcus Meinzer
- Department of Neurology, University Medicine Greifswald, Greifswald, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, Ratzeburger Allee 160, Lübeck 23562, Germany; Center of Brain, Behavior and Metabolism, University of Lübeck, Ratzeburger Allee 160, Lübeck 23562, Germany
- Gesa Hartwigsen
- Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, Leipzig 04103, Germany; Wilhelm Wundt Institute for Psychology, Leipzig University, Germany.
2. Shim L, Kim J, Kim G, Lee HJ. Ear-specific neuroplasticity for sound localization in individuals with single-sided deafness. Hear Res 2025; 459:109207. PMID: 39933256. DOI: 10.1016/j.heares.2025.109207.
Abstract
Studies on cortical plasticity in individuals with single-sided deafness (SSD) show increased activity in the auditory cortex ipsilateral to the hearing ear, impacting auditory localization and rehabilitation outcomes. However, the direct relationship between neuroplastic changes and binaural processing in SSD remains unclear, as does the specificity of plasticity to the affected ear. In this study, two groups of SSD patients with postlingual onset (left [Lt] SSD, 17; right [Rt] SSD, 18) and 13 normal-hearing controls (NC) underwent fMRI during an auditory localization task. The NC group was also tested with an earplug to simulate acute monaural hearing. We compared the cortical networks involved in auditory localization and conducted correlation analyses to identify neural activity associated with SSD duration and localization performance. The response laterality in the auditory cortex was analyzed and compared across groups. Results indicated that extended SSD modulates the auditory cortical response in the right primary auditory cortex. The posterior superior temporal gyrus and cingulo-opercular network were linked to improved localization performance. Findings suggest that cortical attentional resources are crucial for auditory spatial behavior in SSD, especially when the left ear is impaired.
Affiliation(s)
- Leeseul Shim
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Gyeonggi-do 14068, Republic of Korea; Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Sacred Heart Hospital, Anyang-si, Gyeonggi-do, Republic of Korea
- Jahee Kim
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon-si, Gangwon-do 24252, Republic of Korea
- Gibbeum Kim
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Gyeonggi-do 14068, Republic of Korea
- Hyo-Jeong Lee
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Gyeonggi-do 14068, Republic of Korea; Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Sacred Heart Hospital, Anyang-si, Gyeonggi-do, Republic of Korea; Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon-si, Gangwon-do 24252, Republic of Korea.
3. Lebiecka-Johansen P, Zekveld AA, Wendt D, Koelewijn T, Muhammad AI, Kramer SE. Classification of Hearing Status Based on Pupil Measures During Sentence Perception. J Speech Lang Hear Res 2025; 68:1188-1208. PMID: 39951463. DOI: 10.1044/2024_jslhr-24-00005.
Abstract
PURPOSE Speech understanding in noise can be effortful, especially for people with hearing impairment. To compensate for reduced acuity, hearing-impaired (HI) listeners may allocate listening effort differently than normal-hearing (NH) peers. We expected that this might influence measures derived from the pupil dilation response. To investigate this in more detail, we assessed the sensitivity of pupil measures to hearing-related changes in effort allocation. We used a machine learning-based classification framework capable of combining and ranking measures to examine hearing-related, stimulus-related (signal-to-noise ratio [SNR]), and task response-related changes in pupil measures. METHOD Pupil data from 32 NH (40-70 years old, M = 51.3 years, six males) and 32 HI (31-76 years old, M = 59 years, 13 males) listeners were recorded during an adaptive speech reception threshold test. Peak pupil dilation (PPD), mean pupil dilation (MPD), principal pupil components (rotated principal components [RPCs]), and baseline pupil size (BPS) were calculated. As a precondition for ranking pupil measures, the ability to classify hearing status (NH/HI), SNR (high/low), and task response (correct/incorrect) above random prediction level was assessed. This precondition was met when classifying hearing status in subsets of data with varying SNR and task response, SNR in the NH group, and task response in the HI group. RESULTS A combination of pupil measures was necessary to classify the dependent factors. Hearing status, SNR, and task response were predicted primarily by the established measures PPD (maximum effort), RPC2 (speech processing), and BPS (task anticipation), and by the novel measures RPC1 (listening) and RPC3 (response preparation) in tasks involving SNR as an outcome or, in some cases, as a difficulty criterion.
CONCLUSIONS A machine learning-based classification framework can assess sensitivity of, and rank the importance of, pupil measures in relation to three effort modulators (factors) during speech perception in noise. This indicates that the effects of these factors on the pupil measures allow for reasonable classification performance. Moreover, the varying contributions of each measure to the classification models suggest they are not equally affected by these factors. Thus, this study enhances our understanding of pupil responses and their sensitivity to relevant factors. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.28225199.
Affiliation(s)
- Patrycja Lebiecka-Johansen
- Department of Otolaryngology/Head & Neck Surgery, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam Public Health Research Institute, the Netherlands
- Eriksholm Research Centre, Snekkersten, Denmark
- Adriana A Zekveld
- Department of Otolaryngology/Head & Neck Surgery, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam Public Health Research Institute, the Netherlands
- Dorothea Wendt
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, Kongens Lyngby
- Thomas Koelewijn
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, the Netherlands
- Research School of Behavioral and Cognitive Neuroscience, Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Afaan I Muhammad
- Department of Otolaryngology/Head & Neck Surgery, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam Public Health Research Institute, the Netherlands
- Sophia E Kramer
- Department of Otolaryngology/Head & Neck Surgery, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam Public Health Research Institute, the Netherlands
4. Hsin CH, Lee CY, Tsao Y. Exploring N400 Predictability Effects During Sustained Speech Comprehension: From Listening-Related Fatigue to Speech Enhancement Evaluation. Ear Hear 2025:00003446-990000000-00401. PMID: 39967000. DOI: 10.1097/aud.0000000000001635.
Abstract
OBJECTIVES This study investigated the predictability effect on the N400 as an objective measure of listening-related fatigue during speech comprehension by: (1) examining how its characteristics (amplitude, latency, and topographic distribution) changed over time under clear versus noisy conditions to assess its utility as a marker for listening-related fatigue, and (2) evaluating whether these N400 parameters could assess the effectiveness of speech enhancement (SE) systems. DESIGN Two event-related potential experiments were conducted on 140 young adults (aged 20 to 30) assigned to four age-matched groups. Using a between-subjects design for listening conditions, participants comprehended spoken sentences ending in high- or low-predictability words while their brain activity was recorded using electroencephalography. Experiment 1 compared the predictability effect on the N400 in clear and noise-masked conditions, while Experiment 2 examined this effect under two enhanced conditions (denoised using the transformer- and minimum mean square error-based SE models). Electroencephalography data were divided into two blocks to analyze the changes in the predictability effect on the N400 over time, including amplitude, latency, and topographic distributions. RESULTS Experiment 1 compared N400 effects across blocks under different clarity conditions. Clear speech in block 2 elicited a more anteriorly distributed N400 effect without reduction or delay compared with block 1. Noisy speech in block 2 showed a reduced, delayed, and posteriorly distributed effect compared with block 1. Experiment 2 examined N400 effects during enhanced speech processing. Transformer-enhanced speech in block 1 demonstrated significantly increased N400 effect amplitude compared to noisy speech. However, both enhancement methods showed delayed N400 effects in block 2.
CONCLUSIONS This study suggests that temporal changes in the N400 predictability effect might serve as objective markers of sustained speech processing under different clarity conditions. During clear speech comprehension, listeners appear to maintain efficient semantic processing through additional resource recruitment over time, while noisy speech leads to reduced processing efficiency. When applied to enhanced speech, these N400 patterns reveal both the immediate benefits of SE for semantic processing and potential limitations in supporting sustained listening. These findings demonstrate the potential utility of the N400 predictability effect for understanding sustained listening demands and evaluating SE effectiveness.
Affiliation(s)
- Cheng-Hung Hsin
- Biomedical Acoustic Signal Processing Lab, Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
- Chia-Ying Lee
- Brain and Language Laboratory, Institute of Linguistics, Academia Sinica, Taipei, Taiwan
- Institute of Cognitive Neuroscience, National Central University, Taoyuan, Taiwan
- Research Center for Mind, Brain, and Learning, National Chengchi University, Taipei, Taiwan
- Yu Tsao
- Biomedical Acoustic Signal Processing Lab, Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
- Department of Electrical Engineering, Chung Yuan Christian University, Taoyuan, Taiwan
5. Levy O, Korisky A, Zvilichovsky Y, Zion Golumbic E. The Neurophysiological Costs of Learning in a Noisy Classroom: An Ecological Virtual Reality Study. J Cogn Neurosci 2025; 37:300-316. PMID: 39348110. DOI: 10.1162/jocn_a_02249.
Abstract
Many real-life situations can be extremely noisy, which makes it difficult to understand what people say. Here, we introduce a novel audiovisual virtual reality experimental platform to study the behavioral and neurophysiological consequences of background noise on processing continuous speech in highly realistic environments. We focus on a context where the ability to understand speech is particularly important: the classroom. Participants (n = 32) experienced sitting in a virtual reality classroom and were told to pay attention to a virtual teacher giving a lecture. Trials were either quiet or contained background construction noise, emitted from outside the classroom window. Two realistic types of noise were used: continuous drilling and intermittent air hammers. Alongside behavioral outcomes, we measured several neurophysiological metrics, including neural activity (EEG), eye-gaze and skin conductance (galvanic skin response). Our results confirm the detrimental effect of background noise. Construction noise, and particularly intermittent noise, was associated with reduced behavioral performance, reduced neural tracking of the teacher's speech and an increase in skin conductance, although it did not have a significant effect on alpha-band oscillations or eye-gaze patterns. These results demonstrate the neurophysiological costs of learning in noisy environments and emphasize the role of temporal dynamics in speech-in-noise perception. The finding that intermittent noise was more disruptive than continuous noise supports a "habituation" rather than "glimpsing" hypothesis of speech-in-noise processing. These results also underscore the importance of increasing the ecological relevance of neuroscientific research and considering acoustic, temporal, and semantic features of realistic stimuli as well as the cognitive demands of real-life environments.
Collapse
6. O'Leary RM, Amichetti NM, Brown Z, Kinney AJ, Wingfield A. Congruent Prosody Reduces Cognitive Effort in Memory for Spoken Sentences: A Pupillometric Study with Young and Older Adults. Exp Aging Res 2025; 51:35-58. PMID: 38061985. DOI: 10.1080/0361073x.2023.2286872.
Abstract
BACKGROUND In spite of declines in working memory and other processes, older adults generally maintain good ability to understand and remember spoken sentences. In part this is due to preserved knowledge of linguistic rules and their implementation. Largely overlooked, however, is the support older adults may gain from the presence of sentence prosody (pitch contour, lexical stress, intra- and inter-word timing) as an aid to detecting the structure of a heard sentence. METHODS Twenty-four young and 24 older adults recalled recorded sentences in which the sentence prosody corresponded to the clausal structure of the sentence, when the prosody was in conflict with this structure, or when there was reduced prosody uninformative with regard to the clausal structure. Pupil size was concurrently recorded as a measure of processing effort. RESULTS Both young and older adults' recall accuracy was superior for sentences heard with supportive prosody than for sentences with uninformative prosody or for sentences in which the prosodic marking and clausal structure were in conflict. The measurement of pupil dilation suggested that the task was generally more effortful for the older adults, but with both groups showing a similar pattern of effort-reducing effects of supportive prosody. CONCLUSIONS Results demonstrate the influence of prosody on young and older adults' ability to accurately recall multi-clause sentences, and the significant role supportive prosody may play in reducing processing effort.
Affiliation(s)
- Ryan M O'Leary
- Department of Psychology and Volen National Center for Complex System, Brandeis University, Waltham, USA
- Nicole M Amichetti
- Department of Psychology and Volen National Center for Complex System, Brandeis University, Waltham, USA
- Zoe Brown
- Department of Psychology and Volen National Center for Complex System, Brandeis University, Waltham, USA
- Alexander J Kinney
- Department of Psychology and Volen National Center for Complex System, Brandeis University, Waltham, USA
- Arthur Wingfield
- Department of Psychology and Volen National Center for Complex System, Brandeis University, Waltham, USA
7. Kemper M, Denk F, Husstedt H, Obleser J. Acoustically Transparent Hearing Aids Increase Physiological Markers of Listening Effort. Trends Hear 2025; 29:23312165251333225. PMID: 40179130. PMCID: PMC11970058. DOI: 10.1177/23312165251333225.
Abstract
While hearing aids are beneficial in compensating for hearing loss and suppressing ambient noise, they may also introduce an unwanted processing burden to the listener's sensory and cognitive system. To investigate such adverse side effects, hearing aids may be set to a 'transparent mode', aiming to replicate natural hearing through the open ear as closely as possible. Such transparent hearing aids have previously been demonstrated to exhibit a small but significant disadvantage in speech intelligibility, with less conclusive effects on self-rated listening effort. Here we aimed to reproduce these findings and expand them with neurophysiological measures of invested listening effort, including parietal alpha power and pupil size. Invested listening effort was measured across five task difficulties, ranging from nearly impossible to easy, with normal-hearing participants in both aided and unaided conditions. The results reproduced the hearing aid disadvantage for speech intelligibility and subjective listening effort ratings. As expected, pupil size and parietal alpha power followed an inverted U shape, peaking at moderate task difficulties (around SRT50). However, the transparent hearing aid increased pupil size and parietal alpha power at medium task demand (between SRT20 and SRT80). These neurophysiological effects were larger than the corresponding effects on speech intelligibility and subjective listening effort. Lending these results plausibility, individual pupil size and individual parietal alpha power were substantially associated. In sum, our findings suggest that key neurophysiological measures of invested listening effort are sensitive to the additional individual burden that hearing aid processing can place on speech intelligibility.
Affiliation(s)
- Markus Kemper
- German Institute of Hearing Aids, Lübeck, Germany
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Center of Brain, Behavior, and Metabolism, University of Lübeck, Lübeck, Germany
- Florian Denk
- German Institute of Hearing Aids, Lübeck, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Center of Brain, Behavior, and Metabolism, University of Lübeck, Lübeck, Germany
8. Eckert MA, Matthews LJ, Vaden KI, Dubno JR. Executive Function Associations With Audibility-Adjusted Speech Perception in Noise. J Speech Lang Hear Res 2024; 67:4811-4828. PMID: 39475684. DOI: 10.1044/2024_jslhr-24-00333.
Abstract
PURPOSE Speech recognition in noise is challenging for listeners and appears to require support from executive functions to focus attention on rapidly unfolding target speech, track misunderstanding, and sustain attention. The current study was designed to test the hypothesis that lower executive function abilities explain poorer speech recognition in noise, including among older participants with hearing loss who often exhibit diminished speech recognition in noise and cognitive abilities. METHOD A cross-sectional sample of 400 younger-to-older adult participants (19 to < 90 years of age) from the community-based Medical University of South Carolina Longitudinal Cohort Study of Age-related Hearing Loss were administered tasks with executive control demands to assess individual variability in a card-sorting measure of set-shifting/performance monitoring, a dichotic listening measure of selective attention/working memory, sustained attention, and processing speed. Key word recognition in the high- and low-context speech perception-in-noise (SPIN) tests provided measures of speech recognition in noise. The SPIN scores were adjusted for audibility using the Articulation Index to characterize the impact of varied hearing sensitivity unrelated to reduced audibility on cognitive and speech recognition associations. RESULTS Set-shifting, dichotic listening, and processing speed each explained unique and significant variance in audibility-adjusted, low-context SPIN scores (ps < .001), including after controlling for age, pure-tone threshold average (PTA), sex, and education level. The dichotic listening and processing speed effect sizes were significantly diminished when controlling for PTA, indicating that participants with poorer hearing sensitivity were also likely to have lower executive function and lower audibility-adjusted speech recognition.
CONCLUSIONS Poor set-shifting/performance monitoring, slow processing speed, and poor selective attention/working memory appeared to partially explain difficulties with speech recognition in noise after accounting for audibility. These results are consistent with the premise that distinct executive functions support speech recognition in noise.
Affiliation(s)
- Mark A Eckert
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston
- Department of Otolaryngology-Head and Neck Surgery, Columbia University, New York, NY
- Lois J Matthews
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston
- Kenneth I Vaden
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston
- Judy R Dubno
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston
9. Zekveld AA, Kramer SE, Heslenfeld DJ, Versfeld NJ, Vriend C. Hearing Impairment: Reduced Pupil Dilation Response and Frontal Activation During Degraded Speech Perception. J Speech Lang Hear Res 2024; 67:4549-4566. PMID: 39392910. DOI: 10.1044/2024_jslhr-24-00017.
Abstract
PURPOSE A relevant aspect of listening is the effort required during speech processing, which can be assessed by pupillometry. Here, we assessed the pupil dilation response of normal-hearing (NH) and hard of hearing (HH) individuals during listening to clear sentences and masked or degraded sentences. We combined this assessment with functional magnetic resonance imaging (fMRI) to investigate the neural correlates of the pupil dilation response. METHOD Seventeen NH participants (Mage = 46 years) were compared to 17 HH participants (Mage = 45 years) who were individually matched in age and educational level. Participants repeated sentences that were presented clearly, that were distorted, or that were masked. The sentence intelligibility level of masked and distorted sentences was 50% correct. Silent baseline trials were presented as well. Performance measures, pupil dilation responses, and fMRI data were acquired. RESULTS HH individuals had overall poorer speech reception than the NH participants, but not for noise-vocoded speech. In addition, an interaction effect was observed with smaller pupil dilation responses in HH than in NH listeners for the degraded speech conditions. Hearing impairment was associated with higher activation across conditions in the left superior temporal gyrus, as compared to the silent baseline. However, the region of interest analysis indicated lower activation during degraded speech relative to clear speech in bilateral frontal regions and the insular cortex, for HH compared to NH listeners. Hearing impairment was also associated with a weaker relation between the pupil response and activation in the right inferior frontal gyrus. Overall, degraded speech evoked higher frontal activation than clear speech. CONCLUSION Brain areas associated with attentional and cognitive-control processes may be increasingly recruited when speech is degraded and are related to the pupil dilation response, but this relationship is weaker in HH listeners. 
SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.27162135.
Affiliation(s)
- Adriana A Zekveld
- Otolaryngology-Head and Neck Surgery, Amsterdam UMC location Vrije Universiteit Amsterdam, the Netherlands
- Amsterdam Public Health Research Institute, the Netherlands
- Institute of Psychology, Leiden University, the Netherlands
- Sophia E Kramer
- Otolaryngology-Head and Neck Surgery, Amsterdam UMC location Vrije Universiteit Amsterdam, the Netherlands
- Amsterdam Public Health Research Institute, the Netherlands
- Dirk J Heslenfeld
- Faculty of Behavioural and Movement Sciences, Experimental and Applied Psychology, VU University, Amsterdam, the Netherlands
- Niek J Versfeld
- Otolaryngology-Head and Neck Surgery, Amsterdam UMC location Vrije Universiteit Amsterdam, the Netherlands
- Amsterdam Public Health Research Institute, the Netherlands
- Chris Vriend
- Department of Psychiatry and Department of Anatomy and Neuroscience, Amsterdam UMC, Vrije Universiteit Amsterdam, the Netherlands
- Brain Imaging, Amsterdam Neuroscience, the Netherlands
10. Mertes IB. Associations between the medial olivocochlear reflex, middle-ear muscle reflex, and sentence-in-noise recognition using steady and pulsed noise elicitors. Hear Res 2024; 453:109108. PMID: 39244840. DOI: 10.1016/j.heares.2024.109108.
Abstract
The middle-ear muscle reflex (MEMR) and medial olivocochlear reflex (MOCR) modify peripheral auditory function, which may reduce masking and improve speech-in-noise (SIN) recognition. Previous work and our pilot data suggest that the two reflexes respond differently to static versus dynamic noise elicitors. However, little is known about how the two reflexes work in tandem to contribute to SIN recognition. We hypothesized that SIN recognition would be significantly correlated with the strength of the MEMR and with the strength of the MOCR. Additionally, we hypothesized that SIN recognition would be best when both reflexes were activated. A total of 43 healthy, normal-hearing adults met the inclusion/exclusion criteria (35 females, age range: 19-29 years). MEMR strength was assessed using wideband absorbance. MOCR strength was assessed using transient-evoked otoacoustic emissions. SIN recognition was assessed using a modified version of the QuickSIN. All measurements were made with and without two types of contralateral noise elicitors (steady and pulsed) at two levels (50 and 65 dB SPL). Steady noise was used primarily to elicit the MOCR and pulsed noise was used to elicit both reflexes. Two baseline conditions without a contralateral elicitor were also obtained. Results revealed differences in how the MEMR and MOCR responded to elicitor type and level. Contrary to hypotheses, SIN recognition was not significantly improved in the presence of any contralateral elicitors relative to the baseline conditions. Additionally, there were no significant correlations between MEMR strength and SIN recognition, or between MOCR strength and SIN recognition. MEMR and MOCR strength were significantly correlated for pulsed noise elicitors but not steady noise elicitors. Results suggest no association between SIN recognition and the MEMR or MOCR, at least as measured and analyzed in this study. SIN recognition may have been influenced by factors not accounted for in this study, such as contextual cues, warranting further study.
Affiliation(s)
- Ian B Mertes
- Department of Speech and Hearing Science, 901 South Sixth Street, University of Illinois Urbana-Champaign, Champaign, IL 61820, USA.
11
Brisson V, Tremblay P. Assessing the Impact of Transcranial Magnetic Stimulation on Speech Perception in Noise. J Cogn Neurosci 2024; 36:2184-2207. [PMID: 39023366 DOI: 10.1162/jocn_a_02224] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/20/2024]
Abstract
Healthy aging is associated with reduced speech perception in noise (SPiN) abilities. The etiology of these difficulties remains elusive, which prevents the development of new strategies to optimize the speech processing network and reduce these difficulties. The objective of this study was to determine if sublexical SPiN performance can be enhanced by applying TMS to three regions involved in processing speech: the left posterior superior temporal sulcus, the left superior temporal gyrus, and the left ventral premotor cortex. The second objective was to assess the impact of several factors (age, baseline performance, target, brain structure, and activity) on post-TMS SPiN improvement. The results revealed that participants with lower baseline performance were more likely to improve. Moreover, in older adults, cortical thickness within the target areas was negatively associated with performance improvement, whereas this association was null in younger individuals. No differences between the targets were found. This study suggests that TMS can modulate sublexical SPiN performance, but that the strength and direction of the effects depend on a complex combination of contextual and individual factors.
Affiliation(s)
- Valérie Brisson
- Université Laval, School of Rehabilitation Sciences, Québec, Canada
- Centre de recherche CERVO, Québec, Canada
- Pascale Tremblay
- Université Laval, School of Rehabilitation Sciences, Québec, Canada
- Centre de recherche CERVO, Québec, Canada
12
He J, Frances C, Creemers A, Brehm L. Effects of irrelevant unintelligible and intelligible background speech on spoken language production. Q J Exp Psychol (Hove) 2024; 77:1745-1769. [PMID: 38044368 PMCID: PMC11295403 DOI: 10.1177/17470218231219971] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2023] [Revised: 09/04/2023] [Accepted: 10/03/2023] [Indexed: 12/05/2023]
Abstract
Earlier work has explored spoken word production during irrelevant background speech such as intelligible and unintelligible word lists. The present study compared how different types of irrelevant background speech (word lists vs. sentences) influenced spoken word production relative to a quiet control condition, and whether the influence depended on the intelligibility of the background speech. Experiment 1 presented native Dutch speakers with Chinese word lists and sentences. Experiment 2 presented a similar group with Dutch word lists and sentences. In both experiments, the lexical selection demands in speech production were manipulated by varying name agreement (high vs. low) of the to-be-named pictures. Results showed that background speech, regardless of its intelligibility, disrupted spoken word production relative to a quiet condition, but no effects of word lists versus sentences in either language were found. Moreover, the disruption by intelligible background speech compared with the quiet condition was eliminated when planning low name agreement pictures. These findings suggest that any speech, even unintelligible speech, interferes with production, which implies that the disruption of spoken word production is mainly phonological in nature. The disruption by intelligible background speech can be reduced or eliminated via top-down attentional engagement.
Affiliation(s)
- Jieying He
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- International Max Planck Research School for Language Sciences, Nijmegen, The Netherlands
- Candice Frances
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Ava Creemers
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Laurel Brehm
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Department of Linguistics, University of California, Santa Barbara, Santa Barbara, CA, USA
13
Alavash M, Obleser J. Brain Network Interconnectivity Dynamics Explain Metacognitive Differences in Listening Behavior. J Neurosci 2024; 44:e2322232024. [PMID: 38839303 PMCID: PMC11293451 DOI: 10.1523/jneurosci.2322-23.2024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2023] [Revised: 04/29/2024] [Accepted: 05/01/2024] [Indexed: 06/07/2024] Open
Abstract
Complex auditory scenes pose a challenge to attentive listening, rendering listeners slower and more uncertain in their perceptual decisions. How can we explain such behaviors from the dynamics of cortical networks that pertain to the control of listening behavior? We here follow up on the hypothesis that human adaptive perception in challenging listening situations is supported by modular reconfiguration of auditory-control networks in a sample of N = 40 participants (13 males) who underwent resting-state and task functional magnetic resonance imaging (fMRI). Individual titration of a spatial selective auditory attention task maintained an average accuracy of ∼70% but yielded considerable interindividual differences in listeners' response speed and reported confidence in their own perceptual decisions. Whole-brain network modularity increased from rest to task by reconfiguring auditory, cinguloopercular, and dorsal attention networks. Specifically, interconnectivity between the auditory network and cinguloopercular network decreased during the task relative to the resting state. Additionally, interconnectivity between the dorsal attention network and cinguloopercular network increased. These interconnectivity dynamics were predictive of individual differences in response confidence, the degree of which was more pronounced after incorrect judgments. Our findings uncover the behavioral relevance of functional cross talk between auditory and attentional-control networks during metacognitive assessment of one's own perception in challenging listening situations and suggest two functionally dissociable cortical networked systems that shape the considerable metacognitive differences between individuals in adaptive listening behavior.
Affiliation(s)
- Mohsen Alavash
- Department of Psychology, University of Lübeck, Lübeck 23562, Germany
- Center for Brain, Behavior, and Metabolism, University of Lübeck, Lübeck 23562, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck 23562, Germany
- Center for Brain, Behavior, and Metabolism, University of Lübeck, Lübeck 23562, Germany
14
Herrmann B, Ryan JD. Pupil Size and Eye Movements Differently Index Effort in Both Younger and Older Adults. J Cogn Neurosci 2024; 36:1325-1340. [PMID: 38683698 DOI: 10.1162/jocn_a_02172] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/02/2024]
Abstract
The assessment of mental effort is increasingly relevant in neurocognitive and life span domains. Pupillometry, the measure of the pupil size, is often used to assess effort but has disadvantages. Analysis of eye movements may provide an alternative, but research has been limited to easy and difficult task demands in younger adults. An effort measure must be sensitive to the whole effort profile, including "giving up" effort investment, and capture effort in different age groups. The current study comprised three experiments in which younger (n = 66) and older (n = 44) adults listened to speech masked by background babble at different signal-to-noise ratios associated with easy, difficult, and impossible speech comprehension. We expected individuals to invest little effort for easy and impossible speech (giving up) but to exert effort for difficult speech. Indeed, pupil size was largest for difficult but lower for easy and impossible speech. In contrast, gaze dispersion decreased with increasing speech masking in both age groups. Critically, gaze dispersion during difficult speech returned to levels similar to easy speech after sentence offset, when acoustic stimulation was similar across conditions, whereas gaze dispersion during impossible speech continued to be reduced. These findings show that a reduction in eye movements is not a byproduct of acoustic factors, but instead suggest that neurocognitive processes, different from arousal-related systems regulating the pupil size, drive reduced eye movements during high task demands. The current data thus show that effort in one sensory domain (audition) differentially impacts distinct functional properties in another sensory domain (vision).
Affiliation(s)
- Björn Herrmann
- Rotman Research Institute, North York, Ontario, Canada
- University of Toronto, Ontario, Canada
- Jennifer D Ryan
- Rotman Research Institute, North York, Ontario, Canada
- University of Toronto, Ontario, Canada
15
Kim SG, De Martino F, Overath T. Linguistic modulation of the neural encoding of phonemes. Cereb Cortex 2024; 34:bhae155. [PMID: 38687241 PMCID: PMC11059272 DOI: 10.1093/cercor/bhae155] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Revised: 03/21/2024] [Accepted: 03/22/2024] [Indexed: 05/02/2024] Open
Abstract
Speech comprehension entails the neural mapping of the acoustic speech signal onto learned linguistic units. This acousto-linguistic transformation is bi-directional, whereby higher-level linguistic processes (e.g. semantics) modulate the acoustic analysis of individual linguistic units. Here, we investigated the cortical topography and linguistic modulation of the most fundamental linguistic unit, the phoneme. We presented natural speech and "phoneme quilts" (pseudo-randomly shuffled phonemes) in either a familiar (English) or unfamiliar (Korean) language to native English speakers while recording functional magnetic resonance imaging. This allowed us to dissociate the contribution of acoustic vs. linguistic processes toward phoneme analysis. We show that (i) the acoustic analysis of phonemes is modulated by linguistic analysis and (ii) this modulation incorporates both acoustic and phonetic information. These results suggest that the linguistic modulation of cortical sensitivity to phoneme classes minimizes prediction error during natural speech perception, thereby aiding speech comprehension in challenging listening situations.
Affiliation(s)
- Seung-Goo Kim
- Department of Psychology and Neuroscience, Duke University, 308 Research Dr, Durham, NC 27708, United States
- Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, Frankfurt am Main 60322, Germany
- Federico De Martino
- Faculty of Psychology and Neuroscience, University of Maastricht, Universiteitssingel 40, 6229 ER Maastricht, Netherlands
- Tobias Overath
- Department of Psychology and Neuroscience, Duke University, 308 Research Dr, Durham, NC 27708, United States
- Duke Institute for Brain Sciences, Duke University, 308 Research Dr, Durham, NC 27708, United States
- Center for Cognitive Neuroscience, Duke University, 308 Research Dr, Durham, NC 27708, United States
16
Alqudah S, Zuriekat M, Shatarah A. Impact of hearing impairment on the mental status of the adults and older adults in Jordanian society. PLoS One 2024; 19:e0298616. [PMID: 38437235 PMCID: PMC10911586 DOI: 10.1371/journal.pone.0298616] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2023] [Accepted: 01/27/2024] [Indexed: 03/06/2024] Open
Abstract
BACKGROUND Hearing loss is a common disorder, affecting both children and adults worldwide. Individuals with hearing loss suffer from mental health problems that affect their quality of life. OBJECTIVE This study aimed to investigate the social and emotional consequences of hearing loss in a Jordanian population using Arabic versions of the Hearing Handicap Inventory for Adults (HHIA) and the Hearing Handicap Inventory for the Elderly (HHIE). METHODS This study included 300 Jordanian participants aged 18-90 years with hearing loss. Each participant underwent a complete audiological evaluation before answering the questionnaires. RESULTS The median overall scores of the HHIA and HHIE groups were 39 and 65, respectively. Both HHIA (Cronbach's alpha = 0.79, p < 0.001) and HHIE (Cronbach's alpha = 0.78, p < 0.001) were significantly associated with the social, emotional, and overall scores. The median emotional and social scores of the older adult group were significantly higher than those of the adult group (Z = -4.721, p = 0.001, Mann-Whitney test). CONCLUSION The present research revealed that psychological disabilities associated with hearing loss in the adult Jordanian population are more frequent and severe than in other nations. This may be attributed to the lack of awareness of the mental consequences of hearing loss among Jordanian healthcare providers and the public.
Affiliation(s)
- Safa Alqudah
- Department of Rehabilitation Sciences, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid, Jordan
- Margaret Zuriekat
- Department of Special Surgery, School of Medicine, The University of Jordan & Jordan University Hospital, Amman, Jordan
- Aya Shatarah
- Bachelor in Speech and Hearing, Jordan University of Science and Technology, Irbid, Jordan
| |
Collapse
|
17
|
Johns MA, Calloway RC, Karunathilake IMD, Decruy LP, Anderson S, Simon JZ, Kuchinsky SE. Attention Mobilization as a Modulator of Listening Effort: Evidence From Pupillometry. Trends Hear 2024; 28:23312165241245240. [PMID: 38613337 PMCID: PMC11015766 DOI: 10.1177/23312165241245240] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2023] [Revised: 03/11/2024] [Accepted: 03/15/2024] [Indexed: 04/14/2024] Open
Abstract
Listening to speech in noise can require substantial mental effort, even among younger normal-hearing adults. The task-evoked pupil response (TEPR) has been shown to track the increased effort exerted to recognize words or sentences in increasing noise. However, few studies have examined the trajectory of listening effort across longer, more natural, stretches of speech, or the extent to which expectations about upcoming listening difficulty modulate the TEPR. Seventeen younger normal-hearing adults listened to 60-s-long audiobook passages, repeated three times in a row, at two different signal-to-noise ratios (SNRs) while pupil size was recorded. There was a significant interaction between SNR, repetition, and baseline pupil size on sustained listening effort. At lower baseline pupil sizes, potentially reflecting lower attention mobilization, TEPRs were more sustained in the harder SNR condition, particularly when attention mobilization remained low by the third presentation. At intermediate baseline pupil sizes, differences between conditions were largely absent, suggesting these listeners had optimally mobilized their attention for both SNRs. Lastly, at higher baseline pupil sizes, potentially reflecting overmobilization of attention, the effect of SNR was initially reversed for the second and third presentations: participants initially appeared to disengage in the harder SNR condition, resulting in reduced TEPRs that recovered in the second half of the story. Together, these findings suggest that the unfolding of listening effort over time depends critically on the extent to which individuals have successfully mobilized their attention in anticipation of difficult listening conditions.
Affiliation(s)
- M. A. Johns
- Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- R. C. Calloway
- Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- I. M. D. Karunathilake
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
- L. P. Decruy
- Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- S. Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA
- J. Z. Simon
- Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
- Department of Biology, University of Maryland, College Park, MD 20742, USA
- S. E. Kuchinsky
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD 20889, USA
18
Zaltz Y. The Impact of Trained Conditions on the Generalization of Learning Gains Following Voice Discrimination Training. Trends Hear 2024; 28:23312165241275895. [PMID: 39212078 PMCID: PMC11367600 DOI: 10.1177/23312165241275895] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2024] [Revised: 06/03/2024] [Accepted: 07/29/2024] [Indexed: 09/04/2024] Open
Abstract
Auditory training can lead to notable enhancements in specific tasks, but whether these improvements generalize to untrained tasks like speech-in-noise (SIN) recognition remains uncertain. This study examined how training conditions affect generalization. Fifty-five young adults were divided into "Trained-in-Quiet" (n = 15), "Trained-in-Noise" (n = 20), and "Control" (n = 20) groups. Participants completed two sessions. The first session involved an assessment of SIN recognition and voice discrimination (VD) with word or sentence stimuli, employing combined fundamental frequency (F0) + formant frequencies voice cues. Subsequently, only the trained groups proceeded to an interleaved training phase, encompassing six VD blocks with sentence stimuli, utilizing either F0-only or formant-only cues. The second session replicated the interleaved training for the trained groups, followed by a second assessment conducted by all three groups, identical to the first session. Results showed significant improvements in the trained task regardless of training conditions. However, VD training with a single cue did not enhance VD with both cues beyond control group improvements, suggesting limited generalization. Notably, the Trained-in-Noise group exhibited the most significant SIN recognition improvements posttraining, implying generalization across tasks that share similar acoustic conditions. Overall, findings suggest training conditions impact generalization by influencing processing levels associated with the trained task. Training in noisy conditions may prompt higher auditory and/or cognitive processing than training in quiet, potentially extending skills to tasks involving challenging listening conditions, such as SIN recognition. These insights hold significant theoretical and clinical implications, potentially advancing the development of effective auditory training protocols.
Affiliation(s)
- Yael Zaltz
- Department of Communication Disorders, The Stanley Steyer School of Health Professions, Faculty of Medicine, and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
19
Kraus F, Obleser J, Herrmann B. Pupil Size Sensitivity to Listening Demand Depends on Motivational State. eNeuro 2023; 10:ENEURO.0288-23.2023. [PMID: 37989588 PMCID: PMC10734370 DOI: 10.1523/eneuro.0288-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2023] [Revised: 10/19/2023] [Accepted: 10/22/2023] [Indexed: 11/23/2023] Open
Abstract
Motivation plays a role when a listener needs to understand speech under acoustically demanding conditions. Previous work has demonstrated pupil-linked arousal being sensitive to both listening demands and motivational state during listening. It is less clear how motivational state affects the temporal evolution of the pupil size and its relation to subsequent behavior. We used an auditory gap detection task (N = 33) to study the joint impact of listening demand and motivational state on the pupil size response and examine its temporal evolution. Task difficulty and a listener's motivational state were orthogonally manipulated through changes in gap duration and monetary reward prospect. We show that participants' performance decreased with task difficulty, but that reward prospect enhanced performance under hard listening conditions. Pupil size increased with both increased task difficulty and higher reward prospect, and this reward prospect effect was largest under difficult listening conditions. Moreover, pupil size time courses differed between detected and missed gaps, suggesting that the pupil response indicates upcoming behavior. Larger pre-gap pupil size was further associated with faster response times on a trial-by-trial within-participant level. Our results reiterate the utility of pupil size as an objective and temporally sensitive measure in audiology. However, such assessments of cognitive resource recruitment need to consider the individual's motivational state.
Affiliation(s)
- Frauke Kraus
- Department of Psychology, University of Lübeck, 23562 Lübeck, Germany
- Center of Brain, Behavior, and Metabolism, University of Lübeck, 23562 Lübeck, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, 23562 Lübeck, Germany
- Center of Brain, Behavior, and Metabolism, University of Lübeck, 23562 Lübeck, Germany
- Björn Herrmann
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto M6A 2E1, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto M5S 3G3, Ontario, Canada
20
Park MH, Kim JS, Lee S, Kim DH, Oh SH. Increased Resting-State Positron Emission Tomography Activity After Cochlear Implantation in Adult Deafened Cats. Clin Exp Otorhinolaryngol 2023; 16:326-333. [PMID: 36397262 DOI: 10.21053/ceo.2022.00423] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 11/09/2022] [Indexed: 11/18/2022] Open
Abstract
OBJECTIVES Cochlear implants are widely used for hearing rehabilitation in patients with profound sensorineural hearing loss. However, cochlear implants have variable results, and central neural plasticity is considered to be a reason for this variability. We hypothesized that resting-state cortical networks play a role in conditions of profound hearing loss and are affected by cochlear implants. To investigate the resting-state neuronal networks after cochlear implantation, we acquired 18F-fluorodeoxyglucose (FDG)-positron emission tomography (PET) images in experimental animals. METHODS Eight adult domestic cats were enrolled in this study. The hearing threshold of the animals was within the normal range, as measured by auditory evoked potential. They were divided into control (n=4) and hearing loss (n=4) groups. Hearing loss was induced by co-administration of ethacrynic acid and kanamycin. FDG-PET was performed in a normal hearing state and 4 and 11 months after the deafening procedure. Cochlear implantation was performed in the right ear, and electrical cochlear stimulation was performed for 7 months (from 4 to 11 months after the deafening procedure). PET images were compared between the two groups at the three time points. RESULTS Four months after hearing loss, the auditory cortical area's activity decreased, and activity in the associated visual area increased. After 7 months of cochlear stimulation, the supramarginal gyrus and cingulate gyrus, which are components of the default mode network, showed hypermetabolism. The inferior colliculi showed hypometabolism. CONCLUSION Resting-state cortical activity in the default mode network components was elevated after cochlear stimulation. This suggests that the animals' awareness level was elevated after hearing restoration by the cochlear implantation.
Affiliation(s)
- Min-Hyun Park
- Department of Otorhinolaryngology, Seoul National University College of Medicine, Seoul, Korea
- Department of Otorhinolaryngology, Seoul Metropolitan Government-Seoul National University Boramae Medical Center, Seoul, Korea
- Jin Su Kim
- Division of RI Application, Korea Institute of Radiological and Medical Sciences, Seoul, Korea
- Seonhwa Lee
- Division of RI Application, Korea Institute of Radiological and Medical Sciences, Seoul, Korea
- Doo Hee Kim
- Department of Otorhinolaryngology, Seoul National University College of Medicine, Seoul, Korea
- Seung Ha Oh
- Department of Otorhinolaryngology, Seoul National University College of Medicine, Seoul, Korea
- Sensory Organ Research Institute, Seoul National University Medical Research Center, Seoul, Korea
21
Shetty HN, Raju S, Singh S S. The relationship between age, acceptable noise level, and listening effort in middle-aged and older-aged individuals. J Otol 2023; 18:220-229. [PMID: 37877073 PMCID: PMC10593579 DOI: 10.1016/j.joto.2023.09.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Revised: 09/14/2023] [Accepted: 09/18/2023] [Indexed: 10/26/2023] Open
Abstract
Objective The purpose of the study was to evaluate listening effort in adults who experience varied annoyance towards noise. Materials and methods Fifty native Kannada-speaking adults aged 41-68 years participated. We evaluated each participant's acceptable noise level while listening to speech. Further, a sentence-final word-identification and recall test at 0 dB SNR (less favorable condition) and 4 dB SNR (relatively favorable condition) was used to assess listening effort. Repeat and recall scores were obtained for each condition. Results The regression model revealed that listening effort increased by 0.6% at 0 dB SNR and by 0.5% at 4 dB SNR with every one-year advancement in age. Listening effort increased by 0.9% at 0 dB SNR and by 0.7% at 4 dB SNR with every one-dB change in the Acceptable Noise Level (ANL). At 0 dB SNR and 4 dB SNR, moderate and mild negative correlations, respectively, were noted between listening effort and annoyance towards noise when age was controlled. Conclusion Listening effort increases with age, and its effect is greater in less favorable than in relatively favorable conditions. However, when annoyance towards noise was controlled, the impact of age on listening effort was reduced. Listening effort correlated with the level of annoyance once the age effect was controlled. Furthermore, listening effort was predicted from the ANL to a moderate degree.
Affiliation(s)
- Suma Raju
- Department of Speech-Language Pathology, JSS Institute of Speech and Hearing, Mysuru, Karnataka, India
- Sanjana Singh S
- Department of Audiology, JSS Institute of Speech and Hearing, Mysuru, Karnataka, India
22
Zhang Y, Rennig J, Magnotti JF, Beauchamp MS. Multivariate fMRI responses in superior temporal cortex predict visual contributions to, and individual differences in, the intelligibility of noisy speech. Neuroimage 2023; 278:120271. [PMID: 37442310 PMCID: PMC10460966 DOI: 10.1016/j.neuroimage.2023.120271] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2023] [Revised: 06/20/2023] [Accepted: 07/06/2023] [Indexed: 07/15/2023] Open
Abstract
Humans have the unique ability to decode the rapid stream of language elements that constitute speech, even when it is contaminated by noise. Two reliable observations about noisy speech perception are that seeing the face of the talker improves intelligibility and that individuals differ in their ability to perceive noisy speech. We introduce a multivariate BOLD fMRI measure that explains both observations. In two independent fMRI studies, clear and noisy speech was presented in visual, auditory, and audiovisual formats to thirty-seven participants who rated intelligibility. An event-related design was used to sort noisy speech trials by their intelligibility. Individual-differences multidimensional scaling was applied to fMRI response patterns in superior temporal cortex, and the dissimilarity between responses to clear speech and noisy (but intelligible) speech was measured. Neural dissimilarity was less for audiovisual speech than auditory-only speech, corresponding to the greater intelligibility of noisy audiovisual speech. Dissimilarity was less in participants with better noisy speech perception, corresponding to individual differences. These relationships held for both single-word and entire-sentence stimuli, suggesting that they were driven by intelligibility rather than the specific stimuli tested. A neural measure of perceptual intelligibility may aid in the development of strategies for helping those with impaired speech perception.
Affiliation(s)
- Yue Zhang
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States; Department of Neurosurgery, Baylor College of Medicine, Houston, TX, United States
- Johannes Rennig
- Division of Neuropsychology, Center of Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- John F Magnotti
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Michael S Beauchamp
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States.
| |
Collapse
|
23
|
Yasmin S, Irsik VC, Johnsrude IS, Herrmann B. The effects of speech masking on neural tracking of acoustic and semantic features of natural speech. Neuropsychologia 2023; 186:108584. [PMID: 37169066 DOI: 10.1016/j.neuropsychologia.2023.108584] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Revised: 04/30/2023] [Accepted: 05/08/2023] [Indexed: 05/13/2023]
Abstract
Listening environments contain background sounds that mask speech and lead to communication challenges. Sensitivity to slow acoustic fluctuations in speech can help segregate speech from background noise. Semantic context can also facilitate speech perception in noise, for example, by enabling prediction of upcoming words. However, not much is known about how different degrees of background masking affect the neural processing of acoustic and semantic features during naturalistic speech listening. In the current electroencephalography (EEG) study, participants listened to engaging, spoken stories masked at different levels of multi-talker babble to investigate how neural activity in response to acoustic and semantic features changes with acoustic challenges, and how such effects relate to speech intelligibility. The pattern of neural response amplitudes associated with both acoustic and semantic speech features across masking levels was U-shaped, such that amplitudes were largest for moderate masking levels. This U-shape may be due to increased attentional focus when speech comprehension is challenging, but manageable. The latency of the neural responses increased linearly with increasing background masking, and neural latency change associated with acoustic processing most closely mirrored the changes in speech intelligibility. Finally, tracking responses related to semantic dissimilarity remained robust until severe speech masking (-3 dB SNR). The current study reveals that neural responses to acoustic features are highly sensitive to background masking and decreasing speech intelligibility, whereas neural responses to semantic features are relatively robust, suggesting that individuals track the meaning of the story well even in moderate background sound.
Affiliation(s)
- Sonia Yasmin
- Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada
- Vanessa C Irsik
- Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada
- Ingrid S Johnsrude
- Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada; School of Communication and Speech Disorders, The University of Western Ontario, London, ON, N6A 5B7, Canada
- Björn Herrmann
- Rotman Research Institute, Baycrest, M6A 2E1, Toronto, ON, Canada; Department of Psychology, University of Toronto, M5S 1A1, Toronto, ON, Canada
24
Cartocci G, Inguscio BMS, Giliberto G, Vozzi A, Giorgi A, Greco A, Babiloni F, Attanasio G. Listening Effort in Tinnitus: A Pilot Study Employing a Light EEG Headset and Skin Conductance Assessment during the Listening to a Continuous Speech Stimulus under Different SNR Conditions. Brain Sci 2023; 13:1084. [PMID: 37509014 PMCID: PMC10377270 DOI: 10.3390/brainsci13071084] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2023] [Revised: 07/07/2023] [Accepted: 07/13/2023] [Indexed: 07/30/2023] Open
Abstract
Background noise elicits listening effort. What else is tinnitus if not an endogenous background noise? From such reasoning, we hypothesized increased listening effort in tinnitus patients during listening tasks. This hypothesis was tested by investigating indices of listening effort derived from electroencephalographic and skin-conductance recordings, particularly parietal and frontal alpha activity and electrodermal activity (EDA). Furthermore, tinnitus distress questionnaires (THI and TQ12-I) were employed. Parietal alpha values were positively correlated with TQ12-I scores, and both were negatively correlated with EDA; pre-stimulus frontal alpha correlated with the THI score in our pilot study; finally, results showed a general trend of increased frontal alpha activity in the tinnitus group in comparison to the control group. Parietal alpha during listening to the stimuli, positively correlated with the TQ12-I, appears to reflect higher listening effort in tinnitus patients and the perception of tinnitus symptoms. The negative correlation of both listening effort (parietal alpha) and tinnitus symptom perception (TQ12-I scores) with EDA levels could be explained by a sympathetic nervous system that is less responsive in preparing the body to expend increased energy during the "fight or flight" response, owing to depletion of energy by tinnitus perception.
Affiliation(s)
- Giulia Cartocci
- Department of Molecular Medicine, Sapienza University of Rome, 00161 Rome, Italy
- Department of Research and Development, BrainSigns Ltd., 00198 Rome, Italy
- Bianca Maria Serena Inguscio
- Department of Research and Development, BrainSigns Ltd., 00198 Rome, Italy
- Department of Human Neuroscience, Sapienza University of Rome, 00185 Rome, Italy
- Giovanna Giliberto
- Department of Molecular Medicine, Sapienza University of Rome, 00161 Rome, Italy
- Alessia Vozzi
- Department of Research and Development, BrainSigns Ltd., 00198 Rome, Italy
- SAIMLAL Department, Sapienza University of Rome, 00185 Rome, Italy
- Andrea Giorgi
- Department of Research and Development, BrainSigns Ltd., 00198 Rome, Italy
- SAIMLAL Department, Sapienza University of Rome, 00185 Rome, Italy
- Antonio Greco
- Department of Sense Organs, Sapienza University of Rome, 00161 Rome, Italy
- Fabio Babiloni
- Department of Molecular Medicine, Sapienza University of Rome, 00161 Rome, Italy
- Department of Research and Development, BrainSigns Ltd., 00198 Rome, Italy
- Department of Computer Science, Hangzhou Dianzi University, Hangzhou 310005, China
25
Perea Pérez F, Hartley DEH, Kitterick PT, Zekveld AA, Naylor G, Wiggins IM. Listening efficiency in adult cochlear-implant users compared with normally-hearing controls at ecologically relevant signal-to-noise ratios. Front Hum Neurosci 2023; 17:1214485. [PMID: 37520928 PMCID: PMC10379644 DOI: 10.3389/fnhum.2023.1214485] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2023] [Accepted: 06/23/2023] [Indexed: 08/01/2023] Open
Abstract
Introduction: Due to having to work with an impoverished auditory signal, cochlear-implant (CI) users may experience reduced speech intelligibility and/or increased listening effort in real-world listening situations, compared to their normally-hearing (NH) peers. These two challenges to perception may be usefully integrated in a measure of listening efficiency: conceptually, the amount of accuracy achieved for a certain amount of effort expended.
Methods: We describe a novel approach to quantifying listening efficiency based on the rate of evidence accumulation toward a correct response in a linear ballistic accumulator (LBA) model of choice decision-making. Estimation of this objective measure within a hierarchical Bayesian framework confers further benefits, including full quantification of uncertainty in parameter estimates. We applied this approach to examine the speech-in-noise performance of a group of 24 CI users (M age: 60.3, range: 20-84 years) and a group of 25 approximately age-matched NH controls (M age: 55.8, range: 20-79 years). In a laboratory experiment, participants listened to reverberant target sentences in cafeteria noise at ecologically relevant signal-to-noise ratios (SNRs) of +20, +10, and +4 dB SNR. Individual differences in cognition and self-reported listening experiences were also characterised by means of cognitive tests and hearing questionnaires.
Results: At the group level, the CI group showed much lower listening efficiency than the NH group, even in favourable acoustic conditions. At the individual level, within the CI group (but not the NH group), higher listening efficiency was associated with better cognition (i.e., working-memory and linguistic-closure) and with more positive self-reported listening experiences, both in the laboratory and in daily life.
Discussion: We argue that listening efficiency, measured using the approach described here, is: (i) conceptually well-motivated, in that it is theoretically impervious to differences in how individuals approach the speed-accuracy trade-off that is inherent to all perceptual decision making; and (ii) of practical utility, in that it is sensitive to differences in task demand, and to differences between groups, even when speech intelligibility remains at or near ceiling level. Further research is needed to explore the sensitivity and practical utility of this metric across diverse listening situations.
Affiliation(s)
- Francisca Perea Pérez
- National Institute for Health and Care Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, United Kingdom
- Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Douglas E. H. Hartley
- National Institute for Health and Care Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, United Kingdom
- Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Nottingham University Hospitals NHS Trust, Nottingham, United Kingdom
- Pádraig T. Kitterick
- Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- National Acoustic Laboratories, Sydney, NSW, Australia
- Adriana A. Zekveld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, Amsterdam, Netherlands
- Graham Naylor
- National Institute for Health and Care Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, United Kingdom
- Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Ian M. Wiggins
- National Institute for Health and Care Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, United Kingdom
- Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
26
Shatzer HE, Russo FA. Brightening the Study of Listening Effort with Functional Near-Infrared Spectroscopy: A Scoping Review. Semin Hear 2023; 44:188-210. [PMID: 37122884 PMCID: PMC10147513 DOI: 10.1055/s-0043-1766105] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/09/2023] Open
Abstract
Listening effort is a long-standing area of interest in auditory cognitive neuroscience. Prior research has used multiple techniques to shed light on the neurophysiological mechanisms underlying listening during challenging conditions. Functional near-infrared spectroscopy (fNIRS) is growing in popularity as a tool for cognitive neuroscience research, and its recent advances offer many potential advantages over other neuroimaging modalities for research related to listening effort. This review introduces the basic science of fNIRS and its uses for auditory cognitive neuroscience. We also discuss its application in recently published studies on listening effort and consider future opportunities for studying effortful listening with fNIRS. After reading this article, the learner will understand how fNIRS works and be able to summarize its uses for listening effort research. The learner will also be able to apply this knowledge toward the generation of future research in this area.
Affiliation(s)
- Hannah E. Shatzer
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Frank A. Russo
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
27
Ryan DB, Eckert MA, Sellers EW, Schairer KS, McBee MT, Ridley EA, Smith SL. Performance Monitoring and Cognitive Inhibition during a Speech-in-Noise Task in Older Listeners. Semin Hear 2023; 44:124-139. [PMID: 37122879 PMCID: PMC10147504 DOI: 10.1055/s-0043-1767695] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/02/2023] Open
Abstract
The goal of this study was to examine the effect of hearing loss on theta and alpha electroencephalography (EEG) frequency power measures of performance monitoring and cognitive inhibition, respectively, during a speech-in-noise task. It was hypothesized that, compared to normal-hearing adults, hearing loss would be associated with a shift of peak theta and alpha power toward easier listening conditions. The shift would reflect how hearing loss modulates the recruitment of listening effort toward easier listening conditions. Nine older adults with normal hearing (ONH) and 10 older adults with hearing loss (OHL) participated in this study. EEG data were collected from all participants while they completed the words-in-noise task. It was also hypothesized that hearing loss would affect overall theta and alpha power. The ONH group showed an inverted U-shape effect of signal-to-noise ratio (SNR), but there were limited effects of SNR on theta or alpha power in the OHL group. The results of the ONH group support the growing body of literature showing effects of listening conditions on alpha and theta power. The null effects of listening condition in the OHL group add to a smaller body of literature, suggesting that listening effort research should include conditions with near-ceiling performance.
Affiliation(s)
- David B. Ryan
- Hearing and Balance Research Program, James H. Quillen VA Medical Center, Mountain Home, Tennessee
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Department of Head and Neck Surgery and Communication Sciences, Duke University School of Medicine, Durham, North Carolina
- Mark A. Eckert
- Department of Otolaryngology - Head and Neck Surgery, Hearing Research Program, Medical University of South Carolina, Charleston, South Carolina
- Eric W. Sellers
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Kim S. Schairer
- Hearing and Balance Research Program, James H. Quillen VA Medical Center, Mountain Home, Tennessee
- Department of Audiology and Speech Language Pathology, East Tennessee State University, Johnson City, Tennessee
- Matthew T. McBee
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Elizabeth A. Ridley
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Sherri L. Smith
- Department of Head and Neck Surgery and Communication Sciences, Duke University School of Medicine, Durham, North Carolina
- Center for the Study of Aging and Human Development, Duke University, Durham, North Carolina
- Department of Population Health Sciences, Duke University School of Medicine, Durham, North Carolina
- Audiology and Speech Pathology Service, Durham Veterans Affairs Healthcare System, Durham, North Carolina
28
Su Y, MacGregor LJ, Olasagasti I, Giraud AL. A deep hierarchy of predictions enables online meaning extraction in a computational model of human speech comprehension. PLoS Biol 2023; 21:e3002046. [PMID: 36947552 PMCID: PMC10079236 DOI: 10.1371/journal.pbio.3002046] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Revised: 04/06/2023] [Accepted: 02/22/2023] [Indexed: 03/23/2023] Open
Abstract
Understanding speech requires mapping fleeting and often ambiguous soundwaves to meaning. While humans are known to exploit their capacity to contextualize to facilitate this process, how internal knowledge is deployed online remains an open question. Here, we present a model that extracts multiple levels of information from continuous speech online. The model applies linguistic and nonlinguistic knowledge to speech processing, by periodically generating top-down predictions and incorporating bottom-up incoming evidence in a nested temporal hierarchy. We show that a nonlinguistic context level provides semantic predictions informed by sensory inputs, which are crucial for disambiguating among multiple meanings of the same word. The explicit knowledge hierarchy of the model enables a more holistic account of the neurophysiological responses to speech compared to using lexical predictions generated by a neural network language model (GPT-2). We also show that hierarchical predictions reduce peripheral processing via minimizing uncertainty and prediction error. With this proof-of-concept model, we demonstrate that the deployment of hierarchical predictions is a possible strategy for the brain to dynamically utilize structured knowledge and make sense of the speech input.
Affiliation(s)
- Yaqing Su
- Department of Fundamental Neuroscience, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Swiss National Centre of Competence in Research “Evolving Language” (NCCR EvolvingLanguage), Geneva, Switzerland
- Lucy J. MacGregor
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Itsaso Olasagasti
- Department of Fundamental Neuroscience, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Swiss National Centre of Competence in Research “Evolving Language” (NCCR EvolvingLanguage), Geneva, Switzerland
- Anne-Lise Giraud
- Department of Fundamental Neuroscience, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Swiss National Centre of Competence in Research “Evolving Language” (NCCR EvolvingLanguage), Geneva, Switzerland
- Institut Pasteur, Université Paris Cité, Inserm, Institut de l’Audition, Paris, France
29
Bsharat-Maalouf D, Degani T, Karawani H. The Involvement of Listening Effort in Explaining Bilingual Listening Under Adverse Listening Conditions. Trends Hear 2023; 27:23312165231205107. [PMID: 37941413 PMCID: PMC10637154 DOI: 10.1177/23312165231205107] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Revised: 09/14/2023] [Accepted: 09/15/2023] [Indexed: 11/10/2023] Open
Abstract
The current review examines listening effort to uncover how it is implicated in bilingual performance under adverse listening conditions. Various measures of listening effort, including physiological, behavioral, and subjective measures, have been employed to examine listening effort in bilingual children and adults. Adverse listening conditions, stemming from environmental factors, as well as factors related to the speaker or listener, have been examined. The existing literature, although relatively limited to date, points to increased listening effort among bilinguals in their nondominant second language (L2) compared to their dominant first language (L1) and relative to monolinguals. Interestingly, increased effort is often observed even when speech intelligibility remains unaffected. These findings emphasize the importance of considering listening effort alongside speech intelligibility. Building upon the insights gained from the current review, we propose that various factors may modulate the observed effects. These include the particular measure selected to examine listening effort, the characteristics of the adverse condition, as well as factors related to the particular linguistic background of the bilingual speaker. Critically, further research is needed to better understand the impact of these factors on listening effort. The review outlines avenues for future research that would promote a comprehensive understanding of listening effort in bilingual individuals.
Affiliation(s)
- Dana Bsharat-Maalouf
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Tamar Degani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Hanin Karawani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
30
Zhang M, Siegle GJ. Linking Affective and Hearing Sciences-Affective Audiology. Trends Hear 2023; 27:23312165231208377. [PMID: 37904515 PMCID: PMC10619363 DOI: 10.1177/23312165231208377] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2023] [Revised: 09/22/2023] [Accepted: 10/01/2023] [Indexed: 11/01/2023] Open
Abstract
A growing number of health-related sciences, including audiology, have increasingly recognized the importance of affective phenomena. However, in audiology, affective phenomena are mostly studied as a consequence of hearing status. This review first addresses anatomical and functional bidirectional connections between auditory and affective systems that support a reciprocal affect-hearing relationship. We then postulate, by focusing on four practical examples (hearing public campaigns, hearing intervention uptake, thorough hearing evaluation, and tinnitus), that some important challenges in audiology are likely affect-related and that potential solutions could be developed by inspiration from affective science advances. We continue by introducing useful resources from affective science that could help audiology professionals learn about the wide range of affective constructs and integrate them into hearing research and clinical practice in structured and applicable ways. Six important considerations for good quality affective audiology research are summarized. We conclude that it is worthwhile and feasible to explore the explanatory power of emotions, feelings, motivations, attitudes, moods, and other affective processes in depth when trying to understand and predict how people with hearing difficulties perceive, react, and adapt to their environment.
Affiliation(s)
- Min Zhang
- Shanghai Key Laboratory of Clinical Geriatric Medicine, Huadong Hospital, Fudan University, Shanghai, China
- Greg J. Siegle
- Department of Psychiatry, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
31
Jakobsen Y, Christensen Andersen LA, Schmidt JH. Study protocol for a randomised controlled trial evaluating the benefits from bimodal solution with cochlear implant and hearing aid versus bilateral hearing aids in patients with asymmetric speech identification scores. BMJ Open 2022; 12:e070296. [PMID: 36581413 PMCID: PMC9806092 DOI: 10.1136/bmjopen-2022-070296] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/30/2022] Open
Abstract
INTRODUCTION: Cochlear implant (CI) and hearing aid (HA) in a bimodal solution (CI+HA) is compared with bilateral HAs (HA+HA) to test if the bimodal solution results in better speech intelligibility and self-reported quality of life.
METHODS AND ANALYSIS: This randomised controlled trial is conducted at Odense University Hospital, Denmark. Sixty adult bilateral HA users referred for CI surgery are enrolled if eligible and undergo: audiometry, speech perception in noise (HINT: Hearing in Noise Test), Speech Identification Scores and video head impulse test. All participants will receive new replacement HAs. After 1 month they will be randomly assigned (1:1) to the intervention group (CI+HA) or to the delayed intervention control group (HA+HA). The intervention group (CI+HA) will receive a CI on the ear with the poorer speech recognition score and continue using the HA on the other ear. The control group (HA+HA) will receive a CI after a total of 4 months of bilateral HA use. The primary outcome measures are speech intelligibility measured objectively with HINT (sentences in noise) and DANTALE I (words) and subjectively with the Speech, Spatial and Qualities of Hearing scale questionnaire. Secondary outcomes are patient-reported Health-Related Quality of Life scores assessed with the Nijmegen Cochlear Implant Questionnaire, the Tinnitus Handicap Inventory and the Dizziness Handicap Inventory. The third outcome is listening effort assessed with pupil dilation during HINT. In conclusion, the purpose is to improve clinical decision-making for CI candidacy and optimise bimodal solutions.
ETHICS AND DISSEMINATION: This study protocol was approved by the Ethics Committee Southern Denmark, project ID S-20200074G. All participants are required to sign an informed consent form. This study will be published on completion in peer-reviewed publications and at scientific conferences.
TRIAL REGISTRATION NUMBER: NCT04919928.
Affiliation(s)
- Yeliz Jakobsen
- Department of Oto-Rhino-Laryngology, Odense University Hospital, Odense C, Denmark
- Department of Audiology, Odense University Hospital, Odense C, Denmark
- Jesper Hvass Schmidt
- Department of Oto-Rhino-Laryngology, Odense University Hospital, Odense C, Denmark
- Department of Audiology, Odense University Hospital, Odense C, Denmark
32
Burg EA, Thakkar TD, Litovsky RY. Interaural speech asymmetry predicts bilateral speech intelligibility but not listening effort in adults with bilateral cochlear implants. Front Neurosci 2022; 16:1038856. [PMID: 36570844 PMCID: PMC9768552 DOI: 10.3389/fnins.2022.1038856] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Accepted: 11/21/2022] [Indexed: 12/12/2022] Open
Abstract
Introduction: Bilateral cochlear implants (BiCIs) can facilitate improved speech intelligibility in noise and sound localization abilities compared to a unilateral implant in individuals with bilateral severe to profound hearing loss. Still, many individuals with BiCIs do not benefit from binaural hearing to the same extent that normal hearing (NH) listeners do. For example, binaural redundancy, a speech intelligibility benefit derived from having access to duplicate copies of a signal, is highly variable among BiCI users. Additionally, patients with hearing loss commonly report elevated listening effort compared to NH listeners. There is some evidence to suggest that BiCIs may reduce listening effort compared to a unilateral CI, but the limited existing literature has not shown this consistently. Critically, no studies to date have investigated this question using pupillometry to quantify listening effort, where large pupil sizes indicate high effort and small pupil sizes indicate low effort. Thus, the present study aimed to build on existing literature by investigating the potential benefits of BiCIs for both speech intelligibility and listening effort.
Methods: Twelve BiCI adults were tested in three listening conditions: Better Ear, Poorer Ear, and Bilateral. Stimuli were IEEE sentences presented from a loudspeaker at 0° azimuth in quiet. Participants were asked to repeat back the sentences, and responses were scored by an experimenter while changes in pupil dilation were measured.
Results: On average, participants demonstrated similar speech intelligibility in the Better Ear and Bilateral conditions, and significantly worse speech intelligibility in the Poorer Ear condition. Despite similar speech intelligibility in the Better Ear and Bilateral conditions, pupil dilation was significantly larger in the Bilateral condition.
Discussion: These results suggest that the BiCI users tested in this study did not demonstrate binaural redundancy in quiet. The large interaural speech asymmetries demonstrated by participants may have precluded them from obtaining binaural redundancy, as shown by the inverse relationship between the two variables. Further, participants did not obtain a release from effort when listening with two ears versus their better ear only. Instead, results indicate that bilateral listening elicited increased effort compared to better ear listening, which may be due to poor integration of asymmetric inputs.
Affiliation(s)
- Emily A. Burg
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, United States
- Tanvi D. Thakkar
- Department of Psychology, University of Wisconsin-La Crosse, La Crosse, WI, United States
- Ruth Y. Litovsky
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, United States
- Division of Otolaryngology, Department of Surgery, University of Wisconsin-Madison, Madison, WI, United States
33
Lanzilotti C, Andéol G, Micheyl C, Scannella S. Cocktail party training induces increased speech intelligibility and decreased cortical activity in bilateral inferior frontal gyri. A functional near-infrared study. PLoS One 2022; 17:e0277801. [PMID: 36454948 PMCID: PMC9714910 DOI: 10.1371/journal.pone.0277801] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Accepted: 11/03/2022] [Indexed: 12/03/2022] Open
Abstract
The human brain networks responsible for selectively listening to a voice amid other talkers remain to be clarified. The present study aimed to investigate relationships between cortical activity and performance in a speech-in-speech task, before (Experiment I) and after training-induced improvements (Experiment II). In Experiment I, 74 participants performed a speech-in-speech task while their cortical activity was measured using a functional near infrared spectroscopy (fNIRS) device. One target talker and one masker talker were simultaneously presented at three different target-to-masker ratios (TMRs): adverse, intermediate and favorable. Behavioral results show that performance may increase monotonically with TMR in some participants and failed to decrease, or even improved, in the adverse-TMR condition for others. On the neural level, an extensive brain network including the frontal (left prefrontal cortex, right dorsolateral prefrontal cortex and bilateral inferior frontal gyri) and temporal (bilateral auditory cortex) regions was more solicited by the intermediate condition than the two others. Additionally, bilateral frontal gyri and left auditory cortex activities were found to be positively correlated with behavioral performance in the adverse-TMR condition. In Experiment II, 27 participants, whose performance was the poorest in the adverse-TMR condition of Experiment I, were trained to improve performance in that condition. Results show significant performance improvements along with decreased activity in bilateral inferior frontal gyri, the right dorsolateral prefrontal cortex, the left inferior parietal cortex and the right auditory cortex in the adverse-TMR condition after training. Arguably, lower neural activity reflects higher efficiency in processing masker inhibition after speech-in-speech training. 
As speech-in-noise tasks also involve frontal and temporal regions, we suggest that, regardless of the type of masking (speech or noise), task complexity prompts the involvement of a similar brain network. Furthermore, the initial cognitive recruitment is reduced after training, leading to an economy of cognitive resources.
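For readers unfamiliar with the target-to-masker ratio (TMR) manipulation described above, the level arithmetic involved is straightforward. Below is a minimal sketch, not the authors' stimulus pipeline: the function name is hypothetical and white noise stands in for the talkers; the only assumption is the standard RMS-based dB convention.

```python
import numpy as np

def mix_at_tmr(target, masker, tmr_db):
    """Scale the masker so the RMS-based target-to-masker ratio
    equals tmr_db, then return the mixture."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    # A negative TMR means the masker ends up louder than the target.
    gain = rms(target) / (rms(masker) * 10 ** (tmr_db / 20))
    return target + gain * masker

# White noise stands in for the target and masker talkers here.
rng = np.random.default_rng(0)
target = rng.standard_normal(16000)
masker = rng.standard_normal(16000)
mix = mix_at_tmr(target, masker, tmr_db=-6.0)  # adverse condition
```

Under this convention, an "adverse" condition simply corresponds to a negative TMR, i.e., the masker is scaled above the target level.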
Affiliation(s)
- Cosima Lanzilotti
- Département Neuroscience et Sciences Cognitives, Institut de Recherche Biomédicale des Armées, Brétigny sur Orge, France
- ISAE-SUPAERO, Université de Toulouse, Toulouse, France
- Thales SIX GTS France, Gennevilliers, France
- Guillaume Andéol
- Département Neuroscience et Sciences Cognitives, Institut de Recherche Biomédicale des Armées, Brétigny sur Orge, France
34
Objective and Subjective Hearing Difficulties Are Associated With Lower Inhibitory Control. Ear Hear 2022; 43:1904-1916. [PMID: 35544449 DOI: 10.1097/aud.0000000000001227] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
OBJECTIVE: Evidence suggests that hearing loss increases the risk of cognitive impairment. However, the relationship between hearing loss and cognition can vary considerably across studies, which may be partially explained by demographic and health factors that are not systematically accounted for in statistical models.

DESIGN: Middle-aged to older adult participants (N = 149) completed a web-based assessment that included speech-in-noise (SiN) and self-report measures of hearing, as well as auditory and visual cognitive interference (Stroop) tasks. Correlations between hearing and cognitive interference measures were performed with and without controlling for age, sex, education, depression, anxiety, and self-rated health.

RESULTS: The risk of having objective SiN difficulties differed between males and females. All demographic and health variables, except education, influenced the likelihood of reporting hearing difficulties. Small but significant relationships between objective and reported hearing difficulties and the measures of cognitive interference were observed when analyses were controlled for demographic and health factors. Furthermore, when stratifying analyses for males and females, different relationships between hearing and cognitive interference measures were found. Self-reported difficulty with spatial hearing and objective SiN performance were better predictors of inhibitory control in females, whereas self-reported difficulty with speech was a better predictor of inhibitory control in males. This suggests that inhibitory control is associated with different listening abilities in males and females.

CONCLUSIONS: The results highlight the importance of controlling for participant characteristics when assessing the relationship between hearing and cognitive interference, which may also be the case for other cognitive functions, but this requires further investigation.
Furthermore, this study is the first to show that the relationship between hearing and cognitive interference can be captured using web-based tasks that are simple to implement and administer at home without any assistance, paving the way for future online screening tests assessing the effects of hearing loss on cognition.
35
Tarawneh HY, Jayakody DM, Sohrabi HR, Martins RN, Mulders WH. Understanding the Relationship Between Age-Related Hearing Loss and Alzheimer’s Disease: A Narrative Review. J Alzheimers Dis Rep 2022; 6:539-556. [PMID: 36275417 PMCID: PMC9535607 DOI: 10.3233/adr-220035] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2022] [Accepted: 08/16/2022] [Indexed: 12/02/2022] Open
Abstract
Evidence suggests that hearing loss (HL), even at mild levels, increases the long-term risk of cognitive decline and incident dementia. HL is one of the modifiable risk factors for dementia, with approximately 4 million of the 50 million cases of dementia worldwide possibly attributable to untreated HL. This paper describes four possible mechanisms that have been suggested for the relationship between age-related hearing loss (ARHL) and Alzheimer’s disease (AD), the most common form of dementia. The first mechanism suggests mitochondrial dysfunction and altered signal pathways due to aging as a possible link between ARHL and AD. The second proposes that sensory degradation in hearing-impaired people could explain the relationship between ARHL and AD. The third, the occupation of cognitive resources, holds that the association between ARHL and AD results from the increased cognitive processing required to compensate for degraded sensory input. The fourth mechanism expands on the third: the function–structure interaction involves both cognitive resource occupation (neural activity) and AD pathology as the link between ARHL and AD. Exploring the specific mechanisms that link ARHL and AD has the potential to lead to innovative ideas for the diagnosis, prevention, and/or treatment of AD. This paper also provides insight into the current evidence for hearing treatments as a possible treatment or preventive measure for AD, and into whether auditory assessments could provide an avenue for early detection of cognitive impairment associated with AD.
Affiliation(s)
- Hadeel Y. Tarawneh
- School of Human Sciences, The University of Western Australia, Crawley, WA, Australia
- Ear Science Institute Australia, Subiaco, WA, Australia
- Dona M.P. Jayakody
- Ear Science Institute Australia, Subiaco, WA, Australia
- Centre of Ear Science, Medical School, The University of Western Australia, Crawley, WA, Australia
- Hamid R. Sohrabi
- Centre for Healthy Ageing, College of Science, Health, Engineering and Education, Murdoch University, WA, Australia
- School of Medical and Health Sciences, Edith Cowan University, Joondalup, WA, Australia
- Department of Biomedical Sciences, Faculty of Medicine and Health Sciences, Macquarie University, NSW, Australia
- Ralph N. Martins
- School of Medical and Health Sciences, Edith Cowan University, Joondalup, WA, Australia
- Department of Biomedical Sciences, Faculty of Medicine and Health Sciences, Macquarie University, NSW, Australia
36
Impact of Effortful Word Recognition on Supportive Neural Systems Measured by Alpha and Theta Power. Ear Hear 2022; 43:1549-1562. [DOI: 10.1097/aud.0000000000001211] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
37
Francis AL. Adding noise is a confounded nuisance. J Acoust Soc Am 2022; 152:1375. [PMID: 36182286 DOI: 10.1121/10.0013874] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Accepted: 08/15/2022] [Indexed: 06/16/2023]
Abstract
A wide variety of research and clinical assessments involve presenting speech stimuli in the presence of some kind of noise. Here, I selectively review two theoretical perspectives and discuss ways in which these perspectives may help researchers understand the consequences for listeners of adding noise to a speech signal. I argue that adding noise changes more about the listening task than merely making the signal more difficult to perceive. To fully understand the effects of an added noise on speech perception, we must consider not just how much the noise affects task difficulty, but also how it affects all of the systems involved in understanding speech: increasing message uncertainty, modifying attentional demand, altering affective response, and changing motivation to perform the task.
Affiliation(s)
- Alexander L Francis
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, Indiana 47907, USA
38
Ritz H, Wild CJ, Johnsrude IS. Parametric Cognitive Load Reveals Hidden Costs in the Neural Processing of Perfectly Intelligible Degraded Speech. J Neurosci 2022; 42:4619-4628. [PMID: 35508382 PMCID: PMC9186799 DOI: 10.1523/jneurosci.1777-21.2022] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Revised: 03/08/2022] [Accepted: 03/10/2022] [Indexed: 11/21/2022] Open
Abstract
Speech is often degraded by environmental noise or hearing impairment. People can compensate for degradation, but this requires cognitive effort. Previous research has identified frontotemporal networks involved in effortful perception, but materials in these works were also less intelligible, and so it is not clear whether activity reflected effort or intelligibility differences. We used functional magnetic resonance imaging to assess the degree to which spoken sentences were processed under distraction and whether this depended on speech quality even when intelligibility of degraded speech was matched to that of clear speech (close to 100%). On each trial, male and female human participants either attended to a sentence or to a concurrent multiple object tracking (MOT) task that imposed parametric cognitive load. Activity in bilateral anterior insula reflected task demands; during the MOT task, activity increased as cognitive load increased, and during speech listening, activity increased as speech became more degraded. In marked contrast, activity in bilateral anterior temporal cortex was speech selective and gated by attention when speech was degraded. In this region, performance of the MOT task with a trivial load blocked processing of degraded speech, whereas processing of clear speech was unaffected. As load increased, responses to clear speech in these areas declined, consistent with reduced capacity to process it. This result dissociates cognitive control from speech processing; substantially less cognitive control is required to process clear speech than is required to understand even very mildly degraded, 100% intelligible speech. Perceptual and control systems clearly interact dynamically during real-world speech comprehension.

SIGNIFICANCE STATEMENT: Speech is often perfectly intelligible even when degraded, for example, by background sound, phone transmission, or hearing loss. How does degradation alter cognitive demands?
Here, we use fMRI to demonstrate a novel and critical role for cognitive control in the processing of mildly degraded but perfectly intelligible speech. We compare speech that is matched for intelligibility but differs in putative control demands, dissociating cognitive control from speech processing. We also impose a parametric cognitive load during perception, dissociating processes that depend on tasks from those that depend on available capacity. Our findings distinguish between frontal and temporal contributions to speech perception and reveal a hidden cost to processing mildly degraded speech, underscoring the importance of cognitive control for everyday speech comprehension.
Affiliation(s)
- Harrison Ritz
- Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, Rhode Island 02912
- Conor J Wild
- Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Ingrid S Johnsrude
- Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Departments of Psychology and Communication Sciences and Disorders, University of Western Ontario, London, Ontario N6A 3K7, Canada
39
Vaden KI, Teubner-Rhodes S, Ahlstrom JB, Dubno JR, Eckert MA. Evidence for cortical adjustments to perceptual decision criteria during word recognition in noise. Neuroimage 2022; 253:119042. [PMID: 35259524 PMCID: PMC9082296 DOI: 10.1016/j.neuroimage.2022.119042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2021] [Revised: 02/23/2022] [Accepted: 02/26/2022] [Indexed: 01/31/2023] Open
Abstract
Extensive increases in cingulo-opercular frontal activity are typically observed during speech recognition in noise tasks. This elevated activity has been linked to a word recognition benefit on the next trial, termed "adaptive control," but how this effect might be implemented has been unclear. The established link between perceptual decision making and cingulo-opercular function may provide an explanation for how those regions benefit subsequent word recognition. In this case, processes that support recognition such as raising or lowering the decision criteria for more accurate or faster recognition may be adjusted to optimize performance on the next trial. The current neuroimaging study tested the hypothesis that pre-stimulus cingulo-opercular activity reflects criterion adjustments that determine how much information to collect for word recognition on subsequent trials. Participants included middle-age and older adults (N = 30; age = 58.3 ± 8.8 years; m ± sd) with normal hearing or mild sensorineural hearing loss. During a sparse fMRI experiment, words were presented in multitalker babble at +3 dB or +10 dB signal-to-noise ratio (SNR), which participants were instructed to repeat aloud. Word recognition was significantly poorer with increasing participant age and lower SNR compared to higher SNR conditions. A perceptual decision-making model was used to characterize processing differences based on task response latency distributions. The model showed that significantly less sensory evidence was collected (i.e., lower criteria) for lower compared to higher SNR trials. Replicating earlier observations, pre-stimulus cingulo-opercular activity was significantly predictive of correct recognition on a subsequent trial. Individual differences showed that participants with higher criteria also benefitted the most from pre-stimulus activity. Moreover, trial-level criteria changes were significantly linked to higher versus lower pre-stimulus activity. 
These results suggest that the cingulo-opercular cortex contributes to criterion adjustments that optimize speech recognition task performance.
Affiliation(s)
- Kenneth I. Vaden
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States. Corresponding author: K.I. Vaden Jr.
- Susan Teubner-Rhodes
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Department of Psychological Sciences, 226 Thach Hall, Auburn University, AL 36849-9027
- Jayne B. Ahlstrom
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Judy R. Dubno
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Mark A. Eckert
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
40
Irsik VC, Johnsrude IS, Herrmann B. Age-related deficits in dip-listening evident for isolated sentences but not for spoken stories. Sci Rep 2022; 12:5898. [PMID: 35393472 PMCID: PMC8991280 DOI: 10.1038/s41598-022-09805-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Accepted: 03/23/2022] [Indexed: 12/03/2022] Open
Abstract
Fluctuating background sounds facilitate speech intelligibility by providing speech ‘glimpses’ (masking release). Older adults benefit less from glimpses, but masking release is typically investigated using isolated sentences. Recent work indicates that using engaging, continuous speech materials (e.g., spoken stories) may qualitatively alter speech-in-noise listening. Moreover, neural sensitivity to different amplitude envelope profiles (ramped, damped) changes with age, but whether this affects speech listening is unknown. In three online experiments, we investigate how masking release in younger and older adults differs for masked sentences and stories, and how speech intelligibility varies with masker amplitude profile. Intelligibility was generally greater for damped than ramped maskers. Masking release was reduced in older relative to younger adults for disconnected sentences, and stories with a randomized sentence order. Critically, when listening to stories with an engaging and coherent narrative, older adults demonstrated equal or greater masking release compared to younger adults. Older adults thus appear to benefit from ‘glimpses’ as much as, or more than, younger adults when the speech they are listening to follows a coherent topical thread. Our results highlight the importance of cognitive and motivational factors for speech understanding, and suggest that previous work may have underestimated speech-listening abilities in older adults.
Affiliation(s)
- Vanessa C Irsik
- Department of Psychology & The Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada
- Ingrid S Johnsrude
- Department of Psychology & The Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada
- School of Communication and Speech Disorders, The University of Western Ontario, London, ON, N6A 5B7, Canada
- Björn Herrmann
- Department of Psychology & The Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada
- Rotman Research Institute, Baycrest, Toronto, ON, M6A 2E1, Canada
- Department of Psychology, University of Toronto, Toronto, ON, M5S 1A1, Canada
41
Irsik VC, Johnsrude IS, Herrmann B. Neural Activity during Story Listening Is Synchronized across Individuals Despite Acoustic Masking. J Cogn Neurosci 2022; 34:933-950. [PMID: 35258555 DOI: 10.1162/jocn_a_01842] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Older people with hearing problems often experience difficulties understanding speech in the presence of background sound. As a result, they may disengage in social situations, which has been associated with negative psychosocial health outcomes. Measuring listening (dis)engagement during challenging listening situations has received little attention thus far. We recruit young, normal-hearing human adults (both sexes) and investigate how speech intelligibility and engagement during naturalistic story listening is affected by the level of acoustic masking (12-talker babble) at different signal-to-noise ratios (SNRs). In Experiment 1, we observed that word-report scores were above 80% for all but the lowest SNR (-3 dB SNR) we tested, at which performance dropped to 54%. In Experiment 2, we calculated intersubject correlation (ISC) using EEG data to identify dynamic spatial patterns of shared neural activity evoked by the stories. ISC has been used as a neural measure of participants' engagement with naturalistic materials. Our results show that ISC was stable across all but the lowest SNRs, despite reduced speech intelligibility. Comparing ISC and intelligibility demonstrated that word-report performance declined more strongly with decreasing SNR compared to ISC. Our measure of neural engagement suggests that individuals remain engaged in story listening despite missing words because of background noise. Our work provides a potentially fruitful approach to investigate listener engagement with naturalistic, spoken stories that may be used to investigate (dis)engagement in older adults with hearing impairment.
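For orientation, the intersubject correlation (ISC) measure mentioned above reduces, in its simplest form, to averaging pairwise Pearson correlations of a response time course across subjects. Below is a minimal sketch under that simplification; the authors' EEG pipeline (correlated-components analysis across channels) is more elaborate, and the function name and simulated data here are illustrative assumptions.

```python
import numpy as np

def intersubject_correlation(data):
    """Simplest-case ISC: data is (n_subjects, n_timepoints) for one
    channel; return the mean Pearson r over all unique subject pairs."""
    r = np.corrcoef(data)                     # subject-by-subject correlations
    iu = np.triu_indices(data.shape[0], k=1)  # upper triangle = unique pairs
    return r[iu].mean()

# Simulated subjects: a shared stimulus-driven component plus private noise.
rng = np.random.default_rng(1)
shared = rng.standard_normal(1000)
subjects = shared + 0.5 * rng.standard_normal((8, 1000))
isc_shared = intersubject_correlation(subjects)
isc_noise = intersubject_correlation(rng.standard_normal((8, 1000)))
```

With these variances (shared 1, private 0.25), `isc_shared` lands near 0.8 while `isc_noise` hovers near zero; in the study, higher ISC across listeners is taken as a sign of shared engagement with the story.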
Affiliation(s)
- Björn Herrmann
- The University of Western Ontario
- Rotman Research Institute, Toronto, ON, Canada
- University of Toronto
42
Hong L, Zeng Q, Li K, Luo X, Xu X, Liu X, Li Z, Fu Y, Wang Y, Zhang T, Chen Y, Liu Z, Huang P, Zhang M. Intrinsic Brain Activity of Inferior Temporal Region Increased in Prodromal Alzheimer's Disease With Hearing Loss. Front Aging Neurosci 2022; 13:772136. [PMID: 35153717 PMCID: PMC8831745 DOI: 10.3389/fnagi.2021.772136] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Accepted: 12/31/2021] [Indexed: 01/13/2023] Open
Abstract
BACKGROUND AND OBJECTIVE: Hearing loss (HL) is one of the modifiable risk factors for Alzheimer's disease (AD). However, the mechanism underlying HL in AD remains elusive. A possible mechanism is the cognitive load hypothesis, which postulates that over-processing of degraded auditory signals in the auditory cortex leads to deficits in other cognitive functions. Given that mild cognitive impairment (MCI) is a prodromal stage of AD, untangling the association between HL and MCI might provide insights into the potential mechanism behind HL.

METHODS: We included 85 cognitively normal (CN) subjects with no hearing loss (NHL), 24 CN subjects with HL, 103 MCI patients with NHL, and 23 MCI patients with HL from the ADNI database. All subjects underwent resting-state functional MRI and neuropsychological scale assessments. Fractional amplitude of low-frequency fluctuation (fALFF) was used to reflect spontaneous brain activity. A mixed-effects analysis was applied to explore the interactive effects between HL and cognitive status (GRF corrected, voxel p-value < 0.005, cluster p-value < 0.05, two-tailed). FDG data were then included to further reflect regional neuronal abnormalities. Finally, Pearson correlation analysis was performed between imaging metrics and cognitive scores to explore the clinical significance (Bonferroni corrected, p < 0.05).

RESULTS: The interactive effects were primarily located in the left superior temporal gyrus (STG) and bilateral inferior temporal gyrus (ITG). Post-hoc analysis showed that CN subjects with HL had lower fALFF in bilateral ITG compared to CN subjects with NHL; higher fALFF in the left STG and lower fALFF in bilateral ITG compared to MCI patients with HL; and lower fALFF in the right ITG compared to MCI patients with NHL. Correlation analysis revealed that fALFF was associated with MMSE and ADNI-VS, while SUVR was associated with MMSE, MoCA, ADNI-EF and ADNI-Lan.

CONCLUSION: HL showed different effects at the CN and MCI stages. At the CN stage, HL was associated with increased spontaneous brain activity in the auditory cortex and decreased activity in the ITG. This pattern changed with disease stage, manifesting as decreased activity in the auditory cortex along with increased activity in the ITG in MCI. This suggests that the cognitive load hypothesis may be the underlying mechanism behind HL in AD.
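For reference, the fALFF metric used above is commonly defined as the fraction of a voxel's amplitude spectrum falling in a low-frequency band (typically 0.01–0.08 Hz). Below is a minimal sketch of that definition only, not the authors' preprocessing pipeline; the toy signals and function name are illustrative assumptions.

```python
import numpy as np

def falff(ts, tr, band=(0.01, 0.08)):
    """Fractional ALFF of one time series: sum of spectral amplitudes
    inside the low-frequency band divided by the sum over all
    frequencies. tr is the repetition time in seconds."""
    amps = np.abs(np.fft.rfft(ts - ts.mean()))
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return amps[in_band].sum() / amps.sum()

tr = 2.0                                # a typical fMRI repetition time
t = np.arange(200) * tr
slow = np.sin(2 * np.pi * 0.05 * t)     # 0.05 Hz: inside the band
fast = np.sin(2 * np.pi * 0.20 * t)     # 0.20 Hz: outside the band
```

Here `falff(slow, tr)` comes out near 1 and `falff(fast, tr)` near 0, which is the sense in which fALFF indexes the relative weight of slow spontaneous fluctuations in the BOLD signal.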
Affiliation(s)
- Luwei Hong
- Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Qingze Zeng
- Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Kaicheng Li
- Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Xiao Luo
- Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Xiaopei Xu
- Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Xiaocao Liu
- Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Zheyu Li
- Department of Neurology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Yanv Fu
- Department of Neurology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Yanbo Wang
- Department of Neurology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Tianyi Zhang
- Department of Neurology, Tongde Hospital of Zhejiang Province, Hangzhou, China
- Yanxing Chen
- Department of Neurology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Zhirong Liu
- Department of Neurology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Peiyu Huang
- Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Minming Zhang
- Department of Radiology, The 2nd Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
43
Sun PW, Hines A. Listening Effort Informed Quality of Experience Evaluation. Front Psychol 2022; 12:767840. [PMID: 35069342 PMCID: PMC8766726 DOI: 10.3389/fpsyg.2021.767840] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Accepted: 10/31/2021] [Indexed: 11/15/2022] Open
Abstract
Perceived quality of experience for speech listening is influenced by cognitive processing and can affect a listener's comprehension, engagement and responsiveness. Quality of Experience (QoE) is a paradigm used within the media technology community to assess media quality by linking quantifiable media parameters to perceived quality. The established QoE framework provides a general definition of QoE, categories of possible quality influencing factors, and an identified QoE formation pathway, assisting researchers in implementing experiments and evaluating perceived quality for any application. The QoE formation pathways in the current framework do not attempt to capture cognitive effort effects, and the standard experimental assessments of QoE minimize the influence of cognitive processes. The impact of cognitive processes, and how they can be captured within the QoE framework, has not been systematically studied by the QoE research community. This article reviews research from the fields of audiology and cognitive science regarding how cognitive processes influence the quality of listening experience. The cognitive listening mechanism theories are compared with the QoE formation mechanism in terms of quality contributing factors, experience formation pathways, and measures of experience. The review prompts a proposal to integrate mechanisms from audiology and cognitive science into the existing QoE framework in order to properly account for cognitive load in speech listening. The article concludes with a discussion of how an extended framework could facilitate measurement of QoE in broader and more realistic application scenarios where cognitive effort is a material consideration.
Affiliation(s)
- Pheobe Wenyi Sun
- QxLab, School of Computer Science, University College Dublin, Dublin, Ireland
- Andrew Hines
- QxLab, School of Computer Science, University College Dublin, Dublin, Ireland
44
Perea Pérez F, Hartley DE, Kitterick PT, Wiggins IM. Perceived Listening Difficulties of Adult Cochlear-Implant Users Under Measures Introduced to Combat the Spread of COVID-19. Trends Hear 2022; 26:23312165221087011. [PMID: 35440245 PMCID: PMC9024163 DOI: 10.1177/23312165221087011] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2021] [Revised: 02/17/2022] [Accepted: 02/24/2022] [Indexed: 11/23/2022] Open
Abstract
Following the outbreak of the COVID-19 pandemic, public-health measures introduced to stem the spread of the disease caused profound changes to patterns of daily-life communication. This paper presents the results of an online survey conducted to document adult cochlear-implant (CI) users' perceived listening difficulties under four communication scenarios commonly experienced during the pandemic, specifically when talking: with someone wearing a facemask, under social/physical distancing guidelines, via telephone, and via video call. Results from ninety-four respondents indicated that people considered their in-person listening experiences in some common everyday scenarios to have been significantly worsened by the introduction of mask-wearing and physical distancing. Participants reported experiencing an array of listening difficulties, including reduced speech intelligibility and increased listening effort, which resulted in many people actively avoiding certain communication scenarios at least some of the time. Participants also found listening effortful during remote communication, which became rapidly more prevalent following the outbreak of the pandemic. Potential solutions identified by participants to ease the burden of everyday listening with a CI may have applicability beyond the context of the COVID-19 pandemic. Specifically, the results emphasized the importance of visual cues, including lipreading and live speech-to-text transcriptions, to improve in-person and remote communication for people with a CI.
Affiliation(s)
- Francisca Perea Pérez
- National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, UK
- Hearing Sciences, Division of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK
- Douglas E.H. Hartley
- National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, UK
- Hearing Sciences, Division of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK
- Nottingham University Hospitals NHS Trust, Nottingham, UK
- Pádraig T. Kitterick
- Hearing Sciences, Division of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK
- National Acoustic Laboratories, Sydney, Australia
- Ian M. Wiggins
- National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, UK
- Hearing Sciences, Division of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK
45
Eckert MA, Teubner-Rhodes S, Vaden KI, Ahlstrom JB, McClaskey CM, Dubno JR. Unique patterns of hearing loss and cognition in older adults' neural responses to cues for speech recognition difficulty. Brain Struct Funct 2022; 227:203-218. [PMID: 34632538 PMCID: PMC9044122 DOI: 10.1007/s00429-021-02398-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2021] [Accepted: 09/26/2021] [Indexed: 01/31/2023]
Abstract
Older adults with hearing loss experience significant difficulties understanding speech in noise, perhaps due in part to limited benefit from supporting executive functions that enable the use of environmental cues signaling changes in listening conditions. Here we examined the degree to which 41 older adults (60.56-86.25 years) exhibited cortical responses to informative cues that signaled the listening difficulty of each trial, compared to neutral cues that were uninformative of listening difficulty. Word recognition was significantly higher for informative compared to uninformative cues in a +10 dB signal-to-noise ratio (SNR) condition, and response latencies were significantly shorter for informative cues in both the +10 dB SNR and the more challenging +2 dB SNR conditions. Informative cues were associated with elevated blood oxygenation level-dependent contrast in visual and parietal cortex. A cue-SNR interaction effect was observed in the cingulo-opercular (CO) network, such that activity differed between SNR conditions only when an informative cue was presented; that is, participants used the informative cues to prepare for changes in listening difficulty from one trial to the next. This cue-SNR interaction effect was driven by older adults with more low-frequency hearing loss and was not observed in those with more high-frequency hearing loss, poorer set-shifting task performance, and lower frontal operculum gray matter volume. These results suggest that proactive strategies for engaging CO adaptive control may be important for older adults with high-frequency hearing loss to optimize speech recognition in changing and challenging listening conditions.
Affiliation(s)
- Mark A Eckert
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC, 29425-5500, USA
- Kenneth I Vaden
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC, 29425-5500, USA
- Jayne B Ahlstrom
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC, 29425-5500, USA
- Carolyn M McClaskey
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC, 29425-5500, USA
- Judy R Dubno
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC, 29425-5500, USA
46
Amichetti NM, Neukam J, Kinney AJ, Capach N, March SU, Svirsky MA, Wingfield A. Adults with cochlear implants can use prosody to determine the clausal structure of spoken sentences. J Acoust Soc Am 2021; 150:4315. [PMID: 34972310] [PMCID: PMC8674009] [DOI: 10.1121/10.0008899] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 05/24/2021] [Revised: 11/04/2021] [Accepted: 11/08/2021] [Indexed: 06/14/2023]
Abstract
Speech prosody, including pitch contour, word stress, pauses, and vowel lengthening, can aid the detection of the clausal structure of a multi-clause sentence and this, in turn, can help listeners determine the meaning. However, for cochlear implant (CI) users, the reduced acoustic richness of the signal raises the question of whether CI users have difficulty using sentence prosody to detect syntactic clause boundaries within sentences or whether this ability is rescued by the redundancy of the prosodic features that normally co-occur at clause boundaries. Twenty-two CI users, ranging in age from 19 to 77 years old, recalled three types of sentences: sentences in which the prosodic pattern was appropriate to the location of a clause boundary within the sentence (congruent prosody), sentences with reduced prosodic information, and sentences in which the location of the clause boundary and the prosodic marking of a clause boundary were placed in conflict. The results showed the presence of congruent prosody to be associated with superior sentence recall and reduced processing effort, as indexed by pupil dilation. Individual differences in a standard test of word recognition (consonant-nucleus-consonant score) were related to both recall accuracy and processing effort. The outcomes are discussed in terms of the redundancy of the prosodic features that normally accompany a clause boundary and in terms of processing effort.
Affiliation(s)
- Nicole M Amichetti
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Jonathan Neukam
- Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Alexander J Kinney
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Nicole Capach
- Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Samantha U March
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Mario A Svirsky
- Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Arthur Wingfield
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
47
47
|
Murai S, Yang AN, Hiryu S, Kobayasi KI. Music in Noise: Neural Correlates Underlying Noise Tolerance in Music-Induced Emotion. Cereb Cortex Commun 2021; 2:tgab061. [PMID: 34746792] [PMCID: PMC8564766] [DOI: 10.1093/texcom/tgab061] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/11/2020] [Revised: 09/25/2021] [Accepted: 09/26/2021] [Indexed: 11/14/2022]
Abstract
Music can be experienced in various acoustic qualities. In this study, we investigated how the acoustic quality of music can influence strong emotional experiences, such as musical chills, and the associated neural activity. The music's acoustic quality was controlled by adding noise to musical pieces. Participants listened to clear and noisy musical pieces and pressed a button when they experienced chills. We estimated neural activity in response to chills under both clear and noisy conditions using functional magnetic resonance imaging (fMRI). The behavioral data revealed that, compared with the clear condition, the noisy condition dramatically decreased the number and duration of chills. The fMRI results showed that under both noisy and clear conditions the supplementary motor area, insula, and superior temporal gyrus were similarly activated when participants experienced chills. The involvement of these brain regions may be crucial for music-induced emotional processes under the noisy as well as the clear condition. In addition, we found a decrease in the activation of the right superior temporal sulcus when experiencing chills under the noisy condition, which suggests that music-induced emotional processing is sensitive to acoustic quality.
Affiliation(s)
- Shota Murai
- Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto 610-0321, Japan
- Ae Na Yang
- Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto 610-0321, Japan
- Shizuko Hiryu
- Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto 610-0321, Japan
- Kohta I Kobayasi
- Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto 610-0321, Japan
48
48
|
Brisson V, Tremblay P. Improving speech perception in noise in young and older adults using transcranial magnetic stimulation. Brain Lang 2021; 222:105009. [PMID: 34425411] [DOI: 10.1016/j.bandl.2021.105009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 10/28/2020] [Revised: 08/06/2021] [Accepted: 08/12/2021] [Indexed: 06/13/2023]
Abstract
Normal aging is associated with speech perception in noise (SPiN) difficulties. The objective of this study was to determine whether SPiN performance can be enhanced by intermittent theta-burst stimulation (iTBS) in young and older adults. METHOD: We developed a sub-lexical SPiN test to evaluate the contribution of age, hearing, and cognition to SPiN performance in young and older adults. iTBS was applied to the left posterior superior temporal sulcus (pSTS) and the left ventral premotor cortex (PMv) to examine its impact on SPiN performance. RESULTS: Aging was associated with reduced SPiN accuracy. TMS-induced performance gain was greater after stimulation of the PMv than of the pSTS. Participants with lower scores in the baseline condition improved the most. DISCUSSION: SPiN difficulties can be reduced by enhancing activity within the left speech-processing network in adults. This study paves the way for the development of TMS-based interventions to reduce SPiN difficulties in adults.
Affiliation(s)
- Valérie Brisson
- Département de réadaptation, Université Laval, Québec, Canada; Centre de recherche CERVO, Québec, Canada
- Pascale Tremblay
- Département de réadaptation, Université Laval, Québec, Canada; Centre de recherche CERVO, Québec, Canada
49
Defenderfer J, Forbes S, Wijeakumar S, Hedrick M, Plyler P, Buss AT. Frontotemporal activation differs between perception of simulated cochlear implant speech and speech in background noise: An image-based fNIRS study. Neuroimage 2021; 240:118385. [PMID: 34256138] [PMCID: PMC8503862] [DOI: 10.1016/j.neuroimage.2021.118385] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Received: 02/26/2021] [Revised: 06/10/2021] [Accepted: 07/09/2021] [Indexed: 10/27/2022]
Abstract
In this study, we used functional near-infrared spectroscopy (fNIRS) to investigate neural responses in normal-hearing adults as a function of speech recognition accuracy, intelligibility of the speech stimulus, and the manner in which speech is distorted. Participants listened to sentences and reported aloud what they heard. Speech quality was distorted artificially by vocoding (simulated cochlear implant speech) or naturally by adding background noise. Each type of distortion included high- and low-intelligibility conditions. Sentences in quiet were used as a baseline comparison. fNIRS data were analyzed using a newly developed image reconstruction approach. First, elevated cortical responses in the middle temporal gyrus (MTG) and middle frontal gyrus (MFG) were associated with speech recognition during the low-intelligibility conditions. Second, activation in the MTG was associated with recognition of vocoded speech with low intelligibility, whereas MFG activity was largely driven by recognition of speech in background noise, suggesting that the cortical response varies as a function of distortion type. Lastly, an accuracy effect in the MFG demonstrated significantly higher activation during correct relative to incorrect perception of speech. These results suggest that normal-hearing adults (i.e., untrained listeners of vocoded stimuli) do not exploit the same attentional mechanisms of the frontal cortex used to resolve naturally degraded speech and may instead rely on segmental and phonetic analyses in the temporal lobe to discriminate vocoded speech.
Affiliation(s)
- Jessica Defenderfer
- Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States
- Samuel Forbes
- Psychology, University of East Anglia, Norwich, England
- Mark Hedrick
- Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States
- Patrick Plyler
- Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States
- Aaron T Buss
- Psychology, University of Tennessee, Knoxville, TN, United States
50
50
|
De Groote E, Eqlimi E, Bockstael A, Botteldooren D, Santens P, De Letter M. Parkinson's disease affects the neural alpha oscillations associated with speech-in-noise processing. Eur J Neurosci 2021; 54:7355-7376. [PMID: 34617350] [DOI: 10.1111/ejn.15477] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Received: 05/27/2021] [Revised: 09/03/2021] [Accepted: 09/21/2021] [Indexed: 11/29/2022]
Abstract
Parkinson's disease (PD) has increasingly been associated with auditory dysfunction, including alterations in the control of auditory information processing. Although these alterations may interfere with the processing of speech in degraded listening conditions, behavioural studies have generally found preserved speech-in-noise recognition in PD. However, behavioural speech audiometry does not capture the neurophysiological mechanisms supporting speech-in-noise processing. Therefore, the aim of this study was to investigate the neural oscillatory mechanisms associated with speech-in-noise processing in PD. Twelve persons with PD and 12 age- and gender-matched healthy controls (HCs) were included in this study. Persons with PD were studied in the medication-off condition. All subjects underwent an audiometric screening and performed a sentence-in-noise recognition task under simultaneous electroencephalography (EEG) recording. Behavioural speech recognition scores and self-reported ratings of effort, performance, and motivation were collected. Time-frequency analysis of EEG data revealed no significant difference between persons with PD and HCs regarding delta-theta (2-8 Hz) inter-trial phase coherence to noise and sentence onset. In contrast, significantly increased alpha (8-12 Hz) power was found in persons with PD compared with HCs during the sentence-in-noise recognition task. Behaviourally, persons with PD demonstrated significantly decreased speech recognition scores, whereas no significant differences were found regarding effort, performance, and motivation ratings. These results suggest that persons with PD allocate more cognitive resources to support speech-in-noise processing. The interpretation of this finding is discussed in the context of a top-down mediated compensation mechanism for inefficient filtering and degradation of auditory input in PD.
Affiliation(s)
- Evelien De Groote
- Department of Rehabilitation Sciences, BrainComm Research Group, Ghent University, Ghent, Belgium
- Ehsan Eqlimi
- Department of Information Technology, WAVES Research Group, Ghent University, Ghent, Belgium
- Annelies Bockstael
- Department of Information Technology, WAVES Research Group, Ghent University, Ghent, Belgium
- Dick Botteldooren
- Department of Information Technology, WAVES Research Group, Ghent University, Ghent, Belgium
- Patrick Santens
- Department of Neurology, Ghent University Hospital, Ghent, Belgium
- Miet De Letter
- Department of Rehabilitation Sciences, BrainComm Research Group, Ghent University, Ghent, Belgium