1. Shen J, Sun J, Zhang Z, Sun B, Li H, Liu Y. The Effect of Hearing Loss and Working Memory Capacity on Context Use and Reliance on Context in Older Adults. Ear Hear 2024;45:787-800. [PMID: 38273447] [DOI: 10.1097/aud.0000000000001470]
Abstract
OBJECTIVES: Older adults often complain of difficulty communicating in noisy environments. Contextual information is considered an important cue for identifying everyday speech. To date, it has not been clear exactly how context use (CU) and reliance on context in older adults are affected by hearing status and cognitive function. The present study examined the effects of semantic context on speech recognition, recall, perceived listening effort (LE), and noise tolerance, and further explored the impacts of hearing loss and working memory capacity on CU and reliance on context among older adults.

DESIGN: Fifty older adults with normal hearing and 56 older adults with mild-to-moderate hearing loss, aged between 60 and 95 years, participated in this study. A median split of the backward digit span further classified the participants into high working memory (HWM) and low working memory (LWM) capacity groups. Each participant performed high- and low-context Repeat and Recall tests, comprising a sentence repeat and delayed recall task, subjective assessments of LE, and tolerable time under seven signal-to-noise ratios (SNRs). CU was calculated as the difference between performance on high- and low-context sentences for each outcome measure. The proportion of context use (PCU) in high-context performance was taken as the reliance on context, capturing the degree to which participants relied on context when repeating and recalling high-context sentences.

RESULTS: Semantic context improved speech recognition and delayed recall, reduced perceived LE, and prolonged noise tolerance in older adults with and without hearing loss. In addition, the adverse effects of hearing loss on repeat-task performance were more pronounced in low context than in high context, whereas the effects on recall tasks and noise tolerance time were more significant in high context than in low context. Compared with other tasks, CU and PCU in repeat tasks were more affected by hearing status and working memory capacity. In the repeat phase, hearing loss increased older adults' reliance on context in relatively challenging listening environments: at SNRs of 0 and -5 dB, the PCU (repeat) of the hearing loss group was significantly greater than that of the normal-hearing group, whereas there was no significant difference between the two hearing groups at the remaining SNRs. In addition, older adults with LWM had significantly greater CU and PCU in repeat tasks than those with HWM, especially at SNRs with moderate task demands.

CONCLUSIONS: Taken together, semantic context not only improved speech perception intelligibility but also released cognitive resources for memory encoding in older adults. Mild-to-moderate hearing loss and LWM capacity in older adults significantly increased the use of and reliance on semantic context, and these effects were modulated by the SNR.
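The two context measures defined above reduce to simple arithmetic. The sketch below is one plausible reading of the abstract's definitions (CU as the high-minus-low difference, PCU as that difference expressed as a proportion of the high-context score); the variable names and example numbers are illustrative, not the study's data.

```python
import numpy as np

def context_measures(high_context, low_context):
    """Context use (CU) and proportion of context use (PCU).

    high_context, low_context: per-participant scores (e.g., % key words
    correct in the repeat task) for high- and low-context sentences at a
    single SNR.
    """
    cu = high_context - low_context   # absolute context benefit
    pcu = cu / high_context           # benefit as a share of high-context score
    return cu, pcu

# Fabricated scores for three listeners at one SNR, for illustration only
high = np.array([80.0, 72.0, 90.0])
low = np.array([55.0, 60.0, 70.0])
cu, pcu = context_measures(high, low)
print(cu)   # [25. 12. 20.] percentage points
print(pcu)  # [0.3125 0.1667 0.2222] proportion of high-context performance
```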
Affiliation(s)
- Jiayuan Shen
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Zhejiang, China
- Jiayu Sun
- Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhikai Zhang
- Department of Otolaryngology, Head and Neck Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Baoxuan Sun
- Training Department, Widex Hearing Aid (Shanghai) Co., Ltd, Shanghai, China
- Haitao Li
- Department of Neurology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- These authors contributed equally to this work and are co-corresponding authors
- Yuhe Liu
- Department of Otolaryngology, Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- These authors contributed equally to this work and are co-corresponding authors
2. Homman L, Danielsson H, Rönnberg J. A structural equation mediation model captures the predictions amongst the parameters of the ease of language understanding model. Front Psychol 2023;14:1015227. [PMID: 36936006] [PMCID: PMC10020708] [DOI: 10.3389/fpsyg.2023.1015227]
Abstract
Objective: The aim of the present study was to assess the validity of the Ease of Language Understanding (ELU) model through a statistical assessment of the relationships among its main parameters: processing speed, phonology, working memory (WM), and the signal-to-noise ratio (dB SNR) required for a given speech recognition threshold (SRT), in a sample of hearing aid users from the n200 database.

Methods: Hearing aid users were assessed on several hearing and cognitive tests. Latent structural equation models (SEMs) were applied to investigate the relationships between the main parameters of the ELU model while controlling for age and pure-tone average (PTA). Several competing models were assessed.

Results: Analyses indicated that a mediating SEM was the best fit for the data. The results showed that (i) phonology independently predicted the speech recognition threshold in both easy and adverse listening conditions, (ii) WM was not predictive of dB SNR for a given SRT in the easier listening conditions, and (iii) processing speed was predictive of dB SNR for a given SRT, mediated via WM, in the more adverse conditions.

Conclusion: The results were in line with the predictions of the ELU model: (i) phonology contributed to dB SNR for a given SRT in all listening conditions, (ii) WM is only invoked when listening conditions are adverse, (iii) better WM capacity aids the understanding of what has been said in adverse listening conditions, and (iv) the results highlight the importance of optimizing processing speed in conditions where listening is adverse and WM is activated.
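To make the mediation structure concrete, here is a minimal sketch of the winning path model (processing speed to WM to adverse-condition SRT, with phonology as a direct predictor), using the semopy package on simulated observed scores. The package choice, variable names, and omission of the age/PTA covariates and the latent measurement models are all simplifications for illustration, not the authors' actual analysis.

```python
import numpy as np
import pandas as pd
import semopy

# Simulated scores only; the real analysis used latent variables from the
# n200 database and controlled for age and PTA.
rng = np.random.default_rng(0)
n = 200
speed = rng.standard_normal(n)
wm = 0.5 * speed + rng.standard_normal(n)            # speed feeds WM
phon = rng.standard_normal(n)
srt_adverse = -0.4 * phon - 0.3 * wm + rng.standard_normal(n)
srt_easy = -0.4 * phon + rng.standard_normal(n)      # WM not needed when easy
df = pd.DataFrame(dict(speed=speed, wm=wm, phon=phon,
                       srt_adverse=srt_adverse, srt_easy=srt_easy))

# Mediation structure: speed -> WM -> adverse SRT; phonology acts directly
desc = """
wm ~ speed
srt_adverse ~ phon + wm
srt_easy ~ phon
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())           # path estimates
print(semopy.calc_stats(model))  # fit indices (CFI, RMSEA, ...)
```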
Affiliation(s)
- Lina Homman
- Disability Research Division (FuSa), Department of Behavioural Sciences and Learning (IBL), Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Correspondence: Lina Homman
- Henrik Danielsson
- Disability Research Division (FuSa), Department of Behavioural Sciences and Learning (IBL), Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Jerker Rönnberg
- Disability Research Division (FuSa), Department of Behavioural Sciences and Learning (IBL), Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
3. Rönnberg J, Signoret C, Andin J, Holmer E. The cognitive hearing science perspective on perceiving, understanding, and remembering language: The ELU model. Front Psychol 2022;13:967260. [PMID: 36118435] [PMCID: PMC9477118] [DOI: 10.3389/fpsyg.2022.967260]
Abstract
This review gives an introductory description of the successive development of data patterns based on comparisons between hearing-impaired and normal-hearing participants' speech understanding skills, which later prompted the formulation of the Ease of Language Understanding (ELU) model. The model builds on the interaction between an input buffer (RAMBPHO, Rapid Automatic Multimodal Binding of PHOnology) and three memory systems: working memory (WM), semantic long-term memory (SLTM), and episodic long-term memory (ELTM). RAMBPHO input may either match or mismatch multimodal SLTM representations. Given a match, lexical access is accomplished rapidly and implicitly, within approximately 100-400 ms. Given a mismatch, the prediction is that WM is engaged explicitly to repair the meaning of the input, in interaction with SLTM and ELTM, taking seconds rather than milliseconds. The multimodal and multilevel nature of the representations held in WM and LTM is at the center of the review, these being integral parts of the prediction and postdiction components of language understanding. Finally, some hypotheses based on a selective use-disuse of memory systems mechanism are described in relation to mild cognitive impairment and dementia. Alternative speech perception and WM models are evaluated, and recent developments and generalisations, ELU model tests, and boundaries are discussed.
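The match/mismatch logic can be caricatured as a two-route lookup. The toy sketch below is purely illustrative of that idea: the similarity measure, threshold, and fallback are invented, and nothing here implements RAMBPHO or the ELU model itself.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def understand(rambpho_input, sltm, threshold=0.8):
    """Toy two-route lookup loosely inspired by the ELU match/mismatch idea."""
    best_item, best_sim = max(
        ((item, cosine(rambpho_input, vec)) for item, vec in sltm.items()),
        key=lambda pair: pair[1],
    )
    if best_sim >= threshold:
        # Match: rapid, implicit lexical access (~100-400 ms in the model)
        return best_item, "implicit"
    # Mismatch: the model posits explicit WM-based repair in interaction with
    # SLTM/ELTM (seconds rather than ms); stubbed here as a simple fallback
    return best_item, "explicit"

sltm = {"boat": np.array([1.0, 0.0, 0.2]), "coat": np.array([0.9, 0.1, 0.3])}
print(understand(np.array([0.95, 0.05, 0.25]), sltm))  # ('boat', 'implicit')
```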
Affiliation(s)
- Jerker Rönnberg
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
4. Sun J, Zhang Z, Sun B, Liu H, Wei C, Liu Y. The effect of aging on context use and reliance on context in speech: A behavioral experiment with Repeat–Recall Test. Front Aging Neurosci 2022;14:924193. [PMID: 35936762] [PMCID: PMC9354826] [DOI: 10.3389/fnagi.2022.924193]
Abstract
Purpose: To elucidate how aging affects the extent of semantic context use and the reliance on semantic context, measured with the Repeat–Recall Test (RRT).

Methods: A younger adult group (YA) aged between 18 and 25 years and an older adult group (OA) aged between 50 and 65 years were recruited. Participants from both groups performed the RRT (sentence repeat and delayed recall tasks, plus subjective listening effort and noise tolerable time) under two noise types and seven signal-to-noise ratios (SNRs). Performance–intensity curves were fitted, and performance at SRT50 and SRT75 was predicted.

Results: For the repeat task, the OA group used more semantic context and relied more on semantic context than the YA group. For the recall task, the OA group used less semantic context but relied more on context than the YA group. Age did not affect subjective listening effort but significantly affected noise tolerable time. Participants in both age groups could use more context at SRT75 than at SRT50 on the four RRT tasks. At the same SRT, however, the YA group could use more context in the repeat and recall tasks than the OA group.

Conclusion: Age affected the use of and reliance on semantic context. Even though the OA group used more context in speech recognition, they failed at speech information maintenance (recall) even with the help of semantic context. The OA group relied more on context while performing repeat and recall tasks. The amount of context used was also influenced by the SRT.
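Fitting performance–intensity curves and reading off SRT50 and SRT75 is a standard psychometric step. Below is a minimal sketch assuming a two-parameter logistic function and fabricated scores at the seven SNRs; the paper's actual fitting procedure is not specified in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, midpoint, slope):
    """Two-parameter logistic performance-intensity function (0-100 %)."""
    return 100.0 / (1.0 + np.exp(-slope * (snr - midpoint)))

# Fabricated repeat-task scores (% correct) at seven SNRs, for illustration
snrs = np.array([-10.0, -5.0, 0.0, 5.0, 10.0, 15.0, 20.0])
scores = np.array([12.0, 35.0, 58.0, 76.0, 88.0, 94.0, 97.0])

(midpoint, slope), _ = curve_fit(logistic, snrs, scores, p0=[0.0, 0.3])

def srt(percent):
    """Invert the fitted curve: the SNR at which performance reaches percent%."""
    return midpoint - np.log(100.0 / percent - 1.0) / slope

print(f"SRT50 = {srt(50):.1f} dB SNR, SRT75 = {srt(75):.1f} dB SNR")
```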
Affiliation(s)
- Jiayu Sun
- Department of Otolaryngology Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Department of Otorhinolaryngology, Head and Neck Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhikai Zhang
- Department of Otolaryngology Head and Neck Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Baoxuan Sun
- Widex Hearing Aid (Shanghai) Co., Ltd, Shanghai, China
- Haotian Liu
- Department of Otolaryngology Head and Neck Surgery, West China Hospital of Sichuan University, Chengdu, China
- Chaogang Wei
- Department of Otolaryngology Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Yuhe Liu
- Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Correspondence: Yuhe Liu
5. Ashori M, Aghaziarati A. The relationships among social-emotional assets and resilience, empathy and behavioral problems in deaf and hard of hearing children. Curr Psychol 2022. [DOI: 10.1007/s12144-022-03152-5]
6. Tamati TN, Sevich VA, Clausing EM, Moberly AC. Lexical Effects on the Perceived Clarity of Noise-Vocoded Speech in Younger and Older Listeners. Front Psychol 2022;13:837644. [PMID: 35432072] [PMCID: PMC9010567] [DOI: 10.3389/fpsyg.2022.837644]
Abstract
When listening to degraded speech, such as speech delivered by a cochlear implant (CI), listeners make use of top-down linguistic knowledge to facilitate speech recognition. Lexical knowledge supports speech recognition and enhances the perceived clarity of speech. Yet the extent to which lexical knowledge can effectively compensate for degraded input may depend on the degree of degradation and the listener's age. The current study investigated lexical effects in the compensation for speech degraded via noise-vocoding in younger and older listeners. In an online experiment, younger and older normal-hearing (NH) listeners rated the clarity of noise-vocoded sentences on a scale from 1 ("very unclear") to 7 ("completely clear"). Lexical information was provided by matching text primes and by the lexical content of the target utterance. Half of the sentences were preceded by a matching text prime, while the other half were preceded by a non-matching prime. Each sentence also contained three key words of high or low lexical frequency and neighborhood density. Sentences were processed to simulate CI hearing, using an eight-channel noise vocoder with varying filter slopes. Results showed that lexical information affected the perceived clarity of noise-vocoded speech: sentences were perceived as clearer when preceded by a matching prime, and when they included key words of high lexical frequency and low neighborhood density. However, the strength of the lexical effects depended on the level of degradation: matching text primes had a greater impact for speech with poorer spectral resolution, whereas lexical content had a smaller impact for speech with poorer spectral resolution. Finally, lexical information appeared to benefit both younger and older listeners. These findings demonstrate that lexical knowledge can be employed by younger and older listeners in cognitive compensation during the processing of noise-vocoded speech, although lexical content may not be as reliable when the signal is highly degraded. The clinical implication is that adult CI users, regardless of age, might use lexical knowledge to compensate for the degraded speech signal, but some may be hindered by a relatively poor signal.
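Noise vocoding of the kind described here follows a well-known recipe: split the signal into analysis bands, extract each band's envelope, and use the envelopes to modulate band-limited noise. The sketch below is a generic eight-channel vocoder, not the study's exact processing; the study manipulated spectral resolution by varying filter slopes, whereas here a fixed 4th-order Butterworth stands in, and the corner frequencies are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(signal, fs, n_channels=8, lo=100.0, hi=8000.0, env_cut=30.0):
    """Generic n-channel noise vocoder (CI-simulation sketch).

    Band edges, filter orders, and the 30 Hz envelope cutoff are assumptions.
    Requires hi < fs / 2.
    """
    rng = np.random.default_rng(0)
    edges = np.geomspace(lo, hi, n_channels + 1)  # log-spaced analysis bands
    env_lp = butter(2, env_cut, btype="low", fs=fs, output="sos")
    out = np.zeros(len(signal))
    for k in range(n_channels):
        band = butter(4, [edges[k], edges[k + 1]], btype="band",
                      fs=fs, output="sos")
        band_sig = sosfiltfilt(band, signal)
        # Envelope: rectify, low-pass, clip any filter undershoot
        envelope = np.clip(sosfiltfilt(env_lp, np.abs(band_sig)), 0.0, None)
        # Replace the band's fine structure with band-limited noise
        carrier = sosfiltfilt(band, rng.standard_normal(len(signal)))
        out += envelope * carrier
    # Match the overall RMS of the input
    return out * np.sqrt(np.mean(signal ** 2) / np.mean(out ** 2))
```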
Affiliation(s)
- Terrin N. Tamati
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH, United States
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Victoria A. Sevich
- Department of Speech and Hearing Science, The Ohio State University, Columbus, OH, United States
- Emily M. Clausing
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH, United States
- Aaron C. Moberly
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH, United States
7. Shechter Shvartzman L, Lavie L, Banai K. Speech Perception in Older Adults: An Interplay of Hearing, Cognition, and Learning? Front Psychol 2022;13:816864. [PMID: 35250748] [PMCID: PMC8891456] [DOI: 10.3389/fpsyg.2022.816864]
Abstract
Older adults with age-related hearing loss exhibit substantial individual differences in speech perception in adverse listening conditions. We propose that the ability to rapidly adapt to changes in the auditory environment (i.e., perceptual learning) is among the processes contributing to these individual differences, in addition to the cognitive and sensory processes explored in the past. Seventy older adults with age-related hearing loss participated in this study. We assessed the relative contribution of hearing acuity, cognitive factors (working memory, vocabulary, and selective attention), rapid perceptual learning of time-compressed speech, and hearing aid use to the perception of speech presented at a natural fast rate (fast speech), speech embedded in babble noise (speech in noise), and competing speech (dichotic listening). Speech perception was modeled as a function of the other variables. For fast speech, age [odds ratio (OR) = 0.79], hearing acuity (OR = 0.62), pre-learning (baseline) perception of time-compressed speech (OR = 1.47), and rapid perceptual learning (OR = 1.36) were all significant predictors. For speech in noise, only hearing acuity and pre-learning perception of time-compressed speech were significant predictors (OR = 0.51 and OR = 1.53, respectively). Consistent with previous findings, the severity of hearing loss and auditory processing (as captured by pre-learning perception of time-compressed speech) were strong contributors to individual differences in fast speech and speech-in-noise perception. Furthermore, older adults with good rapid perceptual learning can use this capacity to partially offset the effects of age and hearing loss on the perception of speech presented at fast conversational rates. Our results highlight the potential contribution of dynamic processes to speech perception.
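The odds ratios reported above are the exponentiated coefficients of a logistic regression. Below is a minimal sketch on simulated data whose true log-odds coefficients are set to match the reported fast-speech ORs (0.79, 0.62, 1.47, 1.36); the column names and the per-standard-deviation scaling are assumptions, not the paper's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 70  # matches the study's sample size; the data here are simulated

# Simulated standardized predictors (z-scores) and a binary trial outcome
df = pd.DataFrame({
    "age_z": rng.standard_normal(n),
    "hearing_z": rng.standard_normal(n),
    "baseline_tc_z": rng.standard_normal(n),
    "learning_z": rng.standard_normal(n),
})
# True log-odds coefficients = ln(0.79), ln(0.62), ln(1.47), ln(1.36)
logit_p = (-0.24 * df.age_z - 0.48 * df.hearing_z
           + 0.39 * df.baseline_tc_z + 0.31 * df.learning_z)
df["correct"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

fit = smf.logit("correct ~ age_z + hearing_z + baseline_tc_z + learning_z",
                df).fit(disp=0)
print(np.exp(fit.params))  # odds ratios: OR < 1 lowers, OR > 1 raises odds
```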
8. Zhong L, Noud BP, Pruitt H, Marcrum SC, Picou EM. Effects of text supplementation on speech intelligibility for listeners with normal and impaired hearing: a systematic review with implications for telecommunication. Int J Audiol 2021;61:1-11. [PMID: 34154488] [DOI: 10.1080/14992027.2021.1937346]
Abstract
OBJECTIVE: Telecommunication can be difficult in the presence of noise or hearing loss. The purpose of this study was to systematically review evidence regarding the effects of text supplementation (e.g., captions, subtitles) of auditory or auditory-visual signals on speech intelligibility for listeners with normal or impaired hearing.

DESIGN: Three databases were searched. Articles were evaluated for inclusion based on the Population Intervention Comparison Outcome framework. The Effective Public Health Practice Project instrument was used to evaluate the quality of the identified articles.

STUDY SAMPLE: After duplicates were removed, the titles and abstracts of 2019 articles were screened. Forty-six full texts were reviewed; ten met the inclusion criteria.

RESULTS: The quality of all ten articles was moderate or strong. The articles demonstrated that text added to auditory (or auditory-visual) signals improved speech intelligibility and that the benefits were largest when auditory signal integrity was low, accuracy of the text was high, and the auditory signal and text were synchronous. Age and hearing loss did not affect benefits from the addition of text.

CONCLUSIONS: Although based on only ten studies, these data support the use of text as a supplement during telecommunication, such as while watching television or during telehealth appointments.
Affiliation(s)
- Ling Zhong
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Brianne P Noud
- Department of Audiology, Center for Hearing and Speech, St. Louis, MO, USA
- Harriet Pruitt
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Speech-Language Pathology, Advanced Therapy Solutions, Clarksville, TN, USA
- Steven C Marcrum
- Department of Otolaryngology, University Hospital Regensburg, Regensburg, Germany
- Erin M Picou
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
9. Brännström KJ, Rudner M, Carlie J, Sahlén B, Gulz A, Andersson K, Johansson R. Listening effort and fatigue in native and non-native primary school children. J Exp Child Psychol 2021;210:105203. [PMID: 34118494] [DOI: 10.1016/j.jecp.2021.105203]
Abstract
Background noise makes listening effortful and may lead to fatigue, which can compromise classroom learning, especially for children with a non-native language background. In the current study, we used pupillometry to investigate listening effort and fatigue during listening comprehension under typical (0 dB signal-to-noise ratio [SNR]) and favorable (+10 dB SNR) listening conditions in 63 Swedish primary school children (7-9 years of age) performing a narrative speech-picture verification task. Our sample comprised both native (n = 25) and non-native (n = 38) speakers of Swedish. Results revealed greater pupil dilation, indicating more listening effort, in the typical listening condition than in the favorable one, and it was primarily the non-native speakers who contributed to this effect (and who also had lower performance accuracy than the native speakers). Furthermore, the native speakers had greater pupil dilation during successful trials, whereas the non-native speakers showed the greatest pupil dilation during unsuccessful trials, especially in the typical listening condition. This pattern indicates that whereas native speakers can apply listening effort to good effect, non-native speakers may have reached their effort ceiling, resulting in poorer listening comprehension. Finally, we found that baseline pupil size decreased over trials, potentially indicating listening-related fatigue, and this effect was greater in the typical listening condition than in the favorable one. Collectively, these results provide novel insight into the dynamics of listening effort, fatigue, and listening comprehension in typical versus favorable classroom conditions, and they demonstrate for the first time how sensitive this interplay is to language experience.
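Pupillometric effort and fatigue indices of the kind used here are typically derived from per-trial baselines and baseline-corrected dilations. The sketch below assumes a fixed pre-stimulus window and uses simulated traces; it is not the study's pipeline.

```python
import numpy as np

def baseline_and_dilation(trials, fs, baseline_s=1.0):
    """Per-trial baseline pupil size and baseline-corrected dilation.

    trials: (n_trials, n_samples) pupil diameters; the first baseline_s
    seconds of each trial are treated as the pre-stimulus window (an
    assumed layout, not the study's exact preprocessing).
    """
    n_base = int(baseline_s * fs)
    baselines = trials[:, :n_base].mean(axis=1)  # fatigue-related index
    dilation = trials - baselines[:, None]       # effort-related index
    return baselines, dilation

def baseline_slope(baselines):
    """Linear trend of baseline pupil size across trials; a negative slope
    is the pattern read as growing listening-related fatigue."""
    slope, _intercept = np.polyfit(np.arange(len(baselines)), baselines, 1)
    return slope

# Simulated traces: 40 trials, 5 s at an assumed 60 Hz sampling rate
fs = 60
rng = np.random.default_rng(2)
trials = 4.0 + rng.normal(0.0, 0.05, size=(40, 5 * fs))
trials[:, fs:] += 0.15                     # crude post-onset dilation
trials += -0.002 * np.arange(40)[:, None]  # baseline drifts downward
b, _d = baseline_and_dilation(trials, fs)
print(baseline_slope(b))                   # approximately -0.002
```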
Affiliation(s)
- K Jonas Brännström
- Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 85 Lund, Sweden
- Mary Rudner
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, 581 83 Linköping, Sweden
- Johanna Carlie
- Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 85 Lund, Sweden
- Birgitta Sahlén
- Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 85 Lund, Sweden
- Agneta Gulz
- Division of Cognitive Science, Lund University, 221 00 Lund, Sweden
- Ketty Andersson
- Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 85 Lund, Sweden
- Roger Johansson
- Department of Psychology, Lund University, 221 00 Lund, Sweden
10. Rönnberg J, Holmer E, Rudner M. Cognitive Hearing Science: Three Memory Systems, Two Approaches, and the Ease of Language Understanding Model. J Speech Lang Hear Res 2021;64:359-370. [PMID: 33439747] [DOI: 10.1044/2020_jslhr-20-00007]
Abstract
Purpose: The purpose of this study was to conceptualize the subtle balancing act between language input and prediction (cognitive priming of future input) in achieving understanding of communicated content. When understanding fails, reconstructive postdiction is initiated. Three memory systems play important roles: working memory (WM), episodic long-term memory (ELTM), and semantic long-term memory (SLTM). The axiom of the Ease of Language Understanding (ELU) model is that explicit WM resources are invoked by a mismatch between language input, in the form of rapid automatic multimodal binding of phonology, and multimodal phonological and lexical representations in SLTM. However, if there is a match between the rapid automatic multimodal binding of phonology output and SLTM/ELTM representations, language processing continues rapidly and implicitly.

Method and Results: In our first ELU approach, we focused on experimental manipulations of signal processing in hearing aids and of background noise to create a mismatch with LTM representations; both resulted in increased dependence on WM. Our second approach, the main one relevant for this review article, focuses on the relative effects of age-related hearing loss on the three memory systems. According to the ELU, WM is predicted to be frequently occupied with reconstructing what was actually heard, resulting in relative disuse of phonological/lexical representations in the ELTM and SLTM systems. The predictions and results do not depend on test modality per se but rather on the particular memory system, as discussed further in the review.

Conclusions: Given the literature on ELTM decline as a precursor of dementia, and the fact that the risk for Alzheimer's disease increases substantially over time with hearing loss, lowered ELTM due to hearing loss and disuse may be part of the causal chain linking hearing loss and dementia. Future ELU research will focus on this possibility.
Affiliation(s)
- Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Emil Holmer
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
11. Bell L, Peng ZE, Pausch F, Reindl V, Neuschaefer-Rube C, Fels J, Konrad K. fNIRS Assessment of Speech Comprehension in Children with Normal Hearing and Children with Hearing Aids in Virtual Acoustic Environments: Pilot Data and Practical Recommendations. Children (Basel) 2020;7:E219. [PMID: 33171753] [PMCID: PMC7695031] [DOI: 10.3390/children7110219]
Abstract
The integration of virtual acoustic environments (VAEs) with functional near-infrared spectroscopy (fNIRS) offers novel avenues for investigating the behavioral and neural processes of speech-in-noise (SIN) comprehension in complex auditory scenes. Particularly in children with hearing aids (HAs), the combined application might offer new insights into the neural mechanisms of SIN perception in simulated real-life acoustic scenarios. Here, we present the first pilot data from six children with normal hearing (NH) and three children with bilateral HAs to explore the potential applicability of this novel approach. Children with NH received a speech recognition benefit from low room reverberation and from spatial separation of the target and distractors, particularly when the pitch of the target and the distractors was similar. At the neural level, the left inferior frontal gyrus appeared to support SIN comprehension during effortful listening. Children with HAs showed decreased SIN perception across conditions. The VAE-fNIRS approach is critically compared to traditional SIN assessments. Although the current study shows that feasibility still needs to be improved, the combined application potentially offers a promising tool for investigating novel research questions in simulated real-life listening. Future modified VAE-fNIRS applications are warranted to replicate the current findings and to validate the approach in research and clinical settings.
Affiliation(s)
- Laura Bell
- Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany
- Z. Ellen Peng
- Teaching and Research Area of Medical Acoustics, Institute of Technical Acoustics, RWTH Aachen University, 52074 Aachen, Germany
- Waisman Center, University of Wisconsin-Madison, Madison, WI 53705, USA
- Florian Pausch
- Teaching and Research Area of Medical Acoustics, Institute of Technical Acoustics, RWTH Aachen University, 52074 Aachen, Germany
- Vanessa Reindl
- Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany
- JARA-Brain Institute II, Molecular Neuroscience and Neuroimaging, RWTH Aachen & Research Centre Juelich, 52428 Juelich, Germany
- Christiane Neuschaefer-Rube
- Clinic of Phoniatrics, Pedaudiology, and Communication Disorders, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany
- Janina Fels
- Teaching and Research Area of Medical Acoustics, Institute of Technical Acoustics, RWTH Aachen University, 52074 Aachen, Germany
- Kerstin Konrad
- Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany
- JARA-Brain Institute II, Molecular Neuroscience and Neuroimaging, RWTH Aachen & Research Centre Juelich, 52428 Juelich, Germany
12. Signoret C, Andersen LM, Dahlström Ö, Blomberg R, Lundqvist D, Rudner M, Rönnberg J. The Influence of Form- and Meaning-Based Predictions on Cortical Speech Processing Under Challenging Listening Conditions: A MEG Study. Front Neurosci 2020;14:573254. [PMID: 33100961] [PMCID: PMC7546411] [DOI: 10.3389/fnins.2020.573254]
Abstract
Under adverse listening conditions, prior linguistic knowledge about the form (i.e., phonology) and meaning (i.e., semantics) of speech helps us to predict what an interlocutor is about to say. Previous research has shown that accurate predictions of incoming speech increase speech intelligibility, and that semantic predictions enhance the perceptual clarity of degraded speech even when exact phonological predictions are possible. In addition, working memory (WM) is thought to have a specific influence over anticipatory mechanisms by actively maintaining and updating the relevance of predicted vs. unpredicted speech inputs. However, the relative impact on speech processing of deviations from expectations related to form and meaning is incompletely understood. Here, we use MEG to investigate the cortical temporal processing of deviations from the expected form and meaning of final words during sentence processing. Our overall aim was to observe how deviations from the expected form and meaning modulate cortical speech processing under adverse listening conditions, and to investigate the degree to which this is associated with WM capacity. Results indicated that different types of deviations are processed differently in the auditory N400 and mismatch negativity (MMN) components: the MMN was sensitive to the type of deviation (form or meaning), whereas the N400 was sensitive to the magnitude of the deviation rather than its type. WM capacity was associated with the ability to process incoming phonological information and with semantic integration.
Affiliation(s)
- Carine Signoret
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Lau M Andersen
- The National Research Facility for Magnetoencephalography, Department of Clinical Neuroscience, Karolinska Institutet, Solna, Sweden
- Center of Functionally Integrative Neuroscience, Institute of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Örjan Dahlström
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Rina Blomberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Daniel Lundqvist
- The National Research Facility for Magnetoencephalography, Department of Clinical Neuroscience, Karolinska Institutet, Solna, Sweden
- Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
13. Ayasse ND, Wingfield A. The Two Sides of Linguistic Context: Eye-Tracking as a Measure of Semantic Competition in Spoken Word Recognition Among Younger and Older Adults. Front Hum Neurosci 2020;14:132. [PMID: 32327987] [PMCID: PMC7161414] [DOI: 10.3389/fnhum.2020.00132]
Abstract
Studies of spoken word recognition have reliably shown that both younger and older adults' recognition of acoustically degraded words is facilitated by the presence of a linguistic context. Against this benefit, older adults' word recognition can be differentially hampered by interference from other words that could also fit the context. These prior studies have primarily used off-line response measures, such as the signal-to-noise ratio needed for a target word to be correctly identified. Less clear is the locus of these effects: whether facilitation and interference exert their influence primarily during response selection, or whether their effects begin to operate even before a sentence-final target word has been uttered. This question was addressed by tracking 20 younger and 20 older adults' eye fixations on a visually presented target word corresponding to the final word of a contextually constraining or neutral sentence, accompanied by a second word on the computer screen that in some cases could also fit the sentence context. Growth curve analysis of the time course of eye gaze on the target word showed that facilitation and inhibition effects begin to appear even as a spoken sentence is unfolding in time. Consistent with an age-related inhibition deficit, older adults' word recognition was slowed by the presence of a semantic competitor to a degree not observed for younger adults, with this effect operating early in the recognition process.
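Growth curve analysis of fixation time courses is commonly implemented as a mixed-effects model over orthogonal polynomial time terms. The sketch below illustrates that construction on simulated fixation proportions; the model structure, number of time bins, and response scale (raw proportions rather than, e.g., empirical logits) are assumptions, not the paper's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def orth_poly(times, degree=2):
    """Orthogonal polynomial time terms via QR decomposition of a Vandermonde
    matrix, the usual basis for growth curve analysis of fixation curves."""
    t = (times - times.mean()) / times.std()
    Q, _ = np.linalg.qr(np.vander(t, degree + 1, increasing=True))
    return Q[:, 1:]  # drop the constant column

# Simulated long-format fixation proportions (NOT the study's data):
# 20 listeners in 2 groups, 20 time bins; older group's uptake is later
rng = np.random.default_rng(3)
times = np.arange(20, dtype=float)
ot = orth_poly(times)
rows = []
for subj in range(20):
    group = subj % 2  # 0 = younger, 1 = older (assumed coding)
    base = 0.3 + 0.5 / (1.0 + np.exp(-(times - 10 - 3 * group) / 2.0))
    rows.append(pd.DataFrame({
        "prop": base + rng.normal(0.0, 0.03, len(times)),
        "ot1": ot[:, 0], "ot2": ot[:, 1],
        "group": group, "subject": subj,
    }))
df = pd.concat(rows, ignore_index=True)

# Random intercepts by participant; group x time-term interactions index
# differences in the shape of the fixation curve (e.g., slowed uptake)
gca = smf.mixedlm("prop ~ (ot1 + ot2) * group", df,
                  groups=df["subject"]).fit()
print(gca.summary())
```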
Affiliation(s)
- Nicolai D Ayasse
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, United States
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, United States
14. Rudner M, Danielsson H, Lyxell B, Lunner T, Rönnberg J. Visual Rhyme Judgment in Adults With Mild-to-Severe Hearing Loss. Front Psychol 2019;10:1149. [PMID: 31191388] [PMCID: PMC6546845] [DOI: 10.3389/fpsyg.2019.01149]
Abstract
Adults with poorer peripheral hearing have slower phonological processing speed as measured with visual rhyme tasks, and it has been suggested that this is due to fading of phonological representations stored in long-term memory. Representations of both vowels and consonants are likely to be important for determining whether or not two printed words rhyme. However, it is not known whether the relation between phonological processing speed and hearing loss is specific to the lower frequency ranges that characterize vowels or the higher frequency ranges that characterize consonants. We tested the visual rhyme ability of 212 adults with hearing loss. As in previous studies, we found that rhyme judgments were slower and less accurate when there was a mismatch between phonological and orthographic information. A substantial portion of the variance in the speed of making correct rhyme judgments was explained by lexical access speed. Reading span, a measure of working memory, explained further variance in match but not mismatch conditions, and no additional variance was explained by the auditory variables. This pattern of findings suggests possible reliance on a lexico-semantic word-matching strategy for solving the rhyme judgment task. Future work should investigate the relation between adoption of a lexico-semantic strategy during phonological processing tasks and hearing aid outcomes.
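The stepwise variance decomposition described above (lexical access speed first, then reading span, then auditory variables) corresponds to hierarchical regression with incremental R-squared. Below is a minimal sketch on simulated data; the predictor names are placeholders, not the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 212  # matches the study's sample size; the values are simulated
df = pd.DataFrame({"lex_speed": rng.standard_normal(n),
                   "reading_span": rng.standard_normal(n),
                   "pta": rng.standard_normal(n)})
# Simulated outcome: rhyme-judgment speed driven by lexical access and span
df["rt_correct"] = (0.6 * df.lex_speed - 0.2 * df.reading_span
                    + rng.standard_normal(n))

# Hierarchical (nested) OLS models; delta R2 is the variance each step adds
steps = ["rt_correct ~ lex_speed",
         "rt_correct ~ lex_speed + reading_span",
         "rt_correct ~ lex_speed + reading_span + pta"]
prev = 0.0
for formula in steps:
    r2 = smf.ols(formula, data=df).fit().rsquared
    print(f"{formula}: R2 = {r2:.3f} (delta = {r2 - prev:.3f})")
    prev = r2
```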
Affiliation(s)
- Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Henrik Danielsson
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Björn Lyxell
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Department of Special Needs Education, University of Oslo, Oslo, Norway
- Thomas Lunner
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
15. Rönnberg J, Holmer E, Rudner M. Cognitive hearing science and ease of language understanding. Int J Audiol 2019;58:247-261. [DOI: 10.1080/14992027.2018.1551631]
Affiliation(s)
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Emil Holmer
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Mary Rudner
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden