1. Yu W, Xie Y, Yang X. Impact of Respectfulness on Semantic Integration During Discourse Processing. Behav Sci (Basel) 2025; 15:448. PMID: 40282070; PMCID: PMC12024179; DOI: 10.3390/bs15040448.
Abstract
The use of respectful terms is shaped by social status, and previous studies have shown that respectful term usage affects online language processing. This study investigated its impact on semantic integration through three self-paced reading experiments that manipulated Respect Consistency (Respect vs. Disrespect) and Semantic Consistency (Consistent vs. Inconsistent). In Experiment 1, disrespect was manipulated by using the plain form of pronouns instead of the respectful form when addressing individuals of higher social status. Reading times were longer for semantically inconsistent sentences than for consistent ones, reflecting the classic semantic integration effect; however, this effect was detected only when respectful pronouns were employed. In Experiments 2 and 3, disrespect was operationalized by addressing individuals of higher social status directly by their personal names. An interaction comparable to that in Experiment 1 emerged only in Experiment 3, which used an appropriateness judgment task; no such interaction was observed in Experiment 2, which used a reading comprehension task. These results indicate that both disrespectful pronouns and addressing individuals of higher status by their personal names hinder semantic integration, though through different mechanisms. The findings provide important insights into the role of respectful term usage in semantic integration during discourse comprehension.
Affiliation(s)
- Xiaohong Yang
- Department of Psychology, Renmin University of China, Beijing 100872, China
2. López-Higes R, Rubio-Valdehita S, López-Sanz D, Fernandes SM, Rodrigues PFS, Delgado-Losada ML. Cognitive Performance Among Older Adults with Subjective Cognitive Decline. Geriatrics (Basel) 2025; 10:39. PMID: 40126289; PMCID: PMC11932273; DOI: 10.3390/geriatrics10020039.
Abstract
Objectives: The main objective of this cross-sectional study was to investigate whether there are significant differences in cognition between a group of older adults with subjective cognitive decline (SCD) and cognitively intact controls. Methods: An initial sample of 132 older people underwent an extensive neuropsychological evaluation (memory, executive functions, and language) and were classified according to diagnostic criteria. Two groups of 33 subjects each, controls and SCD, were formed using an a priori case-matching procedure on several variables: age, biological sex, years of education, cognitive reserve, and Mini-Mental State Exam score. Results: The mean age and standard deviation in the control group were 70.39 ± 4.31 years, and in the SCD group, 70.30 ± 4.33 years. The number of males (n = 9) and females (n = 24) was the same in both groups, and mean years of education were also very similar. SCD participants had significantly lower mood than the controls. Significant between-group differences were obtained in delayed recall, inhibitory control, and comprehension of sentences not fitted to canonical word order in Spanish. A logistic regression revealed that a lower score on the Stroop interference condition is associated with a higher likelihood of having SCD. Finally, ROC analysis yielded a model that performs better than random chance, and a cut-off score of 49 on the Stroop interference condition was suggested for clinically differentiating the two groups. Conclusions: This study highlights that, compared to a matched control group, participants with SCD showed subtle but significant neuropsychological differences.
Affiliation(s)
- Ramón López-Higes
- Departamento de Psicología Experimental, Complutense University of Madrid (UCM), 28223 Madrid, Spain
- Susana Rubio-Valdehita
- Departamento de Psicología Social, del Trabajo y Diferencial, Complutense University of Madrid (UCM), 28223 Madrid, Spain
- David López-Sanz
- Departamento de Psicología Experimental, Complutense University of Madrid (UCM), 28223 Madrid, Spain
- Centro de Neurociencia Cognitiva y Computacional (C3N), Universidad Complutense de Madrid, 28015 Madrid, Spain
- Sara M. Fernandes
- CINTESIS.UPT@RISE-Health, Portucalense University, 4200-072 Porto, Portugal
- Pedro F. S. Rodrigues
- CINTESIS.UPT@RISE-Health, Portucalense University, 4200-072 Porto, Portugal
- María Luisa Delgado-Losada
- Departamento de Psicología Experimental, Complutense University of Madrid (UCM), 28223 Madrid, Spain
3. Lansford KL, Hirsch ME, Barrett TS, Borrie SA. Cognitive Predictors of Perception and Adaptation to Dysarthric Speech in Older Adults. J Speech Lang Hear Res 2025:1-18. PMID: 39772701; DOI: 10.1044/2024_JSLHR-24-00345.
Abstract
PURPOSE In effortful listening conditions, speech perception and adaptation abilities are constrained by aging and often linked to age-related hearing loss and cognitive decline. Given that older adults are frequent communication partners of individuals with dysarthria, the current study examines cognitive-linguistic and hearing predictors of dysarthric speech perception and adaptation in older listeners. METHOD Fifty-eight older adult listeners (aged 55-80 years) completed a battery of hearing and cognitive tasks administered via the National Institutes of Health Toolbox. Participants also completed a three-phase familiarization task (pretest, training, and posttest) with one of two speakers with dysarthria. Elastic net regression models of initial intelligibility (pretest) and intelligibility improvement (posttest) were constructed for each speaker with dysarthria to identify important cognitive and hearing predictors. RESULTS Overall, the regression models indicated that intelligibility outcomes were optimized for older listeners with better words-in-noise thresholds, vocabulary knowledge, working memory capacity, and cognitive flexibility. Despite some convergence across models, unique constellations of cognitive-linguistic and hearing parameters and their two-way interactions predicted speech perception and adaptation outcomes for the two speakers with dysarthria, who varied in their severity and perceptual characteristics. CONCLUSION Here, we add to an extensive body of work in related disciplines by demonstrating that age-related declines in speech perception and adaptation to dysarthric speech can be traced back to specific hearing and cognitive-linguistic factors.
Affiliation(s)
- Kaitlin L Lansford
- Department of Communication Science and Disorders, Florida State University, Tallahassee
- Micah E Hirsch
- Department of Communication Science and Disorders, Florida State University, Tallahassee
- Tyson S Barrett
- Department of Communicative Disorders and Deaf Education, Utah State University, Logan
- Stephanie A Borrie
- Department of Communicative Disorders and Deaf Education, Utah State University, Logan
4. Marsja E, Holmer E, Stenbäck V, Micula A, Tirado C, Danielsson H, Rönnberg J. Fluid Intelligence Partially Mediates the Effect of Working Memory on Speech Recognition in Noise. J Speech Lang Hear Res 2025; 68:399-410. PMID: 39666895; DOI: 10.1044/2024_JSLHR-24-00465.
Abstract
PURPOSE Although the existing literature has explored the link between cognitive functioning and speech recognition in noise, the specific role of fluid intelligence remains understudied. Given the established association between working memory capacity (WMC) and fluid intelligence, and the predictive power of WMC for speech recognition in noise, we aimed to elucidate the mediating role of fluid intelligence. METHOD We used data from the n200 study, a longitudinal investigation into aging, hearing ability, and cognitive functioning. We analyzed two age-matched samples: hearing aid users and normal-hearing participants. WMC was assessed using the Reading Span task, and fluid intelligence was measured with Raven's Progressive Matrices. Speech recognition in noise was evaluated using Hagerman sentences presented in four-talker babble at levels targeting 80% speech-reception thresholds. Data were analyzed using mediation analysis to examine fluid intelligence as a mediator between WMC and speech recognition in noise. RESULTS We found a partial mediating effect of fluid intelligence on the relationship between WMC and speech recognition in noise; hearing status did not moderate this effect. In other words, WMC and fluid intelligence were related, and fluid intelligence partially explained the influence of WMC on speech recognition in noise. CONCLUSIONS This study shows the importance of fluid intelligence for speech recognition in noise, regardless of hearing status. Future research should use other advanced statistical techniques and explore various speech recognition tests and background maskers to deepen our understanding of the interplay between WMC and fluid intelligence in speech recognition.
Affiliation(s)
- Erik Marsja
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Emil Holmer
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Victoria Stenbäck
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Division of Education, Teaching and Learning, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Andreea Micula
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- National Institute of Public Health, University of Southern Denmark, Copenhagen, Denmark
- Eriksholm Research Centre, Oticon A/S, Copenhagen, Denmark
- Carlos Tirado
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Henrik Danielsson
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Jerker Rönnberg
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
5. Moberly AC, Du L, Tamati TN. Individual Differences in the Recognition of Spectrally Degraded Speech: Associations With Neurocognitive Functions in Adult Cochlear Implant Users and With Noise-Vocoded Simulations. Trends Hear 2025; 29:23312165241312449. PMID: 39819389; PMCID: PMC11742172; DOI: 10.1177/23312165241312449.
Abstract
When listening to speech under adverse conditions, listeners compensate using neurocognitive resources. A clinically relevant form of adverse listening is listening through a cochlear implant (CI), which provides a spectrally degraded signal; CI listening is often simulated through noise-vocoding. This study investigated the neurocognitive mechanisms supporting recognition of spectrally degraded speech in adult CI users and normal-hearing (NH) peers listening to noise-vocoded speech, with the hypothesis that an overlapping set of neurocognitive functions would contribute to speech recognition in both groups. Ninety-seven adults participated, with either a CI (54 individuals, mean age 66.6 years, range 45-87 years) or age-normal hearing (43 individuals, mean age 66.8 years, range 50-81 years). Listeners heard materials varying in linguistic complexity: isolated words, meaningful sentences, anomalous sentences, high-variability sentences, and audiovisually (AV) presented sentences. Participants were also tested for vocabulary knowledge, nonverbal reasoning, working memory capacity, inhibition-concentration, and speed of lexical and phonological access. Linear regression analyses with robust standard errors were performed, regressing performance on each speech recognition task on the neurocognitive measures. Nonverbal reasoning contributed to meaningful sentence recognition in NH peers and to anomalous sentence recognition in CI users. Speed of lexical access contributed to performance on most speech tasks for CI users but not for NH peers. Finally, inhibition-concentration and vocabulary knowledge contributed to AV sentence recognition in NH listeners alone. The findings suggest that the complexity of speech materials may determine the particular contributions of neurocognitive skills, and that NH processing of noise-vocoded speech may not represent how CI listeners process speech.
Affiliation(s)
- Aaron C. Moberly
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Liping Du
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA
- Terrin N. Tamati
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
6. Ju P, Zhou Z, Xie Y, Hui J, Yang X. Music training influences online temporal order processing during reading comprehension. Acta Psychol (Amst) 2024; 248:104340. PMID: 38870685; DOI: 10.1016/j.actpsy.2024.104340.
Abstract
Numerous studies have demonstrated the influence of musical expertise on spoken language processing; however, its effects on reading comprehension remain largely unexplored. This study investigated the role of musical expertise in sentence comprehension, particularly the processing of temporal order. In two self-paced reading experiments, we examined responses to two-clause sentences connected by the temporal connective "before" or "after". "After" sentences presented events in their actual order of occurrence, while "before" sentences described events in reverse temporal order. In both experiments, analyses of reading times consistently revealed a significant temporal order effect: words immediately following the connective were processed more slowly in "before" sentences than in "after" sentences, indicating immediate online processing costs for "before" sentences. Notably, these processing costs were attenuated in individuals with musical expertise relative to those without. However, analyses of comprehension accuracy showed no advantage for musicians over non-musicians: in Experiment 1, the two groups did not differ, while in Experiment 2, musicians showed lower accuracy than non-musicians for both "before" and "after" sentences. These results suggest that musical expertise may attenuate the online processing costs associated with complex linguistic constructions but does not improve comprehension accuracy. We conclude that music training has a limited effect on written sentence comprehension.
Affiliation(s)
- Ping Ju
- Department of Psychology, Renmin University of China, Beijing, China
- Zihang Zhou
- Department of Psychology, Renmin University of China, Beijing, China; School of Foreign Languages, Renmin University of China, Beijing, China
- Yuhan Xie
- Department of Psychology, Renmin University of China, Beijing, China
- Jiaying Hui
- Department of Psychology, Renmin University of China, Beijing, China
- Xiaohong Yang
- Department of Psychology, Renmin University of China, Beijing, China; Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou, China
7. Wagner TM, Wagner L, Plontke SK, Rahne T. Enhancing Cochlear Implant Outcomes across Age Groups: The Interplay of Forward Focus and Advanced Combination Encoder Coding Strategies in Noisy Conditions. J Clin Med 2024; 13:1399. PMID: 38592239; PMCID: PMC10931918; DOI: 10.3390/jcm13051399.
Abstract
Background: Hearing in noise is challenging for cochlear implant users and requires significant listening effort. This study investigated the influence of ForwardFocus, the number of maxima of the Advanced Combination Encoder (ACE) strategy, and age on speech recognition threshold and listening effort in noise. Methods: A total of 33 cochlear implant recipients were included (age ≤40 years: n = 15; >40 years: n = 18). The Oldenburg Sentence Test was used to measure 50% speech recognition thresholds (SRT50) in fluctuating and stationary noise. Speech was presented frontally, while three frontal or rear noise sources were used, and the number of ACE maxima varied between 8 and 12. Results: ForwardFocus significantly improved the SRT50 when noise was presented from the back, independent of subject age. Using 12 maxima further improved the SRT50 when ForwardFocus was activated and when noise and speech were presented frontally. Listening effort was significantly higher in the older age group than in the younger age group; it was reduced by ForwardFocus but not by increasing the number of ACE maxima. Conclusion: ForwardFocus can improve speech recognition in noisy environments and reduce listening effort, especially in older cochlear implant users.
Affiliation(s)
- Telse M. Wagner
- Department of Otorhinolaryngology, University Medicine Halle, Ernst-Grube-Straße 40, 06120 Halle (Saale), Germany
8. Kuchinsky SE, Razeghi N, Pandža NB. Auditory, Lexical, and Multitasking Demands Interactively Impact Listening Effort. J Speech Lang Hear Res 2023; 66:4066-4082. PMID: 37672797; PMCID: PMC10713022; DOI: 10.1044/2023_JSLHR-22-00548.
Abstract
PURPOSE This study examined the extent to which acoustic, linguistic, and cognitive task demands interactively impact listening effort. METHOD Using a dual-task paradigm, on each trial participants performed either a single task or two tasks. In the primary word recognition task, participants repeated Northwestern University Auditory Test No. 6 words presented in speech-shaped noise at either an easier or a harder signal-to-noise ratio (SNR). The words varied in how commonly they occur in the English language (lexical frequency). In the secondary visual task, participants pressed a specific key as soon as a number appeared on screen (simpler task) or one of two keys to indicate whether the displayed number was even or odd (more complex task). RESULTS Manipulation checks revealed that key assumptions of the dual-task design were met. A significant three-way interaction was observed, such that the expected effect of SNR on effort was observable only for words with lower lexical frequency and only when the multitasking demand was relatively simple. CONCLUSIONS This work reveals that variability across speech stimuli can influence the sensitivity of the dual-task paradigm for detecting changes in listening effort. In line with previous work, the results also suggest that higher cognitive demands may limit the ability to detect expected effects of SNR on measures of effort. With implications for real-world listening, these findings highlight that even relatively minor changes in lexical and multitasking demands can alter the effort devoted to listening in noise.
Affiliation(s)
- Stefanie E. Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
- Applied Research Laboratory for Intelligence and Security, University of Maryland, College Park
- Department of Hearing and Speech Sciences, University of Maryland, College Park
- Niki Razeghi
- Department of Hearing and Speech Sciences, University of Maryland, College Park
- Nick B. Pandža
- Applied Research Laboratory for Intelligence and Security, University of Maryland, College Park
- Program in Second Language Acquisition, University of Maryland, College Park
- Maryland Language Science Center, University of Maryland, College Park
9. Shen S, Sayyid Z, Andresen N, Carver C, Dunham R, Marsiglia D, Yeagle J, Della Santina CC, Bowditch S, Sun DQ. Longitudinal Auditory Benefit for Elderly Patients After Cochlear Implant for Bilateral Hearing Loss, Including Those Meeting Expanded Centers for Medicare & Medicaid Services Criteria. Otol Neurotol 2023; 44:866-872. PMID: 37621128; PMCID: PMC10527933; DOI: 10.1097/mao.0000000000003983.
Abstract
OBJECTIVE To examine the effect of patient age on longitudinal speech understanding outcomes after cochlear implantation (CI) in bilateral hearing loss. STUDY DESIGN Retrospective cohort study. SETTING Tertiary academic center. PATIENTS One thousand one hundred five adult patients with bilateral hearing loss receiving a unilateral CI between 1987 and 2022. INTERVENTIONS None. MAIN OUTCOME MEASURES Postoperative speech recognition outcomes, including AzBio sentences, consonant-nucleus-consonant (CNC) words, and Hearing in Noise Test (HINT) in quiet, were analyzed at short-term (<2 yr), medium-term (2-8 yr), and long-term (>8 yr) postoperative intervals. RESULTS Eighty-six very elderly (>80 yr), 409 elderly (65-80 yr), and 709 nonelderly (18-65 yr) patients were included. Short-term postoperative AzBio scores demonstrated a similar magnitude of improvement relative to preoperative scores in the very elderly (47.6; 95% confidence interval [CI], 28.9-66.4), elderly (49.0; 95% CI, 39.2-58.8), and nonelderly (47.9; 95% CI, 35.4-60.4) groups. Scores for those older than 80 years remained stable after 2 years postimplantation, but in those 80 years or younger, scores continued to improve for up to 8 years after implantation (elderly: 6.2 [95% CI, 1.5-12.4]; nonelderly: 9.9 [95% CI, 2.1-17.7]). Similar patterns were observed for CNC word scores. Across all age cohorts, patients with preoperative HINT scores between 40% and 60% had similar postoperative scores to those with preoperative scores of less than 40% at short-term (82.4 vs. 78.9; 95% CI, -23.1 to 10.0), medium-term (77.2 vs. 83.9; 95% CI, -15.4 to 8.2), and long-term (73.4 vs. 71.2; 95% CI, -18.2 to 12.2) follow-up. CONCLUSIONS Patients older than 80 years gain significant and sustained auditory benefit after CI, including those meeting expanded Centers for Medicare & Medicaid Services criteria for implantation. Patients younger than 80 years demonstrated continued improvement over longer periods than older patients, suggesting a role of central plasticity in mediating CI outcomes as a function of age.
Affiliation(s)
- Sarek Shen
- Johns Hopkins School of Medicine, Department of Otolaryngology-Head and Neck Surgery, Baltimore, Maryland, USA
10. Visentin C, Pellegatti M, Garraffa M, Di Domenico A, Prodi N. Individual characteristics moderate listening effort in noisy classrooms. Sci Rep 2023; 13:14285. PMID: 37652970; PMCID: PMC10471719; DOI: 10.1038/s41598-023-40660-1.
Abstract
Comprehending the teacher's message when other students are chatting is challenging. Even though the sound environment is the same for a whole class, differences in individual performance can be observed, which may depend on a variety of personal factors and their specific interaction with the listening condition. This study explored the role of individual characteristics (reading comprehension, inhibitory control, noise sensitivity) when primary school children perform a listening comprehension task in the presence of a two-talker masker. The results indicated that this type of noise impairs children's accuracy, effort, and motivation during the task. Its specific impact depended on the outcome considered and was modulated by the child's characteristics. In particular, reading comprehension supported task accuracy, whereas inhibitory control moderated the effect of listening condition on the two measures of listening effort included in the study (response time and self-ratings), albeit with different patterns of association. A moderating effect of noise sensitivity on perceived listening effort was also observed. Understanding the relationship between individual characteristics and the classroom sound environment has practical implications for the acoustic design of spaces that promote students' well-being and support their learning performance.
Affiliation(s)
- Chiara Visentin
- Department of Engineering, University of Ferrara, Via Saragat 1, 44122 Ferrara, Italy
- Institute for Renewable Energy, Eurac Research, Via A. Volta/A. Volta Straße 13/A, 39100 Bolzano-Bozen, Italy
- Matteo Pellegatti
- Department of Engineering, University of Ferrara, Via Saragat 1, 44122 Ferrara, Italy
- Maria Garraffa
- School of Health Sciences, University of East Anglia, Norwich Research Park, Norwich, Norfolk, NR4 7TJ, UK
- Alberto Di Domenico
- Department of Psychological, Health and Territorial Sciences, University of Chieti-Pescara, Via dei Vestini 31, 66100 Chieti, Italy
- Nicola Prodi
- Department of Engineering, University of Ferrara, Via Saragat 1, 44122 Ferrara, Italy
11. Burleson AM, Souza PE. Cognitive and linguistic abilities and perceptual restoration of missing speech: Evidence from online assessment. Front Psychol 2022; 13:1059192. PMID: 36571056; PMCID: PMC9773209; DOI: 10.3389/fpsyg.2022.1059192.
Abstract
When speech is clear, speech understanding is a relatively simple and automatic process. However, when the acoustic signal is degraded, top-down cognitive and linguistic abilities, such as working memory capacity, lexical knowledge (i.e., vocabulary), inhibitory control, and processing speed, can often support speech understanding. This study examined whether listeners aged 22-63 years (mean age 42 years) with better cognitive and linguistic abilities would be better able to perceptually restore missing speech information than those with poorer scores. Additionally, the roles of context and everyday speech were investigated using high-context, low-context, and realistic speech corpora. Sixty-three adult participants with self-reported normal hearing completed a short cognitive and linguistic battery before listening to sentences interrupted by silent gaps or noise bursts. Results indicated that working memory was the most reliable predictor of perceptual restoration ability, followed by lexical knowledge, inhibitory control, and processing speed. Generally, performance in the silent-gap conditions was related to and predicted by a broader range of cognitive abilities, whereas performance in the noise-burst conditions was related to working memory capacity and inhibitory control. These findings suggest that higher-order cognitive and linguistic abilities facilitate the top-down restoration of missing speech information and contribute to individual variability in perceptual restoration.
12. Stenbäck V, Marsja E, Hällgren M, Lyxell B, Larsby B. Informational Masking and Listening Effort in Speech Recognition in Noise: The Role of Working Memory Capacity and Inhibitory Control in Older Adults With and Without Hearing Impairment. J Speech Lang Hear Res 2022; 65:4417-4428. PMID: 36283680; DOI: 10.1044/2022_JSLHR-21-00674.
Abstract
PURPOSE The study aimed to assess the relationships between (a) speech recognition in noise, mask type, working memory capacity (WMC), and inhibitory control and (b) self-rated listening effort, speech material, and mask type, in older adults with and without hearing impairment. Of special interest was the relationship between WMC, inhibitory control, and speech recognition in noise when informational maskers masked the target speech. METHOD A mixed design was used. A group (n = 24) of older individuals with hearing impairment (Mage = 69.7 years) and a group of normal-hearing adults (Mage = 59.3 years, SD = 6.5) participated. Participants completed auditory tests in a sound-attenuated room and cognitive tests in a quiet office. They rated listening effort after being presented with energetic and informational background maskers in the two speech materials used in the study (the Hearing in Noise Test and the Hagerman test). Linear mixed-effects models were set up to assess the effects of speech material, energetic and informational maskers, hearing ability, WMC, inhibitory control, and self-rated listening effort. RESULTS WMC and inhibitory control were important for speech recognition in noise when the maskers were informational, even when controlling for pure-tone average hearing thresholds and age. Concerning listening effort, on the other hand, the results suggest that hearing ability, but not cognitive ability, is important for self-rated listening effort in speech recognition in noise. CONCLUSIONS Speech-in-noise recognition is more dependent on WMC for older adults in informational maskers than in energetic maskers. Hearing ability is a stronger predictor than cognition for self-rated listening effort. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21357648
Affiliation(s)
- Victoria Stenbäck
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Division of Education, Teaching and Learning, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Erik Marsja
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Mathias Hällgren
- Department of Otorhinolaryngology in Östergötland and Department of Biomedical and Clinical Sciences, Linköping University, Sweden
- Björn Lyxell
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Department of Special Needs Education, University of Oslo, Norway
- Birgitta Larsby
- Department of Otorhinolaryngology in Östergötland and Department of Biomedical and Clinical Sciences, Linköping University, Sweden
|
13
|
Is Having Hearing Loss Fundamentally Different? Multigroup Structural Equation Modeling of the Effect of Cognitive Functioning on Speech Identification. Ear Hear 2022; 43:1437-1446. [PMID: 34983896 DOI: 10.1097/aud.0000000000001196]
Abstract
OBJECTIVES Previous research suggests a robust relationship between cognitive functioning and speech-in-noise performance in older adults with age-related hearing loss. For normal-hearing adults, on the other hand, the evidence is less clear. The current study therefore examined the relationship between cognitive functioning, aging, and speech-in-noise performance in a group of older normal-hearing persons and a group of older persons with hearing loss who wear hearing aids. DESIGN We analyzed data from 199 older normal-hearing individuals (mean age = 61.2 years) and 200 older individuals with hearing loss (mean age = 60.9 years) using multigroup structural equation modeling. Four cognitively related tasks were used to create a cognitive functioning construct: the reading span task, a visuospatial working memory task, the semantic word-pairs task, and Raven's progressive matrices. Speech-in-noise performance was measured with Hagerman sentences, presented via an experimental hearing aid to both the normal-hearing and the hearing-impaired group. The sentences were presented in one of two background noise conditions, the original Hagerman speech-shaped noise or four-talker babble, each combined with three different signal processing settings: linear processing, fast compression, and noise reduction. RESULTS Cognitive functioning was significantly related to speech-in-noise identification, and aging had a significant effect on both speech-in-noise performance and cognitive functioning. The final model, with regression weights constrained to be equal for the two groups, had the best fit to the data. Importantly, the relationship between cognitive functioning and speech-in-noise performance did not differ between the two groups, and the same pattern was evident for aging: the effects of aging on cognitive functioning and on speech-in-noise performance did not differ between groups. CONCLUSION Our findings revealed similar effects of cognitive functioning and aging on speech-in-noise performance in older normal-hearing and aided hearing-impaired listeners. The findings support the Ease of Language Understanding model, as cognitive processes play a critical role in speech-in-noise performance independent of the hearing status of elderly individuals.
|
14
|
Rönnberg J, Signoret C, Andin J, Holmer E. The cognitive hearing science perspective on perceiving, understanding, and remembering language: The ELU model. Front Psychol 2022; 13:967260. [PMID: 36118435 PMCID: PMC9477118 DOI: 10.3389/fpsyg.2022.967260]
Abstract
The review gives an introductory description of the successive development of data patterns based on comparisons between hearing-impaired and normal-hearing participants' speech understanding skills, which later prompted the formulation of the Ease of Language Understanding (ELU) model. The model builds on the interaction between an input buffer (RAMBPHO, Rapid Automatic Multimodal Binding of PHOnology) and three memory systems: working memory (WM), semantic long-term memory (SLTM), and episodic long-term memory (ELTM). RAMBPHO input may either match or mismatch multimodal SLTM representations. Given a match, lexical access is accomplished rapidly and implicitly within approximately 100-400 ms. Given a mismatch, the prediction is that WM is engaged explicitly to repair the meaning of the input - in interaction with SLTM and ELTM - taking seconds rather than milliseconds. The multimodal and multilevel nature of the representations held in WM and LTM is at the center of the review, these representations being integral parts of the prediction and postdiction components of language understanding. Finally, some hypotheses based on a selective use-disuse mechanism for memory systems are described in relation to mild cognitive impairment and dementia. Alternative speech perception and WM models are evaluated, and recent developments and generalisations, ELU model tests, and boundaries are discussed.
Affiliation(s)
- Jerker Rönnberg
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
|
15
|
Abdel-Latif KHA, Meister H. Speech Recognition and Listening Effort in Cochlear Implant Recipients and Normal-Hearing Listeners. Front Neurosci 2022; 15:725412. [PMID: 35221883 PMCID: PMC8867819 DOI: 10.3389/fnins.2021.725412]
Abstract
The outcome of cochlear implantation is typically assessed by speech recognition tests in quiet and in noise. Many cochlear implant (CI) recipients show satisfactory speech recognition, especially in quiet situations. However, since cochlear implants provide only limited spectro-temporal cues, the effort associated with understanding speech might be increased. In this respect, measures of listening effort could provide important additional information on the outcome of cochlear implantation. To shed light on this topic and to gain knowledge for clinical applications, we compared speech recognition and listening effort in CI recipients and age-matched normal-hearing (NH) listeners while considering potential influential factors such as cognitive abilities. Importantly, we estimated speech recognition functions for both listener groups and compared listening effort at similar performance levels. To this end, a subjective listening effort test (adaptive scaling, “ACALES”) as well as an objective test (a dual-task paradigm) were applied and compared. Regarding speech recognition, CI users needed an approximately 4 dB better signal-to-noise ratio (SNR) to reach the same 50% performance level as NH listeners, and a 5 dB better SNR to reach 80% speech recognition, revealing shallower psychometric functions in the CI listeners. However, when targeting a fixed speech intelligibility of 50% and 80%, respectively, CI users and NH listeners did not differ significantly in terms of listening effort. This applied to both the subjective and the objective estimate. Outcomes for subjective and objective listening effort were not correlated with each other, nor with the age or cognitive abilities of the listeners. This study gave no evidence that CI users and NH listeners differ in terms of listening effort, at least when the same performance level is considered. In contrast, both listener groups showed large inter-individual differences in effort as determined with the subjective scaling and the objective dual task. Potential clinical implications of how to assess listening effort as an outcome measure for hearing rehabilitation are discussed.
|