1. Moberly AC, Du L, Tamati TN. Individual Differences in the Recognition of Spectrally Degraded Speech: Associations With Neurocognitive Functions in Adult Cochlear Implant Users and With Noise-Vocoded Simulations. Trends Hear. 2025;29:23312165241312449. PMID: 39819389; PMCID: PMC11742172; DOI: 10.1177/23312165241312449.
Abstract
When listening to speech under adverse conditions, listeners compensate using neurocognitive resources. A clinically relevant form of adverse listening is listening through a cochlear implant (CI), which provides a spectrally degraded signal. CI listening is often simulated through noise-vocoding. This study investigated the neurocognitive mechanisms supporting recognition of spectrally degraded speech in adult CI users and normal-hearing (NH) peers listening to noise-vocoded speech, with the hypothesis that an overlapping set of neurocognitive functions would contribute to speech recognition in both groups. Ninety-seven adults with either a CI (54 CI individuals, mean age 66.6 years, range 45-87 years) or age-normal hearing (43 NH individuals, mean age 66.8 years, range 50-81 years) participated. Listeners heard materials varying in linguistic complexity consisting of isolated words, meaningful sentences, anomalous sentences, high-variability sentences, and audiovisually (AV) presented sentences. Participants were also tested for vocabulary knowledge, nonverbal reasoning, working memory capacity, inhibition-concentration, and speed of lexical and phonological access. Linear regression analyses with robust standard errors were performed, regressing performance on each speech recognition task on the neurocognitive functions. Nonverbal reasoning contributed to meaningful sentence recognition in NH peers and anomalous sentence recognition in CI users. Speed of lexical access contributed to performance on most speech tasks for CI users but not for NH peers. Finally, inhibition-concentration and vocabulary knowledge contributed to AV sentence recognition in NH listeners alone. Findings suggest that the complexity of speech materials may determine the particular contributions of neurocognitive skills, and that NH processing of noise-vocoded speech may not represent how CI listeners process speech.
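For readers unfamiliar with this kind of analysis, the sketch below fits an ordinary least-squares regression with heteroskedasticity-robust (HC3) standard errors in statsmodels. The file name and predictor names are hypothetical; the paper's exact model specification may differ.

```python
# Hypothetical sketch: regress a speech recognition score on neurocognitive
# predictors with robust (heteroskedasticity-consistent) standard errors.
import pandas as pd
import statsmodels.formula.api as smf

scores = pd.read_csv("speech_scores.csv")  # one row per listener (hypothetical file)

model = smf.ols(
    "sentence_recognition ~ vocabulary + nonverbal_reasoning"
    " + working_memory + inhibition_concentration + lexical_access_speed",
    data=scores,
).fit(cov_type="HC3")  # HC3 requests heteroskedasticity-robust standard errors

print(model.summary())
```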
Affiliation(s)
- Aaron C. Moberly
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Liping Du
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA
- Terrin N. Tamati
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
2. Frei V, Schmitt R, Meyer M, Giroud N. Processing of Visual Speech Cues in Speech-in-Noise Comprehension Depends on Working Memory Capacity and Enhances Neural Speech Tracking in Older Adults With Hearing Impairment. Trends Hear. 2024;28:23312165241287622. PMID: 39444375; PMCID: PMC11520018; DOI: 10.1177/23312165241287622.
Abstract
Comprehending speech in noise (SiN) poses a challenge for older hearing-impaired listeners, requiring auditory and working memory resources. Visual speech cues provide additional sensory information supporting speech understanding, while the extent of such visual benefit is characterized by large variability, which might be accounted for by individual differences in working memory capacity (WMC). In the current study, we investigated behavioral and neurofunctional (i.e., neural speech tracking) correlates of auditory and audio-visual speech comprehension in babble noise and their associations with WMC. Healthy older adults with hearing impairment quantified by pure-tone audiometry (threshold averages of 31.85-57 dB, N = 67) listened to sentences in babble noise in audio-only, visual-only, and audio-visual modalities and performed a pattern matching and a comprehension task while electroencephalography (EEG) was recorded. Behaviorally, no significant difference in task performance was observed across modalities. However, we did find a significant association between individual working memory capacity and task performance, suggesting a more complex interplay between audio-visual speech cues, working memory capacity, and real-world listening tasks. Furthermore, we found that visual speech presentation was accompanied by increased cortical tracking of the speech envelope, particularly in a right-hemispheric auditory topographical cluster. Post hoc, we investigated potential relationships between behavioral performance and neural speech tracking but were not able to establish a significant association. Overall, our results show an increase in neurofunctional correlates of speech associated with congruent visual speech cues, specifically in a right auditory cluster, suggesting multisensory integration.
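A minimal sketch of envelope-based neural speech tracking, assuming single-trial audio and one preprocessed EEG channel; the study's actual pipeline (e.g., temporal response function models over topographical clusters) is more elaborate, and all inputs here are assumptions.

```python
# Minimal envelope-tracking sketch: correlate the speech amplitude envelope
# with a preprocessed EEG channel sampled at fs_eeg.
import numpy as np
from scipy.signal import hilbert, resample

def speech_envelope(audio: np.ndarray, fs_audio: int, fs_eeg: int) -> np.ndarray:
    env = np.abs(hilbert(audio))                       # amplitude envelope of the speech
    n_out = int(round(len(env) * fs_eeg / fs_audio))   # match the EEG sampling rate
    return resample(env, n_out)

def tracking_score(eeg: np.ndarray, env: np.ndarray) -> float:
    n = min(len(eeg), len(env))
    return float(np.corrcoef(eeg[:n], env[:n])[0, 1])  # simple Pearson tracking index
```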
Affiliation(s)
- Vanessa Frei
- Computational Neuroscience of Speech and Hearing, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland
- International Max Planck Research School for the Life Course: Evolutionary and Ontogenetic Dynamics (LIFE), Berlin, Germany
- Raffael Schmitt
- Computational Neuroscience of Speech and Hearing, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland
- International Max Planck Research School for the Life Course: Evolutionary and Ontogenetic Dynamics (LIFE), Berlin, Germany
- Competence Center Language & Medicine, Center of Medical Faculty and Faculty of Arts and Sciences, University of Zurich, Zurich, Switzerland
- Martin Meyer
- Competence Center Language & Medicine, Center of Medical Faculty and Faculty of Arts and Sciences, University of Zurich, Zurich, Switzerland
- University of Zurich, University Research Priority Program Dynamics of Healthy Aging, Zurich, Switzerland
- Center for Neuroscience Zurich, University and ETH of Zurich, Zurich, Switzerland
- Evolutionary Neuroscience of Language, Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Cognitive Psychology Unit, Alpen-Adria University, Klagenfurt, Austria
- Nathalie Giroud
- Computational Neuroscience of Speech and Hearing, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland
- International Max Planck Research School for the Life Course: Evolutionary and Ontogenetic Dynamics (LIFE), Berlin, Germany
- Competence Center Language & Medicine, Center of Medical Faculty and Faculty of Arts and Sciences, University of Zurich, Zurich, Switzerland
- Center for Neuroscience Zurich, University and ETH of Zurich, Zurich, Switzerland
3. Lemel R, Shalev L, Nitsan G, Ben-David BM. Listen up! ADHD slows spoken-word processing in adverse listening conditions: Evidence from eye movements. Res Dev Disabil. 2023;133:104401. PMID: 36577332; DOI: 10.1016/j.ridd.2022.104401.
Abstract
BACKGROUND Cognitive skills such as sustained attention, inhibition and working memory are essential for speech processing, yet are often impaired in people with ADHD. Offline measures have indicated difficulties in speech recognition in multi-talker babble (MTB) for young adults with ADHD (yaADHD). However, to date no study has directly tested online speech processing in adverse conditions for yaADHD. AIMS To gauge the effects of ADHD on segregating the spoken target-word from its sound-sharing competitor in MTB and under working-memory (WM) load. METHODS AND PROCEDURES Twenty-four yaADHD and 22 matched controls who differed in sustained attention (SA) but not in WM were asked to follow spoken instructions presented in MTB to touch a named object, while retaining one (low-load) or four (high-load) digits for later recall. Their eye fixations were tracked. OUTCOMES AND RESULTS In the high-load condition, speech processing was less accurate and slowed by 140 ms for yaADHD. In the low-load condition, the processing advantage shifted from early perceptual to later cognitive stages. Fixation transitions (hesitations) were inflated for yaADHD. CONCLUSIONS AND IMPLICATIONS ADHD slows speech processing in adverse listening conditions and increases hesitation as speech unfolds in time. These effects, detected only by online eyetracking, relate to attentional difficulties. We suggest online speech processing as a novel purview on ADHD. WHAT THIS PAPER ADDS: We suggest speech processing in adverse listening conditions as a novel vantage point on ADHD. Successful speech recognition in noise is essential for performance across daily settings: academic, employment and social interactions. It involves several executive functions, such as inhibition and sustained attention. Impaired performance in these functions is characteristic of ADHD. However, to date there is only scant research on speech processing in ADHD. The current study is the first to investigate online speech processing as the word unfolds in time using eyetracking for young adults with ADHD (yaADHD). This method uncovered slower speech processing in multi-talker babble noise for yaADHD compared to matched controls. The performance of yaADHD indicated increased hesitation between the spoken word and sound-sharing alternatives (e.g., CANdle-CANdy). These delays and hesitations, at the single-word level, could accumulate in continuous speech to significantly impair communication in ADHD, with severe implications for quality of life and academic success. Interestingly, whereas yaADHD and controls were matched on WM standardized tests, WM load appeared to affect speech processing for yaADHD more than for controls. This suggests that ADHD may lead to inefficient deployment of WM resources that may not be detected when WM is tested alone. Note that these intricate differences could not be detected using traditional offline accuracy measures, further supporting the use of eyetracking in speech tasks. Finally, communication is vital for active living and wellbeing. We suggest paying attention to speech processing in ADHD in treatment and when considering accessibility and inclusion.
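For orientation, visual-world eyetracking data of this kind are often summarized as the proportion of fixation samples on the target versus the sound-sharing competitor in small time bins after word onset. Below is a hedged sketch of that summary; the file, column names, and region-of-interest labels are assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: fixation proportions to target vs. competitor
# in 50-ms bins after word onset, from sample-level eyetracking data.
import pandas as pd

fix = pd.read_csv("fixations.csv")          # assumed columns: subject, time_ms, roi
fix["bin_ms"] = (fix["time_ms"] // 50) * 50  # 50-ms time bins

counts = fix.groupby(["subject", "bin_ms", "roi"]).size().unstack("roi", fill_value=0)
props = counts.div(counts.sum(axis=1), axis=0)  # fixation proportions per subject/bin

# Mean time course across subjects; 'target'/'competitor' labels are assumed
print(props[["target", "competitor"]].groupby(level="bin_ms").mean())
```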
Affiliation(s)
- Rony Lemel
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Lilach Shalev
- Constantiner School of Education and Sagol School of Neuroscience, Tel-Aviv University, Tel-Aviv, Israel
- Gal Nitsan
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel; Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel; Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada; Toronto Rehabilitation Institute, University Health Network (UHN), ON, Canada.
4. Lewis DE. Speech Understanding in Complex Environments by School-Age Children with Mild Bilateral or Unilateral Hearing Loss. Semin Hear. 2023;44:S36-S48. PMID: 36970648; PMCID: PMC10033204; DOI: 10.1055/s-0043-1764134.
Abstract
Numerous studies have shown that children with mild bilateral (MBHL) or unilateral hearing loss (UHL) experience speech perception difficulties in poor acoustics. Much of the research in this area has been conducted via laboratory studies using speech-recognition tasks with a single talker and presentation via earphones and/or from a loudspeaker located directly in front of the listener. Real-world speech understanding is more complex, however, and these children may need to exert greater effort than their peers with normal hearing to understand speech, potentially impacting progress in a number of developmental areas. This article discusses issues and research relative to speech understanding in complex environments for children with MBHL or UHL and implications for real-world listening and understanding.
Affiliation(s)
- Dawna E. Lewis
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska
5. Li MM, Moberly AC, Tamati TN. Factors affecting talker discrimination ability in adult cochlear implant users. J Commun Disord. 2022;99:106255. PMID: 35988314; PMCID: PMC10659049; DOI: 10.1016/j.jcomdis.2022.106255.
Abstract
INTRODUCTION Real-world speech communication involves interacting with many talkers with diverse voices and accents. Many adults with cochlear implants (CIs) demonstrate poor talker discrimination, which may contribute to real-world communication difficulties. However, the factors contributing to talker discrimination ability, and how discrimination ability relates to speech recognition outcomes in adult CI users are still unknown. The current study investigated talker discrimination ability in adult CI users, and the contributions of age, auditory sensitivity, and neurocognitive skills. In addition, the relation between talker discrimination ability and multiple-talker sentence recognition was explored. METHODS Fourteen post-lingually deaf adult CI users (3 female, 11 male) with ≥1 year of CI use completed a talker discrimination task. Participants listened to two monosyllabic English words, produced by the same talker or by two different talkers, and indicated if the words were produced by the same or different talkers. Nine female and nine male native English talkers were paired, resulting in same- and different-talker pairs as well as same-gender and mixed-gender pairs. Participants also completed measures of spectro-temporal processing, neurocognitive skills, and multiple-talker sentence recognition. RESULTS CI users showed poor same-gender talker discrimination, but relatively good mixed-gender talker discrimination. Older age and weaker neurocognitive skills, in particular inhibitory control, were associated with less accurate mixed-gender talker discrimination. Same-gender discrimination was significantly related to multiple-talker sentence recognition accuracy. CONCLUSION Adult CI users demonstrate overall poor talker discrimination ability. Individual differences in mixed-gender discrimination ability were related to age and neurocognitive skills, suggesting that these factors contribute to the ability to make use of available, degraded talker characteristics. Same-gender talker discrimination was associated with multiple-talker sentence recognition, suggesting that access to subtle talker-specific cues may be important for speech recognition in challenging listening conditions.
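As background for how same/different discrimination accuracy is commonly scored, here is a generic sensitivity (d') sketch that treats different-talker trials as signal trials. This is a standard signal-detection approach, not necessarily the paper's exact analysis, and the example counts are made up.

```python
# Generic d-prime sketch for a same/different talker discrimination task.
# Hits = correct "different" responses on different-talker trials;
# false alarms = "different" responses on same-talker trials.
from scipy.stats import norm

def dprime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    # Log-linear correction keeps z-scores finite at proportions of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(round(dprime(hits=40, misses=10, false_alarms=15, correct_rejections=35), 2))
```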
Affiliation(s)
- Michael M Li
- The Ohio State University Wexner Medical Center, Department of Otolaryngology - Head & Neck Surgery, Columbus, OH, USA
- Aaron C Moberly
- The Ohio State University Wexner Medical Center, Department of Otolaryngology - Head & Neck Surgery, Columbus, OH, USA
- Terrin N Tamati
- The Ohio State University Wexner Medical Center, Department of Otolaryngology - Head & Neck Surgery, Columbus, OH, USA; University Medical Center Groningen, University of Groningen, Department of Otorhinolaryngology/Head and Neck Surgery, Groningen, the Netherlands.
6. Ashori M. Working Memory-Based Cognitive Rehabilitation: Spoken Language of Deaf and Hard-of-Hearing Children. J Deaf Stud Deaf Educ. 2022;27:234-244. PMID: 35543013; DOI: 10.1093/deafed/enac007.
Abstract
This research examined the effect of the Working Memory-based Cognitive Rehabilitation (WMCR) intervention on the spoken language development of deaf and hard-of-hearing (DHH) children. In this clinical trial, 28 DHH children aged between 5 and 6 years were selected by random sampling. The participants were randomly assigned to experimental and control groups. The experimental group participated in the WMCR intervention involving 11 sessions. All participants were assessed pre- and post-intervention. Data were collected with the Newsha Development Scale and analyzed through MANCOVA. The results revealed a significant difference in receptive and expressive language scores between the experimental group, which received the WMCR intervention, and the control group. The receptive and expressive language skills of the experimental group showed significant improvement after the intervention. Therefore, the WMCR intervention is an effective method for improving the spoken language skills of DHH children. These findings have critical implications for teachers, parents, and therapists in supporting young DHH children to develop their language skills.
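A minimal sketch of a MANCOVA of this design using statsmodels, with the two language scores as joint dependent variables, group as the factor, and pre-test scores entered as covariates; the file and column names are hypothetical, and the study's exact model may differ.

```python
# Hypothetical MANCOVA sketch: post-test receptive and expressive scores as
# joint DVs, group as the factor, pre-test scores as covariates.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("wmcr_scores.csv")  # assumed columns: group, pre_rec, pre_exp, post_rec, post_exp

m = MANOVA.from_formula("post_rec + post_exp ~ group + pre_rec + pre_exp", data=df)
print(m.mv_test())  # Wilks' lambda, Pillai's trace, etc.
```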
Affiliation(s)
- Mohammad Ashori
- Associate Professor, Department of Psychology and Education of People with Special Needs, Faculty of Education and Psychology, University of Isfahan, Isfahan, Iran
7. Zhong L, Noud BP, Pruitt H, Marcrum SC, Picou EM. Effects of text supplementation on speech intelligibility for listeners with normal and impaired hearing: a systematic review with implications for telecommunication. Int J Audiol. 2021;61:1-11. PMID: 34154488; DOI: 10.1080/14992027.2021.1937346.
Abstract
OBJECTIVE Telecommunication can be difficult in the presence of noise or hearing loss. The purpose of this study was to systematically review evidence regarding the effects of text supplementation (e.g., captions, subtitles) of auditory or auditory-visual signals on speech intelligibility for listeners with normal or impaired hearing. DESIGN Three databases were searched. Articles were evaluated for inclusion based on the Population, Intervention, Comparison, Outcome framework. The Effective Public Health Practice Project instrument was used to evaluate the quality of the identified articles. STUDY SAMPLE After duplicates were removed, the titles and abstracts of 2,019 articles were screened. Forty-six full texts were reviewed; ten met inclusion criteria. RESULTS The quality of all ten articles was moderate or strong. The articles demonstrated that text added to auditory (or auditory-visual) signals improved speech intelligibility and that the benefits were largest when auditory signal integrity was low, accuracy of the text was high, and the auditory signal and text were synchronous. Age and hearing loss did not affect benefits from the addition of text. CONCLUSIONS Although based on only ten studies, these data support the use of text as a supplement during telecommunication, such as while watching television or during telehealth appointments.
Affiliation(s)
- Ling Zhong
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Brianne P Noud
- Department of Audiology, Center for Hearing and Speech, St. Louis, MO, USA
- Harriet Pruitt
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Speech-Language Pathology, Advanced Therapy Solutions, Clarksville, TN, USA
- Steven C Marcrum
- Department of Otolaryngology, University Hospital Regensburg, Regensburg, Germany
- Erin M Picou
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
8. Chen YC, Yong W, Xing C, Feng Y, Haidari NA, Xu JJ, Gu JP, Yin X, Wu Y. Directed functional connectivity of the hippocampus in patients with presbycusis. Brain Imaging Behav. 2021;14:917-926. PMID: 31270776; DOI: 10.1007/s11682-019-00162-z.
Abstract
Presbycusis, characterized by bilateral sensorineural hearing loss at high frequencies and associated with a diminished quality of life, has become an increasingly critical public health problem. This study aimed to identify directed functional connectivity (FC) of the hippocampus in patients with presbycusis and to explore the causes of any disruption to these directed functional connections. Presbycusis patients (n = 32) and age-, sex-, and education-matched healthy controls (n = 40) were included in this study. The bilateral hippocampi were selected as seed regions to identify directed FC in patients with presbycusis using a Granger causality analysis (GCA) approach. Correlation analyses were conducted to detect associations between disrupted directed FC of the hippocampus and clinical measures of presbycusis. Compared to healthy controls, decreased directed FC between the inferior parietal lobule, insula, right supplementary motor area, middle temporal gyrus and the hippocampus was detected in presbycusis patients. Furthermore, negative correlations between TMB score and the decline of directed FC from the left inferior parietal lobule to the left hippocampus (r = -0.423, p = 0.025) and from the right inferior parietal lobule to the right hippocampus (r = -0.516, p = 0.005) were also observed. The decreased directed functional connections of the hippocampus detected in patients with presbycusis were associated with specific cognitive performance. This study emphasizes the crucial role of the hippocampus in presbycusis and will enhance our understanding of its neuropathological mechanisms.
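For orientation, a toy Granger causality test between two region-of-interest time series using statsmodels; the series below are random placeholders, and GCA on fMRI signals involves substantial additional preprocessing (ROI extraction, nuisance regression, hemodynamic considerations) not shown here.

```python
# Toy Granger causality sketch between two ROI time series (placeholders).
# grangercausalitytests checks whether column 2 helps predict column 1.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
hippocampus_ts = rng.standard_normal(200)  # placeholder for an extracted ROI series
parietal_ts = rng.standard_normal(200)     # placeholder for a second ROI series

data = np.column_stack([hippocampus_ts, parietal_ts])
results = grangercausalitytests(data, maxlag=3)  # F-tests at lags 1..3
```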
Affiliation(s)
- Yu-Chen Chen
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
- Wei Yong
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
- Chunhua Xing
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
- Yuan Feng
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
- Nasir Ahmad Haidari
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
- Jin-Jing Xu
- Department of Otolaryngology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
- Jian-Ping Gu
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
- Xindao Yin
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China.
- Yuanqing Wu
- Department of Otolaryngology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China.
9. Tamati TN, Vasil KJ, Kronenberger WG, Pisoni DB, Moberly AC, Ray C. Word and Nonword Reading Efficiency in Postlingually Deafened Adult Cochlear Implant Users. Otol Neurotol. 2021;42:e272-e278. PMID: 33306660; PMCID: PMC7874984; DOI: 10.1097/mao.0000000000002925.
Abstract
HYPOTHESIS This study tested the hypotheses that 1) experienced adult cochlear implant (CI) users demonstrate poorer reading efficiency relative to normal-hearing controls, 2) reading efficiency reflects basic, underlying neurocognitive skills, and 3) reading efficiency relates to speech recognition outcomes in CI users. BACKGROUND Weak phonological processing skills have been associated with poor speech recognition outcomes in postlingually deaf adult CI users. Phonological processing can be captured in nonauditory measures of reading efficiency, which may have wide use in patients with hearing loss. This study examined reading efficiency in adult CI users, and its relation to speech recognition outcomes. METHODS Forty-eight experienced, postlingually deaf adult CI users (ECIs) and 43 older age-matched peers with age-normal hearing (ONHs) completed the Test of Word Reading Efficiency (TOWRE-2), which measures word and nonword reading efficiency. Participants also completed a battery of nonauditory neurocognitive measures and auditory sentence recognition tasks. RESULTS ECIs and ONHs did not differ in word (ECIs: M = 78.2, SD = 11.4; ONHs: M = 83.3, SD = 10.2) or nonword reading efficiency (ECIs: M = 42.0, SD = 11.2; ONHs: M = 43.7, SD = 10.3). For ECIs, both scores were related to untimed word reading with moderate to strong effect sizes (r = 0.43-0.69), but demonstrated differing relations with other nonauditory neurocognitive measures with weak to moderate effect sizes (word: r = 0.11-0.44; nonword: r = (-)0.15 to (-)0.42). Word reading efficiency was moderately related to sentence recognition outcomes in ECIs (r = 0.36-0.40). CONCLUSION Findings suggest that postlingually deaf adult CI users demonstrate neither impaired word nor impaired nonword reading efficiency, and these measures reflect different underlying mechanisms involved in language processing. The relation between sentence recognition and word reading efficiency, a measure of lexical access speed, suggests that this measure may be useful for explaining outcome variability in adult CI users.
Affiliation(s)
- Terrin N. Tamati
- Department of Otolaryngology—Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Kara J. Vasil
- Department of Otolaryngology—Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- William G. Kronenberger
- Department of Otolaryngology—Head and Neck Surgery, DeVault Otologic Research Laboratory, Indianapolis
- David B. Pisoni
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana, USA
- Aaron C. Moberly
- Department of Otolaryngology—Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Christin Ray
- Department of Otolaryngology—Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
10. Silverman MJ, Bourdaghs SW, Schwartzberg ET. Effects of Visual and Aural Presentation Styles and Rhythm on Working Memory as Measured by Monosyllabic Sequential Digit Recall. Psychol Rep. 2020;124:1282-1297. PMID: 32539640; DOI: 10.1177/0033294120930974.
Abstract
Although information is frequently paired with music to enhance recall, there is a lack of basic research investigating how aspects of recorded music, as well as how it is presented, facilitate working memory. Therefore, the purpose of this study was to determine the effects of visual and aural presentation styles, rhythm, and participant major on working memory as measured by sequential monosyllabic digit recall performance. We isolated visual and aural presentation styles and rhythm conditions during six different treatment stimuli presented on a computer screen in the study: (a) Visual Rhythm; (b) Visual No Rhythm; (c) Aural Rhythm; (d) Aural No Rhythm; (e) Visual + Aural Rhythm; (f) Visual + Aural No Rhythm. Participants' (N = 60; 30 nonmusic majors and 30 music majors) task was to immediately recall the information paired with music within each condition. Analyses of variance indicated a significant difference between the visual and visual + aural presentation style conditions with the visual + aural condition having more accurate recall. While descriptive data indicated that rhythm tended to facilitate recall, there was no significant difference between rhythm and no rhythm conditions. Nonmusic major participants tended to have slightly more accurate recall than music major participants, although this difference was not significant. Participants tended to have higher recall accuracy during primacy and recency serial positions. As participants had most accurate recall during the visual + aural presentation style conditions, it seems that the multi-sensory presentation modes can be effective for teaching information to be immediately recalled as long as they do not contain too much information and overload the limited storage capacity of working memory. Implications for clinical practice, limitations, and suggestions for future research are provided.
11. Strand JF, Ray L, Dillman-Hasso NH, Villanueva J, Brown VA. Understanding Speech Amid the Jingle and Jangle: Recommendations for Improving Measurement Practices in Listening Effort Research. Audit Percept Cogn. 2020;3:169-188. PMID: 34240011; DOI: 10.1080/25742442.2021.1903293.
Abstract
The latent constructs psychologists study are typically not directly accessible, so researchers must design measurement instruments that are intended to provide insights about those constructs. Construct validation (assessing whether instruments measure what they intend to) is therefore critical for ensuring that the conclusions we draw actually reflect the intended phenomena. Insufficient construct validation can lead to the jingle fallacy, falsely assuming two instruments measure the same construct because the instruments share a name (Thorndike, 1904), and the jangle fallacy, falsely assuming two instruments measure different constructs because the instruments have different names (Kelley, 1927). In this paper, we examine construct validation practices in research on listening effort and identify patterns that strongly suggest the presence of jingle and jangle in the literature. We argue that the lack of construct validation for listening effort measures has led to inconsistent findings and hindered our understanding of the construct. We also provide specific recommendations for improving construct validation of listening effort instruments, drawing on the framework laid out in a recent paper on improving measurement practices (Flake & Fried, 2020). Although this paper addresses listening effort, the issues raised and recommendations presented are widely applicable to tasks used in research on auditory perception and cognitive psychology.
Affiliation(s)
- Lucia Ray
- Carleton College, Department of Psychology
- Violet A Brown
- Washington University in St. Louis, Department of Psychological & Brain Sciences
12. Taitelbaum-Swead R, Kozol Z, Fostick L. Listening Effort Among Adults With and Without Attention-Deficit/Hyperactivity Disorder. J Speech Lang Hear Res. 2019;62:4554-4563. PMID: 31747524; DOI: 10.1044/2019_jslhr-h-19-0134.
Abstract
Purpose Few studies have assessed listening effort (LE), the cognitive resources required to perceive speech, among populations with intact hearing but reduced availability of cognitive resources. Attention-deficit/hyperactivity disorder (ADHD) is theorized to restrict attention span, possibly making speech perception in adverse conditions more challenging. This study examined the effect of ADHD on LE among adults using a behavioral dual-task paradigm (DTP). Method Thirty-nine normal-hearing adults (aged 21-27 years) participated: 19 with ADHD (ADHD group) and 20 without ADHD (control group). Baseline group differences were measured in visual and auditory attention as well as speech perception. LE using DTP was assessed as the performance difference on a visual-motor task versus a simultaneous auditory and visual-motor task. Results Group differences in attention were confirmed by differences in visual attention (larger reaction times between congruent and incongruent conditions) and auditory attention (lower accuracy in the presence of distractors) among the ADHD group, compared to the controls. LE was greater among the ADHD group than the control group. Nevertheless, no group differences were found in speech perception. Conclusions LE is increased among those with ADHD. As a DTP assumes limited cognitive capacity to allocate attentional resources, LE among those with ADHD may be increased because higher-level cognitive processes are more taxed in this population. Studies on LE using a DTP should take into consideration mechanisms of selective and divided attention. Among young adults who need to continuously process great volumes of auditory and visual information, much more effort may be expended by those with ADHD than those without it. As a result, those with ADHD may be more prone to fatigue and irritability, similar to those who are engaged in more outwardly demanding tasks.
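The dual-task logic can be summarized in a few lines: effort is indexed by the decrement in secondary-task performance from the single-task baseline to the concurrent listening condition. The sketch below uses made-up reaction times and is only one common way of expressing the dual-task cost.

```python
# Minimal dual-task-paradigm index of listening effort: proportional slowing
# of the secondary visual-motor task when performed during listening.
def dual_task_cost(single_task_rt_ms: float, dual_task_rt_ms: float) -> float:
    return (dual_task_rt_ms - single_task_rt_ms) / single_task_rt_ms

print(dual_task_cost(450.0, 540.0))  # 0.2 => 20% slowing under concurrent listening
```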
Affiliation(s)
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Ariel University, Israel
- Meuhedet Health Services, Tel Aviv, Israel
- Zvi Kozol
- Department of Physiotherapy, Ariel University, Israel
- Leah Fostick
- Department of Communication Disorders, Ariel University, Israel
13.
Abstract
It is widely accepted that seeing a talker improves a listener's ability to understand what a talker is saying in background noise (e.g., Erber, 1969; Sumby & Pollack, 1954). The literature is mixed, however, regarding the influence of the visual modality on the listening effort required to recognize speech (e.g., Fraser, Gagné, Alepins, & Dubois, 2010; Sommers & Phelps, 2016). Here, we present data showing that even when the visual modality robustly benefits recognition, processing audiovisual speech can still result in greater cognitive load than processing speech in the auditory modality alone. We show using a dual-task paradigm that the costs associated with audiovisual speech processing are more pronounced in easy listening conditions, in which speech can be recognized at high rates in the auditory modality alone; indeed, effort did not differ between audiovisual and audio-only conditions when the background noise was presented at a more difficult level. Further, we show that though these effects replicate with different stimuli and participants, they do not emerge when effort is assessed with a recall paradigm rather than a dual-task paradigm. Together, these results suggest that the widely cited audiovisual recognition benefit may come at a cost under more favorable listening conditions, and add to the growing body of research suggesting that various measures of effort may not be tapping into the same underlying construct (Strand et al., 2018).
14. Auditory-frontal Channeling in α and β Bands is Altered by Age-related Hearing Loss and Relates to Speech Perception in Noise. Neuroscience. 2019;423:18-28. PMID: 31705894; DOI: 10.1016/j.neuroscience.2019.10.044.
Abstract
Difficulty understanding speech-in-noise (SIN) is a pervasive problem faced by older adults particularly those with hearing loss. Previous studies have identified structural and functional changes in the brain that contribute to older adults' speech perception difficulties. Yet, many of these studies use neuroimaging techniques that evaluate only gross activation in isolated brain regions. Neural oscillations may provide further insight into the processes underlying SIN perception as well as the interaction between auditory cortex and prefrontal linguistic brain regions that mediate complex behaviors. We examined frequency-specific neural oscillations and functional connectivity of the EEG in older adults with and without hearing loss during an active SIN perception task. Brain-behavior correlations revealed listeners who were more resistant to the detrimental effects of noise also demonstrated greater modulation of α phase coherence between clean and noise-degraded speech, suggesting α desynchronization reflects release from inhibition and more flexible allocation of neural resources. Additionally, we found top-down β connectivity between prefrontal and auditory cortices strengthened with poorer hearing thresholds despite minimal behavioral differences. This is consistent with the proposal that linguistic brain areas may be recruited to compensate for impoverished auditory inputs through increased top-down predictions to assist SIN perception. Overall, these results emphasize the importance of top-down signaling in low-frequency brain rhythms that help compensate for hearing-related declines and facilitate efficient SIN processing.
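A compact sketch of band-limited phase coherence (the phase-locking value) between an auditory and a frontal EEG channel, the type of α/β coupling measure described above. The filter settings and inputs are assumptions; note that PLV is symmetric, so the directed (top-down) connectivity reported in the paper would require an asymmetric measure such as Granger causality instead.

```python
# Phase-locking value (PLV) between two EEG channels within a frequency band.
# Inputs: 1-D channel arrays and the sampling rate; band defaults to alpha.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(ch_a: np.ndarray, ch_b: np.ndarray, fs: float, band=(8.0, 12.0)) -> float:
    b, a = butter(4, band, btype="bandpass", fs=fs)      # band-pass filter design
    phase_a = np.angle(hilbert(filtfilt(b, a, ch_a)))    # instantaneous phase, channel A
    phase_b = np.angle(hilbert(filtfilt(b, a, ch_b)))    # instantaneous phase, channel B
    return float(np.abs(np.mean(np.exp(1j * (phase_a - phase_b)))))  # 0 = none, 1 = perfect
```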
15.
Abstract
OBJECTIVES The present study investigated presentation modality differences in lexical encoding and working memory representations of spoken words of older, hearing-impaired adults. Two experiments were undertaken: a memory-scanning experiment and a stimulus gating experiment. The primary objective of experiment 1 was to determine whether memory encoding and retrieval and scanning speeds are different for easily identifiable words presented in auditory-visual (AV), auditory-only (AO), and visual-only (VO) modalities. The primary objective of experiment 2 was to determine if memory encoding and retrieval speed differences observed in experiment 1 could be attributed to the early availability of AV speech information compared with AO or VO conditions. DESIGN Twenty-six adults over age 60 years with bilateral mild to moderate sensorineural hearing loss participated in experiment 1, and 24 adults who took part in experiment 1 participated in experiment 2. An item recognition reaction-time paradigm (memory-scanning) was used in experiment 1 to measure (1) lexical encoding speed, that is, the speed at which an easily identifiable word was recognized and placed into working memory, and (2) retrieval speed, that is, the speed at which words were retrieved from memory and compared with similarly encoded words (memory scanning) presented in AV, AO, and VO modalities. Experiment 2 used a time-gated word identification task to test whether the time course of stimulus information available to participants predicted the modality-related memory encoding and retrieval speed results from experiment 1. RESULTS The results of experiment 1 revealed significant differences among the modalities with respect to both memory encoding and retrieval speed, with AV fastest and VO slowest. These differences motivated an examination of the time course of stimulus information available as a function of modality. Results from experiment 2 indicated the encoding and retrieval speed advantages for AV and AO words compared with VO words were mostly driven by the time course of stimulus information. The AV advantage seen in encoding and retrieval speeds is likely due to a combination of robust stimulus information available to the listener earlier in time and lower attentional demands compared with AO or VO encoding and retrieval. CONCLUSIONS Significant modality differences in lexical encoding and memory retrieval speeds were observed across modalities. The memory scanning speed advantage observed for AV compared with AO or VO modalities was strongly related to the time course of stimulus information. In contrast, lexical encoding and retrieval speeds for VO words could not be explained by the time-course of stimulus information alone. Working memory processes for the VO modality may be impacted by greater attentional demands and less information availability compared with the AV and AO modalities. Overall, these results support the hypothesis that the presentation modality for speech inputs (AV, AO, or VO) affects how older adult listeners with hearing loss encode, remember, and retrieve what they hear.
16. Francis AL, Love J. Listening effort: Are we measuring cognition or affect, or both? Wiley Interdiscip Rev Cogn Sci. 2019;11:e1514. PMID: 31381275; DOI: 10.1002/wcs.1514.
Abstract
Listening effort is increasingly recognized as a factor in communication, particularly for and with nonnative speakers, for the elderly, for individuals with hearing impairment and/or for those working in noise. However, as highlighted by McGarrigle et al., International Journal of Audiology, 2014, 53, 433-445, the term "listening effort" encompasses a wide variety of concepts, including the engagement and control of multiple possibly distinct neural systems for information processing, and the affective response to the expenditure of those resources in a given context. Thus, experimental or clinical methods intended to objectively quantify listening effort may ultimately reflect a complex interaction between the operations of one or more of those information processing systems, and/or the affective and motivational response to the demand on those systems. Here we examine theoretical, behavioral, and psychophysiological factors related to resolving the question of what we are measuring, and why, when we measure "listening effort." This article is categorized under: Linguistics > Language in Mind and Brain; Psychology > Theory and Methods; Psychology > Attention; Psychology > Emotion and Motivation.
Affiliation(s)
- Alexander L Francis
- Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, Indiana
- Jordan Love
- Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, Indiana
17. "Paying" attention to audiovisual speech: Do incongruent stimuli incur greater costs? Atten Percept Psychophys. 2019;81:1743-1756. PMID: 31197661; DOI: 10.3758/s13414-019-01772-x.
Abstract
The McGurk effect is a multisensory phenomenon in which discrepant auditory and visual speech signals typically result in an illusory percept. McGurk stimuli are often used in studies assessing the attentional requirements of audiovisual integration, but no study has directly compared the costs associated with integrating congruent versus incongruent audiovisual speech. Some evidence suggests that the McGurk effect may not be representative of naturalistic audiovisual speech processing: susceptibility to the McGurk effect is not associated with the ability to derive benefit from the addition of the visual signal, and distinct cortical regions are recruited when processing congruent versus incongruent speech. In two experiments, one using response times to identify congruent and incongruent syllables and one using a dual-task paradigm, we assessed whether congruent and incongruent audiovisual speech incur different attentional costs. We demonstrated that response times to both the speech task (Experiment 1) and a secondary vibrotactile task (Experiment 2) were indistinguishable for congruent compared to incongruent syllables, but McGurk fusions were responded to more quickly than McGurk non-fusions. These results suggest that despite documented differences in how congruent and incongruent stimuli are processed, they do not appear to differ in terms of processing time or effort, at least in the open-set speech task used here. However, responses that result in McGurk fusions are processed more quickly than those that result in non-fusions, though attentional cost is comparable for the two response types.
18. Strand JF, Brown VA, Barbour DL. Talking points: A modulating circle reduces listening effort without improving speech recognition. Psychon Bull Rev. 2019;26:291-297. PMID: 29790122; DOI: 10.3758/s13423-018-1489-7.
Abstract
Speech recognition is improved when the acoustic input is accompanied by visual cues provided by a talking face (Erber in Journal of Speech and Hearing Research, 12(2), 423-425, 1969; Sumby & Pollack in The Journal of the Acoustical Society of America, 26(2), 212-215, 1954). One way that the visual signal facilitates speech recognition is by providing the listener with information about fine phonetic detail that complements information from the auditory signal. However, given that degraded face stimuli can still improve speech recognition accuracy (Munhall et al. in Perception & Psychophysics, 66(4), 574-583, 2004), and static or moving shapes can improve speech detection accuracy (Bernstein et al. in Speech Communication, 44(1/4), 5-18, 2004), aspects of the visual signal other than fine phonetic detail may also contribute to the perception of speech. In two experiments, we show that a modulating circle providing information about the onset, offset, and acoustic amplitude envelope of the speech does not improve recognition of spoken sentences (Experiment 1) or words (Experiment 2), but does reduce the effort necessary to recognize speech. These results suggest that although fine phonetic detail may be required for the visual signal to benefit speech recognition, low-level features of the visual signal may function to reduce the cognitive effort associated with processing speech.
Affiliation(s)
- Julia F Strand
- Department of Psychology, Carleton College, Northfield, MN, USA.
- Violet A Brown
- Department of Psychology, Carleton College, Northfield, MN, USA
- Dennis L Barbour
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, USA
19. Michalek AMP, Ash I, Schwartz K. The independence of working memory capacity and audiovisual cues when listening in noise. Scand J Psychol. 2018;59:578-585. DOI: 10.1111/sjop.12480.
Affiliation(s)
- Anne M. P. Michalek
- Department of Communication Disorders & Special Education, Old Dominion University, Norfolk, VA, USA
- Ivan Ash
- Department of Psychology, Old Dominion University, Norfolk, VA, USA
- Kathryn Schwartz
- Department of Communication Disorders & Special Education, Old Dominion University, Norfolk, VA, USA
20. Strand JF, Brown VA, Merchant MB, Brown HE, Smith J. Measuring Listening Effort: Convergent Validity, Sensitivity, and Links With Cognitive and Personality Measures. J Speech Lang Hear Res. 2018;61:1463-1486. PMID: 29800081; DOI: 10.1044/2018_jslhr-h-17-0257.
Abstract
PURPOSE Listening effort (LE) describes the attentional or cognitive requirements for successful listening. Despite substantial theoretical and clinical interest in LE, inconsistent operationalization makes it difficult to make generalizations across studies. The aims of this large-scale validation study were to evaluate the convergent validity and sensitivity of commonly used measures of LE and assess how scores on those tasks relate to cognitive and personality variables. METHOD Young adults with normal hearing (N = 111) completed 7 tasks designed to measure LE, 5 tests of cognitive ability, and 2 personality measures. RESULTS Scores on some behavioral LE tasks were moderately intercorrelated but were generally not correlated with subjective and physiological measures of LE, suggesting that these tasks may not be tapping into the same underlying construct. LE measures differed in their sensitivity to changes in signal-to-noise ratio and the extent to which they correlated with cognitive and personality variables. CONCLUSIONS Given that LE measures do not show consistent, strong intercorrelations and differ in their relationships with cognitive and personality predictors, these findings suggest caution in generalizing across studies that use different measures of LE. The results also indicate that people with greater cognitive ability appear to use their resources more efficiently, thereby diminishing the detrimental effects associated with increased background noise during language processing.
Affiliation(s)
- Julia F Strand
- Department of Psychology, Carleton College, Northfield, MN
- Violet A Brown
- Department of Psychology, Carleton College, Northfield, MN
- Hunter E Brown
- Department of Psychology, Carleton College, Northfield, MN
- Julia Smith
- Department of Psychology, Carleton College, Northfield, MN
21. Nirme J, Haake M, Lyberg Åhlander V, Brännström J, Sahlén B. A virtual speaker in noisy classroom conditions: supporting or disrupting children's listening comprehension? Logoped Phoniatr Vocol. 2018;44:79-86. PMID: 29619859; DOI: 10.1080/14015439.2018.1455894.
Abstract
AIM Seeing a speaker's face facilitates speech recognition, particularly under noisy conditions. Evidence for how it might affect comprehension of the content of the speech is more sparse. We investigated how children's listening comprehension is affected by multi-talker babble noise, with or without presentation of a digitally animated virtual speaker, and whether successful comprehension is related to performance on a test of executive functioning. MATERIALS AND METHODS We performed a mixed-design experiment with 55 (34 female) participants (8- to 9-year-olds), recruited from Swedish elementary schools. The children were presented with four different narratives, each in one of four conditions: audio-only presentation in a quiet setting, audio-only presentation in noisy setting, audio-visual presentation in a quiet setting, and audio-visual presentation in a noisy setting. After each narrative, the children answered questions on the content and rated their perceived listening effort. Finally, they performed a test of executive functioning. RESULTS We found significantly fewer correct answers to explicit content questions after listening in noise. This negative effect was only mitigated to a marginally significant degree by audio-visual presentation. Strong executive function only predicted more correct answers in quiet settings. CONCLUSIONS Altogether, our results are inconclusive regarding how seeing a virtual speaker affects listening comprehension. We discuss how methodological adjustments, including modifications to our virtual speaker, can be used to discriminate between possible explanations to our results and contribute to understanding the listening conditions children face in a typical classroom.
Affiliation(s)
- Jens Nirme
- Division of Cognitive Science, Lund University, Lund, Sweden
- Magnus Haake
- Division of Cognitive Science, Lund University, Lund, Sweden
- Viveka Lyberg Åhlander
- Division of Logopedics, Phoniatrics and Vocology, Department of Clinical Sciences, Lund University, Lund, Sweden
- Jonas Brännström
- Division of Logopedics, Phoniatrics and Vocology, Department of Clinical Sciences, Lund University, Lund, Sweden
- Birgitta Sahlén
- Division of Logopedics, Phoniatrics and Vocology, Department of Clinical Sciences, Lund University, Lund, Sweden
22. Zekveld AA, Pronk M, Danielsson H, Rönnberg J. Reading Behind the Lines: The Factors Affecting the Text Reception Threshold in Hearing Aid Users. J Speech Lang Hear Res. 2018;61:762-775. PMID: 29450534; DOI: 10.1044/2017_jslhr-h-17-0196.
Abstract
PURPOSE The visual Text Reception Threshold (TRT) test (Zekveld et al., 2007) has been designed to assess modality-general factors relevant for speech perception in noise. In the last decade, the test has been adopted in audiology labs worldwide. The 1st aim of this study was to examine which factors best predict interindividual differences in the TRT. Second, we aimed to assess the relationships between the TRT and the speech reception thresholds (SRTs) estimated in various conditions. METHOD First, we reviewed studies reporting relationships between the TRT and the auditory and/or cognitive factors and formulated specific hypotheses regarding the TRT predictors. These hypotheses were tested using a prediction model applied to a rich data set of 180 hearing aid users. In separate association models, we tested the relationships between the TRT and the various SRTs and subjective hearing difficulties, while taking into account potential confounding variables. RESULTS The results of the prediction model indicate that the TRT is predicted by the ability to fill in missing words in incomplete sentences, by lexical access speed, and by working memory capacity. Furthermore, in line with previous studies, a moderate association between higher age, poorer pure-tone hearing acuity, and poorer TRTs was observed. Better TRTs were associated with better SRTs for the correct perception of 50% of Hagerman matrix sentences in a 4-talker babble, as well as with better subjective ratings of speech perception. Age and pure-tone hearing thresholds significantly confounded these associations. The associations of the TRT with SRTs estimated in other conditions and with subjective qualities of hearing were not statistically significant when adjusting for age and pure-tone average. CONCLUSIONS We conclude that the abilities tapped into by the TRT test include processes relevant for speeded lexical decision making when completing partly masked sentences and that these processes require working memory capacity. Furthermore, the TRT is associated with the SRT of hearing aid users as estimated in a challenging condition that includes informational masking and with experienced difficulties with speech perception in daily-life conditions. The current results underline the value of using the TRT test in studies involving speech perception and aid in the interpretation of findings acquired using the test.
Collapse
Affiliation(s)
- Adriana A Zekveld
- Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping, Sweden
- Section Ear & Hearing, Department of Otolaryngology/Head & Neck Surgery, Amsterdam Public Health Research Institute, VU University Medical Center, the Netherlands
| | - Marieke Pronk
- Section Ear & Hearing, Department of Otolaryngology/Head & Neck Surgery, Amsterdam Public Health Research Institute, VU University Medical Center, the Netherlands
| | - Henrik Danielsson
- Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping, Sweden
| | - Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping, Sweden
| |
Collapse
|
23
|
Looking Behavior and Audiovisual Speech Understanding in Children With Normal Hearing and Children With Mild Bilateral or Unilateral Hearing Loss. Ear Hear 2017; 39:783-794. [PMID: 29252979 DOI: 10.1097/aud.0000000000000534] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES Visual information from talkers facilitates speech intelligibility for listeners when audibility is challenged by environmental noise and hearing loss. Less is known about how listeners actively process and attend to visual information from different talkers in complex multi-talker environments. This study tracked looking behavior in children with normal hearing (NH), mild bilateral hearing loss (MBHL), and unilateral hearing loss (UHL) in a complex multi-talker environment to examine the extent to which children look at talkers and whether looking patterns relate to performance on a speech-understanding task. It was hypothesized that performance would decrease as perceptual complexity increased and that children with hearing loss would perform more poorly than their peers with NH. Children with MBHL or UHL were expected to demonstrate greater attention to individual talkers during multi-talker exchanges, indicating that they were more likely to attempt to use visual information from talkers to assist in speech understanding in adverse acoustics. It also was of interest to examine whether MBHL, versus UHL, would differentially affect performance and looking behavior. DESIGN Eighteen children with NH, eight children with MBHL, and 10 children with UHL participated (8-12 years). They followed audiovisual instructions for placing objects on a mat under three conditions: a single talker providing instructions via a video monitor, four possible talkers alternately providing instructions on separate monitors in front of the listener, and the same four talkers providing both target and nontarget information. Multi-talker background noise was presented at a 5 dB signal-to-noise ratio during testing. An eye tracker monitored looking behavior while children performed the experimental task. RESULTS Behavioral task performance was higher for children with NH than for either group of children with hearing loss. There were no differences in performance between children with UHL and children with MBHL. Eye-tracker analysis revealed that children with NH looked more at the screens overall than did children with MBHL or UHL, though individual differences were greater in the groups with hearing loss. Listeners in all groups spent a small proportion of time looking at relevant screens as talkers spoke. Although looking was distributed across all screens, there was a bias toward the right side of the display. There was no relationship between overall looking behavior and performance on the task. CONCLUSIONS The present study examined the processing of audiovisual speech in the context of a naturalistic task. Results demonstrated that children distributed their looking to a variety of sources during the task, but that children with NH were more likely to look at screens than were those with MBHL/UHL. However, all groups looked at the relevant talkers as they were speaking only a small proportion of the time. Despite variability in looking behavior, listeners were able to follow the audiovisual instructions and children with NH demonstrated better performance than children with MBHL/UHL. These results suggest that performance on some challenging multi-talker audiovisual tasks is not dependent on visual fixation to relevant talkers for children with NH or with MBHL/UHL.
Collapse
|
24
|
Miller CW, Stewart EK, Wu YH, Bishop C, Bentler RA, Tremblay K. Working Memory and Speech Recognition in Noise Under Ecologically Relevant Listening Conditions: Effects of Visual Cues and Noise Type Among Adults With Hearing Loss. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2017; 60:2310-2320. [PMID: 28744550 PMCID: PMC5829805 DOI: 10.1044/2017_jslhr-h-16-0284] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/12/2016] [Revised: 09/23/2016] [Accepted: 02/04/2017] [Indexed: 06/07/2023]
Abstract
PURPOSE This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. METHOD Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, two measures of WM were taken: a reading span measure and the Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was under unaided conditions. RESULTS A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. CONCLUSION The contribution of WM in explaining unaided speech recognition in noise was negligible and not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, larger effect sizes are needed.
Collapse
Affiliation(s)
- Christi W. Miller
- Department of Speech and Hearing Sciences, University of Washington, Seattle
| | - Erin K. Stewart
- Department of Speech and Hearing Sciences, University of Washington, Seattle
| | - Yu-Hsiang Wu
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
| | - Christopher Bishop
- Department of Speech and Hearing Sciences, University of Washington, Seattle
| | - Ruth A. Bentler
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
| | - Kelly Tremblay
- Department of Speech and Hearing Sciences, University of Washington, Seattle
| |
Collapse
|
25
|
Ellis RJ, Sörqvist P, Zekveld AA, Rönnberg J. Editorial: Cognitive Hearing Mechanisms of Language Understanding: Short- and Long-Term Perspectives. Front Psychol 2017; 8:1060. [PMID: 28690579 PMCID: PMC5479909 DOI: 10.3389/fpsyg.2017.01060] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2017] [Accepted: 06/08/2017] [Indexed: 11/13/2022] Open
Affiliation(s)
- Rachel J Ellis
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Patrik Sörqvist
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Building, Energy and Environmental Engineering, University of Gävle, Gävle, Sweden
| | - Adriana A Zekveld
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Section Ear and Hearing, Department of Otolaryngology-Head and Neck Surgery and Amsterdam Public Health Research Institute, VU University Medical Center, Amsterdam, Netherlands
| | - Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| |
Collapse
|
26
|
Sommers MS, Tye-Murray N, Barcroft J, Spehar BP. The Effects of Meaning-Based Auditory Training on Behavioral Measures of Perceptual Effort in Individuals with Impaired Hearing. Semin Hear 2016; 36:263-72. [PMID: 27587913 DOI: 10.1055/s-0035-1564454] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022] Open
Abstract
There has been considerable interest in measuring the perceptual effort required to understand speech, as well as to identify factors that might reduce such effort. In the current study, we investigated whether, in addition to improving speech intelligibility, auditory training also could reduce perceptual or listening effort. Perceptual effort was assessed using a modified version of the n-back memory task in which participants heard lists of words presented without background noise and were asked to continually update their memory of the three most recently presented words. Perceptual effort was indexed by memory for items in the three-back position immediately before, immediately after, and 3 months after participants completed the Computerized Learning Exercises for Aural Rehabilitation (clEAR), a 12-session computerized auditory training program. Immediate posttraining measures of perceptual effort indicated that participants could remember approximately one additional word compared to pretraining. Moreover, some training gains were retained at the 3-month follow-up, as indicated by significantly greater recall for the three-back item at the 3-month measurement than at pretest. There was a small but significant correlation between gains in intelligibility and gains in perceptual effort. The findings are discussed within the framework of a limited-capacity speech perception system.
Collapse
Affiliation(s)
- Mitchell S Sommers
- Department of Psychological and Brain Sciences, Washington University in St. Louis
| | - Nancy Tye-Murray
- Department of Otolaryngology, Washington University School of Medicine
| | - Joe Barcroft
- Department of Romance Languages and Literatures, Washington University, St. Louis, Missouri
| | - Brent P Spehar
- Department of Otolaryngology, Washington University School of Medicine
| |
Collapse
|
27
|
Carroll R, Warzybok A, Kollmeier B, Ruigendijk E. Age-Related Differences in Lexical Access Relate to Speech Recognition in Noise. Front Psychol 2016; 7:990. [PMID: 27458400 PMCID: PMC4930932 DOI: 10.3389/fpsyg.2016.00990] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2015] [Accepted: 06/16/2016] [Indexed: 11/21/2022] Open
Abstract
Vocabulary size has been suggested as a useful measure of “verbal abilities” that correlates with speech recognition scores. Knowing more words is linked to better speech recognition. How vocabulary knowledge translates to general speech recognition mechanisms, how these mechanisms relate to offline speech recognition scores, and how they may be modulated by acoustical distortion or age, is less clear. Age-related differences in linguistic measures may predict age-related differences in speech recognition in noise performance. We hypothesized that speech recognition performance can be predicted by the efficiency of lexical access, which refers to the speed with which a given word can be searched and accessed relative to the size of the mental lexicon. We tested speech recognition in a clinical German sentence-in-noise test at two signal-to-noise ratios (SNRs), in 22 younger (18–35 years) and 22 older (60–78 years) listeners with normal hearing. We also assessed receptive vocabulary, lexical access time, verbal working memory, and hearing thresholds as measures of individual differences. Age group, SNR level, vocabulary size, and lexical access time were significant predictors of individual speech recognition scores, but working memory and hearing threshold were not. Interestingly, longer accessing times were correlated with better speech recognition scores. Hierarchical regression models for each subset of age group and SNR showed very similar patterns: the combination of vocabulary size and lexical access time contributed most to speech recognition performance; only for the younger group at the better SNR (yielding about 85% correct speech recognition) did vocabulary size alone predict performance. Our data suggest that successful speech recognition in noise is mainly modulated by the efficiency of lexical access. This suggests that older adults’ poorer performance in the speech recognition task may have arisen from reduced efficiency in lexical access; with an average vocabulary size similar to that of younger adults, they were still slower in lexical access.
Collapse
Affiliation(s)
- Rebecca Carroll
- Cluster of Excellence 'Hearing4all', University of Oldenburg, Oldenburg, Germany; Institute of Dutch Studies, University of Oldenburg, Oldenburg, Germany
| | - Anna Warzybok
- Cluster of Excellence 'Hearing4all', University of Oldenburg, Oldenburg, Germany; Medizinische Physik, University of Oldenburg, Oldenburg, Germany
| | - Birger Kollmeier
- Cluster of Excellence 'Hearing4all', University of Oldenburg, Oldenburg, Germany; Medizinische Physik, University of Oldenburg, Oldenburg, Germany
| | - Esther Ruigendijk
- Cluster of Excellence 'Hearing4all', University of Oldenburg, Oldenburg, Germany; Institute of Dutch Studies, University of Oldenburg, Oldenburg, Germany
| |
Collapse
|
29
|
Vercammen C, Goossens T, Wouters J, van Wieringen A. How age affects memory task performance in clinically normal hearing persons. AGING NEUROPSYCHOLOGY AND COGNITION 2016; 24:264-280. [PMID: 27338260 DOI: 10.1080/13825585.2016.1200005] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
The main objective of this study is to investigate memory task performance in different age groups, irrespective of hearing status. Data are collected on a short-term memory task (WAIS-III Digit Span forward) and two working memory tasks (WAIS-III Digit Span backward and the Reading Span Test). The tasks are administered to young (20-30 years, n = 56), middle-aged (50-60 years, n = 47), and older participants (70-80 years, n = 16) with normal hearing thresholds. All participants have passed a cognitive screening task (Montreal Cognitive Assessment, MoCA). Young participants perform significantly better than middle-aged participants, while middle-aged and older participants perform similarly on the three memory tasks. Our data show that older clinically normal hearing persons perform as well on the memory tasks as middle-aged persons. However, even under optimal conditions of preserved sensory processing, changes in memory performance occur. Based on our data, these changes set in before middle age.
Collapse
Affiliation(s)
- Charlotte Vercammen
- Department of Neurosciences, Research Group Experimental Oto-rhino-laryngology, KU Leuven - University of Leuven, Leuven, Belgium
| | - Tine Goossens
- Department of Neurosciences, Research Group Experimental Oto-rhino-laryngology, KU Leuven - University of Leuven, Leuven, Belgium
| | - Jan Wouters
- Department of Neurosciences, Research Group Experimental Oto-rhino-laryngology, KU Leuven - University of Leuven, Leuven, Belgium
| | - Astrid van Wieringen
- Department of Neurosciences, Research Group Experimental Oto-rhino-laryngology, KU Leuven - University of Leuven, Leuven, Belgium
| |
Collapse
|
30
|
Rudner M, Mishra S, Stenfelt S, Lunner T, Rönnberg J. Seeing the Talker's Face Improves Free Recall of Speech for Young Adults With Normal Hearing but Not Older Adults With Hearing Loss. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2016; 59:590-599. [PMID: 27280873 DOI: 10.1044/2015_jslhr-h-15-0014] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/14/2015] [Accepted: 11/18/2015] [Indexed: 06/06/2023]
Abstract
PURPOSE Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. METHOD Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13 two-digit numbers, with alternating male and female talkers. Lists were presented in quiet as well as in stationary and speech-like noise at a signal-to-noise ratio giving approximately 90% intelligibility. Amplification compensated for loss of audibility. RESULTS Seeing the talker's face improved free recall performance for the younger but not the older group. Poorer performance in background noise was contingent on individual differences in working memory capacity. The effect of seeing the talker's face did not differ in quiet and noise. CONCLUSIONS We have argued that the absence of an effect of seeing the talker's face for older adults with hearing loss may be due to modulation of audiovisual integration mechanisms caused by an interaction between task demands and participant characteristics. In particular, we suggest that executive task demands and interindividual executive skills may play a key role in determining the benefit of seeing the talker's face during a speech-based cognitive task.
Collapse
|
31
|
Cardin V. Effects of Aging and Adult-Onset Hearing Loss on Cortical Auditory Regions. Front Neurosci 2016; 10:199. [PMID: 27242405 PMCID: PMC4862970 DOI: 10.3389/fnins.2016.00199] [Citation(s) in RCA: 88] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2015] [Accepted: 04/22/2016] [Indexed: 11/13/2022] Open
Abstract
Hearing loss is a common feature in human aging. It has been argued that dysfunctions in central processing are important contributing factors to hearing loss during older age. Aging also has well-documented consequences for neural structure and function, but it is not clear how these effects interact with those that arise as a consequence of hearing loss. This paper reviews the effects of aging and adult-onset hearing loss on the structure and function of cortical auditory regions. The evidence reviewed suggests that aging and hearing loss result in atrophy of cortical auditory regions and stronger engagement of networks involved in the detection of salient events, adaptive control, and re-allocation of attention. These cortical mechanisms are engaged during listening in effortful conditions in normal hearing individuals. Therefore, as a consequence of aging and hearing loss, all listening becomes effortful and cognitive load is constantly high, reducing the amount of available cognitive resources. This constant effortful listening and reduced cognitive spare capacity could be what accelerates cognitive decline in older adults with hearing loss.
Collapse
Affiliation(s)
- Velia Cardin
- Department of Experimental Psychology, Deafness, Cognition and Language Research Centre, University College London, London, UK; Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| |
Collapse
|
32
|
Roberts KL, Allen HA. Perception and Cognition in the Ageing Brain: A Brief Review of the Short- and Long-Term Links between Perceptual and Cognitive Decline. Front Aging Neurosci 2016; 8:39. [PMID: 26973514 PMCID: PMC4772631 DOI: 10.3389/fnagi.2016.00039] [Citation(s) in RCA: 103] [Impact Index Per Article: 11.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2015] [Accepted: 02/15/2016] [Indexed: 11/13/2022] Open
Abstract
Ageing is associated with declines in both perception and cognition. We review evidence for an interaction between perceptual and cognitive decline in old age. Impoverished perceptual input can increase the cognitive difficulty of tasks, while changes to cognitive strategies can compensate, to some extent, for impaired perception. While there is strong evidence from cross-sectional studies for a link between sensory acuity and cognitive performance in old age, there is not yet compelling evidence from longitudinal studies to suggest that poor perception causes cognitive decline, nor to demonstrate that correcting sensory impairment can improve cognition in the longer term. Most studies have focused on relatively simple measures of sensory (visual and auditory) acuity, but more complex measures of suprathreshold perceptual processes, such as temporal processing, can show a stronger link with cognition. The reviewed evidence underlines the importance of fully accounting for perceptual deficits when investigating cognitive decline in old age.
Collapse
Affiliation(s)
| | - Harriet A Allen
- School of Psychology, University of Nottingham, Nottingham, UK
| |
Collapse
|
33
|
Frölander HE, Möller C, Rudner M, Mishra S, Marshall JD, Piacentini H, Lyxell B. Theory-of-mind in individuals with Alström syndrome is related to executive functions, and verbal ability. Front Psychol 2015; 6:1426. [PMID: 26441796 PMCID: PMC4585138 DOI: 10.3389/fpsyg.2015.01426] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2015] [Accepted: 09/07/2015] [Indexed: 02/03/2023] Open
Abstract
Objective: This study focuses on cognitive prerequisites for the development of theory-of-mind (ToM), the ability to impute mental states to self and others, in young adults with Alström syndrome (AS). AS is a rare, quite recently described, recessively inherited ciliopathic disorder which causes progressive sensorineural hearing loss and juvenile blindness, as well as many other organ dysfunctions. Two cognitive abilities were considered: phonological working memory (WM) and executive functions (EF), both of importance in speech development. Methods: Ten individuals (18–37 years) diagnosed with AS and 20 individuals with no known impairment, matched for age, gender, and educational level, participated. Sensory functions were measured. Information about motor functions and communicative skills was obtained from responses to a questionnaire. ToM was assessed using Happé's strange stories, verbal ability by a vocabulary test, phonological WM by means of an auditorily presented non-word serial recall task, and EF by tests of updating and inhibition. Results: The AS group performed at a significantly lower level than the control group in both the ToM task and the EF tasks. A significant correlation was observed between recall of non-words and EF in the AS group. Updating, but not inhibition, correlated significantly with verbal ability, whereas both updating and inhibition were significantly related to the ability to initiate and sustain communication. Poorer performance in the ToM and EF tasks was related to language perseverance and motor mannerisms. Conclusion: The AS group displayed a delayed ToM as well as reduced phonological WM, EF, and verbal ability. A significant association between ToM and EF suggests a compensatory role of EF. This association may reflect the importance of EF for perceiving and processing input from the social environment when social interaction is challenged by dual sensory loss. We argue that limitations in EF capacity in individuals with AS may, to some extent, be related to early blindness and progressive hearing loss, but perhaps also to gene-specific abnormalities.
Collapse
Affiliation(s)
- Hans-Erik Frölander
- Health Academy, School of Health and Medical Sciences, Örebro University, Örebro, Sweden; Audiological Research Centre, Örebro University Hospital, Örebro, Sweden; Swedish Institute for Disability Research, Linköping, Sweden; Linnaeus Centre HEAD, Linköping, Sweden; Research on Hearing and Deafness (HEAD) Graduate School, Linköping, Sweden
| | - Claes Möller
- Health Academy, School of Health and Medical Sciences, Örebro University, Örebro, Sweden; Audiological Research Centre, Örebro University Hospital, Örebro, Sweden; Swedish Institute for Disability Research, Linköping, Sweden; Linnaeus Centre HEAD, Linköping, Sweden; Department of Audiology, Örebro University Hospital, Örebro, Sweden
| | - Mary Rudner
- Swedish Institute for Disability Research, Linköping, Sweden; Linnaeus Centre HEAD, Linköping, Sweden; Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
| | - Sushmit Mishra
- Institute of Health Sciences, Utkal University, Bhubaneswar, India
| | - Jan D Marshall
- Jackson Laboratory, Bar Harbor, ME, USA; Alstrom Syndrome International, Mount Desert, ME, USA
| | | | - Björn Lyxell
- Swedish Institute for Disability Research, Linköping, Sweden; Linnaeus Centre HEAD, Linköping, Sweden; Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
| |
Collapse
|
34
|
Rudner M, Toscano E, Holmer E. Load and distinctness interact in working memory for lexical manual gestures. Front Psychol 2015; 6:1147. [PMID: 26321979 PMCID: PMC4535352 DOI: 10.3389/fpsyg.2015.01147] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2015] [Accepted: 07/23/2015] [Indexed: 11/13/2022] Open
Abstract
The Ease of Language Understanding model (Rönnberg et al., 2013) predicts that decreasing the distinctness of language stimuli increases working memory load; in the speech domain this notion is supported by empirical evidence. Our aim was to determine whether such an over-additive interaction can be generalized to sign processing in sign-naïve individuals and whether it is modulated by experience of computer gaming. Twenty young adults with no knowledge of sign language performed an n-back working memory task based on manual gestures lexicalized in sign language; the visual resolution of the signs and working memory load were manipulated. Performance was poorer when load was high and resolution was low. These two effects interacted over-additively, demonstrating that reducing the resolution of signed stimuli increases working memory load when there is no pre-existing semantic representation. This suggests that load and distinctness are handled by a shared amodal mechanism which can be revealed empirically when stimuli are degraded and load is high, even without pre-existing semantic representation. There was some evidence that the mechanism is influenced by computer gaming experience. Future work should explore how the shared mechanism is influenced by pre-existing semantic representation and sensory factors together with computer gaming experience.
Collapse
Affiliation(s)
- Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
| | - Elena Toscano
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
| | - Emil Holmer
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
| |
Collapse
|
35
|
Keidser G, Best V, Freeston K, Boyce A. Cognitive spare capacity: evaluation data and its association with comprehension of dynamic conversations. Front Psychol 2015; 6:597. [PMID: 25999904 PMCID: PMC4422016 DOI: 10.3389/fpsyg.2015.00597] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2015] [Accepted: 04/22/2015] [Indexed: 11/29/2022] Open
Abstract
It is well-established that communication involves the working memory system, which becomes increasingly engaged in understanding speech as the input signal degrades. The more resources allocated to recovering a degraded input signal, the fewer resources, referred to as cognitive spare capacity (CSC), remain for higher-level processing of speech. Using simulated natural listening environments, the aims of this paper were to (1) evaluate an English version of a recently introduced auditory test to measure CSC that targets the updating process of the executive function, (2) investigate if the test predicts speech comprehension better than the reading span test (RST) commonly used to measure working memory capacity, and (3) determine if the test is sensitive to increasing the number of attended locations during listening. In Experiment I, the CSC test was presented using a male and a female talker, in quiet and in spatially separated babble- and cafeteria-noises, in an audio-only and in an audio-visual mode. Data collected on 21 listeners with normal and impaired hearing confirmed that the English version of the CSC test is sensitive to population group, noise condition, and clarity of speech, but not presentation modality. In Experiment II, performance by 27 normal-hearing listeners on a novel speech comprehension test presented in noise was significantly associated with working memory capacity, but not with CSC. Moreover, this group showed no significant difference in CSC as the number of talker locations in the test increased. There was no consistent association between the CSC test and the RST. It is recommended that future studies investigate the psychometric properties of the CSC test, and examine its sensitivity to the complexity of the listening environment in participants with both normal and impaired hearing.
Collapse
Affiliation(s)
- Gitte Keidser
- National Acoustic Laboratories, Sydney, NSW, Australia
| | - Virginia Best
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, USA
| | | | - Alexandra Boyce
- Department of Audiology, Macquarie University, Sydney, NSW, Australia
| |
Collapse
|
36
|
Rönnberg N, Rudner M, Lunner T, Stenfelt S. Memory performance on the Auditory Inference Span Test is independent of background noise type for young adults with normal hearing at high speech intelligibility. Front Psychol 2014; 5:1490. [PMID: 25566159 PMCID: PMC4273615 DOI: 10.3389/fpsyg.2014.01490] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2014] [Accepted: 12/03/2014] [Indexed: 11/17/2022] Open
Abstract
Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use AIST to investigate the effect of background noise types and signal-to-noise ratio (SNR) on listening effort, as a function of working memory capacity (WMC) and updating ability (UA). The AIST was administered in three types of background noise: steady-state speech-shaped noise, amplitude modulated speech-shaped noise, and unintelligible speech. Three SNRs targeting 90% speech intelligibility or better were used in each of the three noise types, giving nine different conditions. The reading span test assessed WMC, while UA was assessed with the letter memory test. Twenty young adults with normal hearing participated in the study. Results showed that AIST performance was not influenced by noise type at the same intelligibility level, but became worse with worse SNR when background noise was speech-like. Performance on AIST also decreased with increasing memory load level. Correlations between AIST performance and the cognitive measurements suggested that WMC is of more importance for listening when SNRs are worse, while UA is of more importance for listening in easier SNRs. The results indicated that in young adults with normal hearing, the effort involved in listening in noise at high intelligibility levels is independent of the noise type. However, when noise is speech-like and intelligibility decreases, listening effort increases, probably due to extra demands on cognitive resources added by the informational masking created by the speech fragments and vocal sounds in the background noise.
Collapse
Affiliation(s)
- Niklas Rönnberg
- Technical Audiology, Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
| | - Thomas Lunner
- Technical Audiology, Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Oticon Research Centre Eriksholm, Snekkersten, Denmark
| | - Stefan Stenfelt
- Technical Audiology, Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| |
Collapse
|
37
|
Cognitive spare capacity and speech communication: a narrative overview. BIOMED RESEARCH INTERNATIONAL 2014; 2014:869726. [PMID: 24971355 PMCID: PMC4058272 DOI: 10.1155/2014/869726] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 02/10/2014] [Accepted: 05/13/2014] [Indexed: 01/27/2023]
Abstract
Background noise can make speech communication tiring and cognitively taxing, especially for individuals with hearing impairment. It is now well established that better working memory capacity is associated with better ability to understand speech under adverse conditions as well as better ability to benefit from the advanced signal processing in modern hearing aids. Recent work has shown that although such processing cannot overcome hearing handicap, it can increase cognitive spare capacity, that is, the ability to engage in higher level processing of speech. This paper surveys recent work on cognitive spare capacity and suggests new avenues of investigation.
Collapse
|
38
|
Mishra S, Stenfelt S, Lunner T, Rönnberg J, Rudner M. Cognitive spare capacity in older adults with hearing loss. Front Aging Neurosci 2014; 6:96. [PMID: 24904409 PMCID: PMC4033040 DOI: 10.3389/fnagi.2014.00096] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2013] [Accepted: 04/29/2014] [Indexed: 12/28/2022] Open
Abstract
Individual differences in working memory capacity (WMC) are associated with speech recognition in adverse conditions, reflecting the need to maintain and process speech fragments until lexical access can be achieved. When working memory resources are engaged in unlocking the lexicon, there is less Cognitive Spare Capacity (CSC) available for higher level processing of speech. CSC is essential for interpreting the linguistic content of speech input and preparing an appropriate response, that is, engaging in conversation. Previously, using a Cognitive Spare Capacity Test (CSCT), we showed that in young adults with normal hearing, CSC was not generally related to WMC and that when CSC decreased in noise it could be restored by visual cues. In the present study, we investigated CSC in 24 older adults with age-related hearing loss by administering the CSCT and a battery of cognitive tests. We found generally reduced CSC in older adults with hearing loss compared to the younger group in our previous study, probably because they had poorer cognitive skills and deployed them differently. Importantly, CSC was not reduced in the older group when listening conditions were optimal. Visual cues improved CSC more for this group than for the younger group in our previous study. CSC of older adults with hearing loss was not generally related to WMC, but it was consistently related to episodic long-term memory, suggesting that the efficiency of this processing bottleneck is important for executive processing of speech in this group.
Collapse
Affiliation(s)
- Sushmit Mishra
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Stefan Stenfelt
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
| | - Thomas Lunner
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden; Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
| | - Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Mary Rudner
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| |
Collapse
|
39
|
Zekveld AA, Rudner M, Kramer SE, Lyzenga J, Rönnberg J. Cognitive processing load during listening is reduced more by decreasing voice similarity than by increasing spatial separation between target and masker speech. Front Neurosci 2014; 8:88. [PMID: 24808818 PMCID: PMC4010736 DOI: 10.3389/fnins.2014.00088] [Citation(s) in RCA: 53] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2013] [Accepted: 04/07/2014] [Indexed: 11/24/2022] Open
Abstract
We investigated changes in speech recognition and cognitive processing load due to the masking release attributable to decreasing similarity between target and masker speech. This was achieved by using masker voices with either the same (female) gender as the target speech or different gender (male) and/or by spatially separating the target and masker speech using head-related transfer functions (HRTFs). We assessed the relation between the signal-to-noise ratio required for 50% sentence intelligibility, the pupil response, and cognitive abilities. We hypothesized that the pupil response, a measure of cognitive processing load, would be larger for co-located maskers and for same-gender compared to different-gender maskers. We further expected that better cognitive abilities would be associated with better speech perception and larger pupil responses as the allocation of larger capacity may result in more intense mental processing. In line with previous studies, the performance benefit from different-gender compared to same-gender maskers was larger for co-located masker signals. The performance benefit of spatially separated maskers was larger for same-gender maskers. The pupil response was larger for same-gender than for different-gender maskers, but was not reduced by spatial separation. We observed associations between better perception performance and better working memory, better information updating, and better executive abilities when applying no corrections for multiple comparisons. The pupil response was not associated with cognitive abilities. Thus, although both gender and location differences between target and masker facilitate speech perception, only gender differences lower cognitive processing load. Presenting a more dissimilar masker may facilitate target-masker separation at a later (cognitive) processing stage than increasing the spatial separation between the target and masker. The pupil response provides information about speech perception that complements intelligibility data.
Collapse
Affiliation(s)
- Adriana A Zekveld
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre for Hearing and Deafness Research, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Linköping, Sweden; Section Audiology, Department of Otolaryngology-Head and Neck Surgery and EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, Netherlands
| | - Mary Rudner
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre for Hearing and Deafness Research, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Linköping, Sweden
| | - Sophia E Kramer
- Section Audiology, Department of Otolaryngology-Head and Neck Surgery and EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, Netherlands
| | - Johannes Lyzenga
- Section Audiology, Department of Otolaryngology-Head and Neck Surgery and EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, Netherlands
| | - Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre for Hearing and Deafness Research, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Linköping, Sweden
| |
Collapse
|
40
|
Michalek AMP, Watson SM, Ash I, Ringleb S, Raymer A. Effects of noise and audiovisual cues on speech processing in adults with and without ADHD. Int J Audiol 2014; 53:145-52. [DOI: 10.3109/14992027.2013.866282] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
|
41
|
Rönnberg N, Rudner M, Lunner T, Stenfelt S. Assessing listening effort by measuring short-term memory storage and processing of speech in noise. SPEECH LANGUAGE AND HEARING 2014. [DOI: 10.1179/2050572813y.0000000033] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
|
42
|
Mishra S, Lunner T, Stenfelt S, Rönnberg J, Rudner M. Seeing the talker's face supports executive processing of speech in steady state noise. Front Syst Neurosci 2013; 7:96. [PMID: 24324411 PMCID: PMC3840300 DOI: 10.3389/fnsys.2013.00096] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2013] [Accepted: 11/09/2013] [Indexed: 11/21/2022] Open
Abstract
Listening to speech in noise depletes cognitive resources, affecting speech processing. The present study investigated how remaining resources or cognitive spare capacity (CSC) can be deployed by young adults with normal hearing. We administered a test of CSC (CSCT; Mishra et al., 2013) along with a battery of established cognitive tests to 20 participants with normal hearing. In the CSCT, lists of two-digit numbers were presented with and without visual cues in quiet, as well as in steady-state and speech-like noise at a high intelligibility level. In low load conditions, two numbers were recalled according to instructions inducing executive processing (updating, inhibition), and in high load conditions the participants were additionally instructed to recall one extra number, which was always the first item in the list. In line with previous findings, results showed that CSC was sensitive to memory load and executive function but generally not related to working memory capacity (WMC). Furthermore, CSCT scores in quiet were lowered by visual cues, probably due to distraction. In steady-state noise, the presence of visual cues improved CSCT scores, probably by enabling better encoding. Contrary to our expectation, CSCT performance was disrupted more in steady-state than speech-like noise, although only without visual cues, possibly because selective attention could be used to ignore the speech-like background and provide an enriched representation of target items in working memory similar to that obtained in quiet. This interpretation is supported by a consistent association between CSCT scores and updating skills.
Collapse
Affiliation(s)
- Sushmit Mishra
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
| | | | | | | | | |
Collapse
|
43
|
Ng EHN, Rudner M, Lunner T, Rönnberg J. Relationships between self-report and cognitive measures of hearing aid outcome. SPEECH LANGUAGE AND HEARING 2013. [PMID: 26213622 PMCID: PMC4500453 DOI: 10.1179/205057113x13782848890774] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
The present study examined the relationship between cognitive measures and self-report hearing aid outcome. A sentence-final word identification and recall (SWIR) test was used to investigate how hearing aid use may relate to experienced explicit cognitive processing. A visually based cognitive test battery was also administered. To measure self-report hearing aid outcome, the International Outcome Inventory – Hearing Aids (IOI-HA) and the Speech, Spatial and Qualities of Hearing Scale (SSQ) were employed. Twenty-six experienced hearing aid users (mean age of 59 years) with symmetrical moderate-to-moderately severe sensorineural hearing loss were recruited. Free recall performance in the SWIR test correlated negatively with item 3 of the IOI-HA, which measures residual difficulty in adverse listening situations. Cognitive abilities related to verbal information processing were correlated positively with self-reported hearing aid use and overall success. The present study showed that reported residual difficulty with hearing aids may relate to experienced explicit processing in difficult listening conditions, such that individuals with better cognitive capacity tended to report more remaining difficulty in challenging listening situations. The possibility of using cognitive measures to predict hearing aid outcome in real life should be explored in future research.
Collapse
Affiliation(s)
- Elaine Hoi Ning Ng
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Swedish Institute for Disability Research, Linköping University, Sweden
| | - Mary Rudner
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Swedish Institute for Disability Research, Linköping University, Sweden
| | - Thomas Lunner
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Swedish Institute for Disability Research, Linköping University, Sweden; Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark; and Department of Clinical and Experimental Medicine, Linköping University, Sweden
| | - Jerker Rönnberg
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Swedish Institute for Disability Research, Linköping University, Sweden
| |
Collapse
|
44
|
Rönnberg J, Lunner T, Zekveld A, Sörqvist P, Danielsson H, Lyxell B, Dahlström O, Signoret C, Stenfelt S, Pichora-Fuller MK, Rudner M. The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances. Front Syst Neurosci 2013; 7:31. [PMID: 23874273 PMCID: PMC3710434 DOI: 10.3389/fnsys.2013.00031] [Citation(s) in RCA: 608] [Impact Index Per Article: 50.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2013] [Accepted: 06/24/2013] [Indexed: 12/28/2022] Open
Abstract
Working memory is important for online language processing during conversation. We use it to maintain relevant information, to inhibit or ignore irrelevant information, and to attend to conversation selectively. Working memory helps us to keep track of and actively participate in conversation, including taking turns and following the gist. This paper examines the Ease of Language Understanding model (i.e., the ELU model, Rönnberg, 2003; Rönnberg et al., 2008) in light of new behavioral and neural findings concerning the role of working memory capacity (WMC) in uni-modal and bimodal language processing. The new ELU model is a meaning prediction system that depends on phonological and semantic interactions in rapid implicit and slower explicit processing mechanisms that both depend on WMC albeit in different ways. It is based on findings that address the relationship between WMC and (a) early attention processes in listening to speech, (b) signal processing in hearing aids and its effects on short-term memory, (c) inhibition of speech maskers and its effect on episodic long-term memory, (d) the effects of hearing impairment on episodic and semantic long-term memory, and finally, (e) listening effort. New predictions and clinical implications are outlined. Comparisons with other WMC and speech perception models are made.
Collapse
Affiliation(s)
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | | | | | | | | | | | | | | | | | | | | |
Collapse
|
46
|
Besser J, Koelewijn T, Zekveld AA, Kramer SE, Festen JM. How linguistic closure and verbal working memory relate to speech recognition in noise--a review. Trends Amplif 2013; 17:75-93. [PMID: 23945955 PMCID: PMC4070613 DOI: 10.1177/1084713813495459] [Citation(s) in RCA: 100] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
Abstract
The ability to recognize masked speech, commonly measured with a speech reception threshold (SRT) test, is associated with cognitive processing abilities. Two cognitive factors frequently assessed in speech recognition research are the capacity of working memory (WM), measured by means of a reading span (Rspan) or listening span (Lspan) test, and the ability to read masked text (linguistic closure), measured by the text reception threshold (TRT). The current article provides a review of recent hearing research that examined the relationship of TRT and WM span to SRTs in various maskers. Furthermore, modality differences in WM capacity assessed with the Rspan compared to the Lspan test were examined and related to speech recognition abilities in an experimental study with young adults with normal hearing (NH). Span scores were strongly associated with each other, but were higher in the auditory modality. The results of the reviewed studies suggest that TRT and WM span are related to each other, but differ in their relationships with SRT performance. In NH adults of middle age or older, both TRT and Rspan were associated with SRTs in speech maskers, whereas TRT better predicted speech recognition in fluctuating nonspeech maskers. The associations with SRTs in steady-state noise were inconclusive for both measures. WM span was positively related to benefit from contextual information in speech recognition, but better TRTs related to less interference from unrelated cues. Data for individuals with impaired hearing are limited, but larger WM span seems to give a general advantage in various listening situations.
Collapse
Affiliation(s)
- Jana Besser
- VU University Medical Center, Amsterdam, Netherlands
| | | | - Adriana A. Zekveld
- VU University Medical Center, Amsterdam, Netherlands
- The Swedish Institute for Disability Research, Sweden
- Linköping University, Linköping, Sweden
| | | | | |
Collapse
|