1. Bieber RE, Phillips I, Ellis GM, Brungart DS. Current Age and Language Use Impact Speech-in-Noise Differently for Monolingual and Bilingual Adults. J Speech Lang Hear Res 2025;68:2026-2046. PMID: 40020659. DOI: 10.1044/2024_jslhr-24-00264.
Abstract
PURPOSE Some bilinguals may exhibit lower performance when recognizing speech in noise (SiN) in their second language (L2) compared to monolinguals in their first language. Poorer performance has been found mostly for late bilinguals (L2 acquired after childhood) listening to sentences containing linguistic context and less so for simultaneous/early bilinguals (L2 acquired during childhood) and when testing context-free stimuli. However, most previous studies tested younger participants, meaning little is known about interactions with age; the purpose of this study was to address this gap. METHOD Context-free SiN understanding was measured via the Modified Rhyme Test (MRT) in 3,803 young and middle-aged bilingual and monolingual adults (ages 18-57 years; 19.6% bilinguals, all L2 English) with normal to near-normal hearing. Bilingual adults included simultaneous (n = 462), early (n = 185), and late (n = 97) bilinguals. Performance on the MRT was measured with both accuracy and response time. A self-reported measure of current English use was also collected for bilinguals to evaluate its impact on MRT performance. RESULTS Current age impacted MRT accuracy scores differently for each listener group. Relative to monolinguals, simultaneous and early bilinguals showed decreased performance with older age. Response times slowed with increasing current age at similar rates for all groups, despite faster overall response times for monolinguals. Among all bilingual listeners, greater current English language use predicted higher MRT accuracy. For simultaneous bilinguals, greater English use was associated with faster response times. CONCLUSIONS SiN outcomes in bilingual adults are impacted by age at time of testing and by fixed features of their language history (i.e., age of acquisition) as well as language practices, which can shift over time (i.e., current language use). Results support routine querying of language history and use in the audiology clinic. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.28405430.

Affiliation(s)
- Rebecca E Bieber: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD; Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD
- Ian Phillips: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD; Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD
- Gregory M Ellis: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD; Po'okela Solutions, LLC, Honolulu, HI
- Douglas S Brungart: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD

2. Lunardelo PP, Fukuda MTH, Zanchetta S. Age-Related Listening Performance Changes Across Adulthood. Ear Hear 2025;46:408-420. PMID: 39370558. DOI: 10.1097/aud.0000000000001595.
Abstract
OBJECTIVES This study compares auditory processing performance across different decades of adulthood, including young adults and middle-aged individuals with normal hearing and no spontaneous auditory complaints. DESIGN We assessed 80 participants with normal hearing, at least 10 years of education, and normal global cognition. The participants completed various auditory tests, including speech-in-noise, dichotic digits, duration, pitch pattern sequence, gap in noise, and masking level difference. In addition, we conducted working memory assessments and administered a questionnaire on self-perceived hearing difficulties. RESULTS Our findings revealed significant differences in auditory test performance across different age groups, except for the masking level difference. The youngest group outperformed all other age groups in the speech-in-noise test, while differences in dichotic listening and temporal resolution emerged from the age of 40 and in temporal ordering from the age of 50. Moreover, higher education levels and better working memory test scores were associated with better auditory performance as individuals aged. However, the influence of these factors varied across different auditory tests. It is interesting that we observed increased self-reported hearing difficulties with age, even in participants without spontaneous auditory complaints. CONCLUSIONS Our study highlights significant variations in auditory test performance, with noticeable changes occurring from age 30 and becoming more pronounced from age 40 onward. As individuals grow older, they tend to perceive more hearing difficulties. Furthermore, the impact of age on auditory processing performance is influenced by factors such as education and working memory.

Affiliation(s)
- Pamela P Lunardelo: Department of Psychology, School of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Brazil
- Marisa T H Fukuda: Department of Psychology, School of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Brazil; Department of Health Sciences, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil
- Sthella Zanchetta: Department of Health Sciences, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil

3. Bent T, Baese-Berk M, Puckett B, Ryherd E, Perry S, Manley NA. Older adults' recognition of medical terminology in hospital noise. Cogn Res Princ Implic 2024;9:79. PMID: 39636386. PMCID: PMC11621266. DOI: 10.1186/s41235-024-00606-1.
Abstract
Word identification accuracy is modulated by many factors including linguistic characteristics of words (frequent vs. infrequent), listening environment (noisy vs. quiet), and listener-related differences (older vs. younger). Nearly all studies investigating these factors use high-familiarity words and noise signals that are either energetic maskers (e.g., white noise) or informational maskers composed of competing talkers (e.g., multitalker babble). Here, we expand on these findings by examining younger and older listeners' speech-in-noise perception for words varying in both frequency and familiarity within a simulated hospital noise that contains important non-speech information. The method was inspired by the real-world challenges aging patients can face in understanding less familiar medical terminology used by healthcare professionals in noisy hospital environments. Word familiarity data from older and young adults were collected for 800 medically related terms. Familiarity ratings were highly correlated between the two age groups. Older adults' transcription accuracy for sentences with medical terminology varying in familiarity and frequency was assessed across four listening conditions: hospital noise, speech-shaped noise, amplitude-modulated speech-shaped noise, and quiet. Listeners were less accurate in the noise conditions than in the quiet condition and were more impacted by hospital noise than by either speech-shaped noise condition. Sentences with low-familiarity and low-frequency medical words combined with hospital noise were particularly detrimental for older adults compared to younger adults. The results inform our theoretical understanding of speech perception in noise and highlight the real-world consequences of older adults' difficulties with speech in noise, particularly noise containing competing non-speech information.

Affiliation(s)
- Tessa Bent: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Brian Puckett: Durham School of Architectural Engineering and Construction, University of Nebraska-Lincoln, Lincoln, USA
- Erica Ryherd: Durham School of Architectural Engineering and Construction, University of Nebraska-Lincoln, Lincoln, USA
- Sydney Perry: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Natalie A Manley: Division of Geriatrics, Gerontology and Palliative Medicine, Department of Internal Medicine, University of Nebraska Medical Center, Omaha, USA

4. Anshu K, Kristensen K, Godar SP, Zhou X, Hartley SL, Litovsky RY. Speech Recognition and Spatial Hearing in Young Adults With Down Syndrome: Relationships With Hearing Thresholds and Auditory Working Memory. Ear Hear 2024;45:1568-1584. PMID: 39090791. PMCID: PMC11493531. DOI: 10.1097/aud.0000000000001549.
Abstract
OBJECTIVES Individuals with Down syndrome (DS) have a higher incidence of hearing loss (HL) compared with their peers without developmental disabilities. Little is known about the associations between HL and functional hearing for individuals with DS. This study investigated two aspects of auditory functions, "what" (understanding the content of sound) and "where" (localizing the source of sound), in young adults with DS. Speech reception thresholds in quiet and in the presence of interferers provided insight into speech recognition, that is, the "what" aspect of auditory maturation. Insights into the "where" aspect of auditory maturation were gained from evaluating speech reception thresholds in colocated versus separated conditions (quantifying spatial release from masking) as well as right versus left discrimination and sound location identification. Auditory functions in the "where" domain develop during earlier stages of cognitive development in contrast with the later developing "what" functions. We hypothesized that young adults with DS would exhibit stronger "where" than "what" auditory functioning, albeit with the potential impact of HL. Considering the importance of auditory working memory and receptive vocabulary for speech recognition, we hypothesized that better speech recognition in young adults with DS, in quiet and with speech interferers, would be associated with better auditory working memory ability and receptive vocabulary. DESIGN Nineteen young adults with DS (aged 19 to 24 years) participated in the study and completed assessments on pure-tone audiometry, right versus left discrimination, sound location identification, and speech recognition in quiet and with speech interferers that were colocated or spatially separated. Results were compared with published data from children and adults without DS and HL, tested using similar protocols and stimuli. Digit Span tests assessed auditory working memory. Receptive vocabulary was examined using the Peabody Picture Vocabulary Test Fifth Edition. RESULTS Seven participants (37%) had HL in at least 1 ear; 4 individuals had mild HL, and 3 had moderate HL or worse. Participants with mild or no HL had ≥75% correct at 5° separation on the discrimination task and sound localization root mean square errors (mean ± SD: 8.73° ± 2.63°) within the range of adults in the comparison group. Speech reception thresholds in young adults with DS were higher than in all comparison groups. However, spatial release from masking did not differ between young adults with DS and comparison groups. Better (lower) speech reception thresholds were associated with better hearing and better auditory working memory ability. Receptive vocabulary did not predict speech recognition. CONCLUSIONS In the absence of HL, young adults with DS exhibited higher accuracy during spatial hearing tasks than during speech recognition tasks. Thus, auditory processes associated with the "where" pathways appear to be a relative strength compared with those associated with the "what" pathways in young adults with DS. Further, both HL and auditory working memory impairments contributed to difficulties in speech recognition in the presence of speech interferers. Future studies with larger samples are needed to replicate and extend our findings.
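
For readers unfamiliar with the metric referenced above, spatial release from masking (SRM) is conventionally quantified as the improvement in speech reception threshold (SRT) gained when maskers are spatially separated from the target. The formulation below is a generic statement of that standard definition, not the article's own analysis code.

```latex
% Spatial release from masking, in dB (positive values indicate a benefit
% from spatial separation of target and interferers).
\mathrm{SRM} = \mathrm{SRT}_{\mathrm{colocated}} - \mathrm{SRT}_{\mathrm{separated}}
```

For example, an SRT of -2 dB SNR in the colocated condition and -8 dB SNR in the separated condition corresponds to an SRM of 6 dB.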

Affiliation(s)
- Kumari Anshu: Waisman Center, University of Wisconsin–Madison, Madison, WI, USA
- Kayla Kristensen: Waisman Center, University of Wisconsin–Madison, Madison, WI, USA
- Shelly P. Godar: Waisman Center, University of Wisconsin–Madison, Madison, WI, USA
- Xin Zhou: Waisman Center, University of Wisconsin–Madison, Madison, WI, USA; currently at The Chinese University of Hong Kong, Hong Kong
- Sigan L. Hartley: Waisman Center, University of Wisconsin–Madison, Madison, WI, USA; School of Human Ecology, University of Wisconsin–Madison, Madison, WI, USA
- Ruth Y. Litovsky: Waisman Center, University of Wisconsin–Madison, Madison, WI, USA; Department of Communication Sciences and Disorders, University of Wisconsin–Madison, Madison, WI, USA

5. Linn W, Barrios-Martinez J, Fernandes-Cabral D, Jacquesson T, Nuñez M, Gomez R, Anania Y, Fernandez-Miranda J, Yeh F. Probabilistic coverage of the frontal aslant tract in young adults: Insights into individual variability, lateralization, and language functions. Hum Brain Mapp 2024;45:e26630. PMID: 38376145. PMCID: PMC10878181. DOI: 10.1002/hbm.26630.
Abstract
The frontal aslant tract (FAT) is a crucial neural pathway of language and speech, but little is known about its connectivity and segmentation differences across populations. In this study, we investigate the probabilistic coverage of the FAT in a large sample of 1065 young adults. Our primary goal was to reveal individual variability and lateralization of FAT and its structure-function correlations in language processing. The study utilized diffusion MRI data from 1065 subjects obtained from the Human Connectome Project. Automated tractography using DSI Studio software was employed to map white matter bundles, and the results were examined to study the population variation of the FAT. Additionally, anatomical dissections were performed to validate the fiber tracking results. The tract-to-region connectome, based on Human Connectome Project-MMP parcellations, was utilized to provide population probability of the tract-to-region connections. Our results showed that the left anterior FAT exhibited the most substantial individual differences, particularly in the superior and middle frontal gyrus, with greater variability in the superior than the inferior region. Furthermore, we found left lateralization in FAT, with a greater difference in coverage in the inferior and posterior portions. Additionally, our analysis revealed a significant positive correlation between the left FAT inferior coverage area and the performance on the oral reading recognition (p = .016) and picture vocabulary (p = .0026) tests. In comparison, fractional anisotropy of the right FAT exhibited marginal significance in its correlation (p = .056) with Picture Vocabulary Test. Our findings, combined with the connectivity patterns of the FAT, allowed us to segment its structure into anterior and posterior segments. We found significant variability in FAT coverage among individuals, with left lateralization observed in both macroscopic shape measures and microscopic diffusion metrics. Our findings also suggested a potential link between the size of the left FAT's inferior coverage area and language function tests. These results enhance our understanding of the FAT's role in brain connectivity and its potential implications for language and executive functions.

Affiliation(s)
- Wen-Jieh Linn: Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Timothée Jacquesson: CHU de Lyon – Hôpital Neurologique et Neurochirurgical Pierre Wertheimer, Lyon, France
- Maximiliano Nuñez: Department of Neurological Surgery, Hospital El Cruce, Buenos Aires, Argentina
- Ricardo Gomez: Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Yury Anania: Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Fang-Cheng Yeh: Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania, USA

6. Johns MA, Calloway RC, Karunathilake IMD, Decruy LP, Anderson S, Simon JZ, Kuchinsky SE. Attention Mobilization as a Modulator of Listening Effort: Evidence From Pupillometry. Trends Hear 2024;28:23312165241245240. PMID: 38613337. PMCID: PMC11015766. DOI: 10.1177/23312165241245240.
Abstract
Listening to speech in noise can require substantial mental effort, even among younger normal-hearing adults. The task-evoked pupil response (TEPR) has been shown to track the increased effort exerted to recognize words or sentences in increasing noise. However, few studies have examined the trajectory of listening effort across longer, more natural, stretches of speech, or the extent to which expectations about upcoming listening difficulty modulate the TEPR. Seventeen younger normal-hearing adults listened to 60-s-long audiobook passages, repeated three times in a row, at two different signal-to-noise ratios (SNRs) while pupil size was recorded. There was a significant interaction between SNR, repetition, and baseline pupil size on sustained listening effort. At lower baseline pupil sizes, potentially reflecting lower attention mobilization, TEPRs were more sustained in the harder SNR condition, particularly when attention mobilization remained low by the third presentation. At intermediate baseline pupil sizes, differences between conditions were largely absent, suggesting these listeners had optimally mobilized their attention for both SNRs. Lastly, at higher baseline pupil sizes, potentially reflecting overmobilization of attention, the effect of SNR was initially reversed for the second and third presentations: participants initially appeared to disengage in the harder SNR condition, resulting in reduced TEPRs that recovered in the second half of the story. Together, these findings suggest that the unfolding of listening effort over time depends critically on the extent to which individuals have successfully mobilized their attention in anticipation of difficult listening conditions.

Affiliation(s)
- M. A. Johns: Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- R. C. Calloway: Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- I. M. D. Karunathilake: Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
- L. P. Decruy: Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- S. Anderson: Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA
- J. Z. Simon: Institute for Systems Research, University of Maryland, College Park, MD 20742, USA; Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA; Department of Biology, University of Maryland, College Park, MD 20742, USA
- S. E. Kuchinsky: Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA; National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD 20889, USA

7. Fogerty D, Ahlstrom JB, Dubno JR. Sentence recognition with modulation-filtered speech segments for younger and older adults: Effects of hearing impairment and cognition. J Acoust Soc Am 2023;154:3328-3343. PMID: 37983296. PMCID: PMC10663055. DOI: 10.1121/10.0022445.
Abstract
This study investigated word recognition for sentences temporally filtered within and across acoustic-phonetic segments providing primarily vocalic or consonantal cues. Amplitude modulation was filtered at syllabic (0-8 Hz) or slow phonemic (8-16 Hz) rates. Sentence-level modulation properties were also varied by amplifying or attenuating segments. Participants were older adults with normal or impaired hearing. Older adult speech recognition was compared to groups of younger normal-hearing adults who heard speech unmodified or spectrally shaped with and without threshold matching noise that matched audibility to hearing-impaired thresholds. Participants also completed cognitive and speech recognition measures. Overall, results confirm the primary contribution of syllabic speech modulations to recognition and demonstrate the importance of these modulations across vowel and consonant segments. Group differences demonstrated a hearing loss-related impairment in processing modulation-filtered speech, particularly at 8-16 Hz. This impairment could not be fully explained by age or poorer audibility. Principal components analysis identified a single factor score that summarized speech recognition across modulation-filtered conditions; analysis of individual differences explained 81% of the variance in this summary factor among the older adults with hearing loss. These results suggest that a combination of cognitive abilities and speech glimpsing abilities contribute to speech recognition in this group.
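
As background to the modulation-filtering manipulation described above, the sketch below illustrates one common way to restrict a signal's amplitude modulations to a syllabic (0-8 Hz) or slow phonemic (8-16 Hz) rate band using a Hilbert envelope. The filter design and parameter values are assumptions for illustration, not the authors' exact processing chain.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def modulation_filter(signal, fs, band=(0.0, 8.0)):
    """Restrict a signal's amplitude modulations to one rate band.

    Extracts the Hilbert envelope, band-limits it to the requested
    modulation rates (e.g., 0-8 Hz "syllabic" or 8-16 Hz "slow phonemic"),
    and re-imposes the filtered envelope on the temporal fine structure.
    Generic illustration only; not the study's exact processing.
    """
    analytic = hilbert(signal)
    envelope = np.abs(analytic)
    fine_structure = np.cos(np.angle(analytic))

    lo, hi = band
    if lo <= 0:                        # low-pass for the 0-8 Hz band
        sos = butter(4, hi, btype="low", fs=fs, output="sos")
    else:                              # band-pass for the 8-16 Hz band
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    filtered_env = np.maximum(sosfiltfilt(sos, envelope), 0.0)

    return filtered_env * fine_structure

# Example: keep only syllabic-rate modulations of a 1-s noise token.
fs = 16000
x = np.random.randn(fs)
y_syllabic = modulation_filter(x, fs, band=(0.0, 8.0))
```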

Affiliation(s)
- Daniel Fogerty: Department of Speech and Hearing Science, University of Illinois Urbana-Champaign, Champaign, Illinois 61820, USA
- Jayne B Ahlstrom: Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina 29425, USA
- Judy R Dubno: Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina 29425, USA

8. Pearson DV, Shen Y, McAuley JD, Kidd GR. Differential sensitivity to speech rhythms in young and older adults. Front Psychol 2023;14:1160236. PMID: 37251054. PMCID: PMC10213510. DOI: 10.3389/fpsyg.2023.1160236.
Abstract
Sensitivity to the temporal properties of auditory patterns tends to be poorer in older listeners, and this has been hypothesized to be one factor contributing to their poorer speech understanding. This study examined sensitivity to speech rhythms in young and older normal-hearing subjects, using a task designed to measure the effect of speech rhythmic context on the detection of changes in the timing of word onsets in spoken sentences. A temporal-shift detection paradigm was used in which listeners were presented with an intact sentence followed by two versions of the sentence in which a portion of speech was replaced with a silent gap: one with correct gap timing (the same duration as the missing speech) and one with altered gap timing (shorter or longer than the duration of the missing speech), resulting in an early or late resumption of the sentence after the gap. The sentences were presented with either an intact rhythm or an altered rhythm preceding the silent gap. Listeners judged which sentence had the altered gap timing, and thresholds for the detection of deviations from the correct timing were calculated separately for shortened and lengthened gaps. Both young and older listeners demonstrated lower thresholds in the intact rhythm condition than in the altered rhythm conditions. However, shortened gaps led to lower thresholds than lengthened gaps for the young listeners, while older listeners were not sensitive to the direction of the change in timing. These results show that both young and older listeners rely on speech rhythms to generate temporal expectancies for upcoming speech events. However, the absence of lower thresholds for shortened gaps among the older listeners indicates a change in speech-timing expectancies with age. A further examination of individual differences within the older group revealed that those with better rhythm-discrimination abilities (from a separate study) tended to show the same heightened sensitivity to early events observed with the young listeners.

Affiliation(s)
- Dylan V. Pearson: Department of Speech, Language, and Hearing Sciences, Indiana University, Bloomington, IN, United States
- Yi Shen: Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, United States
- J. Devin McAuley: Department of Psychology, Michigan State University, East Lansing, MI, United States
- Gary R. Kidd: Department of Speech, Language, and Hearing Sciences, Indiana University, Bloomington, IN, United States

9. Li T, Gao Y, Wu Y. The influences of working memory updating on word association effects and thematic role assignment during sentence processing. Neuropsychologia 2023;184:108547. PMID: 36967041. DOI: 10.1016/j.neuropsychologia.2023.108547.
Abstract
The current study investigated how individual variability in working memory (WM) updating affects real-time processing of thematic role assignment and word association during sentence reading comprehension, with ERPs recorded during reading. A factorial design was adopted in which four types of sentences were formed by crossing word association and role assignment as independent variables. The results indicated that associated words evoked a smaller N400 effect but a larger P600 effect than unassociated words in the high WM group, whereas no word association effect was found in the low WM group. In contrast, role reversal elicited larger N400 effects for both groups. These results suggest that individual differences in WM updating influenced whether and how readers retrieved and integrated the associated word within whole sentences but did not influence the online assignment of thematic roles during sentence reading. Individuals with high WM updating, in contrast to those with low WM updating, were adept at making use of word-association information provided by the preceding context during ongoing processing.

10. Johns MA, Calloway RC, Phillips I, Karuzis VP, Dutta K, Smith E, Shamma SA, Goupell MJ, Kuchinsky SE. Performance on stochastic figure-ground perception varies with individual differences in speech-in-noise recognition and working memory capacity. J Acoust Soc Am 2023;153:286. PMID: 36732241. PMCID: PMC9851714. DOI: 10.1121/10.0016756.
Abstract
Speech recognition in noisy environments can be challenging and requires listeners to accurately segregate a target speaker from irrelevant background noise. Stochastic figure-ground (SFG) tasks, in which temporally coherent inharmonic pure tones must be identified from a background, have been used to probe the non-linguistic auditory stream segregation processes important for speech-in-noise processing. However, little is known about the relationship between performance on SFG tasks and speech-in-noise tasks, or about the individual differences that may modulate such relationships. In this study, 37 younger normal-hearing adults performed an SFG task with target figure chords consisting of four, six, eight, or ten temporally coherent tones amongst a background of randomly varying tones. Stimuli were designed to be spectrally and temporally flat. An increased number of temporally coherent tones resulted in higher accuracy and faster reaction times (RTs). For ten target tones, faster RTs were associated with better scores on the Quick Speech-in-Noise task. Individual differences in working memory capacity and self-reported musicianship further modulated these relationships. Overall, results demonstrate that the SFG task could serve as an assessment of auditory stream segregation accuracy and RT that is sensitive to individual differences in cognitive and auditory abilities, even among younger normal-hearing adults.
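
To make the stimulus concept above concrete, the following sketch generates a stochastic figure-ground stimulus in which a fixed set of tone frequencies repeats across consecutive frames (the temporally coherent "figure") against frames of randomly drawn background tones. All parameter values and the frequency pool are illustrative assumptions rather than the study's specification.

```python
import numpy as np

def make_sfg_stimulus(fs=44100, n_frames=40, frame_dur=0.05,
                      n_background=10, n_figure=6, fig_start=15, fig_len=10,
                      freqs=None, rng=None):
    """Generate a toy stochastic figure-ground (SFG) stimulus.

    Each 50-ms frame contains randomly drawn background pure tones; during
    the "figure", a fixed set of tone frequencies repeats across consecutive
    frames, creating the temporal coherence that listeners can segregate.
    """
    rng = np.random.default_rng() if rng is None else rng
    if freqs is None:
        freqs = np.geomspace(200.0, 7000.0, 60)    # candidate tone pool (Hz)
    frame_len = int(fs * frame_dur)
    t = np.arange(frame_len) / fs
    figure_set = rng.choice(freqs, size=n_figure, replace=False)

    frames = []
    for i in range(n_frames):
        chosen = list(rng.choice(freqs, size=n_background, replace=False))
        if fig_start <= i < fig_start + fig_len:   # add coherent figure tones
            chosen.extend(figure_set)
        frame = sum(np.sin(2 * np.pi * f * t) for f in chosen)
        frames.append(frame / len(chosen))         # rough level normalization
    return np.concatenate(frames)

stimulus = make_sfg_stimulus()
```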

Affiliation(s)
- Michael A Johns: Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Regina C Calloway: Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Ian Phillips: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Valerie P Karuzis: Applied Research Laboratory of Intelligence and Security, University of Maryland, College Park, Maryland 20742, USA
- Kelsey Dutta: Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Ed Smith: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Shihab A Shamma: Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland 20742, USA
- Matthew J Goupell: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Stefanie E Kuchinsky: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA

11. Hülsmeier D, Kollmeier B. How much individualization is required to predict the individual effect of suprathreshold processing deficits? Assessing Plomp's distortion component with psychoacoustic detection thresholds and FADE. Hear Res 2022;426:108609. DOI: 10.1016/j.heares.2022.108609.

12. Völter C, Oberländer K, Haubitz I, Carroll R, Dazert S, Thomas JP. Poor Performer: A Distinct Entity in Cochlear Implant Users? Audiol Neurootol 2022;27:356-367. PMID: 35533653. PMCID: PMC9533457. DOI: 10.1159/000524107.
Abstract
INTRODUCTION Several factors are known to influence speech perception in cochlear implant (CI) users. To date, the underlying mechanisms have not yet been fully clarified. Although many CI users achieve a high level of speech perception, a small percentage of patients does not benefit from the CI, or benefits only slightly (poor performers, PP). In a previous study, PP showed significantly poorer results on nonauditory-based cognitive and linguistic tests than CI users with a very high level of speech understanding (star performers, SP). We now investigate whether PP also differ from CI users with average performance (average performers, AP) in cognitive and linguistic performance. METHODS Seventeen adult postlingually deafened CI users with speech perception scores in quiet of 55 (9.32) % (AP) on the German Freiburg monosyllabic speech test at 65 dB underwent neurocognitive (attention, working memory, short- and long-term memory, verbal fluency, inhibition) and linguistic testing (word retrieval, lexical decision, phonological input lexicon). The results were compared to the performance of 15 PP (speech perception score of 15 [11.80] %) and 19 SP (speech perception score of 80 [4.85] %). For statistical analysis, U tests and discrimination analyses were performed. RESULTS Significant differences between PP and AP were observed on linguistic tests, in Rapid Automatized Naming (RAN: p = 0.0026), lexical decision (LexDec: p = 0.026), phonological input lexicon (LEMO: p = 0.0085), and understanding of incomplete words (TRT: p = 0.0024). AP also had significantly better neurocognitive results than PP in the domains of attention (M3: p = 0.009) and working memory (OSPAN: p = 0.041; RST: p = 0.015), but not in delayed recall (p = 0.22), verbal fluency (p = 0.084), or inhibition (Flanker: p = 0.35). In contrast, no such differences were found between AP and SP. Based on the TRT and the RAN, AP and PP could be separated with 100% accuracy. DISCUSSION The results indicate that PP constitute a distinct entity of CI users that differs even in nonauditory abilities from CI users with average speech perception, especially with regard to rapid word retrieval, whether due to reduced phonological abilities or to limited storage. Further studies should investigate whether improving word retrieval through increased phonological and semantic training results in better speech perception in these CI users.

Affiliation(s)
- Christiane Völter: Department of Otorhinolaryngology, Head and Neck Surgery, Cochlear Implant Center Ruhrgebiet, St Elisabeth-Hospital, Ruhr University Bochum, Bochum, Germany
- Kirsten Oberländer: Department of Otorhinolaryngology, Head and Neck Surgery, Cochlear Implant Center Ruhrgebiet, St Elisabeth-Hospital, Ruhr University Bochum, Bochum, Germany
- Imme Haubitz: Department of Otorhinolaryngology, Head and Neck Surgery, Cochlear Implant Center Ruhrgebiet, St Elisabeth-Hospital, Ruhr University Bochum, Bochum, Germany
- Rebecca Carroll: Institute of English and American Studies, Technical University Braunschweig, Braunschweig, Germany
- Stefan Dazert: Department of Otorhinolaryngology, Head and Neck Surgery, Cochlear Implant Center Ruhrgebiet, St Elisabeth-Hospital, Ruhr University Bochum, Bochum, Germany
- Jan Peter Thomas: Department of Otorhinolaryngology, Head and Neck Surgery, St-Johannes-Hospital, Dortmund, Germany

13. Tamati TN, Sevich VA, Clausing EM, Moberly AC. Lexical Effects on the Perceived Clarity of Noise-Vocoded Speech in Younger and Older Listeners. Front Psychol 2022;13:837644. PMID: 35432072. PMCID: PMC9010567. DOI: 10.3389/fpsyg.2022.837644.
Abstract
When listening to degraded speech, such as speech delivered by a cochlear implant (CI), listeners make use of top-down linguistic knowledge to facilitate speech recognition. Lexical knowledge supports speech recognition and enhances the perceived clarity of speech. Yet, the extent to which lexical knowledge can be used to effectively compensate for degraded input may depend on the degree of degradation and the listener's age. The current study investigated lexical effects in the compensation for speech that was degraded via noise-vocoding in younger and older listeners. In an online experiment, younger and older normal-hearing (NH) listeners rated the clarity of noise-vocoded sentences on a scale from 1 ("very unclear") to 7 ("completely clear"). Lexical information was provided by matching text primes and the lexical content of the target utterance. Half of the sentences were preceded by a matching text prime, while half were preceded by a non-matching prime. Each sentence also consisted of three key words of high or low lexical frequency and neighborhood density. Sentences were processed to simulate CI hearing, using an eight-channel noise vocoder with varying filter slopes. Results showed that lexical information impacted the perceived clarity of noise-vocoded speech. Noise-vocoded speech was perceived as clearer when preceded by a matching prime, and when sentences included key words with high lexical frequency and low neighborhood density. However, the strength of the lexical effects depended on the level of degradation. Matching text primes had a greater impact for speech with poorer spectral resolution, but lexical content had a smaller impact for speech with poorer spectral resolution. Finally, lexical information appeared to benefit both younger and older listeners. Findings demonstrate that lexical knowledge can be employed by younger and older listeners in cognitive compensation during the processing of noise-vocoded speech. However, lexical content may not be as reliable when the signal is highly degraded. Clinical implications are that for adult CI users, lexical knowledge might be used to compensate for the degraded speech signal, regardless of age, but some CI users may be hindered by a relatively poor signal.
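
For orientation, the sketch below shows the general structure of an eight-channel noise vocoder of the kind described above: speech is split into analysis bands, each band's envelope is extracted, and the envelopes modulate band-limited noise carriers. Filter orders and band edges here are placeholder assumptions, whereas the study manipulated filter slopes to vary spectral resolution.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Minimal noise-vocoder sketch (cochlear implant simulation).

    Splits speech into logarithmically spaced analysis bands, extracts each
    band's Hilbert envelope, and uses it to modulate band-limited noise in
    the same frequency region.
    """
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    out = np.zeros_like(speech, dtype=float)
    noise = np.random.randn(len(speech))

    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        env = np.abs(hilbert(band))                # band envelope
        carrier = sosfiltfilt(sos, noise)          # band-limited noise carrier
        out += env * carrier

    return out / np.max(np.abs(out))               # simple peak normalization

# Example usage with a dummy signal standing in for a recorded sentence.
fs = 16000
dummy_sentence = np.random.randn(fs * 2)
vocoded = noise_vocode(dummy_sentence, fs)
```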

Affiliation(s)
- Terrin N. Tamati: Department of Otolaryngology – Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH, United States; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Victoria A. Sevich: Department of Speech and Hearing Science, The Ohio State University, Columbus, OH, United States
- Emily M. Clausing: Department of Otolaryngology – Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH, United States
- Aaron C. Moberly: Department of Otolaryngology – Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH, United States

14. Braza MD, Porter HL, Buss E, Calandruccio L, McCreery RW, Leibold LJ. Effects of word familiarity and receptive vocabulary size on speech-in-noise recognition among young adults with normal hearing. PLoS One 2022;17:e0264581. PMID: 35271608. PMCID: PMC8912124. DOI: 10.1371/journal.pone.0264581.
Abstract
Having a large receptive vocabulary benefits speech-in-noise recognition for young children, though this is not always the case for older children or adults. These observations could indicate that effects of receptive vocabulary size on speech-in-noise recognition differ depending on familiarity of the target words, with effects observed only for more recently acquired and less frequent words. Two experiments were conducted to evaluate effects of vocabulary size on open-set speech-in-noise recognition for adults with normal hearing. Targets were words acquired at 4, 9, 12 and 15 years of age, and they were presented at signal-to-noise ratios (SNRs) of -5 and -7 dB. Percent correct scores tended to fall with increasing age of acquisition (AoA), with the caveat that performance at -7 dB SNR was better for words acquired at 9 years of age than earlier- or later-acquired words. Similar results were obtained whether the AoA of the target words was blocked or mixed across trials. Differences in word duration appear to account for nonmonotonic effects of AoA. For all conditions, a positive correlation was observed between recognition and vocabulary size irrespective of target word AoA, indicating that effects of vocabulary size are not limited to recently acquired words. This dataset does not support differential assessment of AoA, lexical frequency, and other stimulus features known to affect lexical access.

Affiliation(s)
- Meredith D. Braza: Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America; Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, United States of America
- Heather L. Porter: Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, United States of America
- Emily Buss: Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America
- Lauren Calandruccio: Department of Psychological Sciences, Case Western Reserve University, Cleveland, Ohio, United States of America
- Ryan W. McCreery: Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, United States of America
- Lori J. Leibold: Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, United States of America

15. Destoky F, Bertels J, Niesen M, Wens V, Vander Ghinst M, Rovai A, Trotta N, Lallier M, De Tiège X, Bourguignon M. The role of reading experience in atypical cortical tracking of speech and speech-in-noise in dyslexia. Neuroimage 2022;253:119061. PMID: 35259526. DOI: 10.1016/j.neuroimage.2022.119061.
Abstract
Dyslexia is a frequent developmental disorder in which reading acquisition is delayed and that is usually associated with difficulties understanding speech in noise. At the neuronal level, children with dyslexia were reported to display abnormal cortical tracking of speech (CTS) at phrasal rate. Here, we aimed to determine if abnormal tracking relates to reduced reading experience, and if it is modulated by the severity of dyslexia or the presence of acoustic noise. We included 26 school-age children with dyslexia, 26 age-matched controls and 26 reading-level matched controls. All were native French speakers. Children's brain activity was recorded with magnetoencephalography while they listened to continuous speech in noiseless and multiple noise conditions. CTS values were compared between groups, conditions and hemispheres, and also within groups, between children with mild and severe dyslexia. Syllabic CTS was significantly reduced in the right superior temporal gyrus in children with dyslexia compared with controls matched for age but not for reading level. Severe dyslexia was characterized by lower rapid automatized naming (RAN) abilities compared with mild dyslexia, and phrasal CTS lateralized to the right hemisphere in children with mild dyslexia and all control groups but not in children with severe dyslexia. Finally, an alteration in phrasal CTS was uncovered in children with dyslexia compared with age-matched controls in babble noise conditions but not in other less challenging listening conditions (non-speech noise or noiseless conditions); no such effect was seen in comparison with reading-level matched controls. Overall, our results confirmed the finding of altered neuronal basis of speech perception in noiseless and babble noise conditions in dyslexia compared with age-matched peers. However, the absence of alteration in comparison with reading-level matched controls demonstrates that such alterations are associated with reduced reading level, suggesting they are merely driven by reduced reading experience rather than a cause of dyslexia. Finally, our result of altered hemispheric lateralization of phrasal CTS in relation with altered RAN abilities in severe dyslexia is in line with a temporal sampling deficit of speech at phrasal rate in dyslexia.
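
Cortical tracking of speech of the kind discussed above is often quantified as coherence between the speech envelope and the neural signal at the rates of interest. The sketch below is a generic, surrogate-data illustration of that idea and omits the source reconstruction and statistics that MEG studies such as this one typically involve.

```python
import numpy as np
from scipy.signal import coherence, hilbert, butter, sosfiltfilt

def cortical_tracking(neural_signal, audio, fs, band=(0.2, 1.5)):
    """Quantify cortical tracking of speech (CTS) as envelope coherence.

    Computes magnitude-squared coherence between the speech envelope and one
    neural time course, then averages it over a frequency band of interest
    (e.g., roughly phrasal or syllabic rates). Illustration only; not the
    study's MEG pipeline.
    """
    envelope = np.abs(hilbert(audio))
    # Mild low-pass to keep only slow envelope fluctuations.
    sos = butter(4, 25.0, btype="low", fs=fs, output="sos")
    envelope = sosfiltfilt(sos, envelope)

    f, coh = coherence(envelope, neural_signal, fs=fs, nperseg=int(10 * fs))
    in_band = (f >= band[0]) & (f <= band[1])
    return float(np.mean(coh[in_band]))

# Example with surrogate data (2 minutes sampled at 200 Hz).
fs = 200
audio = np.random.randn(fs * 120)
neural = 0.1 * np.abs(hilbert(audio)) + np.random.randn(fs * 120)
cts_value = cortical_tracking(neural, audio, fs)
```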

Affiliation(s)
- Florian Destoky: Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Julie Bertels: Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium; Consciousness, Cognition and Computation Group, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Maxime Niesen: Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium; Service d'ORL et de Chirurgie Cervico-Faciale, ULB-Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
- Vincent Wens: Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium; Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
- Marc Vander Ghinst: Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium; Service d'ORL et de Chirurgie Cervico-Faciale, ULB-Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
- Antonin Rovai: Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium; Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
- Nicola Trotta: Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium; Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
- Marie Lallier: BCBL, Basque Center on Cognition, Brain and Language, San Sebastian 20009, Spain
- Xavier De Tiège: Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium; Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
- Mathieu Bourguignon: Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium; BCBL, Basque Center on Cognition, Brain and Language, San Sebastian 20009, Spain; Laboratory of Neurophysiology and Movement Biomechanics, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium

16. Cutting Through the Noise: Noise-Induced Cochlear Synaptopathy and Individual Differences in Speech Understanding Among Listeners With Normal Audiograms. Ear Hear 2022;43:9-22. PMID: 34751676. PMCID: PMC8712363. DOI: 10.1097/aud.0000000000001147.
Abstract
Following a conversation in a crowded restaurant or at a lively party poses immense perceptual challenges for some individuals with normal hearing thresholds. A number of studies have investigated whether noise-induced cochlear synaptopathy (CS; damage to the synapses between cochlear hair cells and the auditory nerve following noise exposure that does not permanently elevate hearing thresholds) contributes to this difficulty. A few studies have observed correlations between proxies of noise-induced CS and speech perception in difficult listening conditions, but many have found no evidence of a relationship. To understand these mixed results, we reviewed previous studies that have examined noise-induced CS and performance on speech perception tasks in adverse listening conditions in adults with normal or near-normal hearing thresholds. Our review suggests that superficially similar speech perception paradigms used in previous investigations actually placed very different demands on sensory, perceptual, and cognitive processing. Speech perception tests that use low signal-to-noise ratios and maximize the importance of fine sensory details (specifically, by using test stimuli for which lexical, syntactic, and semantic cues do not contribute to performance) are more likely to show a relationship to estimated CS levels. Thus, the current controversy as to whether or not noise-induced CS contributes to individual differences in speech perception under challenging listening conditions may be due in part to the fact that many of the speech perception tasks used in past studies are relatively insensitive to CS-induced deficits.

17. Foreign Language Training to Stimulate Cognitive Functions. Brain Sci 2021;11:1315. PMID: 34679380. PMCID: PMC8533724. DOI: 10.3390/brainsci11101315.
Abstract
Adult development throughout the lifetime implies a series of changes across systems, including cognitive and linguistic functioning. The aim of this article is to study the effect of foreign language training on linguistic processing, particularly the frequency of the tip-of-the-tongue (TOT) phenomenon, and on other cognitive processes such as processing speed and working memory in adults aged 40 to 60 years. Sixty-six healthy Colombian teachers were enrolled in this study. They were randomly divided into an experimental group (33 healthy adults who underwent a four-week training period) and a passive control group (33 healthy adults who did not undergo any training). All participants completed TOT induction, working memory, and processing speed tasks before and after the four weeks. Results showed effects on the semantic access, phonological access, and processing speed measures, with better performance in the experimental group than in the control group. In Colombia, this type of training is still new, and little is known to date about programs to prevent cognitive impairment. The need for further studies confirming or refuting these findings is discussed, along with the potential of this type of training to improve the linguistic and cognitive performance of adults.

18. Hülsmeier D, Buhl M, Wardenga N, Warzybok A, Schädler MR, Kollmeier B. Inference of the distortion component of hearing impairment from speech recognition by predicting the effect of the attenuation component. Int J Audiol 2021;61:205-219. PMID: 34081564. DOI: 10.1080/14992027.2021.1929515.
Abstract
OBJECTIVE A model-based determination of the average supra-threshold ("distortion") component of hearing impairment, which limits the benefit of hearing aid amplification. DESIGN Published speech recognition thresholds (SRTs) were predicted with the framework for auditory discrimination experiments (FADE), which simulates recognition processes, the speech intelligibility index (SII), which exploits frequency-dependent signal-to-noise ratios (SNRs), and a modified SII with a hearing-loss-dependent band importance function (PAV). Their attenuation-component-based prediction errors were interpreted as estimates of the distortion component. STUDY SAMPLE Unaided SRTs of 315 hearing-impaired ears measured with the German matrix sentence test in stationary noise. RESULTS Overall, the models showed root-mean-square errors (RMSEs) of 7 dB, but for steeply sloping hearing loss FADE and PAV were more accurate (RMSE = 9 dB) than the SII (RMSE = 23 dB). Prediction errors of FADE and PAV increased linearly with the average hearing loss. The consideration of the distortion component estimate significantly improved the accuracy of FADE's and PAV's predictions. CONCLUSIONS The supra-threshold distortion component, estimated by the prediction errors of FADE and PAV, seems to increase with the average hearing loss. Accounting for a distortion component improves the model predictions and implies a need for effective compensation strategies for supra-threshold processing deficits with increasing audibility loss.
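
Schematically, the approach follows Plomp's decomposition of a speech reception threshold into an attenuation (audibility) component A and a distortion component D: when a model accounts only for the attenuation component, its prediction error can serve as an estimate of D, and the RMSE summarizes those errors across ears. The equations below are a simplified rendering of that logic, not the article's exact formulation.

```latex
\mathrm{SRT}_{\mathrm{observed}} \approx A + D, \qquad
\widehat{D} = \mathrm{SRT}_{\mathrm{observed}} - \mathrm{SRT}_{\mathrm{predicted}}^{\,\mathrm{attenuation\;only}}, \qquad
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \widehat{D}_i^{\,2}}
```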

Affiliation(s)
- David Hülsmeier: Medical Physics, CvO University Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
- Mareike Buhl: Medical Physics, CvO University Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
- Nina Wardenga: Cluster of Excellence Hearing4all, Oldenburg, Germany; Department of Otolaryngology, Hannover Medical School, Hannover, Germany
- Anna Warzybok: Medical Physics, CvO University Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
- Marc René Schädler: Medical Physics, CvO University Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
- Birger Kollmeier: Medical Physics, CvO University Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany

19.
Abstract
INTRODUCTION Despite the substantial benefits of cochlear implantation (CI), there is high variability in speech recognition outcomes, the reasons for which are not fully understood. The group of low-performing CI users in particular is under-researched. Because of the limited perceptual quality of the signal, top-down mechanisms play an important role in decoding the speech signal transmitted by the CI. Thus, differences in cognitive functioning and linguistic skills may explain speech outcomes in these CI subjects. MATERIAL AND METHODS Fifteen post-lingually deaf CI recipients with a maximum speech perception of 30% on the Freiburger monosyllabic test (low performers, LP) underwent visually presented neurocognitive and linguistic test batteries assessing attention, memory, inhibition, working memory, lexical access, phonological input, and automatic naming. Nineteen high performers (HP) with speech perception of more than 70% were included as controls. Pairwise comparisons of the two extreme groups and a discrimination analysis were carried out. RESULTS Significant differences were found between LP and HP in phonological input lexicon and word retrieval (p = 0.0039∗∗). HP were faster in lexical access (p = 0.017∗) and distinguished more reliably between non-existing and existing words (p = 0.0021∗∗). Furthermore, HP outperformed LP in neurocognitive subtests, most prominently in attention (p = 0.003∗∗). LP and HP were primarily discriminated by linguistic performance and to a smaller extent by cognitive functioning (canonical r = 0.68, p = 0.0075). Poor rapid automatic naming of numbers helped to discriminate LP from HP CI users 91.7% of the time. CONCLUSION Severe phonologically based deficits in fast automatic speech processing contribute significantly to distinguishing LP from HP CI users. Cognitive functions might partially help to overcome these difficulties.
Collapse
|
20
|
Errors on a Speech-in-Babble Sentence Recognition Test Reveal Individual Differences in Acoustic Phonetic Perception and Babble Misallocations. Ear Hear 2021; 42:673-690. [PMID: 33928926 DOI: 10.1097/aud.0000000000001020] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
OBJECTIVES The ability to recognize words in connected speech under noisy listening conditions is critical to everyday communication. Many processing levels contribute to the individual listener's ability to recognize words correctly against background speech, and there is a clinical need for measures of individual differences at different levels. Typical listening tests of speech recognition in noise require a list of items to obtain a single threshold score. Measures of diverse abilities could be obtained by mining the various open-set recognition errors made during multi-item tests. This study sought to demonstrate that an error-mining approach using open-set responses from a clinical sentence-in-babble-noise test can be used to characterize abilities beyond the signal-to-noise ratio (SNR) threshold. A stimulus-response phoneme-to-phoneme sequence alignment software system was used to achieve automatic, accurate quantitative error scores. The method was applied to a database of responses from normal-hearing (NH) adults. Relationships between two types of response errors and words-correct scores were evaluated using mixed-models regression. DESIGN Two hundred thirty-three NH adults completed three lists of the Quick Speech in Noise test. Their individual open-set speech recognition responses were automatically phonemically transcribed and submitted to a phoneme-to-phoneme stimulus-response sequence alignment system. The computed alignments were mined for a measure of acoustic phonetic perception, a measure of response text that could not be attributed to the stimulus, and a count of words correct. The mined data were statistically analyzed to determine whether the response errors were significant factors beyond stimulus SNR in accounting for the number of words correct per response from each participant. This study addressed two hypotheses: (1) individuals whose perceptual errors are less severe recognize more words correctly under difficult listening conditions due to babble masking, and (2) listeners who are better able to exclude incorrect speech information, such as from the background babble or from filling in, recognize more stimulus words correctly. RESULTS Statistical analyses showed that acoustic phonetic accuracy and exclusion of the babble background were significant factors, beyond the stimulus sentence SNR, in accounting for the number of words a participant recognized. There was also evidence that poorer acoustic phonetic accuracy could occur along with higher words-correct scores. This paradoxical result came from a subset of listeners who had also performed subjective accuracy judgments. Their results suggested that they recognized more words while also misallocating acoustic cues from the background into the stimulus, without realizing their errors. Because the Quick Speech in Noise test stimuli are locked to their own babble sample, misallocations of whole words from the babble into the responses could be investigated in detail. The high rate of common misallocation errors for some sentences supported the view that the functional stimulus was the combination of the target sentence and its babble. CONCLUSIONS Individual differences among NH listeners arise both in the words accurately identified and in the errors committed during open-set recognition of sentences in babble maskers. Error mining to characterize individual listeners can be done automatically at the levels of acoustic phonetic perception and the misallocation of background babble words into open-set responses. Error mining can increase test information and the efficiency and accuracy of characterizing individual listeners.
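The stimulus-response phoneme alignment at the heart of this error-mining approach can be illustrated with a basic edit-distance alignment that counts substitutions, insertions, and deletions. This is a minimal sketch under assumed phoneme symbols and unit costs, not the authors' alignment software.

```python
# Minimal sketch of stimulus-response phoneme alignment via edit-distance
# dynamic programming; the traceback tallies the three error types.
def align_counts(stimulus, response):
    """Return (substitutions, insertions, deletions) aligning response to stimulus."""
    n, m = len(stimulus), len(response)
    dp = [[0] * (m + 1) for _ in range(n + 1)]   # dp[i][j]: cost for prefixes
    for i in range(1, n + 1):
        dp[i][0] = i
    for j in range(1, m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = dp[i - 1][j - 1] + (stimulus[i - 1] != response[j - 1])
            dele = dp[i - 1][j] + 1   # stimulus phoneme missing from response
            ins = dp[i][j - 1] + 1    # response phoneme not in stimulus
            dp[i][j] = min(sub, dele, ins)
    subs = inss = dels = 0
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (stimulus[i - 1] != response[j - 1]):
            subs += stimulus[i - 1] != response[j - 1]
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            dels += 1
            i -= 1
        else:
            inss += 1
            j -= 1
    return subs, inss, dels

# Example: stimulus /b ae d/ heard as /b ae n d/ -> one insertion
print(align_counts(["b", "ae", "d"], ["b", "ae", "n", "d"]))  # (0, 1, 0)
```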
Collapse
|
21
|
Macdonald KT, Cirino PT, Miciak J, Grills AE. The Role of Reading Anxiety among Struggling Readers in Fourth and Fifth Grade. READING & WRITING QUARTERLY : OVERCOMING LEARNING DIFFICULTIES 2021; 37:382-394. [PMID: 35400986 PMCID: PMC8993164 DOI: 10.1080/10573569.2021.1874580] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Cognitive predictors of reading are well known, but less is understood about the roles of "noncognitive" factors, including emotional variables such as anxiety. While math anxiety has been a focus of study, its analogue in the reading literature is understudied. We assessed struggling fourth and fifth graders (n = 272) on reading anxiety in the context of general anxiety, cognitive predictors (working memory, verbal knowledge), and demographics. Regressions tested for unique contributions to three reading outcomes: word reading accuracy, oral reading fluency, and reading comprehension. Reading anxiety and general anxiety correlated moderately (r = .63) but were differentially related to reading. Reading anxiety predicted comprehension when all other predictors were considered, and predicted oral reading fluency until word reading accuracy was added to the model. Results offer a more nuanced understanding of the nature of reading anxiety, and its implications for struggling readers.
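A hedged sketch of the regression logic described above: testing whether reading anxiety adds unique variance to a reading outcome once general anxiety and a cognitive predictor are in the model. All variable names and simulated data are assumptions for illustration, not the study's data.

```python
# Sketch of an incremental-variance test: base model vs. base model + reading anxiety.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 272
general_anxiety = rng.normal(size=n)
reading_anxiety = 0.63 * general_anxiety + rng.normal(scale=0.78, size=n)  # ~r = .63 by construction
working_memory = rng.normal(size=n)
comprehension = 0.4 * working_memory - 0.3 * reading_anxiety + rng.normal(size=n)

base = sm.add_constant(np.column_stack([general_anxiety, working_memory]))
full = sm.add_constant(np.column_stack([general_anxiety, working_memory, reading_anxiety]))

r2_base = sm.OLS(comprehension, base).fit().rsquared
fit_full = sm.OLS(comprehension, full).fit()
print(f"delta R^2 from adding reading anxiety: {fit_full.rsquared - r2_base:.3f}")
print(f"reading anxiety coefficient p-value: {fit_full.pvalues[-1]:.4f}")
```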
Collapse
Affiliation(s)
- Kelly T. Macdonald
- Department of Psychology, Texas Institute for Measurement, Evaluation, and Statistics (TIMES), University of Houston, Houston, TX, USA
| | - Paul T. Cirino
- Department of Psychology, Texas Institute for Measurement, Evaluation, and Statistics (TIMES), University of Houston, Houston, TX, USA
| | - Jeremy Miciak
- Department of Psychology, Texas Institute for Measurement, Evaluation, and Statistics (TIMES), University of Houston, Houston, TX, USA
| | | |
Collapse
|
22
|
Krethlow G, Fargier R, Laganaro M. Age-Specific Effects of Lexical-Semantic Networks on Word Production. Cogn Sci 2020; 44:e12915. [PMID: 33164246 PMCID: PMC7685158 DOI: 10.1111/cogs.12915] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2019] [Revised: 08/08/2020] [Accepted: 08/27/2020] [Indexed: 12/30/2022]
Abstract
The lexical-semantic organization of the mental lexicon is bound to change across the lifespan. Nevertheless, estimates of the effects of lexical-semantic factors on word processing are usually based on studies enrolling young adult cohorts. The current study aims to investigate to what extent age-specific semantic organization predicts performance in referential word production over the lifespan, from school-age children to older adults. In Study 1, we conducted a free semantic association task with participants from six age groups (ranging from 10 to 80 years old) to compute measures that capture age-specific properties of the mental lexicon across the lifespan. These measures relate to lifespan changes in the Available Richness of the mental lexicon and in the lexical-semantic Network Prototypicality of concrete words. In Study 2, we used the collected data to predict performance in a picture-naming task in a new group of participants from the same age groups as in Study 1. The results show that age-specific semantic Available Richness and Network Prototypicality affect word production speed, whereas the same semantic variables collected only in young adults do not. A richer and more prototypical semantic network in subjects from a given age group is associated with faster word production. The current results indicate that age-specific semantic organization is crucial for predicting lexical-semantic behaviors across the lifespan. These results also provide insight into the lexical-semantic properties of the mental lexicon and into lexical selection in referential tasks.
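A very rough sketch, with invented data, of the general pipeline described above: derive group-level association measures for each cue word from free-association responses, which would then serve as age-specific predictors of picture-naming latencies. The simple proxies computed here (number of distinct associates, agreement on the modal associate) are assumptions and are not the authors' Available Richness or Network Prototypicality definitions.

```python
# Toy group-level measures from free-association responses (invented data).
from collections import Counter

# Associations produced by one hypothetical age group for two cue words
associations = {
    "dog": ["cat", "cat", "bone", "bark", "cat", "leash"],
    "apple": ["fruit", "red", "fruit", "tree", "pie", "fruit"],
}

def distinct_associates(responses):
    """Crude richness proxy: how many different associates the group produced."""
    return len(set(responses))

def modal_agreement(responses):
    """Crude prototypicality proxy: share of responses matching the modal associate."""
    return Counter(responses).most_common(1)[0][1] / len(responses)

richness = {word: distinct_associates(resp) for word, resp in associations.items()}
prototypicality = {word: modal_agreement(resp) for word, resp in associations.items()}
print(richness)          # {'dog': 4, 'apple': 4}
print(prototypicality)   # {'dog': 0.5, 'apple': 0.5}
# These word-level measures would then enter a regression on naming latencies
# collected from a new sample of the same age group.
```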
Collapse
Affiliation(s)
- Giulia Krethlow
- Faculty of Psychology and Educational Sciences, University of Geneva
| | | | - Marina Laganaro
- Faculty of Psychology and Educational Sciences, University of Geneva
| |
Collapse
|
23
|
Simulations with FADE of the effect of impaired hearing on speech recognition performance cast doubt on the role of spectral resolution. Hear Res 2020; 395:107995. [DOI: 10.1016/j.heares.2020.107995] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/28/2020] [Revised: 04/06/2020] [Accepted: 05/12/2020] [Indexed: 11/18/2022]
|
24
|
Destoky F, Bertels J, Niesen M, Wens V, Vander Ghinst M, Leybaert J, Lallier M, Ince RAA, Gross J, De Tiège X, Bourguignon M. Cortical tracking of speech in noise accounts for reading strategies in children. PLoS Biol 2020; 18:e3000840. [PMID: 32845876 PMCID: PMC7478533 DOI: 10.1371/journal.pbio.3000840] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2020] [Revised: 09/08/2020] [Accepted: 08/12/2020] [Indexed: 11/29/2022] Open
Abstract
Humans' propensity to acquire literacy relates to several factors, including the ability to understand speech in noise (SiN). Still, the nature of the relation between reading and SiN perception abilities remains poorly understood. Here, we dissect the interplay between (1) reading abilities, (2) classical behavioral predictors of reading (phonological awareness, phonological memory, and rapid automatized naming), and (3) electrophysiological markers of SiN perception in 99 elementary school children (26 with dyslexia). We demonstrate that, in typical readers, cortical representation of the phrasal content of SiN relates to the degree of development of the lexical (but not sublexical) reading strategy. In contrast, classical behavioral predictors of reading abilities and the ability to benefit from visual speech to represent the syllabic content of SiN account for global reading performance (i.e., speed and accuracy of lexical and sublexical reading). In individuals with dyslexia, we found preserved integration of visual speech information to optimize processing of syntactic information but not to sustain acoustic/phonemic processing. Finally, within children with dyslexia, measures of cortical representation of the phrasal content of SiN were negatively related to reading speed and positively related to the compromise between reading precision and reading speed, potentially owing to compensatory attentional mechanisms. These results clarify the nature of the relation between SiN perception and reading abilities in typical child readers and children with dyslexia and identify novel electrophysiological markers of emergent literacy.
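The notion of "cortical tracking" used above can be illustrated with a toy calculation: coherence between the speech amplitude envelope and a neural signal, evaluated in phrasal- and syllabic-rate bands. The study used MEG and more sophisticated, information-theoretic measures; the simulated signals, sampling rate, and band edges below are assumptions.

```python
# Toy illustration of envelope tracking via spectral coherence (simulated data).
import numpy as np
from scipy.signal import hilbert, coherence

fs = 200.0                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)
t = np.arange(0, 60, 1 / fs)                 # 60 s of simulated data

# Toy "speech": broadband noise modulated at syllabic (~4 Hz) and phrasal (~0.6 Hz) rates
modulation = 1 + 0.5 * np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 0.6 * t)
speech = modulation * rng.normal(size=t.size)
envelope = np.abs(hilbert(speech))

# Toy "cortical" signal that partly follows the envelope, plus unrelated noise
neural = 0.3 * envelope + rng.normal(size=t.size)

f, coh = coherence(envelope, neural, fs=fs, nperseg=int(10 * fs))
phrasal = coh[(f >= 0.2) & (f <= 1.5)].mean()
syllabic = coh[(f >= 2.0) & (f <= 8.0)].mean()
print(f"phrasal-band coherence: {phrasal:.2f}, syllabic-band coherence: {syllabic:.2f}")
```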
Collapse
Affiliation(s)
- Florian Destoky
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Julie Bertels
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Consciousness, Cognition and Computation group, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Maxime Niesen
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Service d'ORL et de chirurgie cervico-faciale, ULB-Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Vincent Wens
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Marc Vander Ghinst
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Jacqueline Leybaert
- Laboratoire Cognition Langage et Développement, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Marie Lallier
- BCBL, Basque Center on Cognition, Brain and Language, San Sebastian, Spain
| | - Robin A. A. Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
| | - Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Institute for Biomagnetism and Biosignal analysis, University of Muenster, Muenster, Germany
| | - Xavier De Tiège
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Mathieu Bourguignon
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Laboratoire Cognition Langage et Développement, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- BCBL, Basque Center on Cognition, Brain and Language, San Sebastian, Spain
| |
Collapse
|
25
|
Impact of Lexical Parameters and Audibility on the Recognition of the Freiburg Monosyllabic Speech Test. Ear Hear 2020; 41:136-142. [DOI: 10.1097/aud.0000000000000737] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
26
|
Fontan L, Cretin-Maitenaz T, Füllgrabe C. Predicting Speech Perception in Older Listeners with Sensorineural Hearing Loss Using Automatic Speech Recognition. Trends Hear 2020; 24:2331216520914769. [PMID: 32233834 PMCID: PMC7119229 DOI: 10.1177/2331216520914769] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2019] [Revised: 02/16/2020] [Accepted: 03/02/2020] [Indexed: 11/17/2022] Open
Abstract
The objective of this study was to provide proof of concept that the speech intelligibility in quiet of unaided older hearing-impaired (OHI) listeners can be predicted by automatic speech recognition (ASR). Twenty-four OHI listeners completed three speech-identification tasks using speech materials of varying linguistic complexity and predictability (i.e., logatoms, words, and sentences). An ASR system was first trained on different speech materials and then used to recognize the same speech stimuli presented to the listeners but processed to mimic some of the perceptual consequences of age-related hearing loss experienced by each of the listeners: the elevation of hearing thresholds (by linear filtering), the loss of frequency selectivity (by spectral smearing), and loudness recruitment (by raising the amplitude envelope to a power). Independently of the size of the lexicon used in the ASR system, strong to very strong correlations were observed between human and machine intelligibility scores. However, large root-mean-square errors (RMSEs) were observed for all conditions. The simulation of frequency selectivity loss had a negative impact on the strength of the correlation and on the RMSE. The highest correlations and smallest RMSEs were found for logatoms, suggesting that the prediction system mostly reflects the functioning of the peripheral part of the auditory system. In the case of sentences, the prediction of human intelligibility was significantly improved by taking cognitive performance into account. This study demonstrates for the first time that ASR, even when trained on intact independent speech material, can be used to estimate trends in the speech intelligibility of OHI listeners.
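One of the hearing-loss simulations named above, loudness recruitment by raising the amplitude envelope to a power, can be sketched as follows. The Hilbert-envelope framing, the exponent, and the RMS renormalization are assumptions for illustration, not the exact processing used in the study.

```python
# Sketch of a loudness-recruitment simulation: expand the envelope, keep the fine structure.
import numpy as np
from scipy.signal import hilbert

def simulate_recruitment(signal, power=2.0):
    """Expand the Hilbert envelope (envelope ** power) and re-impose the fine structure."""
    analytic = hilbert(signal)
    envelope = np.abs(analytic)
    fine_structure = np.cos(np.angle(analytic))
    out = (envelope ** power) * fine_structure
    # Rescale so the overall RMS level is unchanged
    return out * np.sqrt(np.mean(signal ** 2) / np.mean(out ** 2))

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
tone = np.sin(2 * np.pi * 5 * t) * np.sin(2 * np.pi * 440 * t)   # slowly modulated 440 Hz tone
processed = simulate_recruitment(tone, power=2.0)                # envelope contrasts exaggerated
```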
Collapse
Affiliation(s)
| | - Tom Cretin-Maitenaz
- Service d’Oto-Rhino-Laryngologie, d’Oto-Neurologie et d’ORL Pédiatrique, Centre Hospitalier Universitaire de Toulouse, France
- Ecole d’Audioprothèse de Cahors, Université Paul Sabatier Toulouse III, France
| | | |
Collapse
|
27
|
Rönnberg J, Holmer E, Rudner M. Cognitive hearing science and ease of language understanding. Int J Audiol 2019; 58:247-261. [DOI: 10.1080/14992027.2018.1551631] [Citation(s) in RCA: 52] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/30/2023]
Affiliation(s)
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Emil Holmer
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Mary Rudner
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| |
Collapse
|
28
|
|
29
|
Kidd E, Donnelly S, Christiansen MH. Individual Differences in Language Acquisition and Processing. Trends Cogn Sci 2018; 22:154-169. [PMID: 29277256 DOI: 10.1016/j.tics.2017.11.006] [Citation(s) in RCA: 131] [Impact Index Per Article: 18.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2017] [Revised: 11/24/2017] [Accepted: 11/28/2017] [Indexed: 02/06/2023]
|
30
|
Nakeva von Mentzer C, Sundström M, Enqvist K, Hällgren M. Assessing speech perception in Swedish school-aged children: preliminary data on the Listen–Say test. LOGOP PHONIATR VOCO 2017; 43:106-119. [DOI: 10.1080/14015439.2017.1380076] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
| | - Martina Sundström
- Department of Neuroscience, Unit for Speech Language Pathology, Uppsala University, Uppsala, Sweden
| | - Karin Enqvist
- Department of Neuroscience, Unit for Speech Language Pathology, Uppsala University, Uppsala, Sweden
| | - Mathias Hällgren
- Department of Otorhinolaryngology/Section of Audiology, Linköping University Hospital, Linköping, Sweden
| |
Collapse
|
31
|
Rosemann S, Gießing C, Özyurt J, Carroll R, Puschmann S, Thiel CM. The Contribution of Cognitive Factors to Individual Differences in Understanding Noise-Vocoded Speech in Young and Older Adults. Front Hum Neurosci 2017. [PMID: 28638329 PMCID: PMC5461255 DOI: 10.3389/fnhum.2017.00294] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Abstract
Noise-vocoded speech is commonly used to simulate the sensation after cochlear implantation, as it consists of spectrally degraded speech. High individual variability exists in learning to understand both noise-vocoded speech and speech perceived through a cochlear implant (CI). This variability is partly ascribed to differing cognitive abilities such as working memory, verbal skills, or attention. Although the question is highly relevant clinically, no consensus has yet been reached about which cognitive factors predict the intelligibility of noise-vocoded speech in healthy subjects or in patients after cochlear implantation. We aimed to establish a test battery that can be used to predict speech understanding in patients prior to receiving a CI. Young and old healthy listeners completed a noise-vocoded speech test in addition to cognitive tests tapping verbal memory, working memory, lexicon and retrieval skills, as well as cognitive flexibility and attention. Partial-least-squares analysis revealed that six variables were important for significantly predicting vocoded-speech performance: the ability to perceive visually degraded speech tested by the Text Reception Threshold, vocabulary size assessed with the Multiple Choice Word Test, working memory gauged with the Operation Span Test, verbal learning and recall from the Verbal Learning and Retention Test, and task-switching abilities tested by the Comprehensive Trail-Making Test. Thus, these cognitive abilities explain individual differences in noise-vocoded speech understanding and should be considered when aiming to predict hearing-aid outcome.
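A hedged sketch of the analysis approach named above: partial-least-squares regression relating several cognitive predictors to noise-vocoded speech scores. The predictor columns loosely mirror the tests listed in the abstract, but the simulated data, coefficients, and model settings are invented for illustration.

```python
# Sketch of PLS regression from cognitive predictors to vocoded-speech scores (invented data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
n = 50
# Columns: TRT, vocabulary, operation span, verbal learning, verbal recall, trail-making
X = rng.normal(size=(n, 6))
y = X @ np.array([0.4, 0.3, 0.3, 0.2, 0.2, 0.2]) + rng.normal(scale=0.5, size=n)

pls = PLSRegression(n_components=2).fit(X, y)
print("variance in y explained (R^2):", round(pls.score(X, y), 2))
print("predictor weights on first component:", np.round(pls.x_weights_[:, 0], 2))
```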
Collapse
Affiliation(s)
- Stephanie Rosemann
- Biological Psychology, Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| | - Carsten Gießing
- Biological Psychology, Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| | - Jale Özyurt
- Biological Psychology, Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| | - Rebecca Carroll
- Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Institute of Dutch Studies, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| | - Sebastian Puschmann
- Biological Psychology, Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| | - Christiane M Thiel
- Biological Psychology, Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| |
Collapse
|
32
|
Dryden A, Allen HA, Henshaw H, Heinrich A. The Association Between Cognitive Performance and Speech-in-Noise Perception for Adult Listeners: A Systematic Literature Review and Meta-Analysis. Trends Hear 2017; 21:2331216517744675. [PMID: 29237334 PMCID: PMC5734454 DOI: 10.1177/2331216517744675] [Citation(s) in RCA: 107] [Impact Index Per Article: 13.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2017] [Revised: 10/25/2017] [Accepted: 10/31/2017] [Indexed: 11/16/2022] Open
Abstract
Published studies assessing the association between cognitive performance and speech-in-noise (SiN) perception examine different aspects of each, test different listeners, and often report quite variable associations. By examining the published evidence base using a systematic approach, we aim to identify robust patterns across studies and highlight any remaining gaps in knowledge. We limit our assessment to adult unaided listeners with audiometric profiles ranging from normal hearing to moderate hearing loss. A total of 253 articles were independently assessed by two researchers, with 25 meeting the criteria for inclusion. Included articles assessed cognitive measures of attention, memory, executive function, IQ, and processing speed. SiN measures varied by target (phonemes or syllables, words, and sentences) and masker type (unmodulated noise, modulated noise, >2-talker babble, and ≤2-talker babble). The overall association between cognitive performance and SiN perception was r = .31. For component cognitive domains, the association with (pooled) SiN perception was as follows: processing speed (r = .39), inhibitory control (r = .34), working memory (r = .28), episodic memory (r = .26), and crystallized IQ (r = .18). Similar associations were shown for the different speech target and masker types. This review suggests a general association of r ≈ .3 between cognitive performance and speech perception, although some variability in association appeared to exist depending on cognitive domain and SiN target or masker assessed. Where assessed, degree of unaided hearing loss did not play a major moderating role. We identify a number of cognitive performance and SiN perception combinations that have not been tested and whose future investigation would enable further fine-grained analyses of these relationships.
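A minimal sketch of how correlations from separate studies are commonly pooled in a meta-analysis of this kind: Fisher r-to-z transformation, an inverse-variance (sample-size-based) weighted mean, and a back-transformation to r. The effect sizes and sample sizes below are invented, and this is not necessarily the exact pooling method used in the review.

```python
# Pool study-level correlations via the Fisher r-to-z transform.
import numpy as np

def pooled_correlation(r_values, n_values):
    r = np.asarray(r_values, dtype=float)
    n = np.asarray(n_values, dtype=float)
    z = np.arctanh(r)            # Fisher r-to-z
    weights = n - 3              # inverse-variance weights for Fisher z
    z_bar = np.sum(weights * z) / np.sum(weights)
    return float(np.tanh(z_bar)) # back-transform to r

print(round(pooled_correlation([0.25, 0.35, 0.30], [40, 60, 120]), 2))  # ~0.31 with these invented inputs
```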
Collapse
Affiliation(s)
- Adam Dryden
- Medical Research Council Institute of Hearing Research, School of Medicine, University of Nottingham, UK
- School of Psychology, University of Nottingham, UK
| | | | - Helen Henshaw
- National Institute for Health Research Nottingham Biomedical Research Centre, School of Medicine, University of Nottingham, UK
- Otology and Hearing Group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, UK
| | - Antje Heinrich
- Medical Research Council Institute of Hearing Research, School of Medicine, University of Nottingham, UK
| |
Collapse
|
33
|
Thiel CM, Özyurt J, Nogueira W, Puschmann S. Effects of Age on Long Term Memory for Degraded Speech. Front Hum Neurosci 2016; 10:473. [PMID: 27708570 PMCID: PMC5030220 DOI: 10.3389/fnhum.2016.00473] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2016] [Accepted: 09/07/2016] [Indexed: 12/15/2022] Open
Abstract
Prior research suggests that acoustical degradation impacts encoding of items into memory, especially in elderly subjects. We here aimed to investigate whether acoustically degraded items that are initially encoded into memory are more prone to forgetting as a function of age. Young and old participants were tested with a vocoded and unvocoded serial list learning task involving immediate and delayed free recall. We found that degraded auditory input increased forgetting of previously encoded items, especially in older participants. We further found that working memory capacity predicted forgetting of degraded information in young participants. In old participants, verbal IQ was the most important predictor for forgetting acoustically degraded information. Our data provide evidence that acoustically degraded information, even if encoded, is especially vulnerable to forgetting in old age.
Collapse
Affiliation(s)
- Christiane M Thiel
- Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Research Center Neurosensory Science, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| | - Jale Özyurt
- Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| | - Waldo Nogueira
- Cluster of Excellence "Hearing4all", Department of Otolaryngology, Medical University Hannover, Hannover, Germany
| | - Sebastian Puschmann
- Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| |
Collapse
|
34
|
Füllgrabe C, Rosen S. On The (Un)importance of Working Memory in Speech-in-Noise Processing for Listeners with Normal Hearing Thresholds. Front Psychol 2016; 7:1268. [PMID: 27625615 PMCID: PMC5003928 DOI: 10.3389/fpsyg.2016.01268] [Citation(s) in RCA: 119] [Impact Index Per Article: 13.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2016] [Accepted: 08/09/2016] [Indexed: 12/29/2022] Open
Abstract
With the advent of cognitive hearing science, increased attention has been given to individual differences in cognitive functioning and their explanatory power in accounting for inter-listener variability in the processing of speech in noise (SiN). The psychological construct that has received much interest in recent years is working memory. Empirical evidence indeed confirms the association between WM capacity (WMC) and SiN identification in older hearing-impaired listeners. However, some theoretical models propose that variations in WMC are an important predictor for variations in speech processing abilities in adverse perceptual conditions for all listeners, and this notion has become widely accepted within the field. To assess whether WMC also plays a role when listeners without hearing loss process speech in adverse listening conditions, we surveyed published and unpublished studies in which the Reading-Span test (a widely used measure of WMC) was administered in conjunction with a measure of SiN identification, using sentence material routinely used in audiological and hearing research. A meta-analysis revealed that, for young listeners with audiometrically normal hearing, individual variations in WMC are estimated to account for, on average, less than 2% of the variance in SiN identification scores. This result cautions against the (intuitively appealing) assumption that individual variations in WMC are predictive of SiN identification independently of the age and hearing status of the listener.
Collapse
Affiliation(s)
- Christian Füllgrabe
- Medical Research Council Institute of Hearing Research, The University of Nottingham, Nottingham, UK
| | - Stuart Rosen
- Speech, Hearing and Phonetic Sciences, University College London, London, UK
| |
Collapse
|