1. Rasouli F, Afshari PJ, Bakhshi E. Auditory modulation processing in children with mild to moderate hearing loss. Int J Pediatr Otorhinolaryngol 2025;192:112330. [PMID: 40179588] [DOI: 10.1016/j.ijporl.2025.112330]
Abstract
BACKGROUND AND OBJECTIVE Children with hearing loss often have difficulty understanding speech in noisy environments such as classrooms, leading to educational and communication challenges. Detecting and discriminating auditory spectro-temporal modulations is essential for speech comprehension. Therefore, in this study, we investigated how children with mild to moderate hearing loss (MMHL) process these auditory modulations and how that processing relates to speech perception in noise, comparing their performance to that of children with normal hearing. METHODS This cross-sectional study selected 31 children with mild to moderate sensorineural hearing loss (SNHL) and 34 children with normal hearing (NH), aged 8 to 12. After obtaining consent, participants underwent the Spectral Modulation Ripple Test (SMRT), Amplitude Modulation Detection Tests (AMDTs) at 10, 50, and 200 Hz, and Speech Perception in Noise (SPiN) assessments using the Words-in-Noise (WIN) and BKB-SIN tests, all conducted monaurally. Results were compared between the two groups, evaluating the effect of hearing loss severity, the correlations among the tests, and score comparisons between the two ears within each group. RESULTS Significant differences were observed between the MMHL and NH groups on the SMRT, AMDTs, and SPiN tests (p < 0.05), with the NH group scoring better. However, no significant differences were observed between mild and moderate hearing loss (p > 0.05). Neither the SMRT nor the AMDTs correlated with the WIN test (p > 0.05). Notably, significant correlations were found between the SMRT and BKB-SIN tests in both groups, and sporadic correlations were also identified between AMDTs at higher rates and BKB-SIN results in both groups (p < 0.05). Scores from the two ears showed no significant differences on any test (p > 0.05). CONCLUSION Children with mild to moderate SNHL have a reduced ability to use spectral and temporal modulation information, making it difficult for them to understand speech in noisy environments. Nonverbal spectral and temporal modulation tests require minimal cognitive effort and are valuable for evaluating perceptual disorders and developing auditory rehabilitation programs for these children.
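The AMDT probes referenced above are, generically, noise carriers whose amplitude is sinusoidally modulated at a fixed rate. As an illustration only (the study's exact stimulus parameters are not specified here; carrier type, depth, and duration below are assumptions), a minimal sketch of such a probe:

```python
# Minimal sketch (not the study's implementation): a sinusoidally
# amplitude-modulated noise probe of the kind used in amplitude
# modulation detection tests (AMDTs). Carrier, depth, and duration
# are illustrative assumptions.
import numpy as np

def am_noise(duration_s=1.0, fs=44100, mod_rate_hz=10.0, mod_depth=1.0, seed=0):
    """Gaussian-noise carrier with sinusoidal amplitude modulation."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.standard_normal(t.size)
    envelope = 1.0 + mod_depth * np.sin(2.0 * np.pi * mod_rate_hz * t)
    x = envelope * carrier
    return x / np.max(np.abs(x))  # normalize to avoid clipping

# Example: probes at the three modulation rates reported in the study
stimuli = {rate: am_noise(mod_rate_hz=rate) for rate in (10, 50, 200)}
```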
Affiliation(s)
- Ferdous Rasouli
- Department of Audiology, School of Rehabilitation, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Parisa Jalilzadeh Afshari
- Department of Audiology, School of Rehabilitation, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Enayatollah Bakhshi
- Department of Biostatistics, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
2. Ponsot E, Devolder P, Dhooge I, Verhulst S. Age-Related Decline in Neural Phase-Locking to Envelope and Temporal Fine Structure Revealed by Frequency Following Responses: A Potential Signature of Cochlear Synaptopathy Impairing Speech Intelligibility. J Assoc Res Otolaryngol 2025. [PMID: 40259175] [DOI: 10.1007/s10162-025-00985-2]
Abstract
PURPOSE Assessing the contribution of cochlear synaptopathy (CS) to the variability in speech-in-noise intelligibility among individuals remains a challenge. While several studies have proposed biomarkers for CS based on neural phase-locking to the temporal envelope (ENV), fewer have investigated how CS affects the coding of temporal fine structure (TFS), despite its crucial role in speech-in-noise perception. In this study, we specifically examined whether TFS-based markers of CS could be derived from electrophysiological responses and psychophysical detection thresholds of spectral modulation (SM) in a complex tone, which serves as a parametric model of speech. METHODS We employed an integrated approach, combining psychophysical testing with frequency-following response (FFR) measurements in three groups of participants: young normal-hearing (n = 15, 12 females, age 21 ± 1); older normal-hearing (n = 16, 11 females, age 47 ± 6); and older hearing-impaired (n = 14, 8 females, age 52 ± 6). We expanded on previous work by assessing phase-locking to both ENV, using a 4-kHz rectangular amplitude-modulated (RAM) tone, and TFS, using a low-frequency (< 1.5 kHz) SM complex tone. RESULTS Overall, FFR results showed significant reductions in neural phase-locking to both ENV and TFS components with age and hearing loss. Specifically, the strength of TFS-related FFRs, particularly the component corresponding to the harmonic closest to the peak of the spectral envelope (~ 500 Hz), was negatively correlated with age, even after adjusting for audiometric thresholds. This TFS marker also correlated with ENV-related FFRs derived from the RAM tone, suggesting a shared decline in phase-locking capacity across low and high cochlear frequencies. Computational simulations of the auditory periphery indicated that the observed FFR strength reduction with age is consistent with approximately 50% loss of auditory nerve fibers, aligning with histopathological data. However, the TFS-based FFR marker did not account for variability in speech intelligibility observed in the same participants. Psychophysical measurements showed no age-related effects and were unrelated to the TFS-based FFR marker, highlighting the need for further psychophysical research to establish a behavioral counterpart. CONCLUSION Altogether, our results demonstrate that FFRs to vowel-like stimuli can serve as a complementary electrophysiological marker for assessing neural coding fidelity to stimulus TFS. This approach could provide a valuable tool for better understanding the impact of CS on an important coding dimension for speech-in-noise perception.
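As a rough illustration of how an FFR marker of this kind can be quantified (an assumption about the general approach, not the authors' pipeline), the sketch below estimates response strength as the spectral magnitude of an averaged FFR waveform at a frequency of interest, e.g. a harmonic near the spectral-envelope peak (~500 Hz) for the TFS marker:

```python
# Minimal sketch (a generic approach, not the paper's analysis): estimate FFR
# strength as the spectral magnitude of the averaged response at a target
# frequency (envelope rate for ENV-FFR, or a low-frequency harmonic near the
# spectral-envelope peak for TFS-FFR).
import numpy as np

def ffr_strength(avg_response, fs, target_hz, n_fft=None):
    """Magnitude of the averaged FFR at target_hz (arbitrary units)."""
    n_fft = n_fft or len(avg_response)
    spec = np.abs(np.fft.rfft(avg_response, n=n_fft)) / len(avg_response)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    return spec[np.argmin(np.abs(freqs - target_hz))]

# Example with synthetic data: a 500-Hz component buried in noise
fs = 16000
t = np.arange(int(0.3 * fs)) / fs
resp = 0.1 * np.sin(2 * np.pi * 500 * t) + np.random.default_rng(0).standard_normal(t.size)
print(ffr_strength(resp, fs, 500.0))
```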
Affiliation(s)
- Emmanuel Ponsot
- STMS Lab (CNRS/Ircam/Sorbonne Université, Ministère de La Culture), 1 Place Igor Stravinsky, 75004, Paris, France.
- Hearing Technology @ WAVES, Department of Information Technology, Ghent University, Technologiepark 126, 9052, Zwijnaarde, Belgium.
- Pauline Devolder
- Hearing Technology @ WAVES, Department of Information Technology, Ghent University, Technologiepark 126, 9052, Zwijnaarde, Belgium
- Ingeborg Dhooge
- Department of Head and Skin, Ghent University, Ghent, Belgium
- Department of Ear, Nose and Throat, Ghent University Hospital, Ghent, Belgium
- Sarah Verhulst
- Hearing Technology @ WAVES, Department of Information Technology, Ghent University, Technologiepark 126, 9052, Zwijnaarde, Belgium
3. Brungart DS, Sherlock LP, Kuchinsky SE, Perry TT, Bieber RE, Grant KW, Bernstein JGW. Assessment methods for determining small changes in hearing performance over time. J Acoust Soc Am 2022;151:3866. [PMID: 35778214] [DOI: 10.1121/10.0011509]
Abstract
Although the behavioral pure-tone threshold audiogram is considered the gold standard for quantifying hearing loss, assessment of speech understanding, especially in noise, is more relevant to quality of life but is only partly related to the audiogram. Metrics of speech understanding in noise are therefore an attractive target for assessing hearing over time. However, speech-in-noise assessments have more potential sources of variability than pure-tone threshold measures, making it a challenge to obtain results reliable enough to detect small changes in performance. This review examines the benefits and limitations of speech-understanding metrics and their application to longitudinal hearing assessment, and identifies potential sources of variability, including learning effects, differences in item difficulty, and between- and within-individual variations in effort and motivation. We conclude by recommending the integration of non-speech auditory tests, which provide information about aspects of auditory health that have reduced variability and fewer central influences than speech tests, in parallel with the traditional audiogram and speech-based assessments.
Affiliation(s)
- Douglas S Brungart
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- LaGuinn P Sherlock
- Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, E1570 8977 Sibert Road, Aberdeen Proving Ground, Maryland 21010, USA
- Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Trevor T Perry
- Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, E1570 8977 Sibert Road, Aberdeen Proving Ground, Maryland 21010, USA
- Rebecca E Bieber
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Ken W Grant
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Joshua G W Bernstein
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
4. Aronoff JM, Duitsman L, Matusik DK, Hussain S, Lippmann E. Examining the Relationship Between Speech Recognition and a Spectral-Temporal Test With a Mixed Group of Hearing Aid and Cochlear Implant Users. J Speech Lang Hear Res 2021;64:1073-1080. [PMID: 33719538] [DOI: 10.1044/2020_jslhr-20-00352]
Abstract
Purpose Audiology clinics need a nonlinguistic test for assessing speech scores in patients who use hearing aids or cochlear implants. One such test, the Spectral-Temporally Modulated Ripple Test Lite for computeRless Measurement (SLRM), has been developed for clinical use, but it, like the related Spectral-Temporally Modulated Ripple Test, has primarily been assessed with cochlear implant users. The main goal of this study was to examine the relationship between SLRM and the Arizona Biomedical Institute Sentence Test (AzBio) in a mixed group of hearing aid and cochlear implant users. Method Adult hearing aid users and cochlear implant users were tested with SLRM, AzBio in quiet, and AzBio in multitalker babble at a +8 dB signal-to-noise ratio. Results SLRM scores correlated with AzBio recognition scores both in quiet and in noise. Conclusions The results indicate a significant relationship between SLRM and AzBio scores when testing a mixed group of cochlear implant and hearing aid users, suggesting that SLRM may be a useful nonlinguistic test for individuals with a variety of hearing devices.
Affiliation(s)
- Justin M Aronoff
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign
- Department of Otolaryngology, College of Medicine, University of Illinois at Chicago
- Leah Duitsman
- Department of Otolaryngology, College of Medicine, University of Illinois at Chicago
- Deanna K Matusik
- Department of Otolaryngology, College of Medicine, University of Illinois at Chicago
- Senad Hussain
- Department of Medicine, College of Medicine, University of Illinois at Chicago
- Elise Lippmann
- Department of Otolaryngology, College of Medicine, University of Illinois at Chicago
- Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Massachusetts Eye and Ear, Boston
5. Ponsot E, Varnet L, Wallaert N, Daoud E, Shamma SA, Lorenzi C, Neri P. Mechanisms of Spectrotemporal Modulation Detection for Normal- and Hearing-Impaired Listeners. Trends Hear 2021;25:2331216520978029. [PMID: 33620023] [PMCID: PMC7905488] [DOI: 10.1177/2331216520978029]
Abstract
Spectrotemporal modulations (STM) are essential features of speech signals that make them intelligible. While their encoding has been widely investigated in neurophysiology, we still lack a full understanding of how STMs are processed at the behavioral level and how cochlear hearing loss impacts this processing. Here, we introduce a novel methodological framework based on psychophysical reverse correlation deployed in the modulation space to characterize the mechanisms underlying STM detection in noise. We derive perceptual filters for young normal-hearing and older hearing-impaired individuals performing a detection task of an elementary target STM (a given product of temporal and spectral modulations) embedded in other masking STMs. Analyzed with computational tools, our data show that both groups rely on a comparable linear (band-pass)-nonlinear processing cascade, which can be well accounted for by a temporal modulation filter bank model combined with cross-correlation against the target representation. Our results also suggest that the modulation mistuning observed for the hearing-impaired group results primarily from broader cochlear filters. Yet, we find idiosyncratic behaviors that cannot be captured by cochlear tuning alone, highlighting the need to consider variability originating from additional mechanisms. Overall, this integrated experimental-computational approach offers a principled way to assess suprathreshold processing distortions in each individual and could thus be used to further investigate interindividual differences in speech intelligibility.
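For readers unfamiliar with STM stimuli, the sketch below builds a generic "moving ripple" from log-spaced tone carriers whose levels are sinusoidally modulated jointly in time and log frequency; all parameter values are illustrative assumptions, not the stimuli used in the study:

```python
# Minimal sketch (illustrative assumptions, not the paper's stimuli): an
# elementary spectrotemporally modulated target, i.e., a single product of
# temporal modulation (ripple_rate, Hz) and spectral modulation
# (ripple_density, cycles/octave), imposed on log-spaced tone carriers.
import numpy as np

def moving_ripple(duration_s=0.5, fs=44100, f_lo=250.0, f_hi=8000.0,
                  n_carriers=64, ripple_density=2.0, ripple_rate=4.0,
                  mod_depth_db=20.0, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    # carriers spaced uniformly on a log-frequency (octave) axis
    octaves = np.linspace(0.0, np.log2(f_hi / f_lo), n_carriers)
    freqs = f_lo * 2.0 ** octaves
    phases = rng.uniform(0, 2 * np.pi, n_carriers)
    x = np.zeros_like(t)
    for f, o, ph in zip(freqs, octaves, phases):
        # sinusoidal level modulation, joint in time and log frequency
        level_db = 0.5 * mod_depth_db * np.sin(
            2 * np.pi * (ripple_rate * t + ripple_density * o))
        x += 10 ** (level_db / 20.0) * np.sin(2 * np.pi * f * t + ph)
    return x / np.max(np.abs(x))
```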
Affiliation(s)
- Emmanuel Ponsot
- Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, Université PSL, CNRS, Paris, France
- Hearing Technology @ WAVES, Department of Information Technology, Ghent University, Ghent, Belgium
- Léo Varnet
- Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, Université PSL, CNRS, Paris, France
- Nicolas Wallaert
- Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, Université PSL, CNRS, Paris, France
- Elza Daoud
- Aix-Marseille Université, UMR CNRS 7260, Laboratoire Neurosciences Intégratives et Adaptatives, Centre Saint-Charles, Marseille, France
- Shihab A. Shamma
- Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, Université PSL, CNRS, Paris, France
- Christian Lorenzi
- Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, Université PSL, CNRS, Paris, France
- Peter Neri
- Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, Université PSL, CNRS, Paris, France
6. Jorgensen EJ, McCreery RW, Kirby BJ, Brennan M. Effect of level on spectral-ripple detection threshold for listeners with normal hearing and hearing loss. J Acoust Soc Am 2020;148:908. [PMID: 32873021] [PMCID: PMC7443170] [DOI: 10.1121/10.0001706]
Abstract
This study investigated the effect of presentation level on spectral-ripple detection for listeners with and without sensorineural hearing loss (SNHL). Participants were 25 listeners with normal hearing and 25 listeners with SNHL. Spectral-ripple detection thresholds (SRDTs) were estimated at three spectral densities (0.5, 2, and 4 ripples per octave, RPO) and three to four sensation levels (10, 20, 40, and, when possible, 60 dB SL). Each participant was also tested at 90 dB sound pressure level (SPL). Results indicate that level affected SRDTs; however, the effect of level depended on ripple density and hearing status. For all listeners and all RPO conditions, SRDTs improved from 10 to 40 dB SL. In the 2- and 4-RPO conditions, SRDTs became poorer from the 40 dB SL to the 90 dB SPL condition. The results suggest that audibility likely controls spectral-ripple detection at low sensation levels for all ripple densities, whereas spectral resolution likely controls spectral-ripple detection at high sensation levels and ripple densities. For optimal ripple detection across all listeners, clinicians and researchers should present stimuli at 40 dB SL. To avoid absolute-level confounds, a presentation level of 80 dB SPL can also be used.
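The level conventions in this abstract reduce to one relation: sensation level is the presentation level expressed relative to the individual listener's detection threshold, whereas dB SPL is an absolute physical level. A trivial sketch with hypothetical numbers:

```python
# Minimal sketch of the level conventions above (illustrative only):
# dB SL = presentation level (dB SPL) minus the listener's detection
# threshold (dB SPL); dB SPL is the absolute level.
def sensation_level(presentation_db_spl: float, threshold_db_spl: float) -> float:
    """Sensation level of a stimulus for a given listener, in dB SL."""
    return presentation_db_spl - threshold_db_spl

# Example: a listener with a 35 dB SPL threshold hears a 75 dB SPL stimulus at 40 dB SL.
print(sensation_level(75.0, 35.0))  # -> 40.0
```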
Affiliation(s)
- Erik J Jorgensen
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa 52242, USA
- Ryan W McCreery
- Boys Town National Research Hospital, Omaha, Nebraska 68124, USA
- Benjamin J Kirby
- Department of Audiology and Speech-Language Pathology, University of North Texas, Denton, Texas 76203, USA
- Marc Brennan
- Department of Special Education and Communication Disorders, University of Nebraska-Lincoln, Lincoln, Nebraska 68588, USA
7. Resnick JM, Horn DL, Noble AR, Rubinstein JT. Spectral aliasing in an acoustic spectral ripple discrimination task. J Acoust Soc Am 2020;147:1054. [PMID: 32113324] [PMCID: PMC7112708] [DOI: 10.1121/10.0000608]
Abstract
Spectral ripple discrimination tasks are commonly used to probe spectral resolution in cochlear implant (CI), normal-hearing (NH), and hearing-impaired individuals. These tasks have also been used to examine the development of spectral resolution in NH and CI children. In this work, stimulus sine-wave carrier density was identified as a critical variable in an example spectral ripple-based task, the Spectro-Temporally Modulated Ripple (SMR) Test, and it was demonstrated that previous applications in NH listeners sometimes used carrier densities insufficient to represent the relevant ripple densities. Insufficient carrier densities produced spectral under-sampling that both eliminated ripple cues at high ripple densities and introduced unintended structured interference between the carriers and the intended ripples at particular ripple densities. This effect produced non-monotonic psychometric functions for NH listeners that would cause systematic underestimation of thresholds with adaptive techniques. Studies of spectral ripple detection in CI users probe a density regime below where this source of aliasing occurs, because CI signal processing limits the representation of dense ripples. While these analyses and experiments focused on the SMR Test, any task in which discrete pure-tone carriers spanning frequency space are modulated to approximate a desired pattern must be designed with consideration of the described spectral aliasing effect.
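The under-sampling argument is essentially a Nyquist condition on the log-frequency axis: carriers spaced uniformly in octaves can only represent ripple densities up to half the carrier density per octave, and denser ripples fold back to an unintended lower density. A minimal sketch of this reasoning (an illustration of the sampling argument, not the SMR Test code):

```python
# Minimal sketch (illustration only): carriers spaced uniformly in log
# frequency sample the spectral-envelope ripple, so a Nyquist-like limit
# applies per octave, and denser ripples alias to lower densities.
def max_representable_ripple_density(carriers_per_octave: float) -> float:
    """Highest ripple density (ripples/octave) the carrier grid can represent."""
    return carriers_per_octave / 2.0

def aliased_ripple_density(ripple_density: float, carriers_per_octave: float) -> float:
    """Apparent ripple density after spectral under-sampling (folds about
    the per-octave Nyquist density, analogous to temporal aliasing)."""
    nyquist = carriers_per_octave / 2.0
    folded = ripple_density % carriers_per_octave
    return folded if folded <= nyquist else carriers_per_octave - folded

# Example: with 8 carriers/octave, a 6-ripples/octave target folds to
# 2 ripples/octave, producing unintended low-density structure.
print(aliased_ripple_density(6.0, 8.0))  # -> 2.0
```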
Affiliation(s)
- Jesse M Resnick
- Department of Otolaryngology-Head and Neck Surgery, University of Washington, Box 357923, Seattle, Washington 98195-7923, USA
- David L Horn
- Department of Otolaryngology-Head and Neck Surgery, University of Washington, Box 357923, Seattle, Washington 98195-7923, USA
- Anisha R Noble
- Department of Otolaryngology-Head and Neck Surgery, University of Washington, Box 357923, Seattle, Washington 98195-7923, USA
- Jay T Rubinstein
- Department of Otolaryngology-Head and Neck Surgery, University of Washington, Box 357923, Seattle, Washington 98195-7923, USA
8. Souza P, Gallun F, Wright R. Contributions to Speech-Cue Weighting in Older Adults With Impaired Hearing. J Speech Lang Hear Res 2020;63:334-344. [PMID: 31940258] [PMCID: PMC7213489] [DOI: 10.1044/2019_jslhr-19-00176]
Abstract
Purpose In a previous paper (Souza, Wright, Blackburn, Tatman, & Gallun, 2015), we explored the extent to which individuals with sensorineural hearing loss used different cues for speech identification when multiple cues were available. Specifically, some listeners placed the greatest weight on spectral cues (spectral shape and/or formant transition), whereas others relied on the temporal envelope. In the current study, we aimed to determine whether listeners who relied on temporal envelope did so because they were unable to discriminate the formant information at a level sufficient to use it for identification and the extent to which a brief discrimination test could predict cue weighting patterns. Method Participants were 30 older adults with bilateral sensorineural hearing loss. The first task was to label synthetic speech tokens based on the combined percept of temporal envelope rise time and formant transitions. An individual profile was derived from linear discriminant analysis of the identification responses. The second task was to discriminate differences in either temporal envelope rise time or formant transitions. The third task was to discriminate spectrotemporal modulation in a nonspeech stimulus. Results All listeners were able to discriminate temporal envelope rise time at levels sufficient for the identification task. There was wide variability in the ability to discriminate formant transitions, and that ability predicted approximately one third of the variance in the identification task. There was no relationship between performance in the identification task and either amount of hearing loss or ability to discriminate nonspeech spectrotemporal modulation. Conclusions The data suggest that listeners who rely to a greater extent on temporal cues lack the ability to discriminate fine-grained spectral information. The fact that the amount of hearing loss was not associated with the cue profile underscores the need to characterize individual abilities in a more nuanced way than can be captured by the pure-tone audiogram.
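As an illustration of how a cue-weighting profile can be derived from identification responses with linear discriminant analysis (a hypothetical example on simulated data, not the study's analysis or its stimulus coding):

```python
# Minimal sketch (hypothetical data, not the study's analysis): derive relative
# cue weights from a linear discriminant fit of identification responses on
# two normalized cue dimensions (temporal-envelope rise time and formant
# transition).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 200
rise_time = rng.uniform(0, 1, n)   # normalized temporal-envelope cue
formant = rng.uniform(0, 1, n)     # normalized spectral cue
# hypothetical listener who weights the spectral cue more heavily
labels = (0.3 * rise_time + 0.7 * formant + rng.normal(0, 0.1, n)) > 0.5

lda = LinearDiscriminantAnalysis().fit(np.column_stack([rise_time, formant]), labels)
w = np.abs(lda.coef_[0])
print(w / w.sum())  # approximate relative weights on [rise time, formant]
```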
Affiliation(s)
- Pamela Souza
- Department of Communication Sciences and Disorders and Knowles Hearing Center, Northwestern University, Evanston, IL
- Frederick Gallun
- Rehabilitation Research and Development National Center for Rehabilitative Auditory Research, VA Portland Health Care System and Oregon Health and Sciences University
- Richard Wright
- Department of Linguistics, University of Washington, Seattle