1. Kuokkanen PT, Kraemer I, Köppl C, Carr CE, Kempter R. Single Neuron Contributions to the Auditory Brainstem EEG. J Neurosci 2025; 45:e1139242025. PMID: 40262897; PMCID: PMC12121712; DOI: 10.1523/jneurosci.1139-24.2025.
Abstract
The auditory brainstem response (ABR) is an acoustically evoked EEG potential that is an important diagnostic tool for hearing loss, especially in newborns. The ABR originates from the response sequence of auditory nerve and brainstem nuclei, and a click-evoked ABR typically shows three positive peaks ("waves") within the first six milliseconds. However, an assignment of the waves of the ABR to specific sources is difficult, and a quantification of contributions to the ABR waves is not available. Here, we exploit the large size and physical separation of the barn owl first-order cochlear nucleus magnocellularis (NM) to estimate single-cell contributions to the ABR. We simultaneously recorded NM neurons' spikes and the EEG in owls of both sexes, and found that ≳ 5,000 spontaneous single-cell spikes are necessary to isolate a significant spike-triggered average (STA) response at the EEG electrode. An average single-neuron contribution to the ABR was predicted by convolving the STA with the cell's peri-stimulus time histogram. Amplitudes of predicted contributions of single NM cells typically reached 32.9 ± 1.1 nV (mean ± SE, range: 2.5-162.7 nV), or 0.07 ± 0.02% (median ± SE; range from 0.01% to 1%) of the ABR amplitude. The time of the predicted peak coincided best with the peak of the ABR wave II, independent of the click sound level. Our results suggest that individual neurons' contributions to an EEG can vary widely, and that wave II of the ABR is shaped by NM units.
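The analysis pipeline summarized above (spike-triggered averaging of the EEG around each spike, then convolution of the STA with the peri-stimulus time histogram) can be sketched in a few lines. The sampling rate, window length, and toy data below are illustrative assumptions, not values or data from the study.

```python
import numpy as np

def spike_triggered_average(eeg, spike_idx, win_s, fs):
    """Average EEG snippets centred on each spike (indices into eeg)."""
    half = int(win_s * fs / 2)
    snips = [eeg[i - half:i + half] for i in spike_idx
             if half <= i < len(eeg) - half]
    return np.mean(snips, axis=0)                      # STA, length 2*half

def predicted_contribution(sta, psth_rate, fs):
    """Convolve the STA (volts per spike) with the PSTH (spikes/s) to
    predict one neuron's average contribution to the evoked EEG."""
    return np.convolve(psth_rate, sta)[:len(psth_rate)] / fs

# Toy usage with made-up numbers (not data from the study).
fs = 50_000                                            # Hz
eeg = 1e-6 * np.random.randn(fs * 60)                  # 60 s of 'EEG' in volts
spikes = np.sort(np.random.randint(fs, fs * 59, 6000)) # >5000 spike times
sta = spike_triggered_average(eeg, spikes, win_s=0.01, fs=fs)
psth = np.zeros(500); psth[:50] = 400.0                # 10-ms PSTH, onset burst
contribution_v = predicted_contribution(sta, psth, fs)
```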
Affiliation(s)
- Paula T Kuokkanen
- Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin 10115, Germany
- Ira Kraemer
- Department of Biology, University of Maryland College Park, College Park, MD 20742
- Christine Köppl
- Department of Neuroscience, School of Medicine and Health Sciences, Research Center for Neurosensory Sciences and Cluster of Excellence "Hearing4all" Carl von Ossietzky Universität, Oldenburg 26129, Germany
- Catherine E Carr
- Department of Biology, University of Maryland College Park, College Park, MD 20742
- Richard Kempter
- Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin 10115, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin 10115, Germany
- Einstein Center for Neurosciences Berlin, Berlin 10117, Germany
2. Jennings SG, Chen J, Johansen N, Goodman SS. Evidence for the Auditory Nerve Generating Envelope Following Responses When Measured from Eardrum Electrodes. J Assoc Res Otolaryngol 2025; 26:147-162. PMID: 40048123; PMCID: PMC11996730; DOI: 10.1007/s10162-025-00979-0.
Abstract
Steady-state auditory evoked potentials are useful for studying the human auditory system and diagnosing hearing disorders. Identifying the generators of these potentials is essential for interpretation of data and for determining appropriate clinical and research applications. Here we infer putative generators of a steady-state potential measured from an electrode on the eardrum and compare this potential with the traditional envelope following response (EFR) measured from an electrode on the high forehead (N = 18, 10 female). We hypothesized that responses from the eardrum electrode would be consistent with an auditory nerve (AN) compound action potential (CAP) evoked by each cycle of the stimulus envelope, resulting in a potential we call CAPENV. Steady-state potentials were evoked by a 90 dB peSPL, 3000-Hz puretone carrier whose envelope was modulated by a tone sweep with frequencies from 20 to 160 Hz or 80 to 640 Hz. We calculated group delay to infer potential generators. We also compared the empirically measured CAPENV with simulated CAPENV from a humanized model of AN responses. Response latencies and model simulations support the interpretation that CAPENV is generated by the AN rather than hair cell or brainstem generators for all modulation frequencies tested. Conversely, latencies for the traditional EFR were consistent with a shift from cortical to brainstem generators as the modulation frequency increased from 20 to 200 Hz. We propose that CAPENV may be a fruitful tool for assessing AN function in humans with suspected AN fiber loss and/or temporal coding disorders.
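For reference, the group-delay estimate used above to attribute generators is conventionally the negative slope of the unwrapped response phase across modulation frequency (standard definition; the authors' exact estimator may differ in detail):

```latex
% Group delay from the phase \varphi of the steady-state response as a
% function of modulation frequency f_m (standard definition).
\tau_g(f_m) \;=\; -\,\frac{1}{2\pi}\,\frac{d\varphi(f_m)}{d f_m}
```

Short group delays of a few milliseconds point to peripheral generators such as the auditory nerve, whereas longer delays implicate brainstem or cortical sources, which is the logic applied to CAPENV and the traditional EFR here.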
Affiliation(s)
- Skyler G Jennings
- Department of Communication Sciences and Disorders, The University of Utah, Salt Lake City, UT, USA
- Jessica Chen
- Department of Communication Sciences and Disorders, The University of Utah, Salt Lake City, UT, USA
- Nathan Johansen
- Department of Communication Sciences and Disorders, The University of Utah, Salt Lake City, UT, USA
- Shawn S Goodman
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, IA, USA
3. Kuokkanen PT, Kraemer I, Koeppl C, Carr CE, Kempter R. Single neuron contributions to the auditory brainstem EEG. bioRxiv 2025:2024.05.29.596509 [preprint]. PMID: 38853863; PMCID: PMC11160769; DOI: 10.1101/2024.05.29.596509.
Abstract
The auditory brainstem response (ABR) is an acoustically evoked EEG potential that is an important diagnostic tool for hearing loss, especially in newborns. The ABR originates from the response sequence of auditory nerve and brainstem nuclei, and a click-evoked ABR typically shows three positive peaks ('waves') within the first six milliseconds. However, an assignment of the waves of the ABR to specific sources is difficult, and a quantification of contributions to the ABR waves is not available. Here, we exploit the large size and physical separation of the barn owl first-order cochlear nucleus magnocellularis (NM) to estimate single-cell contributions to the ABR. We simultaneously recorded NM neurons' spikes and the EEG in owls of both sexes, and found that ≳ 5,000 spontaneous single-cell spikes are necessary to isolate a significant spike-triggered average response at the EEG electrode. An average single-neuron contribution to the ABR was predicted by convolving the spike-triggered average with the cell's peri-stimulus time histogram. Amplitudes of predicted contributions of single NM cells typically reached 32.9 ± 1.1 nV (mean ± SE, range: 2.5-162.7 nV), or 0.07 ± 0.02% (median ± SE; range from 0.01% to 1%) of the ABR amplitude. The time of the predicted peak coincided best with the peak of the ABR wave II, independent of the click sound level. Our results suggest that individual neurons' contributions to an EEG can vary widely, and that wave II of the ABR is shaped by NM units.
Significance statement: The auditory brainstem response (ABR) is a scalp potential used for the diagnosis of hearing loss, both clinically and in research. We investigated the contribution of single action potentials from auditory brainstem neurons to the ABR and provide direct evidence that action potentials recorded in a first order auditory nucleus, and their EEG contribution, coincide with wave II of the ABR. The study also shows that the contribution of single cells varies strongly across the population.
Affiliation(s)
- Paula T Kuokkanen
- Institute for Theoretical Biology, Humboldt-Universität zu Berlin, 10115 Berlin, Germany
- Ira Kraemer
- Department of Biology, University of Maryland College Park, College Park, MD 20742
- Christine Koeppl
- Department of Neuroscience, School of Medicine and Health Sciences, Research Center for Neurosensory Sciences and Cluster of Excellence "Hearing4all" Carl von Ossietzky University, 26129 Oldenburg, Germany
- Catherine E Carr
- Department of Biology, University of Maryland College Park, College Park, MD 20742
- Richard Kempter
- Institute for Theoretical Biology, Humboldt-Universität zu Berlin, 10115 Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Einstein Center for Neurosciences Berlin, 10117 Berlin, Germany
4. Li BZ, Poleg S, Ridenour M, Tollin D, Lei T, Klug A. Computational model for synthesizing auditory brainstem responses to assess neuronal alterations in aging and autistic animal models. bioRxiv 2025:2024.08.04.606499 [preprint]. PMID: 39211118; PMCID: PMC11361117; DOI: 10.1101/2024.08.04.606499.
Abstract
Purpose: The auditory brainstem response (ABR) is a widely used objective electrophysiology measure for non-invasively assessing auditory function and neural activity in the auditory brainstem, but its ability to reflect detailed neuronal processing is limited due to the averaging nature of the electroencephalogram-type recordings.
Method: This study addresses this limitation by developing a computational model of the auditory brainstem which is capable of synthesizing ABR traces based on a large, population-scale neural extrapolation of a spiking neuronal network of auditory brainstem circuitry. The model was able to recapitulate alterations in ABR waveform morphology that have been shown to be present in two medical conditions: animal models of autism and aging. Moreover, in both of these conditions, these ABR alterations are caused by known distinct changes in auditory brainstem physiology, and the model could recapitulate these changes.
Results: In the autism model, the simulation revealed myelin deficits and hyperexcitability, which caused a decreased wave III amplitude and a prolonged wave III-V interval, consistent with experimentally recorded ABRs in Fmr1-KO mice. For the aging condition, the model recapitulated ABRs recorded in aged gerbils and indicated a reduction in activity in the medial nucleus of the trapezoid body (MNTB), a finding validated by confocal imaging data.
Conclusion: These results demonstrate not only the model's accuracy but also its capability of linking features of ABR morphology to underlying neuronal properties and suggesting follow-up physiological experiments.
5. Kipping D, Zhang Y, Nogueira W. A Computational Model of the Electrically or Acoustically Evoked Compound Action Potential in Cochlear Implant Users With Residual Hearing. IEEE Trans Biomed Eng 2024; 71:3192-3203. PMID: 38843064; DOI: 10.1109/tbme.2024.3410686.
Abstract
Objective: In cochlear implant users with residual acoustic hearing, compound action potentials (CAPs) can be evoked by acoustic (aCAP) or electric (eCAP) stimulation and recorded through the electrodes of the implant. We propose a novel computational model to simulate aCAPs and eCAPs in humans, considering the interaction between combined electric-acoustic stimulation that occurs in the auditory nerve.
Methods: The model consists of three components: a 3D finite element method model of an implanted cochlea, a phenomenological single-neuron spiking model for electric-acoustic stimulation, and a physiological multi-compartment neuron model to simulate the individual nerve fiber contributions to the CAP.
Results: The CAP morphologies closely resembled those known from humans. The spread of excitation derived from eCAPs by varying the recording electrode along the cochlear implant electrode array was consistent with published human data. The predicted CAP amplitude growth functions largely resembled human data, with deviations in absolute CAP amplitudes for acoustic stimulation. The model reproduced the suppression of eCAPs by simultaneously presented acoustic tone bursts for different masker frequencies and probe stimulation electrodes.
Conclusion: The proposed model can simulate CAP responses to electric, acoustic, or combined electric-acoustic stimulation. It considers the dependence on stimulation and recording sites in the cochlea, as well as the interaction between electric and acoustic stimulation in the auditory nerve.
Significance: The model enhances comprehension of CAPs and peripheral electric-acoustic interaction. It can be used in the future to investigate objective methods, such as hearing threshold assessment or estimation of neural health through aCAPs or eCAPs.
6. Kulasingham JP, Innes-Brown H, Enqvist M, Alickovic E. Level-Dependent Subcortical Electroencephalography Responses to Continuous Speech. eNeuro 2024; 11:ENEURO.0135-24.2024. PMID: 39142822; DOI: 10.1523/eneuro.0135-24.2024.
Abstract
The auditory brainstem response (ABR) is a measure of subcortical activity in response to auditory stimuli. The wave V peak of the ABR depends on the stimulus intensity level, and has been widely used for clinical hearing assessment. Conventional methods estimate the ABR by averaging electroencephalography (EEG) responses to short, unnatural stimuli such as clicks. Recent work has moved toward more ecologically relevant continuous speech stimuli using linear deconvolution models called temporal response functions (TRFs). Investigating whether the TRF waveform changes with stimulus intensity is a crucial step toward the use of natural speech stimuli for hearing assessments involving subcortical responses. Here, we develop methods to estimate level-dependent subcortical TRFs using EEG data collected from 21 participants listening to continuous speech presented at 4 different intensity levels. We find that level-dependent changes can be detected in the wave V peak of the subcortical TRF for almost all participants, and are consistent with level-dependent changes in click-ABR wave V. We also investigate the most suitable peripheral auditory model to generate predictors for level-dependent subcortical TRFs and find that simple gammatone filterbanks perform the best. Additionally, around 6 min of data may be sufficient for detecting level-dependent effects and wave V peaks above the noise floor for speech segments with higher intensity. Finally, we show a proof-of-concept that level-dependent subcortical TRFs can be detected even for the inherent intensity fluctuations in natural continuous speech.
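As a rough illustration of the kind of simple gammatone-filterbank predictor referred to above, the sketch below filters the stimulus through a gammatone filterbank, half-wave rectifies each channel, and averages across channels. The centre frequencies, channel count, and rectification stage are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np
from scipy.signal import gammatone, lfilter

def gammatone_predictor(audio, fs, cfs=None):
    """Gammatone filterbank -> half-wave rectification -> channel average,
    producing a single predictor waveform for subcortical TRF estimation
    (illustrative pipeline only)."""
    if cfs is None:
        cfs = np.geomspace(100.0, 6000.0, 24)     # assumed centre frequencies
    channels = []
    for cf in cfs:
        b, a = gammatone(cf, 'iir', fs=fs)        # 4th-order IIR gammatone
        channels.append(np.maximum(lfilter(b, a, audio), 0.0))
    return np.mean(channels, axis=0)

fs = 16_000
audio = np.random.randn(10 * fs)                  # stand-in for 10 s of speech
predictor = gammatone_predictor(audio, fs)        # feed this to the TRF model
```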
Affiliation(s)
- Joshua P Kulasingham
- Automatic Control, Department of Electrical Engineering, Linköping University, 581 83 Linköping, Sweden
- Hamish Innes-Brown
- Eriksholm Research Centre, DK-3070 Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark
- Martin Enqvist
- Automatic Control, Department of Electrical Engineering, Linköping University, 581 83 Linköping, Sweden
- Emina Alickovic
- Automatic Control, Department of Electrical Engineering, Linköping University, 581 83 Linköping, Sweden
- Eriksholm Research Centre, DK-3070 Snekkersten, Denmark
7. Temboury-Gutierrez M, Encina-Llamas G, Dau T. Predicting early auditory evoked potentials using a computational model of auditory-nerve processing. J Acoust Soc Am 2024; 155:1799-1812. PMID: 38445986; DOI: 10.1121/10.0025136.
Abstract
Non-invasive electrophysiological measures, such as auditory evoked potentials (AEPs), play a crucial role in diagnosing auditory pathology. However, the relationship between AEP morphology and cochlear degeneration remains complex and not well understood. Dau [J. Acoust. Soc. Am. 113, 936-950 (2003)] proposed a computational framework for modeling AEPs that utilized a nonlinear auditory-nerve (AN) model followed by a linear unitary response function. While the model captured some important features of the measured AEPs, it also exhibited several discrepancies in response patterns compared to the actual measurements. In this study, an enhanced AEP modeling framework is presented, incorporating an improved AN model, and the conclusions from the original study were reevaluated. Simulation results with transient and sustained stimuli demonstrated accurate auditory brainstem responses (ABRs) and frequency-following responses (FFRs) as a function of stimulation level, although wave-V latencies remained too short, similar to the original study. When compared to physiological responses in animals, the revised model framework showed a more accurate balance between the contributions of auditory-nerve fibers (ANFs) at on- and off-frequency regions to the predicted FFRs. These findings emphasize the importance of cochlear processing in brainstem potentials. This framework may provide a valuable tool for assessing human AN models and simulating AEPs for various subtypes of peripheral pathologies, offering opportunities for research and clinical applications.
Affiliation(s)
- Miguel Temboury-Gutierrez
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, DK-2800, Denmark
- Gerard Encina-Llamas
- Copenhagen Hearing and Balance Center, Ear, Nose and Throat (ENT) and Audiology Clinic, Rigshospitalet, Copenhagen University Hospital, Copenhagen, DK-2100, Denmark
- Faculty of Medicine, University of Vic-Central University of Catalonia (UVic-UCC), Vic, 08500, Catalonia, Spain
- Torsten Dau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, DK-2800, Denmark
- Copenhagen Hearing and Balance Center, Ear, Nose and Throat (ENT) and Audiology Clinic, Rigshospitalet, Copenhagen University Hospital, Copenhagen, DK-2100, Denmark
8. Kulasingham JP, Bachmann FL, Eskelund K, Enqvist M, Innes-Brown H, Alickovic E. Predictors for estimating subcortical EEG responses to continuous speech. PLoS One 2024; 19:e0297826. PMID: 38330068; PMCID: PMC10852227; DOI: 10.1371/journal.pone.0297826.
Abstract
Perception of sounds and speech involves structures in the auditory brainstem that rapidly process ongoing auditory stimuli. The role of these structures in speech processing can be investigated by measuring their electrical activity using scalp-mounted electrodes. However, typical analysis methods involve averaging neural responses to many short repetitive stimuli that bear little relevance to daily listening environments. Recently, subcortical responses to more ecologically relevant continuous speech were detected using linear encoding models. These methods estimate the temporal response function (TRF), which is a regression model that minimises the error between the measured neural signal and a predictor derived from the stimulus. Using predictors that model the highly non-linear peripheral auditory system may improve linear TRF estimation accuracy and peak detection. Here, we compare predictors from both simple and complex peripheral auditory models for estimating brainstem TRFs on electroencephalography (EEG) data from 24 participants listening to continuous speech. We also investigate the data length required for estimating subcortical TRFs, and find that around 12 minutes of data is sufficient for clear wave V peaks (>3 dB SNR) to be seen in nearly all participants. Interestingly, predictors derived from simple filterbank-based models of the peripheral auditory system yield TRF wave V peak SNRs that are not significantly different from those estimated using a complex model of the auditory nerve, provided that the nonlinear effects of adaptation in the auditory system are appropriately modelled. Crucially, computing predictors from these simpler models is more than 50 times faster compared to the complex model. This work paves the way for efficient modelling and detection of subcortical processing of continuous speech, which may lead to improved diagnosis metrics for hearing impairment and assistive hearing technology.
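The TRF idea described above, a regression that maps lagged copies of a stimulus predictor onto the EEG, can be sketched with plain ridge regression. The lag range, regularisation, sampling rate, and toy data below are illustrative assumptions and not the parameters or methods used in the study.

```python
import numpy as np

def estimate_trf(predictor, eeg, fs, tmin=-0.005, tmax=0.015, lam=1e3):
    """Ridge-regression TRF: model eeg[t] ~ sum_k trf[k] * predictor[t - k]
    over lags tmin..tmax (seconds). Circular shifts are used for brevity."""
    lags = np.arange(int(tmin * fs), int(tmax * fs))
    X = np.column_stack([np.roll(predictor, k) for k in lags])
    R = X.T @ X + lam * np.eye(len(lags))          # regularised covariance
    trf = np.linalg.solve(R, X.T @ eeg)
    return lags / fs, trf

# Toy data: the 'EEG' is the predictor delayed by ~5 ms plus noise.
fs = 4096
n = 30 * fs
predictor = np.abs(np.random.randn(n))
eeg = np.convolve(predictor, np.r_[np.zeros(40), 1.0], mode="same")
eeg = eeg + np.random.randn(n)
times, trf = estimate_trf(predictor, eeg, fs)      # trf peaks near +5 ms
```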
Affiliation(s)
- Joshua P. Kulasingham
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Martin Enqvist
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Hamish Innes-Brown
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Emina Alickovic
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Eriksholm Research Centre, Snekkersten, Denmark
9. Bachmann FL, Kulasingham JP, Eskelund K, Enqvist M, Alickovic E, Innes-Brown H. Extending Subcortical EEG Responses to Continuous Speech to the Sound-Field. Trends Hear 2024; 28:23312165241246596. PMID: 38738341; PMCID: PMC11092544; DOI: 10.1177/23312165241246596.
Abstract
The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment, which is conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, brainstem responses to continuous speech presented via earphones have been recently detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses to continuous speech presented in the sound-field, and assess the amount of data needed to estimate brainstem TRFs. Electroencephalography (EEG) was recorded from 24 normal hearing participants while they listened to clicks and stories presented via earphones and loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrated that subcortical responses to continuous speech could be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data was sufficient for the TRFs of all participants to show clear wave V peaks for both earphones and sound-field stimuli. Subcortical TRFs to continuous speech were highly consistent in both earphone and sound-field conditions, and with click ABRs. However, sound-field TRFs required slightly more data (16 minutes) to achieve clear wave V peaks compared to earphone TRFs (12 minutes), possibly due to effects of room acoustics. By investigating subcortical responses to sound-field speech stimuli, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.
Affiliation(s)
- Joshua P. Kulasingham
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Martin Enqvist
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Emina Alickovic
- Eriksholm Research Centre, Snekkersten, Denmark
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Hamish Innes-Brown
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
10. Alamri Y, Jennings SG. Computational modeling of the human compound action potential. J Acoust Soc Am 2023; 153:2376. PMID: 37092943; PMCID: PMC10119875; DOI: 10.1121/10.0017863.
Abstract
The auditory nerve (AN) compound action potential (CAP) is an important tool for assessing auditory disorders and monitoring the health of the auditory periphery during surgical procedures. The CAP has been mathematically conceptualized as the convolution of a unit response (UR) waveform with the firing rate of a population of AN fibers. Here, an approach for predicting experimentally recorded CAPs in humans is proposed, which involves the use of human-based computational models to simulate AN activity. CAPs elicited by clicks, chirps, and amplitude-modulated carriers were simulated and compared with empirically recorded CAPs from human subjects. In addition, narrowband CAPs derived from noise-masked clicks and tone bursts were simulated. Many morphological, temporal, and spectral aspects of human CAPs were captured by the simulations for all stimuli tested. These findings support the use of model simulations of the human CAP to refine existing human-based models of the auditory periphery, aid in the design and analysis of auditory experiments, and predict the effects of hearing loss, synaptopathy, and other auditory disorders on the human CAP.
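The convolution framing mentioned above can be written compactly as follows (one common way to express it; symbols follow the abstract's wording rather than the paper's exact notation):

```latex
% CAP as the summed firing rate of N auditory-nerve fibers convolved
% with a common unit-response waveform UR(t).
\mathrm{CAP}(t) \;=\; \int_{0}^{t} \Bigl[\sum_{i=1}^{N} r_i(\tau)\Bigr]\, \mathrm{UR}(t-\tau)\, d\tau
```

Here r_i(τ) is the instantaneous firing rate of fiber i, N is the number of fibers, and UR(t) is the unit response of a single fiber at the recording electrode.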
Affiliation(s)
- Yousef Alamri
- Department of Biomedical Engineering, The University of Utah, 390 South, 1530 East, BEHS 1201, Salt Lake City, Utah 84112, USA
- Skyler G Jennings
- Department of Communication Sciences and Disorders, The University of Utah, 390 South, 1530 East, BEHS 1201, Salt Lake City, Utah 84112, USA
11. Osses Vecchi A, Varnet L, Carney LH, Dau T, Bruce IC, Verhulst S, Majdak P. A comparative study of eight human auditory models of monaural processing. Acta Acust 2022; 6:17. PMID: 36325461; PMCID: PMC9625898; DOI: 10.1051/aacus/2022008.
Abstract
A number of auditory models have been developed using diverging approaches, either physiological or perceptual, but they share comparable stages of signal processing, as they are inspired by the same constitutive parts of the auditory system. We compare eight monaural models that are openly accessible in the Auditory Modelling Toolbox. We discuss the considerations required to make the model outputs comparable to each other, as well as the results for the following model processing stages or their equivalents: Outer and middle ear, cochlear filter bank, inner hair cell, auditory nerve synapse, cochlear nucleus, and inferior colliculus. The discussion includes a list of recommendations for future applications of auditory models.
Affiliation(s)
- Alejandro Osses Vecchi
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École Normale Supérieure, PSL University, CNRS, 75005 Paris, France
- Léo Varnet
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École Normale Supérieure, PSL University, CNRS, 75005 Paris, France
- Laurel H. Carney
- Departments of Biomedical Engineering and Neuroscience, University of Rochester, Rochester, NY 14642, USA
- Torsten Dau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark
- Ian C. Bruce
- Department of Electrical and Computer Engineering, McMaster University, Hamilton, ON L8S 4K1, Canada
- Sarah Verhulst
- Hearing Technology group, WAVES, Department of Information Technology, Ghent University, 9000 Ghent, Belgium
- Piotr Majdak
- Acoustics Research Institute, Austrian Academy of Sciences, 1040 Vienna, Austria
12. Harris KC, Bao J. Optimizing non-invasive functional markers for cochlear deafferentation based on electrocochleography and auditory brainstem responses. J Acoust Soc Am 2022; 151:2802. PMID: 35461487; PMCID: PMC9034896; DOI: 10.1121/10.0010317.
Abstract
Accumulating evidence suggests that cochlear deafferentation may contribute to suprathreshold deficits observed with or without elevated hearing thresholds, and can lead to accelerated age-related hearing loss. Currently, there are no clinical diagnostic tools to detect human cochlear deafferentation in vivo. Preclinical studies using a combination of electrophysiological and post-mortem histological methods clearly demonstrate cochlear deafferentation including myelination loss, mitochondrial damage in spiral ganglion neurons (SGNs), and synaptic loss between inner hair cells and SGNs. Since clinical diagnosis of human cochlear deafferentation cannot include post-mortem histological quantification, various attempts based on functional measurements have been made to detect cochlear deafferentation. So far, those efforts have led to inconclusive results. Two major obstacles to the development of in vivo clinical diagnostics include a lack of standardized methods to validate new approaches and characterize the normative range of repeated measurements. In this overview, we examine strategies from previous studies to detect cochlear deafferentation from electrocochleography and auditory brainstem responses. We then summarize possible approaches to improve these non-invasive functional methods for detecting cochlear deafferentation with a focus on cochlear synaptopathy. We identify conceptual approaches that should be tested to associate unique electrophysiological features with cochlear deafferentation.
Affiliation(s)
- Kelly C Harris
- Department of Otolaryngology, Head & Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 550, Charleston, South Carolina 29425, USA
- Jianxin Bao
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio 44272, USA
13. Viswanathan V, Shinn-Cunningham BG, Heinz MG. Speech Categorization Reveals the Role of Early-Stage Temporal-Coherence Processing in Auditory Scene Analysis. J Neurosci 2022; 42:240-254. PMID: 34764159; PMCID: PMC8802934; DOI: 10.1523/jneurosci.1610-21.2021.
Abstract
Temporal coherence of sound fluctuations across spectral channels is thought to aid auditory grouping and scene segregation. Although prior studies on the neural bases of temporal-coherence processing focused mostly on cortical contributions, neurophysiological evidence suggests that temporal-coherence-based scene analysis may start as early as the cochlear nucleus (i.e., the first auditory region supporting cross-channel processing over a wide frequency range). Accordingly, we hypothesized that aspects of temporal-coherence processing that could be realized in early auditory areas may shape speech understanding in noise. We then explored whether physiologically plausible computational models could account for results from a behavioral experiment that measured consonant categorization in different masking conditions. We tested whether within-channel masking of target-speech modulations predicted consonant confusions across the different conditions and whether predictions were improved by adding across-channel temporal-coherence processing mirroring the computations known to exist in the cochlear nucleus. Consonant confusions provide a rich characterization of error patterns in speech categorization, and are thus crucial for rigorously testing models of speech perception; however, to the best of our knowledge, they have not been used in prior studies of scene analysis. We find that within-channel modulation masking can reasonably account for category confusions, but that it fails when temporal fine structure cues are unavailable. However, the addition of across-channel temporal-coherence processing significantly improves confusion predictions across all tested conditions. Our results suggest that temporal-coherence processing strongly shapes speech understanding in noise and that physiological computations that exist early along the auditory pathway may contribute to this process.
Significance Statement: Temporal coherence of sound fluctuations across distinct frequency channels is thought to be important for auditory scene analysis. Prior studies on the neural bases of temporal-coherence processing focused mostly on cortical contributions, and it was unknown whether speech understanding in noise may be shaped by across-channel processing that exists in earlier auditory areas. Using physiologically plausible computational modeling to predict consonant confusions across different listening conditions, we find that across-channel temporal coherence contributes significantly to scene analysis and speech perception and that such processing may arise in the auditory pathway as early as the brainstem. By virtue of providing a richer characterization of error patterns not obtainable with just intelligibility scores, consonant confusions yield unique insight into scene analysis mechanisms.
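As a loose operationalization of "across-channel temporal coherence" (a crude stand-in only; the cochlear-nucleus-inspired computation tested in the paper is more elaborate), one can correlate the short-time envelopes of different frequency channels:

```python
import numpy as np

def envelope_coherence(env_a, env_b, fs, win_s=0.1):
    """Short-time correlation between the envelopes of two frequency
    channels, used here only to illustrate the notion of across-channel
    temporal coherence."""
    n = int(win_s * fs)
    n_win = min(len(env_a), len(env_b)) // n
    vals = []
    for w in range(n_win):
        a = env_a[w * n:(w + 1) * n] - env_a[w * n:(w + 1) * n].mean()
        b = env_b[w * n:(w + 1) * n] - env_b[w * n:(w + 1) * n].mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        vals.append((a * b).sum() / denom if denom > 0 else 0.0)
    return np.array(vals)                  # one value per 100-ms window

fs = 16_000
t = np.arange(fs) / fs
shared = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)   # common 4-Hz modulation
env_lo = shared + 0.1 * np.random.randn(fs)      # 'low-frequency' channel
env_hi = shared + 0.1 * np.random.randn(fs)      # 'high-frequency' channel
print(envelope_coherence(env_lo, env_hi, fs).mean())  # high when co-modulated
```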
Affiliation(s)
- Vibha Viswanathan
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, Indiana 47907
- Michael G Heinz
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, Indiana 47907
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana 47907
14. A convolutional neural-network framework for modelling auditory sensory cells and synapses. Commun Biol 2021; 4:827. PMID: 34211095; PMCID: PMC8249591; DOI: 10.1038/s42003-021-02341-5.
Abstract
In classical computational neuroscience, analytical model descriptions are derived from neuronal recordings to mimic the underlying biological system. These neuronal models are typically slow to compute and cannot be integrated within large-scale neuronal simulation frameworks. We present a hybrid, machine-learning and computational-neuroscience approach that transforms analytical models of sensory neurons and synapses into deep-neural-network (DNN) neuronal units with the same biophysical properties. Our DNN-model architecture comprises parallel and differentiable equations that can be used for backpropagation in neuro-engineering applications, and offers a simulation run-time improvement factor of 70 and 280 on CPU or GPU systems respectively. We focussed our development on auditory neurons and synapses, and show that our DNN-model architecture can be extended to a variety of existing analytical models. We describe how our approach for auditory models can be applied to other neuron and synapse types to help accelerate the development of large-scale brain networks and DNN-based treatments of the pathological system.
Drakopoulos et al. developed a machine-learning and computational-neuroscience approach that transforms analytical models of sensory neurons and synapses into deep-neural-network (DNN) neuronal units with the same biophysical properties. Focusing on auditory neurons and synapses, they showed that their DNN-model architecture could be extended to a variety of existing analytical models and to other neuron and synapse types, thus potentially assisting the development of large-scale brain networks and DNN-based treatments.
15. Encina-Llamas G, Dau T, Epp B. On the use of envelope following responses to estimate peripheral level compression in the auditory system. Sci Rep 2021; 11:6962. PMID: 33772043; PMCID: PMC7997911; DOI: 10.1038/s41598-021-85850-x.
Abstract
Individual estimates of cochlear compression may provide complementary information to traditional audiometric hearing thresholds in disentangling different types of peripheral cochlear damage. Here we investigated the use of the slope of envelope following response (EFR) magnitude-level functions obtained from four simultaneously presented amplitude modulated tones with modulation frequencies of 80-100 Hz as a proxy of peripheral level compression. Compression estimates in individual normal hearing (NH) listeners were consistent with previously reported group-averaged compression estimates based on psychoacoustical and distortion-product oto-acoustic emission (DPOAE) measures in human listeners. They were also similar to basilar membrane (BM) compression values measured invasively in non-human mammals. EFR-based compression estimates in hearing-impaired listeners were less compressive than those for the NH listeners, consistent with a reduction of BM compression. Cochlear compression was also estimated using DPOAEs in the same NH listeners. DPOAE estimates were larger (less compressive) than EFRs estimates, showing no correlation. Despite the numerical concordance between EFR-based compression estimates and group-averaged estimates from other methods, simulations using an auditory nerve (AN) model revealed that compression estimates based on EFRs might be highly influenced by contributions from off-characteristic frequency (CF) neural populations. This compromises the possibility to estimate on-CF (i.e., frequency-specific or "local") peripheral level compression with EFRs.
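The compression proxy itself is simple: fit a line to EFR magnitude (in dB) versus stimulus level (in dB) and read off the slope, with slopes well below 1 dB/dB indicating compressive growth. The numbers below are invented for illustration only.

```python
import numpy as np

# Slope of the EFR magnitude-level function as a compression proxy.
levels_db = np.array([40, 50, 60, 70, 80])                 # stimulus levels (dB SPL)
efr_mag_db = np.array([-15.0, -12.8, -10.5, -8.4, -6.1])   # EFR magnitude (dB)

slope, intercept = np.polyfit(levels_db, efr_mag_db, 1)
print(f"EFR growth slope: {slope:.2f} dB/dB")              # ~0.22 dB/dB -> compressive
```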
Affiliation(s)
- Gerard Encina-Llamas
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark (DTU), 2800, Kongens Lyngby, Denmark
- Torsten Dau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark (DTU), 2800, Kongens Lyngby, Denmark
- Bastian Epp
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark (DTU), 2800, Kongens Lyngby, Denmark
16. Baby D, Van Den Broucke A, Verhulst S. A convolutional neural-network model of human cochlear mechanics and filter tuning for real-time applications. Nat Mach Intell 2021; 3:134-143. PMID: 33629031; PMCID: PMC7116797; DOI: 10.1038/s42256-020-00286-8.
Abstract
Auditory models are commonly used as feature extractors for automatic speech-recognition systems or as front-ends for robotics, machine-hearing and hearing-aid applications. Although auditory models can capture the biophysical and nonlinear properties of human hearing in great detail, these biophysical models are computationally expensive and cannot be used in real-time applications. We present a hybrid approach where convolutional neural networks are combined with computational neuroscience to yield a real-time end-to-end model for human cochlear mechanics, including level-dependent filter tuning (CoNNear). The CoNNear model was trained on acoustic speech material and its performance and applicability were evaluated using (unseen) sound stimuli commonly employed in cochlear mechanics research. The CoNNear model accurately simulates human cochlear frequency selectivity and its dependence on sound intensity, an essential quality for robust speech intelligibility at negative speech-to-background-noise ratios. The CoNNear architecture is based on parallel and differentiable computations and has the power to achieve real-time human performance. These unique CoNNear features will enable the next generation of human-like machine-hearing applications.
Affiliation(s)
- Deepak Baby
- Hearing Technology @ WAVES, Dept. of Information Technology, Ghent University, 9000 Ghent, Belgium
- Arthur Van Den Broucke
- Hearing Technology @ WAVES, Dept. of Information Technology, Ghent University, 9000 Ghent, Belgium
- Sarah Verhulst
- Hearing Technology @ WAVES, Dept. of Information Technology, Ghent University, 9000 Ghent, Belgium
17. Enhancing the sensitivity of the envelope-following response for cochlear synaptopathy screening in humans: The role of stimulus envelope. Hear Res 2020; 400:108132. PMID: 33333426; DOI: 10.1016/j.heares.2020.108132.
Abstract
Auditory de-afferentation, a permanent reduction in the number of inner-hair-cells and auditory-nerve synapses due to cochlear damage or synaptopathy, can reliably be quantified using temporal bone histology and immunostaining. However, there is an urgent need for non-invasive markers of synaptopathy to study its perceptual consequences in live humans and to develop effective therapeutic interventions. While animal studies have identified candidate auditory-evoked-potential (AEP) markers for synaptopathy, their interpretation in humans has suffered from translational issues related to neural generator differences, unknown hearing-damage histopathologies or lack of measurement sensitivity. To render AEP-based markers of synaptopathy more sensitive and differential to the synaptopathy aspect of sensorineural hearing loss, we followed a combined computational and experimental approach. Starting from the known characteristics of auditory-nerve physiology, we optimized the stimulus envelope to stimulate the available auditory-nerve population optimally and synchronously to generate strong envelope-following-responses (EFRs). We further used model simulations to explore which stimuli evoked a response that was sensitive to synaptopathy, while being maximally insensitive to possible co-existing outer-hair-cell pathologies. We compared the model-predicted trends to AEPs recorded in younger and older listeners (N=44, 24f) who had normal or impaired audiograms with suspected age-related synaptopathy in the older cohort. We conclude that optimal stimulation paradigms for EFR-based quantification of synaptopathy should have sharply rising envelope shapes, a minimal plateau duration of 1.7-2.1 ms for a 120-Hz modulation rate, and inter-peak intervals which contain near-zero amplitudes. From our recordings, the optimal EFR-evoking stimulus had a rectangular envelope shape with a 25% duty cycle and a 95% modulation depth. Older listeners with normal or impaired audiometric thresholds showed significantly reduced EFRs, which were consistent with how (age-induced) synaptopathy affected these responses in the model.
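A stimulus with the envelope characteristics reported as optimal above (120-Hz modulation rate, 25% duty cycle, 95% modulation depth, rectangular shape) could be generated as sketched below. The carrier frequency, duration, and sampling rate are assumptions, and the depth is implemented here as the on/off amplitude ratio, which is one possible convention.

```python
import numpy as np

def rectangular_am_stimulus(fc=4000.0, fm=120.0, duty=0.25, depth=0.95,
                            dur=1.0, fs=48_000):
    """Rectangular-envelope EFR stimulus: the envelope sits at 1.0 for the
    'on' fraction (duty) of each modulation cycle and at 1 - depth otherwise.
    fc, dur and fs are illustrative assumptions."""
    t = np.arange(int(dur * fs)) / fs
    phase = (t * fm) % 1.0                           # position within each cycle
    env = np.where(phase < duty, 1.0, 1.0 - depth)   # ~2.1-ms plateau at 120 Hz
    return env * np.sin(2 * np.pi * fc * t)

stim = rectangular_am_stimulus()
```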
18. Saiz-Alía M, Reichenbach T. Computational modeling of the auditory brainstem response to continuous speech. J Neural Eng 2020; 17:036035. DOI: 10.1088/1741-2552/ab970d.
19. Jiang Y, Sun S, Li P, Chen S, Li G, Wang D, Liu Z, Tan J, Samuel OW, Deng H, Wang X, Zhu M, Wang X. Comparing Auditory Brainstem Responses evoked by Click and Sweep-Tone in Normal-Hearing Adults. Annu Int Conf IEEE Eng Med Biol Soc 2019:5237-5240. PMID: 31947039; DOI: 10.1109/embc.2019.8856452.
Abstract
The auditory brainstem response (ABR) is an objective method for detecting hearing loss. The ABR evoked by a click, a broadband signal, is generally considered the gold standard. However, because of the inherent delay of the cochlear traveling wave, a click cannot excite the entire basilar membrane at the same time, which attenuates the evoked ABR waveform. To address this limitation, this study designed a sweep-tone stimulus that realigns the arrival times of different frequency components according to the delay characteristics of the basilar membrane, and used it to evoke the ABR. We then compared the performance of the proposed sweep-tone-evoked ABR with the commonly used click-evoked ABR at different test levels and stimulus rates. The results showed that the waveform morphology of the sweep-tone-evoked ABR was significantly better than that of the click-evoked ABR across test levels and stimulus rates. Moreover, compared with the click-evoked ABR at different numbers of sweeps, the sweep-tone stimulus yielded a clear ABR waveform at a relatively faster rate. Hence, the proposed sweep-tone approach provides a new way to improve the sensitivity of ABR detection in hearing loss.
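The idea behind the sweep-tone stimulus is to delay high-frequency components relative to low-frequency ones so that the cochlear traveling-wave delay is compensated and all places along the basilar membrane respond nearly synchronously. The sketch below illustrates the concept only; the delay-model constants and frequency range are placeholders, not values from the paper.

```python
import numpy as np

def cochlear_delay_ms(f_hz, a=10.0, b=0.5):
    """Illustrative traveling-wave delay model: longer delays at low
    frequencies. 'a' and 'b' are placeholder constants, not fitted values."""
    return a * (f_hz / 1000.0) ** (-b)

def sweep_tone(fs=48_000, dur=0.025, f_lo=200.0, f_hi=8000.0):
    """Flat-spectrum chirp whose low-frequency components lead the
    high-frequency ones by the modelled cochlear delay (concept sketch)."""
    n = int(dur * fs)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spec = np.zeros(len(freqs), dtype=complex)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    tau = cochlear_delay_ms(freqs[band]) / 1000.0    # seconds
    stim_delay = tau.max() - tau                     # low f -> presented earlier
    spec[band] = np.exp(-2j * np.pi * freqs[band] * stim_delay)
    x = np.fft.irfft(spec, n)
    return x / np.max(np.abs(x))

chirp = sweep_tone()
```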
20. Altoè A, Shera CA. Nonlinear cochlear mechanics without direct vibration-amplification feedback. Phys Rev Research 2020; 2:013218. PMID: 33403361; PMCID: PMC7781069; DOI: 10.1103/physrevresearch.2.013218.
Abstract
Recent in vivo recordings from the mammalian cochlea indicate that although the motion of the basilar membrane appears actively amplified and nonlinear only at frequencies relatively close to the peak of the response, the internal motions of the organ of Corti display these same features over a much wider range of frequencies. These experimental findings are not easily explained by the textbook view of cochlear mechanics, in which cochlear amplification is controlled by the motion of the basilar membrane (BM) in a tight, closed-loop feedback configuration. This study shows that a simple phenomenological model of the cochlea inspired by the work of Zweig [J. Acoust. Soc. Am. 138, 1102 (2015)] can account for recent data in mouse and gerbil. In this model, the active forces are regulated indirectly, through the effect of BM motion on the pressure field across the cochlear partition, rather than via direct coupling between active-force generation and BM vibration. The absence of strong vibration-amplification feedback in the cochlea also provides a compelling explanation for the observed intensity invariance of fine time structure in the BM response to acoustic clicks.
Affiliation(s)
- Christopher A. Shera
- Auditory Research Center, Caruso Department of Otolaryngology, University of Southern California, Los Angeles, California 90033, USA
- Department of Physics & Astronomy, University of Southern California, California 90089, USA
21. Parthasarathy A, Bartlett EL, Kujawa SG. Age-related Changes in Neural Coding of Envelope Cues: Peripheral Declines and Central Compensation. Neuroscience 2019; 407:21-31. DOI: 10.1016/j.neuroscience.2018.12.007.
22. Bharadwaj HM, Mai AR, Simpson JM, Choi I, Heinz MG, Shinn-Cunningham BG. Non-Invasive Assays of Cochlear Synaptopathy - Candidates and Considerations. Neuroscience 2019; 407:53-66. PMID: 30853540; DOI: 10.1016/j.neuroscience.2019.02.031.
Abstract
Studies in multiple species, including in post-mortem human tissue, have shown that normal aging and/or acoustic overexposure can lead to a significant loss of afferent synapses innervating the cochlea. Hypothetically, this cochlear synaptopathy can lead to perceptual deficits in challenging environments and can contribute to central neural effects such as tinnitus. However, because cochlear synaptopathy can occur without any measurable changes in audiometric thresholds, synaptopathy can remain hidden from standard clinical diagnostics. To understand the perceptual sequelae of synaptopathy and to evaluate the efficacy of emerging therapies, sensitive and specific non-invasive measures at the individual patient level need to be established. Pioneering experiments in specific mice strains have helped identify many candidate assays. These include auditory brainstem responses, the middle-ear muscle reflex, envelope-following responses, and extended high-frequency audiograms. Unfortunately, because these non-invasive measures can be also affected by extraneous factors other than synaptopathy, their application and interpretation in humans is not straightforward. Here, we systematically examine six extraneous factors through a series of interrelated human experiments aimed at understanding their effects. Using strategies that may help mitigate the effects of such extraneous factors, we then show that these suprathreshold physiological assays exhibit across-individual correlations with each other indicative of contributions from a common physiological source consistent with cochlear synaptopathy. Finally, we discuss the application of these assays to two key outstanding questions, and discuss some barriers that still remain. This article is part of a Special Issue entitled: Hearing Loss, Tinnitus, Hyperacusis, Central Gain.
Affiliation(s)
- Hari M Bharadwaj
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN; Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN
- Alexandra R Mai
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN
- Jennifer M Simpson
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN
- Inyong Choi
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA
- Michael G Heinz
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN; Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN
23. Grose JH, Buss E, Elmore H. Age-Related Changes in the Auditory Brainstem Response and Suprathreshold Processing of Temporal and Spectral Modulation. Trends Hear 2019; 23:2331216519839615. PMID: 30977442; PMCID: PMC6463337; DOI: 10.1177/2331216519839615.
Abstract
The purpose of this study was to determine whether cochlear synaptopathy can be shown to be a viable basis for age-related hearing difficulties in humans and whether it manifests as deficient suprathreshold processing of temporal and spectral modulation. Three experiments were undertaken evaluating the effects of age on (a) the auditory brainstem response as a function of level, (b) temporal modulation detection as a function of level and background noise, and (c) spectral modulation as a function of level. Across the three experiments, a total of 21 older listeners with near-normal audiograms and 29 young listeners with audiometrically normal hearing participated. The auditory brainstem response experiment demonstrated reduced Wave I amplitudes and concomitant reductions in the amplitude ratios of Wave I to Wave V in the older listener group. These findings were interpreted as consistent with an electrophysiological profile of cochlear synaptopathy. The temporal and spectral modulation detection experiments, however, provided no support for the hypothesis of compromised suprathreshold processing in these domains. This pattern of results suggests that even if cochlear synaptopathy can be shown to be a viable basis for age-related hearing difficulties, then temporal and spectral modulation detection paradigms are not sensitive to its presence.
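The wave I/V amplitude ratio referred to above can be computed from an averaged ABR trace by measuring each wave within a latency window. The windows, sampling rate, and the synthetic trace below are illustrative assumptions, not the study's measurement procedure.

```python
import numpy as np

def wave_amplitude(abr, fs, t_lo, t_hi):
    """Peak-to-trough amplitude of an ABR wave within a latency window (s)."""
    i_lo, i_hi = int(t_lo * fs), int(t_hi * fs)
    seg = abr[i_lo:i_hi]
    return seg.max() - seg.min()

fs = 20_000
t = np.arange(int(0.010 * fs)) / fs                        # 10-ms epoch
abr = (0.15e-6 * np.exp(-((t - 0.0017) / 0.0003) ** 2)     # toy wave I (~1.7 ms)
       + 0.40e-6 * np.exp(-((t - 0.0058) / 0.0005) ** 2))  # toy wave V (~5.8 ms)

wave_i = wave_amplitude(abr, fs, 0.0010, 0.0025)           # assumed wave I window
wave_v = wave_amplitude(abr, fs, 0.0050, 0.0070)           # assumed wave V window
print(f"Wave I/V amplitude ratio: {wave_i / wave_v:.2f}")
```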
Affiliation(s)
- John H. Grose
- Department of Otolaryngology – Head and Neck Surgery, University of North Carolina at Chapel Hill, NC, USA
- Emily Buss
- Department of Otolaryngology – Head and Neck Surgery, University of North Carolina at Chapel Hill, NC, USA
- Hollis Elmore
- Department of Otolaryngology – Head and Neck Surgery, University of North Carolina at Chapel Hill, NC, USA
24. Pieper I, Mauermann M, Oetting D, Kollmeier B, Ewert SD. Physiologically motivated individual loudness model for normal hearing and hearing impaired listeners. J Acoust Soc Am 2018; 144:917. PMID: 30180690; DOI: 10.1121/1.5050518.
Abstract
A loudness model with a central gain is suggested to improve individualized predictions of loudness scaling data from normal hearing and hearing impaired listeners. The current approach is based on the loudness model of Pieper et al. [(2016). J. Acoust. Soc. Am. 139, 2896], which simulated the nonlinear inner ear mechanics as a transmission-line model in a physically and physiologically plausible way. Individual hearing thresholds were simulated by a cochlear gain reduction in the transmission-line model and linear attenuation (damage of inner hair cells) prior to an internal threshold. This and similar approaches of current loudness models that characterize the individual hearing loss were shown to be insufficient to account for individual loudness perception, in particular at high stimulus levels close to the uncomfortable level. An additional parameter, termed "post gain," was introduced to improve upon the previous models. The post gain parameter amplifies the signal parts above the internal threshold and can better account for individual variations in the overall steepness of loudness functions and for variations in the uncomfortable level which are independent of the hearing loss. The post gain can be interpreted as a central gain occurring at higher stages as a result of peripheral deafferentation.
Collapse
Affiliation(s)
- Iko Pieper
- Medical Physics and Cluster of Excellence Hearing4All, Universität Oldenburg, Oldenburg, D-26111, Germany
| | - Manfred Mauermann
- Medical Physics and Cluster of Excellence Hearing4All, Universität Oldenburg, Oldenburg, D-26111, Germany
| | - Dirk Oetting
- HörTech gGmbH and Cluster of Excellence Hearing4all, Oldenburg, Germany
| | - Birger Kollmeier
- Medical Physics and Cluster of Excellence Hearing4All, Universität Oldenburg, Oldenburg, D-26111, Germany
| | - Stephan D Ewert
- Medical Physics and Cluster of Excellence Hearing4All, Universität Oldenburg, Oldenburg, D-26111, Germany
| |
Collapse
|
25
|
Felix RA, Gourévitch B, Portfors CV. Subcortical pathways: Towards a better understanding of auditory disorders. Hear Res 2018; 362:48-60. [PMID: 29395615 PMCID: PMC5911198 DOI: 10.1016/j.heares.2018.01.008] [Citation(s) in RCA: 39] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/28/2017] [Revised: 12/11/2017] [Accepted: 01/16/2018] [Indexed: 01/13/2023]
Abstract
Hearing loss is a significant problem that affects at least 15% of the population. Even this percentage is likely an underestimate, because a variety of auditory disorders are not identifiable through traditional tests of peripheral hearing ability. In these disorders, individuals have difficulty understanding speech, particularly in noisy environments, even though the sounds are loud enough to hear. The underlying mechanisms leading to such deficits are not well understood. To enable the development of suitable treatments to alleviate or prevent such disorders, the affected processing pathways must be identified. Historically, mechanisms underlying speech processing have been thought to be a property of the auditory cortex, and thus the study of auditory disorders has largely focused on cortical impairments and/or cognitive processes. As we review here, however, there is strong evidence that deficits in subcortical pathways in fact play a significant role in auditory disorders. In this review, we highlight the role of the auditory brainstem and midbrain in processing complex sounds and discuss how deficits in these regions may contribute to auditory dysfunction. We discuss current research with animal models of human hearing and then consider human studies implicating impairments in subcortical processing that may contribute to auditory disorders.
Collapse
Affiliation(s)
- Richard A Felix
- School of Biological Sciences and Integrative Physiology and Neuroscience, Washington State University, Vancouver, WA, USA
| | - Boris Gourévitch
- Unité de Génétique et Physiologie de l'Audition, UMRS 1120 INSERM, Institut Pasteur, Université Pierre et Marie Curie, F-75015, Paris, France; CNRS, France
| | - Christine V Portfors
- School of Biological Sciences and Integrative Physiology and Neuroscience, Washington State University, Vancouver, WA, USA.
| |
Collapse
|
26
|
Altoè A, Pulkki V, Verhulst S. The effects of the activation of the inner-hair-cell basolateral K+ channels on auditory nerve responses. Hear Res 2018; 364:68-80. [PMID: 29678326 DOI: 10.1016/j.heares.2018.03.029] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/13/2017] [Revised: 02/23/2018] [Accepted: 03/28/2018] [Indexed: 10/17/2022]
Abstract
The basolateral membrane of the mammalian inner hair cell (IHC) expresses large voltage- and Ca2+-gated outward K+ currents. To quantify how the voltage-dependent activation of the K+ channels affects the functionality of the auditory nerve innervating the IHC, this study adopts a model of mechanical-to-neural transduction in which the basolateral K+ conductances of the IHC can be made voltage-dependent or not. The model shows that the voltage-dependent activation of the K+ channels (i) enhances the phase-locking properties of the auditory fiber (AF) responses; (ii) enables the auditory nerve to encode a large dynamic range of sound levels; (iii) enables the AF responses to synchronize precisely with the envelope of amplitude-modulated stimuli; and (iv) is responsible for the steep offset responses of the AFs. These results suggest that the basolateral K+ channels play a major role in determining the well-known response properties of the AFs and challenge the classical view of the IHC membrane as an electrical low-pass filter. In contrast to previous models of the IHC-AF complex, this study ascribes many of the AF response properties to fairly basic mechanisms in the IHC membrane rather than to complex mechanisms in the synapse.
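The contrast between a fixed and a voltage-dependent basolateral K+ conductance can be sketched with a single-compartment membrane model, as below. This is only a schematic stand-in for the published IHC model; the Boltzmann activation parameters, conductances, and capacitance are assumed placeholder values.

    import numpy as np

    def ihc_membrane_response(g_transduction, fs, voltage_gated=True):
        """Single-compartment IHC membrane with a basolateral K+ conductance that is
        either fixed or voltage-gated (first-order Boltzmann activation)."""
        C = 10e-12                      # membrane capacitance (F), assumed
        g_leak = 3e-9                   # fixed leak conductance (S), assumed
        g_k_max = 30e-9                 # max voltage-gated K+ conductance (S), assumed
        g_k_fixed = 15e-9               # fixed K+ conductance for the non-gated case (S), assumed
        E_k, E_trans = -0.080, 0.0      # K+ and transduction reversal potentials (V)
        V_half, slope = -0.040, 0.008   # Boltzmann half-activation and slope (V), assumed
        tau_act = 0.3e-3                # activation time constant (s), assumed
        dt = 1.0 / fs
        V, n = -0.055, 0.0
        out = np.empty_like(g_transduction)
        for i, g_t in enumerate(g_transduction):
            n_inf = 1.0 / (1.0 + np.exp(-(V - V_half) / slope))
            n += dt * (n_inf - n) / tau_act
            g_k = g_k_max * n if voltage_gated else g_k_fixed
            dV = (g_t * (E_trans - V) + g_leak * (E_k - V) + g_k * (E_k - V)) / C
            V += dt * dV
            out[i] = V
        return out

    # Drive both configurations with a 1-kHz modulation of the transduction conductance.
    fs = 100e3
    t = np.arange(0.0, 0.02, 1.0 / fs)
    g_t = 2e-9 * (1.0 + 0.8 * np.sin(2 * np.pi * 1000.0 * t))
    for gated in (False, True):
        V = ihc_membrane_response(g_t, fs, voltage_gated=gated)
        ac_mv = 1e3 * (V[-500:].max() - V[-500:].min())
        print(f"voltage_gated={gated}: receptor-potential AC component = {ac_mv:.3f} mV")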
Collapse
Affiliation(s)
- Alessandro Altoè
- Department of Signal Processing and Acoustics, School of Electrical Engineering, Aalto University, P.O. Box 13000, FI-00076, Aalto, Finland.
| | - Ville Pulkki
- Department of Signal Processing and Acoustics, School of Electrical Engineering, Aalto University, P.O. Box 13000, FI-00076, Aalto, Finland
| | - Sarah Verhulst
- WAVES Department of Information Technology, Technologiepark 15, 9052, Zwijnaarde, Belgium
| |
Collapse
|
27
|
Computational modeling of the human auditory periphery: Auditory-nerve responses, evoked potentials and hearing loss. Hear Res 2018; 360:55-75. [DOI: 10.1016/j.heares.2017.12.018] [Citation(s) in RCA: 90] [Impact Index Per Article: 12.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/17/2017] [Revised: 12/17/2017] [Accepted: 12/23/2017] [Indexed: 11/21/2022]
|
28
|
Mehraei G, Gallardo AP, Shinn-Cunningham BG, Dau T. Auditory brainstem response latency in forward masking, a marker of sensory deficits in listeners with normal hearing thresholds. Hear Res 2017; 346:34-44. [PMID: 28159652 PMCID: PMC5402043 DOI: 10.1016/j.heares.2017.01.016] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/18/2016] [Revised: 01/19/2017] [Accepted: 01/25/2017] [Indexed: 12/17/2022]
Abstract
In rodent models, acoustic exposure too modest to elevate hearing thresholds can nonetheless cause auditory nerve fiber deafferentation, interfering with the coding of supra-threshold sound. Low-spontaneous-rate nerve fibers, important for encoding acoustic information at supra-threshold levels and in noise, are more susceptible to degeneration than high-spontaneous-rate fibers. The change in auditory brainstem response (ABR) wave-V latency with noise level has been shown to be associated with auditory nerve deafferentation. Here, we measured the ABR in a forward masking paradigm and evaluated wave-V latency changes with increasing masker-to-probe intervals. In the same listeners, behavioral forward masking detection thresholds were measured. We hypothesized that (1) auditory nerve fiber deafferentation increases both forward masking thresholds and wave-V latency, and (2) a preferential loss of low-spontaneous-rate fibers results in a faster recovery of wave-V latency as the slow contribution of these fibers is reduced. Results showed that in young audiometrically normal listeners, a larger change in wave-V latency with increasing masker-to-probe interval was related to a greater behavioral effect of a preceding masker. Further, the amount of wave-V latency change with masker-to-probe interval was positively correlated with the rate of change in forward masking detection thresholds. Although we cannot rule out central contributions, these findings are consistent with the hypothesis that auditory nerve fiber deafferentation occurs in humans and may predict how well individuals can hear in noisy environments.
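The across-listener analysis described above can be summarized, under simple assumptions, by fitting one regression slope per listener for the wave-V latency recovery and one for the behavioral forward-masking threshold recovery across masker-to-probe intervals, and then correlating the two slopes across listeners. The sketch below uses clearly labeled synthetic placeholder data purely to show the bookkeeping, not real measurements.

    import numpy as np

    mpi_ms = np.array([2.0, 4.0, 8.0, 16.0, 32.0])      # masker-to-probe intervals in ms (assumed values)
    rng = np.random.default_rng(0)
    n_listeners = 12

    # Synthetic placeholder data: rows are listeners, columns are MPIs.
    recovery_strength = rng.uniform(0.1, 0.4, n_listeners)
    wave_v_latency_ms = (6.0 - np.outer(recovery_strength, np.log2(mpi_ms) / 5.0)
                         + rng.normal(0.0, 0.02, (n_listeners, len(mpi_ms))))
    masked_threshold_db = (40.0 - np.outer(10.0 * recovery_strength, np.log2(mpi_ms))
                           + rng.normal(0.0, 0.5, (n_listeners, len(mpi_ms))))

    def slopes_vs_log_mpi(data):
        """Per-listener regression slope against log2(masker-to-probe interval)."""
        x = np.log2(mpi_ms)
        return np.array([np.polyfit(x, row, 1)[0] for row in data])

    latency_slopes = slopes_vs_log_mpi(wave_v_latency_ms)
    threshold_slopes = slopes_vs_log_mpi(masked_threshold_db)
    r = np.corrcoef(latency_slopes, threshold_slopes)[0, 1]
    print(f"Across-listener correlation of recovery slopes: r = {r:.2f}")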
Collapse
Affiliation(s)
- Golbarg Mehraei
- Program in Speech and Hearing Bioscience and Technology, Harvard University-Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, 02215, USA; Hearing Systems Group, Technical University of Denmark, Ørsteds Plads Building 352, 2800, Kongens Lyngby, Denmark.
| | - Andreu Paredes Gallardo
- Hearing Systems Group, Technical University of Denmark, Ørsteds Plads Building 352, 2800, Kongens Lyngby, Denmark
| | - Barbara G Shinn-Cunningham
- Program in Speech and Hearing Bioscience and Technology, Harvard University-Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, 02215, USA; Department of Biomedical Engineering, Boston University, Boston, MA, 02215, USA
| | - Torsten Dau
- Hearing Systems Group, Technical University of Denmark, Ørsteds Plads Building 352, 2800, Kongens Lyngby, Denmark
| |
Collapse
|
29
|
Raufer S, Verhulst S. Otoacoustic emission estimates of human basilar membrane impulse response duration and cochlear filter tuning. Hear Res 2016; 342:150-160. [DOI: 10.1016/j.heares.2016.10.016] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/26/2016] [Revised: 10/20/2016] [Accepted: 10/26/2016] [Indexed: 10/20/2022]
|
30
|
Verhulst S, Jagadeesh A, Mauermann M, Ernst F. Individual Differences in Auditory Brainstem Response Wave Characteristics: Relations to Different Aspects of Peripheral Hearing Loss. Trends Hear 2016; 20:2331216516672186. [PMID: 27837052 PMCID: PMC5117250 DOI: 10.1177/2331216516672186] [Citation(s) in RCA: 51] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2016] [Accepted: 09/08/2016] [Indexed: 11/20/2022] Open
Abstract
Little is known about how outer hair cell loss interacts with noise-induced and age-related auditory nerve degradation (i.e., cochlear synaptopathy) to affect auditory brainstem response (ABR) wave characteristics. Given that listeners with impaired audiograms likely suffer from mixtures of these hearing deficits, and that ABR amplitudes have successfully been used to isolate synaptopathy in listeners with normal audiograms, an improved understanding of how different hearing pathologies affect the ABR source generators will improve their sensitivity in hearing diagnostics. We employed a functional model for human ABRs in which different combinations of hearing deficits were simulated and show that high-frequency cochlear gain loss steepens the slope of the ABR Wave-V latency-versus-intensity and amplitude-versus-intensity curves. We propose that grouping listeners according to a ratio of these slope metrics (i.e., the ABR growth ratio) might offer a way to factor out the outer hair cell loss deficit and maximally relate individual differences for constant ratios to other peripheral hearing deficits such as cochlear synaptopathy. We compared the model predictions to recorded click-ABRs from 30 participants with normal or high-frequency sloping audiograms and confirmed the predicted relationship between the ABR latency growth curve and audiogram slope. Experimental ABR amplitude growth showed large individual differences and was compared with the Wave-I amplitude, the Wave-V/I ratio, or the interwave I-V latency in the same listeners. The model simulations, along with the ABR recordings, suggest that a hearing loss profile depicting the ABR growth ratio versus the Wave-I amplitude or Wave-V/I ratio might be able to differentiate outer hair cell deficits from cochlear synaptopathy in listeners with mixed pathologies.
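The "ABR growth ratio" summarized above can be illustrated as the slope of the wave-V latency-versus-intensity function divided by the slope of the wave-V amplitude-versus-intensity function, each estimated by linear regression across click levels. The sketch below uses synthetic placeholder values, and the exact slope definition in the paper may differ.

    import numpy as np

    def abr_growth_ratio(levels_db, wave_v_latency_ms, wave_v_amplitude_uv):
        """Ratio of the wave-V latency-vs-intensity slope to the amplitude-vs-intensity slope."""
        latency_slope = np.polyfit(levels_db, wave_v_latency_ms, 1)[0]      # ms per dB (typically negative)
        amplitude_slope = np.polyfit(levels_db, wave_v_amplitude_uv, 1)[0]  # uV per dB
        return latency_slope / amplitude_slope

    levels = np.array([70.0, 80.0, 90.0, 100.0])     # click levels in dB (assumed)
    latency = np.array([6.8, 6.3, 5.9, 5.6])         # placeholder wave-V latencies (ms)
    amplitude = np.array([0.25, 0.35, 0.48, 0.60])   # placeholder wave-V amplitudes (uV)
    print(f"ABR growth ratio: {abr_growth_ratio(levels, latency, amplitude):.1f} ms/uV")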
Collapse
Affiliation(s)
- Sarah Verhulst
- Cluster of Excellence Hearing4all and Medizinische Physik, Department of Medical Physics and Acoustics, Oldenburg University, Oldenburg, Germany
- Department of Information Technology, Ghent University, Technologiepark, Zwijnaarde, Belgium
| | - Anoop Jagadeesh
- Cluster of Excellence Hearing4all and Medizinische Physik, Department of Medical Physics and Acoustics, Oldenburg University, Oldenburg, Germany
| | - Manfred Mauermann
- Cluster of Excellence Hearing4all and Medizinische Physik, Department of Medical Physics and Acoustics, Oldenburg University, Oldenburg, Germany
| | - Frauke Ernst
- Cluster of Excellence Hearing4all and Medizinische Physik, Department of Medical Physics and Acoustics, Oldenburg University, Oldenburg, Germany
| |
Collapse
|
31
|
Saremi A, Beutelmann R, Dietz M, Ashida G, Kretzberg J, Verhulst S. A comparative study of seven human cochlear filter models. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2016; 140:1618. [PMID: 27914400 DOI: 10.1121/1.4960486] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Auditory models have been developed for decades to simulate characteristics of the human auditory system, but it is often unknown how well auditory models compare to each other or perform in tasks they were not primarily designed for. This study systematically analyzes the predictions of seven publicly available cochlear filter models in response to a fixed set of stimuli to assess their capability to reproduce key aspects of human cochlear mechanics. The following features were assessed at frequencies of 0.5, 1, 2, 4, and 8 kHz: cochlear excitation patterns, nonlinear response growth, frequency selectivity, group delays, signal-in-noise processing, and amplitude modulation representation. For each task, the simulations were compared to available physiological data recorded in guinea pigs and gerbils as well as to human psychoacoustic data. The presented results provide application-oriented users with comprehensive information on the advantages, limitations, and computational costs of these seven mainstream cochlear filter models.
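Two of the benchmarks listed above, frequency selectivity and group delay, can be estimated from any model's impulse response with a generic harness like the sketch below. A fourth-order gammatone serves only as a stand-in for "a cochlear filter model"; the seven published models expose different interfaces and parameters, which are not reproduced here.

    import numpy as np

    fs = 44100.0
    cf = 1000.0                                  # characteristic frequency (Hz)
    bw = 125.0                                   # gammatone bandwidth parameter (Hz), assumed
    t = np.arange(0.0, 0.064, 1.0 / fs)
    impulse_response = t ** 3 * np.exp(-2 * np.pi * bw * t) * np.cos(2 * np.pi * cf * t)

    H = np.fft.rfft(impulse_response)
    f = np.fft.rfftfreq(len(impulse_response), 1.0 / fs)
    power = np.abs(H) ** 2
    erb_hz = power.sum() * (f[1] - f[0]) / power.max()     # equivalent rectangular bandwidth
    q_erb = cf / erb_hz                                    # frequency-selectivity metric
    group_delay_s = -np.gradient(np.unwrap(np.angle(H)), 2 * np.pi * f)
    i_cf = int(np.argmin(np.abs(f - cf)))
    print(f"Q_ERB at {cf:.0f} Hz: {q_erb:.1f}; group delay at CF: {1e3 * group_delay_s[i_cf]:.2f} ms")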
Collapse
Affiliation(s)
- Amin Saremi
- Computational Neuroscience and Cluster of Excellence "Hearing4all," Department of Neuroscience, University of Oldenburg, Oldenburg, Germany
| | - Rainer Beutelmann
- Animal Physiology and Behavior and Cluster of Excellence "Hearing4all," Department of Neuroscience, University of Oldenburg, Oldenburg, Germany
| | - Mathias Dietz
- Medizinische Physik and Cluster of Excellence "Hearing4all," Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
| | - Go Ashida
- Computational Neuroscience and Cluster of Excellence "Hearing4all," Department of Neuroscience, University of Oldenburg, Oldenburg, Germany
| | - Jutta Kretzberg
- Computational Neuroscience and Cluster of Excellence "Hearing4all," Department of Neuroscience, University of Oldenburg, Oldenburg, Germany
| | - Sarah Verhulst
- Medizinische Physik and Cluster of Excellence "Hearing4all," Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
| |
Collapse
|
32
|
Pieper I, Mauermann M, Kollmeier B, Ewert SD. Physiologically motivated transmission-lines as front end for loudness models. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2016; 139:2896. [PMID: 27250182 DOI: 10.1121/1.4949540] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
The perception of loudness is strongly influenced by peripheral auditory processing, which calls for a physiologically correct peripheral auditory processing stage when constructing advanced loudness models. Most loudness models, however, follow a functional approach: a parallel auditory filter bank combined with a compression stage, followed by spectral and temporal integration. Such classical loudness models do not allow physiological measurements like otoacoustic emissions to be directly linked to properties of their auditory filterbank. This can, however, be achieved with physiologically motivated transmission-line models (TLMs) of the cochlea. Here, two active and nonlinear TLMs were tested as the peripheral front end of a loudness model. The TLMs are followed by a simple generic back end which performs integration of basilar-membrane "excitation" across place and time to yield a loudness estimate. The proposed model approach reaches performance similar to that of other state-of-the-art loudness models regarding the prediction of loudness in sones, equal-loudness contours (including spectral fine structure), and loudness as a function of bandwidth. The suggested model provides a powerful tool to directly connect objective measures of basilar membrane compression, such as distortion product otoacoustic emissions, with loudness in future studies.
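A generic back end of the kind described, integration of basilar-membrane "excitation" across place and time after compression, can be sketched as follows. The toy excitation pattern and the compression exponent are placeholder assumptions standing in for the transmission-line front end, which is not reproduced here.

    import numpy as np

    def loudness_back_end(excitation, alpha=0.3):
        """Compress a (place x time) excitation matrix, sum across cochlear place, average over time."""
        specific_loudness = np.maximum(excitation, 0.0) ** alpha   # compressive transform, exponent assumed
        per_frame_loudness = specific_loudness.sum(axis=0)         # integration across place
        return per_frame_loudness.mean()                           # simple temporal integration

    # Toy excitation pattern: 64 cochlear places x 200 time frames, peaked at one place.
    n_places, n_frames = 64, 200
    place = np.arange(n_places)[:, None]
    excitation = np.exp(-0.5 * ((place - 40) / 4.0) ** 2) * np.ones((1, n_frames))
    print(f"Toy loudness estimate: {loudness_back_end(excitation):.2f} (arbitrary units)")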
Collapse
Affiliation(s)
- Iko Pieper
- Medizinische Physik and Cluster of Excellence Hearing4All, Universität Oldenburg, D-26111 Oldenburg, Germany
| | - Manfred Mauermann
- Medizinische Physik and Cluster of Excellence Hearing4All, Universität Oldenburg, D-26111 Oldenburg, Germany
| | - Birger Kollmeier
- Medizinische Physik and Cluster of Excellence Hearing4All, Universität Oldenburg, D-26111 Oldenburg, Germany
| | - Stephan D Ewert
- Medizinische Physik and Cluster of Excellence Hearing4All, Universität Oldenburg, D-26111 Oldenburg, Germany
| |
Collapse
|
33
|
Paredes Gallardo A, Epp B, Dau T. Can place-specific cochlear dispersion be represented by auditory steady-state responses? Hear Res 2016; 335:76-82. [PMID: 26906677 DOI: 10.1016/j.heares.2016.02.014] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/11/2015] [Revised: 01/04/2016] [Accepted: 02/18/2016] [Indexed: 10/22/2022]
Abstract
The present study investigated to what extent properties of local cochlear dispersion can be objectively assessed through auditory steady-state responses (ASSR). The hypothesis was that stimuli compensating for the phase response at a particular cochlear location generate a maximally modulated basilar membrane (BM) response at that BM position, due to the large "within-channel" synchrony of activity. This would, in turn, lead to a larger ASSR amplitude than other stimuli of corresponding intensity and bandwidth. Two stimulus types were chosen: (1) harmonic tone complexes consisting of equal-amplitude tones with a starting phase following an algorithm developed by Schroeder [IEEE Trans. Inf. Theory 16, 85-89 (1970)], which have earlier been considered in behavioral studies to estimate human auditory-filter phase responses; and (2) simulations of auditory-filter impulse responses (IR). In both cases, the temporally reversed versions of the stimuli were also considered. The ASSRs obtained with the Schroeder tone complexes were found to be dominated by "across-channel" synchrony and thus do not reflect local place-specific information. In the case of the more frequency-specific stimuli, no significant differences were found between the responses to the IR and its temporally reversed counterpart. Thus, whereas ASSRs to narrowband stimuli have been used as an objective indicator of frequency-specific hearing sensitivity, the method does not seem to be sensitive enough to reflect local cochlear dispersion.
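For concreteness, the first stimulus type, an equal-amplitude harmonic complex with Schroeder-phase starting phases, can be generated as sketched below. The phase rule phi_n = +/- pi * n * (n + 1) / N is one common convention from the psychoacoustic literature; the exact rule, fundamental frequency, bandwidth, and level used in the study may differ.

    import numpy as np

    def schroeder_tone_complex(f0=100.0, n_harmonics=30, sign=+1, duration=0.5, fs=44100.0):
        """Equal-amplitude harmonic complex with Schroeder-phase starting phases."""
        t = np.arange(int(duration * fs)) / fs
        x = np.zeros_like(t)
        for n in range(1, n_harmonics + 1):
            phi = sign * np.pi * n * (n + 1) / n_harmonics   # one common Schroeder-phase convention
            x += np.cos(2 * np.pi * n * f0 * t + phi)
        return x / np.max(np.abs(x))                         # normalize to unit peak

    positive_schroeder = schroeder_tone_complex(sign=+1)
    negative_schroeder = schroeder_tone_complex(sign=-1)     # sign-reversed phase curvature
    print(positive_schroeder.shape,
          round(float(np.std(positive_schroeder)), 3),
          round(float(np.std(negative_schroeder)), 3))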
Collapse
Affiliation(s)
- Andreu Paredes Gallardo
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Ørsteds Plads Building 352, 2800 Kongens Lyngby, Denmark.
| | - Bastian Epp
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Ørsteds Plads Building 352, 2800 Kongens Lyngby, Denmark.
| | - Torsten Dau
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Ørsteds Plads Building 352, 2800 Kongens Lyngby, Denmark.
| |
Collapse
|