1. Li BZ, Poleg S, Ridenour M, Tollin D, Lei T, Klug A. Computational model for synthesizing auditory brainstem responses to assess neuronal alterations in aging and autistic animal models. bioRxiv 2025:2024.08.04.606499. [PMID: 39211118; PMCID: PMC11361117; DOI: 10.1101/2024.08.04.606499]
Abstract
Purpose: The auditory brainstem response (ABR) is a widely used objective electrophysiology measure for non-invasively assessing auditory function and neural activity in the auditory brainstem, but its ability to reflect detailed neuronal processing is limited due to the averaging nature of the electroencephalogram-type recordings. Method: This study addresses this limitation by developing a computational model of the auditory brainstem which is capable of synthesizing ABR traces based on a large, population-scale neural extrapolation of a spiking neuronal network of auditory brainstem circuitry. The model was able to recapitulate alterations in ABR waveform morphology that have been shown to be present in two medical conditions: animal models of autism and aging. Moreover, in both of these conditions, these ABR alterations are caused by known distinct changes in auditory brainstem physiology, and the model could recapitulate these changes. Results: In the autism model, the simulation revealed myelin deficits and hyperexcitability, which caused a decreased wave III amplitude and a prolonged wave III-V interval, consistent with experimentally recorded ABRs in Fmr1-KO mice. For the aging condition, the model recapitulated ABRs recorded in aged gerbils and indicated a reduction in activity in the medial nucleus of the trapezoid body (MNTB), a finding validated by confocal imaging data. Conclusion: These results demonstrate not only the model's accuracy but also its capability of linking features of ABR morphology to underlying neuronal properties and suggesting follow-up physiological experiments.
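As a rough illustration of how a population-level forward model can turn simulated spiking into an ABR-like trace, the sketch below convolves hypothetical population firing rates with an assumed unitary far-field waveform and sums the stages. All latencies, neuron counts, weights, and the damped-sinusoid kernel are placeholder assumptions, not parameters of the cited model.

```python
import numpy as np

fs = 100_000                                    # sampling rate (Hz)
t = np.arange(0, 0.01, 1 / fs)                  # 10 ms analysis window

def unitary_response(f=900.0, tau=0.5e-3):
    """Assumed per-spike far-field contribution: a damped sinusoid."""
    return np.exp(-t / tau) * np.sin(2 * np.pi * f * t)

def burst(latency, width=0.3e-3, rate=500.0):
    """Hypothetical instantaneous firing rate (spikes/s per neuron): a Gaussian burst."""
    return rate * np.exp(-0.5 * ((t - latency) / width) ** 2)

# (latency in s, number of neurons, relative weight) -- all placeholder values
stages = {
    "auditory nerve":   (1.5e-3, 10_000, 1.0),
    "cochlear nucleus": (2.5e-3, 5_000, 0.8),
    "SOC / MNTB":       (3.5e-3, 3_000, 0.6),
}

abr = np.zeros_like(t)
for latency, n_neurons, weight in stages.values():
    pop_rate = n_neurons * burst(latency)        # population firing rate (spikes/s)
    abr += weight * np.convolve(pop_rate, unitary_response(), mode="full")[:t.size] / fs

# `abr` is a synthetic trace (arbitrary units) whose waves follow the stage latencies.
```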
2. Polonenko MJ, Maddox RK. Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech. JASA Express Letters 2024; 4:114401. [PMID: 39504231; PMCID: PMC11558516; DOI: 10.1121/10.0034329]
Abstract
Deriving human neural responses to natural speech is now possible, but the responses to male- and female-uttered speech have been shown to differ. These talker differences may complicate interpretations or restrict experimental designs geared toward more realistic communication scenarios. This study found that when a male talker and a female talker had the same fundamental frequency, auditory brainstem responses (ABRs) were very similar. Those responses became smaller and later with increasing fundamental frequency, as did click ABRs with increasing stimulus rates. Modeled responses suggested that the speech and click ABR differences were reasonably predicted by peripheral and brainstem processing of stimulus acoustics.
Affiliation(s)
- Melissa J Polonenko
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Departments of Biomedical Engineering and Neuroscience, University of Rochester, Rochester, New York 14642, USA
- Ross K Maddox
- Departments of Biomedical Engineering and Neuroscience, University of Rochester, Rochester, New York 14642, USA
- Kresge Hearing Research Institute, Department of Otolaryngology Head and Neck Surgery, University of Michigan, Ann Arbor, Michigan 48109, USA
3. Kulasingham JP, Innes-Brown H, Enqvist M, Alickovic E. Level-Dependent Subcortical Electroencephalography Responses to Continuous Speech. eNeuro 2024; 11:ENEURO.0135-24.2024. [PMID: 39142822; DOI: 10.1523/eneuro.0135-24.2024]
Abstract
The auditory brainstem response (ABR) is a measure of subcortical activity in response to auditory stimuli. The wave V peak of the ABR depends on the stimulus intensity level, and has been widely used for clinical hearing assessment. Conventional methods estimate the ABR by averaging electroencephalography (EEG) responses to short, unnatural stimuli such as clicks. Recent work has moved toward more ecologically relevant continuous speech stimuli using linear deconvolution models called temporal response functions (TRFs). Investigating whether the TRF waveform changes with stimulus intensity is a crucial step toward the use of natural speech stimuli for hearing assessments involving subcortical responses. Here, we develop methods to estimate level-dependent subcortical TRFs using EEG data collected from 21 participants listening to continuous speech presented at 4 different intensity levels. We find that level-dependent changes can be detected in the wave V peak of the subcortical TRF for almost all participants, and are consistent with level-dependent changes in click-ABR wave V. We also investigate the most suitable peripheral auditory model to generate predictors for level-dependent subcortical TRFs and find that simple gammatone filterbanks perform the best. Additionally, around 6 min of data may be sufficient for detecting level-dependent effects and wave V peaks above the noise floor for speech segments with higher intensity. Finally, we show a proof-of-concept that level-dependent subcortical TRFs can be detected even for the inherent intensity fluctuations in natural continuous speech.
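A minimal sketch of the kind of gammatone-filterbank predictor the abstract refers to: each channel is a windowed gammatone impulse response (Glasberg and Moore ERB bandwidths), and the channel outputs are half-wave rectified and summed. Centre frequencies, filter order, and normalisation are assumptions for illustration, not the cited pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

def gammatone_ir(fc, fs, dur=0.05, order=4):
    """Impulse response of a gammatone filter (ERB bandwidth after Glasberg & Moore)."""
    t = np.arange(0, dur, 1 / fs)
    erb = 24.7 * (4.37 * fc / 1000 + 1)
    b = 1.019 * erb
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.sum(np.abs(g))                 # crude normalisation

def gammatone_predictor(speech, fs, cfs=None):
    """Half-wave-rectified, summed gammatone filterbank output as a TRF predictor."""
    if cfs is None:
        cfs = np.geomspace(125, 8000, 31)        # assumed centre frequencies
    out = np.zeros_like(speech)
    for fc in cfs:
        channel = fftconvolve(speech, gammatone_ir(fc, fs), mode="full")[: speech.size]
        out += np.maximum(channel, 0.0)          # half-wave rectification
    return out

fs = 16_000
speech = np.random.randn(fs * 2)                 # stand-in for a 2 s speech excerpt
predictor = gammatone_predictor(speech, fs)
```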
Affiliation(s)
- Joshua P Kulasingham
- Automatic Control, Department of Electrical Engineering, Linköping University, 581 83 Linköping, Sweden
- Hamish Innes-Brown
- Eriksholm Research Centre, DK-3070 Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark
- Martin Enqvist
- Automatic Control, Department of Electrical Engineering, Linköping University, 581 83 Linköping, Sweden
- Emina Alickovic
- Automatic Control, Department of Electrical Engineering, Linköping University, 581 83 Linköping, Sweden
- Eriksholm Research Centre, DK-3070 Snekkersten, Denmark
4. Gao T, Deng B, Wang J, Yi G. A linearized modeling framework for the frequency selectivity in neurons postsynaptic to vibration receptors. Cogn Neurodyn 2024; 18:2061-2075. [PMID: 39104690; PMCID: PMC11297856; DOI: 10.1007/s11571-024-10070-8]
Abstract
Vibration is an indispensable part of tactile perception, which is encoded to oscillatory synaptic currents by receptors and transferred to neurons in the brain. The A2 and B1 neurons in the Drosophila brain postsynaptic to the vibration receptors exhibit selective preferences for oscillatory synaptic currents with different frequencies, which is caused by the specific voltage-gated Na+ and K+ currents that both oppose the variations in membrane potential. To understand the peculiar role of the Na+ and K+ currents in shaping the filtering property of A2 and B1 neurons, we develop a linearized modeling framework that allows us to systematically change the activation properties of these ionic channels. A data-driven conductance-based biophysical model is used to reproduce the frequency filtering of oscillatory synaptic inputs. Then, this data-driven model is linearized at the resting potential and its frequency response is calculated based on the transfer function, which is described by the magnitude-frequency curve. When we regulate the activation properties of the Na+ and K+ channels by changing the biophysical parameters, the dominant pole of the transfer function is found to be highly correlated with the fluctuation of the active current, which represents the strength of suppression of slow voltage variation. Meanwhile, the dominant pole also shapes the magnitude-frequency curve and further qualitatively determines the filtering property of the model. The transfer function provides a parsimonious description of how the biophysical parameters in Na+ and K+ channels change the inhibition of slow variations in membrane potential by Na+ and K+ currents, and further illustrates the relationship between the filtering properties and the activation properties of Na+ and K+ channels. This computational framework with the data-driven conductance-based biophysical model and its linearized model contributes to understanding the transmission and filtering of vibration stimulus in the tactile system.
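To make the transfer-function idea concrete, here is a generic quasi-active membrane linearised about rest, with one lumped gating variable standing in for linearised channel kinetics. The conductances and time constant are arbitrary round numbers, and this is not the authors' fitted A2/B1 description; it only shows how a dominant pole and a magnitude-frequency curve are obtained from a linearised model.

```python
import numpy as np
from scipy import signal

# Generic quasi-active membrane linearised about rest (not the paper's fitted model):
#   C dV/dt = -gL*V - gw*w + I,   tau_w dw/dt = V - w
# where w lumps the linearised gating of a slow, K+-like current.
C, gL = 20e-12, 1e-9             # farads, siemens (assumed round numbers)
gw, tau_w = 3e-9, 2e-3           # phenomenological gating conductance and time constant

num = [tau_w, 1.0]                                # numerator of V(s)/I(s)
den = [C * tau_w, C + gL * tau_w, gL + gw]        # denominator of V(s)/I(s)
sys = signal.TransferFunction(num, den)

poles = np.roots(den)
dominant = poles[np.argmax(poles.real)]           # slowest-decaying (dominant) pole
print("poles (rad/s):", poles, "dominant:", dominant)

w = 2 * np.pi * np.logspace(0, 3, 200)            # 1 Hz .. 1 kHz
_, H = signal.freqresp(sys, w)
gain_db = 20 * np.log10(np.abs(H))                # magnitude-frequency curve (impedance)
```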
Affiliation(s)
- Tian Gao
- School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072 China
- Bin Deng
- School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072 China
- Jiang Wang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072 China
- Guosheng Yi
- School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072 China
5. Polonenko MJ, Maddox RK. Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech. bioRxiv 2024:2024.07.12.603125. [PMID: 39026858; PMCID: PMC11257598; DOI: 10.1101/2024.07.12.603125]
Abstract
Deriving human neural responses to natural speech is now possible, but the responses to male- and female-uttered speech have been shown to differ. These talker differences may complicate interpretations or restrict experimental designs geared toward more realistic communication scenarios. This study found that when a male and female talker had the same fundamental frequency, auditory brainstem responses (ABRs) were very similar. Those responses became smaller and later with increasing fundamental frequency, as did click ABRs with increasing stimulus rates. Modeled responses suggested that the speech and click ABR differences were reasonably predicted by peripheral and brainstem processing of stimulus acoustics.
Affiliation(s)
- Melissa J. Polonenko
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, 55455, USA
- Departments of Biomedical Engineering and Neuroscience, University of Rochester, Rochester, NY, 14642
- Ross K. Maddox
- Kresge Hearing Research Institute, Department of Otolaryngology – Head and Neck Surgery, University of Michigan, Ann Arbor, MI, 48109, USA
- Departments of Biomedical Engineering and Neuroscience, University of Rochester, Rochester, NY, 14642
6. Bolt E, Giroud N. Auditory Encoding of Natural Speech at Subcortical and Cortical Levels Is Not Indicative of Cognitive Decline. eNeuro 2024; 11:ENEURO.0545-23.2024. [PMID: 38658138; PMCID: PMC11082929; DOI: 10.1523/eneuro.0545-23.2024]
Abstract
More and more patients worldwide are diagnosed with dementia, which emphasizes the urgent need for early detection markers. In this study, we built on the auditory hypersensitivity theory of a previous study, which postulated that responses to auditory input in the subcortex as well as cortex are enhanced in cognitive decline, and examined auditory encoding of natural continuous speech at both neural levels for its indicative potential for cognitive decline. We recruited study participants aged 60 years and older, who were divided into two groups based on the Montreal Cognitive Assessment, one group with low scores (n = 19, participants with signs of cognitive decline) and a control group (n = 25). Participants completed an audiometric assessment and then we recorded their electroencephalography while they listened to an audiobook and click sounds. We derived temporal response functions and evoked potentials from the data and examined response amplitudes for their potential to predict cognitive decline, controlling for hearing ability and age. Contrary to our expectations, no evidence of auditory hypersensitivity was observed in participants with signs of cognitive decline; response amplitudes were comparable in both cognitive groups. Moreover, the combination of response amplitudes showed no predictive value for cognitive decline. These results challenge the proposed hypothesis and emphasize the need for further research to identify reliable auditory markers for the early detection of cognitive decline.
Affiliation(s)
- Elena Bolt
- Computational Neuroscience of Speech and Hearing, Department of Computational Linguistics, University of Zurich, Zurich 8050, Switzerland
- International Max Planck Research School on the Life Course (IMPRS LIFE), University of Zurich, Zurich 8050, Switzerland
- Nathalie Giroud
- Computational Neuroscience of Speech and Hearing, Department of Computational Linguistics, University of Zurich, Zurich 8050, Switzerland
- International Max Planck Research School on the Life Course (IMPRS LIFE), University of Zurich, Zurich 8050, Switzerland
- Language & Medicine Centre Zurich, Competence Centre of Medical Faculty and Faculty of Arts and Sciences, University of Zurich, Zurich 8050, Switzerland
7. Kulasingham JP, Bachmann FL, Eskelund K, Enqvist M, Innes-Brown H, Alickovic E. Predictors for estimating subcortical EEG responses to continuous speech. PLoS One 2024; 19:e0297826. [PMID: 38330068; PMCID: PMC10852227; DOI: 10.1371/journal.pone.0297826]
Abstract
Perception of sounds and speech involves structures in the auditory brainstem that rapidly process ongoing auditory stimuli. The role of these structures in speech processing can be investigated by measuring their electrical activity using scalp-mounted electrodes. However, typical analysis methods involve averaging neural responses to many short repetitive stimuli that bear little relevance to daily listening environments. Recently, subcortical responses to more ecologically relevant continuous speech were detected using linear encoding models. These methods estimate the temporal response function (TRF), which is a regression model that minimises the error between the measured neural signal and a predictor derived from the stimulus. Using predictors that model the highly non-linear peripheral auditory system may improve linear TRF estimation accuracy and peak detection. Here, we compare predictors from both simple and complex peripheral auditory models for estimating brainstem TRFs on electroencephalography (EEG) data from 24 participants listening to continuous speech. We also investigate the data length required for estimating subcortical TRFs, and find that around 12 minutes of data is sufficient for clear wave V peaks (>3 dB SNR) to be seen in nearly all participants. Interestingly, predictors derived from simple filterbank-based models of the peripheral auditory system yield TRF wave V peak SNRs that are not significantly different from those estimated using a complex model of the auditory nerve, provided that the nonlinear effects of adaptation in the auditory system are appropriately modelled. Crucially, computing predictors from these simpler models is more than 50 times faster compared to the complex model. This work paves the way for efficient modelling and detection of subcortical processing of continuous speech, which may lead to improved diagnosis metrics for hearing impairment and assistive hearing technology.
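A compact sketch of the TRF idea described above: the EEG is regressed onto time-lagged copies of a stimulus-derived predictor with ridge regularisation. The sampling rate, lag window, regularisation strength, and the synthetic signals are placeholders; real subcortical pipelines use higher sampling rates and much more data.

```python
import numpy as np

def estimate_trf(predictor, eeg, fs, tmin=-0.005, tmax=0.015, lam=1e2):
    """Ridge-regression TRF: eeg ~ lagged(predictor) @ trf (a minimal sketch)."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    X = np.zeros((eeg.size, lags.size))
    for j, lag in enumerate(lags):
        X[:, j] = np.roll(predictor, lag)        # circular shift; acceptable for long signals
    XtX = X.T @ X + lam * np.eye(lags.size)      # ridge regularisation
    trf = np.linalg.solve(XtX, X.T @ eeg)
    return lags / fs, trf

fs = 2_000                                       # toy rate; brainstem work uses ~10 kHz
predictor = np.abs(np.random.randn(fs * 60))     # stand-in rectified-speech predictor (60 s)
kernel = np.exp(-np.arange(20) / 5.0)            # fake brainstem response kernel
eeg = np.convolve(predictor, kernel, mode="same") + 5 * np.random.randn(predictor.size)
times, trf = estimate_trf(predictor, eeg, fs)    # in real data, wave V appears near ~7 ms
```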
Affiliation(s)
- Joshua P. Kulasingham
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Martin Enqvist
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Hamish Innes-Brown
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Emina Alickovic
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Eriksholm Research Centre, Snekkersten, Denmark
8. Shan T, Cappelloni MS, Maddox RK. Subcortical responses to music and speech are alike while cortical responses diverge. Sci Rep 2024; 14:789. [PMID: 38191488; PMCID: PMC10774448; DOI: 10.1038/s41598-023-50438-0]
Abstract
Music and speech are encountered daily and are unique to human beings. Both are transformed by the auditory pathway from an initial acoustical encoding to higher level cognition. Studies of cortex have revealed distinct brain responses to music and speech, but differences may emerge in the cortex or may be inherited from different subcortical encoding. In the first part of this study, we derived the human auditory brainstem response (ABR), a measure of subcortical encoding, to recorded music and speech using two analysis methods. The first method, described previously and acoustically based, yielded very different ABRs between the two sound classes. The second method, however, developed here and based on a physiological model of the auditory periphery, gave highly correlated responses to music and speech. We determined the superiority of the second method through several metrics, suggesting there is no appreciable impact of stimulus class (i.e., music vs speech) on the way stimulus acoustics are encoded subcortically. In this study's second part, we considered the cortex. Our new analysis method resulted in cortical music and speech responses becoming more similar but with remaining differences. The subcortical and cortical results taken together suggest that there is evidence for stimulus-class dependent processing of music and speech at the cortical but not subcortical level.
Affiliation(s)
- Tong Shan
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
- Madeline S Cappelloni
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
- Ross K Maddox
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA.
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA.
- Center for Visual Science, University of Rochester, Rochester, NY, USA.
- Department of Neuroscience, University of Rochester, Rochester, NY, USA.
9. Bachmann FL, Kulasingham JP, Eskelund K, Enqvist M, Alickovic E, Innes-Brown H. Extending Subcortical EEG Responses to Continuous Speech to the Sound-Field. Trends Hear 2024; 28:23312165241246596. [PMID: 38738341; PMCID: PMC11092544; DOI: 10.1177/23312165241246596]
Abstract
The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment, which is conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, brainstem responses to continuous speech presented via earphones have been recently detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses to continuous speech presented in the sound-field, and assess the amount of data needed to estimate brainstem TRFs. Electroencephalography (EEG) was recorded from 24 normal hearing participants while they listened to clicks and stories presented via earphones and loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrated that subcortical responses to continuous speech could be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data was sufficient for the TRFs of all participants to show clear wave V peaks for both earphones and sound-field stimuli. Subcortical TRFs to continuous speech were highly consistent in both earphone and sound-field conditions, and with click ABRs. However, sound-field TRFs required slightly more data (16 minutes) to achieve clear wave V peaks compared to earphone TRFs (12 minutes), possibly due to effects of room acoustics. By investigating subcortical responses to sound-field speech stimuli, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.
Affiliation(s)
- Joshua P. Kulasingham
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Martin Enqvist
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Emina Alickovic
- Eriksholm Research Centre, Snekkersten, Denmark
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Hamish Innes-Brown
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
10. Spirou GA, Kersting M, Carr S, Razzaq B, Yamamoto Alves Pinto C, Dawson M, Ellisman MH, Manis PB. High-resolution volumetric imaging constrains compartmental models to explore synaptic integration and temporal processing by cochlear nucleus globular bushy cells. eLife 2023; 12:e83393. [PMID: 37288824; PMCID: PMC10435236; DOI: 10.7554/elife.83393]
Abstract
Globular bushy cells (GBCs) of the cochlear nucleus play central roles in the temporal processing of sound. Despite investigation over many decades, fundamental questions remain about their dendrite structure, afferent innervation, and integration of synaptic inputs. Here, we use volume electron microscopy (EM) of the mouse cochlear nucleus to construct synaptic maps that precisely specify convergence ratios and synaptic weights for auditory nerve innervation and accurate surface areas of all postsynaptic compartments. Detailed biophysically based compartmental models can help develop hypotheses regarding how GBCs integrate inputs to yield their recorded responses to sound. We established a pipeline to export a precise reconstruction of auditory nerve axons and their endbulb terminals together with high-resolution dendrite, soma, and axon reconstructions into biophysically detailed compartmental models that could be activated by a standard cochlear transduction model. With these constraints, the models predict auditory nerve input profiles whereby all endbulbs onto a GBC are subthreshold (coincidence detection mode), or one or two inputs are suprathreshold (mixed mode). The models also predict the relative importance of dendrite geometry, soma size, and axon initial segment length in setting action potential threshold and generating heterogeneity in sound-evoked responses, and thereby propose mechanisms by which GBCs may homeostatically adjust their excitability. Volume EM also reveals new dendritic structures and dendrites that lack innervation. This framework defines a pathway from subcellular morphology to synaptic connectivity, and facilitates investigation into the roles of specific cellular features in sound encoding. We also clarify the need for new experimental measurements to provide missing cellular parameters, and predict responses to sound for further in vivo studies, thereby serving as a template for investigation of other neuron classes.
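The coincidence-detection versus mixed-mode distinction can be illustrated with a toy leaky integrator receiving several convergent endbulb-like inputs, each individually subthreshold. The membrane time constant, EPSP size, and threshold are assumed values, not those derived from the reconstructions or the biophysical models in the paper.

```python
import numpy as np

fs = 100_000
dt, dur = 1 / fs, 0.02
t = np.arange(0, dur, dt)

tau_m = 1e-3                       # fast, bushy-cell-like membrane time constant (s) -- assumed
v_rest, v_thresh = -65.0, -55.0    # mV
epsp_amp = 4.0                     # depolarisation per endbulb spike (mV) -- assumed subthreshold

def run_gbc(endbulb_spike_times):
    """Leaky integrator receiving instantaneous EPSPs from several endbulb inputs."""
    all_spikes = np.sort(np.concatenate([np.asarray(s, float) for s in endbulb_spike_times]))
    v = np.full(t.size, v_rest)
    out = []
    for i in range(1, t.size):
        v[i] = v[i - 1] + (v_rest - v[i - 1]) * dt / tau_m          # passive leak toward rest
        n_arrivals = np.sum((all_spikes >= t[i - 1]) & (all_spikes < t[i]))
        v[i] += epsp_amp * n_arrivals                               # EPSPs from arriving spikes
        if v[i] >= v_thresh:
            out.append(t[i]); v[i] = v_rest                         # fire and reset
    return v, out

# Coincidence-detection mode: one subthreshold endbulb alone stays silent, while three
# near-coincident subthreshold endbulbs drive an output spike.
_, alone = run_gbc([[5e-3]])                                # -> []
_, together = run_gbc([[5e-3], [5.05e-3], [5.10e-3]])       # -> one spike near 5.1 ms
```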
Affiliation(s)
- George A Spirou
- Department of Medical Engineering, University of South Florida, Tampa, United States
- Matthew Kersting
- Department of Medical Engineering, University of South Florida, Tampa, United States
- Sean Carr
- Department of Medical Engineering, University of South Florida, Tampa, United States
- Bayan Razzaq
- Department of Otolaryngology, Head and Neck Surgery, West Virginia University, Morgantown, United States
- Mariah Dawson
- Department of Otolaryngology, Head and Neck Surgery, West Virginia University, Morgantown, United States
- Mark H Ellisman
- Department of Neurosciences, University of California, San Diego, San Diego, United States
- National Center for Microscopy and Imaging Research, University of California, San Diego, San Diego, United States
- Paul B Manis
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, United States
- Department of Cell Biology and Physiology, University of North Carolina, Chapel Hill, United States
11. Stoll TJ, Maddox RK. Enhanced Place Specificity of the Parallel Auditory Brainstem Response: A Modeling Study. Trends Hear 2023; 27:23312165231205719. [PMID: 37807857; PMCID: PMC10563492; DOI: 10.1177/23312165231205719]
Abstract
While each place on the cochlea is most sensitive to a specific frequency, it will generally respond to a sufficiently high-level stimulus over a wide range of frequencies. This spread of excitation can introduce errors in clinical threshold estimation during a diagnostic auditory brainstem response (ABR) exam. Off-frequency cochlear excitation can be mitigated through the addition of masking noise to the test stimuli, but introducing a masker increases the already long test times of the typical ABR exam. Our lab has recently developed the parallel ABR (pABR) paradigm to speed up test times by utilizing randomized stimulus timing to estimate the thresholds for multiple frequencies simultaneously. There is reason to believe parallel presentation of multiple frequencies provides masking effects and improves place specificity while decreasing test times. Here, we use two computational models of the auditory periphery to characterize the predicted effect of parallel presentation on place specificity in the auditory nerve. We additionally examine the effect of stimulus rate and level. Both models show the pABR is at least as place specific as standard methods, with an improvement in place specificity for parallel presentation (vs. serial) at high levels, especially at high stimulus rates. When simulating hearing impairment in one of the models, place specificity was also improved near threshold. Rather than a tradeoff, this improved place specificity would represent a secondary benefit to the pABR's faster test times.
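A sketch of the randomized-timing, parallel-presentation idea behind the pABR: independent Poisson-like pip trains are generated for several test frequencies and summed into one stimulus, keeping each pulse train for later deconvolution of the EEG. Pip shape, rates, and frequencies here are illustrative assumptions, not the published stimulus parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48_000
dur = 1.0                                   # seconds of stimulus
freqs = [500, 1000, 2000, 4000, 8000]       # test frequencies (Hz)
rate = 40                                   # mean pips per second per frequency -- assumed

def toneburst(freq, fs, n_cycles=5):
    """Windowed toneburst used as the elementary pip (assumed 5-cycle Blackman pip)."""
    n = int(round(n_cycles * fs / freq))
    tt = np.arange(n) / fs
    return np.blackman(n) * np.sin(2 * np.pi * freq * tt)

stimulus = np.zeros(int(dur * fs))
pulse_trains = {}                           # kept for later deconvolution of the EEG
for f in freqs:
    n_pips = rng.poisson(rate * dur)
    onsets = np.sort(rng.uniform(0, dur, n_pips))       # randomized (Poisson-like) timing
    train = np.zeros_like(stimulus)
    pip = toneburst(f, fs)
    for onset in onsets:
        i0 = int(onset * fs)
        seg = stimulus[i0:i0 + pip.size]
        seg += pip[: seg.size]              # all frequencies presented in parallel
        train[i0] = 1.0
    pulse_trains[f] = train
# Per-frequency responses are later recovered by deconvolving the EEG with each pulse train.
```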
Affiliation(s)
- Thomas J. Stoll
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
- Ross K. Maddox
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
- Department of Neuroscience, University of Rochester, Rochester, NY, USA
12. Subbulakshmi Radhakrishnan S, Chakrabarti S, Sen D, Das M, Schranghamer TF, Sebastian A, Das S. A Sparse and Spike-Timing-Based Adaptive Photoencoder for Augmenting Machine Vision for Spiking Neural Networks. Adv Mater 2022; 34:e2202535. [PMID: 35674268; DOI: 10.1002/adma.202202535]
Abstract
The representation of external stimuli in the form of action potentials or spikes constitutes the basis of energy-efficient neural computation that emerging spiking neural networks (SNNs) aspire to imitate. With recent evidence suggesting that information in the brain is more often represented by explicit firing times of the neurons rather than mean firing rates, it is imperative to develop novel hardware that can accelerate sparse and spike-timing-based encoding. Here, a medium-scale integrated circuit for spike-timing-based encoding of visual information is introduced, composed of two cascaded three-stage inverters and one XOR logic gate fabricated using a total of 21 memtransistors based on photosensitive 2D monolayer MoS2. It is shown that different illumination intensities can be encoded into sparse spiking with time-to-first-spike representing the illumination information, that is, higher intensities invoke earlier spikes and vice versa. In addition, non-volatile and analog programmability in the photoencoder is exploited for adaptive photoencoding that allows expedited spiking under scotopic (low-light) and deferred spiking under photopic (bright-light) conditions, respectively. Finally, low energy expenditure of less than 1 µJ by the 2D-memtransistor-based photoencoder highlights the benefits of in-sensor and bioinspired design that can be transformative for the acceleration of SNNs.
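The time-to-first-spike rule the abstract describes (brighter input, earlier spike) can be sketched with a simple integrate-to-threshold encoder; the threshold, gain, and encoding window are arbitrary illustrative values, and the device-level memtransistor behaviour is not modelled.

```python
import numpy as np

def time_to_first_spike(intensity, threshold=1.0, gain=0.5, t_max=0.1):
    """Integrate-to-threshold TTFS encoder: the input charges an internal state at a
    rate proportional to intensity; the first threshold crossing is the code."""
    if intensity <= 0:
        return None                          # no spike within the encoding window
    t_spike = threshold / (gain * intensity)
    return t_spike if t_spike <= t_max else None

for lux in [10, 100, 1000]:
    print(lux, "->", time_to_first_spike(lux))
# Brighter inputs cross threshold sooner and therefore spike earlier, matching the
# time-to-first-spike code described in the abstract; dimmer inputs spike later or not at all.
```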
Affiliation(s)
- Shakya Chakrabarti
- Electrical Engineering and Computer Science, Penn State University, University Park, PA, 16802, USA
- Dipanjan Sen
- Engineering Science and Mechanics, Penn State University, University Park, PA, 16802, USA
- Mayukh Das
- Engineering Science and Mechanics, Penn State University, University Park, PA, 16802, USA
- Thomas F Schranghamer
- Engineering Science and Mechanics, Penn State University, University Park, PA, 16802, USA
- Amritanand Sebastian
- Engineering Science and Mechanics, Penn State University, University Park, PA, 16802, USA
- Saptarshi Das
- Engineering Science and Mechanics, Penn State University, University Park, PA, 16802, USA
- Electrical Engineering and Computer Science, Penn State University, University Park, PA, 16802, USA
- Materials Science and Engineering, Penn State University, University Park, PA, 16802, USA
- Materials Research Institute, Penn State University, University Park, PA, 16802, USA
13. Osses Vecchi A, Varnet L, Carney LH, Dau T, Bruce IC, Verhulst S, Majdak P. A comparative study of eight human auditory models of monaural processing. Acta Acustica 2022; 6:17. [PMID: 36325461; PMCID: PMC9625898; DOI: 10.1051/aacus/2022008]
Abstract
A number of auditory models have been developed using diverging approaches, either physiological or perceptual, but they share comparable stages of signal processing, as they are inspired by the same constitutive parts of the auditory system. We compare eight monaural models that are openly accessible in the Auditory Modelling Toolbox. We discuss the considerations required to make the model outputs comparable to each other, as well as the results for the following model processing stages or their equivalents: Outer and middle ear, cochlear filter bank, inner hair cell, auditory nerve synapse, cochlear nucleus, and inferior colliculus. The discussion includes a list of recommendations for future applications of auditory models.
Affiliation(s)
- Alejandro Osses Vecchi
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École Normale Supérieure, PSL University, CNRS, 75005 Paris, France
- Léo Varnet
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École Normale Supérieure, PSL University, CNRS, 75005 Paris, France
- Laurel H. Carney
- Departments of Biomedical Engineering and Neuroscience, University of Rochester, Rochester, NY 14642, USA
- Torsten Dau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark
- Ian C. Bruce
- Department of Electrical and Computer Engineering, McMaster University, Hamilton, ON L8S 4K1, Canada
- Sarah Verhulst
- Hearing Technology group, WAVES, Department of Information Technology, Ghent University, 9000 Ghent, Belgium
- Piotr Majdak
- Acoustics Research Institute, Austrian Academy of Sciences, 1040 Vienna, Austria
14. Li BZ, Pun SH, Vai MI, Lei TC, Klug A. Predicting the Influence of Axon Myelination on Sound Localization Precision Using a Spiking Neural Network Model of Auditory Brainstem. Front Neurosci 2022; 16:840983. [PMID: 35360169; PMCID: PMC8964079; DOI: 10.3389/fnins.2022.840983]
Abstract
Spatial hearing allows animals to rapidly detect and localize auditory events in the surrounding environment. The auditory brainstem plays a central role in processing and extracting binaural spatial cues through microsecond-precise binaural integration, especially for detecting interaural time differences (ITDs) of low-frequency sounds at the medial superior olive (MSO). A series of mechanisms exist in the underlying neural circuits for preserving accurate action potential timing across multiple fibers, synapses and nuclei along this pathway. One of these is the myelination of afferent fibers that ensures reliable and temporally precise action potential propagation in the axon. There are several reports of fine-tuned myelination patterns in the MSO circuit, but how specifically myelination influences the precision of sound localization remains incompletely understood. Here we present a spiking neural network (SNN) model of the Mongolian gerbil auditory brainstem with myelinated axons to investigate whether different axon myelination thicknesses alter the sound localization process. Our model demonstrates that axon myelin thickness along the contralateral pathways can substantially modulate ITD detection. Furthermore, optimal ITD sensitivity is reached when the MSO receives contralateral inhibition via thicker myelinated axons compared to contralateral excitation, a result that is consistent with previously reported experimental observations. Our results suggest specific roles of axon myelination for extracting temporal dynamics in ITD decoding, especially in the pathway of the contralateral inhibition.
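As a back-of-envelope illustration of why myelin thickness matters here, the sketch below uses the rough rule of thumb that conduction velocity in myelinated axons grows approximately linearly with outer fibre diameter. The scaling constant, path lengths, and diameters are assumptions, not the paper's multi-compartment axon model; the point is only that a thicker sheath on the inhibitory branch can let it arrive first.

```python
# Rough, assumed textbook scaling: velocity ~ linear in outer fibre diameter.
V_PER_UM = 5.5          # m/s per micron of outer diameter -- assumed rule of thumb

def conduction_delay(length_mm, axon_diam_um, myelin_um):
    """Delay over a myelinated axon segment under the linear velocity-diameter scaling."""
    outer_diam = axon_diam_um + 2 * myelin_um
    velocity = V_PER_UM * outer_diam          # m/s
    return (length_mm * 1e-3) / velocity      # seconds

# Contralateral excitatory versus contralateral inhibitory pathway to the MSO:
# a thicker myelin sheath on the inhibitory branch lets it arrive earlier even
# over a longer path, the configuration the model finds optimal for ITD sensitivity.
excitatory = conduction_delay(length_mm=4.0, axon_diam_um=2.0, myelin_um=0.3)
inhibitory = conduction_delay(length_mm=5.0, axon_diam_um=2.0, myelin_um=0.9)
print(f"excitation {excitatory * 1e6:.0f} us, inhibition {inhibitory * 1e6:.0f} us")
```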
Affiliation(s)
- Ben-Zheng Li
- Department of Physiology and Biophysics, University of Colorado Anschutz Medical Campus, Aurora, CO, United States
- Department of Electrical Engineering, University of Colorado, Denver, Denver, CO, United States
- State Key Laboratory of Analog and Mixed Signal Very-Large-Scale Integration (VLSI), University of Macau, Taipa, Macau SAR, China
- Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, Macau SAR, China
- Sio Hang Pun
- State Key Laboratory of Analog and Mixed Signal Very-Large-Scale Integration (VLSI), University of Macau, Taipa, Macau SAR, China
- Mang I. Vai
- State Key Laboratory of Analog and Mixed Signal Very-Large-Scale Integration (VLSI), University of Macau, Taipa, Macau SAR, China
- Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, Macau SAR, China
- Tim C. Lei
- Department of Physiology and Biophysics, University of Colorado Anschutz Medical Campus, Aurora, CO, United States
- Department of Electrical Engineering, University of Colorado, Denver, Denver, CO, United States
- Achim Klug
- Department of Physiology and Biophysics, University of Colorado Anschutz Medical Campus, Aurora, CO, United States
15. Guest DR, Oxenham AJ. Human discrimination and modeling of high-frequency complex tones shed light on the neural codes for pitch. PLoS Comput Biol 2022; 18:e1009889. [PMID: 35239639; PMCID: PMC8923464; DOI: 10.1371/journal.pcbi.1009889]
Abstract
Accurate pitch perception of harmonic complex tones is widely believed to rely on temporal fine structure information conveyed by the precise phase-locked responses of auditory-nerve fibers. However, accurate pitch perception remains possible even when spectrally resolved harmonics are presented at frequencies beyond the putative limits of neural phase locking, and it is unclear whether residual temporal information, or a coarser rate-place code, underlies this ability. We addressed this question by measuring human pitch discrimination at low and high frequencies for harmonic complex tones, presented either in isolation or in the presence of concurrent complex-tone maskers. We found that concurrent complex-tone maskers impaired performance at both low and high frequencies, although the impairment introduced by adding maskers at high frequencies relative to low frequencies differed between the tested masker types. We then combined simulated auditory-nerve responses to our stimuli with ideal-observer analysis to quantify the extent to which performance was limited by peripheral factors. We found that the worsening of both frequency discrimination and F0 discrimination at high frequencies could be well accounted for (in relative terms) by optimal decoding of all available information at the level of the auditory nerve. A Python package is provided to reproduce these results, and to simulate responses to acoustic stimuli from the three previously published models of the human auditory nerve used in our analyses.
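A minimal sketch of a rate-place ideal observer of the kind the abstract mentions, assuming independent Poisson-spiking fibres so that per-fibre sensitivities combine as a root sum of squares. The stand-in rates are random placeholders; the published analysis decodes far richer simulated auditory-nerve responses.

```python
import numpy as np

def rate_place_dprime(rates_base, rates_shifted, dur=0.1):
    """Ideal-observer sensitivity for a small frequency/F0 increment, assuming independent
    Poisson fibres: d'_i = delta_count_i / sqrt(count_i), combined as sqrt(sum d'_i ** 2)."""
    counts = np.asarray(rates_base) * dur
    d_counts = (np.asarray(rates_shifted) - np.asarray(rates_base)) * dur
    with np.errstate(divide="ignore", invalid="ignore"):
        d_i = np.where(counts > 0, d_counts / np.sqrt(counts), 0.0)
    return np.sqrt(np.sum(d_i ** 2))

# Stand-in for simulated auditory-nerve rates at a baseline F0 and a slightly higher F0.
rng = np.random.default_rng(1)
base = rng.uniform(20, 200, size=500)                      # 500 fibres, spikes/s
shifted = base * (1 + 0.02 * rng.standard_normal(500))     # small per-fibre rate changes
print("combined d' =", rate_place_dprime(base, shifted))   # d' = 1 ~ discrimination threshold
```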
Affiliation(s)
- Daniel R. Guest
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States of America
- Andrew J. Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States of America
16. Li BZ, Pun SH, Vai MI, Klug A, Lei TC. Axonal Conduction Delay Shapes the Precision of the Spatial Hearing in A Spiking Neural Network Model of Auditory Brainstem. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:4238-4241. [PMID: 34892159; DOI: 10.1109/embc46164.2021.9629932]
Abstract
One method by which the mammalian sound localization pathway localizes sound sources is by analyzing the microsecond-level difference between the arrival times of a sound at the two ears. However, how the neural circuits in the auditory brainstem precisely integrate signals from the two ears, and what the underlying mechanisms are, remains to be understood. Recent studies have reported that variations of axon myelination in the auditory brainstem produces various axonal conduction velocities and sophisticated temporal dynamics, which have not been well characterized in most existing models of sound localization circuits. Here, we present a spiking neural network model of the auditory brainstem to investigate how axon myelinations affect the precision of sound localization. Sound waves with different interaural time differences (ITDs) are encoded and used as stimuli, and the axon properties in the network are adjusted, and the corresponding axonal conduction delays are computed with a multi-compartment axon model. Through the simulation, the sensitivity of ITD perception varies with the myelin thickness of axons in the contralateral input pathways to the medial superior olive (MSO). The ITD perception becomes more precise when the contralateral inhibitory input propagates faster than the contralateral excitatory input. These results indicate that axon myelination and contralateral spike timing influence spatial hearing perception.
17. Mathematical framework for place coding in the auditory system. PLoS Comput Biol 2021; 17:e1009251. [PMID: 34339409; PMCID: PMC8360601; DOI: 10.1371/journal.pcbi.1009251]
Abstract
In the auditory system, tonotopy is postulated to be the substrate for a place code, where sound frequency is encoded by the location of the neurons that fire during the stimulus. Though conceptually simple, the computations that allow for the representation of intensity and complex sounds are poorly understood. Here, a mathematical framework is developed in order to define clearly the conditions that support a place code. To accommodate both frequency and intensity information, the neural network is described as a space with elements that represent individual neurons and clusters of neurons. A mapping is then constructed from acoustic space to neural space so that frequency and intensity are encoded, respectively, by the location and size of the clusters. Algebraic operations (addition and multiplication) are derived to elucidate the rules for representing, assembling, and modulating multi-frequency sound in networks. The resulting outcomes of these operations are consistent with network simulations as well as with electrophysiological and psychophysical data. The analyses show how both frequency and intensity can be encoded with a purely place code, without the need for rate or temporal coding schemes. The algebraic operations are used to describe loudness summation and suggest a mechanism for the critical band. The mathematical approach complements experimental and computational approaches and provides a foundation for interpreting data and constructing models.
18. Subbulakshmi Radhakrishnan S, Sebastian A, Oberoi A, Das S, Das S. A biomimetic neural encoder for spiking neural network. Nat Commun 2021; 12:2143. [PMID: 33837210; PMCID: PMC8035177; DOI: 10.1038/s41467-021-22332-8]
Abstract
Spiking neural networks (SNNs) promise to bridge the gap between artificial neural networks (ANNs) and biological neural networks (BNNs) by exploiting biologically plausible neurons that offer faster inference, lower energy expenditure, and event-driven information processing capabilities. However, implementation of SNNs in future neuromorphic hardware requires hardware encoders analogous to the sensory neurons, which convert external/internal stimuli into spike trains based on a specific neural algorithm along with inherent stochasticity. Unfortunately, conventional solid-state transducers are inadequate for this purpose, necessitating the development of neural encoders to serve the growing need of neuromorphic computing. Here, we demonstrate a biomimetic device based on a dual gated MoS2 field effect transistor (FET) capable of encoding analog signals into stochastic spike trains following various neural encoding algorithms such as rate-based encoding, spike timing-based encoding, and spike count-based encoding. Two important aspects of neural encoding, namely, dynamic range and encoding precision are also captured in our demonstration. Furthermore, the encoding energy was found to be as frugal as ≈1-5 pJ/spike. Finally, we show fast (≈200 timesteps) encoding of the MNIST data set using our biomimetic device followed by more than 91% accurate inference using a trained SNN.
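Of the encoding schemes listed in the abstract, rate-based stochastic encoding is the simplest to sketch: the per-bin spike probability tracks the normalised analog amplitude. The maximum rate and normalisation are assumptions; the device physics of the MoS2 encoder is not represented.

```python
import numpy as np

def poisson_rate_encode(analog, fs, max_rate=200.0, rng=None):
    """Stochastic rate encoder: per-timestep spike probability proportional to the
    normalised analog amplitude (a sketch of rate-based encoding, not the device model)."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.clip(analog, 0, None)
    x = x / (x.max() + 1e-12)                    # normalise to [0, 1]
    p_spike = max_rate * x / fs                  # per-bin spike probability
    return (rng.random(x.size) < p_spike).astype(int)

fs = 1000
t = np.arange(0, 1, 1 / fs)
analog = 0.5 * (1 + np.sin(2 * np.pi * 2 * t))   # slowly varying analog input
spikes = poisson_rate_encode(analog, fs)
print("spike count:", spikes.sum())              # higher amplitudes -> denser spiking
```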
Affiliation(s)
- Amritanand Sebastian
- Department of Engineering Science and Mechanics, Pennsylvania State University, University Park, PA, USA
- Aaryan Oberoi
- Department of Engineering Science and Mechanics, Pennsylvania State University, University Park, PA, USA
- Sarbashis Das
- Department of Electrical Engineering, Pennsylvania State University, University Park, PA, USA
- Saptarshi Das
- Department of Engineering Science and Mechanics, Pennsylvania State University, University Park, PA, USA.
- Department of Materials Science and Engineering, Pennsylvania State University, University Park, PA, USA.
- Materials Research Institute, Pennsylvania State University, University Park, PA, USA.
19. Polonenko MJ, Maddox RK. Exposing distinct subcortical components of the auditory brainstem response evoked by continuous naturalistic speech. eLife 2021; 10:62329. [PMID: 33594974; PMCID: PMC7946424; DOI: 10.7554/elife.62329]
Abstract
Speech processing is built upon encoding by the auditory nerve and brainstem, yet we know very little about how these processes unfold in specific subcortical structures. These structures are deep and respond quickly, making them difficult to study during ongoing speech. Recent techniques have begun to address this problem, but yield temporally broad responses with consequently ambiguous neural origins. Here, we describe a method that pairs re-synthesized ‘peaky’ speech with deconvolution analysis of electroencephalography recordings. We show that in adults with normal hearing the method quickly yields robust responses whose component waves reflect activity from distinct subcortical structures spanning auditory nerve to rostral brainstem. We further demonstrate the versatility of peaky speech by simultaneously measuring bilateral and ear-specific responses across different frequency bands and discuss the important practical considerations such as talker choice. The peaky speech method holds promise as a tool for investigating speech encoding and processing, and for clinical applications.
Affiliation(s)
- Melissa J Polonenko
- Department of Neuroscience, University of Rochester, Rochester, United States
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, United States
- Center for Visual Science, University of Rochester, Rochester, United States
- Ross K Maddox
- Department of Neuroscience, University of Rochester, Rochester, United States
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, United States
- Center for Visual Science, University of Rochester, Rochester, United States
- Department of Biomedical Engineering, University of Rochester, Rochester, United States
20. Koert E, Kuenzel T. Small dendritic synapses enhance temporal coding in a model of cochlear nucleus bushy cells. J Neurophysiol 2021; 125:915-937. [PMID: 33471627; DOI: 10.1152/jn.00331.2020]
Abstract
Spherical bushy cells (SBCs) in the anteroventral cochlear nucleus receive a single or very few powerful axosomatic inputs from the auditory nerve. However, SBCs are also contacted by small regular bouton synapses of the auditory nerve, located in their dendritic tree. The function of these small inputs is unknown. It was speculated that the interaction of axosomatic inputs with small dendritic inputs improved temporal precision, but direct evidence for this is missing. In a compartment model of spherical bushy cells with a stylized or realistic three-dimensional (3-D) representation of the bushy dendrite, we explored this hypothesis. Phase-locked dendritic inputs caused both tonic depolarization and a modulation of the model SBC membrane potential at the frequency of the stimulus. For plausible model parameters, dendritic inputs were subthreshold. Instead, the tonic depolarization increased the excitability of the SBC model and the modulation of the membrane potential caused a phase-dependent increase in the efficacy of the main axosomatic input. This improved response rate and entrainment for low-input frequencies and temporal precision of output at and above the characteristic frequency. A careful exploration of morphological and biophysical parameters of the bushy dendrite suggested a functional explanation for the peculiar shape of the bushy dendrite. Our model for the first time directly implied a role for the small excitatory dendritic inputs in auditory processing: they modulate the efficacy of the main input and are thus a plausible mechanism for the improvement of temporal precision and fidelity in these central auditory neurons. NEW & NOTEWORTHY: We modeled dendritic inputs from the auditory nerve that spherical bushy cells of the cochlear nucleus receive. Dendritic inputs caused both tonic depolarization and modulation of the membrane potential at the input frequency. This improved the rate, entrainment, and temporal precision of output action potentials. Our simulations suggest a role for small dendritic inputs in auditory processing: they modulate the efficacy of the main input supporting temporal precision and fidelity in these central auditory neurons.
Affiliation(s)
- Elisabeth Koert
- Auditory Neurophysiology Group, Department of Chemosensation, RWTH Aachen University, Aachen, Germany
- Thomas Kuenzel
- Auditory Neurophysiology Group, Department of Chemosensation, RWTH Aachen University, Aachen, Germany
21. Saiz-Alía M, Reichenbach T. Computational modeling of the auditory brainstem response to continuous speech. J Neural Eng 2020; 17:036035. [DOI: 10.1088/1741-2552/ab970d]
22. Lenk C, Ekinci A, Rangelow IW, Gutschmidt S. Active, artificial hair cells for biomimetic sound detection based on active cantilever technology. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018:4488-4491. [PMID: 30441348; DOI: 10.1109/embc.2018.8513210]
Abstract
We aim at building and studying artificial hair cells (AHC) based on MEMS technology to understand the extraordinary sound perception of the human ear and build a sensor system with similar properties. These perception properties, i.e., detecting sound pressure levels spanning six orders of magnitude while simultaneously resolving frequency differences of only 3-5 Hz, are obtained mainly due to the sophisticated biological sensors in the inner ear, called hair cells, which convert the acoustic waves into electric signals. They amplify weak inputs and compress larger ones, known as compressive nonlinearity, thus enabling this impressive dynamic range, typically not captured by current engineering solutions. We tackle this demand by building artificial hair cells on the basis of smart, self-actuated and self-sensing mechanical resonator beams with suitable actuation feedback. Thereby, we take advantage of the fact that the compressive nonlinearity arises naturally in dynamical systems tuned to a bifurcation point. This tuning is achieved by an appropriate feedback loop inspired by physiological models. Initial results on the detection properties of a single AHC will be shown, demonstrating amplification and a decreased width of the resonance peak.
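The compressive nonlinearity at a bifurcation point can be illustrated with the Hopf normal form driven at resonance, where the steady-state amplitude follows a cube-root law at the critical point. This is a generic sketch of the principle, not the cantilever feedback law used in the paper.

```python
import numpy as np

def hopf_steady_amplitude(F, mu=0.0):
    """Steady-state response amplitude R of the Hopf normal form driven at resonance,
    from mu*R - R**3 + F = 0 (stable, real, positive root). At the bifurcation point
    (mu = 0) this gives R = F**(1/3): weak inputs are amplified relative to a linear
    system while strong inputs are compressed."""
    roots = np.roots([1.0, 0.0, -mu, -F])        # R**3 - mu*R - F = 0
    real = roots[np.isclose(roots.imag, 0)].real
    return real[real > 0].max()

for F in [1e-6, 1e-4, 1e-2, 1.0]:
    print(f"F = {F:8.1e}  ->  R = {hopf_steady_amplitude(F):.4f}  (F**(1/3) = {F ** (1/3):.4f})")
# Six orders of magnitude of input map onto only two orders of response amplitude,
# the compressive behaviour the authors exploit for a wide dynamic range.
```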
23. Guerreiro J, Reid A, Jackson JC, Windmill JFC. Active Hearing Mechanisms Inspire Adaptive Amplification in an Acoustic Sensor System. IEEE Trans Biomed Circuits Syst 2018; 12:655-664. [PMID: 29877828; DOI: 10.1109/tbcas.2018.2827461]
Abstract
Over many millions of years of evolution, nature has developed some of the most adaptable sensors and sensory systems possible, capable of sensing, conditioning and processing signals in a very power- and size-effective manner. By looking into biological sensors and systems as a source of inspiration, this paper presents the study of a bioinspired concept of signal processing at the sensor level. By exploiting a feedback control mechanism between a front-end acoustic receiver and back-end neuronal based computation, a nonlinear amplification with hysteretic behavior is created. Moreover, the transient response of the front-end acoustic receiver can also be controlled and enhanced. A theoretical model is proposed and the concept is prototyped experimentally through an embedded system setup that can provide dynamic adaptations of a sensory system comprising a MEMS microphone placed in a closed-loop feedback system. It faithfully mimics the mosquito's active hearing response as a function of the input sound intensity. This is an adaptive acoustic sensor system concept that can be exploited by sensor and system designers within acoustics and ultrasonic engineering fields.
24. Encke J, Hemmert W. Extraction of Inter-Aural Time Differences Using a Spiking Neuron Network Model of the Medial Superior Olive. Front Neurosci 2018; 12:140. [PMID: 29559886; PMCID: PMC5845713; DOI: 10.3389/fnins.2018.00140]
Abstract
The mammalian auditory system is able to extract temporal and spectral features from sound signals at the two ears. Important cues for localization of low-frequency sound sources in the horizontal plane are inter-aural time differences (ITDs), which are first analyzed in the medial superior olive (MSO) in the brainstem. Neural recordings of ITD tuning curves at various stages along the auditory pathway suggest that ITDs in the mammalian brainstem are not represented in the form of a Jeffress-type place code. An alternative is the hemispheric opponent-channel code, according to which ITDs are encoded as the difference in the responses of the MSO nuclei in the two hemispheres. In this study, we present a physiologically-plausible, spiking neuron network model of the mammalian MSO circuit and apply two different methods of extracting ITDs from arbitrary sound signals. The network model is driven by a functional model of the auditory periphery and physiological models of the cochlear nucleus and the MSO. Using a linear opponent-channel decoder, we show that the network is able to detect changes in ITD with a precision down to 10 μs and that the sensitivity of the decoder depends on the slope of the ITD-rate functions. A second approach uses an artificial neuronal network to predict ITDs directly from the spiking output of the MSO and ANF model. Using this predictor, we show that the MSO-network is able to reliably encode static and time-dependent ITDs over a large frequency range, also for complex signals like speech.
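A minimal sketch of a linear opponent-channel decoder: two hemispheric ITD-rate functions (here assumed sigmoids rather than the model's simulated curves) are subtracted, and the rate difference is mapped back to ITD by inverting the calibration curve. The slopes, maximum rates, and noise level are illustrative assumptions.

```python
import numpy as np

def mso_rate(itd_us, preferred_sign, max_rate=150.0, slope_us=200.0):
    """Hemispheric ITD-rate function, assumed sigmoidal for this sketch."""
    return max_rate / (1.0 + np.exp(-preferred_sign * itd_us / slope_us))

itd_axis = np.linspace(-500, 500, 1001)                           # calibration ITDs (us)
diff_curve = mso_rate(itd_axis, +1) - mso_rate(itd_axis, -1)      # opponent-channel code

def decode_itd(rate_left, rate_right):
    """Map the left-right rate difference back to ITD via the monotonic calibration curve."""
    return np.interp(rate_left - rate_right, diff_curve, itd_axis)

# Example: responses to a 50 us ITD with a little additive rate noise.
rng = np.random.default_rng(2)
true_itd = 50.0
rate_l = mso_rate(true_itd, +1) + rng.normal(0, 3)
rate_r = mso_rate(true_itd, -1) + rng.normal(0, 3)
print("decoded ITD ~", round(decode_itd(rate_l, rate_r), 1), "us")
# Steeper ITD-rate slopes make the same rate noise translate into a smaller ITD error,
# matching the abstract's point that decoder sensitivity depends on the slope.
```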
Collapse
Affiliation(s)
- Jörg Encke
- Bioanaloge-Informationsverarbeitung, Department of Electrical and Computer Engineering, Technical University Munich, Munich, Germany
| | - Werner Hemmert
- Bioanaloge-Informationsverarbeitung, Department of Electrical and Computer Engineering, Technical University Munich, Munich, Germany
| |
Collapse
|
25
|
Manis PB, Campagnola L. A biophysical modelling platform of the cochlear nucleus and other auditory circuits: From channels to networks. Hear Res 2017; 360:76-91. [PMID: 29331233 DOI: 10.1016/j.heares.2017.12.017] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/26/2017] [Revised: 11/27/2017] [Accepted: 12/23/2017] [Indexed: 12/12/2022]
Abstract
Models of the auditory brainstem have been an invaluable tool for testing hypotheses about auditory information processing and for highlighting the most important gaps in the experimental literature. Because of the complexity of the auditory brainstem, and indeed of most brain circuits, the dynamic behavior of the system may be difficult to predict without a detailed, biologically realistic computational model. Despite the sensitivity of models to their exact construction and parameters, most prior models of the cochlear nucleus have incorporated only a small subset of the known biological properties. This confounds the interpretation of modelling results and also limits the potential future uses of these models, which require a large effort to develop. To address these issues, we have developed a general-purpose, biophysically detailed model of the cochlear nucleus, for use both in testing hypotheses about cochlear nucleus function and as an input to models of downstream auditory nuclei. The model implements conductance-based Hodgkin-Huxley representations of cells using a Python-based interface to the NEURON simulator. Our model incorporates most of the quantitatively characterized intrinsic cell properties, synaptic properties, and connectivity available in the literature, and also aims to reproduce the known response properties of the canonical cochlear nucleus cell types. Although we currently lack the empirical data to completely constrain this model, our intent is for the model to continue to incorporate new experimental results as they become available.
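As a point of reference for the modelling approach described in the abstract, the following is a minimal, self-contained NEURON-in-Python example of a single conductance-based Hodgkin-Huxley section driven by a current step. It only illustrates the general simulator interface, not the authors' cochlear nucleus cell models; the geometry and stimulus values are arbitrary.

```python
# Minimal NEURON-in-Python sketch of one conductance-based HH compartment.
from neuron import h
h.load_file("stdrun.hoc")

soma = h.Section(name="soma")
soma.L = soma.diam = 20.0          # um, hypothetical geometry
soma.insert("hh")                  # standard Hodgkin-Huxley channels

stim = h.IClamp(soma(0.5))         # brief depolarizing current step
stim.delay, stim.dur, stim.amp = 5.0, 20.0, 0.1   # ms, ms, nA

t = h.Vector().record(h._ref_t)
v = h.Vector().record(soma(0.5)._ref_v)

h.finitialize(-65.0)
h.continuerun(40.0)
print(f"peak membrane potential: {max(v):.1f} mV")
```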
Collapse
Affiliation(s)
- Paul B Manis
- Dept. of Otolaryngology/Head and Neck Surgery, B027 Marsico Hall, 125 Mason Farm Road, UNC Chapel Hill, Chapel Hill, NC 27599-7070, USA.
| | - Luke Campagnola
- Dept. of Otolaryngology/Head and Neck Surgery, B027 Marsico Hall, 125 Mason Farm Road, UNC Chapel Hill, Chapel Hill, NC 27599-7070, USA
| |
Collapse
|
26
|
Encke J, Kreh J, Völk F, Hemmert W. [Conversion of sound into auditory nerve action potentials]. HNO 2016; 64:808-814. [PMID: 27785535 DOI: 10.1007/s00106-016-0258-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Outer hair cells play a major role in the hearing process: they amplify the motion of the basilar membrane up to 1000-fold and at the same time sharpen the excitation patterns. These patterns are converted by inner hair cells into action potentials of the auditory nerve. Outer hair cells are delicate structures and are easily damaged, e.g., by overexposure to noise. Hearing aids can amplify the excitation patterns, but they cannot restore their degraded frequency selectivity. Noise overexposure also leads to delayed degeneration of auditory nerve fibers, particularly those with a low spontaneous rate, which are important for the coding of sound in noise. However, this loss cannot be diagnosed by pure-tone audiometry.
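The roughly 1000-fold (about 60 dB) gain supplied by the outer hair cells is level dependent: quiet sounds are amplified strongly, loud sounds hardly at all. The toy input-output function below illustrates this compressive behavior; the knee point, compression ratio, and maximum gain are hypothetical and not taken from the article.

```python
def cochlear_gain_db(level_db, max_gain_db=60.0, knee_db=30.0, ratio=3.0):
    """Toy compressive gain: full gain (~60 dB, i.e. ~1000-fold in amplitude)
    below the knee; above the knee each 3 dB of extra input yields only 1 dB
    of extra output (all values are illustrative)."""
    if level_db <= knee_db:
        return max_gain_db
    return max(max_gain_db - (level_db - knee_db) * (1.0 - 1.0 / ratio), 0.0)

for level in (20, 50, 80):
    print(f"{level} dB SPL input -> {cochlear_gain_db(level):.1f} dB of gain")
```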
Collapse
Affiliation(s)
- J Encke
- Bioanaloge Informationsverarbeitung, Zentralinstitut für Medizintechnik, Technische Universität München, Boltzmannstr. 11, 85748, Garching, Deutschland
| | - J Kreh
- Bioanaloge Informationsverarbeitung, Zentralinstitut für Medizintechnik, Technische Universität München, Boltzmannstr. 11, 85748, Garching, Deutschland
| | - F Völk
- Bioanaloge Informationsverarbeitung, Zentralinstitut für Medizintechnik, Technische Universität München, Boltzmannstr. 11, 85748, Garching, Deutschland
| | - W Hemmert
- Bioanaloge Informationsverarbeitung, Zentralinstitut für Medizintechnik, Technische Universität München, Boltzmannstr. 11, 85748, Garching, Deutschland.
| |
Collapse
|
27
|
Weiss RS, Voss A, Hemmert W. Optogenetic stimulation of the cochlea-A review of mechanisms, measurements, and first models. NETWORK (BRISTOL, ENGLAND) 2016; 27:212-236. [PMID: 27644125 DOI: 10.1080/0954898x.2016.1224944] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
This review evaluates the potential of optogenetic methods for stimulation of the auditory nerve and assesses the feasibility of optogenetic cochlear implants (CIs). It provides an overview of all critical steps: opsin targeting strategies, how opsins work, how their function can be modeled and incorporated into neuronal models, and the properties of light sources available for optical stimulation. From these foundations, quantitative estimates for the number of independent stimulation channels and the temporal precision of optogenetic stimulation of the auditory nerve are derived and compared with state-of-the-art electrical CIs. We conclude that optogenetic CIs have the potential to increase the number of independent stimulation channels by up to one order of magnitude, to about 100, but only if light sources are able to deliver confined illumination patterns independently and in parallel. Opsin variants such as ChETA and Chronos already enable driving of the auditory nerve at rates of up to 200 spikes/s, close to the physiological value of the maximum sustained firing rate. In addition to requiring 10 times more energy than electrical stimulation, optical CIs still face major hurdles, for example concerning the safety of gene transfection and optrode array implantation, before becoming an option to replace electrical CIs.
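Opsin photocurrents of the kind discussed in the review are often approximated with simple kinetic schemes. The sketch below uses a generic two-state (closed/open) model driven by a light pulse; the rate constants, conductance, and holding potential are placeholders, not ChETA or Chronos parameters from the review.

```python
import numpy as np

def two_state_opsin_current(light, dt=1e-4, k_on=200.0, k_off=100.0,
                            g_max=1.0, e_rev=0.0, v_hold=-65.0):
    """Generic two-state opsin sketch: light opens channels at rate k_on*light,
    channels close at rate k_off; photocurrent is g_max * open * (V - E_rev).
    All parameters are illustrative, not measured opsin kinetics."""
    open_frac = 0.0
    current = np.empty_like(light, dtype=float)
    for i, intensity in enumerate(light):
        d_open = (k_on * intensity * (1.0 - open_frac) - k_off * open_frac) * dt
        open_frac += d_open
        current[i] = g_max * open_frac * (v_hold - e_rev)   # inward (negative) current
    return current

# 5 ms light pulse in a 20 ms window, 0.1 ms time step.
t = np.arange(0.0, 0.020, 1e-4)
light = ((t >= 0.005) & (t < 0.010)).astype(float)
i_photo = two_state_opsin_current(light)
print(f"peak photocurrent magnitude: {np.abs(i_photo).max():.3f} (arbitrary units)")
```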
Collapse
Affiliation(s)
- Robin S Weiss
- Bio-Inspired Information Processing, Faculty of Electrical and Computer Engineering, Technical University of Munich, Garching, Germany
| | - Andrej Voss
- Bio-Inspired Information Processing, Faculty of Electrical and Computer Engineering, Technical University of Munich, Garching, Germany
| | - Werner Hemmert
- Bio-Inspired Information Processing, Faculty of Electrical and Computer Engineering, Technical University of Munich, Garching, Germany
| |
Collapse
|
28
|
Moezzi B, Iannella N, McDonnell MD. Ion channel noise can explain firing correlation in auditory nerves. J Comput Neurosci 2016; 41:193-206. [DOI: 10.1007/s10827-016-0613-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2016] [Revised: 06/18/2016] [Accepted: 06/22/2016] [Indexed: 01/13/2023]
|
29
|
|