1
Bakay WMH, Cervantes B, Lao-Rodríguez AB, Johannesen PT, Lopez-Poveda EA, Furness DN, Malmierca MS. How 'hidden hearing loss' noise exposure affects neural coding in the inferior colliculus of rats. Hear Res 2024; 443:108963. [PMID: 38308936] [DOI: 10.1016/j.heares.2024.108963]
Abstract
Exposure to brief, intense sound can produce profound changes in the auditory system, from the internal structure of inner hair cells to reduced synaptic connections between auditory nerve fibres and the inner hair cells. Moreover, noisy environments can also lead to alterations in the auditory nerve or to processing changes in the auditory midbrain, all without affecting hearing thresholds. This so-called hidden hearing loss (HHL) has been shown in tinnitus patients and has been posited to account for hearing difficulties in noisy environments. However, much of the neuronal research thus far has investigated how HHL affects the response characteristics of individual fibres in the auditory nerve, rather than higher stations in the auditory pathway. Models of human hearing show that the auditory nerve encodes sound stochastically. A sufficient reduction in nerve fibres could therefore lower the sampling of the acoustic scene below the minimum rate necessary to encode it fully, reducing the efficacy of sound encoding. Here, we examine how HHL affects the responses of neurons in the inferior colliculus of rats to frequency and intensity, and the duration and firing rate of those responses. Finally, we examine how shorter stimuli are encoded less effectively by the auditory midbrain than longer stimuli, and how this could lead to a clinical test for HHL.
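The stochastic-sampling argument (fewer surviving fibres means a sparser sample of the acoustic scene) can be illustrated with a toy Monte Carlo sketch. This illustrates the sampling principle only, not the authors' recordings or model; all rates, durations and counts are arbitrary illustration choices.

```python
import random
import statistics

def pooled_rate_sd(n_fibers, drive_rate=100.0, dur=0.05, dt=0.001,
                   trials=100, seed=0):
    """Spread (SD, spikes/s) of the pooled firing-rate estimate of a
    constant stimulus drive, pooled across n_fibers independent
    Poisson-like fibers. All parameter values are arbitrary."""
    rng = random.Random(seed)
    p = drive_rate * dt              # spike probability per bin per fiber
    n_bins = int(dur / dt)
    estimates = []
    for _ in range(trials):
        # count spikes across all fibers and time bins for one trial
        spikes = sum(1 for _ in range(n_bins * n_fibers) if rng.random() < p)
        estimates.append(spikes / (n_fibers * dur))   # pooled rate, spikes/s
    return statistics.pstdev(estimates)
```

Cutting the population from 100 to 10 fibres makes the pooled estimate roughly sqrt(10) ≈ 3 times noisier, which is the sense in which deafferentation lowers the effective sampling rate of the acoustic scene.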
Affiliation(s)
- Warren M H Bakay
- Cognitive and Auditory Neuroscience Laboratory, Institute of Neuroscience of Castilla y León (INCYL), University of Salamanca, Spain; Institute for Biomedical Research of Salamanca, Salamanca, Spain
- Blanca Cervantes
- Cognitive and Auditory Neuroscience Laboratory, Institute of Neuroscience of Castilla y León (INCYL), University of Salamanca, Spain; School of Medicine, University Anáhuac Puebla, Mexico
- Ana B Lao-Rodríguez
- Cognitive and Auditory Neuroscience Laboratory, Institute of Neuroscience of Castilla y León (INCYL), University of Salamanca, Spain; Institute for Biomedical Research of Salamanca, Salamanca, Spain
- Peter T Johannesen
- Institute of Neuroscience of Castilla y León (INCYL), University of Salamanca, Spain; Institute for Biomedical Research of Salamanca, Salamanca, Spain
- Enrique A Lopez-Poveda
- Institute of Neuroscience of Castilla y León (INCYL), University of Salamanca, Spain; Institute for Biomedical Research of Salamanca, Salamanca, Spain; Department of Surgery, Faculty of Medicine, University of Salamanca, Spain
- David N Furness
- School of Life Sciences, Keele University, Keele, United Kingdom
- Manuel S Malmierca
- Cognitive and Auditory Neuroscience Laboratory, Institute of Neuroscience of Castilla y León (INCYL), University of Salamanca, Spain; Institute for Biomedical Research of Salamanca, Salamanca, Spain; Department of Biology and Pathology, Faculty of Medicine, University of Salamanca, Spain.
2
Fumero MJ, Marrufo-Pérez MI, Eustaquio-Martín A, Lopez-Poveda EA. Factors that can affect divided speech intelligibility. Hear Res 2024; 441:108917. [PMID: 38061268] [DOI: 10.1016/j.heares.2023.108917]
Abstract
Previous studies have shown that in challenging listening situations, people find it hard to divide their attention equally between two simultaneous talkers and tend to favor one talker over the other. The aim here was to investigate whether talker onset/offset, sex and location determine the favored talker. Fifteen people with normal hearing were asked to recognize as many words as possible from two sentences uttered by two talkers located at −45° and +45° azimuth, respectively. The sentences were from the same corpus, were time-centered and had equal sound level. In Conditions 1 and 2, the talkers had different sexes (male at +45°), sentence duration was not controlled for, and sentences were presented at 65 and 35 dB SPL, respectively. Listeners favored the male over the female talker, even more so at 35 dB SPL (62% vs 43% word recognition, respectively) than at 65 dB SPL (74% vs 64%, respectively). The greater asymmetry in intelligibility at the lower level supports the idea that divided listening is harder and more 'asymmetric' in challenging acoustic scenarios. Listeners continued to favor the male talker when the experiment was repeated with sentences of equal average duration for the two talkers (Condition 3). This suggests that the earlier onset or later offset of male sentences (52 ms on average) was not the reason for the asymmetric intelligibility in Conditions 1 or 2. When the location of the talkers was switched (Condition 4) or the two talkers were the same woman (Condition 5), listeners continued to favor the talker to their right, albeit non-significantly. Altogether, results confirm that in hard divided listening situations, listeners tend to favor the talker to their right. This preference is not affected by talker onset/offset delays of less than 52 ms on average. Instead, the preference seems to be modulated by the voice characteristics of the talkers.
Affiliation(s)
- Milagros J Fumero
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain
- Miriam I Marrufo-Pérez
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain
- Almudena Eustaquio-Martín
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca 37007, Spain.
3
Liu J, Stohl J, Lopez-Poveda EA, Overath T. Quantifying the Impact of Auditory Deafferentation on Speech Perception. Trends Hear 2024; 28:23312165241227818. [PMID: 38291713] [PMCID: PMC10832414] [DOI: 10.1177/23312165241227818]
Abstract
The past decade has seen a wealth of research dedicated to determining which morphological changes in the auditory periphery contribute, and how, to hearing difficulties in noise experienced by people with clinically normal audiometric thresholds in quiet. Evidence from animal studies suggests that cochlear synaptopathy in the inner ear might lead to auditory nerve deafferentation, resulting in impoverished signal transmission to the brain. Here, we quantify the likely perceptual consequences of auditory deafferentation in humans via a physiologically inspired encoding-decoding model. The encoding stage simulates the processing of an acoustic input stimulus (e.g., speech) at the auditory periphery, while the decoding stage is trained to optimally regenerate the input stimulus from the simulated auditory nerve firing data. This allowed us to quantify the effect of different degrees of auditory deafferentation by measuring the extent to which the decoded signal supported the identification of speech in quiet and in noise. In a series of experiments, speech perception thresholds in quiet and in noise increased (worsened) significantly as a function of the degree of auditory deafferentation for modeled deafferentation greater than 90%. Importantly, this effect was significantly stronger in a noisy than in a quiet background. The encoding-decoding model thus captured the hallmark symptom of degraded speech perception in noise together with normal speech perception in quiet. As such, the model might function as a quantitative guide to evaluating the degree of auditory deafferentation in human listeners.
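The headline result (thresholds worsen appreciably only beyond ~90% deafferentation) is qualitatively what one expects if the pooled neural signal-to-noise ratio grows as the square root of the number of surviving fibers. The back-of-the-envelope sketch below uses that square-root rule as a simplifying assumption; it is not the paper's encoding-decoding model.

```python
import math

def threshold_shift_db(deafferentation_pct):
    """Toy detection-threshold shift (dB) if the pooled neural SNR
    scales with sqrt(surviving fiber fraction). Assumption, not the
    paper's model."""
    surviving = 1.0 - deafferentation_pct / 100.0
    if surviving <= 0.0:
        raise ValueError("at least some fibers must survive")
    # SNR falls by 10*log10(sqrt(surviving)) dB; threshold rises by that much
    return -10.0 * math.log10(math.sqrt(surviving))
```

Under this rule the shift is only ~1.5 dB at 50% loss and 5 dB at 90% loss, but 10 dB at 99%: losses stay nearly invisible until deafferentation is severe, mirroring the knee reported in the abstract.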
Affiliation(s)
- Jiayue Liu
- Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
- Joshua Stohl
- North American Research Laboratory, MED-EL Corporation, Durham, NC, USA
- Enrique A. Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, University of Salamanca, Salamanca, Spain
- Departamento de Cirugía, Facultad de Medicina, University of Salamanca, Salamanca, Spain
- Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Tobias Overath
- Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
4
San-Victoriano FM, Eustaquio-Martín A, Lopez-Poveda EA. Binaural pre-processing for contralateral sound field attenuation can improve speech-in-noise intelligibility for bilateral hearing-aid users. Hear Res 2023; 432:108743. [PMID: 37003080] [DOI: 10.1016/j.heares.2023.108743]
Abstract
We have recently proposed a binaural sound pre-processing method to attenuate sounds contralateral to each ear and shown that it can improve speech intelligibility for normal-hearing (NH) people in simulated "cocktail party" listening situations (Lopez-Poveda et al., 2022, Hear Res 418:108469). The aim here was to evaluate if this benefit remains for hearing-impaired listeners when the method is combined with two independently functioning hearing aids, one per ear. Twelve volunteers participated in the experiments; five of them had bilateral sensorineural hearing loss and seven were NH listeners with simulated bilateral conductive hearing loss. Speech reception thresholds (SRTs) for sentences in competition with a source of steady, speech-shaped noise were measured in unilateral and bilateral listening, and for (target, masker) azimuthal angles of (0°, 0°), (270°, 45°), and (270°, 90°). Stimuli were processed through a pair of software-based multichannel, fast-acting, wide dynamic range compressors, with and without binaural pre-processing. For spatially collocated target and masker sources at 0° azimuth, the pre-processing did not affect SRTs. For spatially separated target and masker sources, the pre-processing improved SRTs when listening bilaterally (improvements of up to 10.7 dB) or unilaterally with the acoustically better ear (improvements of up to 13.9 dB), while it worsened SRTs when listening unilaterally with the acoustically worse ear (decrements of up to 17.0 dB). Results show that binaural pre-processing for contralateral sound attenuation can improve speech-in-noise intelligibility in laboratory tests for bilateral hearing-aid users as well.
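The "multichannel, fast-acting, wide dynamic range compressors" used as back-end processors apply, per channel, a static input/output rule like the sketch below. The knee point, compression ratio and linear gain are invented placeholder values, not the study's fitting parameters.

```python
def wdrc_gain_db(input_db, knee_db=45.0, ratio=3.0, gain_below_knee_db=20.0):
    """Static gain rule of a simple wide-dynamic-range compressor:
    linear gain below the knee, ratio:1 compression above it.
    Parameter values are hypothetical placeholders."""
    if input_db <= knee_db:
        return gain_below_knee_db
    # above the knee, each 1 dB of input yields only 1/ratio dB of output
    return gain_below_knee_db - (input_db - knee_db) * (1.0 - 1.0 / ratio)

def wdrc_output_db(input_db, **kw):
    """Output level = input level + level-dependent gain."""
    return input_db + wdrc_gain_db(input_db, **kw)
```

With these placeholder settings, a 45 dB input comes out at 65 dB, and the input/output slope above the knee is 1/3, i.e. a 30 dB rise in input maps to only a 10 dB rise in output.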
Affiliation(s)
- Fernando M San-Victoriano
- Laboratorio de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, 37007 Salamanca, Spain; Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, 37007 Salamanca, Spain
- Almudena Eustaquio-Martín
- Laboratorio de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, 37007 Salamanca, Spain; Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, 37007 Salamanca, Spain
- Enrique A Lopez-Poveda
- Laboratorio de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, 37007 Salamanca, Spain; Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, 37007 Salamanca, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, 37007 Salamanca, Spain.
5
Leclère T, Johannesen PT, Wijetillake A, Segovia-Martínez M, Lopez-Poveda EA. A computational modelling framework for assessing information transmission with cochlear implants. Hear Res 2023; 432:108744. [PMID: 37004271] [DOI: 10.1016/j.heares.2023.108744]
Abstract
Computational models are useful tools to investigate scientific questions that would be complicated to address using an experimental approach. In the context of cochlear implants (CIs), being able to simulate the neural activity evoked by these devices could help in understanding their limitations in providing natural hearing. Here, we present a computational modelling framework to quantify the transmission of information from sound to spikes in the auditory nerve of a CI user. The framework includes a model to simulate the electrical current waveform sensed by each auditory nerve fiber (electrode-neuron interface), followed by a model to simulate the timing at which a nerve fiber spikes in response to a current waveform (auditory nerve fiber model). Information theory is then applied to determine the amount of information transmitted from a suitable reference signal (e.g., the acoustic stimulus) to a simulated population of auditory nerve fibers. As a use case example, the framework is applied to simulate published data on modulation detection by CI users obtained using direct stimulation via a single electrode. Current spread as well as the number of fibers were varied independently to illustrate the framework's capabilities. Simulations reasonably matched experimental data and suggested that the encoded modulation information is proportional to the total neural response. They also suggested that amplitude modulation is well encoded in the auditory nerve for modulation rates up to 1000 Hz, and that the variability in modulation sensitivity across CI users arises partly because different CI users use different references for detecting modulation.
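The information-theoretic step (quantifying information transmitted from a reference signal to the simulated fiber population) is, in its simplest discrete form, a mutual-information estimate. Below is a minimal plug-in estimator for paired samples; the paper's actual estimator is likely more elaborate.

```python
import math
from collections import Counter

def mutual_information_bits(pairs):
    """Plug-in estimate of I(X;Y) in bits from a list of (x, y) samples,
    e.g. x = stimulus category, y = discretized spike count."""
    n = len(pairs)
    pxy = Counter(pairs)                  # joint counts
    px = Counter(x for x, _ in pairs)     # marginal counts of x
    py = Counter(y for _, y in pairs)     # marginal counts of y
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint * n * n / (px * py) == p(x,y) / (p(x) * p(y))
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi
```

For perfectly dependent pairs over two equiprobable symbols the estimate is 1 bit; for independent pairs it is 0, so the estimator behaves as expected at both extremes.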
Affiliation(s)
- Thibaud Leclère
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain
- Peter T Johannesen
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca 37007, Spain.
6
Gómez-Álvarez M, Johannesen PT, Coelho-de-Sousa SL, Klump GM, Lopez-Poveda EA. The Relative Contribution of Cochlear Synaptopathy and Reduced Inhibition to Age-Related Hearing Impairment for People With Normal Audiograms. Trends Hear 2023; 27:23312165231213191. [PMID: 37956654] [PMCID: PMC10644751] [DOI: 10.1177/23312165231213191]
Abstract
Older people often show auditory temporal processing deficits and speech-in-noise intelligibility difficulties even when their audiogram is clinically normal. The causes of such problems remain unclear. Some studies have suggested that for people with normal audiograms, age-related hearing impairments may be due to a cognitive decline, while others have suggested that they may be caused by cochlear synaptopathy. Here, we explore an alternative hypothesis, namely that age-related hearing deficits are associated with decreased inhibition. For human adults (N = 30) selected to cover a reasonably wide age range (25-59 years), with normal audiograms and normal cognitive function, we measured speech reception thresholds in noise (SRTNs) for disyllabic words, gap detection thresholds (GDTs), and frequency modulation detection thresholds (FMDTs). We also measured the rate of growth (slope) of auditory brainstem response wave-I amplitude with increasing level as an indirect indicator of cochlear synaptopathy, and the interference inhibition score in the Stroop color and word test (SCWT) as a proxy for inhibition. As expected, performance in the auditory tasks worsened (SRTNs, GDTs, and FMDTs increased), and wave-I slope and SCWT inhibition scores decreased with ageing. Importantly, SRTNs, GDTs, and FMDTs were not related to wave-I slope but worsened with decreasing SCWT inhibition. Furthermore, after partialling out the effect of SCWT inhibition, age was no longer related to SRTNs or GDTs and became less strongly related to FMDTs. Altogether, results suggest that for people with normal audiograms, age-related deficits in auditory temporal processing and speech-in-noise intelligibility are mediated by decreased inhibition rather than cochlear synaptopathy.
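"Partialling out the effect of SCWT inhibition" is a standard partial-correlation computation. The self-contained sketch below uses toy numbers, not the study's data; if the age-performance correlation is carried entirely by a shared mediator, the partial correlation collapses to zero, which is the logic behind the abstract's conclusion.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

def partial_corr(xs, ys, zs):
    """Correlation between x and y after partialling out z
    (first-order partial correlation from the three pairwise r's)."""
    rxy, rxz, ryz = pearson(xs, ys), pearson(xs, zs), pearson(ys, zs)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))
```

In the test below, x (say, age) and y (say, SRTN) correlate strongly, but both are built from the same z (say, inhibition score) plus mutually orthogonal residuals, so the partial correlation given z vanishes.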
Affiliation(s)
- Marcelo Gómez-Álvarez
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Peter T. Johannesen
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Sónia L. Coelho-de-Sousa
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Georg M. Klump
- Department of Neuroscience and Cluster of Excellence “Hearing4all”, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Enrique A. Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca, Spain
7
Marrufo-Pérez MI, Lopez-Poveda EA. Adaptation to noise in normal and impaired hearing. J Acoust Soc Am 2022; 151:1741. [PMID: 35364964] [DOI: 10.1121/10.0009802]
Abstract
Many aspects of hearing function are negatively affected by background noise. Listeners, however, have some ability to adapt to background noise. For instance, the detection of pure tones and the recognition of isolated words embedded in noise can improve gradually as the tones and words are delayed by a few hundred milliseconds relative to the noise onset. While some evidence suggests that adaptation to noise could be mediated by the medial olivocochlear reflex, adaptation can occur for people who do not have a functional reflex. Since adaptation can facilitate hearing in noise, and hearing in noise is often harder for hearing-impaired than for normal-hearing listeners, it is conceivable that adaptation is impaired with hearing loss. It remains unclear, however, if and to what extent this is the case, or whether impaired adaptation contributes to the greater difficulties experienced by hearing-impaired listeners understanding speech in noise. Here, we review adaptation to noise, the mechanisms potentially contributing to this adaptation, and factors that might reduce the ability to adapt to background noise, including cochlear hearing loss, cochlear synaptopathy, aging, and noise exposure. The review highlights the few knowns and many unknowns about adaptation to noise, and thus paves the way for further research on this topic.
Affiliation(s)
- Miriam I Marrufo-Pérez
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, 37007 Salamanca, Spain
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, 37007 Salamanca, Spain
8
Lopez-Poveda EA, Eustaquio-Martín A, Victoriano FMS. Binaural pre-processing for contralateral sound field attenuation and improved speech-in-noise recognition. Hear Res 2022; 418:108469. [DOI: 10.1016/j.heares.2022.108469]
9
Lopez-Poveda EA, Eustaquio-Martín A, Fumero MJ, Gorospe JM, Polo López R, Gutiérrez Revilla MA, Schatzer R, Nopp P, Stohl JS. Speech-in-Noise Recognition With More Realistic Implementations of a Binaural Cochlear-Implant Sound Coding Strategy Inspired by the Medial Olivocochlear Reflex. Ear Hear 2021; 41:1492-1510. [PMID: 33136626] [PMCID: PMC7722463] [DOI: 10.1097/aud.0000000000000880]
Abstract
OBJECTIVES: Cochlear implant (CI) users continue to struggle understanding speech in noisy environments with current clinical devices. We have previously shown that this outcome can be improved by using binaural sound processors inspired by the medial olivocochlear (MOC) reflex, which involve dynamic (contralaterally controlled) rather than fixed compressive acoustic-to-electric maps. The present study aimed at investigating the potential additional benefits of using more realistic implementations of MOC processing.
DESIGN: Eight users of bilateral CIs and two users of unilateral CIs participated in the study. Speech reception thresholds (SRTs) for sentences in competition with steady state noise were measured in unilateral and bilateral listening modes. Stimuli were processed through two independently functioning sound processors (one per ear) with fixed compression, the current clinical standard (STD); the originally proposed MOC strategy with fast contralateral control of compression (MOC1); a MOC strategy with slower control of compression (MOC2); and a slower MOC strategy with comparatively greater contralateral inhibition in the lower-frequency than in the higher-frequency channels (MOC3). Performance with the four strategies was compared for multiple simulated spatial configurations of the speech and noise sources. Based on a previously published technical evaluation of these strategies, we hypothesized that SRTs would be overall better (lower) with the MOC3 strategy than with any of the other tested strategies. In addition, we hypothesized that the MOC3 strategy would be advantageous over the STD strategy in listening conditions and spatial configurations where the MOC1 strategy was not.
RESULTS: In unilateral listening and when the implant ear had the worse acoustic signal-to-noise ratio, the mean SRT was 4 dB worse for the MOC1 than for the STD strategy (as expected), but it became equal or better for the MOC2 or MOC3 strategies than for the STD strategy. In bilateral listening, mean SRTs were 1.6 dB better for the MOC3 strategy than for the STD strategy across all spatial configurations tested, including a condition with speech and noise sources colocated at front where the MOC1 strategy was slightly disadvantageous relative to the STD strategy. All strategies produced significantly better SRTs for spatially separated than for colocated speech and noise sources. A statistically significant binaural advantage (i.e., better mean SRTs across spatial configurations and participants in bilateral than in unilateral listening) was found for the MOC2 and MOC3 strategies but not for the STD or MOC1 strategies.
CONCLUSIONS: Overall, performance was best with the MOC3 strategy, which maintained the benefits of the originally proposed MOC1 strategy over the STD strategy for spatially separated speech and noise sources and extended those benefits to additional spatial configurations. In addition, the MOC3 strategy provided a significant binaural advantage, which did not occur with the STD or the original MOC1 strategies.
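The core idea shared by all the MOC strategy variants described here is that compression in each ear is inhibited by the signal at the opposite ear. That idea can be caricatured in a few lines; the level mapping and the 10 dB maximum below are invented for illustration and are not parameters of the study or of any commercial processor.

```python
def moc_gain_adjustments_db(left_db, right_db, max_inhibition_db=10.0):
    """Toy contralateral inhibition: each ear's gain is reduced in
    proportion to the opposite ear's level, ramping from no inhibition
    at 40 dB SPL to max_inhibition_db at 80 dB SPL (invented values)."""
    def inhibition(contra_db):
        frac = (contra_db - 40.0) / 40.0
        return max_inhibition_db * min(1.0, max(0.0, frac))
    # (left-ear gain change, right-ear gain change), in dB
    return -inhibition(right_db), -inhibition(left_db)
```

With target speech nearer the left ear (say 65 dB SPL left, 50 dB SPL right), the left ear is attenuated less than the right, so interaural level differences and the better-ear signal-to-noise ratio are enhanced, which is the effect these binaural strategies exploit.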
Affiliation(s)
- Enrique A. Lopez-Poveda
- Laboratorio de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca, Spain
- Almudena Eustaquio-Martín
- Laboratorio de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Milagros J. Fumero
- Laboratorio de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- José M. Gorospe
- Laboratorio de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca, Spain
- Unidad de Foniatría, Logopedia y Audiología, Servicio de Otorrinolaringología, Hospital Universitario de Salamanca, Salamanca, Spain
- Rubén Polo López
- Servicio de Otorrinolaringología, Hospital Universitario Ramón y Cajal, Madrid, Spain
- Joshua S. Stohl
- North American Research Laboratory, MED-EL Corporation, Durham, North Carolina, USA
10
Fumero MJ, Eustaquio-Martín A, Gorospe JM, Polo López R, Gutiérrez Revilla MA, Lassaletta L, Schatzer R, Nopp P, Stohl JS, Lopez-Poveda EA. A state-of-the-art implementation of a binaural cochlear-implant sound coding strategy inspired by the medial olivocochlear reflex. Hear Res 2021; 409:108320. [PMID: 34348202] [DOI: 10.1016/j.heares.2021.108320]
Abstract
Cochlear implant (CI) users find it hard and effortful to understand speech in noise with current devices. Binaural CI sound processing inspired by the contralateral medial olivocochlear (MOC) reflex (an approach termed the 'MOC strategy') can improve speech-in-noise recognition for CI users. All reported evaluations of this strategy, however, disregarded automatic gain control (AGC) and fine-structure (FS) processing, two standard features in some current CI devices. To better assess the potential of implementing the MOC strategy in contemporary CIs, here, we compare intelligibility with and without MOC processing in combination with linked AGC and FS processing. Speech reception thresholds (SRTs) were compared for an FS and a MOC-FS strategy for sentences in steady and fluctuating noises, for various speech levels, in bilateral and unilateral listening modes, and for multiple spatial configurations of the speech and noise sources. Word recall scores and verbal response times in a word recognition test (two proxies for listening effort) were also compared for the two strategies in quiet and in steady noise at 5 dB signal-to-noise ratio (SNR) and the individual SRT. In steady noise, mean SRTs were always equal or better with the MOC-FS than with the standard FS strategy, both in bilateral (the mean and largest improvement across spatial configurations and speech levels were 0.8 and 2.2 dB, respectively) and unilateral listening (mean and largest improvement of 1.7 and 2.1 dB, respectively). In fluctuating noise and in bilateral listening, SRTs were equal for the two strategies. Word recall scores and verbal response times were not significantly affected by the test SNR or the processing strategy. Results show that MOC processing can be combined with linked AGC and FS processing. Compared to using FS processing alone, combined MOC-FS processing can improve speech intelligibility in noise without affecting word recall scores or verbal response times.
Affiliation(s)
- Milagros J Fumero
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain
- Almudena Eustaquio-Martín
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain
- José M Gorospe
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain; Servicio de Otorrinolaringología, Hospital Universitario de Salamanca, Salamanca 37007, Spain
| | - Rubén Polo López
- Servicio de Otorrinolaringología, Hospital Universitario Ramón y Cajal, Madrid 28034 Spain
| | | | - Luis Lassaletta
- Servicio de Otorrinolaringología, Hospital Universitario La Paz, Madrid 28046 Spain; IdiPAZ Research Institute, Madrid, Spain; Biomedical Research Networking Centre on Rare Diseases (CIBERER-U761), Institute of Health Carlos III, Madrid, Spain
| | | | | | - Joshua S Stohl
- North American Research Laboratory, MED-EL Corporation, Durham, NC, USA
| | - Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain.; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca Salamanca 37007 Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca 37007 Spain.
11
Johannesen PT, Lopez-Poveda EA. Age-related central gain compensation for reduced auditory nerve output for people with normal audiograms, with and without tinnitus. iScience 2021; 24:102658. [PMID: 34151241 PMCID: PMC8192693 DOI: 10.1016/j.isci.2021.102658] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 02/16/2021] [Revised: 03/24/2021] [Accepted: 05/25/2021] [Indexed: 11/30/2022]
Abstract
Central gain compensation for reduced auditory nerve output has been hypothesized as a mechanism for tinnitus with a normal audiogram. Here, we investigate if gain compensation occurs with aging. For 94 people (aged 12-68 years; 64 women; 7 with tinnitus) with normal or close-to-normal audiograms, the amplitude of wave I of the auditory brainstem response decreased with increasing age but was not correlated with wave V amplitude after accounting for age-related subclinical hearing loss and cochlear damage, a result indicative of age-related gain compensation. The correlations between age and wave I/III or III/V amplitude ratios suggested that compensation occurs at the wave III generator site. For each one of the seven participants with non-pulsatile tinnitus, the amplitude of wave I, wave V, and the wave I/V amplitude ratio were well within the confidence limits of the non-tinnitus participants. We conclude that increased central gain occurs with aging and is not specific to tinnitus.
Affiliation(s)
- Peter T Johannesen
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, 37007 Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, 37007 Salamanca, Spain
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, 37007 Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, 37007 Salamanca, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, 37007 Salamanca, Spain
12
Marrufo-Pérez MI, Araquistain-Serrat L, Eustaquio-Martín A, Lopez-Poveda EA. On the importance of interaural noise coherence and the medial olivocochlear reflex for binaural unmasking in free-field listening. Hear Res 2021; 405:108246. [PMID: 33872834 DOI: 10.1016/j.heares.2021.108246] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 11/18/2020] [Revised: 03/25/2021] [Accepted: 03/31/2021] [Indexed: 11/15/2022]
Abstract
For speech in competition with a noise source in the free field, normal-hearing (NH) listeners recognize speech better when listening binaurally than when listening monaurally with the ear that has the better acoustic signal-to-noise ratio (SNR). This benefit from listening binaurally is known as binaural unmasking and indicates that the brain combines information from the two ears to improve intelligibility. Here, we address three questions pertaining to binaural unmasking for NH listeners. First, we investigate if binaural unmasking results from combining the speech and/or the noise from the two ears. In a simulated acoustic free field with speech and noise sources at 0° and 270° azimuth, respectively, we found comparable unmasking regardless of whether the speech was present or absent in the ear with the worse SNR. This indicates that binaural unmasking probably involves combining only the noise at the two ears. Second, we investigate if having binaurally coherent location cues for the noise signal is sufficient for binaural unmasking to occur. We found no unmasking when location cues were coherent but noise signals were generated incoherently or were processed unilaterally through a hearing aid with linear, minimal amplification. This indicates that binaural unmasking requires interaurally coherent noise signals, source location cues, and processing. Third, we investigate if the hypothesized antimasking benefits of the medial olivocochlear reflex (MOCR) contribute to binaural unmasking. We found comparable unmasking regardless of whether speech tokens (words) were sufficiently delayed from the noise onset to fully activate the MOCR or not. Moreover, unmasking was absent when the noise was binaurally incoherent whereas the physiological antimasking effects of the MOCR are similar for coherent and incoherent noises. This indicates that the MOCR is unlikely involved in binaural unmasking.
Affiliation(s)
- Miriam I Marrufo-Pérez
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain
- Leire Araquistain-Serrat
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain
- Almudena Eustaquio-Martín
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca 37007, Spain
13
Marrufo-Pérez MI, Johannesen PT, Lopez-Poveda EA. Correlation and Reliability of Behavioral and Otoacoustic-Emission Estimates of Contralateral Medial Olivocochlear Reflex Strength in Humans. Front Neurosci 2021; 15:640127. [PMID: 33664649 PMCID: PMC7921326 DOI: 10.3389/fnins.2021.640127] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 12/10/2020] [Accepted: 01/26/2021] [Indexed: 11/18/2022]
Abstract
The roles of the medial olivocochlear reflex (MOCR) in human hearing have been widely investigated but remain controversial. We reason that this may be because the effects of MOCR activation on cochlear mechanical responses can be assessed only indirectly in healthy humans, and the different methods used to assess those effects possibly yield different and/or unreliable estimates. One aim of this study was to investigate the correlation between three methods often employed to assess the strength of MOCR activation by contralateral acoustic stimulation (CAS). We measured tone detection thresholds (N = 28), click-evoked otoacoustic emission (CEOAE) input/output (I/O) curves (N = 18), and distortion-product otoacoustic emission (DPOAE) I/O curves (N = 18) for various test frequencies in the presence and the absence of CAS (broadband noise of 60 dB SPL). As expected, CAS worsened tone detection thresholds, suppressed CEOAEs and DPOAEs, and horizontally shifted CEOAE and DPOAE I/O curves to higher levels. However, the CAS effect on tone detection thresholds was not correlated with the horizontal shift of CEOAE or DPOAE I/O curves, and the CAS-induced CEOAE suppression was not correlated with DPOAE suppression. Only the horizontal shifts of CEOAE and DPOAE I/O functions were correlated with each other at 1.5, 2, and 3 kHz. A second aim was to investigate which of the methods is more reliable. The test–retest variability of the CAS effect was high overall but smallest for tone detection thresholds and CEOAEs, suggesting that their use should be prioritized over the use of DPOAEs. Many factors not related with the MOCR, including the limited parametric space studied, the low resolution of the I/O curves, and the reduced numbers of observations due to data exclusion likely contributed to the weak correlations and the large test–retest variability noted. These findings can help us understand the inconsistencies among past studies and improve our understanding of the functional significance of the MOCR.
Affiliation(s)
- Miriam I Marrufo-Pérez
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Peter T Johannesen
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca, Spain
14
Lopez-Poveda EA, Eustaquio-Martín A, Fumero MJ, Stohl JS, Schatzer R, Nopp P, Wolford RD, Gorospe JM, Polo R, Revilla AG, Wilson BS. Lateralization of virtual sound sources with a binaural cochlear-implant sound coding strategy inspired by the medial olivocochlear reflex. Hear Res 2019; 379:103-116. [DOI: 10.1016/j.heares.2019.05.004] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Received: 11/16/2018] [Revised: 04/30/2019] [Accepted: 05/17/2019] [Indexed: 10/26/2022]
15
Marrufo-Pérez MI, Eustaquio-Martín A, Lopez-Poveda EA. Speech predictability can hinder communication in difficult listening conditions. Cognition 2019; 192:103992. [PMID: 31254890 DOI: 10.1016/j.cognition.2019.06.004] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Received: 09/03/2018] [Revised: 05/27/2019] [Accepted: 06/03/2019] [Indexed: 11/15/2022]
Abstract
In difficult listening situations, such as in noisy environments, one would expect speech intelligibility to improve over time thanks to noise adaptation and/or to speech predictability facilitating the recognition of upcoming words. We tested this possibility by presenting normal-hearing human listeners (N = 100; 70 women) with sentences and measuring word recognition as a function of word position in a sentence. Sentences were presented in quiet and in competition with various masker sounds at individualized levels where listeners had 50% probability of recognizing a full sentence. Contrary to expectations, recognition was best for the first word and gradually deteriorated with increasing word position along the sentence. The worsening in recognition was unlikely due to differences in word audibility or word type and was uncorrelated with age or working memory capacity. Using a probabilistic model of word recognition, we show that the worsening effect probably occurs because misunderstandings generate inaccurate predictions that outweigh the benefits from accurate predictions. Analyses also revealed that predictions overruled the potential benefits from noise adaptation. We conclude that although speech predictability can facilitate sentence recognition, it can also result in declines in word recognition as the sentence unfolds because of inaccuracies in prediction.
Affiliation(s)
- Miriam I Marrufo-Pérez
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, 37007 Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, 37007 Salamanca, Spain
- Almudena Eustaquio-Martín
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, 37007 Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, 37007 Salamanca, Spain
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, 37007 Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, 37007 Salamanca, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, 37007 Salamanca, Spain
16
Marrufo-Pérez MI, Eustaquio-Martín A, Fumero MJ, Gorospe JM, Polo R, Gutiérrez Revilla A, Lopez-Poveda EA. Adaptation to noise in amplitude modulation detection without the medial olivocochlear reflex. Hear Res 2019; 377:133-141. [DOI: 10.1016/j.heares.2019.03.017] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Received: 10/25/2018] [Revised: 03/05/2019] [Accepted: 03/19/2019] [Indexed: 10/27/2022]
17
Bramhall N, Beach EF, Epp B, Le Prell CG, Lopez-Poveda EA, Plack CJ, Schaette R, Verhulst S, Canlon B. The search for noise-induced cochlear synaptopathy in humans: Mission impossible? Hear Res 2019; 377:88-103. [DOI: 10.1016/j.heares.2019.02.016] [Citation(s) in RCA: 74] [Impact Index Per Article: 14.8] [Received: 11/21/2018] [Revised: 02/25/2019] [Accepted: 02/28/2019] [Indexed: 10/27/2022]
18
Abstract
Over 360 million people worldwide suffer from disabling hearing loss. Most of them can be treated with hearing aids. Unfortunately, performance with hearing aids and the benefit obtained from using them vary widely across users. Here, we investigate the reasons for such variability. Sixty-eight hearing-aid users or candidates were fitted bilaterally with nonlinear hearing aids using standard procedures. Treatment outcome was assessed by measuring aided speech intelligibility in a time-reversed two-talker background and self-reported improvement in hearing ability. Statistical predictive models of these outcomes were obtained using linear combinations of 19 predictors, including demographic and audiological data, indicators of cochlear mechanical dysfunction and auditory temporal processing skills, hearing-aid settings, working memory capacity, and pretreatment self-perceived hearing ability. Aided intelligibility tended to be better for younger hearing-aid users with good unaided intelligibility in quiet and with good temporal processing abilities. Intelligibility tended to improve by increasing amplification for low-intensity sounds and by using more linear amplification for high-intensity sounds. Self-reported improvement in hearing ability was hard to predict but tended to be smaller for users with better working memory capacity. Indicators of cochlear mechanical dysfunction, alone or in combination with hearing-aid settings, did not affect outcome predictions. The results may be useful for improving hearing aids and setting patients’ expectations.
Affiliation(s)
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, University of Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, University of Salamanca, Spain; Departamento de Cirugía, Facultad de Medicina, University of Salamanca, Spain
- Peter T Johannesen
- Instituto de Neurociencias de Castilla y León, University of Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, University of Salamanca, Spain
- Patricia Pérez-González
- Instituto de Neurociencias de Castilla y León, University of Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, University of Salamanca, Spain
- José L Blanco
- Instituto de Neurociencias de Castilla y León, University of Salamanca, Spain
- Brent Edwards
- Starkey Hearing Research Center, Berkeley, CA, USA
19
Marrufo-Pérez MI, Eustaquio-Martín A, Lopez-Poveda EA. Adaptation to Noise in Human Speech Recognition Unrelated to the Medial Olivocochlear Reflex. J Neurosci 2018; 38:4138-4145. [PMID: 29593051 PMCID: PMC6596031 DOI: 10.1523/jneurosci.0024-18.2018] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Received: 01/05/2018] [Revised: 02/26/2018] [Accepted: 03/24/2018] [Indexed: 11/21/2022]
Abstract
Sensory systems constantly adapt their responses to the current environment. In hearing, adaptation may facilitate communication in noisy settings, a benefit frequently (but controversially) attributed to the medial olivocochlear reflex (MOCR) enhancing the neural representation of speech. Here, we show that human listeners (N = 14; five male) recognize more words presented monaurally in ipsilateral, contralateral, and bilateral noise when they are given some time to adapt to the noise. This finding challenges models and theories that claim that speech intelligibility in noise is invariant over time. In addition, we show that this adaptation to the noise occurs also for words processed to maintain the slow-amplitude modulations in speech (the envelope) disregarding the faster fluctuations (the temporal fine structure). This demonstrates that noise adaptation reflects an enhancement of amplitude modulation speech cues and is unaffected by temporal fine structure cues. Last, we show that cochlear implant users (N = 7; four male) show normal monaural adaptation to ipsilateral noise. Because the electrical stimulation delivered by cochlear implants is independent from the MOCR, this demonstrates that noise adaptation does not require the MOCR. We argue that noise adaptation probably reflects adaptation of the dynamic range of auditory neurons to the noise level statistics. SIGNIFICANCE STATEMENT: People find it easier to understand speech in noisy environments when they are given some time to adapt to the noise. This benefit is frequently but controversially attributed to the medial olivocochlear efferent reflex enhancing the representation of speech cues in the auditory nerve. Here, we show that the adaptation to noise reflects an enhancement of the slow fluctuations in amplitude over time that are present in speech. In addition, we show that adaptation to noise for cochlear implant users is not statistically different from that for listeners with normal hearing. Because the electrical stimulation delivered by cochlear implants is independent from the medial olivocochlear efferent reflex, this demonstrates that adaptation to noise does not require this reflex.
Affiliation(s)
- Miriam I Marrufo-Pérez
- Instituto de Neurociencias de Castilla y León; Instituto de Investigación Biomédica de Salamanca
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León; Instituto de Investigación Biomédica de Salamanca; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, 37007 Salamanca, Spain
20
Lopez-Poveda EA, Eustaquio-Martín A. Objective speech transmission improvements with a binaural cochlear implant sound-coding strategy inspired by the contralateral medial olivocochlear reflex. J Acoust Soc Am 2018; 143:2217. [PMID: 29716283 DOI: 10.1121/1.5031028] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Indexed: 06/08/2023]
Abstract
It has been recently shown that cochlear implant users could enjoy better speech reception in noise and enhanced spatial unmasking with binaural audio processing inspired by the inhibitory effects of the contralateral medial olivocochlear (MOC) reflex on compression [Lopez-Poveda, Eustaquio-Martin, Stohl, Wolford, Schatzer, and Wilson (2016). Ear Hear. 37, e138-e148]. The perceptual evidence supporting those benefits, however, is limited to a few target-interferer spatial configurations and to a particular implementation of contralateral MOC inhibition. Here, the short-time objective intelligibility (STOI) index is used to (1) objectively demonstrate potential benefits over many more spatial configurations, and (2) investigate if the predicted benefits may be enhanced by using more realistic MOC implementations. Results corroborate the advantages and drawbacks of MOC processing indicated by the previously published perceptual tests. The results also suggest that the benefits may be enhanced and the drawbacks overcome by using longer time constants for the activation and deactivation of inhibition and, to a lesser extent, by using a comparatively greater inhibition in the lower than in the higher frequency channels. Compared to using two functionally independent processors, the better MOC processor improved the signal-to-noise ratio in the two ears by between 1 and 6 dB by enhancing head-shadow effects, and was advantageous for all tested target-interferer spatial configurations.
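The core idea behind the MOC strategy described in this and the related entries above is mutual contralateral inhibition: each ear's per-channel gain is reduced as the output energy of the matching channel in the contralateral processor grows. The sketch below is an illustration of that principle only, not the implementation evaluated in these studies; the function name, the logarithmic inhibition law, and the parameters `c` and `floor_db` are assumptions made for the example.

```python
import numpy as np

def moc_inhibited_gains(env_left, env_right, c=2.0, floor_db=-20.0):
    """Illustrative contralateral-MOC-inspired gain control (a sketch, not
    the authors' implementation). env_left/env_right are per-channel
    envelope energies (linear units) from the two processors."""
    def inhibition(contra_env):
        # Map contralateral channel energy to an attenuation in dB,
        # bounded below by floor_db so gains never vanish entirely.
        atten_db = -c * 10.0 * np.log10(1.0 + contra_env)
        return 10.0 ** (np.maximum(atten_db, floor_db) / 20.0)
    gain_left = inhibition(env_right)    # left ear inhibited by right-ear output
    gain_right = inhibition(env_left)    # right ear inhibited by left-ear output
    return gain_left, gain_right
```

With zero contralateral energy a channel passes unattenuated (gain 1); as contralateral energy grows the gain shrinks toward the floor, which is what enhances head-shadow effects in the ear with the better signal-to-noise ratio.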
Affiliation(s)
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain
- Almudena Eustaquio-Martín
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain
21
Abstract
Olivocochlear efferents allow the central auditory system to adjust the functioning of the inner ear during active and passive listening. While many aspects of efferent anatomy, physiology and function are well established, others remain controversial. This article reviews the current knowledge on olivocochlear efferents, with emphasis on human medial efferents. The review covers (1) the anatomy and physiology of olivocochlear efferents in animals; (2) the methods used for investigating this auditory feedback system in humans, their limitations and best practices; (3) the characteristics of medial-olivocochlear efferents in humans, with a critical analysis of some discrepancies across human studies and between animal and human studies; (4) the possible roles of olivocochlear efferents in hearing, discussing the evidence in favor and against their role in facilitating the detection of signals in noise and in protecting the auditory system from excessive acoustic stimulation; and (5) the emerging association between abnormal olivocochlear efferent function and several health conditions. Finally, we summarize some open issues and introduce promising approaches for investigating the roles of efferents in human hearing using cochlear implants.
Affiliation(s)
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
22
Marrufo-Pérez MI, Eustaquio-Martín A, López-Bascuas LE, Lopez-Poveda EA. Temporal Effects on Monaural Amplitude-Modulation Sensitivity in Ipsilateral, Contralateral and Bilateral Noise. J Assoc Res Otolaryngol 2018; 19:147-161. [PMID: 29508100 DOI: 10.1007/s10162-018-0656-x] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Received: 04/08/2017] [Accepted: 02/05/2018] [Indexed: 10/17/2022]
Abstract
The amplitude modulations (AMs) in speech signals are useful cues for speech recognition. Several adaptation mechanisms may make the detection of AM in noisy backgrounds easier when the AM carrier is presented later rather than earlier in the noise. The aim of the present study was to characterize temporal adaptation to noise in AM detection. AM detection thresholds were measured for monaural (50 ms, 1.5 kHz) pure-tone carriers presented at the onset ('early' condition) and 300 ms after the onset ('late' condition) of ipsilateral, contralateral, and bilateral (diotic) broadband noise, as well as in quiet. Thresholds were 2-4 dB better in the late than in the early condition for the three noise lateralities. The temporal effect held for carriers at equal sensation levels, confirming that it was not due to overshoot on carrier audibility. The temporal effect was larger for broadband than for low-band contralateral noises. Many aspects in the results were consistent with the noise activating the medial olivocochlear reflex (MOCR) and enhancing AM depth in the peripheral auditory response. Other aspects, however, indicate that central masking and adaptation unrelated to the MOCR also affect both carrier-tone and AM detection and are involved in the temporal effects.
Affiliation(s)
- Miriam I Marrufo-Pérez
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, 37007 Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Almudena Eustaquio-Martín
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, 37007 Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Luis E López-Bascuas
- Departamento de Psicología Básica I (Procesos Básicos), Universidad Complutense de Madrid, Madrid, Spain
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, 37007 Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain; Departamento de Cirugía, Universidad de Salamanca, Salamanca, Spain
23
Johannesen PT, Pérez-González P, Kalluri S, Blanco JL, Lopez-Poveda EA. The Influence of Cochlear Mechanical Dysfunction, Temporal Processing Deficits, and Age on the Intelligibility of Audible Speech in Noise for Hearing-Impaired Listeners. Trends Hear 2016; 20:2331216516641055. [PMID: 27604779 PMCID: PMC5017567 DOI: 10.1177/2331216516641055] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Received: 10/23/2015] [Revised: 03/01/2016] [Accepted: 03/01/2016] [Indexed: 12/01/2022]
Abstract
The aim of this study was to assess the relative importance of cochlear mechanical dysfunction, temporal processing deficits, and age on the ability of hearing-impaired listeners to understand speech in noisy backgrounds. Sixty-eight listeners took part in the study. They were provided with linear, frequency-specific amplification to compensate for their audiometric losses, and intelligibility was assessed for speech-shaped noise (SSN) and a time-reversed two-talker masker (R2TM). Behavioral estimates of cochlear gain loss and residual compression were available from a previous study and were used as indicators of cochlear mechanical dysfunction. Temporal processing abilities were assessed using frequency modulation detection thresholds. Age, audiometric thresholds, and the difference between audiometric threshold and cochlear gain loss were also included in the analyses. Stepwise multiple linear regression models were used to assess the relative importance of the various factors for intelligibility. Results showed that (a) cochlear gain loss was unrelated to intelligibility, (b) residual cochlear compression was related to intelligibility in SSN but not in a R2TM, (c) temporal processing was strongly related to intelligibility in a R2TM and much less so in SSN, and (d) age per se impaired intelligibility. In summary, all factors affected intelligibility, but their relative importance varied across maskers.
Affiliation(s)
- Peter T Johannesen
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Spain
- Patricia Pérez-González
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Spain
- José L Blanco
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Spain
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Spain
24
Lopez-Poveda EA, Eustaquio-Martín A, Stohl JS, Wolford RD, Schatzer R, Wilson BS. Roles of the Contralateral Efferent Reflex in Hearing Demonstrated with Cochlear Implants. Adv Exp Med Biol 2016; 894:105-114. [DOI: 10.1007/978-3-319-25474-6_12] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Indexed: 02/24/2023]
25
Marmel F, Rodríguez-Mendoza MA, Lopez-Poveda EA. Stochastic undersampling steepens auditory threshold/duration functions: implications for understanding auditory deafferentation and aging. Front Aging Neurosci 2015; 7:63. [PMID: 26029098 PMCID: PMC4432715 DOI: 10.3389/fnagi.2015.00063] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Received: 12/18/2014] [Accepted: 04/11/2015] [Indexed: 12/03/2022]
Abstract
It has long been known that some listeners experience hearing difficulties out of proportion with their audiometric losses. Notably, some older adults as well as auditory neuropathy patients have temporal-processing and speech-in-noise intelligibility deficits not accountable for by elevated audiometric thresholds. The study of these hearing deficits has been revitalized by recent studies that show that auditory deafferentation comes with aging and can occur even in the absence of an audiometric loss. The present study builds on the stochastic undersampling principle proposed by Lopez-Poveda and Barrios (2013) to account for the perceptual effects of auditory deafferentation. Auditory threshold/duration functions were measured for broadband noises that were stochastically undersampled to various different degrees. Stimuli with and without undersampling were equated for overall energy in order to focus on the changes that undersampling elicited on the stimulus waveforms, and not on its effects on the overall stimulus energy. Stochastic undersampling impaired the detection of short sounds (<20 ms). The detection of long sounds (>50 ms) did not change or improved, depending on the degree of undersampling. The results for short sounds show that stochastic undersampling, and hence presumably deafferentation, can account for the steeper threshold/duration functions observed in auditory neuropathy patients and older adults with (near) normal audiometry. This suggests that deafferentation might be diagnosed using pure-tone audiometry with short tones. It further suggests that the auditory system of audiometrically normal older listeners might not be “slower than normal”, as is commonly thought, but simply less well afferented. Finally, the results for both short and long sounds support the probabilistic theories of detectability that challenge the idea that auditory threshold occurs by integration of sound energy over time.
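The stimulus manipulation described in this abstract can be illustrated with a short sketch. This is only an illustration of the stochastic undersampling principle with energy equated afterwards, not the authors' stimulus-generation code; the function name and the `keep_fraction` parameterization of the degree of undersampling are assumptions.

```python
import numpy as np

def stochastic_undersample(waveform, keep_fraction, rng=None):
    """Illustrative sketch of stochastic undersampling: randomly zero out
    samples of the waveform, then rescale the result so its overall energy
    matches that of the original (so only the waveform shape changes)."""
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.random(waveform.shape) < keep_fraction  # True where samples survive
    sparse = np.where(keep, waveform, 0.0)
    energy_in = np.sum(waveform ** 2)
    energy_out = np.sum(sparse ** 2)
    if energy_out > 0:
        sparse *= np.sqrt(energy_in / energy_out)  # equate overall energy
    return sparse
```

Lower `keep_fraction` corresponds to heavier undersampling (presumably mimicking stronger deafferentation), while the energy equation isolates the waveform-shape effect from any change in overall stimulus energy.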
Affiliation(s)
- Frédéric Marmel: Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain; Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Medardo A Rodríguez-Mendoza: Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Enrique A Lopez-Poveda: Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain; Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain; Facultad de Medicina, Departamento de Cirugía, Universidad de Salamanca, Salamanca, Spain
26
Aguilar E, Johannesen PT, Lopez-Poveda EA. Contralateral efferent suppression of human hearing sensitivity. Front Syst Neurosci 2015; 8:251. PMID: 25642172; PMCID: PMC4295548; DOI: 10.3389/fnsys.2014.00251
Abstract
The present study aimed at characterizing the suppressing effect of contralateral medial olivocochlear (MOC) efferents on human auditory sensitivity and mechanical cochlear responses at sound levels near behavioral thresholds. Absolute thresholds for pure tones of 500 and 4000 Hz with durations between 10-500 ms were measured in the presence and in the absence of a contralateral broadband noise. The intensity of the noise was fixed at 60 dB SPL to evoke the contralateral MOC reflex without evoking the middle-ear muscle reflex. In agreement with previously reported findings, thresholds measured without the contralateral noise decreased with increasing tone duration, and the rate of decrease was faster at 500 than at 4000 Hz. Contralateral stimulation increased thresholds by 1.07 and 1.72 dB at 500 and 4000 Hz, respectively. The mean increase (1.4 dB) just missed statistical significance (p = 0.08). Importantly, the across-frequency mean threshold increase was significantly greater for long than for short probes. This effect was more obvious at 4000 Hz than at 500 Hz. Assuming that thresholds depend on the MOC-dependent cochlear mechanical response followed by an MOC-independent, post-mechanical detection mechanism, the present results at 4000 Hz suggest that MOC efferent activation suppresses cochlear mechanical responses more at lower than at higher intensities across the range of intensities near threshold, while the results at 500 Hz suggest comparable mechanical suppression across the threshold intensity range. The results are discussed in the context of central masking and of auditory models of efferent suppression of cochlear mechanical responses.
Affiliation(s)
- Enzo Aguilar: Auditory Computation and Psychoacoustics, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain; Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Peter T. Johannesen: Auditory Computation and Psychoacoustics, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain; Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Enrique A. Lopez-Poveda: Auditory Computation and Psychoacoustics, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain; Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca, Spain
27
Pérez-González P, Johannesen PT, Lopez-Poveda EA. Forward-masking recovery and the assumptions of the temporal masking curve method of inferring cochlear compression. Trends Hear 2014; 2331216514564253. PMID: 25534365; PMCID: PMC4299367; DOI: 10.1177/2331216514564253
Abstract
The temporal masking curve (TMC) method is a behavioral technique for inferring human cochlear compression. The method relies on the assumptions that in the absence of compression, forward-masking recovery is independent of masker level and probe frequency. The present study aimed at testing the validity of these assumptions. Masking recovery was investigated for eight listeners with sensorineural hearing loss carefully selected to have absent or nearly absent distortion product otoacoustic emissions. It is assumed that for these listeners basilar membrane responses are linear, hence that masking recovery is independent of basilar membrane compression. TMCs for probe frequencies of 0.5, 1, 2, 4, and 6 kHz were available for these listeners from a previous study. The dataset included TMCs for masker frequencies equal to the probe frequencies plus reference TMCs measured using a high-frequency probe and a low, off-frequency masker. All of the TMCs were fitted using linear regression, and the resulting slope and intercept values were taken as indicative of masking recovery and masker level, respectively. Results for on-frequency TMCs suggest that forward-masking recovery is generally independent of probe frequency and of masker level and hence that it would be reasonable to use a reference TMC for a high-frequency probe to infer cochlear compression at lower frequencies. Results further show, however, that reference TMCs were sometimes shallower than corresponding on-frequency TMCs for identical probe frequencies, hence that compression could be overestimated in these cases. We discuss possible reasons for this result and the conditions when it might occur.
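The TMC fitting step described above (straight-line fits whose slopes index forward-masking recovery) can be sketched in a few lines. The gap and masker-level values below are invented for illustration; they are not data from the study.

```python
import numpy as np

# Hypothetical TMC for one probe frequency: masker-probe gaps (ms) and
# masker levels at threshold (dB SPL). Values are illustrative only.
gaps_ms = np.array([10, 20, 30, 40, 50, 60])
masker_db = np.array([55.0, 58.1, 61.3, 63.9, 67.2, 70.1])

# As in the study, fit each TMC with linear regression: the slope (dB/ms)
# is taken as indicative of the rate of recovery from forward masking,
# and the intercept as indicative of overall masker level.
slope, intercept = np.polyfit(gaps_ms, masker_db, 1)

print(f"recovery slope: {slope:.2f} dB/ms")  # ~0.30 dB/ms for these values
print(f"intercept:      {intercept:.1f} dB")
```

Comparing such slopes across on-frequency and reference (off-frequency) TMCs is what lets the method flag cases where compression might be over- or underestimated.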
Affiliation(s)
- Patricia Pérez-González: Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain; Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Salamanca, Spain
- Peter T Johannesen: Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain; Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Salamanca, Spain
- Enrique A Lopez-Poveda: Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain; Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Salamanca, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca, Spain
28
Lopez-Poveda EA. Why do I hear but not understand? Stochastic undersampling as a model of degraded neural encoding of speech. Front Neurosci 2014; 8:348. PMID: 25400543; PMCID: PMC4214224; DOI: 10.3389/fnins.2014.00348
Abstract
Hearing impairment is a serious disease with increasing prevalence. It is defined by increased audiometric thresholds, but these are only partly responsible for the greater difficulty understanding speech in noisy environments experienced by some older listeners or by hearing-impaired listeners. Identifying the additional factors and mechanisms that impair intelligibility is fundamental to understanding hearing impairment but these factors remain uncertain. Traditionally, these additional factors have been sought in the way the speech spectrum is encoded in the pattern of impaired mechanical cochlear responses. Recent studies, however, are steering the focus toward impaired encoding of the speech waveform in the auditory nerve. In our recent work, we presented evidence that a significant factor might be the loss of afferent auditory nerve fibers, a pathology that comes with aging or noise overexposure. Our approach was based on a signal-processing analogy whereby the auditory nerve may be regarded as a stochastic sampler of the sound waveform and deafferentation may be described in terms of waveform undersampling. We showed that stochastic undersampling simultaneously degrades the encoding of soft and rapid waveform features, and that this degrades speech intelligibility in noise more than in quiet without significant increases in audiometric thresholds. Here, we review our recent work in a broader context and argue that the stochastic undersampling analogy may be extended to study the perceptual consequences of various hearing pathologies and their treatment.
Affiliation(s)
- Enrique A. Lopez-Poveda: Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain; Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca, Spain
29
Johannesen PT, Pérez-González P, Lopez-Poveda EA. Across-frequency behavioral estimates of the contribution of inner and outer hair cell dysfunction to individualized audiometric loss. Front Neurosci 2014; 8:214. PMID: 25100940; PMCID: PMC4108034; DOI: 10.3389/fnins.2014.00214
Abstract
Identifying the multiple contributors to the audiometric loss of a hearing impaired (HI) listener at a particular frequency is becoming gradually more useful as new treatments are developed. Here, we infer the contribution of inner (IHC) and outer hair cell (OHC) dysfunction to the total audiometric loss in a sample of 68 hearing aid candidates with mild-to-severe sensorineural hearing loss, and for test frequencies of 0.5, 1, 2, 4, and 6 kHz. It was assumed that the audiometric loss (HLTOTAL) at each test frequency was due to a combination of cochlear gain loss, or OHC dysfunction (HLOHC), and inefficient IHC processes (HLIHC), all of them in decibels. HLOHC and HLIHC were estimated from cochlear I/O curves inferred psychoacoustically using the temporal masking curve (TMC) method. 325 I/O curves were measured and 59% of them showed a compression threshold (CT). The analysis of these I/O curves suggests that (1) HLOHC and HLIHC account on average for 60-70% and 30-40% of HLTOTAL, respectively; (2) these percentages are roughly constant across frequencies; (3) across-listener variability is large; (4) residual cochlear gain is negatively correlated with hearing loss while residual compression is not correlated with hearing loss. Altogether, the present results support the conclusions from earlier studies and extend them to a wider range of test frequencies and hearing-loss ranges. Twenty-four percent of I/O curves were linear and suggested total cochlear gain loss. The number of linear I/O curves increased gradually with increasing frequency. The remaining 17% of I/O curves suggested audiometric losses due mostly to IHC dysfunction and were more frequent at low (≤1 kHz) than at high frequencies. It is argued that in a majority of listeners, hearing loss is due to a common mechanism that concomitantly alters IHC and OHC function and that IHC processes may be more labile in the apex than in the base.
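The additive-in-decibels model stated in this abstract (HLTOTAL = HLOHC + HLIHC, with HLOHC taken as the estimated loss of cochlear gain) can be illustrated with a toy calculation. The numbers and the clamp on the OHC term are illustrative assumptions, not the paper's data or exact procedure.

```python
def decompose_audiometric_loss(hl_total_db: float, gain_loss_db: float) -> dict:
    """Split a total audiometric loss (dB) into OHC and IHC components under
    the additive model HL_TOTAL = HL_OHC + HL_IHC, where HL_OHC is taken as
    the estimated loss of cochlear gain (clamped so it cannot exceed the
    total -- an assumption for this sketch)."""
    hl_ohc = min(gain_loss_db, hl_total_db)
    hl_ihc = hl_total_db - hl_ohc
    return {"HL_OHC": hl_ohc, "HL_IHC": hl_ihc}

# Hypothetical case: a 50-dB audiometric loss with 32 dB of estimated gain loss
parts = decompose_audiometric_loss(50.0, 32.0)
print(parts)  # OHC part is 64% of the total, IHC 36%, near the reported averages
```

With these illustrative numbers, the OHC and IHC shares (64% and 36%) fall within the 60-70% and 30-40% average ranges the study reports.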
Affiliation(s)
- Peter T. Johannesen: Auditory Computation and Psychoacoustics, Instituto de Neurociencias de Castilla y León, University of Salamanca, Salamanca, Spain; Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, University of Salamanca, Salamanca, Spain
- Patricia Pérez-González: Auditory Computation and Psychoacoustics, Instituto de Neurociencias de Castilla y León, University of Salamanca, Salamanca, Spain; Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, University of Salamanca, Salamanca, Spain
- Enrique A. Lopez-Poveda: Auditory Computation and Psychoacoustics, Instituto de Neurociencias de Castilla y León, University of Salamanca, Salamanca, Spain; Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, University of Salamanca, Salamanca, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca, Spain
30
Alves-Pinto A, Palmer AR, Lopez-Poveda EA. Perception and coding of high-frequency spectral notches: potential implications for sound localization. Front Neurosci 2014; 8:112. PMID: 24904258; PMCID: PMC4034511; DOI: 10.3389/fnins.2014.00112
Abstract
The interaction of sound waves with the human pinna introduces high-frequency notches (5-10 kHz) in the stimulus spectrum that are thought to be useful for vertical sound localization. A common view is that these notches are encoded as rate profiles in the auditory nerve (AN). Here, we review previously published psychoacoustical evidence in humans and computer-model simulations of inner hair cell responses to noises with and without high-frequency spectral notches that dispute this view. We also present new recordings from guinea pig AN and "ideal observer" analyses of these recordings that suggest that discrimination between noises with and without high-frequency spectral notches is probably based on the information carried in the temporal pattern of AN discharges. The exact nature of the neural code involved remains nevertheless uncertain: computer model simulations suggest that high-frequency spectral notches are encoded in spike timing patterns that may be operant in the 4-7 kHz frequency regime, while "ideal observer" analysis of experimental neural responses suggest that an effective cue for high-frequency spectral discrimination may be based on sampling rates of spike arrivals of AN fibers using non-overlapping time binwidths of between 4 and 9 ms. Neural responses show that sensitivity to high-frequency notches is greater for fibers with low and medium spontaneous rates than for fibers with high spontaneous rates. Based on this evidence, we conjecture that inter-subject variability at high-frequency spectral notch detection and, consequently, at vertical sound localization may partly reflect individual differences in the available number of functional medium- and low-spontaneous-rate fibers.
Affiliation(s)
- Ana Alves-Pinto: Klinikum rechts der Isar, Technische Universität München, Munich, Germany
- Alan R. Palmer: Medical Research Council Institute of Hearing Research, University Park, Nottingham, UK
- Enrique A. Lopez-Poveda: Departamento de Cirugía, Facultad de Medicina, Instituto de Neurociencias de Castilla y León, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
31
Lopez-Poveda EA, Barrios P. Perception of stochastically undersampled sound waveforms: a model of auditory deafferentation. Front Neurosci 2013; 7:124. PMID: 23882176; PMCID: PMC3712141; DOI: 10.3389/fnins.2013.00124
Abstract
Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects.
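A minimal single-band sketch of the stochastic-undersampling idea follows. The intensity-dependent keep probability, the sampling rate parameter, and the use of a pure tone are all assumptions for illustration; the authors' actual vocoder operated on ten band-filtered signals and then aggregated the subsampled copies per band.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_sampler(x, rate=0.3):
    """One 'afferent': keep each waveform sample with a probability that
    grows with instantaneous intensity (spikes are more likely for
    high-intensity features), zeroing the rest."""
    p = np.clip(rate * np.abs(x) / (np.abs(x).max() + 1e-12), 0.0, 1.0)
    return np.where(rng.random(x.size) < p, x, 0.0)

def aggregate(x, n_fibers):
    """Average several independent samplers, as the whole nerve aggregates
    many spike trains; more fibers should give a more faithful waveform."""
    return np.mean([stochastic_sampler(x) for _ in range(n_fibers)], axis=0)

# 1-kHz tone, 16-kHz sampling rate, 50 ms long
t = np.arange(0, 0.05, 1 / 16000)
tone = np.sin(2 * np.pi * 1000 * t)

few = np.corrcoef(aggregate(tone, 2), tone)[0, 1]    # 'deafferented' nerve
many = np.corrcoef(aggregate(tone, 50), tone)[0, 1]  # well-afferented nerve
print(f"correlation with original: 2 fibers {few:.2f}, 50 fibers {many:.2f}")
```

With more samplers the aggregate correlates more strongly with the original waveform, mirroring the hypothesis that deafferentation (fewer aggregated spike trains) degrades the encoding of the sound waveform.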
Affiliation(s)
- Enrique A Lopez-Poveda: Unidad de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain; Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Salamanca, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca, Spain
32
Aguilar E, Eustaquio-Martin A, Lopez-Poveda EA. Contralateral efferent reflex effects on threshold and suprathreshold psychoacoustical tuning curves at low and high frequencies. J Assoc Res Otolaryngol 2013; 14:341-57. PMID: 23423559; PMCID: PMC3642277; DOI: 10.1007/s10162-013-0373-4
Abstract
Medial olivocochlear efferent neurons can control cochlear frequency selectivity and may be activated in a reflexive manner by contralateral sounds. The present study investigated the significance of the contralateral medial olivocochlear reflex (MOCR) on human psychoacoustical tuning curves (PTCs), a behavioral correlate of cochlear tuning curves. PTCs were measured using forward masking in the presence and in the absence of a contralateral white noise, assumed to elicit the MOCR. To assess MOCR effects on apical and basal cochlear regions over a wide range of sound levels, PTCs were measured for probe frequencies of 500 Hz and 4 kHz and for near- and suprathreshold conditions. Results show that the contralateral noise affected the PTCs predominantly at 500 Hz. At near-threshold levels, its effect was obvious only for frequencies in the tails of the PTCs; at suprathreshold levels, its effects were obvious for all frequencies. It was verified that the effects were not due to the contralateral noise activating the middle-ear muscle reflex or changing the postmechanical rate of recovery from forward masking. A phenomenological computer model of forward masking with efferent control was used to explain the data. The model supports the hypothesis that the behavioral results were due to the contralateral noise reducing apical cochlear gain in a frequency- and level-dependent manner consistent with physiological evidence. Altogether, this shows that the contralateral MOCR may be changing apical cochlear responses in natural, binaural listening situations.
Affiliation(s)
- Enzo Aguilar: Instituto de Neurociencias de Castilla y León and Instituto de Investigaciones Biomédicas de Salamanca, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, 37007 Salamanca, Spain
- Almudena Eustaquio-Martin: Instituto de Neurociencias de Castilla y León and Instituto de Investigaciones Biomédicas de Salamanca, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, 37007 Salamanca, Spain
- Enrique A. Lopez-Poveda: Instituto de Neurociencias de Castilla y León and Instituto de Investigaciones Biomédicas de Salamanca, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, 37007 Salamanca, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, 37007 Salamanca, Spain
33
Lopez-Poveda EA, Eustaquio-Martin A. On the controversy about the sharpness of human cochlear tuning. J Assoc Res Otolaryngol 2013; 14:673-86. PMID: 23690279; DOI: 10.1007/s10162-013-0397-9
Abstract
In signal processing terms, the operation of the mammalian cochlea in the inner ear may be likened to a bank of filters. Based on otoacoustic emission evidence, it has been recently claimed that cochlear tuning is sharper for humans than for other mammals. The claim was corroborated with a behavioral method that involves the masking of pure tones with forward notched noises (NN). Using this method, it has been further claimed that human cochlear tuning is sharper than suggested by earlier behavioral studies. These claims are controversial. Here, we contribute to the controversy by theoretically assessing the accuracy of the NN method at inferring the bandwidth (BW) of nonlinear cochlear filters. Behavioral forward masking was mimicked using a computer model of the squared basilar membrane response followed by a temporal integrator. Isoresponse and isolevel versions of the forward masking NN method were applied to infer the already known BW of the cochlear filter used in the model. We show that isolevel methods were overall more accurate than isoresponse methods. We also show that BWs for NNs and sinusoids equate only for isolevel methods and when the levels of the two stimuli are appropriately scaled. Lastly, we show that the inferred BW depends on the method version (isolevel BW was twice as broad as isoresponse BW at 40 dB SPL) and on the stimulus level (isoresponse and isolevel BW decreased and increased, respectively, with increasing level over the level range where cochlear responses went from linear to compressive). We suggest that the latter may contribute to explaining the reported differences in cochlear tuning across behavioral studies and species. We further suggest that given the well-established nonlinear nature of cochlear responses, even greater care must be exercised when using a single BW value to describe and compare cochlear tuning.
Affiliation(s)
- Enrique A Lopez-Poveda: Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, 37007 Salamanca, Spain
34
Lopez-Poveda EA, Aguilar E, Johannesen PT, Eustaquio-Martín A. Contralateral efferent regulation of human cochlear tuning: behavioural observations and computer model simulations. Adv Exp Med Biol 2013; 787:47-54. PMID: 23716208; DOI: 10.1007/978-1-4614-1590-9_6
Abstract
In binaural listening, the two cochleae do not act as independent sound receptors; their functioning is linked via the contralateral medial olivo-cochlear reflex (MOCR), which can be activated by contralateral sounds. The present study aimed at characterizing the effect of a contralateral white noise (CWN) on psychophysical tuning curves (PTCs). PTCs were measured in forward masking for probe frequencies of 500 Hz and 4 kHz, with and without CWN. The sound pressure level of the probe was fixed across conditions. PTCs for different response criteria were measured using various masker-probe time gaps. The CWN had no significant effects on PTCs at 4 kHz. At 500 Hz, by contrast, PTCs measured with CWN appeared broader, particularly for short gaps, and they showed a decrease in the masker level. This decrease was greater the longer the masker-probe time gap. A computer model of forward masking with efferent control of cochlear gain was used to explain the data. The model accounted for the data based on the assumption that the sole effect of the CWN was to reduce the cochlear gain by ∼6.5 dB at 500 Hz for low and moderate levels. It also suggested that the pattern of data at 500 Hz is the result of combined broad bandwidth of compression and off-frequency listening. Results are discussed in relation to other physiological and psychoacoustical studies on the effects of MOCR activation on cochlear function.
Affiliation(s)
- Enrique A Lopez-Poveda: Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
35
Lopez-Poveda EA, Johannesen PT. Behavioral estimates of the contribution of inner and outer hair cell dysfunction to individualized audiometric loss. J Assoc Res Otolaryngol 2012; 13:485-504. PMID: 22526735; DOI: 10.1007/s10162-012-0327-2
Abstract
Differentiating the relative importance of the various contributors to the audiometric loss (HL(TOTAL)) of a given hearing impaired listener and frequency region is becoming critical as more specific treatments are being developed. The aim of the present study was to assess the relative contribution of inner (IHC) and outer hair cell (OHC) dysfunction (HL(IHC) and HL(OHC), respectively) to the audiometric loss of patients with mild to moderate cochlear hearing loss. It was assumed that HL(TOTAL) = HL(OHC) + HL(IHC) (all in decibels) and that HL(OHC) may be estimated as the reduction in maximum cochlear gain. It is argued that the latter may be safely estimated from compression threshold shifts of cochlear input/output (I/O) curves relative to normal hearing references. I/O curves were inferred behaviorally using forward masking for 26 test frequencies in 18 hearing impaired listeners. Data suggested that the audiometric loss for six of these 26 test frequencies was consistent with pure OHC dysfunction, one was probably consistent with pure IHC dysfunction, 13 were indicative of mixed IHC and OHC dysfunction, and five were uncertain (one more was excluded from the analysis). HL(OHC) and HL(IHC) contributed on average 60 and 40 %, respectively, to the audiometric loss, but variability was large across cases. Indeed, in some cases, HL(IHC) was up to 63 % of HL(TOTAL), even for moderate losses. The repeatability of the results is assessed using Monte Carlo simulations and potential sources of bias are discussed.
Affiliation(s)
- Enrique A Lopez-Poveda: Unidad de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, IBSAL, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, 37007 Salamanca, Spain
36
Eustaquio-Martín A, Lopez-Poveda EA. Isoresponse versus isoinput estimates of cochlear filter tuning. J Assoc Res Otolaryngol 2010; 12:281-99. PMID: 21104288; DOI: 10.1007/s10162-010-0252-1
Abstract
The tuning of a linear filter may be inferred from the filter's isoresponse (e.g., tuning curves) or isoinput (e.g., isolevel curves) characteristics. This paper provides a theoretical demonstration that for nonlinear filters with compressive response characteristics like those of the basilar membrane, isoresponse measures can suggest strikingly sharper tuning than isoinput measures. The practical significance of this phenomenon is demonstrated by inferring the 3-dB-down bandwidths (BW(3dB)) of human auditory filters at 500 and 4,000 Hz from behavioral isoresponse and isoinput measures obtained with sinusoidal and notched noise forward maskers. Inferred cochlear responses were compressive for the two types of maskers. Consistent with expectations, low-level BW(3dB) estimates obtained from isoresponse conditions were considerably narrower than those obtained from isolevel conditions: 69 vs. 174 Hz, respectively, at 500 Hz, and 280 vs. 464 Hz, respectively, at 4,000 Hz. Furthermore, isoresponse BW(3dB) decreased with increasing level while corresponding isolevel estimates remained approximately constant at 500 Hz or increased slightly at 4 kHz. It is suggested that comparisons between isoresponse supra-threshold human tuning and threshold animal neural tuning should be made with caution.
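The compression argument in this abstract can be made concrete with a toy filter. A Gaussian gain profile on a log-frequency axis followed by a compressive input-output stage is an assumption for illustration, not the authors' basilar-membrane model, and all parameter values are invented.

```python
import numpy as np

# Toy compressive filter: output (dB) = c * (input_dB + G(f)), with
# compression exponent c < 1 and a Gaussian gain profile G on a
# log-frequency (octave) axis. Illustrative parameters:
fc, a, w, c = 4000.0, 40.0, 0.5, 0.2  # centre (Hz), gain depth (dB), width (oct), compression

def octaves_for_drop(drop_db):
    """Distance in octaves from fc at which G(f) has fallen by drop_db."""
    return w * np.sqrt(drop_db / a)

def bw_hz(drop_db):
    d = octaves_for_drop(drop_db)
    return fc * (2 ** d - 2 ** (-d))

# Isoresponse (tuning-curve) BW: the input need only rise 3 dB to restore a
# fixed output, so G need only drop 3 dB.
bw_isoresponse = bw_hz(3.0)
# Isoinput (isolevel) BW: the compressed output must drop 3 dB at fixed
# input, so G must drop 3 / c = 15 dB.
bw_isoinput = bw_hz(3.0 / c)

print(f"isoresponse BW: {bw_isoresponse:.0f} Hz, isoinput BW: {bw_isoinput:.0f} Hz")
# The same filter looks roughly sqrt(1/c) ~ 2.2x sharper under the
# isoresponse measure, in line with the roughly twofold difference reported.
```

This is why, for compressive filters, isoresponse measures can suggest strikingly sharper tuning than isoinput measures even though the underlying filter is identical.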
Affiliation(s)
- Almudena Eustaquio-Martín: Unidad de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca, Spain
37
Johannesen PT, Lopez-Poveda EA. Correspondence between behavioral and individually "optimized" otoacoustic emission estimates of human cochlear input/output curves. J Acoust Soc Am 2010; 127:3602-3613. PMID: 20550260; DOI: 10.1121/1.3377087
Abstract
Previous studies have shown a high within-subject correspondence between distortion product otoacoustic emission (DPOAE) input/output (I/O) curves and behaviorally inferred basilar membrane (BM) I/O curves for frequencies above approximately 2 kHz. For lower frequencies, DPOAE I/O curves contained notches and plateaus that did not have a counterpart in corresponding behavioral curves. It was hypothesized that this might improve by using individualized optimal DPOAE primary levels. Here, data from previous studies are re-analyzed to test this hypothesis by comparing behaviorally inferred BM I/O curves and DPOAE I/O curves measured with well-established group-average primary levels and two individualized primary level rules: one optimized to maximize DPOAE levels and one intended for primaries to evoke comparable BM responses at the f(2) cochlear region. Test frequencies were 0.5, 1, and 4 kHz. Behavioral I/O curves were obtained from temporal (forward) masking curves. Results showed high within-subject correspondence between behavioral and DPOAE I/O curves at 4 kHz only, regardless of the primary level rule. Plateaus and notches were equally common in low-frequency DPOAE I/O curves for individualized and group-average DPOAE primary levels at 0.5 and 1 kHz. Results are discussed in terms of the adequacy of DPOAE I/O curves for inferring individual cochlear nonlinearity characteristics.
Affiliation(s)
- Peter T Johannesen
- Unidad de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, 37007 Salamanca, Spain
38
Lopez-Poveda EA, Johannesen PT, Merchán MA. Estimation of the degree of inner and outer hair cell dysfunction from distortion product otoacoustic emission input/output functions. Acta Acust United Ac 2009. [DOI: 10.1080/16513860802622491]
39
Johannesen PT, Lopez-Poveda EA. Cochlear nonlinearity in normal-hearing subjects as inferred psychophysically and from distortion-product otoacoustic emissions. J Acoust Soc Am 2008; 124:2149-2163. [PMID: 19062855] [DOI: 10.1121/1.2968692]
Abstract
The aim was to investigate the correlation between compression exponent, compression threshold, and cochlear gain for normal-hearing subjects as inferred from temporal masking curves (TMCs) and distortion-product otoacoustic emission (DPOAE) input/output (I/O) curves. Care was given to reduce the influence of DPOAE fine structure on the DPOAE I/O curves. A high correlation between compression exponent estimates obtained with the two methods was found at 4 kHz but not at 0.5 and 1 kHz. One reason is that the DPOAE I/O curves show plateaus or notches that result in unexpectedly high compression estimates. Moderately high correlation was found between compression threshold estimates obtained with the two methods, although DPOAE-based values were around 7 dB lower than those based on TMCs. Both methods show that compression exponent and threshold are approximately constant across the frequency range from 0.5 to 4 kHz. Cochlear gain as estimated from TMCs was found to be approximately 16 dB greater at 4 than at 0.5 kHz. In conclusion, DPOAEs and TMCs may be used interchangeably to infer precise individual nonlinear cochlear characteristics at 4 kHz, but it remains unclear whether the same applies to lower frequencies.
Affiliation(s)
- Peter T Johannesen
- Unidad de Audicion Computacional y Psicoacustica, Instituto de Neurociencias de Castilla y Leon, Universidad de Salamanca, 37007 Salamanca, Spain
40
Alves-Pinto A, Lopez-Poveda EA. Psychophysical assessment of the level-dependent representation of high-frequency spectral notches in the peripheral auditory system. J Acoust Soc Am 2008; 124:409-421. [PMID: 18646986] [DOI: 10.1121/1.2920957]
Abstract
To discriminate between broadband noises with and without a high-frequency spectral notch is more difficult at 70-80 dB sound pressure level than at lower or higher levels [Alves-Pinto, A. and Lopez-Poveda, E. A. (2005). "Detection of high-frequency spectral notches as a function of level," J. Acoust. Soc. Am. 118, 2458-2469]. One possible explanation is that the notch is less clearly represented internally at 70-80 dB SPL than at any other level. To test this hypothesis, forward-masking patterns were measured for flat-spectrum and notched noise maskers for masker levels of 50, 70, 80, and 90 dB SPL. Masking patterns were measured in two conditions: (1) fixing the masker-probe time interval at 2 ms and (2) varying the interval to achieve similar masked thresholds for different masker levels. The depth of the spectral notch remained approximately constant in the fixed-interval masking patterns and gradually decreased with increasing masker level in the variable-interval masking patterns. This difference probably reflects the effects of peripheral compression. These results are inconsistent with the nonmonotonic level-dependent performance in spectral discrimination. Assuming that a forward-masking pattern is a reasonable psychoacoustical correlate of the auditory-nerve rate-profile representation of the stimulus spectrum, these results undermine the common view that high-frequency spectral notches must be encoded in the rate-profile of auditory-nerve fibers.
Affiliation(s)
- Ana Alves-Pinto
- Unidad de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Avenida Alfonso X "El Sabio" s/n, 37007 Salamanca, Spain.
41
Abstract
Recent studies have suggested that the degree of on-frequency peripheral auditory compression is similar for apical and basal cochlear sites and that compression extends to a wider range of frequencies in apical than in basal sites. These conclusions were drawn from the analysis of the slopes of temporal masking curves (TMCs) on the assumption that forward masking decays at the same rate for all probe and masker frequencies. The aim here was to verify this conclusion using a different assumption. TMCs for normal hearing listeners were measured for probe frequencies (f(P)) of 500 and 4000 Hz and for masker frequencies (f(M)) of 0.4, 0.55, and 1.0 times the probe frequency. TMCs were measured for probes of 9 and 15 dB sensation level. The assumption was that given a 6 dB increase in probe level, linear cochlear responses to the maskers should lead to a 6 dB vertical shift of the corresponding TMCs, while compressive responses should lead to bigger shifts. Results were consistent with the conclusions from earlier studies. It is argued that this supports the assumptions of the standard TMC method for inferring compression, at least in normal-hearing listeners.
Affiliation(s)
- Enrique A Lopez-Poveda
- Unidad de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, 37007 Salamanca, Spain.
42
Lopez-Poveda EA, Alves-Pinto A, Palmer AR, Eustaquio-Martín A. Rate versus time representation of high-frequency spectral notches in the peripheral auditory system: A computational modeling study. Neurocomputing 2008. [DOI: 10.1016/j.neucom.2007.07.030]
43
Lopez-Najera A, Lopez-Poveda EA, Meddis R. Further studies on the dual-resonance nonlinear filter model of cochlear frequency selectivity: responses to tones. J Acoust Soc Am 2007; 122:2124-34. [PMID: 17902850] [DOI: 10.1121/1.2769627]
Abstract
A number of phenomenological models that simulate basilar membrane motion can reproduce a range of complex features observed in animal measurements at different sites along the cochlea. The present report shows a detailed analysis of the responses to tones of an improved model based on a dual-resonance nonlinear filter. The improvement consists of adding a third path formed by a linear gain and an all-pass filter, which allows the model to reproduce the gain and phase plateaus observed empirically at frequencies above the best frequency. The middle ear was simulated by using a digital filter based on the empirical impulse response of the chinchilla stapes. The improved algorithm is evaluated against observations of basilar membrane responses to tones at seven different sites along the chinchilla cochlear partition. This is the first time that a whole set of animal observations using the same technique has been available in one species for modeling. The resulting model was able to simulate amplitude and phase responses to tones from basal to apical sites. Linear regression across the optimized parameters for the seven sites was used to generate a complete filterbank.
Affiliation(s)
- Alberto Lopez-Najera
- Facultad de Medicina, Universidad de Castilla-La Mancha, C/ Almansa, No. 14, 02006 Albacete, Spain.
44
Lopez-Poveda EA, Barrios LF, Alves-Pinto A. Psychophysical estimates of level-dependent best-frequency shifts in the apical region of the human basilar membrane. J Acoust Soc Am 2007; 121:3646-54. [PMID: 17552716] [DOI: 10.1121/1.2722046]
Abstract
It is now undisputed that the best frequency (BF) of basal basilar-membrane (BM) sites shifts downwards as the stimulus level increases. The direction of the shift for apical sites is, by contrast, less well established. Auditory nerve studies suggest that the BF shifts in opposite directions for apical and basal BM sites with increasing stimulus level. This study attempts to determine if this is the case in humans. Psychophysical tuning curves (PTCs) were measured using forward masking for probe frequencies of 125, 250, 500, and 6000 Hz. The level of a masker tone required to just mask a fixed low-level probe tone was measured for different masker-probe time intervals. The duration of the intervals was adjusted as necessary to obtain PTCs for the widest possible range of masker levels. The BF was identified from function fits to the measured PTCs and it almost always decreased with increasing level. This result is inconsistent with most auditory-nerve observations obtained from other mammals. Several explanations are discussed, including that it may be erroneous to assume that low-frequency PTCs reflect the tuning of apical BM sites exclusively and that the inherent frequency response of the inner hair cell may account for the discrepancy.
Affiliation(s)
- Enrique A Lopez-Poveda
- Unidad de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Av. Alfonso X El Sabio s/n, 37007 Salamanca, Spain.
45
Lopez-Poveda EA, Eustaquio-Martín A. A biophysical model of the inner hair cell: the contribution of potassium currents to peripheral auditory compression. J Assoc Res Otolaryngol 2006; 7:218-35. [PMID: 16718614] [PMCID: PMC2504609] [DOI: 10.1007/s10162-006-0037-8] [Received: 02/03/2006] [Accepted: 04/02/2006]
Abstract
The term peripheral auditory compression refers to the fact that the whole range of audible sound pressure levels is mapped into a narrower range of auditory nerve responses. Peripheral compression is the by-product of independent compressive processes occurring at the level of the basilar membrane, the inner hair cell (IHC), and the auditory nerve synapse. Here, an electrical-circuit equivalent of an IHC is used to look into the compression contributed by the IHC. The model includes a mechanically driven transducer potassium (K(+)) conductance and two time- and voltage-dependent basolateral K(+) conductances: one with fast and one with slow kinetics. Special attention is paid to faithfully implement the activation kinetics of these basolateral conductances. Optimum model parameters are provided to account for previously reported in vitro observations that demonstrate the compression associated with the gating of the transducer and of the basolateral channels. Without having to readjust its parameters, the model also accounts for the in vivo nonlinear IHC transfer characteristics. Model simulations are then used to investigate the relative contribution of the transducer and basolateral K(+) currents to the nonlinear IHC input/output functions in vivo. The simulations suggest that the voltage-dependent activation of the basolateral currents compresses the DC potential for stereocilia displacements above approximately 5 nm. The degree of compression exceeds 2-to-1 and is similar for all stimulation frequencies. The AC potential is compressed in a similar way, but only for frequencies below 800 Hz. The simulations further suggest that the nonlinear gating of the transducer current is responsible for the expansive growth of the DC potential with increasing sound level (slope of 2 dB/dB) at low sound pressure levels.
Affiliation(s)
- Enrique A Lopez-Poveda
- Unidad de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Avenida Alfonso X "El Sabio" s/n, 37007 Salamanca, Spain.
46
Abstract
Psychophysical estimates of basilar membrane (BM) responses suggest that normal-hearing (NH) listeners exhibit constant compression for tones at the characteristic frequency (CF) across the CF range from 250 to 8000 Hz. The frequency region over which compression occurs is broadest for low CFs. This study investigates the extent that these results differ for three hearing-impaired (HI) listeners with sensorineural hearing loss. Temporal masking curves (TMCs) were measured over a wide range of probe (500-8000 Hz) and masker frequencies (0.5-1.2 times the probe frequency). From these, estimated BM response functions were derived and compared with corresponding functions for NH listeners. Compressive responses for tones both at and below CF occur for the three HI ears across the CF range tested. The maximum amount of compression was uncorrelated with absolute threshold. It was close to normal for two of the three HI ears, but was either slightly (at CFs < or =1000 Hz) or considerably (at CFs > or =4000 Hz) reduced for the third ear. Results are interpreted in terms of the relative damage to inner and outer hair cells affecting each of the HI ears. Alternative interpretations for the results are also discussed, some of which cast doubts on the assumptions of the TMC-based method and other behavioral methods for estimating human BM compression.
MESH Headings
- Acoustic Stimulation
- Adult
- Aged
- Auditory Threshold/physiology
- Basilar Membrane/physiopathology
- Female
- Hair Cells, Auditory, Inner/pathology
- Hair Cells, Auditory, Inner/physiopathology
- Hair Cells, Auditory, Outer/pathology
- Hair Cells, Auditory, Outer/physiopathology
- Hearing Loss, Sensorineural/pathology
- Hearing Loss, Sensorineural/physiopathology
- Humans
- Loudness Perception/physiology
- Male
- Middle Aged
- Perceptual Masking
- Psychometrics
Affiliation(s)
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Avenida Alfonso X El Sabio s/n, 37007 Salamanca, Spain.
47
Merchán M, Aguilar LA, Lopez-Poveda EA, Malmierca MS. The inferior colliculus of the rat: quantitative immunocytochemical study of GABA and glycine. Neuroscience 2006; 136:907-25. [PMID: 16344160] [DOI: 10.1016/j.neuroscience.2004.12.030] [Received: 10/18/2004] [Revised: 12/22/2004] [Accepted: 12/30/2004]
Abstract
Both GABA- and glycine (Gly)-containing neurons send inhibitory projections to the inferior colliculus (IC), whereas inhibitory neurons within the IC are primarily GABAergic. To date, however, a quantitative description of the topographic distribution of GABAergic neurons in the rat's IC and their GABAergic or glycinergic inputs is lacking. Accordingly, here we present detailed maps of GABAergic and glycinergic neurons and terminals in the rat's IC. Semithin serial sections of the IC were obtained and stained for GABA and Gly. Images of the tissue were digitized and used for a quantitative densitometric analysis of GABA immunostaining. The optical density, perimeter, and number of GABA- and Gly-immunoreactive boutons apposed to the somata were measured. Data analysis included comparisons across IC subdivisions and across frequency regions within the central nucleus of the IC. The results show that: 1) 25% of the IC neurons are GABAergic; 2) there are more GABAergic neurons in the central nucleus of the IC than previously estimated; 3) GABAergic neurons are larger than non-GABAergic neurons; 4) GABAergic neurons receive fewer GABA and glycine puncta than non-GABAergic neurons; 5) differences across frequency regions are minor, except that the non-GABAergic neurons from high-frequency regions are larger than their counterparts in low-frequency regions; 6) differences within the laminae are greater along the dorsomedial-ventrolateral axis than along the rostrocaudal axis; 7) GABAergic and non-GABAergic neurons receive different numbers of puncta in different IC subdivisions; and 8) GABAergic puncta are both apposed to the somata and in the neuropil, whereas glycinergic puncta are mostly confined to the neuropil.
Affiliation(s)
- M Merchán
- Laboratory for the Neurobiology of Hearing, Department of Cell Biology and Pathology, Faculty of Medicine, University of Salamanca, Salamanca, Spain
48
Abstract
High-frequency spectral notches are important cues for sound localization. Our ability to detect them must depend on their representation as auditory nerve (AN) rate profiles. Because of the low threshold and the narrow dynamic range of most AN fibers, these rate profiles deteriorate at high levels. The system may compensate by using onset rate profiles whose dynamic range is wider, or by using low-spontaneous-rate fibers, whose threshold is higher. To test these hypotheses, the threshold notch depth necessary to discriminate between a flat spectrum broadband noise and a similar noise with a spectral notch centered at 8 kHz was measured at levels from 32 to 100 dB SPL. The importance of the onset rate-profile representation of the notch was estimated by varying the stimulus duration and its rise time. For a large proportion of listeners, threshold notch depth varied nonmonotonically with level, increasing for levels up to 70-80 dB SPL and decreasing thereafter. The nonmonotonic aspect of the function was independent of notch bandwidth and stimulus duration. Thresholds were independent of stimulus rise time but increased for the shorter noise bursts. Results are discussed in terms of the ability of the AN to convey spectral notch information at different levels.
Affiliation(s)
- Ana Alves-Pinto
- Unidad de Computación Auditiva y Psicoacústica: Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Avenida Alfonso X El Sabio, Salamanca, Spain
49
Abstract
Two new approaches to the design of speech processors for cochlear implants are described. The first aims to represent "fine structure" or "fine frequency" information in a way that it can be perceived and used by patients, and the second aims to provide a closer mimicking than was previously possible of the signal processing that occurs in the normal cochlea.
Affiliation(s)
- Blake S Wilson
- RTI International, Research Triangle Park, North Carolina 27709, USA
50
Affiliation(s)
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca 37007, Spain