1. Fischer BJ, Shadron K, Ferger R, Peña JL. Single trial Bayesian inference by population vector readout in the barn owl's sound localization system. PLoS One. 2024;19:e0303843. doi: 10.1371/journal.pone.0303843. PMID: 38771860; PMCID: PMC11108143.
Abstract
Bayesian models have proven effective in characterizing perception, behavior, and neural encoding across diverse species and systems. The neural implementation of Bayesian inference in the barn owl's sound localization system and behavior has been previously explained by a non-uniform population code model. This model specifies the neural population activity pattern required for a population vector readout to match the optimal Bayesian estimate. While prior analyses focused on trial-averaged comparisons of model predictions with behavior and single-neuron responses, it remains unknown whether this model can accurately approximate Bayesian inference on single trials under varying sensory reliability, a fundamental condition for natural perception and behavior. In this study, we utilized mathematical analysis and simulations to demonstrate that decoding a non-uniform population code via a population vector readout approximates the Bayesian estimate on single trials for varying sensory reliabilities. Our findings provide additional support for the non-uniform population code model as a viable explanation for the barn owl's sound localization pathway and behavior.
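The population vector readout described here can be illustrated with a short sketch. This is a minimal illustration under assumed parameters (a Gaussian population activity profile, a 600-µs circular ITD range, Poisson spiking), not the authors' model code:

```python
import numpy as np

# Population vector (PV) readout sketch: neuron i has a preferred ITD and a
# firing rate; the decoded ITD is the rate-weighted circular mean of the
# preferred ITDs. The 600-us period and tuning parameters are assumed
# illustrative values, not taken from the paper.

def population_vector_readout(preferred_itd_us, rates, itd_range_us=600.0):
    """Decode ITD as the circular mean of preferred ITDs weighted by rates."""
    angles = np.asarray(preferred_itd_us) / itd_range_us * 2 * np.pi
    w = np.asarray(rates, dtype=float)
    x = np.sum(w * np.cos(angles))
    y = np.sum(w * np.sin(angles))
    return np.arctan2(y, x) / (2 * np.pi) * itd_range_us

# Single-trial decode: Poisson spike counts around a true ITD of +50 us.
rng = np.random.default_rng(0)
preferred = np.linspace(-250, 250, 51)                 # map of preferred ITDs
profile = np.exp(-0.5 * ((preferred - 50) / 40) ** 2)  # population activity profile
trial_counts = rng.poisson(20 * profile)               # one noisy trial
estimate = population_vector_readout(preferred, trial_counts)
```

Run over many simulated trials, the spread of `estimate` tracks the width of the population profile, which is the sense in which a non-uniform population code read out this way can approximate a Bayesian estimate trial by trial.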
Affiliation(s)
- Brian J. Fischer: Department of Mathematics, Seattle University, Seattle, Washington, United States of America
- Keanu Shadron: Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Roland Ferger: Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
- José L. Peña: Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
2. Shadron K, Peña JL. Development of frequency tuning shaped by spatial cue reliability in the barn owl's auditory midbrain. eLife. 2023;12:e84760. doi: 10.7554/eLife.84760. PMID: 37166099; PMCID: PMC10238092.
Abstract
Sensory systems preferentially strengthen responses to stimuli based on their reliability at conveying accurate information. While previous reports demonstrate that the brain reweights cues based on dynamic changes in reliability, how the brain may learn and maintain neural responses to sensory statistics expected to be stable over time is unknown. The barn owl's midbrain features a map of auditory space where neurons compute horizontal sound location from the interaural time difference (ITD). Frequency tuning of midbrain map neurons correlates with the most reliable frequencies for the neurons' preferred ITD (Cazettes et al., 2014). Removal of the facial ruff led to a specific decrease in the reliability of high frequencies from frontal space. To directly test whether permanent changes in ITD reliability drive frequency tuning, midbrain map neurons were recorded from adult owls with the facial ruff removed during development, and from juvenile owls before facial ruff development. In both groups, frontally tuned neurons were tuned to frequencies lower than in normal adult owls, consistent with the change in ITD reliability. In addition, juvenile owls exhibited more heterogeneous frequency tuning, suggesting that normal developmental processes refine tuning to match ITD reliability. These results indicate a causal role for the long-term statistics of spatial cues in the development of midbrain frequency tuning, supporting the implementation of probabilistic coding for sound localization.
Affiliation(s)
- Keanu Shadron: Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, United States
- José Luis Peña: Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, United States
3. Ferger R, Shadron K, Fischer BJ, Peña JL. Barn Owl's Auditory Space Map Activity Matching Conditions for a Population Vector Readout to Drive Adaptive Sound-Localizing Behavior. J Neurosci. 2021;41:10305-10315. doi: 10.1523/JNEUROSCI.1061-21.2021. PMID: 34764158; PMCID: PMC8672686.
Abstract
Space-specific neurons in the owl's midbrain form a neural map of auditory space, which supports sound-orienting behavior. Previous work proposed that a population vector (PV) readout of this map, implementing statistical inference, predicts the owl's sound localization behavior. This model also predicts the frontal localization bias normally observed and how sound-localizing behavior changes when the signal-to-noise ratio varies, based on the spread of activity across the map. However, the actual distribution of population activity, and whether this pattern is consistent with premises of the PV readout model on a trial-by-trial basis, remain unknown. To answer these questions, we investigated whether the population response profile across the midbrain map in the optic tectum of the barn owl matches these predictions using in vivo multielectrode array recordings. We found that response profiles of recorded subpopulations are sufficient for estimating the stimulus interaural time difference using responses from single trials. Furthermore, this decoder matches the expected differences in trial-by-trial variability and frontal bias between stimulus conditions of low and high signal-to-noise ratio. These results support the hypothesis that a PV readout of the midbrain map can mediate statistical inference in sound-localizing behavior of barn owls.

Significance Statement: While the tuning of single neurons in the owl's midbrain map of auditory space has been considered predictive of the highly specialized sound-localizing behavior of this species, response properties across the population remain largely unknown. For the first time, this study analyzed the spread of population responses across the map using multielectrode recordings, and how it changes with signal-to-noise ratio. The observed responses support the hypothesis that a population vector readout can predict biases in orienting behaviors and mediate uncertainty-dependent behavioral commands. The results are of significance for understanding potential mechanisms for the implementation of optimal behavioral commands across species.
Affiliation(s)
- Roland Ferger: Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461
- Keanu Shadron: Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461
- Brian J. Fischer: Department of Mathematics, Seattle University, Seattle, Washington 98122
- José L. Peña: Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461
4. Gorman JC, Tufte OL, Miller AVR, DeBello WM, Peña JL, Fischer BJ. Diverse processing underlying frequency integration in midbrain neurons of barn owls. PLoS Comput Biol. 2021;17:e1009569. doi: 10.1371/journal.pcbi.1009569. PMID: 34762650; PMCID: PMC8610287.
Abstract
Emergent response properties of sensory neurons depend on circuit connectivity and somatodendritic processing. Neurons of the barn owl's external nucleus of the inferior colliculus (ICx) display emergence of spatial selectivity. These neurons use interaural time difference (ITD) as a cue for the horizontal direction of sound sources. ITD is detected by upstream brainstem neurons with narrow frequency tuning, resulting in spatially ambiguous responses. This spatial ambiguity is resolved by ICx neurons integrating inputs over frequency, a processing step relevant for sound localization across species. Previous models have predicted that ICx neurons function as point neurons that linearly integrate inputs across frequency. However, the complex dendritic trees and spines of ICx neurons raise the question of whether this prediction is accurate. Data from in vivo intracellular recordings of ICx neurons were used to address this question. Results revealed diverse frequency integration properties, where some ICx neurons showed responses consistent with the point neuron hypothesis and others with nonlinear dendritic integration. Modeling showed that varied connectivity patterns and forms of dendritic processing may underlie the observed frequency integration of ICx neurons. These results corroborate the ability of neurons with complex dendritic trees to implement diverse linear and nonlinear integration of synaptic inputs, a capability relevant for adaptive coding and learning, and supporting a fundamental mechanism in sound localization.

Neurons at higher stages of sensory pathways often display selectivity for properties of sensory stimuli that result from computations performed within the nervous system. These emergent response properties can be produced by patterns of neural connectivity and processing that occur within individual cells. Here we investigated whether neural connectivity and single-neuron computation may contribute to the emergence of spatial selectivity in auditory neurons in the barn owl's midbrain. We used data from in vivo intracellular recordings to test the hypothesis from previous modeling work that these cells function as point neurons that perform a linear sum of their inputs in their subthreshold responses. Results indicate that while some neurons show responses consistent with the point neuron hypothesis, others match predictions of nonlinear integration, indicating a diversity of frequency integration properties across neurons. Modeling further showed that varied connectivity patterns and forms of single-neuron computation may underlie the observed responses. These results demonstrate that neurons with complex morphologies may implement diverse integration of synaptic inputs, relevant for adaptive coding and learning.
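The point-neuron versus dendritic-integration distinction tested here can be sketched in a few lines. This is an illustrative toy (the sigmoid, its parameters, and the branch grouping are assumptions, not the paper's fitted model):

```python
import numpy as np

# Two integration schemes: a point neuron sums its frequency-channel inputs
# linearly; a dendritic neuron first passes each branch's inputs through a
# saturating nonlinearity before summing at the soma. Parameters are assumed
# for illustration only.

def point_neuron(channel_inputs):
    """Linear summation of all frequency-channel inputs."""
    return float(np.sum(channel_inputs))

def dendritic_neuron(channel_inputs, branches):
    """Each branch saturates the sum of its own inputs; the soma sums branches."""
    def sigmoid(x, gain=1.0, half=2.0):
        return 4.0 / (1.0 + np.exp(-gain * (x - half)))
    return float(sum(sigmoid(np.sum(channel_inputs[b])) for b in branches))

inputs = np.array([1.0, 1.0, 1.0, 1.0])   # four frequency channels
branches = [[0, 1], [2, 3]]               # two branches, two channels each

# Doubling all inputs doubles the point neuron's output exactly, but not the
# dendritic neuron's, exposing the nonlinearity:
lin_ratio = point_neuron(2 * inputs) / point_neuron(inputs)
dend_ratio = dendritic_neuron(2 * inputs, branches) / dendritic_neuron(inputs, branches)
```

The departure of `dend_ratio` from exact doubling is the kind of signature that subthreshold intracellular responses can expose.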
Affiliation(s)
- Julia C. Gorman: Department of Mathematics, Seattle University, Seattle, Washington, United States of America
- Oliver L. Tufte: Department of Mathematics, Seattle University, Seattle, Washington, United States of America
- Anna V. R. Miller: Department of Mathematics, Seattle University, Seattle, Washington, United States of America
- William M. DeBello: Center for Neuroscience, University of California, Davis, Davis, California, United States of America
- José L. Peña: Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, New York, New York, United States of America
- Brian J. Fischer: Department of Mathematics, Seattle University, Seattle, Washington, United States of America
5. Dehaene GP, Coen-Cagli R, Pouget A. Investigating the representation of uncertainty in neuronal circuits. PLoS Comput Biol. 2021;17:e1008138. doi: 10.1371/journal.pcbi.1008138. PMID: 33577553; PMCID: PMC7880493.
Abstract
Skilled behavior often displays signatures of Bayesian inference. In order for the brain to implement the required computations, neuronal activity must carry accurate information about the uncertainty of sensory inputs. Two major approaches have been proposed to study neuronal representations of uncertainty. The first one, the Bayesian decoding approach, aims primarily at decoding the posterior probability distribution of the stimulus from population activity using Bayes' rule, and indirectly yields uncertainty estimates as a by-product. The second one, which we call the correlational approach, searches for specific features of neuronal activity (such as tuning-curve width and maximum firing rate) that correlate with uncertainty. To compare these two approaches, we derived a new normative model of sound source localization by interaural time difference (ITD) that reproduces a wealth of behavioral and neural observations. We found that several features of neuronal activity correlated with uncertainty on average, but none provided an accurate estimate of uncertainty on a trial-by-trial basis, indicating that the correlational approach may not reliably identify which aspects of neuronal responses represent uncertainty. In contrast, the Bayesian decoding approach reveals that the activity pattern of the entire population was required to reconstruct the trial-to-trial posterior distribution with Bayes' rule. These results suggest that uncertainty is unlikely to be represented in a single feature of neuronal activity, and highlight the importance of using a Bayesian decoding approach when exploring the neural basis of uncertainty.

In order to optimize their behavior, animals must continuously represent the uncertainty associated with their beliefs. Understanding the neural code for this uncertainty is a pressing and critical issue in neuroscience. Following a long tradition, some studies have investigated this code by measuring how average statistics of neural responses (like the tuning curves) correlate with uncertainty as stimulus characteristics are varied. We show that this approach can be very misleading. An alternative is to decode the neuronal responses to recover the posterior distribution over the encoded sensory variables, using the variance of this distribution as the measure of uncertainty. We demonstrate that this decoding approach can indeed avoid the pitfalls of the traditional approach, while leading to more accurate estimates of uncertainty.
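The Bayesian decoding approach contrasted here can be sketched for independent Poisson neurons with known tuning curves. All parameters below (grid, tuning widths, peak rates, and the frontal prior) are assumed illustrative values, not the paper's fitted model:

```python
import numpy as np

# Bayesian decoding sketch: invert a single trial of spike counts from
# independent Poisson neurons with known tuning curves into a posterior
# over the stimulus, via Bayes' rule on a stimulus grid.

def decode_posterior(counts, tuning_curves, prior):
    """counts: (n_neurons,); tuning_curves: (n_neurons, n_stimuli) expected
    counts f_i(s); prior: (n_stimuli,). Returns the normalized posterior."""
    # Poisson log-likelihood sum_i [r_i log f_i(s) - f_i(s)]; the r_i! terms
    # are constant in s and cancel after normalization.
    log_like = counts @ np.log(tuning_curves) - tuning_curves.sum(axis=0)
    log_post = log_like + np.log(prior)
    log_post -= log_post.max()              # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

s_grid = np.linspace(-90, 90, 181)          # stimulus azimuths (deg)
centers = np.linspace(-90, 90, 25)          # preferred azimuths
tc = 10 * np.exp(-0.5 * ((s_grid[None, :] - centers[:, None]) / 15) ** 2) + 0.1
prior = np.exp(-0.5 * (s_grid / 30) ** 2)   # prior favoring frontal space
prior /= prior.sum()

rng = np.random.default_rng(1)
counts = rng.poisson(tc[:, 130])            # one trial at s = +40 deg
posterior = decode_posterior(counts, tc, prior)
```

The posterior's variance then serves as the trial-by-trial uncertainty estimate; note that computing it uses the whole population's activity pattern, not any single response feature.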
Affiliation(s)
- Guillaume P. Dehaene: University of Geneva, Département des Neurosciences Fondamentales, Geneva, Switzerland
- Ruben Coen-Cagli: University of Geneva, Département des Neurosciences Fondamentales, Geneva, Switzerland; Albert Einstein College of Medicine, Department of Systems & Computational Biology and Department of Neuroscience, Bronx, New York, United States of America
- Alexandre Pouget: University of Geneva, Département des Neurosciences Fondamentales, Geneva, Switzerland; Gatsby Computational Neuroscience Unit, London, United Kingdom
6. Wess JM, Spencer NJ, Bernstein JGW. Counting or discriminating the number of voices to assess binaural fusion with single-sided vocoders. J Acoust Soc Am. 2020;147:446. doi: 10.1121/10.0000511. PMID: 32006956; PMCID: PMC7043860.
Abstract
For single-sided deafness cochlear-implant (SSD-CI) listeners, different peripheral representations for electric versus acoustic stimulation, combined with interaural frequency mismatch, might limit the ability to perceive bilaterally presented speech as a single voice. The assessment of binaural fusion often relies on subjective report, which requires listeners to have some understanding of the perceptual phenomenon of object formation. Two experiments explored whether binaural fusion could instead be assessed using judgments of the number of voices in a mixture. In an SSD-CI simulation, normal-hearing listeners were presented with one or two "diotic" voices (i.e., unprocessed in one ear and noise-vocoded in the other) in a mixture with additional monaural voices. In experiment 1, listeners reported how many voices they heard. Listeners generally counted the diotic speech as two separate voices, regardless of interaural frequency mismatch. In experiment 2, listeners identified which of two mixtures contained diotic speech. Listeners performed significantly better with interaurally frequency-matched than with frequency-mismatched stimuli. These contrasting results suggest that listeners experienced partial fusion: not enough to count the diotic speech as one voice, but enough to detect its presence. The diotic-speech detection task (experiment 2) might provide a tool to evaluate fusion and optimize frequency mapping for SSD-CI patients.
Affiliation(s)
- Jessica M. Wess: National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Nathaniel J. Spencer: Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio 45433, USA
- Joshua G. W. Bernstein: National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
7. Emergence of an Adaptive Command for Orienting Behavior in Premotor Brainstem Neurons of Barn Owls. J Neurosci. 2018;38:7270-7279. doi: 10.1523/JNEUROSCI.0947-18.2018. PMID: 30012694.
Abstract
The midbrain map of auditory space commands sound-orienting responses in barn owls. Owls precisely localize sounds in frontal space but underestimate the direction of peripheral sound sources. This bias for central locations was proposed to be adaptive to the decreased reliability in the periphery of sensory cues used for sound localization by the owl. Understanding the neural pathway supporting this biased behavior provides a means to address how adaptive motor commands are implemented by neurons. Here we find that the sensory input for sound direction is weighted by its reliability in premotor neurons of the midbrain tegmentum of owls (male and female), such that the mean population firing rate approximates the head-orienting behavior. We provide evidence that this coding may emerge through convergence of upstream projections from the midbrain map of auditory space. We further show that manipulating the sensory input yields changes predicted by the convergent network in both premotor neural responses and behavior. This work demonstrates how a topographic sensory representation can be linearly read out to adjust behavioral responses by the reliability of the sensory input.

Significance Statement: This research shows how statistics of the sensory input can be integrated into a behavioral command by readout of a sensory representation. The firing rate of midbrain premotor neurons receiving sensory information from a topographic representation of auditory space is weighted by the reliability of sensory cues. We show that these premotor responses are consistent with a weighted convergence from the topographic sensory representation. This convergence was also tested behaviorally, where manipulation of stimulus properties led to bidirectional changes in sound localization errors. Thus a topographic representation of auditory space is translated into a premotor command for sound localization that is modulated by sensory reliability.
8. Lateralization of Interaural Level Differences with Multiple Electrode Stimulation in Bilateral Cochlear-Implant Listeners. Ear Hear. 2018;38:e22-e38. doi: 10.1097/AUD.0000000000000360. PMID: 27579987.
Abstract
Objective: There is currently no accepted method of mapping bilateral cochlear-implant (BiCI) users to maximize binaural performance, and the current approach of mapping one ear at a time can produce spatial perceptions that are not consistent with a sound's physical location in space. The goal of this study was to investigate the perceived intracranial lateralization of bilaterally synchronized electrical stimulation with a range of interaural level differences (ILDs), and to determine a method to produce relatively more centered auditory images with multielectrode stimulation.

Design: Using direct stimulation, lateralization curves were measured in nine BiCI listeners using 1000-pulses-per-second (pps), 500-msec constant-amplitude pulse trains with ILDs that ranged from -20 to +20 clinical current units (CUs). The stimuli were presented bilaterally at 70 to 80% of the dynamic range on single or multiple electrode pairs. For the multielectrode pairs, the ILD was applied consistently across all the pairs. The lateralization response range and the bias magnitude at 0 CU ILD (i.e., the number of CUs needed to produce a centered auditory image) were computed. Then the levels that elicit a centered auditory image with single-electrode stimulation were used with multielectrode stimulation to determine whether this produced fewer significant biases at 0 CU ILD. Lastly, a multichannel ILD processing model was used to predict lateralization for the multielectrode stimulation from the single-electrode stimulation.

Results: BiCI listeners often perceived both single- and multielectrode stimulation at 0 CU ILD as not intracranially centered. For single-electrode stimulation, 44% of the lateralization curves had relatively large (≥5 CU) bias magnitudes. For multielectrode stimulation, 25% of the lateralization curves had large bias magnitudes. After centering the single-electrode pairs, the percentage of multielectrode combinations that produced large biases decreased significantly, to only 4% (p < 0.001, McNemar's test). The lateralization with multielectrode stimulation was well predicted by a model that used unweighted or weighted averages of the single-electrode lateralization percepts across electrode pairs (87 or 90%, respectively).

Conclusion: Current BiCI mapping procedures can produce an inconsistent association between a physical ILD and the perceived location across electrodes for both single- and multielectrode stimulation. Explicit centering of single-electrode pairs using the perceived centered intracranial location almost entirely corrects this problem, and such an approach is supported by our understanding and model of across-frequency ILD processing. Such adjustments might be achieved by clinicians using single-electrode binaural comparisons. Binaural abilities, like sound localization and understanding speech in noise, may be improved if these across-electrode perceptual inconsistencies are removed.
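The across-electrode averaging model that predicted multielectrode lateralization can be sketched as follows. The lateralization curves below are hypothetical values for illustration, not the study's data:

```python
import numpy as np

# Across-electrode averaging model sketch: the predicted multielectrode
# lateralization at a given ILD is the (optionally weighted) mean of the
# single-electrode lateralization percepts at that ILD.

def predict_multielectrode(single_lat, weights=None):
    """single_lat: (n_electrodes, n_ilds) lateralization percepts.
    weights: per-electrode weights; None gives the unweighted model."""
    single_lat = np.asarray(single_lat, dtype=float)
    if weights is None:
        return single_lat.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * single_lat).sum(axis=0) / w.sum()

ilds = np.arange(-20, 21, 5)              # clinical current units (CU)
# Three hypothetical electrode pairs with different slopes and biases:
single = np.array([1.0 * ilds + 3, 0.8 * ilds - 5, 1.2 * ilds + 2])
unweighted = predict_multielectrode(single)
weighted = predict_multielectrode(single, weights=[2, 1, 1])
```

The unweighted model is the `weights=None` case; the weighted variant lets some electrode pairs dominate the combined percept.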
9. Combination of Interaural Level and Time Difference in Azimuthal Sound Localization in Owls. eNeuro. 2018;4:ENEURO.0238-17.2017. doi: 10.1523/ENEURO.0238-17.2017. PMID: 29379866; PMCID: PMC5779116.
Abstract
A function of the auditory system is to accurately determine the location of a sound source. The main cues for sound location are interaural time (ITD) and level (ILD) differences. Humans use both ITD and ILD to determine the azimuth. Until now, the prevailing view of sound localization in barn owls was that their facial ruff and asymmetrical ears generate a two-dimensional grid of ITD for azimuth and ILD for elevation. We show that barn owls also use ILD for azimuthal sound localization when ITDs are ambiguous. For high-frequency narrowband sounds, midbrain neurons can signal multiple locations, leading to the perception of an auditory illusion called a phantom source. Owls respond to such an illusory percept by orienting toward it instead of the true source. Acoustical measurements close to the eardrum reveal a small ILD component that changes with azimuth, suggesting that ITD and ILD information could be combined to eliminate the illusion. Our behavioral data confirm that perception was robust against ambiguities if ITD and ILD information was combined. Electrophysiological recordings of ILD sensitivity in the owl's midbrain support the behavioral findings, indicating that rival brain hemispheres drive the decision to orient to either true or phantom sources. Thus, the basis for disambiguation and reliable detection of sound source azimuth relies on similar cues across species, as similar responses to combinations of ILD and narrowband ITD have been observed in humans.
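The phantom-source ambiguity and its resolution by ILD can be illustrated with a toy cross-correlation model. The tone frequency, delay, and the linear ILD-to-ITD mapping are assumptions for illustration, not the study's measurements:

```python
import numpy as np

# For a narrowband (tonal) sound, interaural cross-correlation peaks at the
# true ITD plus every integer multiple of the tone's period, producing
# "phantom" candidate locations. A small ILD whose sign covaries with
# azimuth can then select among the candidates.

fs = 100_000                                # sample rate (Hz)
freq = 5000.0                               # tone; period = 20 samples
true_itd = 5                                # samples (50 us)

t = np.arange(2000) / fs                    # 20 ms, whole number of cycles
left = np.sin(2 * np.pi * freq * t)
right = np.roll(left, true_itd)

# All ITD candidates within +/- 30 samples with near-maximal correlation:
lags = np.arange(-30, 31)
corr = np.array([np.dot(left, np.roll(right, -k)) for k in lags])
candidates = lags[corr > 0.999 * corr.max()]

def resolve_with_ild(candidates, ild_db, itd_per_db=10.0):
    """Pick the candidate ITD closest to the ITD implied by the ILD sign and
    magnitude (a crude linear ILD-to-azimuth mapping, assumed here)."""
    implied = ild_db * itd_per_db
    return int(candidates[np.argmin(np.abs(candidates - implied))])
```

With the 5000-Hz tone, the correlation alone yields candidates one period (20 samples) apart; a weak ILD of the right sign suffices to pick the true one, mirroring the disambiguation described above.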
10.
Affiliation(s)
- Jose L Peña: University of Maryland; Albert Einstein College of Medicine
11. Ursino M, Crisafulli A, di Pellegrino G, Magosso E, Cuppini C. Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study. Front Comput Neurosci. 2017;11:89. doi: 10.3389/fncom.2017.00089. PMID: 29046631; PMCID: PMC5633019.
Abstract
The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have a higher spatial accuracy at the central azimuthal coordinate and a lower accuracy at the periphery. Moreover, their prior probability is higher at the center and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross-modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in the case of spatially disparate stimuli. Moreover, the ventriloquism effect decreases with eccentricity.
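The behavioral signatures listed here (reduced SD with congruent cues, eccentricity-dependent ventriloquism) follow from standard precision-weighted cue combination, which the trained network approximates. A minimal sketch with assumed noise parameters, not the authors' network:

```python
import math

# Precision-weighted (maximum-likelihood) cue combination: each cue is
# weighted by its inverse variance. The visual noise model below (SD growing
# linearly with eccentricity) is an assumed stand-in for the paper's stimuli.

def combine_cues(s_a, sd_a, s_v, sd_v):
    """Fuse auditory and visual estimates by inverse-variance weighting."""
    w_a, w_v = 1 / sd_a ** 2, 1 / sd_v ** 2
    s_hat = (w_a * s_a + w_v * s_v) / (w_a + w_v)
    sd_hat = math.sqrt(1 / (w_a + w_v))
    return s_hat, sd_hat

def visual_sd(ecc_deg, sd0=1.0, slope=0.1):
    """Visual reliability degrades with eccentricity (assumed model)."""
    return sd0 + slope * abs(ecc_deg)

# Ventriloquism: a reliable central visual cue pulls the fused estimate
# strongly toward the visual location; in the periphery the pull weakens.
central, _ = combine_cues(s_a=10, sd_a=4.0, s_v=0, sd_v=visual_sd(0))
peripheral, _ = combine_cues(s_a=40, sd_a=4.0, s_v=30, sd_v=visual_sd(30))
```

Congruent cues also reduce the SD of the fused estimate below either single cue, reproducing the reported cross-modal variance reduction.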
Affiliation(s)
- Mauro Ursino: Department of Electrical, Electronic and Information Engineering, University of Bologna, Bologna, Italy
- Elisa Magosso: Department of Electrical, Electronic and Information Engineering, University of Bologna, Bologna, Italy
- Cristiano Cuppini: Department of Electrical, Electronic and Information Engineering, University of Bologna, Bologna, Italy
12. Tellers P, Lehmann J, Führ H, Wagner H. Envelope contributions to the representation of interaural time difference in the forebrain of barn owls. J Neurophysiol. 2017;118:1871-1887. doi: 10.1152/jn.01166.2015. PMID: 28679844.
Abstract
Birds and mammals use the interaural time difference (ITD) for azimuthal sound localization. While barn owls can use the ITD of the stimulus carrier frequency over nearly their entire hearing range, mammals have to utilize the ITD of the stimulus envelope to extend the upper frequency limit of ITD-based sound localization. ITD is computed and processed in a dedicated neural circuit that consists of two pathways. In the barn owl, ITD representation is more complex in the forebrain than in the midbrain pathway because of the combination of two inputs that represent different ITDs. We speculated that one of the two inputs includes an envelope contribution. To estimate the envelope contribution, we recorded ITD response functions for correlated and anticorrelated noise stimuli in the barn owl's auditory arcopallium. Our findings indicate that barn owls, like mammals, represent both carrier and envelope ITDs of overlapping frequency ranges, supporting the hypothesis that carrier and envelope ITD-based localization are complementary beyond a mere extension of the upper frequency limit.

New & Noteworthy: The results presented in this study show for the first time that the barn owl is able to extract and represent the interaural time difference (ITD) information conveyed by the envelope of a broadband acoustic signal. Like mammals, the barn owl extracts the ITD of the envelope and the carrier of a signal from the same frequency range. These results are of general interest, since they reinforce a trend found in neural signal processing across different species.
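The carrier-versus-envelope ITD distinction can be illustrated with a toy cross-correlation analysis. The sampling rate, frequencies, and the 250-µs delay are assumed values for illustration; the study's stimuli and analysis differ:

```python
import numpy as np

# An amplitude-modulated tone is delayed between the two "ears", and the
# interaural delay is recovered either from the cross-correlation of the
# raw waveforms (carrier ITD) or of their envelopes (envelope ITD).

fs = 100_000                                # sample rate (Hz)
carrier_hz, mod_hz = 2000.0, 100.0
delay = 25                                  # samples; 250 us at this fs

t = np.arange(10_000) / fs                  # 100 ms, whole number of cycles
signal = (1 + np.sin(2 * np.pi * mod_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)
left, right = signal, np.roll(signal, delay)   # circular shift = clean delay

def envelope(x):
    """Envelope as magnitude of the analytic signal (FFT-based Hilbert)."""
    n = len(x)
    h = np.zeros(n)
    h[0], h[n // 2] = 1, 1                  # n is even here
    h[1:n // 2] = 2
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def itd_samples(a, b, max_lag=100):
    """Delay of b vs a at the peak of the circular cross-correlation."""
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.dot(a, np.roll(b, -k)) for k in lags]
    return int(lags[np.argmax(corr)])
```

Both readouts recover the same delay here; with a high-frequency carrier and coarse phase locking, only the envelope estimate would remain usable, which is the extension beyond the carrier limit discussed above.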
Affiliation(s)
- Philipp Tellers: Institute of Biology II, RWTH Aachen University, Aachen, Germany
- Jessica Lehmann: Lehrstuhl A für Mathematik, RWTH Aachen University, Aachen, Germany
- Hartmut Führ: Lehrstuhl A für Mathematik, RWTH Aachen University, Aachen, Germany
- Hermann Wagner: Institute of Biology II, RWTH Aachen University, Aachen, Germany
13. Egger K, Majdak P, Laback B. Binaural timing information in electric hearing at low rates: effects of inaccurate encoding and loudness. J Acoust Soc Am. 2017;141:3164. doi: 10.1121/1.4982888. PMID: 28599571.
Abstract
Stimulation strategies for cochlear implants potentially impose timing limitations that may hinder the correct encoding and representation of interaural time differences (ITDs) in realistic bilateral signals. This study aimed to quantify the tolerable margin for inaccurate encoding of ITDs at low rates by investigating the perceptual degradation due to the removal of individual pulses at various levels of loudness. Unmodulated, 100-pulses-per-second pulse trains were presented at a single, interaurally pitch-matched electrode pair. In experiment I, ITD thresholds were measured applying different degrees of bilateral, interaurally uncorrelated pulse removal. ITD sensitivity deteriorated with increasing degree of pulse removal, with significant deterioration for degrees of 16% or greater. In experiment II, the interaction between loudness and pulse removal was investigated. Louder stimuli yielded better ITD sensitivity; however, no further improvement was found for stimuli louder than "medium." When removing 8% of the pulses, ITD sensitivity deteriorated significantly across the entire loudness range tested. A loudness-induced compensation for the deterioration of ITD sensitivity due to pulse removal seems to be feasible for soft stimuli but not for medium or loud stimuli. Overall, our findings suggest that the degree of pulse removal employed in low-rate channels within coding strategies should not exceed 8%.
Affiliation(s)
- Katharina Egger, Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, A-1040 Vienna, Austria
- Piotr Majdak, Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, A-1040 Vienna, Austria
- Bernhard Laback, Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, A-1040 Vienna, Austria

14
Ursino M, Cuppini C, Magosso E. Multisensory Bayesian Inference Depends on Synapse Maturation during Training: Theoretical Analysis and Neural Modeling Implementation. Neural Comput 2017; 29:735-782. [DOI: 10.1162/neco_a_00935]
Abstract
Recent theoretical and experimental studies suggest that in multisensory conditions, the brain performs a near-optimal Bayesian estimate of external events, giving more weight to the more reliable stimuli. However, the neural mechanisms responsible for this behavior, and its progressive maturation in a multisensory environment, are still insufficiently understood. The aim of this letter is to analyze this problem with a neural network model of audiovisual integration, based on probabilistic population coding—the idea that a population of neurons can encode probability functions to perform Bayesian inference. The model consists of two chains of unisensory neurons (auditory and visual) topologically organized. They receive the corresponding input through a plastic receptive field and reciprocally exchange plastic cross-modal synapses, which encode the spatial co-occurrence of visual-auditory inputs. A third chain of multisensory neurons performs a simple sum of auditory and visual excitations. The work includes a theoretical part and a computer simulation study. We show how a simple rule for synapse learning (consisting of Hebbian reinforcement and a decay term) can be used during training to shrink the receptive fields and encode the unisensory likelihood functions. Hence, after training, each unisensory area realizes a maximum likelihood estimate of stimulus position (auditory or visual). In cross-modal conditions, the same learning rule can encode information on prior probability into the cross-modal synapses. Computer simulations confirm the theoretical results and show that the proposed network can realize a maximum likelihood estimate of auditory (or visual) positions in unimodal conditions and a Bayesian estimate, with moderate deviations from optimality, in cross-modal conditions. Furthermore, the model explains the ventriloquism illusion and, looking at the activity in the multimodal neurons, explains the automatic reweighting of auditory and visual inputs on a trial-by-trial basis, according to the reliability of the individual cues.
Affiliation(s)
- Mauro Ursino, Department of Electrical, Electronic and Information Engineering, University of Bologna, I-40136 Bologna, Italy
- Cristiano Cuppini, Department of Electrical, Electronic and Information Engineering, University of Bologna, I-40136 Bologna, Italy
- Elisa Magosso, Department of Electrical, Electronic and Information Engineering, University of Bologna, I-40136 Bologna, Italy

15
Fischer BJ, Peña JL. Optimal nonlinear cue integration for sound localization. J Comput Neurosci 2017; 42:37-52. [PMID: 27714569] [PMCID: PMC5253079] [DOI: 10.1007/s10827-016-0626-4]
Abstract
Integration of multiple sensory cues can improve performance in detection and estimation tasks. It is an open theoretical question under which conditions linear or nonlinear cue combination is Bayes-optimal. We demonstrate that a neural population decoded by a population vector requires nonlinear cue combination to approximate Bayesian inference. Specifically, if cues are conditionally independent, multiplicative cue combination is optimal for the population vector. The model was tested on neural and behavioral responses in the barn owl's sound localization system, where space-specific neurons owe their selectivity to multiplicative tuning to the sound localization cues interaural phase difference (IPD) and interaural level difference (ILD). We found that IPD and ILD cues are approximately conditionally independent. As a result, the multiplicative selectivity of midbrain space-specific neurons to IPD and ILD permits a population vector to perform Bayesian cue combination. We further show that this model describes the owl's localization behavior in azimuth and elevation. This work provides theoretical justification and experimental evidence supporting the optimality of nonlinear cue combination.
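The multiplicative readout described in this abstract lends itself to a compact numerical sketch. The following is an illustration, not the authors' implementation: the tuning widths, cue values, and uniform tiling of preferred directions are all hypothetical. Each model neuron multiplies a tuning curve over the IPD-derived direction with one over the ILD-derived direction, and a population vector decodes the product.

```python
import numpy as np

def vonmises_tuning(x, center, kappa):
    """Circular (von Mises-like) tuning curve peaking at `center` (radians)."""
    return np.exp(kappa * (np.cos(x - center) - 1.0))

def population_vector(rates, preferred):
    """Decode direction as the angle of the rate-weighted sum of unit
    vectors pointing at each neuron's preferred direction."""
    return np.angle(np.sum(rates * np.exp(1j * preferred)))

# Hypothetical population: preferred directions tile frontal space.
preferred = np.linspace(-np.pi / 2, np.pi / 2, 181)

def multiplicative_response(ipd_cue, ild_cue, kappa_ipd=4.0, kappa_ild=2.0):
    """Each neuron multiplies its tuning to the IPD-derived and the
    ILD-derived direction estimate (multiplicative cue combination)."""
    r_ipd = vonmises_tuning(ipd_cue, preferred, kappa_ipd)
    r_ild = vonmises_tuning(ild_cue, preferred, kappa_ild)
    return r_ipd * r_ild

# Two noisy cues pointing near 0.3 rad; the sharper (IPD) cue gets more weight.
rates = multiplicative_response(0.35, 0.25)
estimate = population_vector(rates, preferred)
```

With conditionally independent cues, the product of the two tuning factors tracks the product of the cue likelihoods, so the readout lands near the reliability-weighted combination of the two single-cue estimates.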
Affiliation(s)
- Brian J Fischer, Department of Mathematics, Seattle University, 901 12th Ave, Seattle, WA 98122, USA
- Jose Luis Peña, Department of Neuroscience, Albert Einstein College of Medicine, 1410 Pelham Parkway South, Bronx, NY 10461, USA

16
Cue Reliability Represented in the Shape of Tuning Curves in the Owl's Sound Localization System. J Neurosci 2016; 36:2101-10. [PMID: 26888922] [DOI: 10.1523/jneurosci.3753-15.2016]
Abstract
Optimal use of sensory information requires that the brain estimate the reliability of sensory cues, but the neural correlate of cue reliability relevant for behavior is not well defined. Here, we addressed this issue by examining how the reliability of a spatial cue influences neuronal responses and behavior in the owl's auditory system. We show that firing rate and spatial selectivity changed with cue reliability due to the mechanisms generating tuning to the sound localization cue. We found that the correlated variability among neurons strongly depended on the shape of the tuning curves. Finally, we demonstrated that the change in the neurons' selectivity was necessary and sufficient for a network of stochastic neurons to predict behavior when sensory cues were corrupted with noise. This study demonstrates that the shape of tuning curves can stand alone as a coding dimension of environmental statistics.

SIGNIFICANCE STATEMENT In natural environments, sensory cues are often corrupted by noise and are therefore unreliable. To make the best decisions, the brain must estimate the degree to which a cue can be trusted. The behaviorally relevant neural correlates of cue reliability are debated. In this study, we used the barn owl's sound localization system to address this question. We demonstrated that the mechanisms that account for spatial selectivity also explained how neural responses changed with degraded signals. This allowed the neurons' selectivity to capture cue reliability, influencing the population readout commanding the owl's sound-orienting behavior.
17
Lüddemann H, Kollmeier B, Riedel H. Electrophysiological and psychophysical asymmetries in sensitivity to interaural correlation gaps and implications for binaural integration time. Hear Res 2015; 332:170-187. [PMID: 26526276] [DOI: 10.1016/j.heares.2015.10.012]
Abstract
Brief deviations of interaural correlation (IAC) can provide valuable cues for detection, segregation and localization of acoustic signals. This study investigated the processing of such "binaural gaps" in continuously running noise (100-2000 Hz), in comparison to silent "monaural gaps", by measuring late auditory evoked potentials (LAEPs) and perceptual thresholds with novel, iteratively optimized stimuli. Mean perceptual binaural gap duration thresholds exhibited a major asymmetry: they were substantially shorter for uncorrelated gaps in correlated and anticorrelated reference noise (1.75 ms and 4.1 ms) than for correlated and anticorrelated gaps in uncorrelated reference noise (26.5 ms and 39.0 ms). The thresholds also showed a minor asymmetry: they were shorter in the positive than in the negative IAC range. The mean behavioral threshold for monaural gaps was 5.5 ms. For all five gap types, the amplitude of LAEP components N1 and P2 increased linearly with the logarithm of gap duration. While perceptual and electrophysiological thresholds matched for monaural gaps, LAEP thresholds were about twice as long as perceptual thresholds for uncorrelated gaps, but half as long for correlated and anticorrelated gaps. Nevertheless, LAEP thresholds showed the same asymmetries as perceptual thresholds. For gap durations below 30 ms, LAEPs were dominated by the processing of the leading edge of a gap. For longer gap durations, in contrast, both the leading and the lagging edge of a gap contributed to the evoked response. Formulae for the equivalent rectangular duration (ERD) of the binaural system's temporal window were derived for three common window shapes. The psychophysical ERD was 68 ms for diotic and about 40 ms for anti- and uncorrelated noise. After a nonlinear Z-transform of the stimulus IAC prior to temporal integration, ERDs were about 10 ms for reference correlations of ±1 and 80 ms for uncorrelated reference. Hence, a physiologically motivated peripheral nonlinearity changed the rank order of ERDs across experimental conditions in a plausible manner.
Affiliation(s)
- Helge Lüddemann, Medizinische Physik & Cluster of Excellence Hearing4all, Carl von Ossietzky Universität Oldenburg, D-26111 Oldenburg, Germany
- Birger Kollmeier, Medizinische Physik & Cluster of Excellence Hearing4all, Carl von Ossietzky Universität Oldenburg, D-26111 Oldenburg, Germany
- Helmut Riedel, Sektion Biomagnetismus, Neurologische Klinik, Universität Heidelberg, Im Neuenheimer Feld 400, D-69120 Heidelberg, Germany

18
Abstract
Capturing nature's statistical structure in behavioral responses is at the core of the ability to function adaptively in the environment. Bayesian statistical inference describes how sensory and prior information can be combined optimally to guide behavior. An outstanding open question is how neural coding supports Bayesian inference, including how sensory cues are optimally integrated over time. Here we address what neural response properties allow a neural system to perform Bayesian prediction, i.e., predicting where a source will be in the near future given sensory information and prior assumptions. The work here shows that the population vector decoder will perform Bayesian prediction when the receptive fields of the neurons encode the target dynamics with shifting receptive fields. We test the model using the system that underlies sound localization in barn owls. Neurons in the owl's midbrain show shifting receptive fields for moving sources that are consistent with the predictions of the model. We predict that neural populations can be specialized to represent the statistics of dynamic stimuli to allow for a vector read-out of Bayes-optimal predictions.
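The role of shifting receptive fields can be caricatured in a few lines. In this sketch the tuning width, shift rule, and time constant are hypothetical, not the authors' fitted model: each receptive field is displaced against the direction of motion by velocity × tau, so the most active neurons are those preferring the source's upcoming location, and the population vector reads out a prediction.

```python
import numpy as np

preferred = np.linspace(-np.pi / 2, np.pi / 2, 181)

def shifted_rf_response(x_now, velocity, tau=0.1, kappa=6.0):
    """Each neuron's receptive field is shifted against the motion
    direction by velocity * tau, so the neurons most active for a source
    currently at x_now are those preferring the future location
    x_now + velocity * tau. (Illustrative parameter values.)"""
    shift = velocity * tau
    return np.exp(kappa * (np.cos((x_now + shift) - preferred) - 1.0))

def population_vector(rates, preferred):
    """Rate-weighted circular mean of the preferred directions."""
    return np.angle(np.sum(rates * np.exp(1j * preferred)))

# Source currently at 0.0 rad, moving rightward at 2 rad/s:
# the readout points ahead of the source, toward ~0.2 rad.
rates = shifted_rf_response(0.0, velocity=2.0)
prediction = population_vector(rates, preferred)
```

For a stationary source (velocity = 0) the same readout returns the current location, so the predictive offset is carried entirely by the receptive-field shift.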
Affiliation(s)
- Weston Cox, Department of Electrical and Computer Engineering, Seattle University, Seattle, Washington, United States of America
- Brian J. Fischer, Department of Mathematics, Seattle University, Seattle, Washington, United States of America

19
Bierman HS, Carr CE. Sound localization in the alligator. Hear Res 2015; 329:11-20. [PMID: 26048335] [DOI: 10.1016/j.heares.2015.05.009]
Abstract
In early tetrapods, it is assumed that the tympana were acoustically coupled through the pharynx and therefore inherently directional, acting as pressure difference receivers. The later closure of the middle ear cavity in turtles, archosaurs, and mammals is a derived condition, and would have changed the ear by decoupling the tympana. Isolation of the middle ears would then have led to selection for structural and neural strategies to compute sound source localization in both archosaurs and mammalian ancestors. In the archosaurs (birds and crocodilians) the presence of air spaces in the skull provided connections between the ears that have been exploited to improve directional hearing, while neural circuits mediating sound localization are well developed. In this review, we will focus primarily on directional hearing in crocodilians, where vocalization and sound localization are thought to be ecologically important, and indicate important issues still awaiting resolution.
Affiliation(s)
- Hilary S Bierman, Center for Comparative and Evolutionary Biology of Hearing, Department of Biology, University of Maryland College Park, College Park, Maryland 20742, USA
- Catherine E Carr, Center for Comparative and Evolutionary Biology of Hearing, Department of Biology, University of Maryland College Park, College Park, Maryland 20742, USA

20
Neural representation of probabilities for Bayesian inference. J Comput Neurosci 2015; 38:315-23. [PMID: 25561333] [DOI: 10.1007/s10827-014-0545-1]
Abstract
Bayesian models are often successful in describing perception and behavior, but the neural representation of probabilities remains in question. There are several distinct proposals for the neural representation of probabilities, but they have not been directly compared in an example system. Here we consider three models: a non-uniform population code where the stimulus-driven activity and distribution of preferred stimuli in the population represent a likelihood function and a prior, respectively; the sampling hypothesis which proposes that the stimulus-driven activity over time represents a posterior probability and that the spontaneous activity represents a prior; and the class of models which propose that a population of neurons represents a posterior probability in a distributed code. It has been shown that the non-uniform population code model matches the representation of auditory space generated in the owl's external nucleus of the inferior colliculus (ICx). However, the alternative models have not been tested, nor have the three models been directly compared in any system. Here we tested the three models in the owl's ICx. We found that spontaneous firing rate and the average stimulus-driven response of these neurons were not consistent with predictions of the sampling hypothesis. We also found that neural activity in ICx under varying levels of sensory noise did not reflect a posterior probability. On the other hand, the responses of ICx neurons were consistent with the non-uniform population code model. We further show that Bayesian inference can be implemented in the non-uniform population code model using one spike per neuron when the population is large and is thus able to support the rapid inference that is necessary for sound localization.
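The non-uniform population code model favored by this study can be sketched numerically. All parameters below are illustrative, not fitted to owl data: preferred directions are sampled in proportion to a frontal-weighted prior, stimulus-driven rates follow the likelihood, and the population vector readout is consequently pulled toward the over-represented front, so a peripheral direction is underestimated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Non-uniform code: preferred directions sampled from a prior that
# over-represents frontal space (prior_kappa is a hypothetical value).
prior_kappa = 1.5
n = 2000
cand = rng.uniform(-np.pi / 2, np.pi / 2, 20000)
accept = rng.uniform(0, 1, cand.size) < np.exp(prior_kappa * (np.cos(cand) - 1))
preferred = cand[accept][:n]  # rejection sampling from the prior density

def likelihood_activity(direction, kappa=4.0):
    """Stimulus-driven rate of each neuron ~ likelihood of its preferred
    direction given the sensory cue (illustrative tuning width)."""
    return np.exp(kappa * (np.cos(direction - preferred) - 1.0))

def population_vector(rates):
    """Rate-weighted circular mean over the non-uniform set of preferred
    directions; the density of directions acts as the prior weighting."""
    return np.angle(np.sum(rates * np.exp(1j * preferred)))

# A peripheral source: the readout is pulled toward the over-represented
# front, i.e., its direction is underestimated.
source = 1.0
estimate = population_vector(likelihood_activity(source))
```

Because the readout sums one contribution per neuron, the same computation works when each neuron contributes a single spike, which is consistent with the abstract's point that a large population supports rapid, single-spike inference.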
21
Jones H, Kan A, Litovsky RY. Comparing sound localization deficits in bilateral cochlear-implant users and vocoder simulations with normal-hearing listeners. Trends Hear 2014; 18:2331216514554574. [PMID: 25385244] [PMCID: PMC4271768] [DOI: 10.1177/2331216514554574]
Abstract
Bilateral cochlear-implant (BiCI) users are less accurate at localizing free-field (FF) sound sources than normal-hearing (NH) listeners. This performance gap is not well understood but is likely due to a combination of compromises in acoustic signal representation by the two independent speech processors and neural degradation of auditory pathways associated with a patient's hearing loss. To exclusively investigate the effect of CI speech encoding on horizontal-plane sound localization, the present study measured sound localization performance in NH subjects listening to vocoder processed and nonvocoded virtual acoustic space (VAS) stimuli. Various aspects of BiCI stimulation such as independently functioning devices, variable across-ear channel selection, and pulsatile stimulation were simulated using uncorrelated noise (Nu), correlated noise (N0), or Gaussian-enveloped tone (GET) carriers during vocoder processing. Additionally, FF sound localization in BiCI users was measured in the same testing environment for comparison. Distinct response patterns across azimuthal locations were evident for both listener groups and were analyzed using a multilevel regression analysis. Simulated implant speech encoding, regardless of carrier, was detrimental to NH localization and the GET vocoder best simulated BiCI FF performance in NH listeners. Overall, the detrimental effect of vocoder processing on NH performance suggests that sound localization deficits may persist even for BiCI patients who have minimal neural degradation associated with their hearing loss and indicates that CI speech encoding plays a significant role in the sound localization deficits experienced by BiCI users.
Affiliation(s)
- Heath Jones, Waisman Center, University of Wisconsin-Madison, WI, USA
- Alan Kan, Waisman Center, University of Wisconsin-Madison, WI, USA

22
Wang Y, Gutfreund Y, Peña JL. Coding space-time stimulus dynamics in auditory brain maps. Front Physiol 2014; 5:135. [PMID: 24782781] [PMCID: PMC3986518] [DOI: 10.3389/fphys.2014.00135]
Abstract
Sensory maps are often distorted representations of the environment, where ethologically important ranges are magnified. The implications of a biased representation extend beyond the increased acuity afforded by having more neurons dedicated to a certain range. Because neurons are functionally interconnected, non-uniform representations influence the processing of high-order features that rely on comparison across areas of the map. Among these features are time-dependent changes of the auditory scene generated by moving objects. How sensory representation affects high-order processing can be approached in the map of auditory space of the owl's midbrain, where locations in the front are over-represented. In this map, neurons are selective not only to location but also to location over time. The tuning to space over time leads to direction selectivity, which is also topographically organized. Across the population, neurons tuned to peripheral space are more selective to sounds moving into the front. The distribution of direction selectivity can be explained by spatial and temporal integration on the non-uniform map of space. Thus, the representation of space can induce biased computation of a second-order stimulus feature. This phenomenon is likely observed in other sensory maps and may be relevant for behavior.
Affiliation(s)
- Yunyan Wang, Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Yoram Gutfreund, The Rappaport Research Institute and Faculty of Medicine, The Technion, Haifa, Israel
- José L Peña, Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA

23
Franken TP, Bremen P, Joris PX. Coincidence detection in the medial superior olive: mechanistic implications of an analysis of input spiking patterns. Front Neural Circuits 2014; 8:42. [PMID: 24822037] [PMCID: PMC4013490] [DOI: 10.3389/fncir.2014.00042]
Abstract
Coincidence detection by binaural neurons in the medial superior olive underlies sensitivity to interaural time difference (ITD) and interaural correlation (ρ). It is unclear whether this process is akin to a counting of individual coinciding spikes, or rather to a correlation of membrane potential waveforms resulting from converging inputs from each side. We analyzed spike trains of axons of the cat trapezoid body (TB) and auditory nerve (AN) in a binaural coincidence scheme. ITD was studied by delaying "ipsi-" vs. "contralateral" inputs; ρ was studied by using responses to different noises. We varied the number of inputs, the monaural and binaural thresholds, and the coincidence window duration. We examined the physiological plausibility of output "spike trains" by comparing their rate and tuning to ITD and ρ with those of binaural cells. We found that multiple inputs are required to obtain a plausible output spike rate. In contrast to previous suggestions, monaural threshold almost invariably needed to exceed binaural threshold. Elevation of the binaural threshold to values larger than 2 spikes caused a drastic decrease in rate for a short coincidence window. Longer coincidence windows allowed a lower number of inputs and higher binaural thresholds, but decreased the depth of modulation. Compared to AN fibers, TB fibers allowed higher output spike rates for a low number of inputs, but also generated more monaural coincidences. We conclude that, within the parameter space explored, the temporal patterns of monaural fibers require convergence of multiple inputs to achieve physiological binaural spike rates; that monaural coincidences have to be suppressed relative to binaural ones; and that the neuron has to be sensitive to single binaural coincidences of spikes, for a number of excitatory inputs per side of 10 or less. These findings suggest that the fundamental operation in the mammalian binaural circuit is coincidence counting of single binaural input spikes.
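The counting scheme can be mimicked with a toy simulation. This is not the authors' analysis of real TB/AN spike trains: the inputs below are Poisson surrogates, and the window, jitter, rates, and fiber count are invented for illustration. Output "spikes" are input spikes from one side that have at least `binaural_threshold` partner spikes from the other side within the coincidence window.

```python
import numpy as np

def coincidence_output(ipsi_trains, contra_trains, window=0.05e-3,
                       binaural_threshold=1):
    """Count output 'spikes': ipsilateral input spikes that have at least
    `binaural_threshold` contralateral spikes within `window` seconds.
    ipsi/contra_trains are lists of spike-time arrays, one per fiber."""
    ipsi = np.sort(np.concatenate(ipsi_trains))
    contra = np.sort(np.concatenate(contra_trains))
    out = 0
    for t in ipsi:
        if np.sum(np.abs(contra - t) <= window) >= binaural_threshold:
            out += 1
    return out

rng = np.random.default_rng(1)
dur = 1.0      # seconds
rate = 100.0   # spikes/s per fiber (hypothetical)
n_fibers = 5
ipsi = [np.sort(rng.uniform(0, dur, rng.poisson(rate * dur)))
        for _ in range(n_fibers)]
# Same stimulus at both ears: contra spikes are jittered copies of ipsi.
contra = [t + rng.normal(0, 0.02e-3, t.size) for t in ipsi]
# Different stimuli: contra spikes are statistically independent of ipsi.
uncorrelated = [np.sort(rng.uniform(0, dur, rng.poisson(rate * dur)))
                for _ in range(n_fibers)]

correlated_count = coincidence_output(ipsi, contra)
uncorrelated_count = coincidence_output(ipsi, uncorrelated)
```

As in the abstract's scheme, the output rate is strongly modulated by interaural correlation: correlated inputs produce far more counted coincidences than independent inputs, and the ratio shrinks as the window widens.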
Affiliation(s)
- Philip X. Joris, Laboratory of Auditory Neurophysiology, Department of Neurosciences, KU Leuven, Leuven, Belgium

24
Zohar O, Shackleton TM, Palmer AR, Shamir M. The effect of correlated neuronal firing and neuronal heterogeneity on population coding accuracy in guinea pig inferior colliculus. PLoS One 2013; 8:e81660. [PMID: 24358120] [PMCID: PMC3864845] [DOI: 10.1371/journal.pone.0081660]
Abstract
It has been suggested that the considerable noise in single-cell responses to a stimulus can be overcome by pooling information from a large population. Theoretical studies indicated that correlations in trial-to-trial fluctuations in the responses of different neurons may limit the improvement due to pooling. Subsequent theoretical studies have suggested that inherent neuronal diversity, i.e., the heterogeneity of tuning curves and other response properties of neurons preferentially tuned to the same stimulus, can provide a means to overcome this limit. Here we study the effect of spike-count correlations and the inherent neuronal heterogeneity on the ability to extract information from large neural populations. We use electrophysiological data from the guinea pig Inferior-Colliculus to capture inherent neuronal heterogeneity and single cell statistics, and introduce response correlations artificially. To this end, we generate pseudo-population responses, based on single-cell recording of neurons responding to auditory stimuli with varying binaural correlations. Typically, when pseudo-populations are generated from single cell data, the responses within the population are statistically independent. As a result, the information content of the population will increase indefinitely with its size. In contrast, here we apply a simple algorithm that enables us to generate pseudo-population responses with variable spike-count correlations. This enables us to study the effect of neuronal correlations on the accuracy of conventional rate codes. We show that in a homogenous population, in the presence of even low-level correlations, information content is bounded. In contrast, utilizing a simple linear readout, that takes into account the natural heterogeneity, even of neurons preferentially tuned to the same stimulus, within the neural population, one can overcome the correlated noise and obtain a readout whose accuracy grows linearly with the size of the population.
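The abstract does not spell out its correlation-injection algorithm, so the following is a standard construction with the same flavor rather than the authors' method: adding a shared Poisson component to private Poisson counts yields a fixed pairwise covariance across a heterogeneous pseudo-population. All rates below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def correlated_pseudo_population(means, c, n_trials, rng):
    """Draw spike counts with positive pairwise covariance by adding a
    shared Poisson component (rate c) to private Poisson components
    (rates means - c). The covariance between any two neurons is c,
    while each neuron keeps its own mean count."""
    means = np.asarray(means, float)
    assert np.all(means >= c), "shared rate must not exceed any mean"
    shared = rng.poisson(c, size=(n_trials, 1))           # common input
    private = rng.poisson(means - c, size=(n_trials, means.size))
    return private + shared

# Heterogeneous mean counts, mimicking neurons with diverse tuning.
means = np.array([10.0, 15.0, 20.0, 25.0])
counts = correlated_pseudo_population(means, c=3.0, n_trials=20000, rng=rng)
emp_cov = np.cov(counts.T)
```

Setting `c = 0` recovers the statistically independent pseudo-population case described in the abstract, which is what one obtains by naively resampling single-cell recordings.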
Affiliation(s)
- Oran Zohar, Department of Physiology, Faculty of Health Sciences, Ben-Gurion University of the Negev, Be'er-Sheva, Israel
- Alan R. Palmer, MRC Institute of Hearing Research, University Park, Nottingham, United Kingdom
- Maoz Shamir, Department of Physiology, Faculty of Health Sciences, Ben-Gurion University of the Negev, Be'er-Sheva, Israel; Department of Physics, Faculty of Natural Sciences, Ben-Gurion University of the Negev, Be'er-Sheva, Israel

25
New perspectives on the owl's map of auditory space. Curr Opin Neurobiol 2013; 24:55-62. [PMID: 24492079] [DOI: 10.1016/j.conb.2013.08.008]
Abstract
A map of sound direction was found in the owl's midbrain more than three decades ago. This finding suggested that the brain reconstructs spatial coordinates to represent them. Subsequent research elucidated the variables used to compute the map. Here we provide a review of the processes leading to its emergence and an updated perspective on how and what information is represented.
26
Behavioral sensitivity to broadband binaural localization cues in the ferret. J Assoc Res Otolaryngol 2013; 14:561-72. [PMID: 23615803] [PMCID: PMC3705081] [DOI: 10.1007/s10162-013-0390-3]
Abstract
Although the ferret has become an important model species for studying both fundamental and clinical aspects of spatial hearing, previous behavioral work has focused on studies of sound localization and spatial release from masking in the free field. This makes it difficult to tease apart the role played by different spatial cues. In humans and other species, interaural time differences (ITDs) and interaural level differences (ILDs) play a critical role in sound localization in the azimuthal plane and also facilitate sound source separation in noisy environments. In this study, we used a range of broadband noise stimuli presented via customized earphones to measure ITD and ILD sensitivity in the ferret. Our behavioral data show that ferrets are extremely sensitive to changes in either binaural cue, with levels of performance approximating that found in humans. The measured thresholds were relatively stable despite extensive and prolonged (>16 weeks) testing on ITD and ILD tasks with broadband stimuli. For both cues, sensitivity was reduced at shorter durations. In addition, subtle effects of changing the stimulus envelope were observed on ITD, but not ILD, thresholds. Sensitivity to these cues also differed in other ways. Whereas ILD sensitivity was unaffected by changes in average binaural level or interaural correlation, the same manipulations produced much larger effects on ITD sensitivity, with thresholds declining when either of these parameters was reduced. The binaural sensitivity measured in this study can largely account for the ability of ferrets to localize broadband stimuli in the azimuthal plane. Our results are also broadly consistent with data from humans and confirm the ferret as an excellent experimental model for studying spatial hearing.
27
Population-wide bias of surround suppression in auditory spatial receptive fields of the owl's midbrain. J Neurosci 2012; 32:10470-8. [PMID: 22855796] [DOI: 10.1523/jneurosci.0047-12.2012]
Abstract
The physical arrangement of receptive fields (RFs) within neural structures is important for local computations. Nonuniform distribution of tuning within populations of neurons can influence emergent tuning properties, causing bias in local processing. This issue was studied in the auditory system of barn owls. The owl's external nucleus of the inferior colliculus (ICx) contains a map of auditory space in which the frontal region is overrepresented. We measured spatiotemporal RFs of ICx neurons using spatial white noise. We found a population-wide bias in surround suppression such that suppression from frontal space was stronger. This asymmetry increased with laterality in spatial tuning. The bias could be explained by a model of lateral inhibition based on the overrepresentation of frontal space observed in ICx. The model predicted trends in surround suppression across ICx that matched the data. Thus, the uneven distribution of spatial tuning within the map could explain the topography of time-dependent tuning properties. This mechanism may have significant implications for the analysis of natural scenes by sensory systems.
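The proposed mechanism can be caricatured as lateral inhibition on a map whose site density is higher in front. In this sketch the map spacing rule, kernel widths, and source positions are invented for illustration, not taken from the paper: because more map sites respond to a frontal source, a laterally tuned neuron receives more total inhibition from frontal space than from an equally distant peripheral source.

```python
import numpy as np

# Non-uniform map: frontal space (near 0 rad) is over-represented, so map
# positions are spaced densely in front and sparsely toward the periphery
# (illustrative spacing rule).
u = np.linspace(-1, 1, 201)
map_positions = np.sign(u) * (np.pi / 2) * u**2

def suppression_from(source_dir, target_dir, width=0.3):
    """Total lateral inhibition a neuron tuned to `target_dir` receives
    from map sites activated by a source at `source_dir`: each driven
    site contributes inhibition that falls off with its distance from
    the target neuron, down-weighting sites near the neuron itself."""
    drive = np.exp(-0.5 * ((map_positions - source_dir) / width) ** 2)
    proximity = np.exp(-0.5 * ((map_positions - target_dir) / width) ** 2)
    return np.sum(drive * (1 - proximity))

# A laterally tuned neuron (target at 1.0 rad): suppression from a frontal
# source exceeds suppression from an equally distant peripheral source,
# because more map sites are driven by the frontal source.
front = suppression_from(source_dir=0.5, target_dir=1.0)
periph = suppression_from(source_dir=1.5, target_dir=1.0)
```

The asymmetry here comes entirely from the site density, matching the abstract's point that the overrepresentation of frontal space can by itself produce a population-wide bias in surround suppression.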
|
28
|
Fischer BJ, Peña JL. Owl's behavior and neural representation predicted by Bayesian inference. Nat Neurosci 2011; 14:1061-6. [PMID: 21725311 PMCID: PMC3145020 DOI: 10.1038/nn.2872] [Citation(s) in RCA: 83] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2010] [Accepted: 04/29/2011] [Indexed: 11/10/2022]
Abstract
The owl captures prey using sound localization. In the classical model, the owl infers sound direction from the position of greatest activity in a brain map of auditory space. However, this model fails to describe the actual behavior. Although owls accurately localize sources near the center of gaze, they systematically underestimate peripheral source directions. We found that this behavior is predicted by statistical inference, formulated as a Bayesian model that emphasizes central directions. We propose that there is a bias in the neural coding of auditory space, which, at the expense of inducing errors in the periphery, achieves high behavioral accuracy at the ethologically relevant range. We found that the owl's map of auditory space decoded by a population vector is consistent with the behavioral model. Thus, a probabilistic model describes both how the map of auditory space supports behavior and why this representation is optimal.
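The core idea, a Bayesian estimator whose prior emphasizes central directions, can be sketched numerically. The Gaussian likelihood and prior widths below are our illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

def bayes_estimate(theta_obs, sigma_like=15.0, sigma_prior=30.0):
    """Posterior-mean direction estimate on a 1-D azimuth grid, with a
    Gaussian likelihood centered on the observed direction and a
    zero-centered Gaussian prior emphasizing frontal space."""
    grid = np.linspace(-90.0, 90.0, 721)              # azimuth, degrees
    like = np.exp(-0.5 * ((grid - theta_obs) / sigma_like) ** 2)
    prior = np.exp(-0.5 * (grid / sigma_prior) ** 2)
    post = like * prior
    post /= post.sum()
    return float(np.sum(grid * post))                 # posterior mean

# The prior pulls both estimates toward the front, but the shift is far
# larger for the peripheral source, reproducing the behavioral pattern.
print(bayes_estimate(10.0), bayes_estimate(70.0))
```

A population vector over a map that overrepresents frontal space implements the same centerward pull, which is the correspondence the paper establishes.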
Affiliation(s)
- Brian J Fischer
- Group for Neural Theory, Département d'Etudes Cognitives, Ecole Normale Supérieure, Paris, France.
|
29
|
Abstract
The human brain has accumulated many useful building blocks over its evolutionary history, and the best knowledge of these has often derived from experiments performed in animal species that display finely honed abilities. In this article we review a model system at the forefront of investigation into the neural bases of information processing, plasticity, and learning: the barn owl auditory localization pathway. In addition to the broadly applicable principles gleaned from three decades of work in this system, there are good reasons to believe that continued exploration of the owl brain will be invaluable for further advances in understanding of how neuronal networks give rise to behavior.
Affiliation(s)
- Jose L Pena
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, USA
|
30
|
Asadollahi A, Endler F, Nelken I, Wagner H. Neural correlates of binaural masking level difference in the inferior colliculus of the barn owl (Tyto alba). Eur J Neurosci 2010; 32:606-18. [PMID: 20618828 DOI: 10.1111/j.1460-9568.2010.07313.x] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
Humans and animals are able to detect signals in noisy environments. Detection improves when the noise and the signal have different interaural phase relationships. The resulting improvement in detection threshold is called the binaural masking level difference. We investigated neural mechanisms underlying the release from masking in the inferior colliculus of barn owls in low-frequency and high-frequency neurons. A tone (signal) was presented either with the same interaural time difference as the noise (masker) or at a 180 degrees phase shift as compared with the interaural time difference of the noise. The changes in firing rates induced by the addition of a signal of increasing level while masker level was kept constant were well predicted by the relative responses to the masker and signal alone. In many cases, the response at the highest signal levels was dominated by the response to the signal alone, in spite of a significant response to the masker at low signal levels, suggesting the presence of occlusion. Detection thresholds and binaural masking level differences were widely distributed. The amount of release from masking increased with increasing masker level. Narrowly tuned neurons in the central nucleus of the inferior colliculus had detection thresholds that were lower than or similar to those of broadly tuned neurons in the external nucleus of the inferior colliculus. Broadly tuned neurons exhibited higher masking level differences than narrowband neurons. These data suggest that detection has different spectral requirements from localization.
Affiliation(s)
- Ali Asadollahi
- Institute for Biology II, RWTH Aachen, Mies-van-der-Rohe Strasse 15, D-52074 Aachen, Germany
|
31
|
Hausmann L, von Campenhausen M, Endler F, Singheiser M, Wagner H. Improvements of sound localization abilities by the facial ruff of the barn owl (Tyto alba) as demonstrated by virtual ruff removal. PLoS One 2009; 4:e7721. [PMID: 19890389 PMCID: PMC2766829 DOI: 10.1371/journal.pone.0007721] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2009] [Accepted: 10/09/2009] [Indexed: 12/02/2022] Open
Abstract
Background When sound arrives at the eardrum it has already been filtered by the body, head, and outer ear. This process is mathematically described by the head-related transfer functions (HRTFs), which are characteristic for the spatial position of a sound source and for the individual ear. HRTFs in the barn owl (Tyto alba) are also shaped by the facial ruff, a specialization that alters interaural time differences (ITD), interaural level differences (ILD), and the frequency spectrum of the incoming sound to improve sound localization. Here we created novel stimuli to simulate the removal of the barn owl's ruff in a virtual acoustic environment, thus creating a situation similar to passive listening in other animals, and used these stimuli in behavioral tests. Methodology/Principal Findings HRTFs were recorded from an owl before and after removal of the ruff feathers. Normal and ruff-removed conditions were created by filtering broadband noise with the HRTFs. Under normal virtual conditions, no differences in azimuthal head-turning behavior between individualized and non-individualized HRTFs were observed. The owls were able to respond differently to stimuli from the back than to stimuli from the front having the same ITD. By contrast, such a discrimination was not possible after the virtual removal of the ruff. Elevational head-turn angles were slightly smaller with non-individualized than with individualized HRTFs. The removal of the ruff resulted in a large decrease in elevational head-turning amplitudes. Conclusions/Significance The facial ruff a) improves azimuthal sound localization by increasing the ITD range and b) improves elevational sound localization in the frontal field by introducing a shift of iso-ILD lines out of the midsagittal plane, which causes ILDs to increase with increasing stimulus elevation. The changes at the behavioral level could be related to the changes in the binaural physical parameters that occurred after the virtual removal of the ruff. These data provide new insights into the function of external hearing structures and open up the possibility of applying the results to autonomous agents, the creation of virtual auditory environments for humans, or hearing aids.
|
32
|
Lüddemann H, Riedel H, Kollmeier B. Electrophysiological and psychophysical asymmetries in sensitivity to interaural correlation steps. Hear Res 2009; 256:39-57. [PMID: 19555753 DOI: 10.1016/j.heares.2009.06.010] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/27/2008] [Revised: 06/12/2009] [Accepted: 06/15/2009] [Indexed: 10/20/2022]
Abstract
The binaural auditory system's sensitivity to changes in the interaural cross correlation (IAC), as an indicator of the perceived spatial diffuseness of a sound, is of major importance for the ability to distinguish concurrent sound sources. In this article, we present electroencephalographic and corresponding psychophysical experiments with stepwise transitions of the IAC in continuously running noise. Both the transient and the sustained brain responses display electrophysiological correlates of specific binaural processing in humans. The transient late auditory evoked potentials (LAEP) depend systematically on the size of the IAC transition, the reference correlation preceding the transition, the direction of the transition, and on unspecific context information from the stimulus sequence. The psychophysical and electrophysiological data are characterized by two asymmetries. (1) Major asymmetry: for reference correlations of +1 and -1, psychoacoustical thresholds are lower, and the peak-to-peak amplitudes of LAEP are larger, than for a reference correlation of zero. (2) Minor asymmetry: for IAC transitions in the positive parameter range, perceptual thresholds are slightly better and peak-to-peak amplitudes are larger than in the negative range. In all experimental conditions, LAEP amplitudes are linearly related to the dB-scaled power ratio of correlated (N(0)) versus anticorrelated (N(pi)) signal components. The voltage gain of LAEP per dB(N(0)/N(pi)) closely corresponds to a constant perceptual distance between two correlations. We therefore suggest that activity in the auditory cortex and perceptual IAC sensitivity are better represented by the dB-scaled N(0)/N(pi) power ratio than by the normalized IAC itself.
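The dB-scaled N(0)/N(pi) power ratio relates to the normalized IAC in closed form, assuming the standard construction of partially correlated noise as a mix of correlated and anticorrelated components. This is a sketch of that relation, not the authors' code:

```python
import numpy as np

def n0_npi_ratio_db(rho):
    """dB-scaled power ratio of correlated (N0) vs anticorrelated (Npi)
    components that yields interaural correlation rho.

    For s_L = a*n0 + b*npi, s_R = a*n0 - b*npi with independent
    unit-variance noises, rho = (a**2 - b**2) / (a**2 + b**2), hence
    P(N0)/P(Npi) = (1 + rho) / (1 - rho). Diverges at rho = +/-1.
    """
    rho = np.asarray(rho, dtype=float)
    return 10.0 * np.log10((1.0 + rho) / (1.0 - rho))

# Equal steps on this dB scale compress large ranges of rho near the
# endpoints, consistent with the higher sensitivity around +1 and -1:
for rho in (0.0, 0.9, 0.99):
    print(f"rho={rho:5.2f} -> {float(n0_npi_ratio_db(rho)):6.2f} dB")
```

The divergence at rho = +/-1 is one way to see why thresholds referenced to fully correlated or anticorrelated noise can be so much lower than those referenced to zero correlation.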
Affiliation(s)
- Helge Lüddemann
- Medizinische Physik, Carl von Ossietzky Universität Oldenburg, D-26111 Oldenburg, Germany.
|
33
|
Zimmer U, Macaluso E. Interaural temporal and coherence cues jointly contribute to successful sound movement perception and activation of parietal cortex. Neuroimage 2009; 46:1200-8. [PMID: 19303934 DOI: 10.1016/j.neuroimage.2009.03.022] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2008] [Revised: 01/27/2009] [Accepted: 03/08/2009] [Indexed: 11/24/2022] Open
Abstract
The perception of movement in the auditory modality requires dynamic changes in the input that reaches the two ears (e.g. sequential changes of interaural time differences; dynamic ITDs). However, it is still unclear to what extent these temporal cues interact with other interaural cues to determine successful movement perception, and which brain regions are involved in sound movement processing. Here, we presented trains of white-noise bursts containing either static or dynamic ITDs, and we varied parametrically the level of binaural coherence (BC) of both types of stimuli. Behaviorally, we found that movement discrimination sensitivity decreased with decreasing levels of BC. fMRI analyses highlighted a network of temporal, frontal and parietal regions where activity decreased with decreasing BC. Critically, in the intra-parietal sulcus and the supra-marginal gyrus brain activity decreased with decreasing BC, but only for dynamic-ITD sounds (BC by ITD interaction). Thus, these regions activated selectively when the sounds contained both dynamic ITDs and high levels of BC; i.e. when subjects perceived sound movement. We conclude that sound movement perception requires both dynamic changes of the auditory input and effective sound-source localization, and that parietal cortex utilizes interaural temporal and coherence cues for the successful perception of sound movement.
Affiliation(s)
- U Zimmer
- NeuroImaging Laboratory, Santa Lucia Foundation, Italy.
|
34
|
Independence of echo-threshold and echo-delay in the barn owl. PLoS One 2008; 3:e3598. [PMID: 18974886 PMCID: PMC2571984 DOI: 10.1371/journal.pone.0003598] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2008] [Accepted: 10/10/2008] [Indexed: 11/19/2022] Open
Abstract
Despite their prevalence in nature, echoes are not perceived as events separate from the sounds arriving directly from an active source, until the echo's delay is long. We measured the head-saccades of barn owls and the responses of neurons in their auditory space-maps while presenting a long duration noise-burst and a simulated echo. Under this paradigm, there were two possible stimulus segments that could potentially signal the location of the echo. One was at the onset of the echo; the other, after the offset of the direct (leading) sound, when only the echo was present. By lengthening the echo's duration, independently of its delay, spikes and saccades were evoked by the source of the echo even at delays that normally evoked saccades to only the direct source. An echo's location thus appears to be signaled by the neural response evoked after the offset of the direct sound.
|
35
|
Mc Laughlin M, Chabwine JN, van der Heijden M, Joris PX. Comparison of bandwidths in the inferior colliculus and the auditory nerve. II: Measurement using a temporally manipulated stimulus. J Neurophysiol 2008; 100:2312-27. [PMID: 18701761 DOI: 10.1152/jn.90252.2008] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
To localize low-frequency sounds, humans rely on an interaural comparison of the temporally encoded sound waveform after peripheral filtering. This process can be compared with cross-correlation. For a broadband stimulus, after filtering, the correlation function has a damped oscillatory shape where the periodicity reflects the filter's center frequency and the damping reflects the bandwidth (BW). The physiological equivalent of the correlation function is the noise delay (ND) function, which is obtained from binaural cells by measuring response rate to broadband noise with varying interaural time delays (ITDs). For monaural neurons, delay functions are obtained by counting coincidences for varying delays across spike trains obtained to the same stimulus. Previously, we showed that BWs in monaural and binaural neurons were similar. However, earlier work showed that the damping of delay functions differs significantly between these two populations. Here, we address this paradox by looking at the role of sensitivity to changes in interaural correlation. We measured delay and correlation functions in the cat inferior colliculus (IC) and auditory nerve (AN). We find that, at a population level, AN and IC neurons with similar characteristic frequencies (CF) and BWs can have different responses to changes in correlation. Notably, binaural neurons often show compression, which is not found in the AN and which makes the shape of delay functions more invariant with CF at the level of the IC than at the AN. We conclude that binaural sensitivity is more dependent on correlation sensitivity than has hitherto been appreciated and that the mechanisms underlying correlation sensitivity should be addressed in future studies.
Affiliation(s)
- Myles Mc Laughlin
- Laboratory of Auditory Neurophysiology, Medical School, K. U. Leuven, Campus Gasthuisberg O&N 2, Herestraat 49 bus 1021, B-3000 Leuven, Belgium
|
36
|
Köppl C, Carr CE. Maps of interaural time difference in the chicken's brainstem nucleus laminaris. BIOLOGICAL CYBERNETICS 2008; 98:541-59. [PMID: 18491165 PMCID: PMC3170859 DOI: 10.1007/s00422-008-0220-6] [Citation(s) in RCA: 85] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/30/2007] [Accepted: 01/08/2008] [Indexed: 05/10/2023]
Abstract
Animals, including humans, use interaural time differences (ITDs) that arise from different sound path lengths to the two ears as a cue of horizontal sound source location. The nature of the neural code for ITD is still controversial. Current models differentiate between two population codes: either a map-like rate-place code of ITD along an array of neurons, consistent with a large body of data in the barn owl, or a population rate code, consistent with data from small mammals. Recently, it was proposed that these different codes reflect optimal coding strategies that depend on head size and sound frequency. The chicken makes an excellent test case of this proposal because its physical prerequisites are similar to small mammals, yet it shares a more recent common ancestry with the owl. We show here that, like in the barn owl, the brainstem nucleus laminaris in mature chickens displayed the major features of a place code of ITD. ITD was topographically represented in the maximal responses of neurons along each isofrequency band, covering approximately the contralateral acoustic hemisphere. Furthermore, the represented ITD range appeared to change with frequency, consistent with a pressure gradient receiver mechanism in the avian middle ear. At very low frequencies, below 400 Hz, maximal neural responses were symmetrically distributed around zero ITD and it remained unclear whether there was a topographic representation. These findings do not agree with the above predictions for optimal coding and thus revive the discussion as to what determines the neural coding strategies for ITDs.
Affiliation(s)
- Christine Köppl
- Lehrstuhl für Zoologie, Technische Universität München, Lichtenbergstr. 4, 85747, Garching, Germany.
|
37
|
Mc Laughlin M, Van de Sande B, van der Heijden M, Joris PX. Comparison of bandwidths in the inferior colliculus and the auditory nerve. I. Measurement using a spectrally manipulated stimulus. J Neurophysiol 2007; 98:2566-79. [PMID: 17881484 DOI: 10.1152/jn.00595.2007] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
A defining feature of auditory systems across animal divisions is the ability to sort different frequency components of a sound into separate neural frequency channels. Narrowband filtering in the auditory periphery is of obvious advantage for the representation of sound spectrum and manifests itself pervasively in human psychophysical studies as the critical band. Peripheral filtering also alters coding of the temporal waveform, so that temporal responses in the auditory periphery reflect both the stimulus waveform and peripheral filtering. Temporal coding is essential for the measurement of the time delay between waveforms at the two ears, a critical component of sound localization. A number of human psychophysical studies have shown a wider effective critical bandwidth with binaural stimuli than with monaural stimuli, although other studies found no difference. Here we directly compare binaural and monaural bandwidths (BWs) in the anesthetized cat. We measure monaural BW in the auditory nerve (AN) and binaural BW in the inferior colliculus (IC) using spectrally manipulated broadband noise and response metrics that reflect spike timing. The stimulus was a pair of noise tokens that were interaurally in phase for all frequencies below a certain flip frequency (f(flip)) and that had an interaural phase difference of pi above f(flip). The response was measured as a function of f(flip) and, using a separate stimulus protocol, as a function of interaural correlation. We find that both AN and IC filter BW depend on characteristic frequency, but that there is no difference in mean BW between the AN and IC.
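The flip-frequency stimulus can be reconstructed in a few lines. This is an illustrative reimplementation of the stimulus idea (the function name, sampling rate, and duration are our choices), not the authors' code:

```python
import numpy as np

def flip_stimulus(f_flip, fs=48000.0, dur=0.5, seed=0):
    """Binaural noise pair that is interaurally in phase below f_flip
    and has a pi interaural phase difference above f_flip."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    left = rng.standard_normal(n)
    spec = np.fft.rfft(left)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec[freqs > f_flip] *= -1.0         # pi phase shift above f_flip
    right = np.fft.irfft(spec, n)
    return left, right

left, right = flip_stimulus(f_flip=4000.0)
# Components below 4 kHz are perfectly correlated across the ears;
# components above 4 kHz are perfectly anticorrelated.
```

Sweeping f_flip through a neuron's passband and tracking when the response changes then probes the filter's bandwidth, which is the logic of the measurement described above.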
Affiliation(s)
- Myles Mc Laughlin
- Laboratory of Auditory Neurophysiology, Medical School, Campus Gasthuisberg, Leuven, Belgium.
|
38
|
Soeta Y, Nakagawa S. Auditory evoked magnetic fields in relation to interaural time delay and interaural correlation. Hear Res 2006; 220:106-15. [PMID: 16934951 DOI: 10.1016/j.heares.2006.07.006] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/28/2005] [Revised: 07/06/2006] [Accepted: 07/13/2006] [Indexed: 11/26/2022]
Abstract
The detection of interaural time differences (ITD) for sound localization depends on the similarity between the left and right ear signals, namely interaural correlation (IAC). Human localization performance deteriorates with decreasing IACs. In order to examine activity related to localization performance in the human cortex, auditory evoked magnetic fields to the ITD of bandpass noises with different IACs were analyzed. When the IAC was 0.95, the N1m amplitudes, i.e., the estimated equivalent current dipole moments, increased with increasing ITD. However, the effect of ITD on the N1m amplitudes was not significant when the IAC was 0.5. When the ITD was 0.7 ms, the N1m amplitudes decreased with decreasing IACs. There were no systematic changes in the source location of N1m in the auditory cortex related to changes in ITD or IAC. The results suggest that localization performance is reflected in N1m amplitudes.
Affiliation(s)
- Yoshiharu Soeta
- Institute for Human Science and Biomedical Engineering, National Institute of Advanced Industrial Science and Technology (AIST), 1-8-31 Midorigaoka, Ikeda, Osaka 563-8577, Japan.
|
39
|
Coffey CS, Ebert CS, Marshall AF, Skaggs JD, Falk SE, Crocker WD, Pearson JM, Fitzpatrick DC. Detection of interaural correlation by neurons in the superior olivary complex, inferior colliculus and auditory cortex of the unanesthetized rabbit. Hear Res 2006; 221:1-16. [PMID: 16978812 DOI: 10.1016/j.heares.2006.06.005] [Citation(s) in RCA: 34] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/14/2006] [Revised: 06/01/2006] [Accepted: 06/09/2006] [Indexed: 12/01/2022]
Abstract
A critical binaural cue important for sound localization and detection of signals in noise is the interaural time difference (ITD), or difference in the time of arrival of sounds at each ear. The ITD can be determined by cross-correlating the sounds at the two ears and finding the ITD where the correlation is maximal. The amount of interaural correlation is affected by properties of spaces and can therefore be used to assess spatial attributes. To examine the neural basis for sensitivity to the overall level of the interaural correlation, we identified subcollicular neurons and neurons in the inferior colliculus (IC) and auditory cortex of unanesthetized rabbits that were sensitive to ITDs and examined their responses as the interaural correlation was varied. Neurons at each brain level could show linear or non-linear responses to changes in interaural correlation. The direction of the non-linearities in most neurons was to increase the slope of the response change for correlations near 1.0. The proportion of neurons with non-linear responses was similar in subcollicular and IC neurons but increased in the auditory cortex. Non-linear response functions to interaural correlation were not related to the type of response as determined by the tuning to ITDs across frequencies. The responses to interaural correlation were also not related to the frequency tuning of the neuron, unlike the responses to ITD, which broadens for neurons tuned to lower frequencies. The neural discriminability of the ITD using frozen noise in the best neurons was similar to the behavioral acuity in humans at a reference correlation of 1.0. However, for other reference ITDs the neural discriminability was more linear and generally better than the human discriminability of the interaural correlation, suggesting that stimulus rather than neural variability is the basis for the decline in human performance at lower levels of interaural correlation.
Affiliation(s)
- Charles S Coffey
- Department of Otolaryngology/Head and Neck Surgery, CB #7070, University of North Carolina School of Medicine, 101 Medical Research Building A, Chapel Hill, NC 27599-7070, USA
|
40
|
|
41
|
Shackleton TM, Arnott RH, Palmer AR. Sensitivity to interaural correlation of single neurons in the inferior colliculus of guinea pigs. J Assoc Res Otolaryngol 2006; 6:244-59. [PMID: 16080025 PMCID: PMC2504597 DOI: 10.1007/s10162-005-0005-8] [Citation(s) in RCA: 42] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2005] [Accepted: 05/06/2005] [Indexed: 11/28/2022] Open
Abstract
Sensitivity to changes in the interaural correlation of 50-ms bursts of narrowband or broadband noise was measured in single neurons in the inferior colliculus of urethane-anaesthetized guinea pigs. Rate vs. interaural correlation functions (rICFs) were measured using two methods. These methods compensated in different ways for the inherent variance in interaural correlation between tokens with the same expected correlation. The shape of all rICFs could be best described by power functions allowing them to be summarized by two parameters. Most rICFs were best fit by a power below 2, indicating that they were only slightly nonlinear. However, there were a few fitted functions that had a power of 3-6, indicating marked curvature. Modeling results indicate that the nonlinearity of the majority of rICFs was explicable in terms of the monaural transduction stages; however, some of the rICFs with power greater than 2 require either multiple inputs to the coincidence detector or additional nonlinearities to be included in the model. Discrimination thresholds were estimated at reference correlations of -1, 0, and +1 using receiver operating characteristic (ROC) analysis of the spike-count distribution at each correlation. Thresholds spanned the full possible range, from a minimum of 0.1 to the maximum possible of 2. Thresholds were generally highest with a reference correlation of -1, intermediate with a reference of 0, and lowest with a reference correlation of +1. Thresholds were lowest for the most steeply sloped rICFs, but thresholds were not strongly correlated to the spike rate variance. The lowest thresholds occurred using narrowband noise that was compensated for internal delays, but they were still about three times larger than human psychophysical thresholds measured using similar stimuli. The data suggest that, unlike pure tone interaural time difference, discrimination of a population measure is required to account for behavioral interaural correlation discrimination performance.
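The ROC step, estimating how well an ideal observer separates two spike-count distributions, has a compact generic form. This is a sketch of the standard analysis with made-up Poisson counts, not the authors' data or code:

```python
import numpy as np

def roc_auc(ref_counts, test_counts):
    """Area under the ROC curve from two spike-count samples: the
    probability that a randomly drawn test trial exceeds a randomly
    drawn reference trial, with ties split evenly (Mann-Whitney form)."""
    ref = np.asarray(ref_counts)[:, None]
    test = np.asarray(test_counts)[None, :]
    return float(np.mean(test > ref) + 0.5 * np.mean(test == ref))

rng = np.random.default_rng(0)
same = roc_auc(rng.poisson(10, 500), rng.poisson(10, 500))  # near 0.5
diff = roc_auc(rng.poisson(10, 500), rng.poisson(14, 500))  # well above 0.5
# A discrimination threshold is then the smallest correlation step whose
# AUC exceeds a fixed criterion (e.g., 0.75).
```

Repeating this for a range of test correlations against each reference (-1, 0, +1) and reading off where the AUC crosses the criterion yields thresholds of the kind reported above.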
Affiliation(s)
- Trevor M Shackleton
- MRC Institute of Hearing Research, University Park, Nottingham, NG7 2RD, UK.
|
42
|
Keller CH, Takahashi TT. Localization and identification of concurrent sounds in the owl's auditory space map. J Neurosci 2006; 25:10446-61. [PMID: 16280583 PMCID: PMC6725814 DOI: 10.1523/jneurosci.2093-05.2005] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
In nature, sounds from multiple sources sum at the eardrums, generating complex cues for sound localization and identification. In this clutter, the auditory system must determine "what is where." We examined this process in the auditory space map of the barn owl's (Tyto alba) inferior colliculus using two spatially separated sources simultaneously emitting uncorrelated noise bursts, which were uniquely identified by different frequencies of sinusoidal amplitude modulation. Spatial response profiles of isolated neurons were constructed by testing the source-pair centered at various locations in virtual auditory space. The neurons responded whenever a source was placed within the receptive field, generating two clearly segregated foci of activity at appropriate loci. The spike trains were locked strongly to the amplitude modulation of the source within the receptive field, whereas the other source had minimal influence. Two sources amplitude modulated at the same rate were resolved successfully, suggesting that source separation is based on differences of fine structure. The spike rate and synchrony were stronger for whichever source had the stronger average binaural level. A computational model showed that neuronal activity was primarily proportional to the degree of matching between the momentary binaural cues and the preferred values of the neuron. The model showed that individual neurons respond to and synchronize with sources in their receptive field if there are frequencies having an average binaural-level advantage over a second source. Frequencies with interaural phase differences that are shared by both sources may also evoke activity, which may be synchronized with the amplitude modulations from either source.
Affiliation(s)
- Clifford H Keller
- Institute of Neuroscience, University of Oregon, Eugene, Oregon 97403, USA.
|
43
|
Abstract
Sound localization behavior is of great importance for an animal's survival. To localize a sound, animals have to detect a sound source and assign a location to it. In this review we discuss recent results on the underlying mechanisms and on modulatory influences in the barn owl, an auditory specialist with very well developed capabilities to localize sound. Information processing in the barn owl auditory pathway underlying the computations of detection and localization is well understood. This analysis of the sensory information primarily determines the following orienting behavior towards the sound source. However, orienting behavior may be modulated by cognitive (top-down) influences such as attention. We show how advanced stimulation techniques can be used to determine the importance of different cues for sound localization in quasi-realistic stimulation situations, how attentional influences can improve the response to behaviorally relevant stimuli, and how attention can modulate related neural responses. Taken together, these data indicate how sound localization might function in the usually complex natural environment.
|
44
|
Zimmer U, Macaluso E. High Binaural Coherence Determines Successful Sound Localization and Increased Activity in Posterior Auditory Areas. Neuron 2005; 47:893-905. [PMID: 16157283 DOI: 10.1016/j.neuron.2005.07.019] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2005] [Revised: 05/02/2005] [Accepted: 07/21/2005] [Indexed: 10/25/2022]
Abstract
Our brain continuously receives complex combinations of sounds originating from different sources and relating to different events in the external world. Timing differences between the two ears can be used to localize sounds in space, but only when the inputs to the two ears have similar spectrotemporal profiles (high binaural coherence). We used fMRI to investigate any modulation of auditory responses by binaural coherence. We assessed how processing of these cues depends on whether spatial information is task relevant and whether brain activity correlates with subjects' localization performance. We found that activity in Heschl's gyrus increased with increasing coherence, irrespective of whether localization was task relevant. Posterior auditory regions also showed increased activity for high coherence, primarily when sound localization was required and subjects successfully localized sounds. We conclude that binaural coherence cues are processed throughout the auditory cortex and that these cues are used in posterior regions for successful auditory localization.
Affiliation(s)
- U Zimmer
- Neuroimaging Laboratory, Fondazione Santa Lucia, Via Ardeatina 306, Rome 00179, Italy.
45
Saberi K, Petrosyan A. Neural cross-correlation and signal decorrelation: insights into coding of auditory space. J Theor Biol 2005; 235:45-56. [PMID: 15833312] [DOI: 10.1016/j.jtbi.2004.12.018]
Abstract
The auditory systems of humans and many other species use the difference in the time of arrival of acoustic signals at the two ears to compute the lateral position of sound sources. This computation is assumed to initially occur in an assembly of neurons organized along a frequency-by-delay surface. Mathematically, the computations are equivalent to a two-dimensional cross-correlation of the input signals at the two ears, with the position of the peak activity along this surface designating the position of the source in space. In this study, partially correlated signals to the two ears are used to probe the mechanisms for encoding spatial cues in stationary or dynamic (moving) signals. It is demonstrated that a cross-correlation model of the auditory periphery coupled with statistical decision theory can predict the patterns of performance by human subjects for both stationary and motion stimuli as a function of stimulus decorrelation. Implications of these findings for the existence of a unique cortical motion system are discussed.
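The cross-correlation readout described above can be sketched numerically: one ear's signal is a delayed copy of the other's, the two are cross-correlated over a window of candidate lags, and the peak lag is taken as the ITD estimate. This is a minimal sketch, not the paper's model; the sample rate, delay, and lag window are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48_000                  # sample rate (Hz), chosen for illustration
true_itd_samples = 12        # simulated interaural delay (250 us at 48 kHz)
n = 4096

# Broadband noise at the "left ear"; the "right ear" receives a delayed copy.
left = rng.standard_normal(n)
right = np.roll(left, true_itd_samples)

# Cross-correlate over candidate delays and take the peak lag as the ITD.
max_lag = 40
lags = np.arange(-max_lag, max_lag + 1)
xcorr = np.array([np.dot(left, np.roll(right, -lag)) for lag in lags])
est_itd = lags[np.argmax(xcorr)]
print(est_itd)  # 12
```

In the full model this correlation is computed per frequency channel, yielding the frequency-by-delay surface described in the abstract; the sketch collapses it to a single broadband channel.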
Affiliation(s)
- Kourosh Saberi
- Department of Cognitive Sciences, University of California, Irvine, 92697, USA.
46
Abstract
Auditory space-specific neurons in the owl's inferior colliculus selectively respond to the direction of sound propagation, which is defined by combinations of interaural time (ITD) and level (ILD) differences. Mathematical analyses show that the amplitude of postsynaptic potentials in these neurons is a product of two components that vary with either ITD or ILD. Temporal correlation in the fine structure of signals between the ears is essential for detection of ITD. By varying the degree of binaural correlation, we could accurately change the amplitude of the ITD component of postsynaptic potentials in the space-specific neurons. Multiplication worked for the entire range of postsynaptic potentials created by manipulation of ITD.
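As a toy illustration of the multiplicative combination described above, the sketch below treats PSP amplitude as the product of an ITD-dependent and an ILD-dependent component, with the ITD component scaled by binaural correlation. The Gaussian tuning shapes and all parameter values are invented for illustration, not taken from the recordings.

```python
import numpy as np

def itd_component(itd_us, best_itd_us=50.0, width_us=100.0, binaural_corr=1.0):
    # ITD tuning scaled by binaural correlation: decorrelating the ears
    # shrinks this component, as in the manipulation described above.
    return binaural_corr * np.exp(-((itd_us - best_itd_us) / width_us) ** 2)

def ild_component(ild_db, best_ild_db=5.0, width_db=10.0):
    return np.exp(-((ild_db - best_ild_db) / width_db) ** 2)

def psp_amplitude(itd_us, ild_db, binaural_corr=1.0):
    # Multiplicative combination: the PSP is a product of the two components.
    return itd_component(itd_us, binaural_corr=binaural_corr) * ild_component(ild_db)

# Largest response at the preferred ITD/ILD pair; halving the binaural
# correlation halves the ITD component, and hence the product.
print(psp_amplitude(50.0, 5.0, binaural_corr=1.0))  # 1.0
print(psp_amplitude(50.0, 5.0, binaural_corr=0.5))  # 0.5
```

The key signature of multiplication, visible in the sketch, is that scaling one factor rescales the whole response surface rather than shifting it additively.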
Affiliation(s)
- José Luis Peña
- Division of Biology 216-76, California Institute of Technology, Pasadena, California 91125, USA.
47
Spitzer MW, Bala ADS, Takahashi TT. A neuronal correlate of the precedence effect is associated with spatial selectivity in the barn owl's auditory midbrain. J Neurophysiol 2004; 92:2051-70. [PMID: 15381741] [DOI: 10.1152/jn.01235.2003]
Abstract
Sound localization in echoic conditions depends on a precedence effect (PE), in which the first arriving sound dominates the perceived location of later reflections. Previous studies have demonstrated neurophysiological correlates of the PE in several species, but the underlying mechanisms remain unknown. The present study documents responses of space-specific neurons in the barn owl's inferior colliculus (IC) to stimuli simulating direct sounds and reflections that overlap in time at the listener's ears. Responses to 100-ms noises with lead-lag delays from 1 to 100 ms were recorded from neurons in the space-mapped subdivisions of IC in anesthetized owls (N2O/isofluorane). Responses to a target located at a unit's best location were usually suppressed by a masker located outside the excitatory portion of the spatial receptive field. The least spatially selective units exhibited temporally symmetric effects, in that the amount of suppression was the same whether the masker led or lagged. Such effects mirror the alteration of localization cues caused by acoustic superposition of leading and lagging sounds. In more spatially selective units, the suppression was often temporally asymmetric, being more pronounced when the masker led. The masker often evoked small changes in spatial tuning that were not related to the magnitude of suppressive effects. The association of temporally asymmetric suppression with spatial selectivity suggests that this property emerges within IC, and not at earlier stages of auditory processing. Asymmetric suppression reduces the ability of highly spatially selective neurons to encode the location of lagging sounds, providing a possible basis for the PE.
Affiliation(s)
- Matthew W Spitzer
- Department of Psychology, Monash University, Clayton, Victoria 3800, Australia.
48
Soeta Y, Hotehama T, Nakagawa S, Tonoike M, Ando Y. Auditory evoked magnetic fields in relation to interaural cross-correlation of band-pass noise. Hear Res 2004; 196:109-14. [PMID: 15464307] [DOI: 10.1016/j.heares.2004.07.002]
Abstract
Auditory evoked magnetic fields of the human brain were analyzed in relation to the magnitude of the inter-aural cross-correlation (IACC). The IACC of the stimuli was controlled by mixing diotic bandpass noise and dichotic independent bandpass noise in appropriate ratios. The auditory stimuli were delivered binaurally through plastic tubes and earpieces inserted into the ear canals of the nine volunteers with normal hearing who took part in this study. All source signals had the same sound pressure level. Auditory evoked fields (AEFs) were recorded using a neuromagnetometer in a magnetically shielded room. Combinations of a reference stimulus (IACC=1.0) and test stimuli (IACC=0.2, 0.6, 0.85) were presented alternately at a constant interstimulus interval of 0.5 s while AEFs were recorded. The results showed that the N1m latencies were not affected by IACC; however, the peak amplitude of N1m decreased significantly with increasing IACC.
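The mixing scheme described above can be sketched as follows: a shared (diotic) noise and two independent (dichotic) noises are combined in ratios chosen so that the expected interaural correlation equals the target IACC. For simplicity this sketch uses broadband white noise rather than the band-pass noise of the study, and the sample count is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def binaural_noise(iacc, n=200_000):
    """Mix a shared noise with per-ear independent noises so that the
    expected interaural correlation of the pair equals `iacc`."""
    shared = rng.standard_normal(n)
    left_ind = rng.standard_normal(n)
    right_ind = rng.standard_normal(n)
    # With weights sqrt(iacc) and sqrt(1 - iacc), each ear has unit
    # variance and the cross-correlation of the pair is iacc.
    a, b = np.sqrt(iacc), np.sqrt(1.0 - iacc)
    left = a * shared + b * left_ind
    right = a * shared + b * right_ind
    return left, right

left, right = binaural_noise(0.6)
print(round(np.corrcoef(left, right)[0, 1], 2))  # ~0.6
```

The design choice here is the square-root weighting: because the shared and independent noises are uncorrelated, their variances add, so the cross-term a^2 directly sets the correlation coefficient.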
Affiliation(s)
- Yoshiharu Soeta
- Institute for Human Science and Biomedical Engineering, National Institute of Advanced Industrial Science and Technology, 1-8-31 Midorigaoka, Ikeda, Osaka 563-8577, Japan.
49
Budd TW, Hall DA, Gonçalves MS, Akeroyd MA, Foster JR, Palmer AR, Head K, Summerfield AQ. Binaural specialisation in human auditory cortex: an fMRI investigation of interaural correlation sensitivity. Neuroimage 2003; 20:1783-94. [PMID: 14642488] [DOI: 10.1016/j.neuroimage.2003.07.026]
Abstract
A listener's sensitivity to the interaural correlation (IAC) of sound plays an important role in several phenomena in binaural hearing. Although IAC has been examined extensively in neurophysiological studies in animals and in psychophysical studies in humans, little is known about the neural basis of sensitivity to IAC in humans. The present study employed functional magnetic resonance imaging to measure blood oxygen level-dependent (BOLD) activity in auditory brainstem and cortical structures in human listeners during presentation of band-pass noise stimuli whose IAC was varied systematically. The stimuli evoked significant bilateral activation in the inferior colliculus, medial geniculate body, and auditory cortex. There was a significant positive relationship between BOLD activity and IAC which was confined to a distinct subregion of primary auditory cortex located bilaterally at the lateral extent of Heschl's gyrus. Comparison with published anatomical data indicated that this area may also be cytoarchitecturally distinct. Larger differences in activation were found between levels of IAC near unity than between levels near zero. This response pattern is qualitatively compatible with previous measures of psychophysical and neurophysiological sensitivity to IAC.
Affiliation(s)
- T W Budd
- MRC Institute of Hearing Research, University Park, Nottingham NG7 2RD, UK.
50
Abstract
Behavioral, anatomical, and physiological approaches can be integrated in the study of sound localization in barn owls. Space representation in owls provides a useful example for discussion of place and ensemble coding. Selectivity for space is broad and ambiguous in low-order neurons. Parallel pathways for binaural cues and for different frequency bands converge on high-order space-specific neurons, which encode space more precisely. An ensemble of broadly tuned place-coding neurons may converge on a single high-order neuron to create an improved labeled line. Thus, the two coding schemes are not alternate methods. Owls can localize sounds by using either the isomorphic map of auditory space in the midbrain or forebrain neural networks in which space is not mapped.
Affiliation(s)
- Masakazu Konishi
- Division of Biology 216-76, California Institute of Technology, Pasadena, CA 91125, USA.