1
Aralla R, Pauley C, Köppl C. The Neural Representation of Binaural Sound Localization Cues Across Different Subregions of the Chicken's Inferior Colliculus. J Comp Neurol 2024; 532:e25653. [PMID: 38962885 DOI: 10.1002/cne.25653]
Abstract
The sound localization behavior of the nocturnally hunting barn owl and its underlying neural computations are a textbook example of neuroethology. Differences in sound timing and level at the two ears are integrated in a series of well-characterized steps, from brainstem to inferior colliculus (IC), resulting in a topographical neural representation of auditory space. An important question of brain evolution remains: how is this specialized case derived from a more plesiomorphic pattern? The present study is the first to match physiology and anatomical subregions in the non-owl avian IC. Single-unit responses in the chicken IC were tested for selectivity to different frequencies and to the binaural difference cues. Their anatomical origin was reconstructed with the help of electrolytic lesions and immunohistochemical identification of different subregions of the IC, based on previous characterizations in owl and chicken. In contrast to the barn owl, there was no distinct differentiation of responses in the different subregions. We found neural topographies for both binaural cues but no evidence for a coherent representation of auditory space. The results are consistent with previous work in pigeon IC and chicken higher-order midbrain and suggest a plesiomorphic condition of multisensory integration in the midbrain that is dominated by lateral panoramic vision.
Affiliation(s)
- Roberta Aralla
- Department of Neuroscience, School of Medicine and Health Sciences, Carl von Ossietzky Universität, Oldenburg, Germany
- Claire Pauley
- Department of Neuroscience, School of Medicine and Health Sciences, Carl von Ossietzky Universität, Oldenburg, Germany
- Christine Köppl
- Department of Neuroscience, School of Medicine and Health Sciences, Carl von Ossietzky Universität, Oldenburg, Germany
- Research Center for Neurosensory Sciences, Carl von Ossietzky Universität, Oldenburg, Germany
- Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität, Oldenburg, Germany
2
Fischer BJ, Shadron K, Ferger R, Peña JL. Single trial Bayesian inference by population vector readout in the barn owl's sound localization system. PLoS One 2024; 19:e0303843. [PMID: 38771860 PMCID: PMC11108143 DOI: 10.1371/journal.pone.0303843]
Abstract
Bayesian models have proven effective in characterizing perception, behavior, and neural encoding across diverse species and systems. The neural implementation of Bayesian inference in the barn owl's sound localization system and behavior has been previously explained by a non-uniform population code model. This model specifies the neural population activity pattern required for a population vector readout to match the optimal Bayesian estimate. While prior analyses focused on trial-averaged comparisons of model predictions with behavior and single-neuron responses, it remains unknown whether this model can accurately approximate Bayesian inference on single trials under varying sensory reliability, a fundamental condition for natural perception and behavior. In this study, we utilized mathematical analysis and simulations to demonstrate that decoding a non-uniform population code via a population vector readout approximates the Bayesian estimate on single trials for varying sensory reliabilities. Our findings provide additional support for the non-uniform population code model as a viable explanation for the barn owl's sound localization pathway and behavior.
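As a rough illustration of the comparison made in this paper, the short Python sketch below decodes one simulated trial from a nonuniform ITD map with a population vector and, separately, with an explicit Bayesian (posterior-mean) readout. All numbers (tuning width, firing rates, the distribution of preferred ITDs) are invented for illustration; this is not the authors' model and does not reproduce their analytical result, it only sets the two readouts side by side.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ITD map: preferred ITDs drawn more densely near 0 us (a nonuniform code).
pref = 200.0 * np.sign(rng.uniform(-1, 1, 200)) * rng.uniform(0, 1, 200) ** 2
sigma, peak_rate, dt = 40.0, 50.0, 0.1          # tuning width (us), peak rate (Hz), count window (s)
itd_grid = np.linspace(-200, 200, 401)

def rates(itd):
    """Mean firing rate of every neuron for a given stimulus ITD (Gaussian tuning)."""
    return peak_rate * np.exp(-0.5 * ((itd - pref) / sigma) ** 2)

true_itd = 120.0
counts = rng.poisson(rates(true_itd) * dt)      # one trial of spike counts across the map

# Population vector readout: preferred ITDs weighted by single-trial spike counts.
pv = np.sum(counts * pref) / np.sum(counts)

# Explicit Bayesian readout: posterior mean under independent Poisson likelihoods, flat prior.
log_post = np.array([np.sum(counts * np.log(rates(x) * dt + 1e-12)) - np.sum(rates(x) * dt)
                     for x in itd_grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()
bayes = np.sum(itd_grid * post)

print(f"true ITD {true_itd:.0f} us | population vector {pv:.1f} us | posterior mean {bayes:.1f} us")
```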
Affiliation(s)
- Brian J. Fischer
- Department of Mathematics, Seattle University, Seattle, Washington, United States of America
- Keanu Shadron
- Dominick P Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Roland Ferger
- Dominick P Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
- José L. Peña
- Dominick P Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
3
Wagner H, Egelhaaf M, Carr C. Model organisms and systems in neuroethology: one hundred years of history and a look into the future. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2024; 210:227-242. [PMID: 38227005 PMCID: PMC10995084 DOI: 10.1007/s00359-023-01685-z]
Abstract
The Journal of Comparative Physiology lived up to its name in the last 100 years by including more than 1500 different taxa in almost 10,000 publications. Seventeen phyla of the animal kingdom were represented. The honeybee (Apis mellifera) is the taxon with the most publications, followed by locust (Locusta migratoria), crayfishes (Cambarus spp.), and fruit fly (Drosophila melanogaster). The representation of species in this journal in the past thus differs considerably from the 13 model systems named by the National Institutes of Health (USA). We mention major accomplishments of research on species with specific adaptations, specialist animals, for example, the quantitative description of the processes underlying the action potential in squid (Loligo forbesii) and the isolation of the first receptor channel in the electric eel (Electrophorus electricus) and electric ray (Torpedo spp.). Future neuroethological work should make the recent genetic and technological developments available for specialist animals. There are many research questions left that may be answered with high yield in specialists and some questions that can only be answered in specialists. Moreover, the adaptations of animals that occupy specific ecological niches often lend themselves to biomimetic applications. We go into some depth in explaining our thoughts on research into motion vision in insects, sound localization in barn owls, and electroreception in weakly electric fish.
Affiliation(s)
- Hermann Wagner
- Institute of Biology II, RWTH Aachen University, 52074, Aachen, Germany
- Martin Egelhaaf
- Department of Neurobiology, Bielefeld University, Bielefeld, Germany
- Catherine Carr
- Department of Biology, University of Maryland at College Park, College Park, USA
4
Bae A, Peña JL. Barn owls' specialized sound-driven behavior: Lessons in optimal processing and coding by the auditory system. Hear Res 2024; 443:108952. [PMID: 38242019 DOI: 10.1016/j.heares.2024.108952]
Abstract
The barn owl, a nocturnal raptor with remarkably efficient prey-capturing abilities, has been one of the earliest animal models used for research on brain mechanisms underlying sound localization. Seminal findings made in its specialized sound-localizing auditory system include discoveries of a midbrain map of auditory space, mechanisms of spatial cue detection underlying sound-driven orienting behavior, and circuit-level changes supporting development and experience-dependent plasticity. These findings have explained properties of vital hearing functions and inspired theories in spatial hearing that extend across diverse animal species, thereby cementing the barn owl's legacy as a powerful experimental system for elucidating fundamental brain mechanisms. This concise review provides an overview of insights from the barn owl model system that exemplify the strength of investigating diversity and similarity of brain mechanisms across species. First, we discuss some of the key findings in the specialized system of the barn owl that elucidated brain mechanisms for detecting auditory cues for spatial hearing. Then we examine how the barn owl has validated mathematical computations and theories underlying optimal hearing across species. Lastly, we discuss how the barn owl has advanced investigations of developmental and experience-dependent plasticity in sound localization, as well as avenues for future research toward bridging commonalities across species. Analogous to the way astrophysics informs our understanding of nature through diverse exploration of planets, stars, and galaxies across the universe, research across diverse animal species pursues a broad understanding of natural brain mechanisms and behavior.
Affiliation(s)
- Andrea Bae
- Albert Einstein College of Medicine, NY, USA
- Jose L Peña
- Albert Einstein College of Medicine, NY, USA
5
Capshaw G, Brown AD, Peña JL, Carr CE, Christensen-Dalsgaard J, Tollin DJ, Womack MC, McCullagh EA. The continued importance of comparative auditory research to modern scientific discovery. Hear Res 2023; 433:108766. [PMID: 37084504 PMCID: PMC10321136 DOI: 10.1016/j.heares.2023.108766]
Abstract
A rich history of comparative research in the auditory field has afforded a synthetic view of sound information processing by ears and brains. Some organisms have proven to be powerful models for human hearing due to fundamental similarities (e.g., well-matched hearing ranges), while others feature intriguing differences (e.g., atympanic ears) that invite further study. Work across diverse "non-traditional" organisms, from small mammals to avians to amphibians and beyond, continues to propel auditory science forward, netting a variety of biomedical and technological advances along the way. In this brief review, limited primarily to tetrapod vertebrates, we discuss the continued importance of comparative studies in hearing research from the periphery to the central nervous system, with a focus on outstanding questions such as mechanisms for sound capture, peripheral and central processing of directional/spatial information, and non-canonical auditory processing, including efferent and hormonal effects.
Affiliation(s)
- Grace Capshaw
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA
- Andrew D Brown
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, USA
- José L Peña
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Catherine E Carr
- Department of Biology, University of Maryland, College Park, MD 20742, USA
- Daniel J Tollin
- Department of Physiology and Biophysics, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA; Department of Otolaryngology, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
- Molly C Womack
- Department of Biology, Utah State University, Logan, UT 84322, USA
- Elizabeth A McCullagh
- Department of Integrative Biology, Oklahoma State University, Stillwater, OK 74078, USA
6
Shadron K, Peña JL. Development of frequency tuning shaped by spatial cue reliability in the barn owl's auditory midbrain. eLife 2023; 12:e84760. [PMID: 37166099 PMCID: PMC10238092 DOI: 10.7554/elife.84760]
Abstract
Sensory systems preferentially strengthen responses to stimuli based on their reliability at conveying accurate information. While previous reports demonstrate that the brain reweighs cues based on dynamic changes in reliability, how the brain may learn and maintain neural responses to sensory statistics expected to be stable over time is unknown. The barn owl's midbrain features a map of auditory space where neurons compute horizontal sound location from the interaural time difference (ITD). Frequency tuning of midbrain map neurons correlates with the most reliable frequencies for the neurons' preferred ITD (Cazettes et al., 2014). Removal of the facial ruff led to a specific decrease in the reliability of high frequencies from frontal space. To directly test whether permanent changes in ITD reliability drive frequency tuning, midbrain map neurons were recorded from adult owls, with the facial ruff removed during development, and juvenile owls, before facial ruff development. In both groups, frontally tuned neurons were tuned to frequencies lower than in normal adult owls, consistent with the change in ITD reliability. In addition, juvenile owls exhibited more heterogeneous frequency tuning, suggesting normal developmental processes refine tuning to match ITD reliability. These results indicate causality of long-term statistics of spatial cues in the development of midbrain frequency tuning properties, implementing probabilistic coding for sound localization.
Affiliation(s)
- Keanu Shadron
- Dominick P Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, United States
- José Luis Peña
- Dominick P Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, United States
7
Chou KF, Boyd AD, Best V, Colburn HS, Sen K. A biologically oriented algorithm for spatial sound segregation. Front Neurosci 2022; 16:1004071. [PMID: 36312015 PMCID: PMC9614053 DOI: 10.3389/fnins.2022.1004071]
Abstract
Listening in an acoustically cluttered scene remains a difficult task for both machines and hearing-impaired listeners. Normal-hearing listeners accomplish this task with relative ease by segregating the scene into its constituent sound sources, then selecting and attending to a target source. An assistive listening device that mimics the biological mechanisms underlying this behavior may provide an effective solution for those with difficulty listening in acoustically cluttered environments (e.g., a cocktail party). Here, we present a binaural sound segregation algorithm based on a hierarchical network model of the auditory system. In the algorithm, binaural sound inputs first drive populations of neurons tuned to specific spatial locations and frequencies. The spiking responses of neurons in the output layer are then reconstructed into audible waveforms via a novel reconstruction method. We evaluate the performance of the algorithm with a speech-on-speech intelligibility task in normal-hearing listeners. This two-microphone-input algorithm is shown to provide listeners with perceptual benefit similar to that of a 16-microphone acoustic beamformer. These results demonstrate the promise of this biologically inspired algorithm for enhancing selective listening in challenging multi-talker scenes.
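As a loose illustration of the idea of segregating a scene by spatial tuning and then reconstructing a waveform, the sketch below applies a crude interaural-phase mask in the time-frequency domain to a two-source, two-ear toy mixture. It is not the published spiking-network algorithm; the stimuli, the 0.5 ms masker delay, and the phase threshold are arbitrary choices made for the example.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(0, 1.0, 1 / fs)

# Two synthetic sources: a target with zero interaural delay and a masker delayed at one ear.
target = np.sin(2 * np.pi * 440 * t)
masker = np.sin(2 * np.pi * 700 * t)
delay = int(0.0005 * fs)                               # ~0.5 ms interaural delay for the masker
left = target + masker
right = target + np.r_[np.zeros(delay), masker[:-delay]]

# Analyze both ear signals into time-frequency bins.
freqs, frames, L = stft(left, fs, nperseg=512)
_, _, R = stft(right, fs, nperseg=512)

# Interaural phase difference per bin; bins dominated by the zero-delay target have IPD near 0.
ipd = np.angle(L * np.conj(R))
mask = (np.abs(ipd) < 0.5).astype(float)               # crude spatial gate around the target direction

_, recovered = istft(L * mask, fs, nperseg=512)        # reconstruct an audible waveform
n = min(recovered.size, target.size)
print("correlation of masked output with clean target:",
      round(float(np.corrcoef(recovered[:n], target[:n])[0, 1]), 2))
```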
Affiliation(s)
- Kenny F. Chou
- Department of Biomedical Engineering, Boston University, Boston, MA, United States
- Alexander D. Boyd
- Department of Biomedical Engineering, Boston University, Boston, MA, United States
- Virginia Best
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States
- H. Steven Colburn
- Department of Biomedical Engineering, Boston University, Boston, MA, United States
- Kamal Sen
- Department of Biomedical Engineering, Boston University, Boston, MA, United States
- Correspondence: Kamal Sen
8
Ferger R, Shadron K, Fischer BJ, Peña JL. Barn Owl's Auditory Space Map Activity Matching Conditions for a Population Vector Readout to Drive Adaptive Sound-Localizing Behavior. J Neurosci 2021; 41:10305-10315. [PMID: 34764158 PMCID: PMC8672686 DOI: 10.1523/jneurosci.1061-21.2021]
Abstract
Space-specific neurons in the owl's midbrain form a neural map of auditory space, which supports sound-orienting behavior. Previous work proposed that a population vector (PV) readout of this map, implementing statistical inference, predicts the owl's sound localization behavior. This model also predicts the frontal localization bias normally observed and how sound-localizing behavior changes when the signal-to-noise ratio varies, based on the spread of activity across the map. However, the actual distribution of population activity and whether this pattern is consistent with premises of the PV readout model on a trial-by-trial basis remain unknown. To answer these questions, we investigated whether the population response profile across the midbrain map in the optic tectum of the barn owl matches these predictions using in vivo multielectrode array recordings. We found that response profiles of recorded subpopulations are sufficient for estimating the stimulus interaural time difference using responses from single trials. Furthermore, this decoder matches the expected differences in trial-by-trial variability and frontal bias between stimulus conditions of low and high signal-to-noise ratio. These results support the hypothesis that a PV readout of the midbrain map can mediate statistical inference in sound-localizing behavior of barn owls.
SIGNIFICANCE STATEMENT: While the tuning of single neurons in the owl's midbrain map of auditory space has been considered predictive of the highly specialized sound-localizing behavior of this species, response properties across the population remain largely unknown. For the first time, this study analyzed the spread of population responses across the map using multielectrode recordings and how it changes with signal-to-noise ratio. The observed responses support the hypothesis concerning the ability of a population vector readout to predict biases in orienting behaviors and mediate uncertainty-dependent behavioral commands. The results are of significance for understanding potential mechanisms for the implementation of optimal behavioral commands across species.
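A minimal sketch of the single-trial population-vector readout discussed here, under assumed Gaussian ITD tuning and Poisson spiking (parameters invented, not the owl's): with weaker stimulus-driven rates and more background activity across the map, the decoded estimates both scatter more from trial to trial and are pulled toward the center of the map, the kind of frontal bias described above.

```python
import numpy as np

rng = np.random.default_rng(1)

pref = np.linspace(-200, 200, 81)        # preferred ITDs (us) across a hypothetical map
sigma, dt = 40.0, 0.1                    # tuning width (us), spike-count window (s)

def decode_trials(true_itd, peak_rate, background, n_trials=500):
    """Population-vector ITD estimates from single-trial Poisson spike counts."""
    mean = peak_rate * np.exp(-0.5 * ((true_itd - pref) / sigma) ** 2) + background
    counts = rng.poisson(mean * dt, size=(n_trials, pref.size))
    return counts @ pref / counts.sum(axis=1)

for label, peak, bg in [("high SNR", 80.0, 2.0), ("low SNR", 20.0, 10.0)]:
    est = decode_trials(true_itd=100.0, peak_rate=peak, background=bg)
    print(f"{label}: mean estimate {est.mean():6.1f} us, trial-to-trial SD {est.std():5.1f} us")
```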
Affiliation(s)
- Roland Ferger
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, 10461
- Keanu Shadron
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, 10461
- Brian J Fischer
- Department of Mathematics, Seattle University, Seattle, Washington 98122
- José L Peña
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, 10461
9
Gorman JC, Tufte OL, Miller AVR, DeBello WM, Peña JL, Fischer BJ. Diverse processing underlying frequency integration in midbrain neurons of barn owls. PLoS Comput Biol 2021; 17:e1009569. [PMID: 34762650 PMCID: PMC8610287 DOI: 10.1371/journal.pcbi.1009569]
Abstract
Emergent response properties of sensory neurons depend on circuit connectivity and somatodendritic processing. Neurons of the barn owl's external nucleus of the inferior colliculus (ICx) display emergence of spatial selectivity. These neurons use interaural time difference (ITD) as a cue for the horizontal direction of sound sources. ITD is detected by upstream brainstem neurons with narrow frequency tuning, resulting in spatially ambiguous responses. This spatial ambiguity is resolved by ICx neurons integrating inputs over frequency, a processing step relevant to sound localization across species. Previous models have predicted that ICx neurons function as point neurons that linearly integrate inputs across frequency. However, the complex dendritic trees and spines of ICx neurons raise the question of whether this prediction is accurate. Data from in vivo intracellular recordings of ICx neurons were used to address this question. Results revealed diverse frequency integration properties, where some ICx neurons showed responses consistent with the point neuron hypothesis and others with nonlinear dendritic integration. Modeling showed that varied connectivity patterns and forms of dendritic processing may underlie observed ICx neurons' frequency integration processing. These results corroborate the ability of neurons with complex dendritic trees to implement diverse linear and nonlinear integration of synaptic inputs, of relevance for adaptive coding and learning, and supporting a fundamental mechanism in sound localization.
Neurons at higher stages of sensory pathways often display selectivity for properties of sensory stimuli that result from computations performed within the nervous system. These emergent response properties can be produced by patterns of neural connectivity and processing that occur within individual cells. Here we investigated whether neural connectivity and single-neuron computation may contribute to the emergence of spatial selectivity in auditory neurons in the barn owl's midbrain. We used data from in vivo intracellular recordings to test the hypothesis from previous modeling work that these cells function as point neurons that perform a linear sum of their inputs in their subthreshold responses. Results indicate that while some neurons show responses consistent with the point neuron hypothesis, others match predictions of nonlinear integration, indicating a diversity of frequency integration properties across neurons. Modeling further showed that varied connectivity patterns and forms of single-neuron computation may underlie observed responses. These results demonstrate that neurons with complex morphologies may implement diverse integration of synaptic inputs, relevant for adaptive coding and learning.
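The following toy contrasts the two integration schemes named above, linear point-neuron summation versus a saturating per-branch nonlinearity, applied to phase-ambiguous narrowband ITD inputs; the frequencies, tuning shapes, and tanh nonlinearity are illustrative assumptions, not fits to ICx data.

```python
import numpy as np

itd = np.linspace(-200, 200, 401)                     # candidate ITDs (us)
freqs = np.array([3000, 4000, 5000, 6000, 7000])      # Hz, illustrative input band
true_itd = 50.0

# Narrowband inputs are phase-ambiguous: cosine tuning to ITD at each input frequency.
inputs = np.cos(2 * np.pi * freqs[:, None] * (itd[None, :] - true_itd) * 1e-6)

linear = inputs.sum(axis=0)                           # point-neuron: linear sum across frequency
nonlinear = np.tanh(2 * inputs).sum(axis=0)           # saturating "dendritic branch" per channel

for name, resp in [("linear", linear), ("nonlinear", nonlinear)]:
    print(f"{name:9s} integration peaks at ITD = {itd[np.argmax(resp)]:.0f} us")
```

In both cases the across-frequency sum singles out the true ITD from the periodic side peaks; the two schemes differ in the shape and gain of the integrated response.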
Affiliation(s)
- Julia C. Gorman
- Department of Mathematics, Seattle University, Seattle, Washington, United States of America
- Oliver L. Tufte
- Department of Mathematics, Seattle University, Seattle, Washington, United States of America
- Anna V. R. Miller
- Department of Mathematics, Seattle University, Seattle, Washington, United States of America
- William M. DeBello
- Center for Neuroscience, University of California - Davis, Davis, California, United States of America
- José L. Peña
- Dominick P Purpura Department of Neuroscience, Albert Einstein College of Medicine, New York, New York, United States of America
- Brian J. Fischer
- Department of Mathematics, Seattle University, Seattle, Washington, United States of America
10
Redundancy between spectral and higher-order texture statistics for natural image segmentation. Vision Res 2021; 187:55-65. [PMID: 34217005 DOI: 10.1016/j.visres.2021.06.007]
Abstract
Visual texture, defined by local image statistics, provides important information to the human visual system for perceptual segmentation. Second-order or spectral statistics (equivalent to the Fourier power spectrum) are a well-studied segmentation cue. However, the role of higher-order statistics (HOS) in segmentation remains unclear, particularly for natural images. Recent experiments indicate that, in peripheral vision, the HOS of the widely adopted Portilla-Simoncelli texture model are a weak segmentation cue compared to spectral statistics, despite the fact that both are necessary to explain other perceptual phenomena and to support high-quality texture synthesis. Here we test whether this discrepancy reflects a property of natural image statistics. First, we observe that differences in spectral statistics across segments of natural images are redundant with differences in HOS. Second, using linear and nonlinear classifiers, we show that each set of statistics individually affords high performance in natural scenes and texture segmentation tasks, but combining spectral statistics and HOS produces relatively small improvements. Third, we find that HOS improve segmentation for a subset of images, although these images are difficult to identify. We also find that different subsets of HOS improve segmentation to a different extent, in agreement with previous physiological and perceptual work. These results show that the HOS add modestly to spectral statistics for natural image segmentation. We speculate that tuning to natural image statistics under resource constraints could explain the weak contribution of HOS to perceptual segmentation in human peripheral vision.
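A hedged, self-contained sketch of segmentation from spectral statistics alone: two synthetic textures differing only in the exponent of their 1/f power spectrum are classified from band-energy features with a nearest-class-mean rule. This is far simpler than the Portilla-Simoncelli statistics and the classifiers used in the study; it is meant only to show what "segmentation from spectral statistics" means operationally.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_texture(alpha, n=64):
    """Synthetic texture with a 1/f^alpha power spectrum (classes differ only in alpha)."""
    fx = np.fft.fftfreq(n)[:, None]; fy = np.fft.fftfreq(n)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2); radius[0, 0] = 1.0
    spectrum = (radius ** -alpha) * np.exp(2j * np.pi * rng.random((n, n)))
    return np.real(np.fft.ifft2(spectrum))

def spectral_features(patch):
    """Log energy in low/mid/high radial frequency bands (a crude spectral-statistics vector)."""
    power = np.abs(np.fft.fft2(patch)) ** 2
    n = patch.shape[0]
    fx = np.fft.fftfreq(n)[:, None]; fy = np.fft.fftfreq(n)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    bands = [(0.0, 0.1), (0.1, 0.25), (0.25, 0.5)]
    return np.array([np.log(power[(radius >= lo) & (radius < hi)].sum() + 1e-9) for lo, hi in bands])

# Two texture classes; nearest-class-mean classification of held-out patches.
train = {c: np.array([spectral_features(make_texture(a)) for _ in range(50)]) for c, a in [(0, 1.0), (1, 2.0)]}
means = {c: feats.mean(axis=0) for c, feats in train.items()}
test = [(c, spectral_features(make_texture(a))) for c, a in [(0, 1.0), (1, 2.0)] for _ in range(50)]
acc = np.mean([c == min(means, key=lambda k: np.linalg.norm(f - means[k])) for c, f in test])
print(f"classification accuracy from spectral statistics alone: {acc:.2f}")
```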
11
Pavão R, Sussman ES, Fischer BJ, Peña JL. Natural ITD statistics predict human auditory spatial perception. eLife 2020; 9:e51927. [PMID: 33043884 PMCID: PMC7661036 DOI: 10.7554/elife.51927]
Abstract
A neural code adapted to the statistical structure of sensory cues may optimize perception. We investigated whether interaural time difference (ITD) statistics inherent in natural acoustic scenes are parameters determining spatial discriminability. The natural ITD rate of change across azimuth (ITDrc) and ITD variability over time (ITDv) were combined in a Fisher information statistic to assess the amount of azimuthal information conveyed by this sensory cue. We hypothesized that natural ITD statistics underlie the neural code for ITD and thus influence spatial perception. To test this hypothesis, sounds with invariant statistics were presented to measure human spatial discriminability and spatial novelty detection. Human auditory spatial perception showed correlation with natural ITD statistics, supporting our hypothesis. Further analysis showed that these results are consistent with classic models of ITD coding and can explain the ITD tuning distribution observed in the mammalian brainstem.
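A small numerical sketch of the Fisher-information combination described above: the azimuthal information carried by ITD is taken as the squared ITD rate of change across azimuth divided by the ITD variance, and the predicted discrimination threshold scales as its inverse square root. The spherical-head ITD formula and the assumed growth of ITD variability with azimuth are stand-ins, not the HRTF-derived statistics used in the paper.

```python
import numpy as np

az = np.radians(np.linspace(0, 80, 9))             # azimuths (rad)
a, c = 0.09, 343.0                                  # head radius (m) and speed of sound (m/s) -- toy values

itd = (a / c) * (az + np.sin(az)) * 1e6             # Woodworth-style ITD in microseconds
itd_rc = np.gradient(itd, az)                       # ITD rate of change across azimuth (us/rad)
itd_sd = 5.0 + 40.0 * np.sin(az)                    # assumed ITD variability over time (us), larger laterally

fisher = (itd_rc / itd_sd) ** 2                     # Fisher information about azimuth carried by ITD
threshold = np.degrees(1.0 / np.sqrt(fisher))       # predicted discrimination threshold (deg)

for theta, th in zip(np.degrees(az), threshold):
    print(f"azimuth {theta:4.0f} deg -> predicted threshold {th:5.2f} deg")
```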
Affiliation(s)
- Rodrigo Pavão
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, New York, United States
- Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, Santo André, Brazil
- Elyse S Sussman
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, New York, United States
- Brian J Fischer
- Department of Mathematics, Seattle University, Seattle, United States
- José L Peña
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, New York, United States
12
Effect of Stimulus-Dependent Spike Timing on Population Coding of Sound Location in the Owl's Auditory Midbrain. eNeuro 2020; 7:ENEURO.0244-19.2020. [PMID: 32188709 PMCID: PMC7189487 DOI: 10.1523/eneuro.0244-19.2020]
Abstract
In the auditory system, the spectrotemporal structure of acoustic signals determines the temporal pattern of spikes. Here, we investigated this effect in neurons of the auditory midbrain of the barn owl (Tyto furcata) that are selective for auditory space, and asked whether it can influence the coding of sound direction. We found that in the nucleus where neurons first become selective to combinations of sound localization cues, reproducibility of spike trains across repeated trials of identical sounds, a metric of across-trial temporal fidelity of spiking patterns evoked by a stimulus, was maximal at the sound direction that elicited the highest firing rate. We then tested the hypothesis that this stimulus-dependent patterning resulted in rate co-modulation of cells with similar frequency and spatial selectivity, driving stimulus-dependent synchrony of population responses. Tetrodes were used to simultaneously record multiple nearby units in the optic tectum (OT), where auditory space is topographically represented. While spiking of neurons in OT showed lower reproducibility across trials compared with upstream nuclei, spike-time synchrony between nearby OT neurons was highest for sounds at their preferred direction. A model of the midbrain circuit explained the relationship between stimulus-dependent reproducibility and synchrony, and demonstrated that this effect can improve the decoding of sound location from the OT output. Thus, stimulus-dependent spiking patterns in the auditory midbrain can have an effect on spatial coding. This study reports a functional connection between spike patterning elicited by spectrotemporal features of a sound and the coding of its location.
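As an illustration of the reproducibility metric mentioned above, the sketch below measures across-trial correlation of smoothed spike trains for a weakly versus a strongly modulated Poisson rate; the same shared rate modulation is what would also synchronize two nearby neurons. Rates, smoothing window, and trial counts are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, T = 0.001, 2.0
t = np.arange(0, T, dt)

def trials(rate, n_trials=20):
    """Binary spike trains (1 ms bins) drawn from an inhomogeneous Poisson rate shared across trials."""
    return (rng.random((n_trials, t.size)) < rate * dt).astype(float)

def reproducibility(spikes):
    """Mean pairwise correlation of smoothed responses across repeated trials."""
    k = np.ones(10) / 10                               # 10 ms boxcar smoothing
    smoothed = np.array([np.convolve(tr, k, mode="same") for tr in spikes])
    c = np.corrcoef(smoothed)
    return c[np.triu_indices_from(c, k=1)].mean()

flat = 20.0 * np.ones_like(t)                          # weakly modulated (non-preferred direction)
driven = 20.0 + 60.0 * (np.sin(2 * np.pi * 3 * t) > 0) # strongly modulated (preferred direction)

for label, rate in [("non-preferred", flat), ("preferred", driven)]:
    print(f"{label:13s}: across-trial reproducibility = {reproducibility(trials(rate)):.3f}")
```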
13
Schillberg P, Brill S, Nikolay P, Ferger R, Gerhard M, Führ H, Wagner H. Sound localization in barn owls studied with manipulated head-related transfer functions: beyond broadband interaural time and level differences. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2020; 206:477-498. [PMID: 32140774 DOI: 10.1007/s00359-020-01410-0]
Abstract
Interaural time and level differences are important cues for sound localization. We wondered whether the broadband information contained in these two cues could fully explain the behavior of barn owls and responses of midbrain neurons in these birds. To tackle this problem, we developed a novel approach based on head-related transfer functions. These filters contain the complete information present at the eardrum. We selected positions in space characterized by equal broadband interaural time and level differences. Stimulation from such positions provides reduced information to the owl. We show that barn owls are able to discriminate between such positions. In many cases, but not all, the owls may have used spectral components of interaural level differences that exceeded the known behavioral resolution and variability for discrimination. Alternatively, the birds may have used template matching. Likewise, neurons in the optic tectum of the barn owl, a nucleus involved in sensorimotor integration, contained more information than is available in the broadband interaural time and level differences. Thus, these data show that more information is available and used by barn owls for sound localization than carried by broadband interaural time and level differences.
Affiliation(s)
- Patrick Schillberg
- Institute of Biology II, RWTH Aachen University, Worringerweg 3, 52074, Aachen, Germany
- Sandra Brill
- Institute of Biology II, RWTH Aachen University, Worringerweg 3, 52074, Aachen, Germany
- Petra Nikolay
- Institute of Biology II, RWTH Aachen University, Worringerweg 3, 52074, Aachen, Germany
- Roland Ferger
- Institute of Biology II, RWTH Aachen University, Worringerweg 3, 52074, Aachen, Germany; Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
- Maike Gerhard
- Lehrstuhl A für Mathematik, RWTH Aachen University, Templergraben 55, 52056, Aachen, Germany
- Hartmut Führ
- Lehrstuhl A für Mathematik, RWTH Aachen University, Templergraben 55, 52056, Aachen, Germany
- Hermann Wagner
- Institute of Biology II, RWTH Aachen University, Worringerweg 3, 52074, Aachen, Germany
14
Synthesis of Hemispheric ITD Tuning from the Readout of a Neural Map: Commonalities of Proposed Coding Schemes in Birds and Mammals. J Neurosci 2019; 39:9053-9061. [PMID: 31570537 DOI: 10.1523/jneurosci.0873-19.2019]
Abstract
A major cue to infer sound direction is the difference in arrival time of the sound at the left and right ears, called interaural time difference (ITD). The neural coding of ITD and its similarity across species have been strongly debated. In the barn owl, an auditory specialist relying on sound localization to capture prey, ITDs within the physiological range determined by the head width are topographically represented at each frequency. The topographic representation suggests that sound direction may be inferred from the location of maximal neural activity within the map. Such topographical representation of ITD, however, is not evident in mammals. Instead, the preferred ITD of neurons in the mammalian brainstem often lies outside the physiological range and depends on the neuron's best frequency. Because of these disparities, it has been assumed that how spatial hearing is achieved in birds and mammals is fundamentally different. However, recent studies reveal ITD responses in the owl's forebrain and midbrain premotor area that are consistent with coding schemes proposed in mammals. Particularly, sound location in owls could be decoded from the relative firing rates of two broadly and inversely ITD-tuned channels. This evidence suggests that, at downstream stages, the code for ITD may not be qualitatively different across species. Thus, while experimental evidence continues to support the notion of differences in ITD representation across species and brain regions, the latest results indicate notable commonalities, suggesting that codes driving orienting behavior in mammals and birds may be comparable.
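A minimal sketch of the two-channel ("hemispheric") rate code mentioned here: two broadly and inversely ITD-tuned channels are read out through their normalized rate difference, which is monotonic in ITD and therefore invertible. The sigmoid tuning and noise level are illustrative assumptions, not fits to owl or mammalian data.

```python
import numpy as np

itd = np.linspace(-200, 200, 401)                            # us

def channel(itd_vals, sign):
    """Broad, sigmoidal ITD tuning of one hemispheric channel (illustrative parameters)."""
    return 1.0 / (1.0 + np.exp(-sign * itd_vals / 50.0))

right_chan, left_chan = channel(itd, +1), channel(itd, -1)
code = (right_chan - left_chan) / (right_chan + left_chan)   # normalized rate difference, monotonic in ITD

# Decode a few test ITDs by inverting the monotonic code with interpolation.
rng = np.random.default_rng(4)
for true in (-120.0, -30.0, 60.0):
    observed = (channel(true, +1) - channel(true, -1)) / (channel(true, +1) + channel(true, -1))
    observed += rng.normal(0, 0.02)                          # trial-to-trial rate noise
    estimate = np.interp(observed, code, itd)
    print(f"true ITD {true:6.1f} us -> decoded {estimate:6.1f} us")
```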
15
Fang Y, Yu Z, Liu JK, Chen F. A unified neural circuit of causal inference and multisensory integration. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.05.067]
16
Krumm B, Klump GM, Köppl C, Langemann U. The barn owls' Minimum Audible Angle. PLoS One 2019; 14:e0220652. [PMID: 31442234 PMCID: PMC6707599 DOI: 10.1371/journal.pone.0220652]
Abstract
Interaural time differences (ITD) and interaural level differences (ILD) are physical cues that enable the auditory system to pinpoint the position of a sound source in space. This ability is crucial for animal communication and predator-prey interactions. The barn owl has evolved an exceptional sense of hearing and shows abilities of sound localisation that outperform most other species. So far, behavioural studies in the barn owl often used reflexive responses to investigate aspects of sound localisation. Furthermore, they predominately probed the higher frequencies of the owl's hearing range (> 3 kHz). In the present study we used a Go/NoGo paradigm to measure the barn owl's behavioural sound localisation acuity (expressed as the Minimum Audible Angle, MAA) as a function of stimulus type (narrow-band noise centred at 500, 1000, 2000, 4000 and 8000 Hz, and broad-band noise) and sound source position. We found significant effects of both stimulus type and sound source position on the barn owls' MAA. The MAA improved with increasing stimulus frequency, from 14° at 500 Hz to 6° at 8000 Hz. The smallest MAA of 4° was found for broadband noise stimuli. Comparing different sound source positions revealed smaller MAAs for frontal compared to lateral stimulus presentation, irrespective of stimulus type. These results are consistent with both the known variations in physical ITDs and variation in the width of neural ITD tuning curves with azimuth and frequency. Physical and neural characteristics combine to result in better spatial acuity for frontal compared to lateral sounds and reduced localisation acuity at lower frequencies.
Affiliation(s)
- Bianca Krumm
- Cluster of Excellence “Hearing4all”, Division for Animal Physiology and Behaviour, School of Medicine and Health Sciences, Department of Neuroscience, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, Division for Cochlea and auditory brainstem physiology, School of Medicine and Health Sciences, Department of Neuroscience, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Georg M. Klump
- Cluster of Excellence “Hearing4all”, Division for Animal Physiology and Behaviour, School of Medicine and Health Sciences, Department of Neuroscience, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Christine Köppl
- Cluster of Excellence “Hearing4all”, Division for Cochlea and auditory brainstem physiology, School of Medicine and Health Sciences, Department of Neuroscience, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Ulrike Langemann
- Cluster of Excellence “Hearing4all”, Division for Animal Physiology and Behaviour, School of Medicine and Health Sciences, Department of Neuroscience, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
17
Emergence of an Adaptive Command for Orienting Behavior in Premotor Brainstem Neurons of Barn Owls. J Neurosci 2018; 38:7270-7279. [PMID: 30012694 DOI: 10.1523/jneurosci.0947-18.2018]
Abstract
The midbrain map of auditory space commands sound-orienting responses in barn owls. Owls precisely localize sounds in frontal space but underestimate the direction of peripheral sound sources. This bias for central locations was proposed to be adaptive to the decreased reliability in the periphery of sensory cues used for sound localization by the owl. Understanding the neural pathway supporting this biased behavior provides a means to address how adaptive motor commands are implemented by neurons. Here we find that the sensory input for sound direction is weighted by its reliability in premotor neurons of the midbrain tegmentum of owls (male and female), such that the mean population firing rate approximates the head-orienting behavior. We provide evidence that this coding may emerge through convergence of upstream projections from the midbrain map of auditory space. We further show that manipulating the sensory input yields changes predicted by the convergent network in both premotor neural responses and behavior. This work demonstrates how a topographic sensory representation can be linearly read out to adjust behavioral responses by the reliability of the sensory input.
SIGNIFICANCE STATEMENT: This research shows how statistics of the sensory input can be integrated into a behavioral command by readout of a sensory representation. The firing rate of midbrain premotor neurons receiving sensory information from a topographic representation of auditory space is weighted by the reliability of sensory cues. We show that these premotor responses are consistent with a weighted convergence from the topographic sensory representation. This convergence was also tested behaviorally, where manipulation of stimulus properties led to bidirectional changes in sound localization errors. Thus a topographic representation of auditory space is translated into a premotor command for sound localization that is modulated by sensory reliability.
18
Combination of Interaural Level and Time Difference in Azimuthal Sound Localization in Owls. eNeuro 2018; 4:eN-NWR-0238-17. [PMID: 29379866 PMCID: PMC5779116 DOI: 10.1523/eneuro.0238-17.2017]
Abstract
A function of the auditory system is to accurately determine the location of a sound source. The main cues for sound location are interaural time (ITD) and level (ILD) differences. Humans use both ITD and ILD to determine the azimuth. Thus far, the conception of sound localization in barn owls was that their facial ruff and asymmetrical ears generate a two-dimensional grid of ITD for azimuth and ILD for elevation. We show that barn owls also use ILD for azimuthal sound localization when ITDs are ambiguous. For high-frequency narrowband sounds, midbrain neurons can signal multiple locations, leading to the perception of an auditory illusion called a phantom source. Owls respond to such an illusory percept by orienting toward it instead of the true source. Acoustical measurements close to the eardrum reveal a small ILD component that changes with azimuth, suggesting that ITD and ILD information could be combined to eliminate the illusion. Our behavioral data confirm that perception was robust against ambiguities if ITD and ILD information was combined. Electrophysiological recordings of ILD sensitivity in the owl’s midbrain support the behavioral findings indicating that rival brain hemispheres drive the decision to orient to either true or phantom sources. Thus, the basis for disambiguation, and reliable detection of sound source azimuth, relies on similar cues across species as similar response to combinations of ILD and narrowband ITD has been observed in humans.
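The disambiguation logic described above can be sketched in a few lines: a narrowband ITD constrains azimuth only up to multiples of the stimulus period (the true location plus phantoms), and a small azimuth-dependent ILD selects among the candidates. The linear cue-versus-azimuth slopes used below are made-up numbers, not owl acoustics.

```python
import numpy as np

freq = 7000.0                                    # Hz, high-frequency narrowband stimulus
period_us = 1e6 / freq                           # ITD is ambiguous modulo this period (~143 us)
itd_per_deg, ild_per_deg = 2.5, 0.05             # assumed linear cue/azimuth slopes (us/deg, dB/deg)

true_az = 40.0
measured_itd = (true_az * itd_per_deg) % period_us       # phase-ambiguous ITD measurement
measured_ild = true_az * ild_per_deg                     # small but azimuth-dependent ILD

# Candidate azimuths consistent with the measured interaural phase (true + phantom sources).
k = np.arange(-3, 4)
candidates = (measured_itd + k * period_us) / itd_per_deg
candidates = candidates[(candidates > -90) & (candidates < 90)]

# Pick the candidate whose predicted ILD best matches the measurement.
best = candidates[np.argmin(np.abs(candidates * ild_per_deg - measured_ild))]
print("candidate azimuths (deg):", np.round(candidates, 1), "-> resolved:", round(best, 1))
```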
19
Ursino M, Crisafulli A, di Pellegrino G, Magosso E, Cuppini C. Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study. Front Comput Neurosci 2017; 11:89. [PMID: 29046631 PMCID: PMC5633019 DOI: 10.3389/fncom.2017.00089]
Abstract
The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two unimodal areas (auditory and visual) that are topologically organized. Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model, including a more realistic distribution of visual stimuli: visual stimuli have a higher spatial accuracy at the central azimuthal coordinate and a lower accuracy at the periphery. Moreover, their prior probability is higher at the center, and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross-modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in the case of spatially disparate stimuli. Moreover, the ventriloquism effect decreases with eccentricity.
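A very loose, single-modality toy of the Hebbian-plus-decay training described above (the actual model has two topologically organized unimodal layers plus cross-modal synapses): stimuli drawn more often near the center, a winner-take-most output layer, and a Hebbian update with a decay term lead the learned preferred positions to crowd toward the center, qualitatively like the prior-encoding contraction reported here. All parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
positions = np.linspace(-40, 40, 81)               # azimuth coordinates of the input layer (deg)
n_out = 21
w = rng.uniform(0, 0.1, (n_out, positions.size))   # feedforward synapses, initially unstructured

lr, decay, sigma_in = 0.05, 0.01, 4.0

for _ in range(3000):
    # Events are more frequent near the center -- the assumed prior over stimulus position.
    src = np.clip(rng.normal(0, 12), -40, 40)
    pre = np.exp(-0.5 * ((positions - src) / sigma_in) ** 2)           # input layer activity
    post = w @ pre
    post = np.where(post >= np.partition(post, -3)[-3], post, 0.0)     # crude winner-take-most
    w += lr * np.outer(post, pre) - decay * w * post[:, None]          # Hebbian potentiation + decay
    w = np.clip(w, 0.0, 1.0)

pref = positions[w.argmax(axis=1)]                  # learned preferred positions of output neurons
print("learned preferred positions (deg):", np.round(np.sort(pref), 1))
```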
Affiliation(s)
- Mauro Ursino
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Bologna, Italy
- Elisa Magosso
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Bologna, Italy
- Cristiano Cuppini
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Bologna, Italy
20
Distinct Correlation Structure Supporting a Rate-Code for Sound Localization in the Owl's Auditory Forebrain. eNeuro 2017; 4:eN-NWR-0144-17. [PMID: 28674698 PMCID: PMC5492684 DOI: 10.1523/eneuro.0144-17.2017]
Abstract
While a topographic map of auditory space exists in the vertebrate midbrain, it is absent in the forebrain. Yet, both brain regions are implicated in sound localization. The heterogeneous spatial tuning of adjacent sites in the forebrain compared to the midbrain reflects different underlying circuitries, which is expected to affect the correlation structure, i.e., signal (similarity of tuning) and noise (trial-by-trial variability) correlations. Recent studies have drawn attention to the impact of response correlations on the information readout from a neural population. We thus analyzed the correlation structure in midbrain and forebrain regions of the barn owl’s auditory system. Tetrodes were used to record in the midbrain and two forebrain regions, Field L and the downstream auditory arcopallium (AAr), in anesthetized owls. Nearby neurons in the midbrain showed high signal and noise correlations (RNCs), consistent with shared inputs. As previously reported, Field L was arranged in random clusters of similarly tuned neurons. Interestingly, AAr neurons displayed homogeneous monotonic azimuth tuning, while response variability of nearby neurons was significantly less correlated than the midbrain. Using a decoding approach, we demonstrate that low RNC in AAr restricts the potentially detrimental effect it can have on information, assuming a rate code proposed for mammalian sound localization. This study harnesses the power of correlation structure analysis to investigate the coding of auditory space. Our findings demonstrate distinct correlation structures in the auditory midbrain and forebrain, which would be beneficial for a rate-code framework for sound localization in the nontopographic forebrain representation of auditory space.
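For readers unfamiliar with the two quantities, the sketch below computes signal correlation (correlation of trial-averaged tuning curves) and noise correlation (correlation of trial-to-trial residuals at a fixed stimulus) for a simulated pair of similarly tuned neurons that share a source of variability; tuning shapes and noise levels are arbitrary example values, not owl data.

```python
import numpy as np

rng = np.random.default_rng(6)
directions = np.linspace(-90, 90, 13)             # stimulus azimuths (deg)
n_trials = 100

def tuning(center):
    return 5 + 40 * np.exp(-0.5 * ((directions - center) / 30.0) ** 2)

mean_a, mean_b = tuning(-10), tuning(10)          # similarly tuned pair (as for nearby midbrain sites)
shared = rng.normal(0, 1, (n_trials, directions.size))     # shared trial-to-trial fluctuation
resp_a = mean_a + 4 * shared + rng.normal(0, 3, shared.shape)
resp_b = mean_b + 4 * shared + rng.normal(0, 3, shared.shape)

signal_corr = np.corrcoef(resp_a.mean(0), resp_b.mean(0))[0, 1]
residual_a, residual_b = resp_a - resp_a.mean(0), resp_b - resp_b.mean(0)
noise_corr = np.corrcoef(residual_a.ravel(), residual_b.ravel())[0, 1]

print(f"signal correlation {signal_corr:.2f}, noise correlation {noise_corr:.2f}")
```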
21
Fischer BJ, Peña JL. Optimal nonlinear cue integration for sound localization. J Comput Neurosci 2017; 42:37-52. [PMID: 27714569 PMCID: PMC5253079 DOI: 10.1007/s10827-016-0626-4]
Abstract
Integration of multiple sensory cues can improve performance in detection and estimation tasks. There is an open theoretical question of the conditions under which linear or nonlinear cue combination is Bayes-optimal. We demonstrate that a neural population decoded by a population vector requires nonlinear cue combination to approximate Bayesian inference. Specifically, if cues are conditionally independent, multiplicative cue combination is optimal for the population vector. The model was tested on neural and behavioral responses in the barn owl's sound localization system where space-specific neurons owe their selectivity to multiplicative tuning to sound localization cues interaural phase (IPD) and level (ILD) differences. We found that IPD and ILD cues are approximately conditionally independent. As a result, the multiplicative combination selectivity to IPD and ILD of midbrain space-specific neurons permits a population vector to perform Bayesian cue combination. We further show that this model describes the owl's localization behavior in azimuth and elevation. This work provides theoretical justification and experimental evidence supporting the optimality of nonlinear cue combination.
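The optimality argument can be illustrated directly: if each cue's likelihood over direction is Gaussian and the cues are conditionally independent, neurons that multiply their cue tunings form a population whose population-vector estimate lands on the precision-weighted (Bayesian) combination, whereas an additive population does not. In the sketch below both cues are expressed directly in direction units and the likelihood widths are assumed values, a simplification of the IPD/ILD model in the paper.

```python
import numpy as np

pref = np.linspace(-90, 90, 181)                 # preferred directions of the population (deg)
sigma_itd, sigma_ild = 6.0, 15.0                 # assumed likelihood widths of the two cues (deg)

def tuning(cue, sigma):
    """Gaussian tuning of every neuron to one cue, evaluated at its observed value."""
    return np.exp(-0.5 * ((cue - pref) / sigma) ** 2)

cue_itd, cue_ild = 12.0, 30.0                    # conflicting single-trial cue values (deg)

# Bayes-optimal combination of two conditionally independent Gaussian cues (precision-weighted mean).
bayes = (cue_itd / sigma_itd**2 + cue_ild / sigma_ild**2) / (1 / sigma_itd**2 + 1 / sigma_ild**2)

mult = tuning(cue_itd, sigma_itd) * tuning(cue_ild, sigma_ild)   # multiplicative combination
addv = tuning(cue_itd, sigma_itd) + tuning(cue_ild, sigma_ild)   # additive combination

pv = lambda r: np.sum(r * pref) / np.sum(r)      # population vector readout
print(f"Bayes estimate {bayes:.1f} deg | PV of multiplicative population {pv(mult):.1f} deg | "
      f"PV of additive population {pv(addv):.1f} deg")
```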
Affiliation(s)
- Brian J Fischer
- Department of Mathematics, Seattle University, 901 12th Ave, Seattle, WA, 98122, USA
- Jose Luis Peña
- Department of Neuroscience, Albert Einstein College of Medicine, 1410 Pelham Parkway South, Bronx, NY, 10461, USA
22
Wasmuht DF, Pena JL, Gutfreund Y. Stimulus-specific adaptation to visual but not auditory motion direction in the barn owl's optic tectum. Eur J Neurosci 2016; 45:610-621. [PMID: 27987375 DOI: 10.1111/ejn.13505]
Abstract
Whether the auditory and visual systems use a similar coding strategy to represent motion direction is an open question. We investigated this question in the barn owl's optic tectum (OT) by testing stimulus-specific adaptation (SSA) to the direction of motion. SSA, the reduction of the response to a repetitive stimulus that does not generalize to other stimuli, has been well established in OT neurons. SSA suggests a separate representation of the adapted stimulus in upstream pathways. So far, only SSA to static stimuli has been studied in the OT. Here, we examined adaptation to moving auditory and visual stimuli. SSA to motion direction was examined using repeated presentations of moving stimuli, occasionally switching motion to the opposite direction. Acoustic motion was either mimicked by varying binaural spatial cues or implemented in free field using a speaker array. While OT neurons displayed SSA to motion direction in visual space, neither stimulation paradigm elicited significant SSA to auditory motion direction. These findings show a qualitative difference in how auditory and visual motion is processed in the OT and support the existence of dedicated circuitry for representing motion direction in the early stages of the visual but not the auditory system.
Affiliation(s)
- Dante F Wasmuht
- Department of Neuroscience, The Ruth and Bruce Rappaport Faculty of Medicine and Research Institute, The Technion, Bat-Galim, Haifa, 31096, Israel
- Jose L Pena
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Yoram Gutfreund
- Department of Neuroscience, The Ruth and Bruce Rappaport Faculty of Medicine and Research Institute, The Technion, Bat-Galim, Haifa, 31096, Israel
23
Kettler L, Christensen-Dalsgaard J, Larsen ON, Wagner H. Low frequency eardrum directionality in the barn owl induced by sound transmission through the interaural canal. Biol Cybern 2016; 110:333-343. [PMID: 27209198 DOI: 10.1007/s00422-016-0689-3]
Abstract
The middle ears of birds are typically connected by interaural cavities that form a cranial canal. Eardrums coupled in this manner may function as pressure difference receivers rather than pressure receivers. Hereby, the eardrum vibrations become inherently directional. The barn owl also has a large interaural canal, but its role in barn owl hearing and specifically in sound localization has been controversial so far. We discuss here existing data and the role of the interaural canal in this species and add a new dataset obtained by laser Doppler vibrometry in a free-field setting. Significant sound transmission across the interaural canal occurred at low frequencies. The sound transmission induces considerable eardrum directionality in a narrow band from 1.5 to 3.5 kHz. This is below the frequency range used by the barn owl for locating prey, but may conceivably be used for locating conspecific callers.
Affiliation(s)
- Lutz Kettler
- Department of Biology, Center for Comparative and Evolutionary Biology of Hearing, University of Maryland College Park, College Park, MD, 20742, USA
- Department of Zoology and Animal Physiology, Institute of Biology II, RWTH Aachen, Worringerweg 3, 52074, Aachen, Germany
- Ole Næsbye Larsen
- Department of Biology, University of Southern Denmark, Campusvej 55, 5230, Odense M, Denmark
- Hermann Wagner
- Department of Zoology and Animal Physiology, Institute of Biology II, RWTH Aachen, Worringerweg 3, 52074, Aachen, Germany
24
Cue Reliability Represented in the Shape of Tuning Curves in the Owl's Sound Localization System. J Neurosci 2016; 36:2101-10. [PMID: 26888922 DOI: 10.1523/jneurosci.3753-15.2016]
Abstract
Optimal use of sensory information requires that the brain estimates the reliability of sensory cues, but the neural correlate of cue reliability relevant for behavior is not well defined. Here, we addressed this issue by examining how the reliability of a spatial cue influences neuronal responses and behavior in the owl's auditory system. We show that the firing rate and spatial selectivity changed with cue reliability due to the mechanisms generating the tuning to the sound localization cue. We found that the correlated variability among neurons strongly depended on the shape of the tuning curves. Finally, we demonstrated that the change in the neurons' selectivity was necessary and sufficient for a network of stochastic neurons to predict behavior when sensory cues were corrupted with noise. This study demonstrates that the shape of tuning curves can stand alone as a coding dimension of environmental statistics.
SIGNIFICANCE STATEMENT: In natural environments, sensory cues are often corrupted by noise and are therefore unreliable. To make the best decisions, the brain must estimate the degree to which a cue can be trusted. The behaviorally relevant neural correlates of cue reliability are debated. In this study, we used the barn owl's sound localization system to address this question. We demonstrated that the mechanisms that account for spatial selectivity also explained how neural responses changed with degraded signals. This allowed for the neurons' selectivity to capture cue reliability, influencing the population readout commanding the owl's sound-orienting behavior.
Collapse
|
25
|
A Neural Model of Auditory Space Compatible with Human Perception under Simulated Echoic Conditions. PLoS One 2015; 10:e0137900. [PMID: 26355676 PMCID: PMC4565656 DOI: 10.1371/journal.pone.0137900] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2015] [Accepted: 08/22/2015] [Indexed: 11/19/2022] Open
Abstract
In a typical auditory scene, sounds from different sources and reflective surfaces summate in the ears, causing spatial cues to fluctuate. Prevailing hypotheses of how spatial locations may be encoded and represented across auditory neurons generally disregard these fluctuations and must therefore invoke additional mechanisms for detecting and representing them. Here, we consider a different hypothesis in which spatial perception corresponds to an intermediate or sub-maximal firing probability across spatially selective neurons within each hemisphere. The precedence or Haas effect presents an ideal opportunity for examining this hypothesis, since the temporal superposition of an acoustical reflection with sounds arriving directly from a source can cause otherwise stable cues to fluctuate. Our findings suggest that subjects’ experiences may simply reflect the spatial cues that momentarily arise under various acoustical conditions and how these cues are represented. We further suggest that auditory objects may acquire “edges” under conditions when interaural time differences are broadly distributed.
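The cue fluctuations this hypothesis builds on are easy to make explicit. The sketch below uses an assumed tone frequency, echo gain, and source ITDs (not the study's stimuli): a direct tone and a delayed reflection with a different ITD are summed at each ear, and the interaural phase of the sums gives the momentary effective ITD, which swings widely as the echo delay changes.

```python
import numpy as np

def effective_itd(freq_hz, direct_itd_s, echo_itd_s, echo_delay_s, echo_gain=0.8):
    """Effective ITD of a direct tone plus one reflection, from summed complex amplitudes."""
    w = 2.0 * np.pi * freq_hz
    left = (np.exp(-1j * w * direct_itd_s / 2.0)
            + echo_gain * np.exp(-1j * w * (echo_delay_s + echo_itd_s / 2.0)))
    right = (np.exp(+1j * w * direct_itd_s / 2.0)
             + echo_gain * np.exp(-1j * w * (echo_delay_s - echo_itd_s / 2.0)))
    ipd = np.angle(right * np.conj(left))            # momentary interaural phase difference
    return ipd / w                                   # convert phase back to time

if __name__ == "__main__":
    f = 600.0                                        # Hz, assumed tone frequency
    for delay_ms in (0.0, 0.5, 1.0, 2.0, 4.0):
        itd = effective_itd(f, direct_itd_s=200e-6, echo_itd_s=-200e-6,
                            echo_delay_s=delay_ms * 1e-3)
        print(f"echo delay {delay_ms:3.1f} ms -> effective ITD {itd * 1e6:7.1f} us")
```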
Collapse
|
26
|
Abstract
Capturing nature's statistical structure in behavioral responses is at the core of the ability to function adaptively in the environment. Bayesian statistical inference describes how sensory and prior information can be combined optimally to guide behavior. An outstanding open question is how neural coding supports Bayesian inference, including how sensory cues are optimally integrated over time. Here we address which neural response properties allow a neural system to perform Bayesian prediction, i.e., to predict where a source will be in the near future given sensory information and prior assumptions. We show that a population vector decoder performs Bayesian prediction when the neurons encode the target dynamics through shifting receptive fields. We test the model using the system that underlies sound localization in barn owls. Neurons in the owl's midbrain show shifting receptive fields for moving sources that are consistent with the predictions of the model. We predict that neural populations can be specialized to represent the statistics of dynamic stimuli to allow for a vector readout of Bayes-optimal predictions.
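A bare-bones version of this scheme, with invented tuning widths, latency, and velocities rather than the study's fitted parameters, is sketched below: receptive-field centers are shifted against the direction of motion by velocity times latency, so an ordinary population-vector readout of the current activity approximately points at the position the source is about to occupy.

```python
import numpy as np

rng = np.random.default_rng(1)
pref_az = np.linspace(-60.0, 60.0, 25)               # preferred azimuths (deg), assumed

def rates(source_az, velocity_dps, latency_s=0.05, width=15.0, gain=40.0, baseline=0.5):
    """Tuning to the current azimuth with centers shifted by -velocity * latency,
    so the most active neurons prefer the position the source is about to reach."""
    shifted_centers = pref_az - velocity_dps * latency_s
    return baseline + gain * np.exp(-0.5 * ((source_az - shifted_centers) / width) ** 2)

def population_vector(counts):
    return np.sum(counts * pref_az) / np.sum(counts)

if __name__ == "__main__":
    latency = 0.05                                    # s, assumed encoding/readout lag
    for v in (0.0, 100.0, -100.0):                    # source velocities in deg/s
        reads = [population_vector(rng.poisson(rates(0.0, v, latency)))
                 for _ in range(300)]
        print(f"velocity {v:6.1f} deg/s: readout {np.mean(reads):5.1f} deg, "
              f"position after the lag {v * latency:5.1f} deg")
```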
Collapse
Affiliation(s)
- Weston Cox
- Department of Electrical and Computer Engineering, Seattle University, Seattle, Washington, United States of America
| | - Brian J. Fischer
- Department of Mathematics, Seattle University, Seattle, Washington, United States of America
| |
Collapse
|
27
|
Carr CE, Shah S, McColgan T, Ashida G, Kuokkanen PT, Brill S, Kempter R, Wagner H. Maps of interaural delay in the owl's nucleus laminaris. J Neurophysiol 2015. [PMID: 26224776 DOI: 10.1152/jn.00644.2015] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Axons from the nucleus magnocellularis form a presynaptic map of interaural time differences (ITDs) in the nucleus laminaris (NL). These inputs generate a field potential that varies systematically with recording position and can be used to measure the map of ITDs. In the barn owl, the representation of best ITD shifts with mediolateral position in NL, so as to form continuous, smoothly overlapping maps of ITD with iso-ITD contours that are not parallel to the NL border. Frontal space (0°) is, however, represented throughout and thus overrepresented with respect to the periphery. Measurements of presynaptic conduction delay, combined with a model of delay line conduction velocity, reveal that conduction delays can account for the mediolateral shifts in the map of ITD.
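The logic of that conduction-delay account can be illustrated with a toy calculation; the geometry and conduction velocity below are invented round numbers, not the measured values. Best ITD at a recording site is taken as the difference between contralateral and ipsilateral axonal delays, and an extra contralateral path length that grows with mediolateral position shifts the whole dorsoventral map.

```python
import numpy as np

def best_itd_us(depth_frac, ml_position_mm, nucleus_thickness_mm=1.0,
                velocity_m_per_s=5.0, contra_extra_path_mm_per_mm_ml=0.15):
    """Best ITD (us) at a normalized depth and mediolateral (ML) position."""
    v_mm_per_us = velocity_m_per_s * 1e-3            # mm per microsecond
    # Ipsilateral axons assumed to enter dorsally, contralateral axons ventrally:
    ipsi_delay = depth_frac * nucleus_thickness_mm / v_mm_per_us
    contra_delay = (1.0 - depth_frac) * nucleus_thickness_mm / v_mm_per_us
    # Assumed extra contralateral path length that grows with ML position,
    # shifting the represented ITDs at more lateral recording sites.
    contra_delay += contra_extra_path_mm_per_mm_ml * ml_position_mm / v_mm_per_us
    return contra_delay - ipsi_delay

if __name__ == "__main__":
    for ml in (0.0, 1.0, 2.0):                        # mm from the medial edge
        itds = [best_itd_us(d, ml) for d in (0.0, 0.5, 1.0)]
        print(f"ML {ml:.0f} mm: best ITD dorsal->ventral = "
              + ", ".join(f"{x:6.1f} us" for x in itds))
```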
Collapse
Affiliation(s)
- Catherine E Carr
- Department of Biology, University of Maryland, College Park, Maryland;
| | - Sahil Shah
- Department of Biology, University of Maryland, College Park, Maryland
| | - Thomas McColgan
- Institute for Biology II, RWTH Aachen, Aachen, Germany; and Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, and Bernstein Center for Computational Neuroscience, Berlin, Germany
| | - Go Ashida
- Department of Biology, University of Maryland, College Park, Maryland
| | - Paula T Kuokkanen
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, and Bernstein Center for Computational Neuroscience, Berlin, Germany
| | - Sandra Brill
- Institute for Biology II, RWTH Aachen, Aachen, Germany
| | - Richard Kempter
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, and Bernstein Center for Computational Neuroscience, Berlin, Germany
| | - Hermann Wagner
- Institute for Biology II, RWTH Aachen, Aachen, Germany
| |
Collapse
|
28
|
Palanca-Castan N, Köppl C. In vivo Recordings from Low-Frequency Nucleus Laminaris in the Barn Owl. BRAIN, BEHAVIOR AND EVOLUTION 2015; 85:271-86. [PMID: 26182962 DOI: 10.1159/000433513] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/08/2014] [Accepted: 01/20/2015] [Indexed: 11/19/2022]
Abstract
Localization of sound sources relies on 2 main binaural cues: interaural time differences (ITD) and interaural level differences. ITD computing is first carried out in tonotopically organized areas of the brainstem nucleus laminaris (NL) in birds and the medial superior olive (MSO) in mammals. The specific way in which ITD are derived was long assumed to conform to a delay line model in which arrays of systematically arranged cells create a representation of auditory space, with different cells responding maximally to specific ITD. This model conforms in many details to the particular case of the high-frequency regions (above 3 kHz) in the barn owl NL. However, data from recent studies in mammals are not consistent with a delay line model. A new model has been suggested in which neurons are not topographically arranged with respect to ITD and coding occurs through assessment of the overall response of 2 large neuron populations – 1 in each brainstem hemisphere. Currently available data comprise mainly low-frequency (<1,500 Hz) recordings in the case of mammals and higher-frequency recordings in the case of birds. This makes it impossible to distinguish between group-related adaptations and frequency-related adaptations. Here we report the first comprehensive data set from low-frequency NL in the barn owl and compare it to data from other avian and mammalian studies. Our data are consistent with a delay line model, so differences between ITD processing systems are more likely to have originated through divergent evolution of different vertebrate groups.
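For orientation, a delay-line (Jeffress-type) scheme of the kind these data support can be sketched in a few lines; the stimulus, sample rate, and delay range below are arbitrary choices. An array of coincidence detectors with graded internal delays receives the two ear signals, and the detector whose internal delay cancels the external ITD responds most strongly, yielding a place code for ITD.

```python
import numpy as np

fs = 100_000                                          # Hz, assumed sample rate
t = np.arange(0, 0.1, 1 / fs)

def coincidence_responses(signal, itd_s, internal_delays_s):
    """Delay-and-multiply response of each detector to left/right copies of the signal."""
    shift = int(round(itd_s * fs))
    left, right = signal, np.roll(signal, shift)      # right-ear copy delayed by the ITD
    out = []
    for d in internal_delays_s:
        k = int(round(d * fs))
        out.append(np.mean(left * np.roll(right, -k)))  # internal delay compensates the ITD
    return np.array(out)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    tone = np.sin(2 * np.pi * 500 * t) + 0.2 * rng.standard_normal(t.size)
    delays = np.arange(-200e-6, 201e-6, 20e-6)        # candidate internal delays (s)
    resp = coincidence_responses(tone, itd_s=120e-6, internal_delays_s=delays)
    print(f"best internal delay: {delays[np.argmax(resp)] * 1e6:.0f} us "
          "(matches the 120 us external ITD)")
```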
Collapse
Affiliation(s)
- Nicolas Palanca-Castan
- Cluster of Excellence Hearing4all, Research Center Neurosensory Science and Department of Neuroscience, School of Medicine and Health Sciences, Carl von Ossietzky University, Oldenburg, Germany
| | | |
Collapse
|