1. Buchholz S, Schnupp JWH, Arndt S, Rosskothen-Kuhl N. Interaural level difference sensitivity in neonatally deafened rats fitted with bilateral cochlear implants. Sci Rep 2024;14:30515. PMID: 39681610. DOI: 10.1038/s41598-024-82978-4.
Abstract
Bilateral cochlear implant (CI) patients exhibit significant limitations in spatial hearing. Their ability to process interaural time differences (ITDs) is often impaired, while their ability to process interaural level differences (ILDs) remains comparatively good. Clinical studies aiming to identify the causes of these limitations are often plagued by confounds and ethical limitations. Recent behavioral work suggests that rats may be a good animal model for studying binaural hearing under neuroprosthetic stimulation, as rats develop excellent ITD sensitivity when provided with suitable CI stimulation. However, their ability to use ILDs has not yet been characterized. The objective of this study was to address this knowledge gap. Neonatally deafened rats were bilaterally fitted with CIs and trained to lateralize binaural stimuli according to ILD. Their behavioral ILD thresholds were measured at pulse rates from 50 to 2400 pps. CI rats exhibited high sensitivity to ILDs, with thresholds of a few dB at all tested pulse rates. We conclude that early-deafened rats develop good sensitivity not only to ITDs but also to ILDs if provided with appropriate CI stimulation. Their generally good performance, in line with expectations from other mammalian species, validates rats as an excellent model for research on binaural auditory prostheses.
Affiliation(s)
- Sarah Buchholz
- Neurobiological Research Laboratory, Section for Experimental and Clinical Otology, Department of Otorhinolaryngology, Head and Neck Surgery, Faculty of Medicine, Medical Center - University of Freiburg, Killianst. 5, 79106, Freiburg im Breisgau, Germany
- Jan W H Schnupp
- Department of Neuroscience, City University of Hong Kong, Kowloon Tong, Hong Kong SAR
- Gerald Choa Neuroscience Institute, Chinese University of Hong Kong, Sha Tin, Hong Kong SAR
- Department of Otolaryngology, Chinese University of Hong Kong, Sha Tin, Hong Kong SAR
- School of Biomedical Sciences, Chinese University of Hong Kong, Sha Tin, Hong Kong SAR
- Susan Arndt
- Neurobiological Research Laboratory, Section for Experimental and Clinical Otology, Department of Otorhinolaryngology, Head and Neck Surgery, Faculty of Medicine, Medical Center - University of Freiburg, Killianst. 5, 79106, Freiburg im Breisgau, Germany
- Nicole Rosskothen-Kuhl
- Neurobiological Research Laboratory, Section for Experimental and Clinical Otology, Department of Otorhinolaryngology, Head and Neck Surgery, Faculty of Medicine, Medical Center - University of Freiburg, Killianst. 5, 79106, Freiburg im Breisgau, Germany
- Bernstein Center Freiburg and Faculty of Biology, University of Freiburg, Freiburg, Germany
2. Bae A, Peña JL. Barn owls specialized sound-driven behavior: Lessons in optimal processing and coding by the auditory system. Hear Res 2024;443:108952. PMID: 38242019. DOI: 10.1016/j.heares.2024.108952.
Abstract
The barn owl, a nocturnal raptor with remarkably efficient prey-capturing abilities, has been one of the foundational animal models for research on the brain mechanisms underlying sound localization. Seminal findings from its specialized sound-localizing auditory system include the discovery of a midbrain map of auditory space, mechanisms of spatial cue detection underlying sound-driven orienting behavior, and circuit-level changes supporting development and experience-dependent plasticity. These findings have explained properties of vital hearing functions and inspired theories in spatial hearing that extend across diverse animal species, thereby cementing the barn owl's legacy as a powerful experimental system for elucidating fundamental brain mechanisms. This concise review provides an overview of the insights through which the barn owl model system has exemplified the strength of investigating the diversity and similarity of brain mechanisms across species. First, we discuss some of the key findings in the specialized system of the barn owl that elucidated brain mechanisms for the detection of auditory cues for spatial hearing. Then we examine how the barn owl has validated mathematical computations and theories underlying optimal hearing across species. Lastly, we conclude with how the barn owl has advanced investigations of developmental and experience-dependent plasticity in sound localization, as well as avenues for future research toward bridging commonalities across species. Analogous to the informative power of astrophysics for understanding nature through the exploration of planets, stars, and galaxies across the universe, research across different animal species pursues a broad understanding of natural brain mechanisms and behavior.
Affiliation(s)
- Andrea Bae
- Albert Einstein College of Medicine, NY, USA
- Jose L Peña
- Albert Einstein College of Medicine, NY, USA
3. Merchant H, de Lafuente V. A Second Introduction to the Neurobiology of Interval Timing. Adv Exp Med Biol 2024;1455:3-23. PMID: 38918343. DOI: 10.1007/978-3-031-60183-5_1.
Abstract
Time is a critical variable that organisms must be able to measure in order to survive in a constantly changing environment. This paper first describes the myriad contexts in which time is estimated or predicted and suggests that timing is not a single process but probably depends on a set of different neural mechanisms. Consistent with this hypothesis, the explosion of neurophysiological and imaging studies in the last 10 years suggests that different brain circuits and neural mechanisms are involved in the ability to tell and use time to control behavior across contexts. We then develop a conceptual framework that defines time as a family of different phenomena and propose a taxonomy with sensory, perceptual, motor, and sensorimotor timing as the pillars of temporal processing in the range of hundreds of milliseconds.
Affiliation(s)
- Hugo Merchant
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Querétaro, Mexico
- Victor de Lafuente
- Institute of Neurobiology, National Autonomous University of Mexico, Querétaro, Mexico
4. Ordás CM, Alonso-Frech F. The neural basis of somatosensory temporal discrimination threshold as a paradigm for time processing in the sub-second range: An updated review. Neurosci Biobehav Rev 2024;156:105486. PMID: 38040074. DOI: 10.1016/j.neubiorev.2023.105486.
Abstract
BACKGROUND AND OBJECTIVE: The temporal aspect of somesthesia is a feature of any somatosensory process and a prerequisite for the elaboration of proper behavior. Time processing in the milliseconds range is crucial for most behaviors of everyday life. The somatosensory temporal discrimination threshold (STDT) is the ability to perceive two successive stimuli as separate in time, and deals with time processing in this temporal range. Herein, we focus on the physiology of the STDT, against a background of the anatomophysiology of somesthesia and the neurobiological substrates of timing. METHODS: A review of the literature through the PubMed and Cochrane databases up to March 2023 was performed, with inclusion and exclusion criteria following PRISMA recommendations. RESULTS: 1151 abstracts were identified. Four duplicate records were discarded before screening. 957 abstracts were excluded because of redundancy, less relevant content, or because they were not written in English. Four articles were added after revision. Eventually, 194 articles were included. CONCLUSIONS: STDT encoding relies on intracortical inhibitory S1 function and is modulated by the basal ganglia-thalamic-cortical interplay through circuits involving the nigrostriatal dopaminergic pathway and probably the superior colliculus.
Affiliation(s)
- Carlos M Ordás
- Universidad Rey Juan Carlos, Móstoles, Madrid, Spain; Department of Neurology, Hospital Rey Juan Carlos, Móstoles, Madrid, Spain
- Fernando Alonso-Frech
- Department of Neurology, Hospital Clínico San Carlos, Universidad Complutense de Madrid, Spain
5. Lapshin DN, Vorontsov DD. Mapping the Auditory Space of Culex pipiens Female Mosquitoes in 3D. Insects 2023;14:743. PMID: 37754711. PMCID: PMC10532353. DOI: 10.3390/insects14090743.
Abstract
The task of directional hearing confronts most animals that possess ears. They approach this task in different ways, but a common trait is the use of binaural cues to find the direction to the source of sound. In insects, the task is further complicated by their small size and, hence, minute temporal and level differences between the two ears. A single symmetric flagellar particle velocity receiver, such as the antenna of a mosquito, should not be able to discriminate between the two opposite directions along the vector of the sound wave. The paired antennae of mosquitoes suggest the use of binaural hearing, but its mechanisms are expected to be significantly different from those typical of pressure receivers. However, the directionality of flagellar auditory organs has received little attention. Here, we measured the in-flight orientation of antennae in female Culex pipiens pipiens mosquitoes and obtained a detailed physiological mapping of the directionality of the Johnston's organ at the level of individual sensory units. By combining these data, we created a three-dimensional model of the mosquito's auditory space. The orientation of the antennae was found to be coordinated with the neuronal asymmetry of the Johnston's organs to maintain a uniformly shaped auditory space, symmetric relative to a flying mosquito. The overlap of the directional characteristics of the left and right sensory units was found to be optimal for binaural hearing focused primarily in front of, above, and below a flying mosquito.
Affiliation(s)
- Dmitry N. Lapshin
- Institute for Information Transmission Problems of the Russian Academy of Sciences, Bolshoy Karetny per. 19, 127994 Moscow, Russia
- Dmitry D. Vorontsov
- Koltzov Institute of Developmental Biology, Russian Academy of Sciences, Vavilova 26, 119334 Moscow, Russia
6. Hancock KE, Delgutte B. Neural coding of dichotic pitches in auditory midbrain. J Neurophysiol 2023;129:872-893. PMID: 36921210. PMCID: PMC10085564. DOI: 10.1152/jn.00511.2022.
Abstract
Dichotic pitches such as the Huggins pitch (HP) and the binaural edge pitch (BEP) are perceptual illusions whereby binaural noise that exhibits abrupt changes in interaural phase differences (IPDs) across frequency creates a tonelike pitch percept when presented to both ears, even though it does not produce a pitch when presented monaurally. At the perceptual and cortical levels, dichotic pitches behave as if an actual tone had been presented to the ears, yet investigations of neural correlates of dichotic pitch in single-unit responses at subcortical levels are lacking. We tested for cues to HP and BEP in the responses of binaural neurons in the auditory midbrain of anesthetized cats by varying the expected pitch frequency around each neuron's best frequency (BF). Neuronal firing rates showed specific features (peaks, troughs, or edges) when the pitch frequency crossed the BF, and the type of feature was consistent with a well-established model of binaural processing comprising frequency tuning, internal delays, and firing rates sensitive to interaural correlation. A Jeffress-like neural population model in which the behavior of individual neurons was governed by the cross-correlation model and the neurons were independently distributed along BF and best IPD predicted trends in human psychophysical HP detection but only when the model incorporated physiological BF and best IPD distributions. These results demonstrate the existence of a rate-place code for HP and BEP in the auditory midbrain and provide a firm physiological basis for models of dichotic pitches. NEW & NOTEWORTHY: Dichotic pitches are perceptual illusions created centrally through binaural interactions that offer an opportunity to test theories of pitch and binaural hearing. Here we show that binaural neurons in auditory midbrain encode the frequency of two salient types of dichotic pitches via specific features in the pattern of firing rates along the tonotopic axis. This is the first combined single-unit and modeling study of responses of auditory neurons to stimuli evoking a dichotic pitch.
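For readers unfamiliar with the cross-correlation ("Jeffress-like") model invoked above, the following minimal sketch, which is not the authors' code and uses arbitrary illustrative parameters, computes a model coincidence detector's firing rate as an expansive function of the interaural correlation of narrowband noise evaluated at the neuron's internal delay; the rate peaks when the stimulus ITD matches that delay:

import numpy as np

def narrowband_noise(n, fs, bf, bw, rng):
    # Gaussian noise band-pass filtered around best frequency bf (Hz) by FFT masking.
    x = rng.standard_normal(n)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spectrum = np.fft.rfft(x)
    spectrum[(freqs < bf - bw / 2) | (freqs > bf + bw / 2)] = 0.0
    return np.fft.irfft(spectrum, n)

def rate_vs_itd(bf=500.0, best_itd_us=200.0, fs=100_000, dur=0.5, seed=0):
    # Firing rate of a model coincidence detector as a function of stimulus ITD.
    rng = np.random.default_rng(seed)
    n = int(dur * fs)
    left = narrowband_noise(n, fs, bf, bw=100.0, rng=rng)
    itds_us = np.arange(-1000, 1001, 50)
    rates = []
    for itd in itds_us:
        # Residual delay after the neuron's internal delay compensates best_itd_us.
        shift = int(round((itd - best_itd_us) * 1e-6 * fs))
        right = np.roll(left, shift)
        rho = np.corrcoef(left, right)[0, 1]       # interaural correlation at internal delay
        rates.append(50.0 * (1.0 + rho) ** 2)      # expansive correlation-to-rate mapping
    return itds_us, np.array(rates)

if __name__ == "__main__":
    itds, rates = rate_vs_itd()
    print("rate peaks near", itds[rates.argmax()], "microseconds ITD")

Sweeping interaural phase transitions across frequency, as in the dichotic pitch stimuli of the study, would add a frequency-tuning stage in front of this correlation-to-rate step; the sketch only captures that final stage.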
Affiliation(s)
- Kenneth E Hancock
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts, United States
- Department of Otolaryngology, Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, United States
- Bertrand Delgutte
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts, United States
- Department of Otolaryngology, Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, United States
7. Thévenet J, Papet L, Campos Z, Greenfield M, Boyer N, Grimault N, Mathevon N. Spatial release from masking in crocodilians. Commun Biol 2022;5:869. PMID: 36008592. PMCID: PMC9411511. DOI: 10.1038/s42003-022-03799-7.
Abstract
Ambient noise is a major constraint on acoustic communication in both animals and humans. One mechanism to overcome this problem is Spatial Release from Masking (SRM), the ability to distinguish a target sound signal from masking noise when both sources are spatially separated. SRM is well described in humans but has been poorly explored in animals. Although laboratory tests with trained individuals have suggested that SRM may be a widespread ability in vertebrates, it may play a limited role in natural environments. Here we combine field experiments with investigations in captivity to test whether crocodilians experience SRM. We show that two species of crocodilians are able to use SRM in their natural habitat and that it quickly becomes effective for small angles between the target signal source and the noise source, becoming maximal when the angle exceeds 15°. Crocodiles can therefore take advantage of SRM to improve sound scene analysis and the detection of biologically relevant signals. The ability to separate target sound signals from masking noise is identified in wild and captive crocodilian species.
Affiliation(s)
- Julie Thévenet
- Equipe de Neuro-Ethologie Sensorielle ENES / CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France; Equipe Cognition Auditive et Psychoacoustique / CRNL, CNRS, Inserm, University Lyon 1, Bron, France
- Léo Papet
- Equipe de Neuro-Ethologie Sensorielle ENES / CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France; Equipe Cognition Auditive et Psychoacoustique / CRNL, CNRS, Inserm, University Lyon 1, Bron, France
- Zilca Campos
- Wildlife Laboratory, Brazilian Agricultural Research Corporation EMBRAPA, Corumbá, Brazil
- Michael Greenfield
- Equipe de Neuro-Ethologie Sensorielle ENES / CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France; Department of Ecology and Evolutionary Biology, University of Kansas, Lawrence, KS, 66045, USA
- Nicolas Boyer
- Equipe de Neuro-Ethologie Sensorielle ENES / CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France
- Nicolas Grimault
- Equipe Cognition Auditive et Psychoacoustique / CRNL, CNRS, Inserm, University Lyon 1, Bron, France
- Nicolas Mathevon
- Equipe de Neuro-Ethologie Sensorielle ENES / CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France
8. Maldarelli G, Firzlaff U, Kettler L, Ondracek JM, Luksch H. Two Types of Auditory Spatial Receptive Fields in Different Parts of the Chicken's Midbrain. J Neurosci 2022;42:4669-4680. PMID: 35508384. PMCID: PMC9186802. DOI: 10.1523/jneurosci.2204-21.2022.
Abstract
The optic tectum (OT) is an avian midbrain structure involved in the integration of visual and auditory stimuli. Studies in the barn owl, an auditory specialist, have shown that spatial auditory information is topographically represented in the OT. Little is known about how auditory space is represented in the midbrain of birds with generalist hearing, i.e., most avian species, which lack peripheral adaptations such as facial ruffs or asymmetric ears. Thus, we conducted in vivo extracellular recordings of single neurons in the OT and in the external portion of the formatio reticularis lateralis (FRLx), a brain structure located between the inferior colliculus (IC) and the OT, in anaesthetized chickens of either sex. We found that most of the auditory spatial receptive fields (aSRFs) were spatially confined both in azimuth and elevation, divided into two main classes: round aSRFs, mainly present in the OT, and annular aSRFs, with a ring-like shape around the interaural axis, mainly present in the FRLx. Our data further indicate that interaural time difference (ITD) and interaural level difference (ILD) play a role in the formation of both aSRF classes. These results suggest that, unlike mammals and owls, which have a congruent representation of visual and auditory space in the OT, generalist birds separate the computation of auditory space into two different midbrain structures. We hypothesize that the FRLx annular aSRFs define the distance of a sound source from the axis of the lateral visual fovea, whereas the OT round aSRFs are involved in multimodal integration of the stimulus around the lateral fovea. SIGNIFICANCE STATEMENT: Previous studies implied that auditory spatial receptive fields (aSRFs) in the midbrain of generalist birds are confined only along azimuth. Interestingly, we found aSRFs in the chicken to be confined along both azimuth and elevation. Moreover, the auditory receptive fields are arranged in a concentric manner around the overlapping interaural and visual axes. These data suggest that in generalist birds, which mainly rely on vision, the auditory system mainly serves to align auditory stimuli with the visual axis, while auditory-specialized birds like the barn owl compute sound sources more precisely and integrate sound positions into the multimodal space map of the optic tectum (OT).
Affiliation(s)
- Gianmarco Maldarelli
- Chair of Zoology, School of Life Sciences, Technical University of Munich, Freising-Weihenstephan 85354, Germany
- Uwe Firzlaff
- Chair of Zoology, School of Life Sciences, Technical University of Munich, Freising-Weihenstephan 85354, Germany
- Lutz Kettler
- Chair of Zoology, School of Life Sciences, Technical University of Munich, Freising-Weihenstephan 85354, Germany
- Janie M Ondracek
- Chair of Zoology, School of Life Sciences, Technical University of Munich, Freising-Weihenstephan 85354, Germany
- Harald Luksch
- Chair of Zoology, School of Life Sciences, Technical University of Munich, Freising-Weihenstephan 85354, Germany
9. Brain-Wide Synaptic Inputs to Aromatase-Expressing Neurons in the Medial Amygdala Suggest Complex Circuitry for Modulating Social Behavior. eNeuro 2022;9:ENEURO.0329-21.2021. PMID: 35074828. PMCID: PMC8925724. DOI: 10.1523/eneuro.0329-21.2021.
Abstract
Here, using rabies tracing and light sheet microscopy, we reveal an unbiased view of the brain regions that provide specific inputs to aromatase-expressing cells in the medial amygdala, neurons that play an outsized role in the production of sex-specific social behaviors. While the downstream projections from these cells are known, the specific inputs to the aromatase-expressing cells in the medial amygdala remained unknown. We observed established connections to the medial amygdala (e.g., bed nucleus of the stria terminalis and accessory olfactory bulb), indicating that aromatase neurons are a major target cell type for efferent input, including from regions associated with parenting and aggression. We also identified novel and unexpected inputs from areas involved in metabolism, fear and anxiety, and memory and cognition. These results confirm the central role of the medial amygdala in sex-specific social recognition and social behavior, and point to an expanded role for its aromatase-expressing neurons in the integration of multiple sensory and homeostatic factors, which are likely used to modulate many other social behaviors.
10. Cao AS, Van Hooser SD. Paired Feed-Forward Excitation With Delayed Inhibition Allows High Frequency Computations Across Brain Regions. Front Neural Circuits 2022;15:803065. PMID: 35210993. PMCID: PMC8862685. DOI: 10.3389/fncir.2021.803065.
Abstract
The transmission of high frequency temporal information across brain regions is critical to perception, but the mechanisms underlying such transmission remain unclear. Long-range projection patterns across brain areas are often composed of paired feed-forward excitation followed closely by delayed inhibition, including the thalamic triad synapse, thalamic projections to cortex, and projections within the hippocampus. Previous studies have shown that these joint projections produce a shortened period of depolarization, sharpening the timing window over which the postsynaptic neuron can fire. Here we show that these projections can facilitate the transmission of high frequency computations even at frequencies that are highly filtered by neuronal membranes. This temporal facilitation occurred over a range of synaptic parameter values, including variations in synaptic strength, synaptic time constants, short-term synaptic depression, and the delay between excitation and inhibition. Further, these projections can coordinate computations across multiple network levels, even amid ongoing local activity. We suggest that paired feed-forward excitation and inhibition provide a hybrid signal, carrying both a value and a clock-like trigger, to allow circuits to be responsive to input whenever it arrives.
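The following toy simulation, with invented time constants, amplitudes, and delay rather than values from the paper, illustrates the mechanism summarized above: an alpha-function excitatory input followed a few milliseconds later by an inhibitory one narrows the window during which a passive leaky membrane remains strongly depolarized.

import numpy as np

def alpha_kernel(t, onset, tau, amp):
    # Alpha-function synaptic drive starting at `onset` (ms); peaks tau ms after onset.
    s = np.clip(t - onset, 0.0, None)
    return amp * (s / tau) * np.exp(1.0 - s / tau)

def depolarization_window(with_inhibition, inhib_delay_ms=2.0):
    # Width (ms) of the period the membrane spends above half of its own peak depolarization.
    dt, tau_m = 0.05, 10.0                        # time step and membrane time constant (ms)
    t = np.arange(0.0, 60.0, dt)
    drive = alpha_kernel(t, onset=5.0, tau=1.0, amp=1.0)
    if with_inhibition:
        drive = drive - alpha_kernel(t, onset=5.0 + inhib_delay_ms, tau=2.0, amp=0.8)
    v = np.zeros_like(t)
    for k in range(1, t.size):                    # forward-Euler leaky integrator
        v[k] = v[k - 1] + dt * (-v[k - 1] / tau_m + drive[k - 1])
    return float((v > 0.5 * v.max()).sum() * dt)

if __name__ == "__main__":
    print("window without inhibition:", round(depolarization_window(False), 2), "ms")
    print("window with inhibition   :", round(depolarization_window(True), 2), "ms")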
Affiliation(s)
- Alexandra S. Cao
- Department of Biology, Brandeis University, Waltham, MA, United States
- Volen Center for Complex Systems, Brandeis University, Waltham, MA, United States
- Stephen D. Van Hooser
- Department of Biology, Brandeis University, Waltham, MA, United States
- Volen Center for Complex Systems, Brandeis University, Waltham, MA, United States
- Sloan-Swartz Center for Theoretical Neurobiology, Brandeis University, Waltham, MA, United States
11. Deep neural network models of sound localization reveal how perception is adapted to real-world environments. Nat Hum Behav 2022;6:111-133. PMID: 35087192. PMCID: PMC8830739. DOI: 10.1038/s41562-021-01244-z.
Abstract
Mammals localize sounds using information from their two ears. Localization in real-world conditions is challenging, as echoes provide erroneous information, and noises mask parts of target sounds. To better understand real-world localization we equipped a deep neural network with human ears and trained it to localize sounds in a virtual environment. The resulting model localized accurately in realistic conditions with noise and reverberation. In simulated experiments, the model exhibited many features of human spatial hearing: sensitivity to monaural spectral cues and interaural time and level differences, integration across frequency, biases for sound onsets, and limits on localization of concurrent sources. But when trained in unnatural environments without either reverberation, noise, or natural sounds, these performance characteristics deviated from those of humans. The results show how biological hearing is adapted to the challenges of real-world environments and illustrate how artificial neural networks can reveal the real-world constraints that shape perception.
12. Clemens J, Schöneich S, Kostarakos K, Hennig RM, Hedwig B. A small, computationally flexible network produces the phenotypic diversity of song recognition in crickets. eLife 2021;10:e61475. PMID: 34761750. PMCID: PMC8635984. DOI: 10.7554/elife.61475.
Abstract
How neural networks evolved to generate the diversity of species-specific communication signals is unknown. For receivers of the signals, one hypothesis is that novel recognition phenotypes arise from parameter variation in computationally flexible feature detection networks. We test this hypothesis in crickets, where males generate and females recognize the mating songs with a species-specific pulse pattern, by investigating whether the song recognition network in the cricket brain has the computational flexibility to recognize different temporal features. Using electrophysiological recordings from the network that recognizes crucial properties of the pulse pattern on the short timescale in the cricket Gryllus bimaculatus, we built a computational model that reproduces the neuronal and behavioral tuning of that species. An analysis of the model's parameter space reveals that the network can provide all recognition phenotypes for pulse duration and pause known in crickets and even other insects. Phenotypic diversity in the model is consistent with known preference types in crickets and other insects, and arises from computations that likely evolved to increase energy efficiency and robustness of pattern recognition. The model's parameter-to-phenotype mapping is degenerate (different network parameters can create similar changes in the phenotype), which likely supports evolutionary plasticity. Our study suggests that computationally flexible networks underlie the diverse pattern recognition phenotypes, and we reveal network properties that constrain and support behavioral diversity.
Affiliation(s)
- Jan Clemens
- European Neuroscience Institute Göttingen – A Joint Initiative of the University Medical Center Göttingen and the Max-Planck Society, Göttingen, Germany
- BCCN Göttingen, Göttingen, Germany
- Stefan Schöneich
- University of Cambridge, Department of Zoology, Cambridge, United Kingdom
- Friedrich-Schiller-University Jena, Institute for Zoology and Evolutionary Research, Jena, Germany
- Konstantinos Kostarakos
- University of Cambridge, Department of Zoology, Cambridge, United Kingdom
- Institute of Biology, University of Graz, Universitätsplatz, Austria
- R Matthias Hennig
- Humboldt-Universität zu Berlin, Department of Biology, Philippstrasse, Germany
- Berthold Hedwig
- University of Cambridge, Department of Zoology, Cambridge, United Kingdom
13. Alzaher M, Vannson N, Deguine O, Marx M, Barone P, Strelnikov K. Brain plasticity and hearing disorders. Rev Neurol (Paris) 2021;177:1121-1132. PMID: 34657730. DOI: 10.1016/j.neurol.2021.09.004.
Abstract
Permanently changed sensory stimulation can modify functional connectivity patterns in the healthy brain and in pathology. In pathology, these adaptive modifications of the brain are referred to as compensation, and the resulting configurations of functional connectivity are called compensatory plasticity. The variability and extent of auditory deficits due to impairments in the hearing system determine the related brain reorganization and rehabilitation. In this review, we consider cross-modal and intra-modal brain plasticity related to bilateral and unilateral hearing loss and their restoration using cochlear implantation. Cross-modal brain plasticity may have both beneficial and detrimental effects on hearing disorders. It has a beneficial effect when it serves to improve a patient's adaptation to the visuo-auditory environment. However, the occupation of the auditory cortex by visual functions may be a negative factor for the restoration of hearing with cochlear implants. Regarding intra-modal plasticity, the loss of interhemispheric asymmetry in asymmetric hearing loss is deleterious for auditory spatial localization. Research on brain plasticity in hearing disorders can advance our understanding of brain plasticity and improve patient rehabilitation, using prognostic, evidence-based approaches from cognitive neuroscience combined with objective post-rehabilitation neuroimaging biomarkers of this plasticity.
Affiliation(s)
- M Alzaher
- Université de Toulouse, UPS, centre de recherche cerveau et cognition, Toulouse, France; CNRS, CerCo, France
- N Vannson
- Université de Toulouse, UPS, centre de recherche cerveau et cognition, Toulouse, France; CNRS, CerCo, France
- O Deguine
- Université de Toulouse, UPS, centre de recherche cerveau et cognition, Toulouse, France; CNRS, CerCo, France; Faculté de médecine de Purpan, CHU Toulouse, université de Toulouse 3, France
- M Marx
- Université de Toulouse, UPS, centre de recherche cerveau et cognition, Toulouse, France; CNRS, CerCo, France; Faculté de médecine de Purpan, CHU Toulouse, université de Toulouse 3, France
- P Barone
- Université de Toulouse, UPS, centre de recherche cerveau et cognition, Toulouse, France; CNRS, CerCo, France
- K Strelnikov
- Faculté de médecine de Purpan, CHU Toulouse, université de Toulouse 3, France
14. Liang X, Zhang X. Signal amplification enhanced by large phase disorder in coupled bistable units. Phys Rev E 2021;104:034204. PMID: 34654153. DOI: 10.1103/physreve.104.034204.
Abstract
We study the maximum response of network-coupled bistable units to subthreshold signals, focusing on the effect of phase disorder. We find that for signals with large levels of phase disorder, the network exhibits an enhanced response at intermediate coupling strength, while generating a damped response for low levels of phase disorder. We observe that the large-phase-disorder-enhanced response depends mainly on the signal intensity but not on the signal frequency or the network topology. We show that a zero average activity of the units caused by large phase disorder plays a key role in the enhancement of the maximum response. With a detailed analysis, we demonstrate that large phase disorder can suppress the synchronization of the units, leading to the observed resonance-like response. Finally, we examine the robustness of this phenomenon to the unit bistability, the initial phase distribution, and various signal waveforms. Our result demonstrates a potential benefit of phase disorder for signal amplification in complex systems.
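The sketch below shows the class of model studied here, globally coupled overdamped bistable units driven by a subthreshold periodic signal with per-unit phase disorder; the parameter values are arbitrary and it is not intended to reproduce the paper's quantitative results.

import numpy as np

def mean_field_response(phase_spread, coupling=0.5, n_units=100, amp=0.25,
                        freq=0.1, t_max=400.0, dt=0.01, seed=0):
    # Amplitude of the mean-field oscillation of coupled overdamped bistable units
    # x' = x - x^3 + K*(mean(x) - x) + A*sin(w*t + phi_i), with phase-disordered forcing.
    rng = np.random.default_rng(seed)
    phases = rng.uniform(-phase_spread, phase_spread, n_units)   # phase disorder
    x = rng.choice([-1.0, 1.0], n_units)                         # start in either well
    omega = 2.0 * np.pi * freq
    times = np.arange(0.0, t_max, dt)
    mean_x = np.empty_like(times)
    for i, t in enumerate(times):
        drive = amp * np.sin(omega * t + phases)                 # subthreshold signal
        x = x + dt * (x - x**3 + coupling * (x.mean() - x) + drive)
        mean_x[i] = x.mean()
    tail = mean_x[times > t_max / 2]                             # discard transient
    return 0.5 * (tail.max() - tail.min())

if __name__ == "__main__":
    for spread in (0.0, np.pi / 2, np.pi):
        print(f"phase spread {spread:.2f} rad -> mean-field response {mean_field_response(spread):.3f}")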
Affiliation(s)
- Xiaoming Liang
- School of Physics and Electronic Engineering, Jiangsu Normal University, Xuzhou 221116, China
- Xiyun Zhang
- Department of Physics, Jinan University, Guangzhou, Guangdong 510632, China
15. Wang C, Wang Z, Xie B, Shi X, Yang P, Liu L, Qu T, Qin Q, Xing Y, Zhu W, Teipel SJ, Jia J, Zhao G, Li L, Tang Y. Binaural processing deficit and cognitive impairment in Alzheimer's disease. Alzheimers Dement 2021;18:1085-1099. PMID: 34569690. DOI: 10.1002/alz.12464.
Abstract
Speech comprehension in noisy environments depends on central auditory functions, which are vulnerable in Alzheimer's disease (AD). Binaural processing exploits two ear sounds to optimally process degraded sound information; its characteristics are poorly understood in AD. We studied behavioral and electrophysiological alterations in binaural processing among 121 participants (AD = 27; amnestic mild cognitive impairment [aMCI] = 33; subjective cognitive decline [SCD] = 30; cognitively normal [CN] = 31). We observed impairment of binaural processing in AD and aMCI, and detected a U-shaped curve change in phase synchrony (declining from CN to SCD and to aMCI, but increasing from aMCI to AD). This improvement in phase synchrony accompanying more severe cognitive stages could reflect neural adaptation for binaural processing. Moreover, increased phase synchrony is associated with worse memory during the stages when neural adaptation apparently occurs. These findings support a hypothesis that neural adaptation for binaural processing deficit may exacerbate cognitive impairment, which could help identify biomarkers and therapeutic targets in AD.
Affiliation(s)
- Changming Wang
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Zhibin Wang
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Beijia Xie
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Xinrui Shi
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Pengcheng Yang
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China; Speech and Hearing Research Center, Peking University, Beijing, China
- Lei Liu
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China; Speech and Hearing Research Center, Peking University, Beijing, China
- Tianshu Qu
- Speech and Hearing Research Center, Peking University, Beijing, China; Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, China
- Qi Qin
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Yi Xing
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China; Key Laboratory of Neurodegenerative Diseases, Ministry of Education of the People's Republic of China, Beijing, China
- Wei Zhu
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Stefan J Teipel
- Department of Psychosomatic Medicine, University Medicine Rostock, Rostock, Germany; DZNE, German Center for Neurodegenerative Diseases, Rostock, Germany
- Jianping Jia
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China; Key Laboratory of Neurodegenerative Diseases, Ministry of Education of the People's Republic of China, Beijing, China; Center of Alzheimer's Disease, Beijing Institute for Brain Disorders, Beijing, China; Beijing Key Laboratory of Geriatric Cognitive Disorders, Beijing, China; National Clinical Research Center for Geriatric Disorders, Beijing, China
- Guoguang Zhao
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Liang Li
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China; Speech and Hearing Research Center, Peking University, Beijing, China; Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, China; Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China; Beijing Institute for Brain Disorders, Beijing, China
- Yi Tang
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China; Key Laboratory of Neurodegenerative Diseases, Ministry of Education of the People's Republic of China, Beijing, China
16. Warren B, Nowotny M. Bridging the Gap Between Mammal and Insect Ears – A Comparative and Evolutionary View of Sound-Reception. Front Ecol Evol 2021. DOI: 10.3389/fevo.2021.667218.
Abstract
Insects must wonder why mammals have ears only in their head and why they evolved only one common principle of ear design—the cochlea. Ears independently evolved at least 19 times in different insect groups and therefore can be found in completely different body parts. The morphologies and functional characteristics of insect ears are as wildly diverse as the ecological niches they exploit. In both insects and mammals, hearing organs are constrained by the same biophysical principles, and their respective molecular processes for mechanotransduction are thought to share a common evolutionary origin. Due to this, comparative knowledge of hearing across animal phyla provides crucial insight into fundamental processes of auditory transduction, especially at the biomechanical and molecular level. This review will start by comparing hearing between insects and mammals in an evolutionary context. It will then discuss how current findings about sound reception help to bridge the gap between the two research fields.
17. Sensitivity to interaural time differences in the inferior colliculus of cochlear implanted rats with or without hearing experience. Hear Res 2021;408:108305. PMID: 34315027. DOI: 10.1016/j.heares.2021.108305.
Abstract
For deaf patients, cochlear implants (CIs) can restore substantial amounts of functional hearing. However, binaural hearing, and in particular the perception of interaural time differences (ITDs), with current CIs has been found to be notoriously poor, especially in the event of early hearing loss. One popular hypothesis for these deficits posits that a lack of early binaural experience may be a principal cause of poor ITD perception in pre-lingually deaf CI patients. This is supported by previous electrophysiological studies done in neonatally deafened, bilateral CI-stimulated animals showing reduced ITD sensitivity. However, we have recently demonstrated that neonatally deafened CI rats can quickly learn to discriminate microsecond ITDs under optimized stimulation conditions, which suggests that the inability of human CI users to make use of ITDs is not due to a lack of binaural hearing experience during development. In the study presented here, we characterized ITD sensitivity and tuning of inferior colliculus neurons under bilateral CI stimulation of neonatally deafened and hearing-experienced rats. The hearing-experienced rats were not deafened prior to implantation. Both cohorts were implanted bilaterally between postnatal days 64 and 77 and recorded immediately following surgery. Both groups showed comparably large proportions of ITD-sensitive multi-units in the inferior colliculus (deaf: 84.8%, hearing: 82.5%), and the strength of ITD tuning, quantified as mutual information between response and stimulus ITD, was independent of hearing experience. However, the shapes of tuning curves differed substantially between the groups. We observed four main clusters of tuning curves: trough, contralateral, central, and ipsilateral tuning. Interestingly, over 90% of multi-units for hearing-experienced rats showed predominantly contralateral tuning, whereas as many as 50% of multi-units in neonatally deafened rats were centrally tuned. However, when we computed neural d' scores to predict likely limits on performance in sound lateralization tasks, we did not find that these differences in tuning shapes predicted worse psychoacoustic performance for the neonatally deafened animals. We conclude that, at least in rats, substantial amounts of highly precise, "innate" ITD sensitivity can be found even after profound hearing loss throughout infancy. However, ITD tuning curve shapes appear to be strongly influenced by auditory experience, although substantial lateralization encoding is present even in its absence.
18. Pavão R, Sussman ES, Fischer BJ, Peña JL. Natural ITD statistics predict human auditory spatial perception. eLife 2020;9:e51927. PMID: 33043884. PMCID: PMC7661036. DOI: 10.7554/elife.51927.
Abstract
A neural code adapted to the statistical structure of sensory cues may optimize perception. We investigated whether interaural time difference (ITD) statistics inherent in natural acoustic scenes are parameters determining spatial discriminability. The natural ITD rate of change across azimuth (ITDrc) and ITD variability over time (ITDv) were combined in a Fisher information statistic to assess the amount of azimuthal information conveyed by this sensory cue. We hypothesized that natural ITD statistics underlie the neural code for ITD and thus influence spatial perception. To test this hypothesis, sounds with invariant statistics were presented to measure human spatial discriminability and spatial novelty detection. Human auditory spatial perception showed correlation with natural ITD statistics, supporting our hypothesis. Further analysis showed that these results are consistent with classic models of ITD coding and can explain the ITD tuning distribution observed in the mammalian brainstem.
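As a worked illustration of the Fisher information statistic described above, with invented numbers rather than the authors' measurements, the azimuthal information carried by ITD combines the ITD rate of change across azimuth (ITDrc) with ITD variability over time (ITDv), and the best achievable discrimination threshold scales as the inverse square root of that information:

import numpy as np

# Hypothetical example values; real ITDrc and ITDv depend on frequency and azimuth
# and are estimated from natural acoustic scenes.
azimuths_deg = np.array([0, 30, 60, 80])
itd_rc = np.array([9.0, 7.0, 4.0, 2.0])     # dITD/dazimuth (microseconds per degree)
itd_v = np.array([20.0, 25.0, 35.0, 50.0])  # ITD standard deviation over time (microseconds)

# Fisher information about azimuth conveyed by ITD: FI = (dITD/dtheta)^2 / var(ITD).
fisher_info = itd_rc**2 / itd_v**2

# By the Cramer-Rao bound, the discrimination threshold scales as 1/sqrt(FI),
# so thresholds grow toward the periphery as ITDrc falls and ITDv rises.
thresholds = 1.0 / np.sqrt(fisher_info)
for az, thr in zip(azimuths_deg, thresholds):
    print(f"azimuth {az:2d} deg -> relative discrimination threshold {thr:.1f}")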
Affiliation(s)
- Rodrigo Pavão
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, New York, United States
- Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, Santo André, Brazil
- Elyse S Sussman
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, New York, United States
- Brian J Fischer
- Department of Mathematics, Seattle University, Seattle, United States
- José L Peña
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, New York, United States
19. Choudhury N, Linley D, Richardson A, Anderson M, Robinson SW, Marra V, Ciampani V, Walter SM, Kopp-Scheinpflug C, Steinert JR, Forsythe ID. Kv3.1 and Kv3.3 subunits differentially contribute to Kv3 channels and action potential repolarization in principal neurons of the auditory brainstem. J Physiol 2020;598:2199-2222. DOI: 10.1113/jp279668.
Affiliation(s)
- Nasreen Choudhury
- Auditory Neurophysiology Laboratory, Department of Neuroscience, Psychology & Behaviour, College of Life Sciences, University of Leicester, Leicester LE1 7RH, UK
- Deborah Linley
- Auditory Neurophysiology Laboratory, Department of Neuroscience, Psychology & Behaviour, College of Life Sciences, University of Leicester, Leicester LE1 7RH, UK
- Amy Richardson
- Auditory Neurophysiology Laboratory, Department of Neuroscience, Psychology & Behaviour, College of Life Sciences, University of Leicester, Leicester LE1 7RH, UK
- Michelle Anderson
- Auditory Neurophysiology Laboratory, Department of Neuroscience, Psychology & Behaviour, College of Life Sciences, University of Leicester, Leicester LE1 7RH, UK
- Susan W. Robinson
- Neurotoxicity at the Synaptic Interface, MRC Toxicology Unit, University of Leicester, UK
- Vincenzo Marra
- Department of Neuroscience, Psychology & Behaviour, College of Life Sciences, University of Leicester, Leicester LE1 7RH, UK
- Victoria Ciampani
- Auditory Neurophysiology Laboratory, Department of Neuroscience, Psychology & Behaviour, College of Life Sciences, University of Leicester, Leicester LE1 7RH, UK
- Sophie M. Walter
- Auditory Neurophysiology Laboratory, Department of Neuroscience, Psychology & Behaviour, College of Life Sciences, University of Leicester, Leicester LE1 7RH, UK
- Conny Kopp-Scheinpflug
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-University Munich, Großhaderner Strasse 2, D-82152 Planegg-Martinsried, Germany
- Joern R. Steinert
- Auditory Neurophysiology Laboratory, Department of Neuroscience, Psychology & Behaviour, College of Life Sciences, University of Leicester, Leicester LE1 7RH, UK
- Ian D. Forsythe
- Auditory Neurophysiology Laboratory, Department of Neuroscience, Psychology & Behaviour, College of Life Sciences, University of Leicester, Leicester LE1 7RH, UK
20. Mozaffari M, Jiang D, Tucker AS. Developmental aspects of the tympanic membrane: Shedding light on function and disease. Genesis 2019;58:e23348. PMID: 31763764. PMCID: PMC7154630. DOI: 10.1002/dvg.23348.
Abstract
The ear drum, or tympanic membrane (TM), is a key component in the intricate relay that transmits air-borne sound to our fluid-filled inner ear. Despite early belief that the mammalian ear drum evolved as a transformation of a reptilian drum, newer fossil data suggest a parallel and independent evolution of this structure in mammals. The term "drum" belies what is in fact a complex three-dimensional structure formed from multiple embryonic cell lineages. Intriguingly, disease affects the ear drum differently in its different parts, with the superior and posterior parts being much more frequently affected. This suggests a key role for the developmental details of TM formation in its final form and function, both in homeostasis and regeneration. Here we review recent studies in rodent models and humans that are beginning to address large knowledge gaps in TM cell dynamics from a developmental biologist's point of view. We outline the biological and clinical uncertainties that remain, with a view to guiding the indispensable contribution that developmental biology will be able to make to a better understanding of the TM.
Affiliation(s)
- Mona Mozaffari
- Centre for Craniofacial and Regenerative Biology, King's College London, Guy's Hospital, London, UK
- Dan Jiang
- Centre for Craniofacial and Regenerative Biology, King's College London, Guy's Hospital, London, UK; ENT Department, Guy's Hospital, London, UK
- Abigail S Tucker
- Centre for Craniofacial and Regenerative Biology, King's College London, Guy's Hospital, London, UK
21. Gao R, Ye J, Xin X. Directional acoustic signal measurement based on the asymmetrical temperature distribution of the parallel microfiber array. Opt Express 2019;27:34113-34125. PMID: 31878467. DOI: 10.1364/oe.27.034113.
Abstract
A parallel microfiber array for the measurement of directional acoustic signals is proposed and experimentally demonstrated. Two microfiber Bragg gratings (micro-FBGs) in single-mode fibers were placed on two sides of a Co2+-doped microfiber, forming an array of three parallel microfibers. The micro-FBGs can measure the temperature difference between the two sides of the Co2+-doped microfiber through interrogation of the matched FBGs. Due to the asymmetrical temperature distribution of the Co2+-doped microfiber under the applied acoustic signal, sound source localization can be realized through the acoustic particle velocity. The experimental results show that an acoustic particle velocity sensitivity of 44.2 V/(m/s) and a direction sensitivity of 0.83 mV/deg can be achieved at a frequency of 1000 Hz, and the sound source localization has been realized through the orthogonal direction responses of two crossed Co2+-doped microfibers. The results demonstrate that the parallel microfiber array has the ability to recognize orientation, offering potential for directional acoustic signal detection with miniature size.
22. Synthesis of Hemispheric ITD Tuning from the Readout of a Neural Map: Commonalities of Proposed Coding Schemes in Birds and Mammals. J Neurosci 2019;39:9053-9061. PMID: 31570537. DOI: 10.1523/jneurosci.0873-19.2019.
Abstract
A major cue to infer sound direction is the difference in arrival time of the sound at the left and right ears, called interaural time difference (ITD). The neural coding of ITD and its similarity across species have been strongly debated. In the barn owl, an auditory specialist relying on sound localization to capture prey, ITDs within the physiological range determined by the head width are topographically represented at each frequency. The topographic representation suggests that sound direction may be inferred from the location of maximal neural activity within the map. Such topographical representation of ITD, however, is not evident in mammals. Instead, the preferred ITD of neurons in the mammalian brainstem often lies outside the physiological range and depends on the neuron's best frequency. Because of these disparities, it has been assumed that how spatial hearing is achieved in birds and mammals is fundamentally different. However, recent studies reveal ITD responses in the owl's forebrain and midbrain premotor area that are consistent with coding schemes proposed in mammals. Particularly, sound location in owls could be decoded from the relative firing rates of two broadly and inversely ITD-tuned channels. This evidence suggests that, at downstream stages, the code for ITD may not be qualitatively different across species. Thus, while experimental evidence continues to support the notion of differences in ITD representation across species and brain regions, the latest results indicate notable commonalities, suggesting that codes driving orienting behavior in mammals and birds may be comparable.
23. Kettler L, Carr CE. Neural Maps of Interaural Time Difference in the American Alligator: A Stable Feature in Modern Archosaurs. J Neurosci 2019;39:3882-3896. PMID: 30886018. PMCID: PMC6520516. DOI: 10.1523/jneurosci.2989-18.2019.
Abstract
Detection of interaural time differences (ITDs) is crucial for sound localization in most vertebrates. The current view is that optimal computational strategies of ITD detection depend mainly on head size and available frequencies, although evolutionary history should also be taken into consideration. In archosaurs, which include birds and crocodiles, the brainstem nucleus laminaris (NL) developed into the critical structure for ITD detection. In birds, ITDs are mapped in an orderly array or place code, whereas in the mammalian medial superior olive, the analog of NL, maps are not found. As yet, in crocodilians, topographical representations have not been identified. However, nontopographic representations of ITD cannot be excluded due to different anatomical and ethological features of birds and crocodiles. Therefore, we measured ITD-dependent responses in the NL of anesthetized American alligators of either sex and identified the location of the recording sites by lesions made after recording. The measured extracellular field potentials, or neurophonics, were strongly ITD tuned, and their preferred ITDs correlated with the position in NL. As in birds, delay lines, which compensate for external time differences, formed maps of ITD. The broad distributions of best ITDs within narrow frequency bands were not consistent with an optimal coding model. We conclude that the available acoustic cues and the architecture of the acoustic system in early archosaurs led to a stable and similar organization in today's birds and crocodiles, although physical features, such as internally coupled ears, head size or shape, and audible frequency range, vary between the two groups. SIGNIFICANCE STATEMENT: Interaural time difference (ITD) is an important cue for sound localization, and the optimal strategies for encoding ITD in neuronal populations are the subject of ongoing debate. We show that alligators form maps of ITD very similar to birds, suggesting that their common archosaur ancestor reached a stable coding solution different from mammals. Mammals and diapsids evolved tympanic hearing independently, and local optima can be reached in evolution that are not considered by global optimal coding models. Thus, the presence of ITD maps in the brainstem may reflect a local optimum in evolutionary development. Our results underline the importance of comparative animal studies and show that optimal models must be viewed in the light of evolutionary processes.
Collapse
Affiliation(s)
- Lutz Kettler
- Lehrstuhl für Zoologie, Technische Universität München, 85354 Freising, Germany, and
| | - Catherine E Carr
- Department of Biology, University of Maryland, College Park, Maryland 20742
| |
Collapse
|
24
|
Nieder A. Evolution of cognitive and neural solutions enabling numerosity judgements: lessons from primates and corvids. Philos Trans R Soc Lond B Biol Sci 2018; 373:rstb.2016.0514. [PMID: 29292361 DOI: 10.1098/rstb.2016.0514] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/29/2017] [Indexed: 01/29/2023] Open
Abstract
Brains that are capable of representing numerosity, the number of items in a set, have arisen repeatedly and independently in different animal taxa. This review compares the cognitive and physiological mechanisms found in a nonhuman primate, the rhesus macaque, and a corvid songbird, the carrion crow, in order to elucidate the evolutionary adaptations underlying numerical competence. Monkeys and corvids are known for their advanced cognitive competence, despite both having endbrains that evolved independently and distinctly over a long history of parallel evolution. In both species, numerosity is represented as an analogue magnitude by an approximate number system that obeys the Weber-Fechner Law. In addition, the activity of numerosity-selective neurons in the fronto-parietal association cortex of monkeys and the telencephalic associative area nidopallium caudolaterale of crows mirrors the animals' performance. In both species' brains, neuronal activity is tuned to a preferred numerosity, encodes the numerical value in an approximate fashion, and is best represented on a logarithmic scale. Collectively, the data show an impressive correspondence of the cognitive and neuronal mechanisms for numerosity representations across monkeys and crows. This suggests that remotely related vertebrates with distinctly developed endbrains adopted similar physiological solutions to common computational problems in numerosity processing. This article is part of a discussion meeting issue 'The origins of numerical abilities'.
Collapse
Affiliation(s)
- Andreas Nieder
- Animal Physiology Unit, Institute of Neurobiology, University of Tübingen, Auf der Morgenstelle 28, 72076 Tübingen, Germany
| |
Collapse
|
25
|
Emergence of an Adaptive Command for Orienting Behavior in Premotor Brainstem Neurons of Barn Owls. J Neurosci 2018; 38:7270-7279. [PMID: 30012694 DOI: 10.1523/jneurosci.0947-18.2018] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2018] [Revised: 06/28/2018] [Accepted: 07/04/2018] [Indexed: 11/21/2022] Open
Abstract
The midbrain map of auditory space commands sound-orienting responses in barn owls. Owls precisely localize sounds in frontal space but underestimate the direction of peripheral sound sources. This bias for central locations was proposed to be adaptive to the decreased reliability in the periphery of sensory cues used for sound localization by the owl. Understanding the neural pathway supporting this biased behavior provides a means to address how adaptive motor commands are implemented by neurons. Here we find that the sensory input for sound direction is weighted by its reliability in premotor neurons of the midbrain tegmentum of owls (male and female), such that the mean population firing rate approximates the head-orienting behavior. We provide evidence that this coding may emerge through convergence of upstream projections from the midbrain map of auditory space. We further show that manipulating the sensory input yields changes predicted by the convergent network in both premotor neural responses and behavior. This work demonstrates how a topographic sensory representation can be linearly read out to adjust behavioral responses by the reliability of the sensory input. SIGNIFICANCE STATEMENT This research shows how statistics of the sensory input can be integrated into a behavioral command by readout of a sensory representation. The firing rate of midbrain premotor neurons receiving sensory information from a topographic representation of auditory space is weighted by the reliability of sensory cues. We show that these premotor responses are consistent with a weighted convergence from the topographic sensory representation. This convergence was also tested behaviorally, where manipulation of stimulus properties led to bidirectional changes in sound localization errors. Thus a topographic representation of auditory space is translated into a premotor command for sound localization that is modulated by sensory reliability.
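A hedged caricature of this reliability-weighted readout (the map spacing, tuning width, and the fall-off of cue reliability toward the periphery are all assumed values, not measurements from the study) shows how down-weighting less reliable peripheral inputs biases the population estimate toward frontal space, reproducing the underestimation of peripheral sources:

```python
import numpy as np

preferred_deg = np.arange(-90, 91, 5, dtype=float)        # topographic map of azimuth
reliability = np.exp(-(preferred_deg / 60.0) ** 2)        # assumed: cues less reliable peripherally

def map_activity(source_deg, width_deg=20.0):
    """Gaussian activity profile on the space map for a source at source_deg."""
    return np.exp(-0.5 * ((preferred_deg - source_deg) / width_deg) ** 2)

def premotor_readout(source_deg):
    """Reliability-weighted average of the preferred directions of active map sites."""
    w = map_activity(source_deg) * reliability
    return float(np.sum(w * preferred_deg) / np.sum(w))

for src in (0, 30, 60, 80):
    print(f"source at {src:>2} deg -> readout {premotor_readout(src):5.1f} deg")
```

Frontal sources are read out nearly veridically, while the estimate for peripheral sources is pulled toward the center, as in the behavioral bias described above.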
Collapse
|
26
|
Affiliation(s)
| | - Jose L Peña
- University of Maryland
- Albert Einstein College of Medicine
| |
Collapse
|
27
|
Cardinal EA, Radford CA, Mensinger AF. The potential for the anterior lateral line to function for sound localization in toadfish (Opsanus tau). J Exp Biol 2018; 221:jeb.180679. [DOI: 10.1242/jeb.180679] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2018] [Accepted: 09/21/2018] [Indexed: 01/04/2023]
Abstract
Male oyster toadfish (Opsanus tau) acoustically attract females to nesting sites using a boatwhistle call. The rapid speed of sound underwater combined with the close proximity of the otolithic organs makes inner ear interaural time differences an unlikely mechanism to localize sound. To determine the role that the mechanosensory lateral line may play in sound localization, microwire electrodes were bilaterally implanted into the anterior lateral line nerve to record neural responses to vibrational stimuli. Highest spike rates and strongest phase-locking occurred at distances close to the fish and decreased as the stimulus was moved further from the fish. Bilateral anterior lateral line neuromasts displayed differential directional sensitivity to incoming vibrational stimuli, which suggests the potential for the lateral line to be used for sound localization in the near field. The present study also demonstrates that the spatially separated neuromasts of the toadfish may provide sufficient time delays between sensory organs for determining sound localization cues. Multimodal sensory input processing through both the inner ear (far field) and lateral line (near field) may allow for effective sound localization in fish.
Collapse
Affiliation(s)
- Emily A. Cardinal
- Marine Biological Laboratory, Woods Hole, MA 02543, USA
- Biology Department, University of Minnesota Duluth, Duluth, MN 55812, USA
| | - Craig A. Radford
- Marine Biological Laboratory, Woods Hole, MA 02543, USA
- Leigh Marine Laboratory, Institute of Marine Science, University of Auckland, Warkworth 0941, New Zealand
| | - Allen F. Mensinger
- Marine Biological Laboratory, Woods Hole, MA 02543, USA
- Biology Department, University of Minnesota Duluth, Duluth, MN 55812, USA
| |
Collapse
|
28
|
Itatani N, Klump GM. Interaction of spatial and non-spatial cues in auditory stream segregation in the European starling. Eur J Neurosci 2017; 51:1191-1200. [PMID: 28922512 DOI: 10.1111/ejn.13716] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2017] [Revised: 09/14/2017] [Accepted: 09/14/2017] [Indexed: 11/29/2022]
Abstract
Integrating sounds from the same source and segregating sounds from different sources in an acoustic scene are essential functions of the auditory system. Naturally, the auditory system simultaneously makes use of multiple cues. Here, we investigate the interaction between spatial cues and frequency cues in stream segregation of European starlings (Sturnus vulgaris) using an objective measure of perception. Neural responses to streaming sounds were recorded while the bird was performing a behavioural task that results in a higher sensitivity during a one-stream than a two-stream percept. Birds were trained to detect an onset time shift of a B tone in an ABA- triplet sequence in which A and B could differ in frequency and/or spatial location. If the frequency difference, the spatial separation between the signal sources, or both were increased, the behavioural time shift detection performance deteriorated. Spatial separation had a smaller effect on performance than the frequency difference, and both cues affected performance additively. Neural responses in the primary auditory forebrain were affected by the frequency and spatial cues. However, frequency and spatial cue differences large enough to elicit behavioural effects did not produce correlated neural response differences. The difference between the neuronal response pattern and the behavioural response is discussed in relation to the task given to the bird. Perceptual effects of combining different cues in auditory scene analysis indicate that these cues are analysed independently and given different weights, suggesting that the streaming percept arises subsequent to initial cue analysis.
Collapse
Affiliation(s)
- Naoya Itatani
- Animal Physiology and Behavior Group, Department for Neuroscience, School for Medicine and Health Sciences, Carl-von-Ossietzky University Oldenburg, 26111, Oldenburg, Germany
- Cluster of Excellence Hearing4all, Carl-von-Ossietzky University Oldenburg, Oldenburg, Germany
| | - Georg M Klump
- Animal Physiology and Behavior Group, Department for Neuroscience, School for Medicine and Health Sciences, Carl-von-Ossietzky University Oldenburg, 26111, Oldenburg, Germany
- Cluster of Excellence Hearing4all, Carl-von-Ossietzky University Oldenburg, Oldenburg, Germany
| |
Collapse
|
29
|
Distinct Correlation Structure Supporting a Rate-Code for Sound Localization in the Owl's Auditory Forebrain. eNeuro 2017; 4:eN-NWR-0144-17. [PMID: 28674698 PMCID: PMC5492684 DOI: 10.1523/eneuro.0144-17.2017] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2017] [Revised: 05/31/2017] [Accepted: 06/07/2017] [Indexed: 11/21/2022] Open
Abstract
While a topographic map of auditory space exists in the vertebrate midbrain, it is absent in the forebrain. Yet, both brain regions are implicated in sound localization. The heterogeneous spatial tuning of adjacent sites in the forebrain compared to the midbrain reflects different underlying circuitries, which is expected to affect the correlation structure, i.e., signal (similarity of tuning) and noise (trial-by-trial variability) correlations. Recent studies have drawn attention to the impact of response correlations on the information readout from a neural population. We thus analyzed the correlation structure in midbrain and forebrain regions of the barn owl’s auditory system. Tetrodes were used to record in the midbrain and two forebrain regions, Field L and the downstream auditory arcopallium (AAr), in anesthetized owls. Nearby neurons in the midbrain showed high signal and noise correlations (RNCs), consistent with shared inputs. As previously reported, Field L was arranged in random clusters of similarly tuned neurons. Interestingly, AAr neurons displayed homogeneous monotonic azimuth tuning, while response variability of nearby neurons was significantly less correlated than the midbrain. Using a decoding approach, we demonstrate that low RNC in AAr restricts the potentially detrimental effect it can have on information, assuming a rate code proposed for mammalian sound localization. This study harnesses the power of correlation structure analysis to investigate the coding of auditory space. Our findings demonstrate distinct correlation structures in the auditory midbrain and forebrain, which would be beneficial for a rate-code framework for sound localization in the nontopographic forebrain representation of auditory space.
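Why the low noise correlation found in AAr matters for a rate-code readout can be shown with a small simulation (neuron count, tuning difference, noise level, and correlation values are assumed for illustration; this is not an analysis of the recorded data): averaging across similarly tuned neurons removes independent noise but cannot remove the shared component, so the discriminability of two locations from the pooled rate falls as noise correlation rises.

```python
import numpy as np

rng = np.random.default_rng(0)

def pooled_dprime(noise_corr, n_neurons=10, delta=1.0, sd=2.0, n_trials=20000):
    """Discriminability of two source locations from the across-neuron mean rate."""
    cov = sd ** 2 * ((1.0 - noise_corr) * np.eye(n_neurons)
                     + noise_corr * np.ones((n_neurons, n_neurons)))
    resp_a = rng.multivariate_normal(np.zeros(n_neurons), cov, n_trials).mean(axis=1)
    resp_b = rng.multivariate_normal(np.full(n_neurons, delta), cov, n_trials).mean(axis=1)
    return (resp_b.mean() - resp_a.mean()) / np.sqrt(0.5 * (resp_a.var() + resp_b.var()))

for rho in (0.0, 0.2, 0.4):
    print(f"noise correlation {rho:.1f}: pooled d' = {pooled_dprime(rho):.2f}")
```

Under these assumptions, raising the shared-noise fraction from 0 to 0.4 roughly halves d', illustrating why weak noise correlations are beneficial for the rate-code framework proposed above.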
Collapse
|
30
|
Lee N, Mason AC. How spatial release from masking may fail to function in a highly directional auditory system. eLife 2017; 6. [PMID: 28425912 PMCID: PMC5443663 DOI: 10.7554/elife.20731] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2016] [Accepted: 04/19/2017] [Indexed: 11/13/2022] Open
Abstract
Spatial release from masking (SRM) occurs when spatial separation between a signal and masker decreases masked thresholds. The mechanically-coupled ears of Ormia ochracea are specialized for hyperacute directional hearing, but the possible role of SRM, or whether such specializations exhibit limitations for sound source segregation, is unknown. We recorded phonotaxis to a cricket song masked by band-limited noise. With a masker, response thresholds increased and localization was diverted away from the signal and masker. Increased separation from 6° to 90° did not decrease response thresholds or improve localization accuracy, thus SRM does not operate in this range of spatial separations. Tympanal vibrations and auditory nerve responses reveal that localization errors were consistent with changes in peripheral coding of signal location and flies localized towards the ear with better signal detection. Our results demonstrate that, in a mechanically coupled auditory system, specialization for directional hearing does not contribute to source segregation.
Collapse
Affiliation(s)
- Norman Lee
- Department of Biological Sciences, Integrative Behaviour and Neuroscience Group, University of Toronto Scarborough, Toronto, Canada
| | - Andrew C Mason
- Department of Biological Sciences, Integrative Behaviour and Neuroscience Group, University of Toronto Scarborough, Toronto, Canada
| |
Collapse
|
31
|
Huguet G, Meng X, Rinzel J. Phasic Firing and Coincidence Detection by Subthreshold Negative Feedback: Divisive or Subtractive or, Better, Both. Front Comput Neurosci 2017; 11:3. [PMID: 28210218 PMCID: PMC5288357 DOI: 10.3389/fncom.2017.00003] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2016] [Accepted: 01/16/2017] [Indexed: 11/26/2022] Open
Abstract
Phasic neurons typically fire only for a fast-rising input, say at the onset of a step current, but not for steady or slow inputs, a property associated with type III excitability. Phasic neurons can show extraordinary temporal precision for phase locking and coincidence detection. Exemplars are found in the auditory brain stem where precise timing is used in sound localization. Phasicness at the cellular level arises from a dynamic, voltage-gated, negative feedback that can be recruited subthreshold, preventing the neuron from reaching spike threshold if the voltage does not rise fast enough. We consider two mechanisms for phasicness: a low threshold potassium current (subtractive mechanism) and a sodium current with subthreshold inactivation (divisive mechanism). We develop and analyze three reduced models with either divisive or subtractive mechanisms or both to gain insight into the dynamical mechanisms for the potentially high temporal precision of type III-excitable neurons. We compare their firing properties and performance for a range of stimuli. The models have characteristic non-monotonic input-output relations, firing rate vs. input intensity, for either stochastic current injection or Poisson-timed excitatory synaptic conductance trains. We assess performance according to precision of phase-locking and coincidence detection by the models' responses to repetitive packets of unitary excitatory synaptic inputs with more or less temporal coherence. We find that each mechanism contributes features but best performance is attained if both are present. The subtractive mechanism confers extraordinary precision for phase locking and coincidence detection but only within a restricted parameter range when the divisive mechanism of sodium inactivation is inoperative. The divisive mechanism guarantees robustness of phasic properties, without compromising excitability, although with somewhat less precision. Finally, we demonstrate that brief transient inhibition if properly timed can enhance the reliability of firing.
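The onset selectivity discussed above can be sketched with a toy linear model (this is not one of the paper's three reduced models; parameters and the simple threshold rule are assumptions): a fast variable V is opposed by a slower subthreshold negative feedback w, standing in for a low-threshold potassium current, so V crosses threshold only when the input rises faster than w can follow.

```python
def first_crossing(I_of_t, dt=0.01, T=60.0, tau_m=1.0, tau_w=5.0, g_w=2.0, theta=0.6):
    """Time (ms) at which V first crosses threshold, or None if it never does."""
    V = w = 0.0
    for step in range(int(T / dt)):
        t = step * dt
        I = I_of_t(t)
        V += dt * (-V + I - g_w * w) / tau_m   # fast membrane variable
        w += dt * (V - w) / tau_w              # slower subthreshold negative feedback
        if V > theta:
            return t
    return None

print("step onset:", first_crossing(lambda t: 1.0 if t > 5.0 else 0.0))  # crosses shortly after 5 ms
print("slow ramp :", first_crossing(lambda t: min(t / 50.0, 1.0)))       # never crosses -> None
```

A divisive mechanism (scaling the effective input by a sodium-inactivation-like variable) could be added in the same spirit; per the abstract, the subtractive mechanism supplies the temporal precision and the divisive one the robustness.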
Collapse
Affiliation(s)
- Gemma Huguet
- Departament de Matemàtiques, Universitat Politècnica de Catalunya, Barcelona, Spain
| | - Xiangying Meng
- Biology Department, University of Maryland College Park, MD, USA
| | - John Rinzel
- Center for Neural Science, New York University, New York, NY, USA
- Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
| |
Collapse
|
32
|
Bee MA, Christensen-Dalsgaard J. Sound source localization and segregation with internally coupled ears: the treefrog model. BIOLOGICAL CYBERNETICS 2016; 110:271-290. [PMID: 27730384 PMCID: PMC5107320 DOI: 10.1007/s00422-016-0695-5] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/17/2015] [Accepted: 09/12/2016] [Indexed: 05/22/2023]
Abstract
Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists.
Collapse
Affiliation(s)
- Mark A Bee
- Department of Ecology, Evolution, and Behavior, Graduate Program in Neuroscience, University of Minnesota, 140 Gortner Laboratories, 1479 Gortner Avenue, St. Paul, MN, 55108, USA.
| | | |
Collapse
|
33
|
Vedurmudi AP, Young BA, van Hemmen JL. Internally coupled ears: mathematical structures and mechanisms underlying ICE. BIOLOGICAL CYBERNETICS 2016; 110:359-382. [PMID: 27778100 DOI: 10.1007/s00422-016-0696-4] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/06/2016] [Accepted: 09/13/2016] [Indexed: 05/22/2023]
Abstract
In internally coupled ears (ICE), the displacement of one eardrum creates pressure waves that propagate through air-filled passages in the skull, causing a displacement of the opposing eardrum and vice versa. In this review, a thorough mathematical analysis of the membranes, passages, and propagating pressure waves reveals how internally coupled ears generate unique amplitude and temporal cues for sound localization. The magnitudes of both of these cues are directionally dependent. On the basis of the geometry of the interaural cavity and the elastic properties of the two eardrums confining it at both ends, the present paper reviews the mathematical theory underlying hearing through ICE and derives analytical expressions for eardrum vibrations as well as the pressures inside the internal passages, which ultimately lead to the emergence of highly directional hearing cues. The derived expressions enable one to explicitly see the influence of different parts of the system, e.g., the interaural cavity and the eardrum, on the internal coupling, and the frequency dependence of the coupling. The tympanic fundamental frequency segregates a low-frequency regime with constant time-difference magnification (time dilation factor) from a high-frequency domain with considerable amplitude magnification. By exploiting the physical properties of the coupling, we describe a concrete method to numerically estimate the eardrum's fundamental frequency and damping solely through measurements taken from a live animal.
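A crude frequency-domain sketch can illustrate how the internal coupling turns two equal-amplitude external inputs into direction-dependent internal cues (the coupling strength, interaural propagation delay, and stimulus values below are invented for illustration and are not the review's derived expressions): each eardrum is driven by the difference between the direct external pressure and a delayed, attenuated contribution coupled from the other side, which yields internal time-difference magnification at low frequencies and internal level-difference magnification at high frequencies.

```python
import numpy as np

def internal_cues(freq_hz, external_itd_s, coupling=0.6, interaural_delay_s=60e-6):
    """Internal level (dB) and time (s) differences for equal-amplitude external inputs."""
    w = 2.0 * np.pi * freq_hz
    p_left, p_right = 1.0, np.exp(-1j * w * external_itd_s)   # external pressures at the two ears
    h = coupling * np.exp(-1j * w * interaural_delay_s)       # assumed interaural transfer
    v_left = p_left - h * p_right                              # pressure-difference drive, left drum
    v_right = p_right - h * p_left
    iild_db = 20.0 * np.log10(np.abs(v_left) / np.abs(v_right))
    iitd_s = np.angle(v_left / v_right) / w
    return iild_db, iitd_s

for f in (500, 2000, 5000):
    ild, itd = internal_cues(f, external_itd_s=50e-6)
    print(f"{f:>4} Hz: internal ILD {ild:+5.1f} dB, internal ITD {itd * 1e6:+6.0f} us (external: 0 dB, 50 us)")
```

In the full models reviewed above, the cross-over between the time-dilation regime and the amplitude-magnification regime sits near the tympanic fundamental frequency; here it is set implicitly by the assumed coupling parameters.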
Collapse
Affiliation(s)
- Anupam P Vedurmudi
- Physik Department T35 and BCCN-Munich, Technische Universität München, 85747, Garching bei München, Germany
| | - Bruce A Young
- Kirksville College of Osteopathic Medicine, A.T. Still University, Kirksville, MO, 63501, USA
| | - J Leo van Hemmen
- Physik Department T35 and BCCN-Munich, Technische Universität München, 85747, Garching bei München, Germany.
| |
Collapse
|
34
|
Niederleitner B, Gutierrez-Ibanez C, Krabichler Q, Weigel S, Luksch H. A novel relay nucleus between the inferior colliculus and the optic tectum in the chicken (Gallus gallus). J Comp Neurol 2016; 525:513-534. [PMID: 27434677 DOI: 10.1002/cne.24082] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2016] [Revised: 07/13/2016] [Accepted: 07/17/2016] [Indexed: 11/08/2022]
Abstract
Processing multimodal sensory information is vital for behaving animals in many contexts. The barn owl, an auditory specialist, is a classic model for studying multisensory integration. In the barn owl, spatial auditory information is conveyed to the optic tectum (TeO) by a direct projection from the external nucleus of the inferior colliculus (ICX). In contrast, evidence of an integration of visual and auditory information in auditory generalist avian species is completely lacking. In particular, it is not known whether in auditory generalist species the ICX projects to the TeO at all. Here we use various retrograde and anterograde tracing techniques both in vivo and in vitro, intracellular fillings of neurons in vitro, and whole-cell patch recordings to characterize the connectivity between ICX and TeO in the chicken. We found that there is a direct projection from ICX to the TeO in the chicken, although this is small and only to the deeper layers (layers 13-15) of the TeO. However, we found a relay area interposed among the IC, the TeO, and the isthmic complex that receives strong synaptic input from the ICX and projects broadly upon the intermediate and deep layers of the TeO. This area is an external portion of the formatio reticularis lateralis (FRLx). In addition to the projection to the TeO, cells in FRLx send, via collaterals, descending projections through tectopontine-tectoreticular pathways. This newly described connection from the inferior colliculus to the TeO provides a solid basis for visual-auditory integration in an auditory generalist bird. J. Comp. Neurol. 525:513-534, 2017. © 2016 Wiley Periodicals, Inc.
Collapse
Affiliation(s)
- Bertram Niederleitner
- Lehrstuhl für Zoologie, Technische Universität München, 85354, Freising-Weihenstephan, Germany
| | | | - Quirin Krabichler
- Lehrstuhl für Zoologie, Technische Universität München, 85354, Freising-Weihenstephan, Germany
| | - Stefan Weigel
- Lehrstuhl für Zoologie, Technische Universität München, 85354, Freising-Weihenstephan, Germany
| | - Harald Luksch
- Lehrstuhl für Zoologie, Technische Universität München, 85354, Freising-Weihenstephan, Germany
| |
Collapse
|
35
|
Mensinger AF. Multimodal Sensory Input in the Utricle and Lateral Line of the Toadfish, Opsanus tau. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2016; 877:271-89. [PMID: 26515319 DOI: 10.1007/978-3-319-21059-9_13] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/11/2023]
Abstract
The utricular otolith and the mechanosensory lateral line of the toadfish, Opsanus tau, were investigated for sensitivity to multimodal sensory input by recording neural activity from free-swimming fish. The utricle was sensitive to horizontal body movement and displayed broad sensitivity to low-frequency (80-200 Hz) sound. The lateral line was sensitive to water currents, swimming, prey movements, and sound, with maximal sensitivity at 100 Hz. Both systems showed directional sensitivity to pure tones and toadfish vocalizations, indicating potential for sound localization. Thus, toadfish possess two hair-cell-based sensory systems that integrate information from disparate sources. However, swimming movements or predation strikes can saturate each system, and it is unclear what effect self-generated movement has on sensitivity. It is hypothesized that the toadfish's strategy of short-distance swim movements allows it to sample the acoustical environment while static. Further study is needed to determine how the two systems are integrated and whether they are able to segregate and/or integrate multimodal sensory input.
Collapse
Affiliation(s)
- Allen F Mensinger
- Biology Department, University of Minnesota Duluth, Duluth, MN, 55812, USA.
- Marine Biological Laboratory, Woods Hole, MA, 02543, USA.
| |
Collapse
|
36
|
Cortical Transformation of Spatial Processing for Solving the Cocktail Party Problem: A Computational Model(1,2,3). eNeuro 2016; 3:eN-NWR-0086-15. [PMID: 26866056 PMCID: PMC4745179 DOI: 10.1523/eneuro.0086-15.2015] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2015] [Revised: 12/16/2015] [Accepted: 12/18/2015] [Indexed: 12/04/2022] Open
Abstract
In multisource, “cocktail party” sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one source of sound over competing sources. While mechanisms to extract spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is reused and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. This network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers based on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network along with testable experimental paradigms as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. This network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices in solving the cocktail party problem.
Collapse
|
37
|
Abstract
Robust representations of sounds with a complex spectrotemporal structure are thought to emerge in hierarchically organized auditory cortex, but the computational advantage of this hierarchy remains unknown. Here, we used computational models to study how such hierarchical structures affect temporal binding in neural networks. We equipped individual units in different types of feedforward networks with local memory mechanisms storing recent inputs and observed how this affected the ability of the networks to process stimuli context dependently. Our findings illustrate that these local memories stack up in hierarchical structures and hence allow network units to exhibit selectivity to spectral sequences longer than the time spans of the local memories. We also illustrate that short-term synaptic plasticity is a potential local memory mechanism within the auditory cortex, and we show that it can bring robustness to context dependence against variation in the temporal rate of stimuli, while introducing nonlinearities to response profiles that are not well captured by standard linear spectrotemporal receptive field models. The results therefore indicate that short-term synaptic plasticity might provide hierarchically structured auditory cortex with computational capabilities important for robust representations of spectrotemporal patterns.
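The claim that local memories 'stack up' in a hierarchy can be seen in a toy cascade (time constant, layer count, and the impulse input are arbitrary illustrative choices, not the paper's network): each layer keeps a short, leaky memory of its input, and the response at the top of the cascade peaks later and outlasts the memory span of any single layer.

```python
import numpy as np

def cascade_response(n_layers, tau=10.0, dt=1.0, duration=200):
    """Impulse response of a cascade of identical leaky ('local memory') units."""
    x = np.zeros(int(duration / dt))
    x[0] = 1.0                                   # brief input at t = 0
    for _ in range(n_layers):
        y, state = np.zeros_like(x), 0.0
        for i, xi in enumerate(x):
            state += dt * (xi - state) / tau     # leaky integration of the layer's input
            y[i] = state
        x = y
    return x

for n in (1, 3):
    r = cascade_response(n)
    above_half = int(np.max(np.nonzero(r > 0.5 * r.max())))
    print(f"{n} layer(s): peak at t = {int(np.argmax(r))}, above half-max until t = {above_half}")
```

This is only the linear skeleton of the idea; the short-term synaptic plasticity discussed in the abstract adds the nonlinear, context-dependent behavior that linear spectrotemporal receptive field models fail to capture.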
Collapse
Affiliation(s)
- Johan Westö
- Department of Neuroscience and Biomedical Engineering, Aalto University, FI-00076 Espoo, Finland
| | - Patrick J. C. May
- Special Laboratory Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, D-39118 Magdeburg, Germany
| | - Hannu Tiitinen
- Department of Neuroscience and Biomedical Engineering, Aalto University, FI-00076 Espoo, Finland
| |
Collapse
|
38
|
Vedurmudi AP, Goulet J, Christensen-Dalsgaard J, Young BA, Williams R, van Hemmen JL. How Internally Coupled Ears Generate Temporal and Amplitude Cues for Sound Localization. PHYSICAL REVIEW LETTERS 2016; 116:028101. [PMID: 26824568 DOI: 10.1103/physrevlett.116.028101] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/21/2015] [Indexed: 06/05/2023]
Abstract
In internally coupled ears, displacement of one eardrum creates pressure waves that propagate through air-filled passages in the skull and cause displacement of the opposing eardrum, and conversely. By modeling the membrane, passages, and propagating pressure waves, we show that internally coupled ears generate unique amplitude and temporal cues for sound localization. The magnitudes of both these cues are directionally dependent. The tympanic fundamental frequency segregates a low-frequency regime with constant time-difference magnification from a high-frequency domain with considerable amplitude magnification.
Collapse
Affiliation(s)
- A P Vedurmudi
- Physik Department T35 & Bernstein Center for Computational Neuroscience-Munich, Technische Universität München, 85747 Garching bei München, Germany
| | - J Goulet
- Physik Department T35 & Bernstein Center for Computational Neuroscience-Munich, Technische Universität München, 85747 Garching bei München, Germany
- Institute of Neuroscience and Medicine - Neuromodulation INM-7, Research Center Jülich, 52425 Jülich, Germany
| | | | - B A Young
- Kirksville College of Osteopathic Medicine, A.T. Still University, Kirksville, Missouri 63501, USA
| | - R Williams
- Kirksville College of Osteopathic Medicine, A.T. Still University, Kirksville, Missouri 63501, USA
| | - J L van Hemmen
- Physik Department T35 & Bernstein Center for Computational Neuroscience-Munich, Technische Universität München, 85747 Garching bei München, Germany
| |
Collapse
|
39
|
Ono M, Ito T. Functional organization of the mammalian auditory midbrain. J Physiol Sci 2015; 65:499-506. [PMID: 26362672 PMCID: PMC10718034 DOI: 10.1007/s12576-015-0394-3] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2015] [Accepted: 08/22/2015] [Indexed: 12/12/2022]
Abstract
The inferior colliculus (IC) is a critical nexus between the auditory brainstem and the forebrain. Parallel auditory pathways that emerge from the brainstem are integrated in the IC. In this integration, de novo auditory information, processed as local and ascending inputs, converges via the complex neural circuit of the IC. However, it is still unclear how information is processed within this neural circuit. The purpose of this review is to give an anatomical and physiological overview of the IC neural circuit. We address the functional organization of the IC, where excitatory and inhibitory synaptic inputs interact to shape the responses of IC neurons to sound.
Collapse
Affiliation(s)
- Munenori Ono
- Department of Neuroscience, University of Connecticut Health Center, Farmington, CT, 06030-3401, USA.
- Department of Physiology, School of Medicine, Kanazawa Medical University, Uchinada, Ishikawa, 920-0293, Japan.
| | - Tetsufumi Ito
- Department of Anatomy, Faculty of Medical Sciences, University of Fukui, Eiheiji, Fukui, 910-1193, Japan
- Research and Education Program for Life Science, University of Fukui, Fukui, Fukui, 910-8507, Japan
| |
Collapse
|
40
|
Carr CE, Christensen-Dalsgaard J. Sound Localization Strategies in Three Predators. BRAIN, BEHAVIOR AND EVOLUTION 2015; 86:17-27. [PMID: 26398572 DOI: 10.1159/000435946] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
In this paper, we compare some of the neural strategies for sound localization and encoding interaural time differences (ITDs) in three predatory species of Reptilia: alligators, barn owls, and geckos. Birds and crocodilians are sister groups among the extant archosaurs, while geckos are lepidosaurs. Despite the similar organization of their auditory systems, archosaurs and lizards use different strategies for encoding the ITDs that underlie localization of sound in azimuth. Barn owls encode ITD information using a place map, which is composed of neurons serving as labeled lines tuned for preferred spatial locations, while geckos may use a meter strategy or population code composed of broadly sensitive neurons that represent ITD via changes in the firing rate.
Collapse
Affiliation(s)
- Catherine E Carr
- Department of Biology, University of Maryland Center for the Comparative and Evolutionary Biology of Hearing, College Park, Md., USA
| | | |
Collapse
|
41
|
Bierman HS, Carr CE. Sound localization in the alligator. Hear Res 2015; 329:11-20. [PMID: 26048335 DOI: 10.1016/j.heares.2015.05.009] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/23/2015] [Revised: 05/12/2015] [Accepted: 05/24/2015] [Indexed: 10/23/2022]
Abstract
In early tetrapods, it is assumed that the tympana were acoustically coupled through the pharynx and therefore inherently directional, acting as pressure difference receivers. The later closure of the middle ear cavity in turtles, archosaurs, and mammals is a derived condition, and would have changed the ear by decoupling the tympana. Isolation of the middle ears would then have led to selection for structural and neural strategies to compute sound source localization in both archosaurs and mammalian ancestors. In the archosaurs (birds and crocodilians) the presence of air spaces in the skull provided connections between the ears that have been exploited to improve directional hearing, while neural circuits mediating sound localization are well developed. In this review, we will focus primarily on directional hearing in crocodilians, where vocalization and sound localization are thought to be ecologically important, and indicate important issues still awaiting resolution.
Collapse
Affiliation(s)
- Hilary S Bierman
- Center for Comparative and Evolutionary Biology of Hearing, Department of Biology, University of Maryland College Park, College Park, Maryland 20742, USA.
| | - Catherine E Carr
- Center for Comparative and Evolutionary Biology of Hearing, Department of Biology, University of Maryland College Park, College Park, Maryland 20742, USA.
| |
Collapse
|
42
|
Chen Z, Yuan W. Central plasticity and dysfunction elicited by aural deprivation in the critical period. Front Neural Circuits 2015; 9:26. [PMID: 26082685 PMCID: PMC4451366 DOI: 10.3389/fncir.2015.00026] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2014] [Accepted: 05/13/2015] [Indexed: 12/31/2022] Open
Abstract
The acoustic signal is crucial for animals to obtain information from the surrounding environment. Like other sensory systems, the central auditory system undergoes adaptive changes (i.e., plasticity) during the developmental stage as well as other stages of life. Owing to this plasticity, auditory centers may be susceptible to various factors, such as medical intervention, variation in ambient acoustic signals, and lesions of the peripheral hearing organ. There are critical periods during which auditory centers are vulnerable to abnormal experiences. Particularly in the early postnatal development period, aural inputs are essential for functional maturity of auditory centers. An aural deprivation model, which can be achieved by attenuating or blocking the peripheral acoustic afferent input to the auditory center, is ideal for investigating plastic changes of auditory centers. Generally, auditory plasticity includes structural and functional changes, some of which can be irreversible. Aural deprivation can distort tonotopic maps, disrupt binaural integration, reorganize the neural network, and change synaptic transmission in the primary auditory cortex or at lower levels of the auditory system. Regulation of specific gene expression and modified signaling pathways may constitute the underlying molecular mechanisms of these plastic changes. By studying this model, researchers may explore the pathogenesis of hearing loss and reveal plastic changes of the auditory cortex, facilitating therapeutic advances in patients with severe hearing loss. After summarizing developmental features of auditory centers in auditory-deprived animals and discussing central auditory remodeling in patients with hearing loss, we stress the significance of an early, well-designed auditory training program for hearing rehabilitation.
Collapse
Affiliation(s)
- Zhiji Chen
- Department of Otorhinolaryngology Head and Neck Surgery, Southwest Hospital, Third Military Medical University, Chongqing, China
| | - Wei Yuan
- Department of Otorhinolaryngology Head and Neck Surgery, Southwest Hospital, Third Military Medical University, Chongqing, China
| |
Collapse
|
43
|
Synaptic plasticity in the auditory system: a review. Cell Tissue Res 2015; 361:177-213. [PMID: 25896885 DOI: 10.1007/s00441-015-2176-x] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2015] [Accepted: 03/18/2015] [Indexed: 01/19/2023]
Abstract
Synaptic transmission via chemical synapses is dynamic, i.e., the strength of postsynaptic responses may change considerably in response to repeated synaptic activation. Synaptic strength is increased during facilitation, augmentation and potentiation, whereas a decrease in synaptic strength is characteristic for depression and attenuation. This review attempts to discuss the literature on short-term and long-term synaptic plasticity in the auditory brainstem of mammals and birds. One hallmark of the auditory system, particularly the inner ear and lower brainstem stations, is information transfer through neurons that fire action potentials at very high frequency, thereby activating synapses >500 times per second. Some auditory synapses display morphological specializations of the presynaptic terminals, e.g., calyceal extensions, whereas other auditory synapses do not. The review focuses on short-term depression and short-term facilitation, i.e., plastic changes with durations in the millisecond range. Other types of short-term synaptic plasticity, e.g., posttetanic potentiation and depolarization-induced suppression of excitation, will be discussed much more briefly. The same holds true for subtypes of long-term plasticity, like prolonged depolarizations and spike-time-dependent plasticity. We also address forms of plasticity in the auditory brainstem that do not comprise synaptic plasticity in a strict sense, namely short-term suppression, paired tone facilitation, short-term adaptation, synaptic adaptation and neural adaptation. Finally, we perform a meta-analysis of 61 studies in which short-term depression (STD) in the auditory system is opposed to short-term depression at non-auditory synapses in order to compare high-frequency neurons with those that fire action potentials at a lower rate. This meta-analysis reveals considerably less STD in most auditory synapses than in non-auditory ones, enabling reliable, failure-free synaptic transmission even at frequencies >100 Hz. Surprisingly, the calyx of Held, arguably the best-investigated synapse in the central nervous system, depresses most robustly. It will be exciting to reveal the molecular mechanisms that set high-fidelity synapses apart from other synapses that function much less reliably.
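The contrast drawn by the meta-analysis, strong depression at many central synapses versus comparatively little depression at most auditory brainstem synapses, can be illustrated with the standard depression-only Tsodyks-Markram description of short-term plasticity (the two parameter sets below are invented examples, not values fitted to the 61 studies): at 100 Hz, a synapse with a small release fraction and fast recovery keeps most of its per-spike efficacy, while one with a large release fraction and slow recovery depresses almost completely.

```python
import math

def steady_state_release(rate_hz, U, tau_rec_s):
    """Per-spike release at steady state, relative to the first spike of a regular train,
    in a depression-only Tsodyks-Markram model (U = release fraction,
    tau_rec_s = resource recovery time constant)."""
    interval = 1.0 / rate_hz
    decay = math.exp(-interval / tau_rec_s)
    return (1.0 - decay) / (1.0 - (1.0 - U) * decay)

examples = {
    "mildly depressing (assumed auditory-like)": (0.2, 0.05),
    "strongly depressing (assumed calyx-like)": (0.7, 0.5),
}
for name, (U, tau_rec) in examples.items():
    print(f"{name}: {steady_state_release(100.0, U, tau_rec):.2f} of initial efficacy at 100 Hz")
```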
Collapse
|
44
|
Liu AS, Tsunada J, Gold JI, Cohen YE. Temporal Integration of Auditory Information Is Invariant to Temporal Grouping Cues. eNeuro 2015; 2:ENEURO.0077-14.2015. [PMID: 26464975 PMCID: PMC4596088 DOI: 10.1523/eneuro.0077-14.2015] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2014] [Revised: 03/01/2015] [Accepted: 03/30/2015] [Indexed: 11/29/2022] Open
Abstract
Auditory perception depends on the temporal structure of incoming acoustic stimuli. Here, we examined whether a temporal manipulation that affects the perceptual grouping also affects the time dependence of decisions regarding those stimuli. We designed a novel discrimination task that required human listeners to decide whether a sequence of tone bursts was increasing or decreasing in frequency. We manipulated temporal perceptual-grouping cues by changing the time interval between the tone bursts, which led to listeners hearing the sequences as a single sound for short intervals or discrete sounds for longer intervals. Despite these strong perceptual differences, this manipulation did not affect the efficiency of how auditory information was integrated over time to form a decision. Instead, the grouping manipulation affected subjects' speed-accuracy trade-offs. These results indicate that the temporal dynamics of evidence accumulation for auditory perceptual decisions can be invariant to manipulations that affect the perceptual grouping of the evidence.
Collapse
Affiliation(s)
| | - Joji Tsunada
- Department of Otorhinolaryngology, Perelman School of Medicine
| | - Joshua I. Gold
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104
| | - Yale E. Cohen
- Department of Otorhinolaryngology, Perelman School of Medicine
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104
| |
Collapse
|
45
|
Brown AD, Stecker GC, Tollin DJ. The precedence effect in sound localization. J Assoc Res Otolaryngol 2015; 16:1-28. [PMID: 25479823 PMCID: PMC4310855 DOI: 10.1007/s10162-014-0496-2] [Citation(s) in RCA: 64] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2014] [Accepted: 10/13/2014] [Indexed: 11/29/2022] Open
Abstract
In ordinary listening environments, acoustic signals reaching the ears directly from real sound sources are followed after a few milliseconds by early reflections arriving from nearby surfaces. Early reflections are spectrotemporally similar to their source signals but commonly carry spatial acoustic cues unrelated to the source location. Humans and many other animals, including nonmammalian and even invertebrate animals, are nonetheless able to effectively localize sound sources in such environments, even in the absence of disambiguating visual cues. Robust source localization despite concurrent or nearly concurrent spurious spatial acoustic information is commonly attributed to an assortment of perceptual phenomena collectively termed "the precedence effect," characterizing the perceptual dominance of spatial information carried by the first-arriving signal. Here, we highlight recent progress and changes in the understanding of the precedence effect and related phenomena.
Collapse
Affiliation(s)
- Andrew D. Brown
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO 80045, USA
| | - G. Christopher Stecker
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, USA
| | - Daniel J. Tollin
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO 80045, USA
| |
Collapse
|
46
|
Cazettes F, Fischer BJ, Pena JL. Spatial cue reliability drives frequency tuning in the barn Owl's midbrain. eLife 2014; 3:e04854. [PMID: 25531067 PMCID: PMC4291741 DOI: 10.7554/elife.04854] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2014] [Accepted: 12/21/2014] [Indexed: 11/13/2022] Open
Abstract
The robust representation of the environment from unreliable sensory cues is vital for the efficient function of the brain. However, how the neural processing captures the most reliable cues is unknown. The interaural time difference (ITD) is the primary cue to localize sound in horizontal space. ITD is encoded in the firing rate of neurons that detect interaural phase difference (IPD). Due to the filtering effect of the head, IPD for a given location varies depending on the environmental context. We found that, in barn owls, at each location there is a frequency range where the head filtering yields the most reliable IPDs across contexts. Remarkably, the frequency tuning of space-specific neurons in the owl's midbrain varies with their preferred sound location, matching the range that carries the most reliable IPD. Thus, frequency tuning in the owl's space-specific neurons reflects a higher-order feature of the code that captures cue reliability.
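The link between cue reliability and frequency tuning can be sketched numerically (the ITD jitter assigned to each frequency below is invented to stand in for context-dependent head filtering; it is not the owl data): treating the length of the mean resultant vector of the interaural phase difference (IPD) distribution as the reliability measure, the frequency that maximizes it is the one a space-specific neuron would benefit from being tuned to.

```python
import numpy as np

rng = np.random.default_rng(1)
itd_s = 100e-6                          # ITD of a hypothetical location
# Assumed context-to-context ITD jitter at each frequency (s), standing in for
# the variable filtering effect of the head across environments:
jitter_s = {2000: 30e-6, 4000: 8e-6, 6000: 25e-6, 8000: 50e-6}

def ipd_reliability(freq_hz, sigma_s, n_contexts=2000):
    itds = itd_s + rng.normal(0.0, sigma_s, n_contexts)
    ipds = 2.0 * np.pi * freq_hz * itds                  # IPD per context, in radians
    return float(np.abs(np.mean(np.exp(1j * ipds))))     # 1 = fully reliable, 0 = uniform

reliability = {f: round(ipd_reliability(f, s), 3) for f, s in jitter_s.items()}
print(reliability, "-> most reliable frequency:", max(reliability, key=reliability.get))
```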
Collapse
Affiliation(s)
- Fanny Cazettes
- Department of Neuroscience, Albert Einstein College of Medicine, New York, United States
| | - Brian J Fischer
- Department of Mathematics, Seattle University, Seattle, United States
| | - Jose L Pena
- Department of Neuroscience, Albert Einstein College of Medicine, New York, United States
| |
Collapse
|
47
|
Bierman HS, Thornton JL, Jones HG, Koka K, Young BA, Brandt C, Christensen-Dalsgaard J, Carr CE, Tollin DJ. Biophysics of directional hearing in the American alligator (Alligator mississippiensis). J Exp Biol 2014; 217:1094-107. [PMID: 24671963 DOI: 10.1242/jeb.092866] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
Physiological and anatomical studies have suggested that alligators have unique adaptations for spatial hearing. Sound localization cues are primarily generated by the filtering of sound waves by the head. Different vertebrate lineages have evolved external and/or internal anatomical adaptations to enhance these cues, such as pinnae and interaural canals. It has been hypothesized that in alligators, directionality may be enhanced via the acoustic coupling of middle ear cavities, resulting in a pressure difference receiver (PDR) mechanism. The experiments reported here support a role for a PDR mechanism in alligator sound localization by demonstrating that (1) acoustic space cues generated by the external morphology of the animal are not sufficient to generate location cues that match physiological sensitivity, (2) continuous pathways between the middle ears are present to provide an anatomical basis for coupling, (3) the auditory brainstem response shows some directionality, and (4) eardrum movement is directionally sensitive. Together, these data support the role of a PDR mechanism in crocodilians and further suggest this mechanism is a shared archosaur trait, most likely found also in the extinct dinosaurs.
Collapse
Affiliation(s)
- Hilary S Bierman
- Center for Comparative and Evolutionary Biology of Hearing, Department of Biology, University of Maryland College Park, College Park, MD 20742, USA
| |
Collapse
|
48
|
Selective forces on origin, adaptation and reduction of tympanal ears in insects. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2014; 201:155-69. [DOI: 10.1007/s00359-014-0962-7] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2014] [Revised: 10/28/2014] [Accepted: 10/31/2014] [Indexed: 10/24/2022]
|
49
|
Grothe B, Pecka M. The natural history of sound localization in mammals--a story of neuronal inhibition. Front Neural Circuits 2014; 8:116. [PMID: 25324726 PMCID: PMC4181121 DOI: 10.3389/fncir.2014.00116] [Citation(s) in RCA: 105] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2014] [Accepted: 09/01/2014] [Indexed: 12/14/2022] Open
Abstract
Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems. Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds.
Collapse
Affiliation(s)
- Benedikt Grothe
- Division of Neurobiology, Department of Biology II, Ludwig Maximilians University Munich, Munich, Germany
| | - Michael Pecka
- Division of Neurobiology, Department of Biology II, Ludwig Maximilians University Munich, Munich, Germany
| |
Collapse
|
50
|
Liu H, Currano L, Gee D, Helms T, Yu M. Understanding and mimicking the dual optimality of the fly ear. Sci Rep 2014; 3:2489. [PMID: 23966060 PMCID: PMC3749551 DOI: 10.1038/srep02489] [Citation(s) in RCA: 46] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2013] [Accepted: 08/06/2013] [Indexed: 11/20/2022] Open
Abstract
The fly Ormia ochracea has the remarkable ability, given an eardrum separation of only 520 μm, to pinpoint the 5 kHz chirp of its cricket host. Previous research showed that the two eardrums are mechanically coupled, which amplifies the directional cues. We have now performed a mechanics and optimization analysis which reveals that the right coupling strength is key: it results in simultaneously optimized directional sensitivity and directional cue linearity at 5 kHz. We next demonstrated that this dual optimality is replicable in a synthetic device and can be tailored for a desired frequency. Finally, we demonstrated a miniature sensor endowed with this dual optimality at 8 kHz with unparalleled sound localization. This work provides a quantitative and mechanistic explanation for the fly's sound-localization ability from a new perspective, and it provides a framework for the development of fly-ear-inspired sensors that overcome a previously insurmountable size constraint in engineered sound-localization systems.
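A back-of-envelope check (speed of sound assumed to be 344 m/s in air) shows why mechanical coupling is needed in the first place: the 520 μm eardrum separation quoted above limits the purely acoustic interaural time difference to roughly 1.5 μs, far too small to use directly without amplification of the directional cues.

```python
# Maximum external interaural time difference across the fly's eardrum separation.
eardrum_separation_m = 520e-6     # from the abstract above
speed_of_sound_m_s = 344.0        # assumed value for air
max_itd_s = eardrum_separation_m / speed_of_sound_m_s
print(f"max external ITD ~ {max_itd_s * 1e6:.1f} microseconds")   # about 1.5 us
```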
Collapse
Affiliation(s)
- Haijun Liu
- Department of Mechanical Engineering, University of Maryland, College Park, MD 20742, USA
| |
Collapse
|