1. Berns MP, Nunez GM, Zhang X, Chavan A, Zemlianova K, Mowery TM, Yao JD. Auditory decision-making deficits after permanent noise-induced hearing loss. Sci Rep 2025;15:2104. PMID: 39814750; PMCID: PMC11736143; DOI: 10.1038/s41598-024-83374-8.
Abstract
Loud noise exposure is one of the leading causes of permanent hearing loss. Individuals with noise-induced hearing loss (NIHL) suffer from speech comprehension deficits and experience impairments to cognitive functions such as attention and decision-making. Here, we investigate the specific underlying cognitive processes during auditory perceptual decision-making that are impacted by NIHL. Gerbils were trained to perform an auditory decision-making task that involves discriminating between slow and fast presentation rates of amplitude-modulated (AM) noise. Decision-making task performance was assessed across pre- versus post-NIHL sessions within the same gerbils. A single exposure session (2 h) to loud broadband noise (120 dB SPL) produced permanent NIHL with elevated threshold shifts in auditory brainstem responses (ABRs). Following NIHL, decision-making task performance was tested at sensation levels comparable to those prior to noise exposure in all animals. Our findings demonstrate that NIHL diminished perceptual acuity, reduced attentional focus, altered choice bias, and slowed evidence accumulation. Finally, video-tracking analysis of motor behavior during task performance demonstrates that NIHL can impact sensory-guided decision-based motor execution. Together, these results suggest that NIHL impairs the sensory, cognitive, and motor factors that support auditory decision-making.
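The slowed evidence accumulation reported here is naturally described in a drift-diffusion framework. Below is a minimal simulation sketch of that idea; the drift, bound, and noise values are illustrative assumptions, not parameters fitted by the authors.

```python
# Minimal drift-diffusion sketch of a two-choice AM-rate discrimination.
# Illustrative only: drift, bound, and noise values are assumptions,
# not quantities estimated by Berns et al.
import numpy as np

def simulate_ddm(drift, bound=1.0, noise=1.0, dt=1e-3, max_t=3.0, rng=None):
    """Simulate one trial; returns (choice, decision_time)."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t

rng = np.random.default_rng(0)
# A slower drift rate (as reported after NIHL) yields slower, less accurate choices.
for label, drift in [("pre-NIHL (fast drift)", 1.5), ("post-NIHL (slow drift)", 0.5)]:
    trials = [simulate_ddm(drift, rng=rng) for _ in range(2000)]
    acc = np.mean([c for c, _ in trials])
    rt = np.mean([t for _, t in trials])
    print(f"{label}: accuracy={acc:.2f}, mean decision time={rt:.2f} s")
```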
Affiliation(s)
- Madeline P Berns
- Department of Psychology - Behavioral and Systems Neuroscience, Rutgers, The State University of New Jersey, Piscataway, NJ, 08854, USA
- Genesis M Nunez
- Brain Health Institute, Rutgers, The State University of New Jersey, Nelson Biological Laboratory D418, 604 Allison Road, Piscataway, NJ, 08854, USA
- Xingeng Zhang
- Brain Health Institute, Rutgers, The State University of New Jersey, Nelson Biological Laboratory D418, 604 Allison Road, Piscataway, NJ, 08854, USA
- Anindita Chavan
- Brain Health Institute, Rutgers, The State University of New Jersey, Nelson Biological Laboratory D418, 604 Allison Road, Piscataway, NJ, 08854, USA
- Klavdia Zemlianova
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, 10027, USA
- Todd M Mowery
- Department of Otolaryngology - Head and Neck Surgery, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, 08901, USA
- Brain Health Institute, Rutgers, The State University of New Jersey, Nelson Biological Laboratory D418, 604 Allison Road, Piscataway, NJ, 08854, USA
- Justin D Yao
- Department of Otolaryngology - Head and Neck Surgery, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, 08901, USA
- Brain Health Institute, Rutgers, The State University of New Jersey, Nelson Biological Laboratory D418, 604 Allison Road, Piscataway, NJ, 08854, USA
- Department of Psychology - Behavioral and Systems Neuroscience, Rutgers, The State University of New Jersey, Piscataway, NJ, 08854, USA
2. Zhao S, Skerritt-Davis B, Elhilali M, Dick F, Chait M. Sustained EEG responses to rapidly unfolding stochastic sounds reflect Bayesian inferred reliability tracking. Prog Neurobiol 2025;244:102696. PMID: 39647599; DOI: 10.1016/j.pneurobio.2024.102696.
Abstract
How does the brain track and process rapidly changing sensory information? Current computational accounts suggest that our sensations and decisions arise from the intricate interplay between bottom-up sensory signals and constantly changing expectations regarding the statistics of the surrounding world. A significant focus of recent research is determining which statistical properties are tracked by the brain as it monitors the rapid progression of sensory information. Here, by combining EEG (three experiments, N ≥ 22 each) and computational modelling, we examined how the brain processes rapid and stochastic sound sequences that simulate key aspects of dynamic sensory environments. Passively listening participants were exposed to structured tone-pip arrangements that contained transitions between a range of stochastic patterns. Predictions were guided by a Bayesian predictive inference model. We demonstrate that listeners automatically track the statistics of unfolding sounds, even when these are irrelevant to behaviour. Transitions between sequence patterns drove a shift in the sustained EEG response. This was observed for a range of distributional statistics, and even in situations where behavioural detection of these transitions was at floor. These observations suggest that the modulation of the sustained EEG response reflects a process of belief updating within the brain. By establishing a connection between the outputs of the computational model and the observed brain responses, we demonstrate that the dynamics of these transition-related responses align with the tracking of "precision" (the confidence or reliability assigned to a predicted sensory signal), shedding light on the interplay between the brain's statistical tracking mechanisms and its response dynamics.
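The modelling approach can be illustrated with a toy sequential Bayesian update. The sketch below tracks the mean and precision of a single Gaussian sound pattern and shows surprise spiking at a transition; the full model in the paper handles far richer sequence statistics, so every value here is an assumption for illustration.

```python
# Toy sequential Bayesian tracking of a tone-pip sequence, in the spirit
# of the predictive-inference model described above. Simplified: one
# Gaussian pattern with known observation variance.
import numpy as np

rng = np.random.default_rng(1)
obs_var = 1.0                      # assumed observation variance
mu, prec = 0.0, 1e-3               # prior mean and precision (1/variance)

# Sequence with a transition: the generating mean jumps from 0 to 4 at sample 100.
seq = np.concatenate([rng.normal(0, 1, 100), rng.normal(4, 1, 100)])

surprise = []
for x in seq:
    pred_var = 1.0 / prec + obs_var                   # predictive variance
    surprise.append(0.5 * (x - mu) ** 2 / pred_var)   # squared prediction error term
    # Conjugate Gaussian update: precision grows with each sample.
    post_prec = prec + 1.0 / obs_var
    mu = (prec * mu + x / obs_var) / post_prec
    prec = post_prec

print("mean surprise before transition:", np.mean(surprise[:100]))
print("mean surprise just after transition:", np.mean(surprise[100:110]))
```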
Affiliation(s)
- Sijia Zhao
- Ear Institute, University College London, London WC1X 8EE, UK; Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, UK.
- Mounya Elhilali
- Electrical & Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, United States
- Frederic Dick
- Department of Experimental Psychology, University College London, London WC1H 0DS, UK
- Maria Chait
- Ear Institute, University College London, London WC1X 8EE, UK
3. Peterson RE, Choudhri A, Mitelut C, Tanelus A, Capo-Battaglia A, Williams AH, Schneider DM, Sanes DH. Unsupervised discovery of family specific vocal usage in the Mongolian gerbil. eLife 2024;12:RP89892. PMID: 39680425; DOI: 10.7554/eLife.89892.
Abstract
In nature, animal vocalizations can provide crucial information about identity, including kinship and hierarchy. However, lab-based vocal behavior is typically studied during brief interactions between animals with no prior social relationship, and under environmental conditions with limited ethological relevance. Here, we address this gap by establishing long-term acoustic recordings from Mongolian gerbil families, a core social group that uses an array of sonic and ultrasonic vocalizations. Three separate gerbil families were transferred to an enlarged environment and continuous 20-day audio recordings were obtained. Using a variational autoencoder (VAE) to quantify 583,237 vocalizations, we show that gerbils exhibit a more elaborate vocal repertoire than has been previously reported and that vocal repertoire usage differs significantly by family. By performing Gaussian mixture model clustering on the VAE latent space, we show that families preferentially use characteristic sets of vocal clusters and that these usage preferences remain stable over weeks. Furthermore, gerbils displayed family-specific transitions between vocal clusters. Since gerbils live naturally as extended families in complex underground burrows that are adjacent to other families, these results suggest the presence of a vocal dialect that could be exploited by animals to represent kinship. These findings position the Mongolian gerbil as a compelling animal model for studying the neural basis of vocal communication and demonstrate the potential of using unsupervised machine learning with uninterrupted acoustic recordings to gain insights into naturalistic animal behavior.
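The clustering stage described above can be sketched with scikit-learn's GaussianMixture on stand-in latent coordinates; the trained VAE, the latent dimensionality, and the cluster count are placeholders here, not the authors' settings.

```python
# Sketch of the clustering stage: Gaussian mixture model fit to VAE
# latent coordinates, then per-family cluster-usage histograms.
# Hypothetical inputs: `latents` stands in for VAE embeddings and
# `family` for family labels; the VAE itself is omitted.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
latents = rng.normal(size=(6000, 32))          # stand-in for VAE latents
family = rng.integers(0, 3, size=6000)         # stand-in family labels

gmm = GaussianMixture(n_components=20, covariance_type="full", random_state=0)
clusters = gmm.fit_predict(latents)

# Cluster-usage distribution per family (rows sum to 1); family-specific
# usage would show up as distinct rows.
usage = np.zeros((3, 20))
for f in range(3):
    counts = np.bincount(clusters[family == f], minlength=20)
    usage[f] = counts / counts.sum()
print(np.round(usage, 3))
```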
Affiliation(s)
- Ralph E Peterson
- Center for Neural Science, New York University, New York, United States
- Center for Computational Neuroscience, Flatiron Institute, New York, United States
- Aman Choudhri
- Columbia University, New York, New York, United States
- Catalin Mitelut
- Center for Neural Science, New York University, New York, United States
- Aramis Tanelus
- Center for Neural Science, New York University, New York, United States
- Center for Computational Neuroscience, Flatiron Institute, New York, United States
- Alex H Williams
- Center for Neural Science, New York University, New York, United States
- Center for Computational Neuroscience, Flatiron Institute, New York, United States
- David M Schneider
- Center for Neural Science, New York University, New York, United States
- Dan H Sanes
- Center for Neural Science, New York University, New York, United States
- Department of Psychology, New York University, New York, United States
- Neuroscience Institute, New York University School of Medicine, New York, United States
- Department of Biology, New York University, New York, United States
4. Frühholz S, Rodriguez P, Bonard M, Steiner F, Bobin M. Psychoacoustic and archeoacoustic nature of ancient Aztec skull whistles. Commun Psychol 2024;2:108. PMID: 39528620; PMCID: PMC11555264; DOI: 10.1038/s44271-024-00157-7.
Abstract
Many ancient cultures used musical tools for social and ritual procedures, with the Aztec skull whistle being a unique exemplar from postclassic Mesoamerica. Skull whistles can produce softer hiss-like but also aversive and scream-like sounds that were potentially meaningful either for sacrificial practices, mythological symbolism, or intimidating warfare of the Aztecs. However, solid psychoacoustic evidence for any of these theories is missing, especially regarding how human listeners cognitively and affectively respond to skull whistle sounds. Using psychoacoustic listening and classification experiments, we show that skull whistle sounds are predominantly perceived as aversive and scary and as having a hybrid natural-artificial origin. Skull whistle sounds attract mental attention by affectively mimicking other aversive and startling sounds produced by nature and technology. They were psychoacoustically classified as a hybrid mix of voice- and scream-like sounds that also seem to originate from technical mechanisms. Using human neuroimaging, we furthermore found that skull whistle sounds received a specific decoding of their affective significance in the neural auditory system of human listeners, accompanied by higher-order auditory cognition and symbolic evaluations in fronto-insular-parietal brain systems. Skull whistles thus seem to be unique sound tools with specific psycho-affective effects on listeners, and Aztec communities might have capitalized on the scary and scream-like nature of skull whistles.
Affiliation(s)
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland.
- Department of Psychology, University of Oslo, Oslo, Norway.
- Pablo Rodriguez
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
- Mathilde Bonard
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
- Florence Steiner
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
- Marine Bobin
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
5. Khilkevich A, Lohse M, Low R, Orsolic I, Bozic T, Windmill P, Mrsic-Flogel TD. Brain-wide dynamics linking sensation to action during decision-making. Nature 2024;634:890-900. PMID: 39261727; PMCID: PMC11499283; DOI: 10.1038/s41586-024-07908-w.
Abstract
Perceptual decisions rely on learned associations between sensory evidence and appropriate actions, involving the filtering and integration of relevant inputs to prepare and execute timely responses [1,2]. Despite the distributed nature of task-relevant representations [3-10], it remains unclear how transformations between sensory input, evidence integration, motor planning and execution are orchestrated across brain areas and dimensions of neural activity. Here we addressed this question by recording brain-wide neural activity in mice learning to report changes in ambiguous visual input. After learning, evidence integration emerged across most brain areas in sparse neural populations that drive movement-preparatory activity. Visual responses evolved from transient activations in sensory areas to sustained representations in frontal-motor cortex, thalamus, basal ganglia, midbrain and cerebellum, enabling parallel evidence accumulation. In areas that accumulate evidence, shared population activity patterns encode visual evidence and movement preparation, distinct from movement-execution dynamics. Activity in the movement-preparatory subspace is driven by neurons integrating evidence; this activity collapses at movement onset, allowing the integration process to reset. Across premotor regions, evidence-integration timescales were independent of intrinsic regional dynamics, and thus depended on task experience. In summary, learning aligns evidence accumulation to action preparation in activity dynamics across dozens of brain regions. This leads to highly distributed and parallelized sensorimotor transformations during decision-making. Our work unifies concepts from the decision-making and motor-control fields into a brain-wide framework for understanding how sensory evidence controls actions.
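One of the population analyses described above, projecting single-trial activity into a movement-preparatory subspace, can be sketched as follows on synthetic data; the subspace definition, epoch boundaries, and data shapes are illustrative assumptions rather than the authors' pipeline.

```python
# Sketch: define a "preparatory" subspace from trial-averaged pre-movement
# activity and project single-trial population activity into it.
# All data are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_trials, n_neurons, n_time = 200, 120, 50
# Synthetic activity: a shared ramp (evidence integration) plus noise.
ramp = np.linspace(0, 1, n_time)
loadings = rng.normal(size=n_neurons)
X = (loadings[None, :, None] * ramp[None, None, :]
     + 0.5 * rng.normal(size=(n_trials, n_neurons, n_time)))

# Preparatory subspace: top PCs of the trial-averaged late (pre-movement) epoch.
prep_epoch = X[:, :, 30:].mean(axis=0)           # neurons x time
pca = PCA(n_components=3).fit(prep_epoch.T)      # samples=time, features=neurons

# Project single trials; the first component tracks the evidence ramp.
proj = np.einsum("kn,tns->tks", pca.components_, X)
print("projected shape (trials, components, time):", proj.shape)
print("component-1 mean at start vs end:",
      proj[:, 0, 0].mean().round(2), proj[:, 0, -1].mean().round(2))
```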
Affiliation(s)
- Andrei Khilkevich
- Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London, UK.
- Michael Lohse
- Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London, UK
- Ryan Low
- Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London, UK
- Ivana Orsolic
- Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London, UK
- Tadej Bozic
- Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London, UK
- Paige Windmill
- Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London, UK
- Thomas D Mrsic-Flogel
- Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London, UK
6. Berns MP, Nunez GM, Zhang X, Chavan A, Zemlianova K, Mowery TM, Yao JD. Auditory decision-making deficits after permanent noise-induced hearing loss. bioRxiv 2024:2024.09.23.614535 [preprint]. PMID: 39386692; PMCID: PMC11463679; DOI: 10.1101/2024.09.23.614535.
Abstract
Loud noise exposure is one of the leading causes of permanent hearing loss. Individuals with noise-induced hearing loss (NIHL) suffer from speech comprehension deficits and experience impairments to cognitive functions such as attention and decision-making. Here, we tested whether a specific sensory deficit, NIHL, can directly impair auditory cognitive function. Gerbils were trained to perform an auditory decision-making task that involves discriminating between slow and fast presentation rates of amplitude-modulated (AM) noise. Decision-making task performance was assessed across pre- versus post-NIHL sessions within the same gerbils. A single exposure session (2 hours) to loud broadband noise (120 dB SPL) produced permanent NIHL with elevated threshold shifts in auditory brainstem responses (ABRs). Following NIHL, decision-making task performance was tested at sensation levels comparable to those prior to noise exposure in all animals. Our findings demonstrate that NIHL diminished perceptual acuity, reduced attentional focus, altered choice bias, and slowed evidence accumulation. Finally, video-tracking analysis of motor behavior during task performance demonstrates that NIHL can impact sensory-guided decision-based motor execution. Together, these results suggest that NIHL impairs the sensory, cognitive, and motor factors that support auditory decision-making.
7. Peterson RE, Choudhri A, Mitelut C, Tanelus A, Capo-Battaglia A, Williams AH, Schneider DM, Sanes DH. Unsupervised discovery of family specific vocal usage in the Mongolian gerbil. bioRxiv 2024:2023.03.11.532197 [preprint]. PMID: 39282260; PMCID: PMC11398318; DOI: 10.1101/2023.03.11.532197.
Abstract
In nature, animal vocalizations can provide crucial information about identity, including kinship and hierarchy. However, lab-based vocal behavior is typically studied during brief interactions between animals with no prior social relationship, and under environmental conditions with limited ethological relevance. Here, we address this gap by establishing long-term acoustic recordings from Mongolian gerbil families, a core social group that uses an array of sonic and ultrasonic vocalizations. Three separate gerbil families were transferred to an enlarged environment and continuous 20-day audio recordings were obtained. Using a variational autoencoder (VAE) to quantify 583,237 vocalizations, we show that gerbils exhibit a more elaborate vocal repertoire than has been previously reported and that vocal repertoire usage differs significantly by family. By performing Gaussian mixture model clustering on the VAE latent space, we show that families preferentially use characteristic sets of vocal clusters and that these usage preferences remain stable over weeks. Furthermore, gerbils displayed family-specific transitions between vocal clusters. Since gerbils live naturally as extended families in complex underground burrows that are adjacent to other families, these results suggest the presence of a vocal dialect that could be exploited by animals to represent kinship. These findings position the Mongolian gerbil as a compelling animal model for studying the neural basis of vocal communication and demonstrate the potential of using unsupervised machine learning with uninterrupted acoustic recordings to gain insights into naturalistic animal behavior.
Affiliation(s)
- Ralph E. Peterson
- Center for Neural Science, New York University, New York, NY
- Center for Computational Neuroscience, Flatiron Institute, New York, NY
- Catalin Mitelut
- Center for Neural Science, New York University, New York, NY
- Aramis Tanelus
- Center for Neural Science, New York University, New York, NY
- Center for Computational Neuroscience, Flatiron Institute, New York, NY
- Alex H. Williams
- Center for Neural Science, New York University, New York, NY
- Center for Computational Neuroscience, Flatiron Institute, New York, NY
- Dan H. Sanes
- Center for Neural Science, New York University, New York, NY
- Department of Psychology, New York University, New York, NY
- Department of Biology, New York University, New York, NY
- Neuroscience Institute, New York University School of Medicine, New York, NY
8. Mackey CA, Hauser S, Schoenhaut AM, Temghare N, Ramachandran R. Hierarchical differences in the encoding of amplitude modulation in the subcortical auditory system of awake nonhuman primates. J Neurophysiol 2024;132:1098-1114. PMID: 39140590; PMCID: PMC11427057; DOI: 10.1152/jn.00329.2024.
Abstract
Sinusoidal amplitude modulation (SAM) is a key feature of complex sounds. Although psychophysical studies have characterized SAM perception, and neurophysiological studies in anesthetized animals report a transformation from a temporal code in the cochlear nucleus (CN; brainstem) to a rate code in the inferior colliculus (IC; midbrain), none have used awake animals or nonhuman primates to compare the coding strategies of the CN and IC to modulation-frequency perception. To address this, we recorded single-unit responses and compared derived neurometric measures in the CN and IC to psychometric measures of modulation frequency (MF) discrimination in macaques. IC and CN neurons often exhibited tuned responses to SAM in rate and spike-timing measures of modulation coding. Neurometric thresholds spanned a large range (2-200 Hz ΔMF). The lowest 40% of IC thresholds were less than or equal to psychometric thresholds, regardless of which code was used, whereas CN thresholds were greater than psychometric thresholds. Discrimination at 10-20 Hz could be explained by indiscriminately pooling 30 units in either structure, whereas discrimination at higher MFs was best explained by more selective pooling. This suggests that pooled CN activity was sufficient for AM discrimination. Psychometric and neurometric thresholds decreased as stimulus duration increased, but IC and CN thresholds were higher and more variable than behavior at short durations. This slower subcortical temporal integration compared with behavior was consistent with a drift diffusion model that reproduced individual differences in performance and can constrain future neurophysiological studies of temporal integration. These measures provide an account of AM perception at the neurophysiological, computational, and behavioral levels.

NEW & NOTEWORTHY: In everyday environments, the brain is tasked with extracting information from sound envelopes, which involves both sensory encoding and perceptual decision-making. Different neural codes for envelope representation have been characterized in midbrain and cortex, but studies of brainstem nuclei such as the cochlear nucleus (CN) have usually been conducted under anesthesia in nonprimate species. Here, we found that subcortical activity in awake monkeys and a biologically plausible perceptual decision-making model accounted for sound envelope discrimination behavior.
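The pooling result can be illustrated with a toy ROC analysis on summed Poisson spike counts; the firing rates, MF difference, and pool sizes below are assumptions for illustration, not the recorded data.

```python
# Sketch of neurometric pooling: discriminate two modulation frequencies
# from the summed spike counts of N units via an ROC analysis.
# Rates and pool sizes are illustrative assumptions.
import numpy as np

def roc_auc(a, b):
    """Probability that a random draw from b exceeds one from a (ties split)."""
    a, b = np.asarray(a), np.asarray(b)
    greater = (b[None, :] > a[:, None]).mean()
    ties = (b[None, :] == a[:, None]).mean()
    return greater + 0.5 * ties

rng = np.random.default_rng(3)
n_trials = 200
for pool_size in (1, 10, 30):
    # Poisson counts per unit: baseline rate vs slightly higher rate at +ΔMF.
    std_pool = rng.poisson(10.0, size=(n_trials, pool_size)).sum(axis=1)
    cmp_pool = rng.poisson(11.0, size=(n_trials, pool_size)).sum(axis=1)
    print(f"pool of {pool_size:2d} units: AUC = {roc_auc(std_pool, cmp_pool):.3f}")
```

Discriminability (AUC) grows with pool size, mirroring the finding that indiscriminately pooled activity can reach behavioral sensitivity.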
Affiliation(s)
- Chase A Mackey
- Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, United States
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Samantha Hauser
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Adriana M Schoenhaut
- Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, United States
- Namrata Temghare
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Ramnarayan Ramachandran
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
9. Haimson B, Gilday OD, Lavi-Rudel A, Sagi H, Lottem E, Mizrahi A. Single neuron responses to perceptual difficulty in the mouse auditory cortex. Sci Adv 2024;10:eadp9816. PMID: 39141740; PMCID: PMC11323952; DOI: 10.1126/sciadv.adp9816.
Abstract
Perceptual learning leads to improvement in behavioral performance, yet how the brain supports challenging perceptual demands is unknown. We used two-photon imaging in the mouse primary auditory cortex during behavior in a Go-NoGo task designed to test perceptual difficulty. Using general linear model analysis, we found a subset of neurons that increased their responses during high perceptual demands. Single neurons increased their responses to both Go and NoGo sounds when mice were engaged in the more difficult perceptual discrimination. This increased responsiveness contributes to enhanced cortical network discriminability for the learned sounds. Under passive listening conditions, the same neurons responded more weakly to the more similar sound pairs of the difficult task, and the training protocol by itself induced specific suppression to the learned sounds. Our findings identify how neuronal activity in the auditory cortex is modulated during high perceptual demands, a fundamental feature associated with perceptual improvement.
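A regression screen for difficulty-related modulation, in the spirit of the analysis above, might look like the following sketch. It uses a Poisson model on simulated event counts, whereas the authors analyzed two-photon calcium signals, so the data model and the regressor set here are assumptions.

```python
# Sketch of a GLM test for task-difficulty modulation: model a neuron's
# event counts as Poisson with sound-type and difficulty regressors.
# Synthetic data; real designs include many more nuisance regressors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 400
sound_go = rng.integers(0, 2, n)         # Go vs NoGo sound
hard = rng.integers(0, 2, n)             # easy vs difficult discrimination
# Simulated neuron: responds to sounds, responds more in hard conditions.
rate = np.exp(0.2 + 0.5 * sound_go + 0.6 * hard)
counts = rng.poisson(rate)

X = sm.add_constant(np.column_stack([sound_go, hard]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.params)      # [intercept, sound, difficulty]; difficulty ≈ 0.6 here
print(fit.pvalues)     # significance of each coefficient
```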
Affiliation(s)
- Baruch Haimson
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Neurobiology, The Institute of Life Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Omri David Gilday
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Neurobiology, The Institute of Life Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Amichai Lavi-Rudel
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Eran Lottem
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Adi Mizrahi
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Neurobiology, The Institute of Life Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
10. Holey BE, Schneider DM. Sensation and expectation are embedded in mouse motor cortical activity. Cell Rep 2024;43:114396. PMID: 38923464; PMCID: PMC11304474; DOI: 10.1016/j.celrep.2024.114396.
Abstract
During behavior, the motor cortex sends copies of motor-related signals to sensory cortices. Here, we combine closed-loop behavior with large-scale physiology, projection-pattern-specific recordings, and circuit perturbations to show that neurons in mouse secondary motor cortex (M2) encode sensation and are influenced by expectation. When a movement unexpectedly produces a sound, M2 becomes dominated by sound-evoked activity. Sound responses in M2 are inherited partially from the auditory cortex and are routed back to the auditory cortex, providing a path for the reciprocal exchange of sensory-motor information during behavior. When the acoustic consequences of a movement become predictable, M2 responses to self-generated sounds are selectively gated off. These changes in single-cell responses are reflected in population dynamics, which are influenced by both sensation and expectation. Together, these findings reveal the embedding of sensory and expectation signals in motor cortical activity.
Affiliation(s)
- Brooke E Holey
- Center for Neural Science, New York University, New York, NY 10003, USA; Neuroscience Institute, NYU Medical Center, New York, NY 10016, USA
- David M Schneider
- Center for Neural Science, New York University, New York, NY 10003, USA
11. Ishizu K, Nishimoto S, Ueoka Y, Funamizu A. Localized and global representation of prior value, sensory evidence, and choice in male mouse cerebral cortex. Nat Commun 2024;15:4071. PMID: 38778078; PMCID: PMC11111702; DOI: 10.1038/s41467-024-48338-6.
Abstract
Adaptive behavior requires integrating prior knowledge of action outcomes and sensory evidence for making decisions while maintaining prior knowledge for future actions. As outcome- and sensory-based decisions are often tested separately, it is unclear how these processes are integrated in the brain. In a tone frequency discrimination task with two sound durations and asymmetric reward blocks, we found that neurons in the medial prefrontal cortex of male mice represented the additive combination of prior reward expectations and choices. Sensory inputs were selectively decoded from the auditory cortex, irrespective of reward priors, and choices from the secondary motor cortex, suggesting that localized computations of task variables occur within single trials. In contrast, all the recorded regions represented prior values that needed to be maintained across trials. We propose localized and global computations of task variables at different timescales in the cerebral cortex.
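Decoding a maintained prior value from population activity, as described above, can be sketched with cross-validated ridge regression; the synthetic data, block structure, and decoder choice below are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: decode a block-wise prior value from trial-by-trial population
# activity with cross-validated ridge regression. Synthetic stand-in data.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_neurons = 300, 80
prior_value = np.repeat([0.3, 0.7, 0.3], 100)     # asymmetric-reward blocks
weights = rng.normal(size=n_neurons)
# Population activity carries the prior along a fixed axis, plus noise.
X = prior_value[:, None] * weights[None, :] + rng.normal(size=(n_trials, n_neurons))

model = RidgeCV(alphas=np.logspace(-2, 3, 20))
scores = cross_val_score(model, X, prior_value, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean().round(3))
```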
Affiliation(s)
- Kotaro Ishizu
- Institute for Quantitative Biosciences, University of Tokyo, Laboratory of Neural Computation, 1-1-1 Yayoi, Bunkyo-ku, Tokyo, 113-0032, Japan
- Shosuke Nishimoto
- Institute for Quantitative Biosciences, University of Tokyo, Laboratory of Neural Computation, 1-1-1 Yayoi, Bunkyo-ku, Tokyo, 113-0032, Japan
- Department of Life Sciences, Graduate School of Arts and Sciences, University of Tokyo, 3-8-2, Komaba, Meguro-ku, Tokyo, 153-8902, Japan
- Yutaro Ueoka
- Institute for Quantitative Biosciences, University of Tokyo, Laboratory of Neural Computation, 1-1-1 Yayoi, Bunkyo-ku, Tokyo, 113-0032, Japan
- Akihiro Funamizu
- Institute for Quantitative Biosciences, University of Tokyo, Laboratory of Neural Computation, 1-1-1 Yayoi, Bunkyo-ku, Tokyo, 113-0032, Japan
- Department of Life Sciences, Graduate School of Arts and Sciences, University of Tokyo, 3-8-2, Komaba, Meguro-ku, Tokyo, 153-8902, Japan
12. Inguscio BMS, Cartocci G, Sciaraffa N, Nicastri M, Giallini I, Aricò P, Greco A, Babiloni F, Mancini P. Two are better than one: differences in cortical EEG patterns during auditory and visual verbal working memory processing between unilateral and bilateral cochlear implanted children. Hear Res 2024;446:109007. PMID: 38608331; DOI: 10.1016/j.heares.2024.109007.
Abstract
Despite the proven effectiveness of cochlear implants (CI) in restoring hearing in deaf or hard-of-hearing (DHH) children, extreme variability in verbal working memory (VWM) abilities is observed in both unilateral and bilateral CI user children (CIs). Although clinical experience has long noted deficits in this fundamental executive function in CIs, the cause is still unknown. Here, we investigated differences in brain functioning related to the impact of monaural and binaural listening in CIs compared with normal-hearing (NH) peers during a three-level-difficulty n-back task undertaken in two sensory modalities (auditory and visual). The objective of this pioneering study was to identify differences in electroencephalographic (EEG) marker patterns of visual and auditory VWM performance in CIs compared to NH peers, as well as possible differences between unilateral cochlear implant (UCI) and bilateral cochlear implant (BCI) users. The main results revealed differences in the theta and gamma EEG bands. Compared with hearing controls and BCIs, UCIs showed hypoactivation of theta in the frontal area during the most complex condition of the auditory task, and this activation correlated with VWM performance. Theta hypoactivation was also observed in UCIs in the left hemisphere when compared to BCIs, and gamma-band hypoactivation in UCIs compared to both BCIs and NHs. For the latter two groups, left-hemispheric gamma oscillation correlated with performance in the auditory task. These findings, discussed in the light of recent research, suggest that a unilateral CI is deficient in supporting auditory VWM in DHH children, whereas a bilateral CI would allow the DHH child to approach the VWM benchmark of NH children. The present study suggests the possible effectiveness of EEG in supporting, through a targeted approach, the diagnosis and rehabilitation of VWM in DHH children.
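The theta- and gamma-band measures at the heart of this study can be approximated with a standard band-pass-plus-Hilbert pipeline. The sketch below uses synthetic single-channel data and omits the artifact rejection and statistics a real EEG analysis requires.

```python
# Sketch of band-power extraction for theta/gamma analyses:
# band-pass filter, Hilbert envelope, mean power. Synthetic data.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 256
rng = np.random.default_rng(6)
eeg = rng.normal(size=fs * 10)      # 10 s of stand-in EEG

def band_power(x, lo, hi, fs):
    # 4th-order Butterworth band-pass, zero-phase filtering,
    # then power of the analytic-signal envelope.
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, x)))
    return (env ** 2).mean()

print("theta (4-8 Hz) power:", band_power(eeg, 4, 8, fs))
print("gamma (30-45 Hz) power:", band_power(eeg, 30, 45, fs))
```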
Affiliation(s)
- Bianca Maria Serena Inguscio
- Department of Molecular Medicine, Sapienza University of Rome, Viale Regina Elena 291, Rome 00161, Italy; BrainSigns Srl, Via Tirso, 14, Rome 00198, Italy.
- Giulia Cartocci
- Department of Molecular Medicine, Sapienza University of Rome, Viale Regina Elena 291, Rome 00161, Italy; BrainSigns Srl, Via Tirso, 14, Rome 00198, Italy
- Maria Nicastri
- Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, Rome 00161, Italy
- Ilaria Giallini
- Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, Rome 00161, Italy
- Pietro Aricò
- Department of Molecular Medicine, Sapienza University of Rome, Viale Regina Elena 291, Rome 00161, Italy; BrainSigns Srl, Via Tirso, 14, Rome 00198, Italy; Department of Computer, Control, and Management Engineering "Antonio Ruberti", Sapienza University of Rome, Via Ariosto 125, Rome 00185, Italy
- Antonio Greco
- Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, Rome 00161, Italy
- Fabio Babiloni
- Department of Molecular Medicine, Sapienza University of Rome, Viale Regina Elena 291, Rome 00161, Italy; BrainSigns Srl, Via Tirso, 14, Rome 00198, Italy; Department of Computer Science, Hangzhou Dianzi University, Xiasha Higher Education Zone, Hangzhou 310018, China
- Patrizia Mancini
- Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, Rome 00161, Italy
13. Anbuhl KL, Diez Castro M, Lee NA, Lee VS, Sanes DH. Cingulate cortex facilitates auditory perception under challenging listening conditions. bioRxiv 2023:2023.11.10.566668 [preprint]. PMID: 38014324; PMCID: PMC10680599; DOI: 10.1101/2023.11.10.566668.
Abstract
We often exert greater cognitive resources (i.e., listening effort) to understand speech under challenging acoustic conditions. This mechanism can be overwhelmed in those with hearing loss, resulting in cognitive fatigue in adults and potentially impeding language acquisition in children. However, the neural mechanisms that support listening effort are uncertain. Evidence from human studies suggests that the cingulate cortex is engaged under difficult listening conditions and may exert top-down modulation of the auditory cortex (AC). Here, we asked whether the gerbil cingulate cortex (Cg) sends anatomical projections to the AC that facilitate perceptual performance. To model challenging listening conditions, we used a sound discrimination task in which stimulus parameters were presented in either 'Easy' or 'Hard' blocks (i.e., long or short stimulus duration, respectively). Gerbils achieved statistically identical psychometric performance in Easy and Hard blocks. Anatomical tracing experiments revealed a strong descending projection from layer 2/3 of the Cg1 subregion of the cingulate cortex to superficial and deep layers of primary and dorsal AC. To determine whether Cg improves task performance under challenging conditions, we bilaterally infused muscimol to inactivate Cg1 and found that psychometric thresholds were degraded for only Hard blocks. To test whether the Cg-to-AC projection facilitates task performance, we chemogenetically inactivated these inputs and found that performance was degraded only during Hard blocks. Taken together, the results reveal a descending cortical pathway that facilitates perceptual performance during challenging listening conditions.

Significance Statement: Sensory perception often occurs under challenging conditions, such as a noisy background or a dim environment, yet stimulus sensitivity can remain unaffected. One hypothesis is that cognitive resources are recruited to the task, thereby facilitating perceptual performance. Here, we identify a top-down cortical circuit, from cingulate to auditory cortex in the gerbil, that supports auditory perceptual performance under challenging listening conditions. This pathway is a plausible circuit that supports effortful listening and may be degraded by hearing loss.
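The psychometric thresholds compared across Easy and Hard blocks are typically obtained from a sigmoid fit. A minimal sketch with a cumulative-Gaussian model and made-up response proportions follows; the stimulus axis, lapse rate, and threshold criterion are assumptions for illustration.

```python
# Sketch of a psychometric fit: cumulative-Gaussian model of
# proportion-correct data, threshold read off at a fixed criterion.
# Stimulus values and proportions are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma, lapse=0.02):
    # Cumulative Gaussian scaled to allow a small fixed lapse rate.
    return lapse + (1 - 2 * lapse) * norm.cdf(x, mu, sigma)

stim = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # hypothetical stimulus levels
p_resp = np.array([0.10, 0.25, 0.55, 0.85, 0.97])   # hypothetical proportions

(mu, sigma), _ = curve_fit(psychometric, stim, p_resp, p0=[2.0, 1.0])
threshold = norm.ppf(0.76, mu, sigma)               # ~76%-correct point
print(f"mu={mu:.2f}, sigma={sigma:.2f}, threshold={threshold:.2f}")
```

Fitting the same model separately to Easy and Hard blocks, before and after inactivation, is one standard way to quantify the block-specific threshold degradation reported above.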
14. Funamizu A, Marbach F, Zador AM. Stable sound decoding despite modulated sound representation in the auditory cortex. Curr Biol 2023;33:4470-4483.e7. PMID: 37802051; PMCID: PMC10665086; DOI: 10.1016/j.cub.2023.09.031.
Abstract
The activity of neurons in the auditory cortex is driven by both sounds and non-sensory context. To investigate the neuronal correlates of non-sensory context, we trained head-fixed mice to perform a two-alternative-choice auditory task in which either reward or stimulus expectation (prior) was manipulated in blocks. Using two-photon calcium imaging to record populations of single neurons in the auditory cortex, we found that both stimulus and reward expectation modulated the activity of these neurons. A linear decoder trained on this population activity could decode stimuli as well or better than predicted by the animal's performance. Interestingly, the optimal decoder was stable even in the face of variable sensory representations. Neither the context nor the mouse's choice could be reliably decoded from the recorded neural activity. Our findings suggest that, in spite of modulation of auditory cortical activity by task priors, the auditory cortex does not represent sufficient information about these priors to exploit them optimally. Thus, the combination of rapidly changing sensory information with more slowly varying task information required for decisions in this task might be represented in brain regions other than the auditory cortex.
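The stable-decoder result can be illustrated by training a stimulus decoder in one context block and testing it in another. The sketch below uses synthetic population activity in which the stimulus axis is preserved across contexts; that preservation is an assumption built in to mirror the paper's conclusion, not a reanalysis of the data.

```python
# Sketch: a logistic-regression stimulus decoder trained in one
# expectation block generalizes to the other when the stimulus axis
# is shared across contexts. Synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n, d = 400, 60
stim = rng.integers(0, 2, n)                      # low vs high stimulus
context = np.repeat([0, 1], n // 2)               # two expectation blocks
w = rng.normal(size=d)                            # shared stimulus axis
X = (np.outer(stim - 0.5, w)                      # stimulus coding
     + np.outer(context, rng.normal(size=d))      # context-dependent shift
     + rng.normal(size=(n, d)))                   # trial noise

clf = LogisticRegression(max_iter=1000).fit(X[context == 0], stim[context == 0])
print("within-context accuracy :", clf.score(X[context == 0], stim[context == 0]))
print("across-context accuracy :", clf.score(X[context == 1], stim[context == 1]))
```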
Affiliation(s)
- Akihiro Funamizu
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA.
- Fred Marbach
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
- Anthony M Zador
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
15. Ying R, Hamlette L, Nikoobakht L, Balaji R, Miko N, Caras ML. Organization of orbitofrontal-auditory pathways in the Mongolian gerbil. J Comp Neurol 2023;531:1459-1481. PMID: 37477903; PMCID: PMC10529810; DOI: 10.1002/cne.25525.
Abstract
Sound perception is highly malleable, rapidly adjusting to the acoustic environment and behavioral demands. This flexibility is the result of ongoing changes in auditory cortical activity driven by fluctuations in attention, arousal, or prior expectations. Recent work suggests that the orbitofrontal cortex (OFC) may mediate some of these rapid changes, but the anatomical connections between the OFC and the auditory system are not well characterized. Here, we used virally mediated fluorescent tracers to map the projection from OFC to the auditory midbrain, thalamus, and cortex in a classic animal model for auditory research, the Mongolian gerbil (Meriones unguiculatus). We observed no connectivity between the OFC and the auditory midbrain, and an extremely sparse connection between the dorsolateral OFC and higher-order auditory thalamic regions. In contrast, we observed a robust connection between the ventral and medial subdivisions of the OFC and the auditory cortex, with a clear bias for secondary auditory cortical regions. OFC axon terminals were found in all auditory cortical laminae but were significantly more concentrated in the infragranular layers. Tissue clearing and light-sheet microscopy further revealed that auditory cortical-projecting OFC neurons send extensive axon collaterals throughout the brain, targeting both sensory and non-sensory regions involved in learning, decision-making, and memory. These findings provide a more detailed map of orbitofrontal-auditory connections and shed light on the possible role of the OFC in supporting auditory cognition.
Affiliation(s)
- Rose Ying
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland, 20742
- Department of Biology, University of Maryland, College Park, Maryland, 20742
- Center for Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland, 20742
- Lashaka Hamlette
- Department of Biology, University of Maryland, College Park, Maryland, 20742
- Laudan Nikoobakht
- Department of Biology, University of Maryland, College Park, Maryland, 20742
- Rakshita Balaji
- Department of Biology, University of Maryland, College Park, Maryland, 20742
- Nicole Miko
- Department of Biology, University of Maryland, College Park, Maryland, 20742
- Melissa L. Caras
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland, 20742
- Department of Biology, University of Maryland, College Park, Maryland, 20742
- Center for Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland, 20742
16. Paraouty N, Yao JD, Varnet L, Chou CN, Chung S, Sanes DH. Sensory cortex plasticity supports auditory social learning. Nat Commun 2023;14:5828. PMID: 37730696; PMCID: PMC10511464; DOI: 10.1038/s41467-023-41641-8.
Abstract
Social learning (SL) through experience with conspecifics can facilitate the acquisition of many behaviors. Thus, when Mongolian gerbils are exposed to a demonstrator performing an auditory discrimination task, their subsequent task acquisition is facilitated, even in the absence of visual cues. Here, we show that transient inactivation of auditory cortex (AC) during exposure caused a significant delay in task acquisition during the subsequent practice phase, suggesting that AC activity is necessary for SL. Moreover, social exposure induced an improvement in AC neuron sensitivity to auditory task cues. The magnitude of neural change during exposure correlated with task acquisition during practice. In contrast, exposure to only auditory task cues led to poorer neurometric and behavioral outcomes. Finally, social information during exposure was encoded in the AC of observer animals. Together, our results suggest that auditory SL is supported by AC neuron plasticity occurring during social exposure and prior to behavioral performance.
Affiliation(s)
- Nihaad Paraouty
- Center for Neural Science, New York University, New York, NY, 10003, USA
- Justin D Yao
- Department of Otolaryngology, Rutgers University, New Brunswick, NJ, 08901, USA
- Léo Varnet
- Laboratoire des Systèmes Perceptifs, UMR 8248, Ecole Normale Supérieure, PSL University, Paris, 75005, France
- Chi-Ning Chou
- Center for Computational Neuroscience, Flatiron Institute, Simons Foundation, New York, NY, USA
- School of Engineering & Applied Sciences, Harvard University, Cambridge, MA, 02138, USA
- SueYeon Chung
- Center for Neural Science, New York University, New York, NY, 10003, USA
- Center for Computational Neuroscience, Flatiron Institute, Simons Foundation, New York, NY, USA
- Dan H Sanes
- Center for Neural Science, New York University, New York, NY, 10003, USA
- Department of Psychology, New York University, New York, NY, 10003, USA
- Department of Biology, New York University, New York, NY, 10003, USA
- Neuroscience Institute, NYU Langone Medical Center, New York, NY, 10003, USA
17. Funamizu A, Marbach F, Zador AM. Stable sound decoding despite modulated sound representation in the auditory cortex. bioRxiv 2023:2023.01.31.526457 [preprint]. PMID: 37745428; PMCID: PMC10515783; DOI: 10.1101/2023.01.31.526457.
Abstract
The activity of neurons in the auditory cortex is driven by both sounds and non-sensory context. To investigate the neuronal correlates of non-sensory context, we trained head-fixed mice to perform a two-alternative-choice auditory task in which either reward or stimulus expectation (prior) was manipulated in blocks. Using two-photon calcium imaging to record populations of single neurons in the auditory cortex, we found that both stimulus and reward expectation modulated the activity of these neurons. A linear decoder trained on this population activity could decode stimuli as well or better than predicted by the animal's performance. Interestingly, the optimal decoder was stable even in the face of variable sensory representations. Neither the context nor the mouse's choice could be reliably decoded from the recorded neural activity. Our findings suggest that, in spite of modulation of auditory cortical activity by task priors, the auditory cortex does not represent sufficient information about these priors to exploit them optimally, and that decisions in this task require rapidly changing sensory information to be combined with more slowly varying task information extracted and represented in brain regions other than the auditory cortex.
Affiliation(s)
- Akihiro Funamizu
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
- Present address: Institute for Quantitative Biosciences, the University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo, 1130032, Japan
- Present address: Department of Life Sciences, Graduate School of Arts and Sciences, the University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo, 1538902, Japan
- Fred Marbach
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
- Present address: The Francis Crick Institute, 1 Midland Rd, NW1 4AT London, UK
- Anthony M Zador
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
18.
Abstract
Neural mechanisms of perceptual decision making have been extensively studied in experimental settings that mimic stable environments with repeating stimuli, fixed rules, and payoffs. In contrast, we live in an ever-changing environment and have varying goals and behavioral demands. To accommodate variability, our brain flexibly adjusts decision-making processes depending on context. Here, we review a growing body of research that explores the neural mechanisms underlying this flexibility. We highlight diverse forms of context dependency in decision making implemented through a variety of neural computations. Context-dependent neural activity is observed in a distributed network of brain structures, including posterior parietal, sensory, motor, and subcortical regions, as well as the prefrontal areas classically implicated in cognitive control. We propose that investigating the distributed network underlying flexible decisions is key to advancing our understanding and discuss a path forward for experimental and theoretical investigations.
Affiliation(s)
- Gouki Okazawa
- Center for Neural Science, New York University, New York, NY, USA
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Roozbeh Kiani
- Center for Neural Science, New York University, New York, NY, USA
- Department of Psychology, New York University, New York, NY, USA
19. Hu J, Konovalov A, Ruff CC. A unified neural account of contextual and individual differences in altruism. eLife 2023;12:e80667. PMID: 36752704; PMCID: PMC9908080; DOI: 10.7554/eLife.80667.
Abstract
Altruism is critical for cooperation and productivity in human societies but is known to vary strongly across contexts and individuals. The origin of these differences is largely unknown, but may in principle reflect variations in different neurocognitive processes that temporally unfold during altruistic decision making (ranging from initial perceptual processing via value computations to final integrative choice mechanisms). Here, we elucidate the neural origins of individual and contextual differences in altruism by examining altruistic choices in different inequality contexts with computational modeling and electroencephalography (EEG). Our results show that across all contexts and individuals, wealth distribution choices recruit a similar late decision process evident in model-predicted evidence accumulation signals over parietal regions. Contextual and individual differences in behavior related instead to initial processing of stimulus-locked inequality-related value information in centroparietal and centrofrontal sensors, as well as to gamma-band synchronization of these value-related signals with parietal response-locked evidence-accumulation signals. Our findings suggest separable biological bases for individual and contextual differences in altruism that relate to differences in the initial processing of choice-relevant information.
Affiliation(s)
- Jie Hu
- Zurich Center for Neuroeconomics, Department of Economics, University of Zurich, Zurich, Switzerland
- Arkady Konovalov
- Zurich Center for Neuroeconomics, Department of Economics, University of Zurich, Zurich, Switzerland
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham, United Kingdom
- Christian C Ruff
- Zurich Center for Neuroeconomics, Department of Economics, University of Zurich, Zurich, Switzerland
- University Research Priority Program 'Adaptive Brain Circuits in Development and Learning' (URPP AdaBD), University of Zurich, Zurich, Switzerland
20. Alho J, Khan S, Mamashli F, Perrachione TK, Losh A, McGuiggan NM, Graham S, Nayal Z, Joseph RM, Hämäläinen MS, Bharadwaj H, Kenet T. Atypical cortical processing of bottom-up speech binding cues in children with autism spectrum disorders. Neuroimage Clin 2023;37:103336. PMID: 36724734; PMCID: PMC9898310; DOI: 10.1016/j.nicl.2023.103336.
Abstract
Individuals with autism spectrum disorder (ASD) commonly display speech processing abnormalities. Binding of acoustic features of speech distributed across different frequencies into coherent speech objects is fundamental in speech perception. Here, we tested the hypothesis that the cortical processing of bottom-up acoustic cues for speech binding may be anomalous in ASD. We recorded magnetoencephalography while ASD children (ages 7-17) and typically developing peers heard sentences of sine-wave speech (SWS) and modulated SWS (MSS) where binding cues were restored through increased temporal coherence of the acoustic components and the introduction of harmonicity. The ASD group showed increased long-range feedforward functional connectivity from left auditory to parietal cortex with concurrent decreased local functional connectivity within the parietal region during MSS relative to SWS. As the parietal region has been implicated in auditory object binding, our findings support our hypothesis of atypical bottom-up speech binding in ASD. Furthermore, the long-range functional connectivity correlated with behaviorally measured auditory processing abnormalities, confirming the relevance of these atypical cortical signatures to the ASD phenotype. Lastly, the group difference in the local functional connectivity was driven by the youngest participants, suggesting that impaired speech binding in ASD might be ameliorated upon entering adolescence.
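Long-range functional connectivity of the kind reported here is often quantified with spectral measures between region time courses. A minimal sketch using magnitude-squared coherence on synthetic signals follows; the study's MEG pipeline involves source localization and its own connectivity estimators, so everything below is an assumption for illustration.

```python
# Sketch: coherence between two synthetic "region" time courses
# (e.g., auditory and parietal sources) as a toy connectivity measure.
import numpy as np
from scipy.signal import coherence

fs = 600
rng = np.random.default_rng(8)
shared = rng.normal(size=fs * 20)                 # common driving signal
auditory = shared + 0.5 * rng.normal(size=fs * 20)
parietal = np.roll(shared, 30) + 0.5 * rng.normal(size=fs * 20)  # lagged copy

f, cxy = coherence(auditory, parietal, fs=fs, nperseg=1024)
band = (f >= 8) & (f <= 12)
print("mean 8-12 Hz coherence:", cxy[band].mean().round(3))
```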
Affiliation(s)
- Jussi Alho
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129, USA.
| | - Sheraz Khan
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129, USA; Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129, USA
| | - Fahimeh Mamashli
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129, USA; Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129, USA
| | - Tyler K Perrachione
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Ave, Boston, MA 02215, USA
| | - Ainsley Losh
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129, USA
| | - Nicole M McGuiggan
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129, USA
| | - Steven Graham
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129, USA
| | - Zein Nayal
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129, USA
| | - Robert M Joseph
- Department of Anatomy and Neurobiology, Boston University School of Medicine, 72 East Concord St, Boston, MA 02118, USA
| | - Matti S Hämäläinen
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129, USA
| | - Hari Bharadwaj
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129, USA; Department of Speech, Language, and Hearing Sciences, and Weldon School of Biomedical Engineering, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907, USA
| | - Tal Kenet
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129, USA.
| |
|
21
|
Yao JD, Zemlianova KO, Hocker DL, Savin C, Constantinople CM, Chung S, Sanes DH. Transformation of acoustic information to sensory decision variables in the parietal cortex. Proc Natl Acad Sci U S A 2023; 120:e2212120120. [PMID: 36598952 PMCID: PMC9926273 DOI: 10.1073/pnas.2212120120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Accepted: 11/08/2022] [Indexed: 01/05/2023] Open
Abstract
Understanding how sensory evidence contributes to perceptual choices requires characterizing its transformation into decision variables. Here, we address this issue by evaluating the neural representation of acoustic information in the auditory cortex-recipient parietal cortex while gerbils either performed a two-alternative forced-choice auditory discrimination task or passively listened to identical acoustic stimuli. During task engagement, stimulus identity decoding performance from simultaneously recorded parietal neurons significantly correlated with psychometric sensitivity. In contrast, decoding performance during passive listening was significantly reduced. Principal component and geometric analyses revealed the emergence of low-dimensional encoding of linearly separable manifolds with respect to stimulus identity and decision, but only during task engagement. These findings confirm that the parietal cortex mediates a transformation of acoustic representations into decision-related variables. Finally, using a clustering analysis, we identified three functionally distinct subpopulations of neurons that each encoded task-relevant information during separate temporal segments of a trial. Taken together, our findings demonstrate how parietal cortex neurons integrate and transform encoded auditory information to guide sound-driven perceptual decisions.
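The decoding logic described in this abstract can be illustrated with a short, self-contained Python sketch: a cross-validated linear decoder recovers stimulus identity from population spike counts. The data, tuning model, and parameters below are invented for the example; this is not the study's code or its recordings, and the study additionally related decoding performance to psychometric sensitivity, which is not shown here.

```python
# Sketch of cross-validated population decoding of stimulus identity
# from spike counts; simulated data, not the study's recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_neurons = 200, 40
stimulus = rng.integers(0, 2, n_trials)          # two stimulus classes
tuning = rng.standard_normal(n_neurons)          # hypothetical tuning weights
rates = 5 + 2 * np.outer(stimulus, tuning)       # stimulus-dependent firing
counts = rng.poisson(np.clip(rates, 0.1, None))  # trial-by-neuron spike counts

decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, counts, stimulus, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")
```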
Affiliation(s)
- Justin D. Yao
- Center for Neural Science, New York University, New York, NY 10003
- Department of Otolaryngology, Head and Neck Surgery, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ 08901
- Brain Health Institute, Rutgers University, Piscataway, NJ 08854
| | | | - David L. Hocker
- Center for Neural Science, New York University, New York, NY 10003
| | - Cristina Savin
- Center for Neural Science, New York University, New York, NY 10003
- Neuroscience Institute, New York University Langone School of Medicine, New York, NY 10016
- Center for Data Science, New York University, New York, NY 10011
| | - Christine M. Constantinople
- Center for Neural Science, New York University, New York, NY 10003
- Neuroscience Institute, New York University Langone School of Medicine, New York, NY 10016
| | - SueYeon Chung
- Center for Neural Science, New York University, New York, NY 10003
- Flatiron Institute, Simons Foundation, New York, NY 10010
| | - Dan H. Sanes
- Center for Neural Science, New York University, New York, NY 10003
- Neuroscience Institute, New York University Langone School of Medicine, New York, NY 10016
- Department of Psychology, New York University, New York, NY 10003
- Department of Biology, New York University, New York, NY 10003
| |
|
22
|
Cabrera L, Lorenzini I, Rosen S, Varnet L, Lorenzi C. Temporal integration for amplitude modulation in childhood: Interaction between internal noise and memory. Hear Res 2021; 415:108403. [PMID: 34879987 DOI: 10.1016/j.heares.2021.108403] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/27/2021] [Revised: 11/17/2021] [Accepted: 11/25/2021] [Indexed: 11/25/2022]
Abstract
It is still unclear whether the gradual improvement in amplitude-modulation (AM) sensitivity typically found in children up to 10 years of age reflects an improvement in "processing efficiency" (the central ability to use information extracted by sensory mechanisms). This hypothesis was tested by evaluating temporal integration for AM, a capacity relying on memory and decision factors. This was achieved by measuring the effect of increasing the number of AM cycles (2 vs 8) on AM-detection thresholds for three groups of children aged 5 to 11 years and a group of young adults. AM-detection thresholds were measured using a forced-choice procedure and sinusoidal AM (4- or 32-Hz rate) applied to a 1024-Hz pure-tone carrier. All age groups demonstrated temporal integration for AM at both rates; that is, significant improvements in AM sensitivity with a higher number of AM cycles. However, an effect of age was observed: both 5-6 year olds and adults exhibited more temporal integration than 7-8 and 10-11 year olds at both rates. This difference arose because (i) the 5-6 year olds displayed the worst thresholds with 2 AM cycles but thresholds similar to those of the 7-8 and 10-11 year olds with 8 cycles, and (ii) adults showed the best thresholds with 8 AM cycles but thresholds similar to those of the 7-8 and 10-11 year olds with 2 cycles. Computational modelling indicated that higher levels of internal noise combined with poorer short-term memory capacities in children accounted for these developmental trends. Improvement in processing efficiency may therefore account for the development of AM detection in childhood. This article is part of the Special Issue "Outer hair cell", edited by Joseph Santos-Sacchi and Kumar Navaratnam.
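The modelling conclusion can be made concrete with a toy "multiple looks" calculation: if each AM cycle contributes an independent noisy observation, sensitivity grows with the square root of the number of cycles combined, internal noise scales thresholds overall, and a short-term memory limit caps how many cycles can be combined. The Python sketch below implements only that simple logic; the parameter values are invented, and the study's computational model is more elaborate than this illustration.

```python
# Toy "multiple looks" model of temporal integration for AM detection.
# Internal noise scales thresholds; a memory limit caps usable looks.
# Parameter values are illustrative assumptions, not fitted values.
import numpy as np

def am_threshold(n_cycles, internal_noise, memory_capacity, criterion=1.0):
    """AM depth (arbitrary units) needed to reach a criterion sensitivity."""
    looks = min(n_cycles, memory_capacity)   # memory caps the usable cycles
    return criterion * internal_noise / np.sqrt(looks)

for label, noise, capacity in [("adult-like", 1.0, 8), ("child-like", 2.5, 4)]:
    t2 = am_threshold(2, noise, capacity)
    t8 = am_threshold(8, noise, capacity)
    benefit_db = 20 * np.log10(t2 / t8)      # threshold drop from 2 to 8 cycles
    print(f"{label}: 2 cycles={t2:.2f}, 8 cycles={t8:.2f}, "
          f"integration benefit={benefit_db:.1f} dB")
```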
Affiliation(s)
- Laurianne Cabrera
- Université de Paris, CNRS, Integrative Neuroscience and Cognition Center, F-75006 Paris, France; Speech, Hearing and Phonetic Sciences, UCL, United Kingdom.
| | - Irene Lorenzini
- Université de Paris, CNRS, Integrative Neuroscience and Cognition Center, F-75006 Paris, France
| | - Stuart Rosen
- Speech, Hearing and Phonetic Sciences, UCL, United Kingdom
| | - Léo Varnet
- Laboratoire des Systèmes Perceptifs (UMR 8248), CNRS, Ecole normale supérieure, Université Paris Sciences & Lettres (PSL), Paris, France
| | - Christian Lorenzi
- Laboratoire des Systèmes Perceptifs (UMR 8248), CNRS, Ecole normale supérieure, Université Paris Sciences & Lettres (PSL), Paris, France
| |
|
23
|
Yao JD, Sanes DH. Temporal Encoding is Required for Categorization, But Not Discrimination. Cereb Cortex 2021; 31:2886-2897. [PMID: 33429423 DOI: 10.1093/cercor/bhaa396] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Revised: 10/26/2020] [Accepted: 11/03/2020] [Indexed: 11/14/2022] Open
Abstract
Core auditory cortex (AC) neurons encode slow fluctuations of acoustic stimuli with temporally patterned activity. However, whether temporal encoding is necessary to explain auditory perceptual skills remains uncertain. Here, we recorded from gerbil AC neurons while the animals discriminated between a 4-Hz amplitude modulation (AM) broadband noise and AM rates >4 Hz. We found that a proportion of neurons possessed neural thresholds, based on spike pattern or spike count, that were better than the recorded session's behavioral threshold, suggesting that spike count could provide sufficient information for this perceptual task. A population decoder that relied on temporal information outperformed a decoder that relied on spike count alone, but the spike-count decoder remained sufficient to explain average behavioral performance. This leaves open the possibility that more demanding perceptual judgments require temporal information. Thus, we asked whether accurate classification of different AM rates between 4 and 12 Hz required the information contained in AC temporal discharge patterns. Indeed, accurate classification of these AM stimuli depended on the inclusion of temporal information rather than spike count alone. Overall, our results compare two different representations of time-varying acoustic features that can be accessed by the downstream circuits required for perceptual judgments.
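The contrast between a spike-count code and a temporal-pattern code can be demonstrated with a small simulation: AM-following spike trains whose expected total counts are matched across AM rates leave a count-based decoder near chance, while a decoder given the binned temporal pattern classifies well. The spike trains, bin sizes, and classifier below are illustrative assumptions, not the recorded data or the authors' decoders.

```python
# Sketch: classify AM rate from simulated spike trains using either the
# binned temporal pattern or the total spike count. Expected counts are
# matched across rates, so only the temporal code is informative here.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
rates_hz = [4, 6, 8, 12]                      # AM rates to classify
n_trials, n_bins = 50, 100                    # 10-ms bins over a 1-s trial
t = np.linspace(0, 1.0, n_bins, endpoint=False)

X, y = [], []
for label, f in enumerate(rates_hz):
    lam = 0.5 * (1 + np.sin(2 * np.pi * f * t))   # AM-following Poisson rate
    for _ in range(n_trials):
        X.append(rng.poisson(lam))                 # one trial's binned spikes
        y.append(label)
X, y = np.array(X), np.array(y)

knn = KNeighborsClassifier(n_neighbors=5)
acc_pattern = cross_val_score(knn, X, y, cv=5).mean()
acc_count = cross_val_score(knn, X.sum(axis=1, keepdims=True), y, cv=5).mean()
print(f"temporal pattern: {acc_pattern:.2f}   spike count alone: {acc_count:.2f}")
```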
Affiliation(s)
- Justin D Yao
- Center for Neural Science, New York University, New York, NY 10003, USA
| | - Dan H Sanes
- Center for Neural Science, New York University, New York, NY 10003, USA; Department of Psychology, New York University, New York, NY 10003, USA; Department of Biology, New York University, New York, NY 10003, USA; Neuroscience Institute, NYU Langone Medical Center, New York University, New York, NY 10016, USA
| |
|