1
Kang H, Kanold PO. Sparse representation of neurons for encoding complex sounds in the auditory cortex. Prog Neurobiol 2024; 241:102661. PMID: 39303758; PMCID: PMC11875025; DOI: 10.1016/j.pneurobio.2024.102661. Received 20 Feb 2024; revised 20 Aug 2024; accepted 5 Sep 2024.
Abstract
Listening in complex sound environments requires rapid segregation of different sound sources, e.g., separating multiple speakers in a conversation from each other and from other environmental sounds. Efficient processing requires fast encoding of inputs so that the system can adapt to target sounds and identify relevant information from past experience. This adaptation process represents an early phase of implicit learning of sound statistics that forms an auditory memory. The auditory cortex (ACtx) plays a crucial role in this implicit learning process, but the underlying circuits are unknown. In awake mice, we recorded neuronal responses in different ACtx subfields using in vivo 2-photon imaging of excitatory and inhibitory (parvalbumin; PV) neurons. We used a paradigm adapted from human studies that induces rapid implicit learning of passively presented complex sounds, and imaged A1 Layer 4 (L4), A1 L2/3, and A2 L2/3. In this paradigm, a frozen, spectro-temporally complex 'Target' sound randomly re-occurred within a stream of other random complex sounds. All ACtx subregions contained distinct groups of cells specifically responsive to complex acoustic sequences, indicating that even the thalamocortical input layer (A1 L4) responds to complex sounds. Subgroups of excitatory and inhibitory cells in all subfields showed decreased responses to re-occurring Target sounds, indicating that ACtx is highly involved in the early implicit learning phase. At the population level, activity became more decorrelated for Target sounds, independent of the duration of the frozen token, the subregion, and the cell type. These findings indicate that ACtx and its input layers contribute to the early phase of auditory memory for complex sounds, suggesting a parallel strategy across ACtx areas and between excitatory and inhibitory neurons.
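The population-level "decorrelation" result can be illustrated with a toy computation. This is a minimal sketch, not the authors' analysis pipeline; the response matrices, mixing weights, and sizes below are all invented for illustration. The idea: given a trials-by-neurons response matrix, compute the mean pairwise correlation between neurons, where a lower value indicates a more decorrelated population code.

```python
import numpy as np

def mean_pairwise_correlation(responses: np.ndarray) -> float:
    """Mean off-diagonal Pearson correlation between neurons.

    responses: (n_trials, n_neurons) array of response amplitudes.
    """
    corr = np.corrcoef(responses, rowvar=False)  # (n_neurons, n_neurons)
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
    return float(off_diag.mean())

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50

# Toy "non-target" population: a strong shared gain signal correlates neurons.
shared = rng.normal(size=(n_trials, 1))
nontarget = 0.8 * shared + 0.6 * rng.normal(size=(n_trials, n_neurons))

# Toy "target" population: weaker shared component -> more decorrelated.
target = 0.2 * shared + 0.6 * rng.normal(size=(n_trials, n_neurons))

print(mean_pairwise_correlation(nontarget))  # higher mean correlation
print(mean_pairwise_correlation(target))     # lower (more decorrelated)
```

Here the degree of decorrelation is controlled directly by the weight on the shared signal; in the study it is an emergent property of the cortical population.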
Affiliation(s)
- HiJee Kang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- Patrick O Kanold
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA; Kavli NDI, Johns Hopkins University School of Medicine, Baltimore, MD, USA.
2
Goto Y, Kitajo K. Selective consistency of recurrent neural networks induced by plasticity as a mechanism of unsupervised perceptual learning. PLoS Comput Biol 2024; 20:e1012378. PMID: 39226313; PMCID: PMC11398647; DOI: 10.1371/journal.pcbi.1012378. Received 24 Jan 2024; revised 13 Sep 2024; accepted 30 Jul 2024.
Abstract
Understanding how the brain achieves relatively consistent information processing despite the inherent inconsistency of its activity is one of the major challenges in neuroscience. Recently, it has been reported that the consistency of neural responses to repeatedly presented stimuli is enhanced implicitly, in an unsupervised way, and that this results in improved perceptual consistency. Here, we propose the term "selective consistency" to describe this input-dependent consistency and hypothesize that it is acquired in a self-organizing manner through plasticity within the neural system. To test this, we investigated whether a reservoir-based plastic model could acquire selective consistency to repeated stimuli. We used white-noise sequences that were randomly generated on each trial, together with reference white-noise sequences that were presented multiple times. The results showed that the plastic network acquired selective consistency rapidly, with as few as five exposures to a stimulus, even for white noise. The acquisition of selective consistency could occur independently of performance optimization: the network's time-series prediction accuracy for the reference stimuli did not improve with repeated exposure and optimization. Furthermore, the network achieved selective consistency only when operating in the region between order and chaos. These findings suggest that the neural system can acquire selective consistency in a self-organizing manner and that this may serve as a mechanism for certain types of learning.
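The "consistency" measure at the heart of this paper can be sketched with a minimal echo-state reservoir. This is not the authors' plastic model; the network size, scalings, and thresholds below are all invented. The sketch drives a fixed tanh reservoir with the same white-noise input from two random initial states and correlates the two state trajectories: in the ordered regime (spectral radius below 1) the trajectories converge, so the response is dictated by the input (high consistency); in a strongly chaotic regime they remain sensitive to the initial state.

```python
import numpy as np

def run_reservoir(W, W_in, u, x0):
    """Drive a tanh reservoir with input sequence u from initial state x0."""
    x = x0.copy()
    states = []
    for t in range(len(u)):
        x = np.tanh(W @ x + W_in * u[t])
        states.append(x.copy())
    return np.array(states)

def consistency(W, W_in, u, rng, washout=100):
    """Correlation of state trajectories started from two random states.

    High values mean the response is dictated by the input rather than
    the initial condition -- the 'consistency' notion in the abstract.
    """
    xa = run_reservoir(W, W_in, u, rng.standard_normal(W.shape[0]))
    xb = run_reservoir(W, W_in, u, rng.standard_normal(W.shape[0]))
    a, b = xa[washout:].ravel(), xb[washout:].ravel()
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(1)
n = 100
u = rng.standard_normal(500)                 # white-noise input, as in the study
W_in = rng.standard_normal(n)
W_raw = rng.standard_normal((n, n)) / np.sqrt(n)
rho = np.max(np.abs(np.linalg.eigvals(W_raw)))

# Ordered regime (spectral radius 0.9): trajectories converge -> consistent.
c_ordered = consistency(0.9 / rho * W_raw, W_in, u, rng)
# Strongly chaotic regime (spectral radius 3): stays initial-state dependent.
c_chaotic = consistency(3.0 / rho * W_raw, W_in, u, rng)
print(c_ordered, c_chaotic)
```

The paper's contribution is that plasticity makes this consistency *selective*, i.e., high only for the repeated reference stimuli; the static sketch above only illustrates the measure and its dependence on the order/chaos regime.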
Affiliation(s)
- Yujin Goto
- Division of Neural Dynamics, Department of System Neuroscience, National Institute for Physiological Sciences, National Institutes of Natural Sciences, Okazaki, Aichi, Japan
- Department of Physiological Sciences, School of Life Science, The Graduate University for Advanced Studies (SOKENDAI), Okazaki, Aichi, Japan
- Keiichi Kitajo
- Division of Neural Dynamics, Department of System Neuroscience, National Institute for Physiological Sciences, National Institutes of Natural Sciences, Okazaki, Aichi, Japan
- Department of Physiological Sciences, School of Life Science, The Graduate University for Advanced Studies (SOKENDAI), Okazaki, Aichi, Japan
3
Bayazitov IT, Teubner BJW, Feng F, Wu Z, Li Y, Blundon JA, Zakharenko SS. Sound-evoked adenosine release in cooperation with neuromodulatory circuits permits auditory cortical plasticity and perceptual learning. Cell Rep 2024; 43:113758. PMID: 38358887; PMCID: PMC10939737; DOI: 10.1016/j.celrep.2024.113758. Received 29 Jun 2023; revised 21 Nov 2023; accepted 23 Jan 2024.
Abstract
Meaningful auditory memories are formed in adults when acoustic information is delivered to the auditory cortex during heightened states of attention, vigilance, or alertness, as mediated by neuromodulatory circuits. Here, we identify that, in awake mice, acoustic stimulation triggers auditory thalamocortical projections to release adenosine, which prevents cortical plasticity (i.e., selective expansion of neural representation of behaviorally relevant acoustic stimuli) and perceptual learning (i.e., experience-dependent improvement in frequency discrimination ability). This sound-evoked adenosine release (SEAR) becomes reduced within seconds when acoustic stimuli are tightly paired with the activation of neuromodulatory (cholinergic or dopaminergic) circuits or periods of attentive wakefulness. If thalamic adenosine production is enhanced, then SEAR elevates further, the neuromodulatory circuits are unable to sufficiently reduce SEAR, and associative cortical plasticity and perceptual learning are blocked. This suggests that transient low-adenosine periods triggered by neuromodulatory circuits permit associative cortical plasticity and auditory perceptual learning in adults to occur.
Affiliation(s)
- Ildar T Bayazitov
- Division of Neural Circuits and Behavior, Department of Developmental Neurobiology, St. Jude Children's Research Hospital, Memphis, TN 38105, USA
- Brett J W Teubner
- Division of Neural Circuits and Behavior, Department of Developmental Neurobiology, St. Jude Children's Research Hospital, Memphis, TN 38105, USA
- Feng Feng
- Division of Neural Circuits and Behavior, Department of Developmental Neurobiology, St. Jude Children's Research Hospital, Memphis, TN 38105, USA
- Zhaofa Wu
- School of Life Sciences, Peking University, Beijing 100871, China
- Yulong Li
- School of Life Sciences, Peking University, Beijing 100871, China
- Jay A Blundon
- Division of Neural Circuits and Behavior, Department of Developmental Neurobiology, St. Jude Children's Research Hospital, Memphis, TN 38105, USA
- Stanislav S Zakharenko
- Division of Neural Circuits and Behavior, Department of Developmental Neurobiology, St. Jude Children's Research Hospital, Memphis, TN 38105, USA.
4
Kang H, Auksztulewicz R, Chan CH, Cappotto D, Rajendran VG, Schnupp JWH. Cross-modal implicit learning of random time patterns. Hear Res 2023; 438:108857. PMID: 37639922; DOI: 10.1016/j.heares.2023.108857. Received 20 Oct 2022; revised 12 Jul 2023; accepted 21 Jul 2023.
Abstract
Perception is sensitive to statistical regularities in the environment, including temporal characteristics of sensory inputs. Interestingly, implicit learning of temporal patterns in one modality can also improve their processing in another modality. However, it is unclear how cross-modal learning transfer affects neural responses to sensory stimuli. Here, we recorded neural activity of human volunteers using electroencephalography (EEG), while participants were exposed to brief sequences of randomly timed auditory or visual pulses. Some trials consisted of a repetition of the temporal pattern within the sequence, and subjects were tasked with detecting these trials. Unknown to the participants, some trials reappeared throughout the experiment across both modalities (Transfer) or only within a modality (Control), enabling implicit learning in one modality and its transfer. Using a novel method of analysis of single-trial EEG responses, we showed that learning temporal structures within and across modalities is reflected in neural learning curves. These putative neural correlates of learning transfer were similar both when temporal information learned in audition was transferred to visual stimuli and vice versa. The modality-specific mechanisms for learning of temporal information and general mechanisms which mediate learning transfer across modalities had distinct physiological signatures: temporal learning within modalities relied on modality-specific brain regions while learning transfer affected beta-band activity in frontal regions.
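The "neural learning curve" idea can be illustrated with a toy fit. This is emphatically not the paper's single-trial EEG method; the per-trial values, the exponential functional form, and every constant below are invented for illustration. The sketch generates noisy per-trial responses that improve with exposure and recovers the learning-rate time constant by scanning the nonlinear parameter and solving the linear ones by least squares.

```python
import numpy as np

rng = np.random.default_rng(3)
trials = np.arange(30)

# Toy per-trial "response" values (invented): an exponential learning
# curve approaching an asymptote, plus measurement noise.
true = 1.0 - 0.8 * np.exp(-trials / 6.0)
y = true + 0.05 * rng.standard_normal(trials.size)

# Fit y ~ a + b * exp(-t / tau): scan tau, solve a and b linearly each time.
best = None
for tau in np.linspace(1.0, 20.0, 200):
    X = np.column_stack([np.ones(trials.size),
                         np.exp(-trials / tau)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse = float(((X @ coef - y) ** 2).sum())
    if best is None or sse < best[0]:
        best = (sse, tau, coef)

sse, tau_hat, (a_hat, b_hat) = best
print(round(tau_hat, 1), round(a_hat, 2), round(b_hat, 2))
```

With the invented parameters above, the fit should recover a time constant near 6 trials, an asymptote near 1.0, and an initial deficit near -0.8; a steeper recovered curve in one condition than another is the kind of contrast a learning-curve analysis quantifies.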
Affiliation(s)
- HiJee Kang
- Department of Neuroscience, City University of Hong Kong, Hong Kong S.A.R; Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Ryszard Auksztulewicz
- Department of Neuroscience, City University of Hong Kong, Hong Kong S.A.R; Center for Cognitive Neuroscience Berlin, Free University Berlin, Berlin, Germany
- Chi Hong Chan
- Department of Neuroscience, City University of Hong Kong, Hong Kong S.A.R
- Drew Cappotto
- Department of Neuroscience, City University of Hong Kong, Hong Kong S.A.R; UCL Ear Institute, University College London, London, United Kingdom
- Vani G Rajendran
- Department of Neuroscience, City University of Hong Kong, Hong Kong S.A.R; Department of Cognitive Neuroscience, Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Mexico
- Jan W H Schnupp
- Department of Neuroscience, City University of Hong Kong, Hong Kong S.A.R.
5
Kang H, Kanold PO. Auditory memory of complex sounds in sparsely distributed, highly correlated neurons in the auditory cortex. bioRxiv 2023:2023.02.02.526903. Preprint. PMID: 36778416; PMCID: PMC9915716; DOI: 10.1101/2023.02.02.526903.
Abstract
Listening in complex sound environments requires rapid segregation of different sound sources, e.g., speakers from each other, speakers from other sounds, or different instruments in an orchestra, and also requires adjusting auditory processing to the prevailing sound conditions. Thus, fast encoding of inputs and identifying and adapting to re-occurring sounds are necessary for efficient and agile sound perception. This adaptation process represents an early phase of implicit learning of sound statistics and thus a form of auditory memory. The auditory cortex (ACtx) is known to play a key role in this encoding process, but the underlying circuits, and whether processing is hierarchical, are not known. To identify the ACtx regions and cells involved, we simultaneously imaged populations of neurons in different ACtx subfields using in vivo 2-photon imaging in awake mice. We used a stimulus paradigm adapted from human studies that triggers rapid and robust implicit learning of passively presented complex sounds, and imaged A1 Layer 4 (L4), A1 L2/3, and A2 L2/3. In this paradigm, a frozen, spectro-temporally complex 'Target' sound randomly re-occurred within a stream of other random complex sounds. We find distinct groups of cells specifically responsive to complex acoustic sequences across all subregions, indicating that even the initial thalamocortical input layer (A1 L4) responds to complex sounds. Cells in all imaged regions showed decreased response amplitudes to re-occurring Target sounds, indicating that a memory signature is present even in the thalamocortical input layers. At the population level, we find increased synchronized activity across cells in response to the Target sound; this synchronized activity was more consistent across cells, regardless of the duration of the frozen token within Target sounds, in A2 than in A1.
These findings suggest that ACtx and its input layers play a role in auditory memory for complex sounds and point to a hierarchical organization of the processes underlying auditory memory.
Affiliation(s)
- HiJee Kang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205
- Patrick O Kanold
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205
6
Pinto D, Prior A, Zion Golumbic E. Assessing the Sensitivity of EEG-Based Frequency-Tagging as a Metric for Statistical Learning. Neurobiol Lang 2022; 3:214-234. PMID: 37215560; PMCID: PMC10158570; DOI: 10.1162/nol_a_00061. Received 11 Jun 2021; accepted 10 Nov 2021.
Abstract
Statistical learning (SL) is hypothesized to play an important role in language development. However, the measures typically used to assess SL, particularly at the level of individual participants, are largely indirect and have low sensitivity. Recently, a neural metric based on frequency-tagging has been proposed as an alternative measure for studying SL. We tested the sensitivity of frequency-tagging measures for studying SL in individual participants in an artificial language paradigm, using non-invasive electroencephalography (EEG) recordings of neural activity in humans. Importantly, we used carefully constructed controls to address potential acoustic confounds of the frequency-tagging approach, and compared the sensitivity of EEG-based metrics to both explicit and implicit behavioral tests of SL. Group-level results confirm that frequency-tagging can provide a robust indication of SL for an artificial language, above and beyond potential acoustic confounds. However, the metric had very low sensitivity at the level of individual participants, with significant effects found in only 30% of participants. Comparing the neural metric to previously established behavioral measures of SL showed a significant yet weak correspondence with performance on an implicit task, which was above chance in 70% of participants, but no correspondence with the more common explicit 2-alternative forced-choice task, where performance did not exceed chance level. Given the proposed ubiquitous nature of SL, our results highlight some of the operational and methodological challenges of obtaining robust metrics for assessing SL, as well as the potential confounds that should be taken into account when using the frequency-tagging approach in EEG studies.
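The frequency-tagging logic can be sketched with a synthetic signal. This is a toy illustration, not the paper's pipeline: the sampling rate, presentation rates, amplitudes, and noise level are all invented. In a typical design, syllables are presented at a fixed rate (here 4 Hz, assumed) and tri-syllabic words therefore recur at one third of that rate; if word boundaries are learned, a spectral peak emerges at the word rate, quantified as power at the tagged frequency relative to neighboring bins.

```python
import numpy as np

fs = 100.0                       # sampling rate in Hz (invented)
dur = 60.0                       # seconds of toy "EEG"
t = np.arange(0, dur, 1 / fs)
syll_rate = 4.0                  # assumed syllable presentation rate (Hz)
word_rate = syll_rate / 3        # tri-syllabic words -> 1.33 Hz

rng = np.random.default_rng(2)
# Toy response: strong syllable-rate following plus a weaker word-rate
# component that would only emerge once word boundaries are learned.
eeg = (1.0 * np.sin(2 * np.pi * syll_rate * t)
       + 0.3 * np.sin(2 * np.pi * word_rate * t)
       + 0.5 * rng.standard_normal(t.size))

spec = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def snr_at(f, spec, freqs, k=5):
    """Power at the tagged frequency over the mean of k neighbours per side."""
    i = int(np.argmin(np.abs(freqs - f)))
    neighbours = np.r_[spec[i - k:i], spec[i + 1:i + k + 1]]
    return float(spec[i] / neighbours.mean())

print(snr_at(syll_rate, spec, freqs))   # large peak at the syllable rate
print(snr_at(word_rate, spec, freqs))   # smaller but clear word-rate peak
```

The 60 s duration is chosen so that both tagged rates fall exactly on FFT bins (frequency resolution 1/60 Hz); in real data the paper's point is precisely that this word-rate peak is often too weak to detect reliably in individual participants.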
Affiliation(s)
- Danna Pinto
- The Leslie and Susan Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Anat Prior
- Department of Learning Disabilities, University of Haifa, Haifa, Israel
- Elana Zion Golumbic
- The Leslie and Susan Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel