1
Robert P, Zatorre R, Gupta A, Sein J, Anton JL, Belin P, Thoret E, Morillon B. Auditory hemispheric asymmetry for actions and objects. Cereb Cortex 2024;34:bhae292. PMID: 39051660. DOI: 10.1093/cercor/bhae292.
Abstract
What is the function of auditory hemispheric asymmetry? We propose that the identification of sound sources relies on the asymmetric processing of two complementary and perceptually relevant acoustic invariants: actions and objects. In a large dataset of environmental sounds, we observed that temporal and spectral modulations display only weak covariation. We then synthesized auditory stimuli by simulating various actions (frictions) occurring on different objects (solid surfaces). Behaviorally, discrimination of actions relies on temporal modulations, while discrimination of objects relies on spectral modulations. Functional magnetic resonance imaging data showed that actions and objects are decoded in the left and right hemispheres, respectively, in bilateral superior temporal and left inferior frontal regions. This asymmetry reflects a generic differential processing, through differential neural sensitivity to temporal and spectral modulations present in environmental sounds, that supports the efficient categorization of actions and objects. These results support an ecologically valid framework of the functional role of auditory brain asymmetry.
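The temporal and spectral modulations at the heart of this study can be made concrete with a modulation power spectrum: the 2D Fourier transform of a log-magnitude spectrogram, whose axes are temporal modulation (Hz) and spectral modulation. The Python sketch below is a generic illustration under our own assumptions (linear frequency axis, arbitrary window settings), not the authors' analysis pipeline.

```python
# Minimal modulation-spectrum sketch: spectrogram, then 2D FFT.
# All parameter values are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram

def modulation_spectrum(sound, fs, nperseg=512, noverlap=384):
    freqs, times, sxx = spectrogram(sound, fs=fs, nperseg=nperseg,
                                    noverlap=noverlap)
    log_sxx = np.log(sxx + 1e-10)                 # log-magnitude T-F image
    mps = np.abs(np.fft.fftshift(np.fft.fft2(log_sxx))) ** 2
    dt = times[1] - times[0]                      # frame step (s)
    df = freqs[1] - freqs[0]                      # bin width (Hz)
    temporal_mod = np.fft.fftshift(np.fft.fftfreq(log_sxx.shape[1], d=dt))
    spectral_mod = np.fft.fftshift(np.fft.fftfreq(log_sxx.shape[0], d=df))
    return spectral_mod, temporal_mod, mps

# Example: 4 s of white noise, whose modulation spectrum is flat on average.
rng = np.random.default_rng(0)
smod, tmod, mps = modulation_spectrum(rng.standard_normal(4 * 16000), fs=16000)
```

In this representation, the weak covariation reported in the abstract corresponds to the energy of environmental sounds not being confined to a single ridge trading temporal against spectral modulation.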
Affiliation(s)
- Paul Robert
- Institut de Neurosciences des Systèmes (INS), Inserm/UMR1106, Aix Marseille University, 27 Bd Jean Moulin, Marseille 13005, France
- Robert Zatorre
- Montreal Neurological Institute (MNI), Cognitive Neuroscience Unit, McGill University, 3801 Rue University, Montréal, QC H3A 2B4, Canada
- Centre for Research in Brain, Language, and Music (CRBLM), McGill University, Faculty of Medicine 3640 de la Montagne, Montreal QC H3G 2A8, Canada
- Akanksha Gupta
- Institut de Neurosciences des Systèmes (INS), Inserm/UMR1106, Aix Marseille University, 27 Bd Jean Moulin, Marseille 13005, France
- Julien Sein
- Institut de Neurosciences de la Timone (INT), CNRS/UMR7289, Aix Marseille University, 27 Bd Jean Moulin, Marseille 13005, France
- Jean-Luc Anton
- Institut de Neurosciences de la Timone (INT), CNRS/UMR7289, Aix Marseille University, 27 Bd Jean Moulin, Marseille 13005, France
- Pascal Belin
- Institut de Neurosciences de la Timone (INT), CNRS/UMR7289, Aix Marseille University, 27 Bd Jean Moulin, Marseille 13005, France
- Etienne Thoret
- Institut de Neurosciences de la Timone (INT), CNRS/UMR7289, Aix Marseille University, 27 Bd Jean Moulin, Marseille 13005, France
- PRISM Laboratory, CNRS/UMR7061, Aix Marseille University, 31 Chemin Joseph Aiguier, Marseille, 13402 Cedex 20, France
- Laboratoire d'Informatique et Systèmes (LIS), CNRS/UMR7020, Aix Marseille University, 52 Av Escadrille Normandie Niemen, Marseille, 13397 Cedex 20, France
- Institute of Language, Communication, and the Brain (ILCB), Aix Marseille University, 5 avenue Pasteur, Aix-en-Provence, 13604 Cedex 1, France
- Benjamin Morillon
- Institut de Neurosciences des Systèmes (INS), Inserm/UMR1106, Aix Marseille University, 27 Bd Jean Moulin, Marseille 13005, France
2
van der Willigen RF, Versnel H, van Opstal AJ. Spectral-temporal processing of naturalistic sounds in monkeys and humans. J Neurophysiol 2024;131:38-63. PMID: 37965933. PMCID: PMC11305640. DOI: 10.1152/jn.00129.2023.
Abstract
Human speech and vocalizations in animals are rich in joint spectrotemporal (S-T) modulations, wherein acoustic changes in both frequency and time are functionally related. In principle, the primate auditory system could process these complex dynamic sounds based on either an inseparable representation of S-T features or, alternatively, a separable representation. The separability hypothesis implies independent processing of spectral and temporal modulations. We collected comparative data on the S-T hearing sensitivity of humans and macaque monkeys to a wide range of broadband dynamic spectrotemporal ripple stimuli, employing a yes-no signal-detection task. Ripples were systematically varied as a function of density (spectral modulation frequency), velocity (temporal modulation frequency), or modulation depth, to cover a listener's full S-T modulation sensitivity, derived from a total of 87 psychometric ripple detection curves. Audiograms were measured to control for normal hearing. We determined hearing thresholds, reaction time distributions, and S-T modulation transfer functions (MTFs), both at the ripple detection thresholds and at suprathreshold modulation depths. Our psychophysically derived MTFs are consistent with the hypothesis that both monkeys and humans employ analogous perceptual strategies: S-T acoustic information is primarily processed separably. Singular value decomposition (SVD), however, revealed a small but consistent inseparable spectral-temporal interaction. Finally, SVD analysis of the known visual spatiotemporal contrast sensitivity function (CSF) highlights that human vision is space-time inseparable to a much larger extent than is the case for S-T sensitivity in hearing. Thus, the specificity with which the primate brain encodes natural sounds appears to be less strict than is required to adequately deal with natural images.

NEW & NOTEWORTHY: We provide comparative data on primate audition of naturalistic sounds comprising hearing thresholds, reaction time distributions, and spectral-temporal modulation transfer functions. Our psychophysical experiments demonstrate that auditory information is primarily processed in a spectral-temporal-independent manner by both monkeys and humans. Singular value decomposition of the known visual spatiotemporal contrast sensitivity, in comparison to our auditory spectral-temporal sensitivity, revealed a striking contrast in how the brain encodes natural sounds as opposed to natural images, as vision appears to be space-time inseparable.
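The SVD test for separability used above has a compact form: sample the modulation transfer function on a density-by-velocity grid and measure how much of its energy a rank-1 (separable) approximation captures. Below is a minimal sketch with synthetic surfaces; the grids and example MTFs are invented for illustration, not the measured data.

```python
# Separability index: fraction of MTF energy in the first singular component.
import numpy as np

def separability_index(mtf):
    s = np.linalg.svd(mtf, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)      # 1.0 means fully separable

density = np.linspace(0, 4, 20)[:, None]       # spectral modulation (cyc/oct)
velocity = np.linspace(0, 64, 30)[None, :]     # temporal modulation (Hz)
separable = np.exp(-density) * np.exp(-velocity / 32.0)   # rank-1 by design
mixed = separable + 0.1 * np.exp(-(density - velocity / 16.0) ** 2)

print(separability_index(separable))   # -> 1.0
print(separability_index(mixed))       # -> below 1.0
```

A small but consistent departure of this index from 1, as the authors report, is exactly what a weak inseparable S-T interaction looks like in this analysis.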
Affiliation(s)
- Robert F van der Willigen
- Section Neurophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- School of Communication, Media and Information Technology, Rotterdam University of Applied Sciences, Rotterdam, The Netherlands
- Research Center Creating 010, Rotterdam University of Applied Sciences, Rotterdam, The Netherlands
- Huib Versnel
- Section Neurophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Department of Otorhinolaryngology and Head & Neck Surgery, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- A John van Opstal
- Section Neurophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
3
Papadaki E, Koustakas T, Werner A, Lindenberger U, Kühn S, Wenger E. Resting-state functional connectivity in an auditory network differs between aspiring professional and amateur musicians and correlates with performance. Brain Struct Funct 2023;228:2147-2163. PMID: 37792073. PMCID: PMC10587189. DOI: 10.1007/s00429-023-02711-1.
Abstract
Auditory experience-dependent plasticity is often studied in the domain of musical expertise. Available evidence suggests that years of musical practice are associated with structural and functional changes in auditory cortex and related brain regions. Resting-state functional magnetic resonance imaging (fMRI) can be used to investigate neural correlates of musical training and expertise beyond specific task influences. Here, we compared two groups of musicians with varying expertise: 24 aspiring professional musicians preparing for their entrance exam at Universities of Arts versus 17 amateur musicians without any such aspirations but who also performed music on a regular basis. We used an interval recognition task to define task-relevant brain regions and computed functional connectivity and graph-theoretical measures in this network on separately acquired resting-state data. Aspiring professionals performed significantly better on all behavioral indicators including interval recognition and also showed significantly greater network strength and global efficiency than amateur musicians. Critically, both average network strength and global efficiency were correlated with interval recognition task performance assessed in the scanner, and with an additional measure of interval identification ability. These findings demonstrate that task-informed resting-state fMRI can capture connectivity differences that correspond to expertise-related differences in behavior.
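The two graph measures reported here, network strength and global efficiency, can be computed from ROI time series in a few lines. The sketch below is a generic recipe under common conventions (absolute correlation weights, edge distance as reciprocal weight), not the authors' pipeline; the data are random placeholders.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(1)
ts = rng.standard_normal((200, 12))        # 200 volumes x 12 task-defined ROIs
w = np.abs(np.corrcoef(ts.T))              # weighted connectivity (absolute r)
np.fill_diagonal(w, 0.0)

network_strength = w.sum(axis=1).mean()    # mean node strength

# Global efficiency: mean inverse shortest-path length, with the "distance"
# of an edge taken as the reciprocal of its connection weight.
with np.errstate(divide='ignore'):
    dist = 1.0 / w                         # inf on the diagonal = no self-edges
sp = shortest_path(dist, method='D', directed=False)
off_diag = ~np.eye(len(sp), dtype=bool)
global_efficiency = (1.0 / sp[off_diag]).mean()
print(network_strength, global_efficiency)
```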
Affiliation(s)
- Eleftheria Papadaki
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195, Berlin, Germany.
- International Max Planck Research School on the Life Course (LIFE), Berlin, Germany.
- Theodoros Koustakas
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195, Berlin, Germany
- André Werner
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195, Berlin, Germany
- Ulman Lindenberger
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195, Berlin, Germany
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany, and London, UK
- Simone Kühn
- Lise Meitner Group for Environmental Neuroscience, Max Planck Institute for Human Development, Berlin, Germany
- Neuronal Plasticity Working Group, Department of Psychiatry and Psychotherapy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Elisabeth Wenger
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195, Berlin, Germany
4
Dura-Bernal S, Griffith EY, Barczak A, O'Connell MN, McGinnis T, Moreira JVS, Schroeder CE, Lytton WW, Lakatos P, Neymotin SA. Data-driven multiscale model of macaque auditory thalamocortical circuits reproduces in vivo dynamics. Cell Rep 2023;42:113378. PMID: 37925640. PMCID: PMC10727489. DOI: 10.1016/j.celrep.2023.113378.
Abstract
We developed a detailed model of macaque auditory thalamocortical circuits, including primary auditory cortex (A1), the medial geniculate body (MGB), and the thalamic reticular nucleus, using the NEURON simulator and the NetPyNE tool. The A1 model simulates a cortical column with over 12,000 neurons and 25 million synapses, incorporating data on cell-type-specific neuron densities, morphology, and connectivity across six cortical layers. It is reciprocally connected to the MGB, which includes interneurons and core and matrix populations with layer-specific projections to A1. The model simulates multiscale measures, including physiological firing rates, local field potentials (LFPs), current source densities (CSDs), and electroencephalography (EEG) signals. Laminar CSD patterns, during spontaneous activity and in response to broadband noise stimulus trains, mirror experimental findings. Physiological oscillations emerge spontaneously across frequency bands comparable to those recorded in vivo. We elucidate population-specific contributions to observed oscillation events and relate them to firing and presynaptic input patterns. The model offers a quantitative theoretical framework to integrate and interpret experimental data and to predict the underlying cellular and circuit mechanisms.
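The laminar CSD mentioned here is conventionally estimated as the negative second spatial derivative of the LFP across equally spaced contacts. A generic sketch follows; the electrode spacing, conductivity, and data are placeholders, not the model's values.

```python
# CSD as the negative second spatial difference of laminar LFPs.
import numpy as np

def csd(lfp, spacing_um=100.0, sigma=0.3):
    """lfp: (channels x time), ordered superficial to deep; sigma in S/m."""
    h = spacing_um * 1e-6                          # spacing in meters
    d2 = lfp[:-2] - 2.0 * lfp[1:-1] + lfp[2:]      # second difference in depth
    return -sigma * d2 / h ** 2                    # sinks negative by convention

lfp = np.random.default_rng(2).standard_normal((23, 1000))   # 23 contacts
print(csd(lfp).shape)   # (21, 1000): the two edge channels are lost
```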
Affiliation(s)
- Salvador Dura-Bernal
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA; Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA.
- Erica Y Griffith
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA; Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA.
- Annamaria Barczak
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Monica N O'Connell
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Tammy McGinnis
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Joao V S Moreira
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA
- Charles E Schroeder
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; Departments of Psychiatry and Neurology, Columbia University Medical Center, New York, NY, USA
- William W Lytton
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA; Kings County Hospital Center, Brooklyn, NY, USA
- Peter Lakatos
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; Department of Psychiatry, NYU Grossman School of Medicine, New York, NY, USA
- Samuel A Neymotin
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; Department of Psychiatry, NYU Grossman School of Medicine, New York, NY, USA.
5
Zou X, Ji Z, Zhang T, Huang T, Wu S. Visual information processing through the interplay between fine and coarse signal pathways. Neural Netw 2023;166:692-703. PMID: 37604078. DOI: 10.1016/j.neunet.2023.07.048.
Abstract
Object recognition is often viewed as a feedforward, bottom-up process in machine learning, but in real neural systems it is a complicated process that involves the interplay between two signal pathways. One is the parvocellular pathway (P-pathway), which is slow and extracts fine features of objects; the other is the magnocellular pathway (M-pathway), which is fast and extracts coarse features of objects. It has been suggested that the interplay between the two pathways endows the neural system with the capacity to process visual information rapidly, adaptively, and robustly. However, the underlying computational mechanism remains largely unknown. In this study, we build a two-pathway model to elucidate the computational properties associated with the interactions between the two visual pathways. Specifically, we model the two visual pathways using two convolutional neural networks: one mimics the P-pathway, referred to as FineNet, which is deep, has small kernels, and receives detailed visual inputs; the other mimics the M-pathway, referred to as CoarseNet, which is shallow, has large kernels, and receives blurred visual inputs. We show that CoarseNet can learn from FineNet through imitation to improve its performance; that FineNet can benefit from the feedback of CoarseNet to improve its robustness to noise; and that the two pathways interact with each other to achieve coarse-to-fine information processing. Using visual backward masking as an example, we further demonstrate that our model can explain visual cognitive behaviors that involve the interplay between the two pathways. We hope that this study provides insight into the interaction principles between the two visual pathways.
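The architecture lends itself to a direct schematic: a deep, small-kernel network on detailed input, a shallow, large-kernel network on blurred input, and imitation implemented as a distillation loss on the fine pathway's outputs. The PyTorch sketch below uses arbitrary layer sizes and a crude blur; it illustrates the idea, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FineNet(nn.Module):              # deep, small kernels, detailed input
    def __init__(self, n_classes=10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.body(x).flatten(1))

class CoarseNet(nn.Module):            # shallow, large kernels, blurred input
    def __init__(self, n_classes=10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        blurred = F.avg_pool2d(x, 4)   # crude stand-in for low-pass input
        return self.head(self.body(blurred).flatten(1))

# Imitation: CoarseNet matches FineNet's (softened) predictions.
x = torch.randn(8, 1, 64, 64)
fine, coarse = FineNet(), CoarseNet()
loss = F.kl_div(F.log_softmax(coarse(x), dim=1),
                F.softmax(fine(x).detach(), dim=1), reduction='batchmean')
loss.backward()
```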
Affiliation(s)
- Xiaolong Zou
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China; Beijing Academy of Artificial Intelligence, Beijing, China.
- Zilong Ji
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China; Institute of Cognitive Neuroscience, University College London, London, UK.
- Tianqiu Zhang
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China.
- Tiejun Huang
- Beijing Academy of Artificial Intelligence, Beijing, China; School of Computer Science, Peking University, Beijing, China.
- Si Wu
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China; Beijing Academy of Artificial Intelligence, Beijing, China.
6
Schultheiß H, Zulfiqar I, Verardo C, Jolivet RB, Moerel M. Modelling homeostatic plasticity in the auditory cortex results in neural signatures of tinnitus. Neuroimage 2023;271:119987. PMID: 36940510. DOI: 10.1016/j.neuroimage.2023.119987.
Abstract
Tinnitus is a clinical condition where a sound is perceived without an external sound source. Homeostatic plasticity (HSP), serving to increase neural activity as compensation for the reduced input to the auditory pathway after hearing loss, has been proposed as a mechanism underlying tinnitus. In support, animal models of tinnitus show evidence of increased neural activity after hearing loss, including increased spontaneous and sound-driven firing rate, as well as increased neural noise throughout the auditory processing pathway. Bridging these findings to human tinnitus, however, has proven to be challenging. Here we implement hearing loss-induced HSP in a Wilson-Cowan Cortical Model of the auditory cortex to predict how homeostatic principles operating at the microscale translate to the meso- to macroscale accessible through human neuroimaging. We observed HSP-induced response changes in the model that were previously proposed as neural signatures of tinnitus, but that have also been reported as correlates of hearing loss and hyperacusis. As expected, HSP increased spontaneous and sound-driven responsiveness in hearing-loss affected frequency channels of the model. We furthermore observed evidence of increased neural noise and the appearance of spatiotemporal modulations in neural activity, which we discuss in light of recent human neuroimaging findings. Our computational model makes quantitative predictions that require experimental validation, and may thereby serve as the basis of future human studies of hearing loss, tinnitus, and hyperacusis.
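The core model logic, Wilson-Cowan excitatory/inhibitory dynamics per frequency channel plus a slow homeostatic gain that compensates for attenuated input, fits in a short script. All constants below are illustrative assumptions, not fitted values from the paper.

```python
import numpy as np

def f(x):                                    # sigmoidal rate function
    return 1.0 / (1.0 + np.exp(-(x - 4.0)))

n_ch, dt, tau = 32, 1e-3, 10e-3
e, inh = np.zeros(n_ch), np.zeros(n_ch)      # excitatory / inhibitory rates
atten = np.ones(n_ch); atten[20:] = 0.3      # hearing loss above channel 20
gain = np.ones(n_ch)                         # homeostatic excitatory gain
target = 0.15                                # homeostatic set point

for _ in range(20000):
    drive = 5.0 * atten                      # attenuated afferent input
    e += dt / tau * (-e + f(gain * (12 * e - 10 * inh + drive)))
    inh += dt / tau * (-inh + f(10 * e - 2 * inh))
    # Slow homeostatic update pushing each channel toward the target rate.
    gain = np.clip(gain + 1e-4 * (target - e), 0.2, 5.0)

# Deprived channels should end up with larger gains, i.e. the elevated
# responsiveness discussed in the abstract.
print(gain[:20].mean(), gain[20:].mean())
```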
Affiliation(s)
- Hannah Schultheiß
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Master Systems Biology, Faculty of Science and Engineering, Maastricht University, Maastricht, the Netherlands
- Isma Zulfiqar
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Claudio Verardo
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, the Netherlands; The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
- Renaud B Jolivet
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, the Netherlands
- Michelle Moerel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands; Maastricht Centre for Systems Biology, Maastricht University, Maastricht, the Netherlands.
7
Liu XP, Wang X. Distinct neuronal types contribute to hybrid temporal encoding strategies in primate auditory cortex. PLoS Biol 2022;20:e3001642. PMID: 35613218. PMCID: PMC9132345. DOI: 10.1371/journal.pbio.3001642.
Abstract
Studies of the encoding of sensory stimuli by the brain often consider recorded neurons as a pool of identical units. Here, we report divergence in stimulus-encoding properties between subpopulations of cortical neurons that are classified based on spike timing and waveform features. Neurons in auditory cortex of the awake marmoset (Callithrix jacchus) encode temporal information with either stimulus-synchronized or nonsynchronized responses. When we classified single-unit recordings using either a criteria-based or an unsupervised classification method into regular-spiking, fast-spiking, and bursting units, a subset of intrinsically bursting neurons formed the most highly synchronized group, with strong phase-locking to sinusoidal amplitude modulation (SAM) that extended well above 20 Hz. In contrast with other unit types, these bursting neurons fired primarily on the rising phase of SAM or the onset of unmodulated stimuli, and preferred rapid stimulus onset rates. Such differentiating behavior has been previously reported in bursting neuron models and may reflect specializations for detection of acoustic edges. These units responded to natural stimuli (vocalizations) with brief and precise spiking at particular time points that could be decoded with high temporal stringency. Regular-spiking units better reflected the shape of slow modulations and responded more selectively to vocalizations with overall firing rate increases. Population decoding using time-binned neural activity found that decoding behavior differed substantially between regular-spiking and bursting units. A relatively small pool of bursting units was sufficient to identify the stimulus with high accuracy in a manner that relied on the temporal pattern of responses. These unit type differences may contribute to parallel and complementary neural codes. Neurons in auditory cortex show highly diverse responses to sounds. This study suggests that neuronal type inferred from baseline firing properties accounts for much of this diversity, with a subpopulation of bursting units being specialized for precise temporal encoding.
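The phase-locking to SAM reported here is conventionally quantified by vector strength: map each spike time to a phase of the modulation cycle and take the length of the mean resultant vector. A minimal sketch with simulated spike trains (the data are invented):

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))   # 1 = perfect locking

rng = np.random.default_rng(3)
fm = 20.0                                                # 20 Hz SAM
locked = np.arange(100) / fm + rng.normal(0, 2e-3, 100)  # one spike per cycle
unlocked = rng.uniform(0.0, 5.0, 100)                    # spikes at random times
print(vector_strength(locked, fm))     # close to 1
print(vector_strength(unlocked, fm))   # close to 0
```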
Affiliation(s)
- Xiao-Ping Liu
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America
- Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America
8
Goal-driven, neurobiological-inspired convolutional neural network models of human spatial hearing. Neurocomputing 2022. DOI: 10.1016/j.neucom.2021.05.104.
9
Predicting neuronal response properties from hemodynamic responses in the auditory cortex. Neuroimage 2021;244:118575. PMID: 34517127. DOI: 10.1016/j.neuroimage.2021.118575.
Abstract
Recent functional MRI (fMRI) studies have highlighted differences in responses to natural sounds along the rostral-caudal axis of the human superior temporal gyrus. However, due to the indirect nature of the fMRI signal, it has been challenging to relate these fMRI observations to actual neuronal response properties. To bridge this gap, we present a forward model of the fMRI responses to natural sounds combining a neuronal model of the auditory cortex with physiological modeling of the hemodynamic BOLD response. Neuronal responses are modeled with a dynamic recurrent firing rate model, reflecting the tonotopic, hierarchical processing in the auditory cortex along with the spectro-temporal tradeoff in the rostral-caudal axis of its belt areas. To link modeled neuronal response properties with human fMRI data in the auditory belt regions, we generated a space of neuronal models, which differed parametrically in spectral and temporal specificity of neuronal responses. Then, we obtained predictions of fMRI responses through a biophysical model of the hemodynamic BOLD response (P-DCM). Using Bayesian model comparison, our results showed that the hemodynamic BOLD responses of the caudal belt regions in the human auditory cortex were best explained by modeling faster temporal dynamics and broader spectral tuning of neuronal populations, while rostral belt regions were best explained through fine spectral tuning combined with slower temporal dynamics. These results support the hypotheses of complementary neural information processing along the rostral-caudal axis of the human superior temporal gyrus.
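The paper's neurovascular step uses a biophysical hemodynamic model (P-DCM). As a far simpler stand-in that conveys the forward-model idea, the sketch below convolves a simulated population firing rate with a canonical double-gamma HRF; it does not reproduce P-DCM's nonlinear hemodynamics.

```python
import numpy as np
from scipy.stats import gamma

dt = 0.1                                    # time step (s)
t = np.arange(0, 30, dt)
# Canonical double-gamma HRF (SPM-like shape: peak ~5 s, late undershoot).
hrf = gamma.pdf(t, 6) - 0.1667 * gamma.pdf(t, 16)
hrf /= hrf.sum()

rate = np.zeros(600)                        # 60 s of simulated firing rate
rate[50:100] = 1.0                          # a sustained 5-s response
rate[300:310] = 1.0                         # a brief 1-s event
bold = np.convolve(rate, hrf)[:rate.size]   # predicted BOLD time course
```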
10
Boscain U, Prandi D, Sacchelli L, Turco G. A bio-inspired geometric model for sound reconstruction. J Math Neurosci 2021;11:2. PMID: 33394219. PMCID: PMC7782772. DOI: 10.1186/s13408-020-00099-4.
Abstract
The mechanisms used by the human auditory system to reconstruct degraded sounds are still a matter of debate. The purpose of this study is to propose a mathematical model of sound reconstruction based on the functional architecture of the primary auditory cortex (A1). The model is inspired by the geometrical modelling of vision, which has undergone great development in the last ten years. There are, however, fundamental dissimilarities, due to the different role played by time and the different group of symmetries. The algorithm transforms the degraded sound into an 'image' in the time-frequency domain via a short-time Fourier transform. This image is then lifted to the Heisenberg group and reconstructed via a Wilson-Cowan integro-differential equation. Preliminary numerical experiments show the good reconstruction properties of the algorithm on synthetic sounds concentrated around two frequencies.
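The first step of the algorithm, mapping the degraded sound to a time-frequency 'image', is an ordinary short-time Fourier transform. The sketch below shows that step only, on a two-frequency synthetic sound like those in the paper's tests; the Heisenberg lift and the Wilson-Cowan evolution are not reproduced, and the window parameters are arbitrary.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
# A synthetic sound concentrated around two frequencies.
sound = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

freqs, times, z = stft(sound, fs=fs, nperseg=256)   # complex T-F "image"
image = np.abs(z)                                   # magnitude representation
# ... the cortical reconstruction step would transform `z` here ...
_, reconstructed = istft(z, fs=fs, nperseg=256)     # back to a waveform
```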
Affiliation(s)
- Ugo Boscain
- CNRS, LJLL, Sorbonne Université, Université de Paris, Inria, Paris, France
- Dario Prandi
- Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire des signaux et systèmes, 91190 Gif-sur-Yvette, France
- Ludovic Sacchelli
- Université Lyon, Université Claude Bernard Lyon 1, CNRS, LAGEPP UMR 5007, 43 bd du 11 novembre 1918, F-69100 Villeurbanne, France
- Giuseppina Turco
- CNRS, Laboratoire de Linguistique Formelle, UMR 7110, Université de Paris, Paris, France