1
Zhang WH. Decentralized Neural Circuits of Multisensory Information Integration in the Brain. Adv Exp Med Biol 2024; 1437:1-21. [PMID: 38270850] [DOI: 10.1007/978-981-99-7611-9_1]
Abstract
The brain combines multisensory inputs to obtain a complete and reliable description of the world. Recent experiments suggest that several interconnected multisensory brain areas are simultaneously involved in integrating multisensory information. How these mutually connected areas achieve multisensory integration was unknown. To answer this question, we used biologically plausible neural circuit models to develop a decentralized system for information integration that comprises multiple interconnected multisensory brain areas. Studying the example of integrating visual and vestibular cues to infer heading direction, we show that such a decentralized system is consistent with experimental observations. In particular, we demonstrate that this decentralized system can integrate information optimally by implementing sampling-based Bayesian inference. The Poisson variability of spike generation provides appropriate variability to drive sampling, and the interconnections between multisensory areas store the correlation prior between multisensory stimuli. The decentralized system predicts that optimally integrated information emerges locally from the dynamics of communication between brain areas, and it sheds new light on the interpretation of connectivity between multisensory brain areas.
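The optimality claim has a simple closed-form benchmark. The sketch below (my illustration, not the chapter's circuit model; all stimulus values are invented) combines two independent Gaussian cues by precision weighting and checks that a naive sampling scheme, standing in for the spike-driven sampling described above, recovers the same estimate.

```python
import random

# Two independent cues about heading direction (in degrees), each
# modeled as a Gaussian likelihood: mean and standard deviation.
mu_vis, sigma_vis = 10.0, 2.0      # visual cue (reliable)
mu_ves, sigma_ves = 20.0, 4.0      # vestibular cue (noisy)

# Closed-form Bayesian integration: precisions (1/variance) add,
# and the posterior mean is the precision-weighted average.
prec_vis, prec_ves = 1 / sigma_vis**2, 1 / sigma_ves**2
post_prec = prec_vis + prec_ves
post_mean = (prec_vis * mu_vis + prec_ves * mu_ves) / post_prec
post_std = post_prec ** -0.5

# Sampling-based inference: draw posterior samples (in the circuit
# model, spiking variability plays this role) and check that the
# sample average recovers the optimal integrated estimate.
random.seed(0)
samples = [random.gauss(post_mean, post_std) for _ in range(20000)]
sample_mean = sum(samples) / len(samples)
```

With these numbers the reliable visual cue dominates: the integrated estimate of 12 degrees sits much closer to the 10-degree visual cue than to the 20-degree vestibular cue, and the integrated uncertainty is smaller than either cue's alone.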
Affiliation(s)
- Wen-Hao Zhang
- Lyda Hill Department of Bioinformatics and O'Donnell Brain Institute, UT Southwestern Medical Center, Dallas, TX, USA.
2
Wang Y, Zeng Y. Multisensory Concept Learning Framework Based on Spiking Neural Networks. Front Syst Neurosci 2022; 16:845177. [PMID: 35645741] [PMCID: PMC9133338] [DOI: 10.3389/fnsys.2022.845177]
Abstract
Concept learning depends heavily on multisensory integration. In this study, we propose a multisensory concept learning framework based on brain-inspired spiking neural networks that creates integrated vectors from a concept's perceptual strength in the auditory, gustatory, haptic, olfactory, and visual modalities. Under different assumptions, two paradigms, Independent Merge (IM) and Associate Merge (AM), are designed in the framework. For testing, we employed eight distinct neural models and three multisensory representation datasets. The experiments show that the integrated vectors are closer to human representations than the non-integrated ones. Furthermore, we systematically analyze the similarities and differences between the IM and AM paradigms and validate the generality of our framework.
Affiliation(s)
- Yuwei Wang
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Yi Zeng
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- *Correspondence: Yi Zeng
3
Chen L, Liao HI. Microsaccadic Eye Movements but not Pupillary Dilation Response Characterizes the Crossmodal Freezing Effect. Cereb Cortex Commun 2020; 1:tgaa072. [PMID: 34296132] [PMCID: PMC8153075] [DOI: 10.1093/texcom/tgaa072]
Abstract
In typical spatial orienting tasks, the perception of crossmodal (e.g., audiovisual) stimuli evokes greater pupil dilation and microsaccade inhibition than unisensory stimuli (e.g., visual). Such characteristic pupil dilation and microsaccade inhibition have been observed in response to "salient" events/stimuli. Although the "saliency" account is appealing in the spatial domain, whether it holds in the temporal context remains largely unknown. Here, on a brief timescale (within 1 s) engaging involuntary temporal attention, we investigated how eye-metric characteristics reflect the temporal dynamics of perceptual organization, with and without multisensory integration. We adopted the crossmodal freezing paradigm using the classical Ternus apparent motion. Results showed that synchronous beeps biased the perceptual report toward group motion and triggered prolonged sound-induced oculomotor inhibition (OMI), whereas this OMI was not obvious in a crossmodal task-free scenario (visual localization without audiovisual integration). A general pupil dilation response was observed in the presence of sounds in both the visual Ternus motion categorization and visual localization tasks. This study provides the first empirical account of crossmodal integration as captured by microsaccades on a brief timescale; OMI, but not the pupillary dilation response, characterizes task-specific audiovisual integration (shown by the crossmodal freezing effect).
Affiliation(s)
- Lihan Chen
- Department of Brain and Cognitive Sciences, Schools of Psychological and Cognitive Sciences, Peking University, Beijing, 100871, China
- Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100871, China
- Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, 100871, China
- Hsin-I Liao
- NTT Communication Science Laboratories, NTT Corporation, Atsugi, Kanagawa, 243-0198, Japan
4
Gall AJ, Goodwin AM, Khacherian OS, Teal LB. Superior Colliculus Lesions Lead to Disrupted Responses to Light in Diurnal Grass Rats (Arvicanthis niloticus). J Biol Rhythms 2019; 35:45-57. [PMID: 31619104] [DOI: 10.1177/0748730419881920]
Abstract
The circadian system regulates daily rhythms of physiology and behavior. Although extraordinary advances have been made to elucidate the brain mechanisms underlying the circadian system in nocturnal species, less is known in diurnal species. Recent studies have shown that retinorecipient brain areas such as the intergeniculate leaflet (IGL) and olivary pretectal nucleus (OPT) are critical for the display of normal patterns of daily activity in diurnal grass rats (Arvicanthis niloticus). Specifically, grass rats with IGL and OPT lesions respond to light in similar ways to intact nocturnal animals. Importantly, both the IGL and OPT project to one another in nocturnal species, and there is evidence that these 2 brain regions also project to the superior colliculus (SC). The SC receives direct retinal input, is involved in the triggering of rapid eye movement sleep in nocturnal rats, and is disproportionately large in the diurnal grass rat. The objective of the current study was to use diurnal grass rats to test the hypothesis that the SC is critical for the expression of diurnal behavior and physiology. We performed bilateral electrolytic lesions of the SC in female grass rats to examine behavioral patterns and acute responses to light. Most grass rats with SC lesions expressed significantly reduced activity in the presence of light. Exposing these grass rats to constant darkness reinstated activity levels during the subjective day, suggesting that light masks their ability to display a diurnal activity profile in 12:12 LD. Altogether, our data suggest that the SC is critical for maintaining normal responses to light in female grass rats.
Affiliation(s)
- Andrew J Gall
- Department of Psychology and Neuroscience Program, Hope College, Holland, Michigan
- Alyssa M Goodwin
- Department of Psychology and Neuroscience Program, Hope College, Holland, Michigan
- Ohanes S Khacherian
- Department of Psychology and Neuroscience Program, Hope College, Holland, Michigan
- Laura B Teal
- Department of Psychology and Neuroscience Program, Hope College, Holland, Michigan
5
Li Q, Xi Y, Zhang M, Liu L, Tang X. Distinct Mechanism of Audiovisual Integration With Informative and Uninformative Sound in a Visual Detection Task: A DCM Study. Front Comput Neurosci 2019; 13:59. [PMID: 31555115] [PMCID: PMC6727739] [DOI: 10.3389/fncom.2019.00059]
Abstract
Previous studies have shown that task-irrelevant auditory information can provide temporal cues for the detection of visual targets and improve visual perception; such sounds are called informative sounds. The neural mechanism of the integration of informative sounds and visual stimuli has been investigated extensively using behavioral measurement or neuroimaging methods such as functional magnetic resonance imaging (fMRI) and event-related potentials (ERPs), but these methods cannot characterize the dynamic processes of audiovisual integration formally in terms of directed neuronal coupling. The present study adopts dynamic causal modeling (DCM) of fMRI data to identify changes in effective connectivity in the hierarchical brain networks that underwrite audiovisual integration and memory. This allows us to characterize context-sensitive changes in neuronal coupling and show how visual processing is contextualized by the processing of informative and uninformative sounds. Our results show that audiovisual integration with informative and uninformative sounds conforms to different optimal models in the two conditions, indicating distinct neural mechanisms of audiovisual integration. The findings also reveal that uninformative sounds are integrated through low-level automatic audiovisual processes, whereas informative sounds are integrated through high-level cognitive processes.
Affiliation(s)
- Qi Li
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
- Yang Xi
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
- School of Computer Science, Northeast Electric Power University, Jilin, China
- Mengchao Zhang
- Department of Radiology, China-Japan Union Hospital of Jilin University, Changchun, China
- Lin Liu
- Department of Radiology, China-Japan Union Hospital of Jilin University, Changchun, China
- Xiaoyu Tang
- School of Psychology, Liaoning Normal University, Dalian, China
6
Amerineni R, Gupta RS, Gupta L. Multimodal Object Classification Models Inspired by Multisensory Integration in the Brain. Brain Sci 2019; 9:brainsci9010003. [PMID: 30609705] [PMCID: PMC6356735] [DOI: 10.3390/brainsci9010003]
Abstract
Two multimodal classification models aimed at enhancing object classification through the integration of semantically congruent unimodal stimuli are introduced. The feature-integrating model, inspired by multisensory integration in the subcortical superior colliculus, combines unimodal features which are subsequently classified by a multimodal classifier. The decision-integrating model, inspired by integration in primary cortical areas, classifies unimodal stimuli independently using unimodal classifiers and classifies the combined decisions using a multimodal classifier. The multimodal classifier models are implemented using multilayer perceptrons and multivariate statistical classifiers. Experiments involving the classification of noisy and attenuated auditory and visual representations of ten digits are designed to demonstrate the properties of the multimodal classifiers and to compare the performances of multimodal and unimodal classifiers. The experimental results show that the multimodal classification systems exhibit an important aspect of the "inverse effectiveness principle" by yielding significantly higher classification accuracies when compared with those of the unimodal classifiers. Furthermore, the flexibility offered by the generalized models enables the simulations and evaluations of various combinations of multimodal stimuli and classifiers under varying uncertainty conditions.
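The contrast between the two models can be sketched in a few lines (a toy with a nearest-centroid classifier standing in for the paper's multilayer perceptrons; the digit features and centroids are invented):

```python
# Toy contrast between feature-level and decision-level fusion,
# using a trivial nearest-centroid classifier in place of MLPs.
def nearest_centroid(x, centroids):
    """Return the label of the centroid closest to x (squared Euclidean)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Per-class centroids for two modalities (hypothetical 2-D features).
audio_centroids = {"zero": (0.0, 0.0), "one": (1.0, 1.0)}
visual_centroids = {"zero": (0.0, 1.0), "one": (1.0, 0.0)}

# A noisy multimodal observation of the digit "one".
audio_x, visual_x = (0.9, 0.8), (0.7, 0.2)

# Feature-integrating model: concatenate unimodal features,
# then classify once with a single multimodal classifier.
fused_centroids = {c: audio_centroids[c] + visual_centroids[c]
                   for c in audio_centroids}
feature_decision = nearest_centroid(audio_x + visual_x, fused_centroids)

# Decision-integrating model: classify each modality independently,
# then fuse the unimodal decisions (here: majority vote).
decisions = [nearest_centroid(audio_x, audio_centroids),
             nearest_centroid(visual_x, visual_centroids)]
decision_fused = max(set(decisions), key=decisions.count)
```

Both routes classify the noisy observation as "one" here; the models differ only in where fusion happens, before or after the unimodal decision stage.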
Affiliation(s)
- Rajesh Amerineni
- Department of Electrical and Computer Engineering, Southern Illinois University, Carbondale, IL 62901, USA.
- Resh S Gupta
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN 37232, USA.
- Lalit Gupta
- Department of Electrical and Computer Engineering, Southern Illinois University, Carbondale, IL 62901, USA.
7
Ito S, Feldheim DA. The Mouse Superior Colliculus: An Emerging Model for Studying Circuit Formation and Function. Front Neural Circuits 2018; 12:10. [PMID: 29487505] [PMCID: PMC5816945] [DOI: 10.3389/fncir.2018.00010]
Abstract
The superior colliculus (SC) is a midbrain area where visual, auditory, and somatosensory information is integrated to initiate motor commands. The SC plays a central role in visual information processing in the mouse: it receives projections from 85% to 90% of the retinal ganglion cells (RGCs). While the mouse SC has been a long-standing model used to study retinotopic map formation, a number of technological advances in mouse molecular genetic techniques, large-scale physiological recordings, and SC-dependent visual behavioral assays have made the mouse an even more powerful model for understanding the relationship between circuitry and behavior.
Affiliation(s)
- Shinya Ito
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA, United States
- David A Feldheim
- Department of Molecular, Cell and Developmental Biology, University of California, Santa Cruz, Santa Cruz, CA, United States
8
Cuppini C, Ursino M, Magosso E, Ross LA, Foxe JJ, Molholm S. A Computational Analysis of Neural Mechanisms Underlying the Maturation of Multisensory Speech Integration in Neurotypical Children and Those on the Autism Spectrum. Front Hum Neurosci 2017; 11:518. [PMID: 29163099] [PMCID: PMC5670153] [DOI: 10.3389/fnhum.2017.00518]
Abstract
Failure to appropriately develop multisensory integration (MSI) of audiovisual speech may affect a child's ability to attain optimal communication. Studies have shown protracted development of MSI into late-childhood and identified deficits in MSI in children with an autism spectrum disorder (ASD). Currently, the neural basis of acquisition of this ability is not well understood. Here, we developed a computational model informed by neurophysiology to analyze possible mechanisms underlying MSI maturation, and its delayed development in ASD. The model posits that strengthening of feedforward and cross-sensory connections, responsible for the alignment of auditory and visual speech sound representations in posterior superior temporal gyrus/sulcus, can explain behavioral data on the acquisition of MSI. This was simulated by a training phase during which the network was exposed to unisensory and multisensory stimuli, and projections were crafted by Hebbian rules of potentiation and depression. In its mature architecture, the network also reproduced the well-known multisensory McGurk speech effect. Deficits in audiovisual speech perception in ASD were well accounted for by fewer multisensory exposures, compatible with a lack of attention, but not by reduced synaptic connectivity or synaptic plasticity.
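The training-phase mechanism, cross-sensory connections crafted by Hebbian potentiation and depression, can be caricatured with a single scalar weight (an illustrative sketch with invented rates and trial schedules, not the paper's network):

```python
# One cross-sensory (auditory -> visual area) weight, updated by a
# Hebbian rule: co-activation potentiates, lone activity depresses.
def hebbian_step(w, pre, post, lr=0.1, w_max=1.0):
    """One Hebbian update with weight-dependent depression."""
    dw = lr * pre * post - 0.5 * lr * (pre + post) * w
    return min(max(w + dw, 0.0), w_max)

# Typical development: every trial is a multisensory exposure.
w = 0.1
for t in range(200):
    w = hebbian_step(w, pre=1.0, post=1.0)
w_typical = w

# Reduced multisensory exposure (the model's account of the ASD data):
# only one trial in four is multisensory; the rest are auditory alone.
w = 0.1
for t in range(200):
    if t % 4 == 0:
        w = hebbian_step(w, pre=1.0, post=1.0)   # audiovisual trial
    else:
        w = hebbian_step(w, pre=1.0, post=0.0)   # unisensory trial
w_sparse = w
```

Frequent co-activation drives the cross-sensory weight toward its ceiling, while the exposure-starved schedule settles well below it, mirroring the model's finding that fewer multisensory exposures, rather than reduced connectivity or plasticity, account for the weaker audiovisual speech integration.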
Affiliation(s)
- Cristiano Cuppini
- Department of Electric, Electronic and Information Engineering, University of Bologna, Bologna, Italy
- Mauro Ursino
- Department of Electric, Electronic and Information Engineering, University of Bologna, Bologna, Italy
- Elisa Magosso
- Department of Electric, Electronic and Information Engineering, University of Bologna, Bologna, Italy
- Lars A. Ross
- Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
- John J. Foxe
- Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
- Department of Neuroscience and The Del Monte Institute for Neuroscience, University of Rochester School of Medicine, Rochester, NY, United States
- Sophie Molholm
- Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
9
Ohshiro T, Angelaki DE, DeAngelis GC. A Neural Signature of Divisive Normalization at the Level of Multisensory Integration in Primate Cortex. Neuron 2017; 95:399-411.e8. [PMID: 28728025] [DOI: 10.1016/j.neuron.2017.06.043]
Abstract
Studies of multisensory integration by single neurons have traditionally emphasized empirical principles that describe nonlinear interactions between inputs from two sensory modalities. We previously proposed that many of these empirical principles could be explained by a divisive normalization mechanism operating in brain regions where multisensory integration occurs. This normalization model makes a critical diagnostic prediction: a non-preferred sensory input from one modality, which activates the neuron on its own, should suppress the response to a preferred input from another modality. We tested this prediction by recording from neurons in macaque area MSTd that integrate visual and vestibular cues regarding self-motion. We show that many MSTd neurons exhibit the diagnostic form of cross-modal suppression, whereas unisensory neurons in area MT do not. The normalization model also fits population responses better than a model based on subtractive inhibition. These findings provide strong support for a divisive normalization mechanism in multisensory integration.
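The diagnostic prediction follows directly from the divisive-normalization equation. A minimal two-neuron sketch (generic textbook form with invented weights and constants, not the paper's fitted model) shows how a non-preferred input that is excitatory on its own can still suppress the response to a preferred input, because it inflates the shared normalization pool more than the neuron's own drive:

```python
# Two neurons sharing a normalization pool. Neuron A is driven mostly
# by modality 1, neuron B mostly by modality 2 (invented weights).
weights = {"A": (1.0, 0.1), "B": (0.1, 1.0)}  # (w_modality1, w_modality2)

def responses(s1, s2, n=2.0, sigma=1.0):
    """Divisive normalization: each neuron's expanded drive is divided
    by a semisaturation constant plus the pooled population drive."""
    drives = {k: w1 * s1 + w2 * s2 for k, (w1, w2) in weights.items()}
    pool = sum(d ** n for d in drives.values())
    return {k: d ** n / (sigma ** n + pool) for k, d in drives.items()}

r_pref_alone = responses(s1=1.0, s2=0.0)["A"]     # preferred input only
r_nonpref_alone = responses(s1=0.0, s2=1.0)["A"]  # weakly excites A
r_bimodal = responses(s1=1.0, s2=1.0)["A"]        # add non-preferred cue
```

Here the non-preferred stimulus excites neuron A when presented alone, yet adding it to the preferred stimulus lowers A's response: the cross-modal suppression the recordings report for MSTd but not MT.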
Affiliation(s)
- Tomokazu Ohshiro
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14611, USA; Department of Physiology, Tohoku University School of Medicine, Sendai 980-8575, Japan
- Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14611, USA.
10
Kardamakis AA, Pérez-Fernández J, Grillner S. Spatiotemporal interplay between multisensory excitation and recruited inhibition in the lamprey optic tectum. eLife 2016; 5:e16472. [PMID: 27635636] [PMCID: PMC5026466] [DOI: 10.7554/elife.16472]
Abstract
Animals integrate the different senses to facilitate event-detection for navigation in their environment. In vertebrates, the optic tectum (superior colliculus) commands gaze shifts by synaptic integration of different sensory modalities. Recent work suggests that the tectum can elaborate gaze reorientation commands on its own, rather than merely acting as a relay from upstream/forebrain circuits to downstream premotor centers. We show that tectal circuits can perform multisensory computations independently and, hence, configure final motor commands. Single tectal neurons receive converging visual and electrosensory inputs, as investigated in the lamprey, a phylogenetically conserved vertebrate. When these two sensory inputs overlap in space and time, response enhancement of output neurons occurs locally in the tectum, whereas surrounding areas and temporally misaligned inputs are inhibited. Retinal and electrosensory afferents elicit local monosynaptic excitation, quickly followed by inhibition via recruitment of GABAergic interneurons. Multisensory inputs can thus regulate event-detection within the tectum through local inhibition without forebrain control.
Plain-language summary: Many events occur around us simultaneously, which we detect through our senses. A critical task is to decide which of these events is the most important to look at in a given moment of time. This problem is solved by an ancient area of the brain called the optic tectum (known as the superior colliculus in mammals). The different senses are represented as superimposed maps in the optic tectum. Events that occur in different locations activate different areas of the map. Neurons in the optic tectum combine the responses from different senses to direct the animal’s attention and increase how reliably important events are detected. If an event is simultaneously registered by two senses, then certain neurons in the optic tectum will enhance their activity. By contrast, if two senses provide conflicting information about how different events progress, then these same neurons will be silenced. While this phenomenon of ‘multisensory integration’ is well described, little is known about how the optic tectum performs this integration. Kardamakis, Pérez-Fernández and Grillner have now studied multisensory integration in fish called lampreys, which belong to the oldest group of backboned animals. These fish can navigate using electroreception, the ability to detect electrical signals from the environment. Experiments that examined the connections between neurons in the optic tectum and monitored their activity revealed a neural circuit that consists of two types of neurons: inhibitory interneurons, and projecting neurons that connect the optic tectum to different motor centers in the brainstem. The circuit contains neurons that can receive inputs from both vision and electroreception when these senses are both activated from the same point in space. Incoming signals from the two senses activate the areas on the sensory maps that correspond to the location where the event occurred. This triggers the activity of the interneurons, which immediately send ‘stop’ signals. Thus, while an area of the sensory map and its output neurons are activated, the surrounding areas of the tectum are inhibited. Overall, the findings presented by Kardamakis, Pérez-Fernández and Grillner suggest that the optic tectum can direct attention to a particular event without requiring input from other brain areas. This ability has most likely been preserved throughout evolution. Future studies will aim to determine how the commands generated by the optic tectum circuit are translated into movements.
Affiliation(s)
- Sten Grillner
- Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
11
Schumacher S, Burt de Perera T, Thenert J, von der Emde G. Cross-modal object recognition and dynamic weighting of sensory inputs in a fish. Proc Natl Acad Sci U S A 2016; 113:7638-43. [PMID: 27313211] [PMCID: PMC4941484] [DOI: 10.1073/pnas.1603120113]
Abstract
Most animals use multiple sensory modalities to obtain information about objects in their environment. There is a clear adaptive advantage to being able to recognize objects cross-modally and spontaneously (without prior training with the sense being tested) as this increases the flexibility of a multisensory system, allowing an animal to perceive its world more accurately and react to environmental changes more rapidly. So far, spontaneous cross-modal object recognition has only been shown in a few mammalian species, raising the question as to whether such a high-level function may be associated with complex mammalian brain structures, and therefore absent in animals lacking a cerebral cortex. Here we use an object-discrimination paradigm based on operant conditioning to show, for the first time to our knowledge, that a nonmammalian vertebrate, the weakly electric fish Gnathonemus petersii, is capable of performing spontaneous cross-modal object recognition and that the sensory inputs are weighted dynamically during this task. We found that fish trained to discriminate between two objects with either vision or the active electric sense, were subsequently able to accomplish the task using only the untrained sense. Furthermore we show that cross-modal object recognition is influenced by a dynamic weighting of the sensory inputs. The fish weight object-related sensory inputs according to their reliability, to minimize uncertainty and to enable an optimal integration of the senses. Our results show that spontaneous cross-modal object recognition and dynamic weighting of sensory inputs are present in a nonmammalian vertebrate.
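Dynamic weighting by reliability has a standard formal core: minimum-variance cue combination, in which each cue's weight is proportional to its inverse variance. The sketch below (the generic formula with invented numbers, not values fitted to the fish data) shows the weight shifting from vision to the electric sense as the visual cue degrades:

```python
# Minimum-variance (maximum-likelihood) combination of two cues.
def integrate(mu_a, var_a, mu_b, var_b):
    """Weight each cue by its inverse variance; return the combined
    estimate, its variance, and the weight given to cue A."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = 1 - w_a
    mu = w_a * mu_a + w_b * mu_b
    var = 1 / (1 / var_a + 1 / var_b)
    return mu, var, w_a

# Vision reliable, electrosense noisy: the estimate hugs the visual cue.
mu_clear, _, w_vision_clear = integrate(0.0, 0.5, 10.0, 4.5)

# Vision degraded (e.g., turbid water): weight shifts to the electric sense.
mu_turbid, _, w_vision_turbid = integrate(0.0, 4.5, 10.0, 0.5)
```

With these numbers `w_vision` drops from 0.9 to 0.1 as the visual variance rises, pulling the integrated estimate from 1.0 to 9.0, i.e. toward the electrosensory cue, which is the kind of dynamic reweighting the behavioral results imply.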
Affiliation(s)
- Johanna Thenert
- Institut für Zoologie, Universität Bonn, 53115 Bonn, Germany
12
Abstract
How multiple sensory cues are integrated in neural circuitry remains a challenge. The common hypothesis is that information integration might be accomplished in a dedicated multisensory integration area receiving feedforward inputs from the modalities. However, recent experimental evidence suggests that it is not a single multisensory brain area, but rather many multisensory brain areas, that are simultaneously involved in the integration of information. Why many mutually connected areas should be needed for information integration is puzzling. Here, we investigated theoretically how information integration could be achieved in a distributed fashion within a network of interconnected multisensory areas. Using biologically realistic neural network models, we developed a decentralized information integration system that comprises multiple interconnected integration areas. Studying an example of combining visual and vestibular cues to infer heading direction, we show that such a decentralized system is in good agreement with anatomical evidence and experimental observations. In particular, we show that this decentralized system can integrate information optimally. The decentralized system predicts that optimally integrated information should emerge locally from the dynamics of the communication between brain areas and sheds new light on the interpretation of the connectivity between multisensory brain areas.
SIGNIFICANCE STATEMENT: To extract information reliably from ambiguous environments, the brain integrates multiple sensory cues, which provide different aspects of information about the same entity of interest. Here, we propose a decentralized architecture for multisensory integration. In such a system, no processor is at the center of the network topology, and information integration is achieved in a distributed manner through reciprocally connected local processors. Through studying the inference of heading direction with visual and vestibular cues, we show that the decentralized system can integrate information optimally, with the reciprocal connections between processors determining the extent of cue integration. Our model reproduces known multisensory integration behaviors observed in experiments and sheds new light on our understanding of how information is integrated in the brain.
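The decentralized claim can be made concrete with a toy consensus computation (my sketch, with invented precisions and coupling, not the paper's spiking network): each of two reciprocally connected processors repeatedly nudges its estimate toward its own cue and toward the other processor's current estimate, and both settle near the precision-weighted optimum that a centralized integrator would compute.

```python
# Two interconnected "areas" infer heading from their own cue plus
# messages from each other (toy consensus dynamics, invented numbers).
m1, p1 = 10.0, 4.0   # area 1: visual cue value, high precision
m2, p2 = 20.0, 1.0   # area 2: vestibular cue value, low precision
coupling = 50.0      # strength of the reciprocal connections
eta = 0.01           # local update step size

x1, x2 = m1, m2      # each area starts from its own cue alone
for _ in range(2000):
    # Each area descends its local cost: stay near its own evidence
    # while agreeing with the message from the other area.
    g1 = p1 * (x1 - m1) + coupling * (x1 - x2)
    g2 = p2 * (x2 - m2) + coupling * (x2 - x1)
    x1, x2 = x1 - eta * g1, x2 - eta * g2

# The centralized Bayes answer, for comparison.
optimal = (p1 * m1 + p2 * m2) / (p1 + p2)
```

No unit here sees both cues, yet both local estimates converge near the optimal 12-degree estimate; stronger reciprocal coupling tightens the agreement, matching the paper's point that the reciprocal connections determine the extent of cue integration.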
13
Bauer J, Magg S, Wermter S. Attention modeled as information in learning multisensory integration. Neural Netw 2015; 65:44-52. [PMID: 25688997] [DOI: 10.1016/j.neunet.2015.01.004]
Abstract
Top-down cognitive processes affect the way bottom-up cross-sensory stimuli are integrated. In this paper, we therefore extend a successful previous neural network model of learning multisensory integration in the superior colliculus (SC) by top-down, attentional input and train it on different classes of cross-modal stimuli. The network not only learns to integrate cross-modal stimuli, but the model also reproduces neurons specializing in different combinations of modalities as well as behavioral and neurophysiological phenomena associated with spatial and feature-based attention. Importantly, we do not provide the model with any information about which input neurons are sensory and which are attentional. If the basic mechanisms of our model, self-organized learning of input statistics and divisive normalization, play a major role in the ontogenesis of the SC, then this work shows that these mechanisms suffice to explain a wide range of aspects of both bottom-up multisensory integration and the top-down influence on multisensory integration.
Affiliation(s)
- Johannes Bauer
- University of Hamburg, Department of Informatics, Knowledge Technology, WTM, Vogt-Kölln-Straße 30, 22527 Hamburg, Germany.
- Sven Magg
- University of Hamburg, Department of Informatics, Knowledge Technology, WTM, Vogt-Kölln-Straße 30, 22527 Hamburg, Germany.
- Stefan Wermter
- University of Hamburg, Department of Informatics, Knowledge Technology, WTM, Vogt-Kölln-Straße 30, 22527 Hamburg, Germany.
14
Ursino M, Cuppini C, Magosso E. Neurocomputational approaches to modelling multisensory integration in the brain: A review. Neural Netw 2014; 60:141-65. [DOI: 10.1016/j.neunet.2014.08.003]
15
A neural network model can explain ventriloquism aftereffect and its generalization across sound frequencies. Biomed Res Int 2013; 2013:475427. [PMID: 24228250] [PMCID: PMC3818813] [DOI: 10.1155/2013/475427]
Abstract
Exposure to synchronous but spatially disparate auditory and visual stimuli produces a perceptual shift of sound location towards the visual stimulus (ventriloquism effect). After adaptation to a ventriloquism situation, an enduring sound shift is observed in the absence of the visual stimulus (ventriloquism aftereffect). Experimental studies report opposing results regarding aftereffect generalization across sound frequencies, ranging from the aftereffect being confined to the frequency used during adaptation to its generalizing across some octaves. Here, we present an extension of a model of visual-auditory interaction we previously developed. The new model is able to simulate the ventriloquism effect and, via Hebbian learning rules, the ventriloquism aftereffect, and can be used to investigate aftereffect generalization across frequencies. The model includes auditory neurons coding for both the spatial and spectral features of the auditory stimuli and mimicking properties of biological auditory neurons. The model suggests that different extents of aftereffect generalization across frequencies can be obtained by changing the intensity of the auditory stimulus, which induces different amounts of activation in the auditory layer. The model provides a coherent theoretical framework to explain the apparently contradictory results found in the literature. Model mechanisms and hypotheses are discussed in relation to neurophysiological and psychophysical data.
16
Veale R. A Neurorobotics Approach to Investigating Word Learning Behaviors. ROBOTICS 2013. [DOI: 10.4018/978-1-4666-4607-0.ch083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
This chapter presents two examples of how neurorobotics is being used to further understanding of word learning in the human infant. The chapter begins by presenting an example of how neurorobotics has been used to explore the synchrony constraint of word-referent association in young infants. The chapter then demonstrates the application of neurorobotics to free looking behavior, another important basic behavior with repercussions in how infants map visual stimuli to auditory stimuli. Neurorobotics complements other approaches by validating proposed mechanisms, by linking behavior to neural implementation, and by bringing to light very specific questions that would otherwise remain unasked. Neurorobotics requires rigorous implementation of the target behaviors at many vertical levels, from the level of individual neurons up to the level of aggregate measures, such as net looking time. By implementing these in a real-world robot, it is possible to identify discontinuities in our understanding of how parts of the system function. The approach is thus informative for empiricists (both neurally and behaviorally), but it is also pragmatically useful, since it results in functional robotic systems performing human-like behavior.
17
Cuppini C, Magosso E, Rowland B, Stein B, Ursino M. Hebbian mechanisms help explain development of multisensory integration in the superior colliculus: a neural network model. BIOLOGICAL CYBERNETICS 2012; 106:691-713. [PMID: 23011260 PMCID: PMC3552306 DOI: 10.1007/s00422-012-0511-9] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/29/2011] [Accepted: 07/11/2012] [Indexed: 06/01/2023]
Abstract
The superior colliculus (SC) integrates relevant sensory information (visual, auditory, somatosensory) from several cortical and subcortical structures to program orientation responses to external events. However, this capacity is not present at birth; it is acquired only through interactions with cross-modal events during maturation. Mathematical models provide a quantitative framework that is valuable in helping to clarify the specific neural mechanisms underlying the maturation of multisensory integration in the SC. We extended a neural network model of the adult SC (Cuppini et al., Front Integr Neurosci 4:1-15, 2010) to describe the development of this phenomenon starting from an immature state, based on known or suspected anatomy and physiology, in which (1) AES afferents are present but weak, (2) responses are driven by non-AES afferents, and (3) the visual inputs have only marginal spatial tuning. Sensory experience was modeled by repeatedly presenting modality-specific and cross-modal stimuli. Synapses in the network were modified by simple Hebbian learning rules. As a consequence of this exposure, (1) receptive fields shrank and came into spatial register, and (2) SC neurons gained the characteristic adult integrative properties: enhancement, depression, and inverse effectiveness. Importantly, the unique architecture of the model guided development so that integration became dependent on the relationship between the cortical input and the SC. Manipulating the statistics of experience during development changed the integrative profiles of the neurons, and the results matched well with those of physiological studies.
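The class of Hebbian mechanisms this model invokes can be illustrated with a minimal rate-based sketch (an illustrative toy, not the published model; all parameter values and variable names are assumptions):

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01, w_max=1.0):
    """Simple Hebbian rule with a soft saturation bound: a synapse grows in
    proportion to correlated pre- and postsynaptic activity, up to w_max."""
    return w + lr * np.outer(post, pre) * (w_max - w)

rng = np.random.default_rng(0)
w = rng.uniform(0.0, 0.1, size=(4, 4))    # weak initial ("immature") weights
pre = np.array([0.0, 1.0, 1.0, 0.0])      # a repeatedly co-active stimulus pair
post = np.array([0.0, 1.0, 1.0, 0.0])
for _ in range(200):                      # repeated cross-modal exposure
    w = hebbian_update(w, pre, post)
# Synapses between co-active units approach w_max; the rest stay weak,
# which is the qualitative ingredient behind receptive-field sharpening.
```

Repeated co-activation is all the rule needs: no unit is told which input is "visual" or "auditory"; spatial register emerges from the stimulus statistics.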
Affiliation(s)
- C Cuppini
- Department of Electronics, Computer Science and Systems, University of Bologna, Bologna, Italy.
18
Brown SR. Emergence in the central nervous system. Cogn Neurodyn 2012; 7:173-95. [PMID: 24427200 DOI: 10.1007/s11571-012-9229-6] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2012] [Revised: 10/04/2012] [Accepted: 11/20/2012] [Indexed: 11/30/2022] Open
Abstract
"Emergence" is an idea that has received much attention in consciousness literature, but it is difficult to find characterizations of that concept which are both specific and useful. I will precisely define and characterize a type of epistemic ("weak") emergence and show that it is a property of some neural circuits throughout the CNS, on micro-, meso- and macroscopic levels. I will argue that possession of this property can result in profoundly altered neural dynamics on multiple levels in cortex and other systems. I will first describe emergent neural entities (ENEs) abstractly. I will then show how ENEs function specifically and concretely, and demonstrate some implications of this type of emergence for the CNS.
Affiliation(s)
- Steven Ravett Brown
- Department of Neuroscience, Mt. Sinai School of Medicine, Icahn Medical Institute, 1425 Madison Ave, Rm 10-70E, New York, NY 10029, USA; 158 W 23rd St, Fl 3, New York, NY 10011, USA
19
Lim HK, Keniston LP, Cios KJ. Modeling of Multisensory Convergence with a Network of Spiking Neurons: A Reverse Engineering Approach. IEEE Trans Biomed Eng 2011; 58:1940-9. [DOI: 10.1109/tbme.2011.2125962] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
20
Ohshiro T, Angelaki DE, DeAngelis GC. A normalization model of multisensory integration. Nat Neurosci 2011; 14:775-82. [PMID: 21552274 PMCID: PMC3102778 DOI: 10.1038/nn.2815] [Citation(s) in RCA: 188] [Impact Index Per Article: 13.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2010] [Accepted: 03/21/2011] [Indexed: 11/08/2022]
Abstract
Responses of neurons that integrate multiple sensory inputs are traditionally characterized in terms of a set of empirical principles. However, a simple computational framework that accounts for these empirical features of multisensory integration has not been established. We propose that divisive normalization, acting at the stage of multisensory integration, can account for many of the empirical principles of multisensory integration shown by single neurons, such as the principle of inverse effectiveness and the spatial principle. This model, which uses a simple functional operation (normalization) for which there is considerable experimental support, also accounts for the recent observation that the mathematical rule by which multisensory neurons combine their inputs changes with cue reliability. The normalization model, which makes a strong testable prediction regarding cross-modal suppression, may therefore provide a simple unifying computational account of the important features of multisensory integration by neurons.
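The core operation can be sketched in a few lines (a toy illustration of divisive normalization and the inverse effectiveness it produces, not the authors' published implementation; parameter values are assumptions):

```python
def r_multi(v, a, sigma=1.0, n=2.0):
    """Divisive normalization of the summed unisensory drives: the expansive
    drive (v + a)**n is divided by itself plus a semisaturation constant."""
    drive = (v + a) ** n
    return drive / (sigma ** n + drive)

def enhancement(v, a):
    """Percent multisensory enhancement over the best unisensory response."""
    best_uni = max(r_multi(v, 0.0), r_multi(0.0, a))
    return 100.0 * (r_multi(v, a) - best_uni) / best_uni

# Inverse effectiveness: a weak cue pair is enhanced proportionally far more
# than a strong pair, because normalization saturates the strong responses.
weak = enhancement(0.3, 0.3)
strong = enhancement(3.0, 3.0)
```

The same division by pooled activity also yields the spatial principle when the pool includes neurons with offset receptive fields, which is why a single operation can unify several empirical principles.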
Affiliation(s)
- Tomokazu Ohshiro
- Dept. of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, NY 14627
- Dora E. Angelaki
- Dept. of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, MO 63110
- Gregory C. DeAngelis
- Dept. of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, NY 14627
21
Cuppini C, Magosso E, Ursino M. Organization, maturation, and plasticity of multisensory integration: insights from computational modeling studies. Front Psychol 2011; 2:77. [PMID: 21687448 PMCID: PMC3110383 DOI: 10.3389/fpsyg.2011.00077] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2010] [Accepted: 04/12/2011] [Indexed: 11/15/2022] Open
Abstract
In this paper, we present two neural network models, each devoted to a specific and widely investigated aspect of multisensory integration, in order to demonstrate the potential of computational models for gaining insight into the neural mechanisms underlying the organization, development, and plasticity of multisensory integration in the brain. The first model considers visual-auditory interaction in a midbrain structure named the superior colliculus (SC). The model is able to reproduce and explain the main physiological features of multisensory integration in SC neurons and to describe how the SC's integrative capability, which is not present at birth, develops gradually during postnatal life depending on sensory experience with cross-modal stimuli. The second model tackles the problem of how tactile stimuli on a body part and visual (or auditory) stimuli close to the same body part are integrated in multimodal parietal neurons to form the perception of peripersonal (i.e., near) space. The model investigates how the extension of peripersonal space, where multimodal integration occurs, may be modified by experience such as the use of a tool to interact with far space. The utility of the modeling approach rests on several aspects: (i) the two models, although devoted to different problems and simulating different brain regions, share some common mechanisms (lateral inhibition and excitation, nonlinear neuron characteristics, recurrent connections, competition, and Hebbian rules of potentiation and depression) that may more generally govern the fusion of the senses in the brain and the learning and plasticity of multisensory integration; (ii) the models may aid the interpretation of behavioral and psychophysical responses in terms of neural activity and synaptic connections; (iii) the models can make testable predictions that can help guide future experiments aimed at validating, rejecting, or modifying the main assumptions.
Affiliation(s)
- Cristiano Cuppini
- Department of Electronics, Computer Science and Systems, University of Bologna, Bologna, Italy
22
Hoshino O. Neuronal responses below firing threshold for subthreshold cross-modal enhancement. Neural Comput 2011; 23:958-83. [PMID: 21222529 DOI: 10.1162/neco_a_00096] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Multisensory integration (such as somatosensation-vision, gustation-olfaction) could occur even between subthreshold stimuli that in isolation do not reach perceptual awareness. For example, when a somatosensory (subthreshold) stimulus is delivered within a close spatiotemporal congruency, a visual (subthreshold) stimulus evokes a visual percept. Cross-modal enhancement of visual perception is maximal when the somatosensory stimulation precedes the visual one by tens of milliseconds. This rapid modulatory response would not be consistent with a top-down mechanism acting through higher-order multimodal cortical areas, but rather a direct interaction between lower-order unimodal areas. To elucidate the neuronal mechanisms of subthreshold cross-modal enhancement, we simulated a neural network model. In the model, lower unimodal (X, Y) and higher multimodal (M) networks are reciprocally connected by bottom-up and top-down axonal projections. The lower networks are laterally connected with each other. A pair of stimuli was presented to the lower networks, whose respective intensities were too weak to induce salient neuronal activity (population response) when presented alone. Neurons of the Y network were slightly depolarized below firing threshold when a cross-modal stimulus was presented alone to the X network. This allowed the Y network to make a rapid (within tens of milliseconds) population response when presented with a subsequent congruent stimulus. The reaction speed of the Y network was accelerated, provided that the top-down projections were strengthened. We suggest that a subthreshold (nonpopulation) response to a cross-modal stimulus, acting through interaction between lower (primary unisensory) areas, may be essential for a rapid suprathreshold (population) response to a congruent stimulus that follows. 
Top-down influences on cross-modal enhancement may thus be faster than expected, accelerating the reaction speed to input; ongoing spontaneous subthreshold excitation of lower-order unimodal cells by higher-order multimodal cells may play an active role in this process.
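The priming effect described here can be caricatured with a single leaky integrator (an illustrative sketch, not the paper's network model; all parameters are assumptions): a prior subthreshold cross-modal input leaves the unit slightly depolarized, so a subsequent congruent stimulus crosses threshold sooner.

```python
def time_to_threshold(I, v0=0.0, tau=10.0, theta=1.0, dt=0.1, t_max=200.0):
    """Euler-integrate dv/dt = (-v + I) / tau from v0; return the first time
    at which v reaches threshold theta, or None if it never does."""
    v, t = v0, 0.0
    while t < t_max:
        v += dt * (-v + I) / tau
        t += dt
        if v >= theta:
            return t
    return None

t_unprimed = time_to_threshold(I=1.5, v0=0.0)   # no prior cross-modal input
t_primed = time_to_threshold(I=1.5, v0=0.5)     # subthreshold depolarization
```

Because the depolarized unit starts closer to threshold, the latency difference appears without any change in connectivity, which is the intuition behind the tens-of-milliseconds speedup discussed above.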
Affiliation(s)
- Osamu Hoshino
- Department of Intelligent Systems Engineering, Ibaraki University, Hitachi, Ibaraki, 316-8511, Japan.
23
Bürck M, Friedel P, Sichert AB, Vossen C, van Hemmen JL. Optimality in mono- and multisensory map formation. BIOLOGICAL CYBERNETICS 2010; 103:1-20. [PMID: 20502911 DOI: 10.1007/s00422-010-0393-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/05/2010] [Accepted: 04/10/2010] [Indexed: 05/29/2023]
Abstract
In the struggle for survival in a complex and dynamic environment, nature has developed a multitude of sophisticated sensory systems. In order to exploit the information provided by these sensory systems, higher vertebrates reconstruct the spatio-temporal environment from each of the sensory systems they have at their disposal. That is, for each modality the animal computes a neuronal representation of the outside world, a monosensory neuronal map. Here we present a universal framework that makes it possible to calculate the specific layout of the involved neuronal network by means of a general mathematical principle, viz., stochastic optimality. To illustrate the use of this theoretical framework, we provide a step-by-step tutorial on how to apply our model. In so doing, we present a spatial and a temporal example of optimal stimulus reconstruction that underline the advantages of our approach. That is, given known physical signal transmission and rudimentary knowledge of the detection process, our approach makes it possible to estimate the achievable performance and to predict neuronal properties of biological sensory systems. Finally, information from different sensory modalities has to be integrated so as to gain a unified perception of reality for further processing, e.g., for distinct motor commands. We briefly discuss concepts of multimodal interaction and how a multimodal space can evolve by alignment of monosensory maps.
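Optimality frameworks of this kind build on the textbook maximum-likelihood result for combining independent Gaussian cues. A minimal sketch (a general illustration, not this paper's specific derivation):

```python
def fuse_cues(x1, var1, x2, var2):
    """Inverse-variance (maximum-likelihood) fusion of two independent
    Gaussian cue estimates; the fused variance is never larger than
    either input's, so fusion can only help."""
    p1, p2 = 1.0 / var1, 1.0 / var2          # precisions
    x = (p1 * x1 + p2 * x2) / (p1 + p2)      # precision-weighted mean
    return x, 1.0 / (p1 + p2)                # fused variance

# Equally reliable cues are simply averaged and reliability doubles:
x, var = fuse_cues(10.0, 4.0, 14.0, 4.0)     # -> (12.0, 2.0)
```

When one cue is much noisier, its weight shrinks in proportion to its precision, which is the quantitative content of "stochastic optimality" at the level of two fused maps.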
Affiliation(s)
- Moritz Bürck
- Technical University of Munich, Munich, Germany.
24
Magosso E. Integrating Information From Vision and Touch: A Neural Network Modeling Study. IEEE Trans Inf Technol Biomed 2010; 14:598-612. [DOI: 10.1109/titb.2010.2040750] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
25
Cuppini C, Ursino M, Magosso E, Rowland BA, Stein BE. An emergent model of multisensory integration in superior colliculus neurons. Front Integr Neurosci 2010; 4:6. [PMID: 20431725 PMCID: PMC2861478 DOI: 10.3389/fnint.2010.00006] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2009] [Accepted: 03/03/2010] [Indexed: 11/21/2022] Open
Abstract
Neurons in the cat superior colliculus (SC) integrate information from different senses to enhance their responses to cross-modal stimuli. These multisensory SC neurons receive multiple converging unisensory inputs from many sources; those received from association cortex are critical for the manifestation of multisensory integration. The mechanisms underlying this characteristic property of SC neurons are not completely understood, but can be clarified with the use of mathematical models and computer simulations. Thus the objective of the current effort was to present a plausible model that can explain the main physiological features of multisensory integration based on the current neurological literature regarding the influences received by SC from cortical and subcortical sources. The model assumes the presence of competitive mechanisms between inputs, nonlinearities in NMDA receptor responses, and provides a priori synaptic weights to mimic the normal responses of SC neurons. As a result, it provides a basis for understanding the dependence of multisensory enhancement on an intact association cortex, and simulates the changes in the SC response that occur during NMDA receptor blockade. Finally, it makes testable predictions about why significant response differences are obtained in multisensory SC neurons when they are confronted with pairs of cross-modal and within-modal stimuli. By postulating plausible biological mechanisms to complement those that are already known, the model provides a basis for understanding how SC neurons are capable of engaging in this remarkable process.
Affiliation(s)
- Cristiano Cuppini
- Department of Electronics, Computer Science and Systems, University of Bologna, Bologna, Italy
26
Stein BE, Stanford TR, Ramachandran R, Perrault TJ, Rowland BA. Challenges in quantifying multisensory integration: alternative criteria, models, and inverse effectiveness. Exp Brain Res 2009; 198:113-26. [PMID: 19551377 PMCID: PMC3056521 DOI: 10.1007/s00221-009-1880-8] [Citation(s) in RCA: 134] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2008] [Accepted: 05/21/2009] [Indexed: 11/30/2022]
Abstract
Single-neuron studies provide a foundation for understanding many facets of multisensory integration. These studies have used a variety of criteria for identifying and quantifying multisensory integration. While a number of techniques have been used, an explicit discussion of the assumptions, criteria, and analytical methods traditionally used to define the principles of multisensory integration is lacking. This was not problematic when the field was small, but with rapid growth a number of alternative techniques and models have been introduced, each with its own criteria and sets of implicit assumptions to define and characterize what is thought to be the same phenomenon. The potential for misconception prompted this reexamination of traditional approaches in order to clarify their underlying assumptions and analytic techniques. The objective here is to review and discuss traditional quantitative methods advanced in the study of single-neuron physiology in order to appreciate the process of multisensory integration and its impact.
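The traditional criterion such reviews revolve around is the single-neuron enhancement index commonly attributed to Meredith and Stein: the cross-modal response expressed as a percent change from the best unisensory response. A minimal sketch (the impulse counts are illustrative, assumed values):

```python
def multisensory_enhancement(cm, best_uni):
    """Classic index: percent change of the cross-modal response (cm)
    relative to the response to the most effective single-modality
    stimulus. Positive values indicate enhancement, negative depression."""
    return 100.0 * (cm - best_uni) / best_uni

# e.g. 18 impulses/trial cross-modal vs. 10 for the best unisensory stimulus:
me = multisensory_enhancement(cm=18.0, best_uni=10.0)   # -> 80.0 (% enhancement)
```

Alternative criteria discussed in the literature instead compare the cross-modal response against the *sum* of unisensory responses (additivity), which is one source of the divergent conclusions the review examines.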
Affiliation(s)
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC 27157, USA
27
Alvarado JC, Stanford TR, Rowland BA, Vaughan JW, Stein BE. Multisensory integration in the superior colliculus requires synergy among corticocollicular inputs. J Neurosci 2009; 29:6580-92. [PMID: 19458228 PMCID: PMC2805025 DOI: 10.1523/jneurosci.0525-09.2009] [Citation(s) in RCA: 47] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2009] [Revised: 03/31/2009] [Accepted: 04/05/2009] [Indexed: 11/21/2022] Open
Abstract
Influences from the visual (AEV), auditory (FAES), and somatosensory (SIV) divisions of the cat anterior ectosylvian sulcus (AES) play a critical role in rendering superior colliculus (SC) neurons capable of multisensory integration. However, it is not known whether this is accomplished via their independent sensory-specific action or via some cross-modal cooperative action that emerges as a consequence of their convergence on SC neurons. Using visual-auditory SC neurons as a model, we examined how selective and combined deactivation of FAES and AEV affected SC multisensory (visual-auditory) and unisensory (visual-visual) integration capabilities. As noted earlier, multisensory integration yielded SC responses that were significantly greater than those evoked by the most effective individual component stimulus. This multisensory "response enhancement" was more evident when the component stimuli were weakly effective. Conversely, unisensory integration was dominated by the lack of response enhancement. During cryogenic deactivation of FAES and/or AEV, the unisensory responses of SC neurons were only modestly affected; however, their multisensory response enhancement showed a significant downward shift and was eliminated. The shift was similar in magnitude for deactivation of either AES subregion and, in general, only marginally greater when both were deactivated simultaneously. These data reveal that SC multisensory integration is dependent on the cooperative action of distinct subsets of unisensory corticofugal afferents, afferents whose sensory combination matches the multisensory profile of their midbrain target neurons, and whose functional synergy is specific to rendering SC neurons capable of synthesizing information from those particular senses.
Affiliation(s)
- Juan Carlos Alvarado
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina 27157, USA.