1
Peng B, Huang JJ, Li Z, Zhang LI, Tao HW. Cross-modal enhancement of defensive behavior via parabigemino-collicular projections. Curr Biol 2024; 34:3616-3631.e5. [PMID: 39019036] [PMCID: PMC11373540] [DOI: 10.1016/j.cub.2024.06.052]
Abstract
Effective detection and avoidance of environmental threats are crucial for animals' survival. Integration of sensory cues associated with threats across different modalities can significantly enhance animals' detection and behavioral responses. However, the neural circuit-level mechanisms underlying the modulation of defensive behavior or fear responses under simultaneous multimodal sensory inputs remain poorly understood. Here, we report in mice that bimodal looming stimuli combining coherent visual and auditory signals elicit more robust defensive/fear reactions than unimodal stimuli. These include intensified escape and prolonged hiding, suggesting a heightened defensive/fear state. These various responses depend on the activity of the superior colliculus (SC), while its downstream nucleus, the parabigeminal nucleus (PBG), predominantly influences the duration of hiding behavior. The PBG temporally integrates visual and auditory signals and enhances the salience of threat signals by amplifying SC sensory responses through its feedback projection to the visual layer of the SC. Our results suggest an evolutionarily conserved pathway in defense circuits for multisensory integration and cross-modality enhancement.
Affiliation(s)
- Bo Peng
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089, USA
- Junxiang J Huang
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Graduate Program in Biomedical and Biological Sciences, University of Southern California, Los Angeles, CA 90033, USA
- Zhong Li
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Li I Zhang
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Department of Physiology and Neuroscience, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA.
- Huizhong Whit Tao
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Department of Physiology and Neuroscience, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA.
2
Smyre SA, Bean NL, Stein BE, Rowland BA. The brain can develop conflicting multisensory principles to guide behavior. Cereb Cortex 2024; 34:bhae247. [PMID: 38879756] [PMCID: PMC11179994] [DOI: 10.1093/cercor/bhae247]
Abstract
Midbrain multisensory neurons undergo a significant postnatal transition in how they process cross-modal (e.g. visual-auditory) signals. In early stages, signals derived from common events are processed competitively; however, at later stages they are processed cooperatively such that their salience is enhanced. This transition reflects adaptation to cross-modal configurations that are consistently experienced and become informative about which correspond to common events. Tested here was the assumption that overt behaviors follow a similar maturation. Cats were reared in omnidirectional sound thereby compromising the experience needed for this developmental process. Animals were then repeatedly exposed to different configurations of visual and auditory stimuli (e.g. spatiotemporally congruent or spatially disparate) that varied on each side of space and their behavior was assessed using a detection/localization task. Animals showed enhanced performance to stimuli consistent with the experience provided: congruent stimuli elicited enhanced behaviors where spatially congruent cross-modal experience was provided, and spatially disparate stimuli elicited enhanced behaviors where spatially disparate cross-modal experience was provided. Cross-modal configurations not consistent with experience did not enhance responses. The presumptive benefit of such flexibility in the multisensory developmental process is to sensitize neural circuits (and the behaviors they control) to the features of the environment in which they will function. These experiments reveal that these processes have a high degree of flexibility, such that two (conflicting) multisensory principles can be implemented by cross-modal experience on opposite sides of space even within the same animal.
Affiliation(s)
- Scott A Smyre
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States
- Naomi L Bean
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States
3
Melleu FF, Canteras NS. Pathways from the Superior Colliculus to the Basal Ganglia. Curr Neuropharmacol 2024; 22:1431-1453. [PMID: 37702174] [PMCID: PMC11097988] [DOI: 10.2174/1570159x21666230911102118]
Abstract
The present work aims to review the structural organization of the mammalian superior colliculus (SC), the putative pathways connecting the SC and the basal ganglia, and their role in organizing complex behavioral output. First, we review how the complex intrinsic connections between the SC's laminae allow for the construction of spatially aligned, visual-multisensory maps of the surrounding environment. Moreover, we present a summary of the sensory-motor inputs of the SC, including a description of the integration of multi-sensory inputs relevant to behavioral control. We further examine the major descending outputs toward the brainstem and spinal cord. As the central piece of this review, we provide a thorough analysis covering the putative interactions between the SC and the basal ganglia. To this end, we explore the diverse thalamic routes by which information from the SC may reach the striatum, including the pathways through the lateral posterior, parafascicular, and rostral intralaminar thalamic nuclei. We also examine the interactions between the SC and the subthalamic nucleus, which represent an additional pathway for tectal modulation of the basal ganglia. Moreover, we discuss how information from the SC might also be relayed to the basal ganglia through midbrain tectonigral and tectotegmental projections directed at the substantia nigra pars compacta and ventral tegmental area, respectively, influencing the dopaminergic outflow to the dorsal and ventral striatum. We highlight the vast interplay between the SC and the basal ganglia and raise several open questions that warrant being addressed in future studies.
Affiliation(s)
- Newton Sabino Canteras
- Department of Anatomy, Institute of Biomedical Sciences, University of Sao Paulo, Sao Paulo, SP, Brazil
4
Bean NL, Stein BE, Rowland BA. Cross-modal exposure restores multisensory enhancement after hemianopia. Cereb Cortex 2023; 33:11036-11046. [PMID: 37724427] [PMCID: PMC10646694] [DOI: 10.1093/cercor/bhad343]
Abstract
Hemianopia is a common consequence of unilateral damage to visual cortex that manifests as a profound blindness in contralesional space. A noninvasive cross-modal (visual-auditory) exposure paradigm has been developed in an animal model to ameliorate this disorder. Repeated presentation of a visual-auditory stimulus restores overt responses to visual stimuli in the blinded hemifield. It is believed to accomplish this by enhancing the visual sensitivity of circuits remaining after a lesion of visual cortex; in particular, circuits involving the multisensory neurons of the superior colliculus. Neurons in this midbrain structure are known to integrate spatiotemporally congruent visual and auditory signals to amplify their responses, which, in turn, enhances behavioral performance. Here we evaluated the relationship between the rehabilitation of hemianopia and this process of multisensory integration. Induction of hemianopia also eliminated multisensory enhancement in the blinded hemifield. Both vision and multisensory enhancement rapidly recovered with the rehabilitative cross-modal exposures. However, although both reached pre-lesion levels at similar rates, they did so with different spatial patterns. The results suggest that the capability for multisensory integration and enhancement is not a prerequisite for visual recovery in hemianopia, and that the underlying mechanisms for recovery may be more complex than currently appreciated.
Affiliation(s)
- Naomi L Bean
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, United States
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, United States
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, United States
5
Choi I, Demir I, Oh S, Lee SH. Multisensory integration in the mammalian brain: diversity and flexibility in health and disease. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220338. [PMID: 37545309] [PMCID: PMC10404930] [DOI: 10.1098/rstb.2022.0338]
Abstract
Multisensory integration (MSI) occurs in a variety of brain areas, spanning cortical and subcortical regions. In traditional studies on sensory processing, the sensory cortices have been considered to process sensory information in a modality-specific manner. The sensory cortices, however, send the information to other cortical and subcortical areas, including the higher association cortices and the other sensory cortices, where inputs from multiple modalities converge and are integrated to generate a meaningful percept. This integration process is neither simple nor fixed because these brain areas interact with each other via complicated circuits, which can be modulated by numerous internal and external conditions. As a result, dynamic MSI makes multisensory decisions flexible and adaptive in behaving animals. Impairments in MSI occur in many psychiatric disorders, which may result in an altered perception of multisensory stimuli and an abnormal reaction to them. This review discusses the diversity and flexibility of MSI in mammals, including humans, primates and rodents, as well as the brain areas involved. It further explains how such flexibility influences perceptual experiences in behaving animals in both health and disease. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Ilsong Choi
- Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea
- Ilayda Demir
- Department of biological sciences, KAIST, Daejeon 34141, Republic of Korea
- Seungmi Oh
- Department of biological sciences, KAIST, Daejeon 34141, Republic of Korea
- Seung-Hee Lee
- Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea
- Department of biological sciences, KAIST, Daejeon 34141, Republic of Korea
6
Smyre SA, Bean NL, Stein BE, Rowland BA. Predictability alters multisensory responses by modulating unisensory inputs. Front Neurosci 2023; 17:1150168. [PMID: 37065927] [PMCID: PMC10090419] [DOI: 10.3389/fnins.2023.1150168]
Abstract
The multisensory (deep) layers of the superior colliculus (SC) play an important role in detecting, localizing, and guiding orientation responses to salient events in the environment. Essential to this role is the ability of SC neurons to enhance their responses to events detected by more than one sensory modality and, via modulatory dynamics, to become desensitized ('attenuated' or 'habituated') or sensitized ('potentiated') to events that are predictable. To identify the nature of these modulatory dynamics, we examined how the repetition of different sensory stimuli affected the unisensory and multisensory responses of neurons in the cat SC. Neurons were presented with 2 Hz stimulus trains of three identical visual, auditory, or combined visual-auditory stimuli, followed by a fourth stimulus that was either the same or different ('switch'). Modulatory dynamics proved to be sensory-specific: they did not transfer when the stimulus switched to another modality. However, they did transfer when switching from the visual-auditory stimulus train to either of its modality-specific component stimuli and vice versa. These observations suggest that predictions, in the form of modulatory dynamics induced by stimulus repetition, are independently sourced from and applied to the modality-specific inputs to the multisensory neuron. This falsifies several plausible mechanisms for these modulatory dynamics: they neither produce general changes in the neuron's transform, nor are they dependent on the neuron's output.
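A toy illustration of the interpretation above: the sketch below gives each modality its own adaptation gain, so repeating a stimulus habituates only the channels it drives. The channel weights, the 30% depression per repetition, and the restriction to habituation (ignoring potentiation) are invented assumptions for illustration and are not this study's model or parameters.

```python
def run_trials(trials, rho=0.3):
    """Simulate modality-specific adaptation feeding a multisensory neuron.

    Each modality keeps its own adaptation gain in [0, 1]. A trial is "V",
    "A", or "VA"; every presentation depresses the gain of the driven
    channel(s) by a factor (1 - rho). The neuron's response is simply the
    sum of the adapted channel drives.
    """
    gain = {"V": 1.0, "A": 1.0}
    responses = []
    for stim in trials:
        responses.append(round(sum(gain[m] for m in stim), 2))
        for m in stim:                 # habituate only the driven channels
            gain[m] *= (1.0 - rho)
    return responses

# Habituation does not transfer across modalities ...
print(run_trials(["V", "V", "V", "A"]))     # -> [1.0, 0.7, 0.49, 1.0]
# ... but it does transfer between a cross-modal pair and its components.
print(run_trials(["VA", "VA", "VA", "V"]))  # -> [2.0, 1.4, 0.98, 0.34]
```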
7
Kheirkhah K, Moradi V, Kavianpour I, Farahani S. Comparison of Maturity in Auditory-Visual Multisensory Processes With Sound-Induced Flash Illusion Test in Children and Adults. Cureus 2022; 14:e27631. [PMID: 36072200] [PMCID: PMC9437373] [DOI: 10.7759/cureus.27631]
8
Jiang H, Stanford TR, Rowland BA, Stein BE. Association Cortex Is Essential to Reverse Hemianopia by Multisensory Training. Cereb Cortex 2021; 31:5015-5023. [PMID: 34056645] [DOI: 10.1093/cercor/bhab138]
Abstract
Hemianopia induced by unilateral visual cortex lesions can be resolved by repeatedly exposing the blinded hemifield to auditory-visual stimuli. This rehabilitative "training" paradigm depends on mechanisms of multisensory plasticity that restore the lost visual responsiveness of multisensory neurons in the ipsilesional superior colliculus (SC) so that they can once again support vision in the blinded hemifield. These changes are thought to operate via the convergent visual and auditory signals relayed to the SC from association cortex (the anterior ectosylvian sulcus [AES], in cat). The present study tested this assumption by cryogenically deactivating ipsilesional AES in hemianopic, anesthetized cats during weekly multisensory training sessions. No signs of visual recovery were evident in this condition, even after providing animals with up to twice the number of training sessions required for effective rehabilitation. Subsequent training under the same conditions, but with AES active, reversed the hemianopia within the normal timeframe. These results indicate that the corticotectal circuit that is normally engaged in SC multisensory plasticity has to be operational for the brain to use visual-auditory experience to resolve hemianopia.
Affiliation(s)
- Huai Jiang
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Terrence R Stanford
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
9
Billock VA, Kinney MJ, Schnupp JW, Meredith MA. A simple vector-like law for perceptual information combination is also followed by a class of cortical multisensory bimodal neurons. iScience 2021; 24:102527. [PMID: 34142039] [PMCID: PMC8188495] [DOI: 10.1016/j.isci.2021.102527]
Abstract
An interdisciplinary approach to sensory information combination shows a correspondence between perceptual and neural measures of nonlinear multisensory integration. In psychophysics, sensory information combinations are often characterized by the Minkowski formula, but the neural substrates of many psychophysical multisensory interactions are unknown. We show that audiovisual interactions - for both psychophysical detection threshold data and cortical bimodal neurons - obey similar vector-like Minkowski models, suggesting that cortical bimodal neurons could underlie multisensory perceptual sensitivity. An alternative Bayesian model is not a good predictor of cortical bimodal responses. In contrast to cortex, audiovisual data from the superior colliculus resemble the 'City-Block' combination rule used in perceptual similarity metrics. Previous work found that a simple power-law amplification rule is followed both by perceptual appearance measures and by cortical subthreshold multisensory neurons. The two most studied neural cell classes in cortical multisensory interactions may therefore provide neural substrates for two important perceptual modes: appearance-based and performance-based perception.
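To make the combination rules named here concrete, the following sketch evaluates the Minkowski formula for two unisensory sensitivities, with the City-Block (p = 1) and Euclidean vector-like (p = 2) rules as special cases. The sensitivity values and exponents are arbitrary placeholders for illustration, not values fitted in the study.

```python
def minkowski(d_visual: float, d_auditory: float, p: float) -> float:
    """Minkowski combination of two unisensory sensitivities.

    p = 1 is the 'City-Block' rule, p = 2 a Euclidean vector-like rule;
    larger p approaches a winner-take-all (max) rule.
    """
    return (d_visual ** p + d_auditory ** p) ** (1.0 / p)

d_v, d_a = 1.2, 0.9   # placeholder unisensory sensitivities (d'-like units)

for p in (1.0, 2.0, 4.0):
    combined = minkowski(d_v, d_a, p)
    gain = 100.0 * (combined - max(d_v, d_a)) / max(d_v, d_a)
    print(f"p = {p:.0f}: combined = {combined:.2f} "
          f"({gain:.0f}% above the best unisensory value)")
```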
Affiliation(s)
- Vincent A. Billock
- Naval Aerospace Medical Research Laboratory, NAMRU-D, Wright-Patterson Air Force Base, OH 45433, USA
- Micah J. Kinney
- Naval Aerospace Medical Research Laboratory, NAMRU-D, Wright-Patterson Air Force Base, OH 45433, USA
- Naval Air Warfare Center, NAWCAD, Patuxent River, MD 20670, USA
- Jan W.H. Schnupp
- Department of Neuroscience, City University of Hong Kong, Kowloon Tong, Hong Kong, China
- M. Alex Meredith
- Department of Anatomy and Neurobiology, Virginia Commonwealth University, Richmond, VA 23298, USA
10
Smyre SA, Wang Z, Stein BE, Rowland BA. Multisensory enhancement of overt behavior requires multisensory experience. Eur J Neurosci 2021; 54:4514-4527. [PMID: 34013578] [DOI: 10.1111/ejn.15315]
Abstract
The superior colliculus (SC) is richly endowed with neurons that integrate cues from different senses to enhance their physiological responses and the overt behaviors they mediate. However, in the absence of experience with cross-modal combinations (e.g., visual-auditory), they fail to develop this characteristic multisensory capability: Their multisensory responses are no greater than their most effective unisensory responses. Presumably, this impairment in neural development would be reflected as corresponding impairments in SC-mediated behavioral capabilities such as detection and localization performance. Here, we tested that assumption directly in cats raised to adulthood in darkness. They, along with a normally reared cohort, were trained to approach brief visual or auditory stimuli. The animals were then tested with these stimuli individually and in combination under ambient light conditions consistent with their rearing conditions and home environment as well as under the opposite lighting condition. As expected, normally reared animals detected and localized the cross-modal combinations significantly better than their individual component stimuli. However, dark-reared animals showed significant defects in multisensory detection and localization performance. The results indicate that a physiological impairment in single multisensory SC neurons is predictive of an impairment in overt multisensory behaviors.
Affiliation(s)
- Scott A Smyre
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC, USA
- Zhengyang Wang
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC, USA
- Barry E Stein
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC, USA
- Benjamin A Rowland
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC, USA
11
Bean NL, Stein BE, Rowland BA. Stimulus value gates multisensory integration. Eur J Neurosci 2021; 53:3142-3159. [PMID: 33667027] [DOI: 10.1111/ejn.15167]
Abstract
The brain enhances its perceptual and behavioral decisions by integrating information from its multiple senses in what are believed to be optimal ways. This phenomenon of "multisensory integration" appears to be pre-conscious, effortless, and highly efficient. The present experiments examined whether experience could modify this seemingly automatic process. Cats were trained in a localization task in which congruent pairs of auditory-visual stimuli are normally integrated to enhance detection and orientation/approach performance. Consistent with the results of previous studies, animals more reliably detected and approached cross-modal pairs than their modality-specific component stimuli, regardless of whether the pairings were novel or familiar. However, when provided evidence that one of the modality-specific component stimuli had no value (it was not rewarded) animals ceased integrating it with other cues, and it lost its previous ability to enhance approach behaviors. Cross-modal pairings involving that stimulus failed to elicit enhanced responses even when the paired stimuli were congruent and mutually informative. However, the stimulus regained its ability to enhance responses when it was associated with reward. This suggests that experience can selectively block access of stimuli (i.e., filter inputs) to the multisensory computation. Because this filtering process results in the loss of useful information, its operation and behavioral consequences are not optimal. Nevertheless, the process can be of substantial value in natural environments, rich in dynamic stimuli, by using experience to minimize the impact of stimuli unlikely to be of biological significance, and reducing the complexity of the problem of matching signals across the senses.
Affiliation(s)
- Naomi L Bean
- Wake Forest School of Medicine, Winston-Salem, NC, USA
- Barry E Stein
- Wake Forest School of Medicine, Winston-Salem, NC, USA
12
Shadi K, Dyer E, Dovrolis C. Multisensory integration in the mouse cortical connectome using a network diffusion model. Netw Neurosci 2020; 4:1030-1054. [PMID: 33195947] [PMCID: PMC7655044] [DOI: 10.1162/netn_a_00164]
Abstract
Having a structural network representation of connectivity in the brain is instrumental in analyzing communication dynamics and neural information processing. In this work, we make steps towards understanding multisensory information flow and integration using a network diffusion approach. In particular, we model the flow of evoked activity, initiated by stimuli at primary sensory regions, using the asynchronous linear threshold (ALT) diffusion model. The ALT model captures how evoked activity that originates at a given region of the cortex “ripples through” other brain regions (referred to as an activation cascade). We apply the ALT model to the mouse connectome provided by the Allen Institute for Brain Science. A first result, using functional datasets based on voltage-sensitive dye (VSD) imaging, is that the ALT model, despite its simplicity, predicts the temporal ordering of each sensory activation cascade quite accurately. We further apply this model to study multisensory integration and find that a small number of brain regions–the claustrum and the parietal temporal cortex being at the top of the list–are involved in almost all cortical sensory streams. This suggests that the cortex relies on an hourglass architecture to first integrate and compress multisensory information from multiple sensory regions, before utilizing that lower dimensionality representation in higher level association regions and more complex cognitive tasks.
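For orientation, the sketch below runs a linear-threshold cascade of the kind the ALT model describes, but on a tiny invented graph: the region labels, edge weights, threshold, and single-step delays are placeholder assumptions and do not come from the Allen Institute connectome or the paper's parameterization.

```python
from collections import defaultdict

# Invented, weighted, directed toy graph: source -> [(target, weight), ...]
EDGES = {
    "VIS":   [("PTLp", 0.6), ("CLA", 0.5)],
    "AUD":   [("PTLp", 0.7), ("CLA", 0.4)],
    "PTLp":  [("ASSOC", 0.8)],
    "CLA":   [("ASSOC", 0.5)],
    "ASSOC": [],
}
THRESHOLD = 0.5   # a region activates once its accumulated input reaches this
DELAY = 1         # one time step per edge (the ALT model allows per-edge delays)

def cascade(seeds, max_t=10):
    """Return {region: activation time} for a cascade started at `seeds`."""
    activated = {s: 0 for s in seeds}
    accumulated = defaultdict(float)
    for t in range(1, max_t + 1):
        for src, t_src in list(activated.items()):
            if t_src == t - DELAY:                 # deliver output once, after the delay
                for dst, w in EDGES[src]:
                    accumulated[dst] += w
        for region, total in accumulated.items():  # threshold crossing -> activation
            if total >= THRESHOLD and region not in activated:
                activated[region] = t
    return activated

print(cascade({"VIS"}))          # unisensory visual activation cascade
print(cascade({"VIS", "AUD"}))   # converging multisensory cascade
```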
Affiliation(s)
- Kamal Shadi
- School of Computer Science, Georgia Institute of Technology, Atlanta, GA, USA
- Eva Dyer
- Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
13
Siemann JK, Veenstra-VanderWeele J, Wallace MT. Approaches to Understanding Multisensory Dysfunction in Autism Spectrum Disorder. Autism Res 2020; 13:1430-1449. [PMID: 32869933] [PMCID: PMC7721996] [DOI: 10.1002/aur.2375]
Abstract
Abnormal sensory responses are a DSM-5 symptom of autism spectrum disorder (ASD), and research findings demonstrate altered sensory processing in ASD. Beyond difficulties with processing information within single sensory domains, including both hypersensitivity and hyposensitivity, difficulties in multisensory processing are becoming a core issue of focus in ASD. These difficulties may be targeted by treatment approaches such as "sensory integration," which is frequently applied in autism treatment but not yet based on clear evidence. Recently, psychophysical data have emerged to demonstrate multisensory deficits in some children with ASD. Unlike deficits in social communication, which are best understood in humans, sensory and multisensory changes offer a tractable marker of circuit dysfunction that is more easily translated into animal model systems to probe the underlying neurobiological mechanisms. Paralleling experimental paradigms that were previously applied in humans and larger mammals, we and others have demonstrated that multisensory function can also be examined behaviorally in rodents. Here, we review the sensory and multisensory difficulties commonly found in ASD, examining laboratory findings that relate these findings across species. Next, we discuss the known neurobiology of multisensory integration, drawing largely on experimental work in larger mammals, and extensions of these paradigms into rodents. Finally, we describe emerging investigations into multisensory processing in genetic mouse models related to autism risk. By detailing findings from humans to mice, we highlight the advantage of multisensory paradigms that can be easily translated across species, as well as the potential for rodent experimental systems to reveal opportunities for novel treatments. LAY SUMMARY: Sensory and multisensory deficits are commonly found in ASD and may result in cascading effects that impact social communication. By using similar experiments to those in humans, we discuss how studies in animal models may allow an understanding of the brain mechanisms that underlie difficulties in multisensory integration, with the ultimate goal of developing new treatments. Autism Res 2020, 13: 1430-1449. © 2020 International Society for Autism Research, Wiley Periodicals, Inc.
Affiliation(s)
- Justin K Siemann
- Department of Biological Sciences, Vanderbilt University, Nashville, Tennessee, USA
- Jeremy Veenstra-VanderWeele
- Department of Psychiatry, Columbia University, Center for Autism and the Developing Brain, New York Presbyterian Hospital, and New York State Psychiatric Institute, New York, New York, USA
- Mark T Wallace
- Department of Psychiatry, Vanderbilt University, Nashville, Tennessee, USA
- Department of Psychology, Vanderbilt University, Nashville, Tennessee, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee, USA
- Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, Tennessee, USA
14
Làdavas E, Tosatto L, Bertini C. Behavioural and functional changes in neglect after multisensory stimulation. Neuropsychol Rehabil 2020; 32:662-689. [DOI: 10.1080/09602011.2020.1786411]
Affiliation(s)
- Elisabetta Làdavas
- Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Cesena, Italy
- Department of Psychology, University of Bologna, Bologna, Italy
- Caterina Bertini
- Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Cesena, Italy
- Department of Psychology, University of Bologna, Bologna, Italy
15
Oess T, Löhr MPR, Schmid D, Ernst MO, Neumann H. From Near-Optimal Bayesian Integration to Neuromorphic Hardware: A Neural Network Model of Multisensory Integration. Front Neurorobot 2020; 14:29. [PMID: 32499692] [PMCID: PMC7243343] [DOI: 10.3389/fnbot.2020.00029]
Abstract
While interacting with the world our senses and nervous system are constantly challenged to identify the origin and coherence of sensory input signals of various intensities. This problem becomes apparent when stimuli from different modalities need to be combined, e.g., to find out whether an auditory stimulus and a visual stimulus belong to the same object. To cope with this problem, humans and most other animal species are equipped with complex neural circuits to enable fast and reliable combination of signals from various sensory organs. This multisensory integration starts in the brain stem to facilitate unconscious reflexes and continues on ascending pathways to cortical areas for further processing. To investigate the underlying mechanisms in detail, we developed a canonical neural network model for multisensory integration that resembles neurophysiological findings. For example, the model comprises multisensory integration neurons that receive excitatory and inhibitory inputs from unimodal auditory and visual neurons, respectively, as well as feedback from cortex. Such feedback projections facilitate multisensory response enhancement and lead to the commonly observed inverse effectiveness of neural activity in multisensory neurons. Two versions of the model are implemented: a rate-based neural network model for qualitative analysis and a variant that employs spiking neurons for deployment on neuromorphic hardware. This dual approach allows us to create an evaluation environment in which model performance can be tested with real-world inputs. As a platform for deployment we chose IBM's neurosynaptic chip TrueNorth. Behavioral studies in humans indicate that temporal and spatial offsets as well as reliability of stimuli are critical parameters for integrating signals from different modalities. The model reproduces such behavior in experiments with different sets of stimuli. In particular, model performance is tested for stimuli with varying spatial offset. In addition, we demonstrate that, owing to the emergent properties of the network dynamics, model performance is close to optimal Bayesian inference for the integration of multimodal sensory signals. Furthermore, the implementation of the model on a neuromorphic processing chip enables a complete neuromorphic processing cascade from sensory perception to multisensory integration, and the evaluation of model performance with real-world inputs.
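For reference, the "optimal Bayesian inference" benchmark invoked here is, for two independent Gaussian cues, reliability-weighted averaging. The sketch below implements that textbook formula; it is the benchmark, not the authors' network model, and the example numbers are arbitrary.

```python
def fuse_gaussian_cues(mu_v, var_v, mu_a, var_a):
    """Reliability-weighted (inverse-variance) fusion of two Gaussian estimates.

    The fused variance is always smaller than either unisensory variance,
    which is why integration improves localization when cues are combined.
    """
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    mu_fused = w_v * mu_v + (1.0 - w_v) * mu_a
    var_fused = 1.0 / (1.0 / var_v + 1.0 / var_a)
    return mu_fused, var_fused

# A reliable visual cue at +2 deg and a noisier auditory cue at +10 deg.
mu, var = fuse_gaussian_cues(mu_v=2.0, var_v=1.0, mu_a=10.0, var_a=4.0)
print(f"fused estimate = {mu:.1f} deg, fused variance = {var:.2f}")  # 3.6 deg, 0.80
```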
Affiliation(s)
- Timo Oess
- Applied Cognitive Psychology, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Maximilian P R Löhr
- Vision and Perception Science Lab, Institute of Neural Information Processing, Ulm University, Ulm, Germany
- Daniel Schmid
- Vision and Perception Science Lab, Institute of Neural Information Processing, Ulm University, Ulm, Germany
- Marc O Ernst
- Applied Cognitive Psychology, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Heiko Neumann
- Vision and Perception Science Lab, Institute of Neural Information Processing, Ulm University, Ulm, Germany
16
Carlsen AN, Maslovat D, Kaga K. An unperceived acoustic stimulus decreases reaction time to visual information in a patient with cortical deafness. Sci Rep 2020; 10:5825. [PMID: 32242039] [PMCID: PMC7118083] [DOI: 10.1038/s41598-020-62450-9]
Abstract
Responding to multiple stimuli of different modalities has been shown to reduce reaction time (RT), yet many different processes can potentially contribute to multisensory response enhancement. To investigate the neural circuits involved in voluntary response initiation, an acoustic stimulus of varying intensities (80, 105, or 120 dB) was presented during a visual RT task to a patient with profound bilateral cortical deafness and an intact auditory brainstem response. Despite being unable to consciously perceive sound, RT was reliably shortened (~100 ms) on trials where the unperceived acoustic stimulus was presented, confirming the presence of multisensory response enhancement. Although the exact locus of this enhancement is unclear, these results cannot be attributed to involvement of the auditory cortex. Thus, these data provide new and compelling evidence that activation from subcortical auditory processing circuits can contribute to other cortical or subcortical areas responsible for the initiation of a response, without the need for conscious perception.
Affiliation(s)
- Dana Maslovat
- School of Kinesiology, University of British Columbia, Vancouver, Canada
- Kimitaka Kaga
- National Institute of Sensory Organs, National Tokyo Medical Center, Tokyo, Japan
17
Colonius H, Diederich A. Formal models and quantitative measures of multisensory integration: a selective overview. Eur J Neurosci 2020; 51:1161-1178. [DOI: 10.1111/ejn.13813]
Affiliation(s)
- Hans Colonius
- Department of Psychology, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
- Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA
- Adele Diederich
- Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA
- Life Sciences and Chemistry, Jacobs University Bremen, Bremen, Germany
18
Stein BE, Rowland BA. Using superior colliculus principles of multisensory integration to reverse hemianopia. Neuropsychologia 2020; 141:107413. [PMID: 32113921] [DOI: 10.1016/j.neuropsychologia.2020.107413]
Abstract
The diversity of our senses conveys many advantages; it enables them to compensate for one another when needed, and the information they provide about a common event can be integrated to facilitate its processing and, ultimately, adaptive responses. These cooperative interactions are produced by multisensory neurons. A well-studied model in this context is the multisensory neuron in the output layers of the superior colliculus (SC). These neurons integrate and amplify their cross-modal (e.g., visual-auditory) inputs, thereby enhancing the physiological salience of the initiating event and the probability that it will elicit SC-mediated detection, localization, and orientation behavior. Repeated experience with the same visual-auditory stimulus can also increase the neuron's sensitivity to these individual inputs. This observation raised the possibility that such plasticity could be engaged to restore visual responsiveness when compromised. For example, unilateral lesions of visual cortex compromise the visual responsiveness of neurons in the multisensory output layers of the ipsilesional SC and produce profound contralesional blindness (hemianopia). The possibility that multisensory plasticity could restore the visual responses of these neurons, and reverse blindness, was tested in the cat model of hemianopia. Hemianopic subjects were repeatedly presented with spatiotemporally congruent visual-auditory stimulus pairs in the blinded hemifield on a daily or weekly basis. After several weeks of this multisensory exposure paradigm, visual responsiveness was restored in SC neurons and behavioral responses were elicited by visual stimuli in the previously blind hemifield. The constraints on the effectiveness of this procedure proved to be the same as those constraining SC multisensory plasticity: whereas repetition of a congruent visual-auditory stimulus was highly effective, neither exposure to its individual component stimuli nor exposure to these stimuli in non-congruent configurations was effective. The restored visual responsiveness proved to be robust, highly competitive with that in the intact hemifield, and sufficient to support visual discrimination.
Affiliation(s)
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC, 27157, USA
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC, 27157, USA.
19
Sugiyama S, Kinukawa T, Takeuchi N, Nishihara M, Shioiri T, Inui K. Tactile Cross-Modal Acceleration Effects on Auditory Steady-State Response. Front Integr Neurosci 2019; 13:72. [PMID: 31920574] [PMCID: PMC6927992] [DOI: 10.3389/fnint.2019.00072]
Abstract
In the sensory cortex, cross-modal interaction occurs during the early cortical stages of processing; however, its effect on the speed of neuronal activity remains unclear. In this study, we used magnetoencephalography (MEG) to investigate whether tactile stimulation influences auditory steady-state responses (ASSRs). To this end, a 0.5-ms electrical pulse was randomly presented to the dorsum of the left or right hand of 12 healthy volunteers at 700 ms while a train of 25-ms pure tones was applied to the left or right side at 75 dB for 1,200 ms. Peak latencies of the 40-Hz ASSR were measured. Our results indicated that tactile stimulation significantly shortened subsequent ASSR latency. This cross-modal effect was observed from approximately 50 ms to 125 ms after the onset of tactile stimulation. The somatosensory information that appeared to converge on the auditory system may have arisen during the early processing stages, with the reduced ASSR latency indicating that a new sensory event from the cross-modal inputs served to increase the speed of ongoing sensory processing. Collectively, our findings indicate that ASSR latency changes are a sensitive index of accelerated processing.
Affiliation(s)
- Shunsuke Sugiyama
- Department of Psychiatry and Psychotherapy, Gifu University Graduate School of Medicine, Gifu, Japan
- Tomoaki Kinukawa
- Department of Anesthesiology, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Makoto Nishihara
- Multidisciplinary Pain Center, Aichi Medical University, Nagakute, Japan
- Toshiki Shioiri
- Department of Psychiatry and Psychotherapy, Gifu University Graduate School of Medicine, Gifu, Japan
- Koji Inui
- Department of Functioning and Disability, Institute for Developmental Research, Kasugai, Japan
20
Meijer GT, Mertens PEC, Pennartz CMA, Olcese U, Lansink CS. The circuit architecture of cortical multisensory processing: Distinct functions jointly operating within a common anatomical network. Prog Neurobiol 2019; 174:1-15. [PMID: 30677428] [DOI: 10.1016/j.pneurobio.2019.01.004]
Abstract
Our perceptual systems continuously process sensory inputs from different modalities and organize these streams of information such that our subjective representation of the outside world is a unified experience. By doing so, they also enable further cognitive processing and behavioral action. While cortical multisensory processing has been extensively investigated in terms of psychophysics and mesoscale neural correlates, an in depth understanding of the underlying circuit-level mechanisms is lacking. Previous studies on circuit-level mechanisms of multisensory processing have predominantly focused on cue integration, i.e. the mechanism by which sensory features from different modalities are combined to yield more reliable stimulus estimates than those obtained by using single sensory modalities. In this review, we expand the framework on the circuit-level mechanisms of cortical multisensory processing by highlighting that multisensory processing is a family of functions - rather than a single operation - which involves not only the integration but also the segregation of modalities. In addition, multisensory processing not only depends on stimulus features, but also on cognitive resources, such as attention and memory, as well as behavioral context, to determine the behavioral outcome. We focus on rodent models as a powerful instrument to study the circuit-level bases of multisensory processes, because they enable combining cell-type-specific recording and interventional techniques with complex behavioral paradigms. We conclude that distinct multisensory processes share overlapping anatomical substrates, are implemented by diverse neuronal micro-circuitries that operate in parallel, and are flexibly recruited based on factors such as stimulus features and behavioral constraints.
Affiliation(s)
- Guido T Meijer
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
- Paul E C Mertens
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
- Cyriel M A Pennartz
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
- Umberto Olcese
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
- Carien S Lansink
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
21
Cross-Modal Competition: The Default Computation for Multisensory Processing. J Neurosci 2018; 39:1374-1385. [PMID: 30573648] [DOI: 10.1523/jneurosci.1806-18.2018]
Abstract
Mature multisensory superior colliculus (SC) neurons integrate information across the senses to enhance their responses to spatiotemporally congruent cross-modal stimuli. The development of this neurotypic feature of SC neurons requires experience with cross-modal cues. In the absence of such experience the response of an SC neuron to congruent cross-modal cues is no more robust than its response to the most effective component cue. This "default" or "naive" state is believed to be one in which cross-modal signals do not interact. The present results challenge this characterization by identifying interactions between visual-auditory signals in male and female cats reared without visual-auditory experience. By manipulating the relative effectiveness of the visual and auditory cross-modal cues that were presented to each of these naive neurons, an active competition between cross-modal signals was revealed. Although contrary to current expectations, this result is explained by a neuro-computational model in which the default interaction is mutual inhibition. These findings suggest that multisensory neurons at all maturational stages are capable of some form of multisensory integration, and use experience with cross-modal stimuli to transition from their initial state of competition to their mature state of cooperation. By doing so, they develop the ability to enhance the physiological salience of cross-modal events thereby increasing their impact on the sensorimotor circuitry of the SC, and the likelihood that biologically significant events will elicit SC-mediated overt behaviors.

SIGNIFICANCE STATEMENT: The present results demonstrate that the default mode of multisensory processing in the superior colliculus is competition, not non-integration as previously characterized. A neuro-computational model explains how these competitive dynamics can be implemented via mutual inhibition, and how this default mode is superseded by the emergence of cooperative interactions during development.
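As a minimal sketch of the kind of computation proposed here (not the paper's actual model or parameters), the following two-channel rate model lets the visual and auditory inputs inhibit one another before their outputs are summed. With sufficiently strong mutual inhibition the combined response is no better than the stronger unisensory response, mimicking the "naive" default state; weakening or removing the inhibition moves the network toward cooperative enhancement.

```python
def respond(drive_v, drive_a, w_inhib=0.9, tau=10.0, dt=1.0, steps=400):
    """Two rate units with mutual inhibition; the readout sums their activity.

    drive_v / drive_a are the unisensory input strengths; w_inhib sets how
    strongly each channel suppresses the other (an illustrative assumption).
    """
    r_v = r_a = 0.0
    for _ in range(steps):
        r_v += dt / tau * (-r_v + max(0.0, drive_v - w_inhib * r_a))
        r_a += dt / tau * (-r_a + max(0.0, drive_a - w_inhib * r_v))
    return r_v + r_a

v_alone = respond(1.0, 0.0)
a_alone = respond(0.0, 0.8)
combined = respond(1.0, 0.8)
print(f"V: {v_alone:.2f}  A: {a_alone:.2f}  VA: {combined:.2f}")
# With strong inhibition, VA stays near max(V, A) instead of approaching V + A.
```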
22
Gharaei S, Arabzadeh E, Solomon SG. Integration of visual and whisker signals in rat superior colliculus. Sci Rep 2018; 8:16445. [PMID: 30401871] [PMCID: PMC6219574] [DOI: 10.1038/s41598-018-34661-8]
Abstract
Multisensory integration is a process by which signals from different sensory modalities are combined to facilitate detection and localization of external events. One substrate for multisensory integration is the midbrain superior colliculus (SC) which plays an important role in orienting behavior. In rodent SC, visual and somatosensory (whisker) representations are in approximate registration, but whether and how these signals interact is unclear. We measured spiking activity in SC of anesthetized hooded rats, during presentation of visual- and whisker stimuli that were tested simultaneously or in isolation. Visual responses were found in all layers, but were primarily located in superficial layers. Whisker responsive sites were primarily found in intermediate layers. In single- and multi-unit recording sites, spiking activity was usually only sensitive to one modality, when stimuli were presented in isolation. By contrast, we observed robust and primarily suppressive interactions when stimuli were presented simultaneously to both modalities. We conclude that while visual and whisker representations in SC of rat are partially overlapping, there is limited excitatory convergence onto individual sites. Multimodal integration may instead rely on suppressive interactions between modalities.
Affiliation(s)
- Saba Gharaei
- Discipline of Physiology, School of Medical Sciences, The University of Sydney, Sydney, Australia; Eccles Institute of Neuroscience, John Curtin School of Medical Research, The Australian National University, Canberra, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, The Australian National University Node, Canberra, Australia
- Ehsan Arabzadeh
- Eccles Institute of Neuroscience, John Curtin School of Medical Research, The Australian National University, Canberra, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, The Australian National University Node, Canberra, Australia
- Samuel G Solomon
- Discipline of Physiology, School of Medical Sciences, The University of Sydney, Sydney, Australia; Institute of Behavioural Neuroscience, University College London, London, UK
23
Xu J, Bi T, Wu J, Meng F, Wang K, Hu J, Han X, Zhang J, Zhou X, Keniston L, Yu L. Spatial receptive field shift by preceding cross-modal stimulation in the cat superior colliculus. J Physiol 2018; 596:5033-5050. [PMID: 30144059] [DOI: 10.1113/jp275427]
Abstract
KEY POINTS: It has been known for some time that sensory information of one type can bias the spatial perception of another modality. However, there is a lack of evidence of this occurring in individual neurons. In the present study, we found that the spatial receptive field of superior colliculus multisensory neurons could be dynamically shifted by a preceding stimulus in a different modality. The extent to which the receptive field shifted was dependent on both temporal and spatial gaps between the preceding and following stimuli, as well as the salience of the preceding stimulus. This result provides a neural mechanism that could underlie the process of cross-modal spatial calibration.

ABSTRACT: Psychophysical studies have shown that the different senses can be spatially entrained by each other. This can be observed in certain phenomena, such as ventriloquism, in which a visual stimulus can attract the perceived location of a spatially discordant sound. However, the neural mechanism underlying this cross-modal spatial recalibration has remained unclear, as has whether it takes place dynamically. We explored these issues in multisensory neurons of the cat superior colliculus (SC), a midbrain structure that involves both cross-modal and sensorimotor integration. Sequential cross-modal stimulation showed that the preceding stimulus can shift the receptive field (RF) of the lagging response. This cross-modal spatial calibration took place in both auditory and visual RFs, although auditory RFs shifted slightly more. By contrast, if a preceding stimulus was from the same modality, it failed to induce a similarly substantial RF shift. The extent of the RF shift was dependent on both temporal and spatial gaps between the preceding and following stimuli, as well as the salience of the preceding stimulus. A narrow time gap and high stimulus salience were able to induce larger RF shifts. In addition, when both visual and auditory stimuli were presented simultaneously, a substantial RF shift toward the location-fixed stimulus was also induced. These results, taken together, reveal an online cross-modal process and reflect the details of the organization of SC inter-sensory spatial calibration.
Affiliation(s)
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
- Tingting Bi
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| | - Jing Wu
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| | - Fanzhu Meng
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| | - Kun Wang
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| | - Jiawei Hu
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| | - Xiao Han
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| | - Jiping Zhang
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| | - Xiaoming Zhou
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| | - Les Keniston
- Department of Physical Therapy, University of Maryland Eastern Shore, Princess Anne, MD, USA
| | - Liping Yu
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| |
Collapse
|
24
|
Effect of acceleration of auditory inputs on the primary somatosensory cortex in humans. Sci Rep 2018; 8:12883. [PMID: 30150686 PMCID: PMC6110726 DOI: 10.1038/s41598-018-31319-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2018] [Accepted: 08/17/2018] [Indexed: 11/09/2022] Open
Abstract
Cross-modal interaction occurs during the early stages of processing in the sensory cortex; however, its effect on the speed of neuronal activity remains unclear. We used magnetoencephalography to investigate whether auditory stimulation influences the initial cortical activity in the primary somatosensory cortex. Because we did not observe any cross-modal effect elicited by a single pulse, electrical pulses were applied to the left or right median nerve at 20 Hz for 1500 ms, and a 25-ms pure tone was randomly presented to the left or right side of healthy volunteers at 1000 ms. The latency of the N20m originating from Brodmann's area 3b was measured for each pulse. The auditory stimulation significantly shortened the N20m latency at 1050 and 1100 ms. This reduction in N20m latency was identical for ipsilateral and contralateral sounds at both time points. Therefore, somatosensory-auditory interaction occurred during the early stages of synaptic transmission, such as at the thalamic input to area 3b. Auditory information converging on the somatosensory system was considered to arise from the early stages of the feedforward pathway. The acceleration of information processing through this cross-modal interaction thus appears to be partly due to faster processing in the sensory cortex.
Collapse
|
25
|
Modified Origins of Cortical Projections to the Superior Colliculus in the Deaf: Dispersion of Auditory Efferents. J Neurosci 2018; 38:4048-4058. [PMID: 29610441 DOI: 10.1523/jneurosci.2858-17.2018] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2017] [Revised: 03/13/2018] [Accepted: 03/16/2018] [Indexed: 11/21/2022] Open
Abstract
Following the loss of a sensory modality, such as deafness or blindness, crossmodal plasticity is commonly identified in regions of the cerebrum that normally process the deprived modality. It has been hypothesized that significant changes in the patterns of cortical afferent and efferent projections may underlie these functional crossmodal changes. However, studies of thalamocortical and corticocortical connections have refuted this hypothesis, instead revealing a profound resilience of cortical afferent projections following deafness and blindness. This report is the first study of cortical outputs following sensory deprivation, characterizing cortical projections to the superior colliculus in mature cats (N = 5, 3 female) with perinatal-onset deafness. The superior colliculus was exposed to a retrograde pathway tracer, and the labeled cells throughout the cerebrum were subsequently identified and quantified. Overall, the percentage of cortical projections arising from auditory cortex was substantially increased, not decreased, in early-deaf cats compared with intact animals. Furthermore, the labeled cortical neurons were no longer localized to a particular subregion of auditory cortex but were dispersed across auditory cortical regions. Collectively, these results demonstrate that, although patterns of cortical afferents are stable following perinatal deafness, the patterns of cortical efferents to the superior colliculus are highly mutable.

SIGNIFICANCE STATEMENT: When a sense is lost, the remaining senses are functionally enhanced through compensatory crossmodal plasticity. In deafness, brain regions that normally process sound contribute to enhanced visual and somatosensory perception. We demonstrate that hearing loss alters connectivity between sensory cortex and the superior colliculus, a midbrain region that integrates sensory representations to guide orientation behavior. Contrary to expectation, the proportion of projections from auditory cortex increased in deaf animals compared with normally hearing animals, with a broad distribution across auditory fields. This is the first description of changes in cortical efferents following sensory loss, and it provides support for models predicting an inability to form a coherent, multisensory percept of the environment following periods of abnormal development.
Collapse
|
26
|
Development of the Mechanisms Governing Midbrain Multisensory Integration. J Neurosci 2018; 38:3453-3465. [PMID: 29496891 DOI: 10.1523/jneurosci.2631-17.2018] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2017] [Revised: 12/15/2017] [Accepted: 01/19/2018] [Indexed: 11/21/2022] Open
Abstract
The ability to integrate information across multiple senses enhances the brain's ability to detect, localize, and identify external events. This process has been well documented in single neurons in the superior colliculus (SC), which synthesize concordant combinations of visual, auditory, and/or somatosensory signals to enhance the vigor of their responses. This increases the physiological salience of crossmodal events and, in turn, the speed and accuracy of SC-mediated behavioral responses to them. However, this capability is not an innate feature of the circuit and only develops postnatally after the animal acquires sufficient experience with covariant crossmodal events to form links between their modality-specific components. Of critical importance in this process are tectopetal influences from association cortex. Recent findings suggest that, despite its intuitive appeal, a simple generic associative rule cannot explain how this circuit develops its ability to integrate those crossmodal inputs to produce enhanced multisensory responses. The present neurocomputational model explains how this development can be understood as a transition from a default state in which crossmodal SC inputs interact competitively to one in which they interact cooperatively. Crucial to this transition is the operation of a learning rule requiring coactivation among tectopetal afferents for engagement. The model successfully replicates findings of multisensory development in normal cats and cats of either sex reared with special experience. In doing so, it explains how the cortico-SC projections can use crossmodal experience to craft the multisensory integration capabilities of the SC and adapt them to the environment in which they will be used.

SIGNIFICANCE STATEMENT: The brain's remarkable ability to integrate information across the senses is not present at birth, but typically develops in early life as experience with crossmodal cues is acquired. Recent empirical findings suggest that the mechanisms supporting this development must be more complex than previously believed. The present work integrates these data with what is already known about the underlying circuit in the midbrain to create and test a mechanistic model of multisensory development. This model represents a novel and comprehensive framework that explains how midbrain circuits acquire multisensory experience and reveals how disruptions in this neurotypic developmental trajectory yield divergent outcomes that will affect the multisensory processing capabilities of the mature brain.
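A minimal sketch of the kind of coactivation-gated learning rule described here, under loose assumptions and with made-up parameters (this is not the published neurocomputational model): weights from visual and auditory tectopetal afferents onto a model SC unit strengthen only on trials where the two inputs are coactive, so cross-modal experience gradually moves the unit from a competitive toward a cooperative regime.

```python
# Hedged illustration, not the authors' model: a Hebbian-like update that
# engages only when visual and auditory tectopetal afferents are coactive.
import numpy as np

rng = np.random.default_rng(0)
w_v, w_a = 0.3, 0.3            # hypothetical synaptic weights onto one SC unit
lr = 0.02

for trial in range(500):
    if rng.random() < 0.5:      # covariant cross-modal event
        v, a = 1.0, 1.0
    else:                       # modality-specific event (one sense only)
        v, a = (1.0, 0.0) if rng.random() < 0.5 else (0.0, 1.0)
    post = w_v * v + w_a * a    # postsynaptic drive
    if v > 0 and a > 0:         # learning rule requires coactivation
        w_v = min(w_v + lr * post, 1.5)   # crude saturation bound
        w_a = min(w_a + lr * post, 1.5)

print(f"final weights after cross-modal experience: w_v={w_v:.2f}, w_a={w_a:.2f}")
```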
Collapse
|
27
|
Bach EC, Vaughan JW, Stein BE, Rowland BA. Pulsed Stimuli Elicit More Robust Multisensory Enhancement than Expected. Front Integr Neurosci 2018; 11:40. [PMID: 29354037 PMCID: PMC5758560 DOI: 10.3389/fnint.2017.00040] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2017] [Accepted: 12/15/2017] [Indexed: 11/28/2022] Open
Abstract
Neurons in the superior colliculus (SC) integrate cross-modal inputs to generate responses that are more robust than those to either input alone and are frequently greater than their sum (superadditive enhancement). Previously, the principles of a real-time multisensory transform were identified and used to accurately predict a neuron's responses to combinations of brief flashes and noise bursts. However, environmental stimuli frequently have more complex temporal structures that elicit very different response dynamics than previously examined. The present study tested whether such (i.e., pulsed) stimuli would be treated similarly by the multisensory transform. Pulsed visual and auditory stimuli elicited responses with higher discharge rates and multiple peaks temporally aligned to the stimulus pulses. Combinations of pulsed cues elicited multiple peaks of superadditive enhancement within the response window. Measured over the entire response, this resulted in larger enhancements than expected from the enhancements elicited by non-pulsed (“sustained”) stimuli. However, as with sustained stimuli, the dynamics of multisensory responses to pulsed stimuli were highly related to the temporal dynamics of the unisensory inputs. This suggests that the specific characteristics of the multisensory transform are not determined by the external features of the cross-modal stimulus configuration; rather, the temporal structure and alignment of the unisensory inputs are the dominant factors driving the magnitude of the multisensory product.
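For readers unfamiliar with the enhancement measures referred to here, the following sketch computes the standard indices on hypothetical spike counts (illustrative values, not data from this study): percent multisensory enhancement relative to the best unisensory response, plus a superadditivity check against the unisensory sum.

```python
# Illustrative computation of standard multisensory indices (synthetic numbers).
def multisensory_enhancement(cm, v, a):
    """Percent enhancement of the cross-modal (cm) response over the best
    unisensory response."""
    best = max(v, a)
    return 100.0 * (cm - best) / best

def is_superadditive(cm, v, a, spont=0.0):
    """True if the cross-modal response exceeds the sum of the unisensory
    responses (spontaneous rate subtracted from each term)."""
    return (cm - spont) > (v - spont) + (a - spont)

v, a, cm = 6.0, 4.0, 14.0          # hypothetical mean spikes per trial
print(f"enhancement = {multisensory_enhancement(cm, v, a):.0f}%")
print(f"superadditive: {is_superadditive(cm, v, a, spont=1.0)}")
```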
Collapse
Affiliation(s)
- Eva C Bach
- Department Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
| | - John W Vaughan
- Department Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
| | - Barry E Stein
- Department Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
| | - Benjamin A Rowland
- Department Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
| |
Collapse
|
28
|
Xu J, Bi T, Keniston L, Zhang J, Zhou X, Yu L. Deactivation of Association Cortices Disrupted the Congruence of Visual and Auditory Receptive Fields in Superior Colliculus Neurons. Cereb Cortex 2017; 27:5568-5578. [PMID: 27797831 DOI: 10.1093/cercor/bhw324] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2016] [Indexed: 11/13/2022] Open
Abstract
Physiological and behavioral studies in cats show that corticotectal inputs play a critical role in the information-processing capabilities of neurons in the deeper layers of the superior colliculus (SC). Among them, the sensory inputs from functionally related association cortices are especially critical for SC multisensory integration. However, the mechanism underlying this influence is still unclear. Here, results demonstrate that deactivation of the relevant cortices can both dislocate SC visual and auditory spatial receptive fields (RFs) and decrease their overall size, resulting in reduced alignment. Further analysis demonstrated that this RF separation is significantly correlated with the decrement in neurons' multisensory enhancement and is most pronounced under low stimulus intensity conditions. In addition, cortical deactivation could influence the degree of stimulus effectiveness, thereby illustrating the means by which higher-order cortices may modify the multisensory activity of the SC.
Collapse
Affiliation(s)
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Science, East China Normal University, Shanghai, 200062, China
| | - Tingting Bi
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Science, East China Normal University, Shanghai, 200062, China
| | - Les Keniston
- Department of Physical Therapy, University of Maryland Eastern Shore, Princess Anne, MD 21853, USA
| | - Jiping Zhang
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Science, East China Normal University, Shanghai, 200062, China
| | - Xiaoming Zhou
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Science, East China Normal University, Shanghai, 200062, China.,Collaborative Innovation Center for Brain Science, East China Normal University, Shanghai 200062, China
| | - Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Science, East China Normal University, Shanghai, 200062, China
| |
Collapse
|
29
|
Matusz PJ, Wallace MT, Murray MM. A multisensory perspective on object memory. Neuropsychologia 2017; 105:243-252. [PMID: 28400327 PMCID: PMC5632572 DOI: 10.1016/j.neuropsychologia.2017.04.008] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Revised: 04/04/2017] [Accepted: 04/05/2017] [Indexed: 12/20/2022]
Abstract
Traditional studies of memory and object recognition involved objects presented within a single sensory modality (i.e., purely visual or purely auditory objects). However, in naturalistic settings, objects are often evaluated and processed in a multisensory manner. This raises the question of how object representations that combine information from the different senses are created and utilised by memory functions. Here we review research demonstrating that a single multisensory exposure can influence memory for both visual and auditory objects. In an old/new object discrimination task, objects that were presented initially with a task-irrelevant stimulus in another sense were better remembered than stimuli presented alone, most notably when the two stimuli were semantically congruent. The brain discriminates between these two types of object representations within the first 100 ms after stimulus onset, indicating early "tagging" of objects/events by the brain based on the nature of their initial presentation context. Interestingly, the specific brain networks supporting the improved object recognition vary based on a variety of factors, including the effectiveness of the initial multisensory presentation and the sense that is task-relevant. We specify the requisite conditions for multisensory contexts to improve object discrimination following single exposures, and the individual differences that exist with respect to these improvements. Our results shed light on how memory operates on the multisensory nature of object representations as well as how the brain stores and retrieves memories of objects.
Collapse
Affiliation(s)
- Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology & Neurorehabilitation Service & Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland
| | - Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
| | - Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology & Neurorehabilitation Service & Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne, Jules-Gonin Eye Hospital, Lausanne, Switzerland.
| |
Collapse
|
30
|
Multisensory Integration Uses a Real-Time Unisensory-Multisensory Transform. J Neurosci 2017; 37:5183-5194. [PMID: 28450539 DOI: 10.1523/jneurosci.2767-16.2017] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2016] [Revised: 03/01/2017] [Accepted: 03/06/2017] [Indexed: 11/21/2022] Open
Abstract
The manner in which the brain integrates different sensory inputs to facilitate perception and behavior has been the subject of numerous speculations. By examining multisensory neurons in cat superior colliculus, the present study demonstrated that two operational principles are sufficient to understand how this remarkable result is achieved: (1) unisensory signals are integrated continuously and in real time as soon as they arrive at their common target neuron and (2) the resultant multisensory computation is modified in shape and timing by a delayed, calibrating inhibition. These principles were tested for descriptive sufficiency by embedding them in a neurocomputational model and using it to predict a neuron's moment-by-moment multisensory response given only knowledge of its responses to the individual modality-specific component cues. The predictions proved to be highly accurate, reliable, and unbiased and were, in most cases, not statistically distinguishable from the neuron's actual instantaneous multisensory response at any phase throughout its entire duration. The model was also able to explain why different multisensory products are often observed in different neurons at different time points, as well as the higher-order properties of multisensory integration, such as the dependency of multisensory products on the temporal alignment of crossmodal cues. These observations not only reveal this fundamental integrative operation, but also identify quantitatively the multisensory transform used by each neuron. As a result, they provide a means of comparing the integrative profiles among neurons and evaluating how they are affected by changes in intrinsic or extrinsic factors.

SIGNIFICANCE STATEMENT: Multisensory integration is the process by which the brain combines information from multiple sensory sources (e.g., vision and audition) to maximize an organism's ability to identify and respond to environmental stimuli. The actual transformative process by which the neural products of multisensory integration are achieved is poorly understood. By focusing on the millisecond-by-millisecond differences between a neuron's unisensory component responses and its integrated multisensory response, it was found that this multisensory transform can be described by two basic principles: unisensory information is integrated in real time and the multisensory response is shaped by calibrating inhibition. It is now possible to use these principles to predict a neuron's multisensory response accurately armed only with knowledge of its unisensory responses.
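A toy sketch of the two stated operational principles, using synthetic response profiles and assumed parameters (the delay and inhibitory gain are illustrative choices; this is not the authors' neurocomputational model): the unisensory traces are summed continuously, and a delayed, calibrating inhibition proportional to the recent summed drive shapes the predicted multisensory response.

```python
# Toy sketch of continuous real-time summation plus delayed calibrating
# inhibition (synthetic traces and assumed parameters, not the published model).
import numpy as np

dt = 1.0                                   # ms per bin
t = np.arange(0, 300, dt)

def unisensory_trace(onset, peak, rise=15.0, decay=60.0):
    """A smooth, synthetic unisensory response profile (spikes/s)."""
    x = np.clip(t - onset, 0, None)
    return peak * (1 - np.exp(-x / rise)) * np.exp(-x / decay)

v = unisensory_trace(onset=60, peak=40.0)
a = unisensory_trace(onset=80, peak=25.0)

delay_bins = int(40 / dt)                  # assumed latency of the inhibition
k_inh = 0.5                                # assumed inhibitory gain
drive = v + a
inhibition = k_inh * np.concatenate([np.zeros(delay_bins), drive[:-delay_bins]])
predicted_multisensory = np.clip(drive - inhibition, 0, None)

print(f"peak predicted multisensory response: {predicted_multisensory.max():.1f} spikes/s")
```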
Collapse
|
31
|
Gibney KD, Aligbe E, Eggleston BA, Nunes SR, Kerkhoff WG, Dean CL, Kwakye LD. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity. Front Integr Neurosci 2017; 11:1. [PMID: 28163675 PMCID: PMC5247431 DOI: 10.3389/fnint.2017.00001] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2016] [Accepted: 01/04/2017] [Indexed: 11/30/2022] Open
Abstract
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and a McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors were present), low load (multisensory task while detecting the presence of a yellow letter among the visual distractors), and high load (multisensory task while detecting the presence of a number among the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than on unisensory trials. However, the increase in multisensory response times violated the race model for the no- and low-perceptual-load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration in the no-load condition: no change in integration for the McGurk task with increasing load, but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information.
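The race-model analysis mentioned here rests on Miller's inequality, which bounds the redundant-target cumulative RT distribution by the sum of the unisensory ones. The sketch below applies that test to made-up reaction times; the distributions and the violation-area measure are illustrative assumptions, not the study's data or code.

```python
# Hedged sketch of a race-model (Miller's inequality) test on synthetic RTs:
# P(RT_AV <= t) should not exceed P(RT_A <= t) + P(RT_V <= t) under a race.
import numpy as np

def ecdf(rts, grid):
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, grid, side="right") / len(rts)

rng = np.random.default_rng(1)
rt_a = rng.normal(320, 40, 200)            # hypothetical auditory-only RTs (ms)
rt_v = rng.normal(340, 45, 200)            # hypothetical visual-only RTs
rt_av = rng.normal(270, 35, 200)           # hypothetical audiovisual RTs

grid = np.linspace(150, 500, 200)
bound = np.minimum(ecdf(rt_a, grid) + ecdf(rt_v, grid), 1.0)
violation = np.clip(ecdf(rt_av, grid) - bound, 0, None)

print(f"max violation of the race-model bound: {violation.max():.3f}")
print(f"violation area (a geometric measure):  {(violation * (grid[1] - grid[0])).sum():.1f}")
```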
Collapse
Affiliation(s)
- Kyla D Gibney
- Department of Neuroscience, Oberlin College, Oberlin OH, USA
| | | | | | - Sarah R Nunes
- Department of Neuroscience, Oberlin College, Oberlin OH, USA
| | | | | | - Leslie D Kwakye
- Department of Neuroscience, Oberlin College, Oberlin OH, USA
| |
Collapse
|
32
|
Song YH, Kim JH, Jeong HW, Choi I, Jeong D, Kim K, Lee SH. A Neural Circuit for Auditory Dominance over Visual Perception. Neuron 2017; 93:940-954.e6. [DOI: 10.1016/j.neuron.2017.01.006] [Citation(s) in RCA: 66] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2016] [Revised: 10/27/2016] [Accepted: 01/06/2017] [Indexed: 11/26/2022]
|
33
|
O’Hare L. Multisensory Integration in Migraine: Recent Developments. Multisens Res 2017; 30:549-563. [DOI: 10.1163/22134808-00002570] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2016] [Accepted: 04/03/2017] [Indexed: 01/22/2023]
Abstract
There are well-documented unimodal sensory differences in migraine compared to control groups, both during and between migraine attacks. There is also some evidence of differences in multisensory integration in migraine groups compared to control groups; however, the literature on this topic is more limited. There are interesting avenues in the area of visual–vestibular integration, which might have practical implications, e.g., for motion sickness and nausea in migraine. Recent work has investigated the possibility of differences in visual–auditory integration in migraine and has found possible differences in susceptibility to the sound-induced flash illusion in particular, which could give insights into the relative excitability of different areas of the cortex and also into the mechanisms of the illusions themselves. This review updates the most recent literature and highlights potentially fruitful areas of research for understanding one of the most common neurological disorders.
Collapse
|
34
|
Kardamakis AA, Pérez-Fernández J, Grillner S. Spatiotemporal interplay between multisensory excitation and recruited inhibition in the lamprey optic tectum. eLife 2016; 5. [PMID: 27635636 PMCID: PMC5026466 DOI: 10.7554/elife.16472] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2016] [Accepted: 08/14/2016] [Indexed: 11/23/2022] Open
Abstract
Animals integrate the different senses to facilitate event detection for navigation in their environment. In vertebrates, the optic tectum (superior colliculus) commands gaze shifts by synaptic integration of different sensory modalities. Recent work suggests that the tectum can elaborate gaze-reorientation commands on its own, rather than merely acting as a relay from upstream/forebrain circuits to downstream premotor centers. We show that tectal circuits can perform multisensory computations independently and, hence, configure final motor commands. Single tectal neurons receive converging visual and electrosensory inputs, as investigated in the lamprey, a phylogenetically conserved vertebrate. When these two sensory inputs overlap in space and time, response enhancement of output neurons occurs locally in the tectum, whereas surrounding areas and temporally misaligned inputs are inhibited. Retinal and electrosensory afferents elicit local monosynaptic excitation, quickly followed by inhibition via recruitment of GABAergic interneurons. Multisensory inputs can thus regulate event detection within the tectum through local inhibition, without forebrain control. DOI: http://dx.doi.org/10.7554/eLife.16472.001

Many events occur around us simultaneously, and we detect them through our senses. A critical task is to decide which of these events is the most important to look at in a given moment. This problem is solved by an ancient area of the brain called the optic tectum (known as the superior colliculus in mammals). The different senses are represented as superimposed maps in the optic tectum, and events that occur in different locations activate different areas of the map. Neurons in the optic tectum combine the responses from different senses to direct the animal's attention and increase how reliably important events are detected. If an event is simultaneously registered by two senses, certain neurons in the optic tectum will enhance their activity. By contrast, if two senses provide conflicting information about how different events progress, these same neurons will be silenced. While this phenomenon of 'multisensory integration' is well described, little is known about how the optic tectum performs it. Kardamakis, Pérez-Fernández and Grillner have now studied multisensory integration in lampreys, which belong to the oldest group of backboned animals. These fish can navigate using electroreception, the ability to detect electrical signals from the environment. Experiments that examined the connections between neurons in the optic tectum and monitored their activity revealed a neural circuit consisting of two types of neurons: inhibitory interneurons, and projecting neurons that connect the optic tectum to different motor centers in the brainstem. The circuit contains neurons that can receive inputs from both vision and electroreception when the two senses are activated from the same point in space. Incoming signals from the two senses activate the areas on the sensory maps that correspond to the location where the event occurred. This triggers the activity of the interneurons, which immediately send 'stop' signals: while an area of the sensory map and its output neurons are activated, the surrounding areas of the tectum are inhibited. Overall, the findings presented by Kardamakis, Pérez-Fernández and Grillner suggest that the optic tectum can direct attention to a particular event without requiring input from other brain areas. This ability has most likely been preserved throughout evolution. Future studies will aim to determine how the commands generated by the optic tectum circuit are translated into movements. DOI: http://dx.doi.org/10.7554/eLife.16472.002
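A minimal rate-model sketch of the circuit logic described above, with assumed kernels and gains (an illustration only, not the authors' model): spatially aligned visual and electrosensory drive excites a local tectal output, while the same inputs recruit broader inhibition that suppresses surrounding and misaligned activity.

```python
# Hedged illustration: local excitation plus recruited surround inhibition on a
# 1-D tectal map; aligned cross-modal inputs yield a larger local output than
# spatially misaligned inputs.
import numpy as np

n = 21                                      # tectal map positions
positions = np.arange(n)

def input_map(center, amp=1.0, width=1.5):
    return amp * np.exp(-0.5 * ((positions - center) / width) ** 2)

def tectal_output(visual_pos, electro_pos, w_inh=0.8, inh_width=5.0):
    excitation = input_map(visual_pos) + input_map(electro_pos)
    # Recruited inhibition: broad kernel driven by the total input (surround suppression).
    kernel = np.exp(-0.5 * (np.subtract.outer(positions, positions) / inh_width) ** 2)
    inhibition = w_inh * (kernel @ excitation) / kernel.sum(axis=1)
    return np.clip(excitation - inhibition, 0, None)

aligned = tectal_output(visual_pos=10, electro_pos=10)
misaligned = tectal_output(visual_pos=10, electro_pos=3)
print(f"peak output, aligned inputs:    {aligned.max():.2f}")
print(f"peak output, misaligned inputs: {misaligned.max():.2f}")
```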
Collapse
Affiliation(s)
| | | | - Sten Grillner
- Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
| |
Collapse
|
35
|
Bertini C, Grasso PA, Làdavas E. The role of the retino-colliculo-extrastriate pathway in visual awareness and visual field recovery. Neuropsychologia 2016; 90:72-9. [DOI: 10.1016/j.neuropsychologia.2016.05.011] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2015] [Revised: 05/06/2016] [Accepted: 05/10/2016] [Indexed: 01/10/2023]
|
36
|
Abstract
The hypothesis that highly overlapping networks underlie brain functions (neural reuse) is decisively supported by three decades of multisensory research. Multisensory areas process information from more than one sensory modality and therefore represent the best examples of neural reuse. Recent evidence of multisensory processing in primary visual cortices further indicates that neural reuse is a basic feature of the brain.
Collapse
|
37
|
Abstract
AbstractMore than 35 years ago, Meltzoff and Moore (1977) published their famous article, “Imitation of facial and manual gestures by human neonates.” Their central conclusion, that neonates can imitate, was and continues to be controversial. Here, we focus on an often-neglected aspect of this debate, namely, neonatal spontaneous behaviors themselves. We present a case study of a paradigmatic orofacial “gesture,” namely tongue protrusion and retraction (TP/R). Against the background of new research on mammalian aerodigestive development, we ask: How does the human aerodigestive system develop, and what role does TP/R play in the neonate's emerging system of aerodigestion? We show that mammalian aerodigestion develops in two phases: (1) from the onset of isolated orofacial movementsin uteroto the postnatal mastery of suckling at 4 months after birth; and (2) thereafter, from preparation to the mastery of mastication and deglutition of solid foods. Like other orofacial stereotypies, TP/R emerges in the first phase and vanishes prior to the second. Based upon recent advances in activity-driven early neural development, we suggest a sequence of three developmental events in which TP/R might participate: the acquisition of tongue control, the integration of the central pattern generator (CPG) for TP/R with other aerodigestive CPGs, and the formation of connections within the cortical maps of S1 and M1. If correct, orofacial stereotypies are crucial to the maturation of aerodigestion in the neonatal period but also unlikely to co-occur with imitative behavior.
Collapse
|
38
|
Rowland BA, Stanford TR, Stein BE. A Model of the Neural Mechanisms Underlying Multisensory Integration in the Superior Colliculus. Perception 2016; 36:1431-43. [DOI: 10.1068/p5842] [Citation(s) in RCA: 49] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Much of the information about multisensory integration is derived from studies of the cat superior colliculus (SC), a midbrain structure involved in orientation behaviors. This integration is apparent in the enhanced responses of SC neurons to cross-modal stimuli, responses that exceed those to any of the modality-specific component stimuli. The simplest model of multisensory integration is one in which the SC neuron simply sums its various sensory inputs. However, a number of empirical findings reveal the inadequacy of such a model; for example, the finding that deactivation of cortico-collicular inputs eliminates the enhanced response to a cross-modal stimulus without eliminating responses to the modality-specific component stimuli. These and other empirical findings inform a computational model that accounts for all of the most fundamental aspects of SC multisensory integration. The model is presented in two forms: an algebraic form that conveys the essential insights, and a compartmental form that represents the neuronal computations in a more biologically realistic way.
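An illustrative algebraic toy capturing the qualitative point, with hypothetical drives and an assumed gain (not the published model's actual equations): unisensory inputs always drive the unit, but a cooperative term gated by cortico-collicular input produces the cross-modal enhancement, so removing the cortical gate abolishes enhancement without abolishing the unisensory responses.

```python
# Hedged toy model: cortical input gates a cooperative term that requires both
# sensory inputs; deactivating cortex (gate = 0) removes enhancement only.
def sc_response(v, a, cortex_on=True, gain=1.2):
    gate = 1.0 if cortex_on else 0.0
    return v + a + gate * gain * min(v, a)   # cooperative term needs both inputs

v, a = 8.0, 5.0                              # hypothetical unisensory drives
for cortex_on in (True, False):
    r_v = sc_response(v, 0.0, cortex_on)
    r_a = sc_response(0.0, a, cortex_on)
    r_va = sc_response(v, a, cortex_on)
    print(f"cortex {'on ' if cortex_on else 'off'}: V={r_v:.1f}  A={r_a:.1f}  "
          f"VA={r_va:.1f}  (V+A={r_v + r_a:.1f})")
```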
Collapse
Affiliation(s)
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, USA
| | - Terrence R Stanford
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, USA
| | - Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, USA
| |
Collapse
|
39
|
Yau JM, DeAngelis GC, Angelaki DE. Dissecting neural circuits for multisensory integration and crossmodal processing. Philos Trans R Soc Lond B Biol Sci 2016; 370:20140203. [PMID: 26240418 DOI: 10.1098/rstb.2014.0203] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
We rely on rich and complex sensory information to perceive and understand our environment. Our multisensory experience of the world depends on the brain's remarkable ability to combine signals across sensory systems. Behavioural, neurophysiological and neuroimaging experiments have established principles of multisensory integration and candidate neural mechanisms. Here we review how targeted manipulations of neural activity, using invasive and non-invasive neuromodulation techniques, have advanced our understanding of multisensory processing. Neuromodulation studies have provided detailed characterizations of brain networks causally involved in multisensory integration. Despite substantial progress, important questions regarding multisensory networks remain unanswered. Critically, experimental approaches will need to be combined with theory in order to understand how distributed activity across multisensory networks collectively supports perception.
Collapse
Affiliation(s)
- Jeffrey M Yau
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
| | - Gregory C DeAngelis
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
| | - Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
| |
Collapse
|
40
|
Felch DL, Khakhalin AS, Aizenman CD. Multisensory integration in the developing tectum is constrained by the balance of excitation and inhibition. eLife 2016; 5. [PMID: 27218449 PMCID: PMC4912350 DOI: 10.7554/elife.15600] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2016] [Accepted: 05/23/2016] [Indexed: 11/13/2022] Open
Abstract
Multisensory integration (MSI) is the process that allows the brain to bind together spatiotemporally congruent inputs from different sensory modalities to produce single salient representations. While the phenomenology of MSI in vertebrate brains is well described, relatively little is known about cellular and synaptic mechanisms underlying this phenomenon. Here we use an isolated brain preparation to describe cellular mechanisms underlying development of MSI between visual and mechanosensory inputs in the optic tectum of Xenopus tadpoles. We find MSI is highly dependent on the temporal interval between crossmodal stimulus pairs. Over a key developmental period, the temporal window for MSI significantly narrows and is selectively tuned to specific interstimulus intervals. These changes in MSI correlate with developmental increases in evoked synaptic inhibition, and inhibitory blockade reverses observed developmental changes in MSI. We propose a model in which development of recurrent inhibition mediates development of temporal aspects of MSI in the tectum.
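As a rough illustration of how such a temporal window can be quantified (synthetic numbers and an arbitrary 10% criterion, not the study's analysis), the sketch below computes enhancement at each interstimulus interval and reports the width of the ISI range over which enhancement exceeds the criterion.

```python
# Illustrative sketch: width of the temporal window for multisensory
# enhancement, computed from synthetic responses at several ISIs.
import numpy as np

isis = np.array([-200, -100, -50, 0, 50, 100, 200])      # ms between cross-modal cues
unisensory_sum = 10.0                                     # hypothetical summed unisensory response
multisensory = np.array([9.5, 11.0, 14.0, 16.0, 13.5, 10.5, 9.0])

enhancement = 100.0 * (multisensory - unisensory_sum) / unisensory_sum
above = isis[enhancement > 10.0]                          # criterion: >10% enhancement
window_width = above.max() - above.min() if above.size else 0
print(f"temporal window (>10% enhancement): {window_width} ms")
```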
Collapse
Affiliation(s)
- Daniel L Felch
- Department of Neuroscience, Brown University, Providence, United States.,Department of Cell and Molecular Biology, Tulane University, New Orleans, United States
| | - Arseny S Khakhalin
- Department of Neuroscience, Brown University, Providence, United States.,Department of Biology, Bard College, New York, United States
| | - Carlos D Aizenman
- Department of Neuroscience, Brown University, Providence, United States
| |
Collapse
|
41
|
Tang X, Wu J, Shen Y. The interactions of multisensory integration with endogenous and exogenous attention. Neurosci Biobehav Rev 2015; 61:208-24. [PMID: 26546734 DOI: 10.1016/j.neubiorev.2015.11.002] [Citation(s) in RCA: 89] [Impact Index Per Article: 8.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2014] [Revised: 11/01/2015] [Accepted: 11/02/2015] [Indexed: 11/24/2022]
Abstract
Stimuli from multiple sensory organs can be integrated into a coherent representation through multiple phases of multisensory processing; this phenomenon is called multisensory integration. Multisensory integration can interact with attention. Here, we propose a framework in which attention modulates multisensory processing in both endogenous (goal-driven) and exogenous (stimulus-driven) ways. Moreover, multisensory integration exerts not only bottom-up but also top-down control over attention. Specifically, we propose the following: (1) endogenous attentional selectivity acts on multiple levels of multisensory processing to determine the extent to which simultaneous stimuli from different modalities can be integrated; (2) integrated multisensory events exert top-down control on attentional capture via multisensory search templates that are stored in the brain; (3) integrated multisensory events can capture attention efficiently, even in quite complex circumstances, due to their increased salience compared to unimodal events and can thus improve search accuracy; and (4) within a multisensory object, endogenous attention can spread from one modality to another in an exogenous manner.
Collapse
Affiliation(s)
- Xiaoyu Tang
- College of Psychology, Liaoning Normal University, 850 Huanghe Road, Shahekou District, Dalian, Liaoning, 116029, China; Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-naka, Okayama, 700-8530, Japan
| | - Jinglong Wu
- Key Laboratory of Biomimetic Robots and System, Ministry of Education, State Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology, 5 Nandajie, Zhongguancun, Haidian, Beijing 100081, China; Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-naka, Okayama, 700-8530, Japan.
| | - Yong Shen
- Neurodegenerative Disease Research Center, School of Life Sciences, University of Science and Technology of China, CAS Key Laboratory of Brain Functions and Disease, Hefei, China; Center for Advanced Therapeutic Strategies for Brain Disorders, Roskamp Institute, Sarasota, FL 34243, USA
| |
Collapse
|
42
|
Dundon NM, Bertini C, Làdavas E, Sabel BA, Gall C. Visual rehabilitation: visual scanning, multisensory stimulation and vision restoration trainings. Front Behav Neurosci 2015; 9:192. [PMID: 26283935 PMCID: PMC4515568 DOI: 10.3389/fnbeh.2015.00192] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2015] [Accepted: 07/09/2015] [Indexed: 12/16/2022] Open
Abstract
Neuropsychological training methods of visual rehabilitation for homonymous vision loss caused by postchiasmatic damage fall into two fundamental paradigms: “compensation” and “restoration”. Existing methods can be classified into three groups: Visual Scanning Training (VST), Audio-Visual Scanning Training (AViST) and Vision Restoration Training (VRT). VST and AViST aim at compensating vision loss by training eye scanning movements, whereas VRT aims at improving lost vision by activating residual visual functions by training light detection and discrimination of visual stimuli. This review discusses the rationale underlying these paradigms and summarizes the available evidence with respect to treatment efficacy. The issues raised in our review should help guide clinical care and stimulate new ideas for future research uncovering the underlying neural correlates of the different treatment paradigms. We propose that both local “within-system” interactions (i.e., relying on plasticity within peri-lesional spared tissue) and changes in more global “between-system” networks (i.e., recruiting alternative visual pathways) contribute to both vision restoration and compensatory rehabilitation, which ultimately have implications for the rehabilitation of cognitive functions.
Collapse
Affiliation(s)
- Neil M Dundon
- Department of Psychology, University of Bologna Bologna, Italy ; Centre for Studies and Research in Cognitive Neuroscience, University of Bologna Cesena, Italy
| | - Caterina Bertini
- Department of Psychology, University of Bologna Bologna, Italy ; Centre for Studies and Research in Cognitive Neuroscience, University of Bologna Cesena, Italy
| | - Elisabetta Làdavas
- Department of Psychology, University of Bologna Bologna, Italy ; Centre for Studies and Research in Cognitive Neuroscience, University of Bologna Cesena, Italy
| | - Bernhard A Sabel
- Medical Faculty, Institute of Medical Psychology, Otto-von-Guericke University of Magdeburg Magdeburg, Germany
| | - Carolin Gall
- Medical Faculty, Institute of Medical Psychology, Otto-von-Guericke University of Magdeburg Magdeburg, Germany
| |
Collapse
|
43
|
Jiang H, Stein BE, McHaffie JG. Multisensory training reverses midbrain lesion-induced changes and ameliorates haemianopia. Nat Commun 2015; 6:7263. [PMID: 26021613 PMCID: PMC6193257 DOI: 10.1038/ncomms8263] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2014] [Accepted: 04/23/2015] [Indexed: 11/09/2022] Open
Abstract
Failure to attend to visual cues is a common consequence of visual cortex injury. Here, we report on a behavioural strategy whereby cross-modal (auditory-visual) training reinstates visuomotor competencies in animals rendered haemianopic by complete unilateral visual cortex ablation. The re-emergence of visual behaviours is correlated with the reinstatement of visual responsiveness in deep layer neurons of the ipsilesional superior colliculus (SC). This functional recovery is produced by training-induced alterations in descending influences from association cortex that allowed these midbrain neurons to once again transform visual cues into appropriate orientation behaviours. The findings underscore the inherent plasticity and functional breadth of phylogenetically older visuomotor circuits that can express visual capabilities thought to have been subsumed by more recently evolved brain regions. These observations suggest the need for reevaluating current concepts of functional segregation in the visual system and have important implications for strategies aimed at ameliorating trauma-induced visual deficits in humans.
Collapse
Affiliation(s)
- Huai Jiang
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, North Carolina 27157-1010 USA
| | - Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, North Carolina 27157-1010 USA
| | - John G McHaffie
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, North Carolina 27157-1010 USA
| |
Collapse
|
44
|
Yu L, Xu J, Rowland BA, Stein BE. Multisensory Plasticity in Superior Colliculus Neurons is Mediated by Association Cortex. Cereb Cortex 2014; 26:1130-7. [PMID: 25552270 DOI: 10.1093/cercor/bhu295] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
The ability to integrate information from different senses, and thereby facilitate detecting and localizing events, normally develops gradually in cat superior colliculus (SC) neurons as experience with cross-modal events is acquired. Here, we demonstrate that the portal for this experience-based change is association cortex. Unilaterally deactivating this cortex whenever visual-auditory events were present resulted in the failure of ipsilateral SC neurons to develop the ability to integrate those cross-modal inputs, even though they retained the ability to respond to them. In contrast, their counterparts in the opposite SC developed this capacity normally. The deficits were eliminated by providing cross-modal experience when cortex was active. These observations underscore the collaborative developmental processes that take place among different levels of the neuraxis to adapt the brain's multisensory (and sensorimotor) circuits to the environment in which they will be used.
Collapse
Affiliation(s)
- Liping Yu
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Sciences, East China Normal University, Shanghai 200062, China
| | - Jinghong Xu
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Sciences, East China Normal University, Shanghai 200062, China
| | - Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC 27157, USA
| | - Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC 27157, USA
| |
Collapse
|
45
|
Ursino M, Cuppini C, Magosso E. Neurocomputational approaches to modelling multisensory integration in the brain: A review. Neural Netw 2014; 60:141-65. [DOI: 10.1016/j.neunet.2014.08.003] [Citation(s) in RCA: 42] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2014] [Revised: 08/05/2014] [Accepted: 08/07/2014] [Indexed: 10/24/2022]
|
46
|
Wallace MT, Stevenson RA. The construct of the multisensory temporal binding window and its dysregulation in developmental disabilities. Neuropsychologia 2014; 64:105-23. [PMID: 25128432 PMCID: PMC4326640 DOI: 10.1016/j.neuropsychologia.2014.08.005] [Citation(s) in RCA: 214] [Impact Index Per Article: 19.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2014] [Revised: 08/04/2014] [Accepted: 08/05/2014] [Indexed: 01/18/2023]
Abstract
Behavior, perception and cognition are strongly shaped by the synthesis of information across the different sensory modalities. Such multisensory integration often results in performance and perceptual benefits that reflect the additional information conferred by having cues from multiple senses providing redundant or complementary information. The spatial and temporal relationships of these cues provide powerful statistical information about how they should be integrated or "bound" in order to create a unified perceptual representation. Much recent work has examined the temporal factors that are integral to multisensory processing, with much of it focused on the construct of the multisensory temporal binding window: the epoch of time within which stimuli from different modalities are likely to be integrated and perceptually bound. Emerging evidence suggests that this temporal window is altered in a series of neurodevelopmental disorders, including autism, dyslexia and schizophrenia. In addition to their role in sensory processing, these deficits in multisensory temporal function may play an important role in the perceptual and cognitive weaknesses that characterize these clinical disorders. Within this context, a focus on improving the acuity of multisensory temporal function may have important implications for the amelioration of the "higher-order" deficits that serve as the defining features of these disorders.
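One common way to estimate a temporal binding window behaviorally is sketched here with made-up simultaneity-judgment data; the Gaussian fit and the full-width-at-half-maximum criterion are assumptions for illustration, not the review's prescribed procedure.

```python
# Hedged sketch: fit a Gaussian to the proportion of "synchronous" judgments as
# a function of audiovisual asynchrony (SOA) and take its width as a temporal
# binding window estimate. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

soa = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)   # ms
p_sync = np.array([0.10, 0.25, 0.70, 0.90, 0.95, 0.92, 0.80, 0.40, 0.15])    # hypothetical

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

(amp, mu, sigma), _ = curve_fit(gaussian, soa, p_sync, p0=[1.0, 0.0, 100.0])
fwhm = 2.355 * abs(sigma)                   # full width at half maximum

print(f"window center (PSS): {mu:+.0f} ms, width (FWHM): {fwhm:.0f} ms")
```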
Collapse
Affiliation(s)
- Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN 37232, USA; Department of Hearing & Speech Sciences, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA.
| | - Ryan A Stevenson
- Department of Psychology, University of Toronto, Toronto, ON, Canada
| |
Collapse
|
47
|
Stein BE, Stanford TR, Rowland BA. Development of multisensory integration from the perspective of the individual neuron. Nat Rev Neurosci 2014; 15:520-35. [PMID: 25158358 DOI: 10.1038/nrn3742] [Citation(s) in RCA: 235] [Impact Index Per Article: 21.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
The ability to use cues from multiple senses in concert is a fundamental aspect of brain function. It maximizes the brain’s use of the information available to it at any given moment and enhances the physiological salience of external events. Because each sense conveys a unique perspective of the external world, synthesizing information across senses affords computational benefits that cannot otherwise be achieved. Multisensory integration not only has substantial survival value but can also create unique experiences that emerge when signals from different sensory channels are bound together. However, neurons in a newborn’s brain are not capable of multisensory integration, and studies in the midbrain have shown that the development of this process is not predetermined. Rather, its emergence and maturation critically depend on cross-modal experiences that alter the underlying neural circuit in such a way that optimizes multisensory integrative capabilities for the environment in which the animal will function.
Collapse
|
48
|
Abstract
Detecting and locating environmental events are markedly enhanced by the midbrain's ability to integrate visual and auditory cues. Its capacity for multisensory integration develops in cats 1-4 months after birth but only after acquiring extensive visual-auditory experience. However, briefly deactivating specific regions of association cortex during this period induced long-term disruption of this maturational process, such that even 1 year later animals were unable to integrate visual and auditory cues to enhance their behavioral performance. The data from this animal model reveal a window of sensitivity within which association cortex mediates the encoding of cross-modal experience in the midbrain. Surprisingly, however, 3 years later, and without any additional intervention, the capacity appeared fully developed. This suggests that, although sensitivity degrades with age, the potential for acquiring or modifying multisensory integration capabilities extends well into adulthood.
Collapse
|
49
|
Man K, Kaplan J, Damasio H, Damasio A. Neural convergence and divergence in the mammalian cerebral cortex: from experimental neuroanatomy to functional neuroimaging. J Comp Neurol 2014; 521:4097-111. [PMID: 23840023 DOI: 10.1002/cne.23408] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2013] [Revised: 04/30/2013] [Accepted: 06/28/2013] [Indexed: 11/08/2022]
Abstract
A development essential for understanding the neural basis of complex behavior and cognition is the description, during the last quarter of the twentieth century, of detailed patterns of neuronal circuitry in the mammalian cerebral cortex. This effort established that sensory pathways exhibit successive levels of convergence, from the early sensory cortices to sensory-specific and multisensory association cortices, culminating in maximally integrative regions. It was also established that this convergence is reciprocated by successive levels of divergence, from the maximally integrative areas all the way back to the early sensory cortices. This article first provides a brief historical review of these neuroanatomical findings, which were relevant to the study of brain and mind-behavior relationships and to the proposal of heuristic anatomofunctional frameworks. In its second part, the article reviews new evidence, accumulated from functional neuroimaging studies employing both univariate and multivariate analyses as well as from electrophysiology in humans and other mammals, that the integration of information across the auditory, visual, and somatosensory-motor modalities proceeds in a content-rich manner. Behaviorally and cognitively relevant information is extracted from and conserved across the different modalities, both in higher-order association cortices and in early sensory cortices. Such stimulus-specific information is plausibly relayed along the neuroanatomical pathways alluded to above. The evidence reviewed here suggests the need for further in-depth exploration of the intricate connectivity of the mammalian cerebral cortex in experimental neuroanatomical studies.
Collapse
Affiliation(s)
- Kingson Man
- Brain and Creativity Institute, University of Southern California, Los Angeles, California, 90089
| | | | | | | |
Collapse
|
50
|
Identifying and quantifying multisensory integration: a tutorial review. Brain Topogr 2014; 27:707-30. [PMID: 24722880 DOI: 10.1007/s10548-014-0365-7] [Citation(s) in RCA: 152] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2013] [Accepted: 03/26/2014] [Indexed: 12/19/2022]
Abstract
We process information from the world through multiple senses, and the brain must decide what information belongs together and what information should be segregated. One challenge in studying such multisensory integration is how to quantify the multisensory interactions, a challenge that is amplified by the host of methods now used to measure neural, behavioral, and perceptual responses. Many of the measures that have been developed to quantify multisensory integration (which have been derived from single-unit analyses) have been applied to these different measures without much consideration for the nature of the process being studied. Here, we provide a review focused on the means by which experimenters quantify multisensory processes and integration across a range of commonly used experimental methodologies. We emphasize the most commonly employed measures, including single- and multiunit responses, local field potentials, functional magnetic resonance imaging, and electroencephalography, along with behavioral measures of detection, accuracy, and response times. In each section, we discuss the different metrics commonly used to quantify multisensory interactions, including the rationale for their use, their advantages, and the drawbacks and caveats associated with them. Also discussed are possible alternatives to the most commonly used metrics.
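To make the caution concrete, the sketch below contrasts two criteria that appear in this literature, using illustrative amplitudes only: the maximum criterion common in single-unit work and the additive criterion often applied to ERP or BOLD responses. The same numbers can pass one criterion and fail the other, which is exactly the kind of mismatch that arises when single-unit-derived measures are carried over to other methods uncritically.

```python
# Illustrative contrast between two common multisensory criteria (made-up values).
def exceeds_max_criterion(av, a, v):
    """Multisensory response larger than the best unisensory response."""
    return av > max(a, v)

def exceeds_additive_criterion(av, a, v):
    """Multisensory response larger than the sum of the unisensory responses."""
    return av > a + v

a, v, av = 3.0, 2.5, 4.5                     # hypothetical response amplitudes
print("max criterion:     ", exceeds_max_criterion(av, a, v))        # True
print("additive criterion:", exceeds_additive_criterion(av, a, v))   # False
```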
Collapse
|