1
Yu L, Xu J. The Development of Multisensory Integration at the Neuronal Level. Adv Exp Med Biol 2024; 1437:153-172. [PMID: 38270859] [DOI: 10.1007/978-981-99-7611-9_10]
Abstract
Multisensory integration is a fundamental function of the brain. In the typical adult, the response of multisensory neurons to paired multisensory (e.g., audiovisual) cues is significantly more robust than the corresponding best unisensory response in many brain regions. Synthesizing sensory signals from multiple modalities can speed up sensory processing and improve the salience of external events or objects. Despite its significance, multisensory integration is not an innate feature of the brain. The ability of neurons to effectively combine multisensory information does not appear at birth but develops gradually during early postnatal life (in cats, 4-12 weeks are required). Multisensory experience is critical for this developmental process. If animals are prevented from sensing normal visual scenes or sounds (i.e., deprived of the relevant multisensory experience), the development of the corresponding integrative ability is blocked until the appropriate multisensory experience is obtained. This section summarizes the extant literature on the development of multisensory integration (mainly using the cat superior colliculus as a model), sensory-deprivation-induced cross-modal plasticity, and how sensory experience (sensory exposure and perceptual learning) leads to plastic change and modification of neural circuits in cortical and subcortical areas.
Affiliation(s)
- Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
2
Chen Z, Yu R, Yu X, Li E, Wang C, Liu Y, Guo T, Chen H. Bioinspired Artificial Motion Sensory System for Rotation Recognition and Rapid Self-Protection. ACS Nano 2022; 16:19155-19164. [PMID: 36269153] [DOI: 10.1021/acsnano.2c08328]
Abstract
As one of the most common synergies between exteroceptors and proprioceptors, the synergy between the visual and vestibular systems enables the human brain to judge the state of body motion, which is essential for motion recognition and human self-protection. In this work, an artificial motion sensory system (AMSS) based on artificial vestibular and visual senses is developed, consisting of a triboelectric nanogenerator (TENG) that senses rotation as the vestibule and a synaptic transistor array as the retina. The principle of temporal congruency was successfully realized through multisensory input. In addition, pattern recognition results show that the accuracy of multisensory integration is more than 15% higher than that of a single sense. Moreover, owing to the rotation- and visual-recognition functions of the AMSS, multimodal information recognition covering both angles and numbers was realized in a spiking correlated neural network (SCNN), with an accuracy of 89.82%. Furthermore, rapid self-protection of a human was successfully demonstrated with the AMSS in simulated amusement rides, where the reaction time of multiple motion sensory integration was only one-third that of a single vestibule. The development of an AMSS based on the synergy of simulated vision and vestibule shows great potential for neural robots, artificial limbs, and soft electronics.
Affiliation(s)
- Zhenjia Chen
- Institute of Optoelectronic Display, National & Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou 350002, China
- Rengjian Yu
- Institute of Optoelectronic Display, National & Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou 350002, China
- Xipeng Yu
- Institute of Optoelectronic Display, National & Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou 350002, China
- Enlong Li
- Institute of Optoelectronic Display, National & Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou 350002, China
- Congyong Wang
- Joint School of National University of Singapore and Tianjin University, International Campus of Tianjin University, Binhai New City, Fuzhou 350207, China
- Department of Chemistry, National University of Singapore, 3 Science Drive 3, Singapore 117543, Singapore
- Yaqian Liu
- Institute of Optoelectronic Display, National & Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou 350002, China
- Tailiang Guo
- Institute of Optoelectronic Display, National & Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou 350002, China
- Fujian Science & Technology Innovation Laboratory for Optoelectronic Information of China, Fuzhou 350100, China
- Huipeng Chen
- Institute of Optoelectronic Display, National & Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou 350002, China
- Fujian Science & Technology Innovation Laboratory for Optoelectronic Information of China, Fuzhou 350100, China
3
Gao C, Green JJ, Yang X, Oh S, Kim J, Shinkareva SV. Audiovisual integration in the human brain: a coordinate-based meta-analysis. Cereb Cortex 2022; 33:5574-5584. [PMID: 36336347] [PMCID: PMC10152097] [DOI: 10.1093/cercor/bhac443]
Abstract
People can seamlessly integrate a vast array of information from what they see and hear in the noisy and uncertain world. However, the neural underpinnings of audiovisual integration continue to be a topic of debate. Using strict inclusion criteria, we performed an activation likelihood estimation meta-analysis on 121 neuroimaging experiments with a total of 2,092 participants. We found that audiovisual integration is linked with the coexistence of multiple integration sites, including early cortical, subcortical, and higher association areas. Although activity was consistently found within the superior temporal cortex, different portions of this cortical region were identified depending on the analytical contrast used, complexity of the stimuli, and modality within which attention was directed. The context-dependent neural activity related to audiovisual integration suggests a flexible rather than fixed neural pathway for audiovisual integration. Together, our findings highlight a flexible multiple pathways model for audiovisual integration, with superior temporal cortex as the central node in these neural assemblies.
Affiliation(s)
- Chuanji Gao
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Jessica J Green
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC 29201, USA
- Xuan Yang
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC 29201, USA
- Sewon Oh
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC 29201, USA
- Jongwan Kim
- Department of Psychology, Jeonbuk National University, Jeonju, South Korea
- Svetlana V Shinkareva
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC 29201, USA
4
Dakos AS, Jiang H, Stein BE, Rowland BA. Using the Principles of Multisensory Integration to Reverse Hemianopia. Cereb Cortex 2021; 30:2030-2041. [PMID: 31799618] [DOI: 10.1093/cercor/bhz220]
Abstract
Hemianopia can be rehabilitated by an auditory-visual "training" procedure, which restores visual responsiveness in midbrain neurons indirectly compromised by the cortical lesion and reinstates vision in contralesional space. Presumably, these rehabilitative changes are induced via mechanisms of multisensory integration/plasticity. If so, the paradigm should fail if the stimulus configurations violate the spatiotemporal principles that govern these midbrain processes. To test this possibility, hemianopic cats were provided spatially or temporally noncongruent auditory-visual training. Rehabilitation failed in all cases even after approximately twice the number of training trials normally required for recovery, and even after animals learned to approach the location of the undetected visual stimulus. When training was repeated with these stimuli in spatiotemporal concordance, hemianopia was resolved. The results identify the conditions needed to engage changes in remaining neural circuits required to support vision in the absence of visual cortex, and have implications for rehabilitative strategies in human patients.
Affiliation(s)
- Huai Jiang
- Department of Neurobiology & Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC 27157-1010, USA
- Barry E Stein
- Department of Neurobiology & Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC 27157-1010, USA
- Benjamin A Rowland
- Department of Neurobiology & Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC 27157-1010, USA
5
Mohl JT, Pearson JM, Groh JM. Monkeys and humans implement causal inference to simultaneously localize auditory and visual stimuli. J Neurophysiol 2020; 124:715-727. [PMID: 32727263] [DOI: 10.1152/jn.00046.2020]
Abstract
The environment is sampled by multiple senses, which are woven together to produce a unified perceptual state. However, optimally unifying such signals requires assigning particular signals to the same or different underlying objects or events. Many prior studies (especially in animals) have assumed fusion of cross-modal information, whereas recent work in humans has begun to probe the appropriateness of this assumption. Here we present results from a novel behavioral task in which both monkeys (Macaca mulatta) and humans localized visual and auditory stimuli and reported their perceived sources through saccadic eye movements. When the locations of visual and auditory stimuli were widely separated, subjects made two saccades, whereas when the two stimuli were presented at the same location they made only a single saccade. Intermediate levels of separation produced mixed response patterns: a single saccade to an intermediate position on some trials or separate saccades to both locations on others. The distribution of responses was well described by a hierarchical causal inference model that accurately predicted both the explicit "same vs. different" source judgments and the biases in localization of the source(s) under each of these conditions. The results from this task are broadly consistent with prior work in humans across a wide variety of analogous tasks, extending the study of multisensory causal inference to nonhuman primates and to a natural behavioral task with both a categorical assay of the number of perceived sources and a continuous report of the perceived position of the stimuli.

NEW & NOTEWORTHY We developed a novel behavioral paradigm for the study of multisensory causal inference in both humans and monkeys and found that both species make causal judgments in the same Bayes-optimal fashion. To our knowledge, this is the first demonstration of behavioral causal inference in animals, and this cross-species comparison lays the groundwork for future experiments using neuronal recording techniques that are impractical or impossible in human subjects.
Affiliation(s)
- Jeff T Mohl
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina
- John M Pearson
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina; Department of Biostatistics and Bioinformatics, Duke University Medical School, Durham, North Carolina
- Jennifer M Groh
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
6
Gao C, Weber CE, Wedell DH, Shinkareva SV. An fMRI Study of Affective Congruence across Visual and Auditory Modalities. J Cogn Neurosci 2020; 32:1251-1262. [DOI: 10.1162/jocn_a_01553]
Abstract
Evaluating multisensory emotional content is a part of normal day-to-day interactions. We used fMRI to examine brain areas sensitive to congruence of audiovisual valence and their overlap with areas sensitive to valence. Twenty-one participants watched audiovisual clips with either congruent or incongruent valence across visual and auditory modalities. We showed that affective congruence versus incongruence across visual and auditory modalities is identifiable on a trial-by-trial basis across participants. Representations of affective congruence were widely distributed with some overlap with the areas sensitive to valence. Regions of overlap included bilateral superior temporal cortex and right pregenual anterior cingulate. The overlap between the regions identified here and in the emotion congruence literature lends support to the idea that valence may be a key determinant of affective congruence processing across a variety of discrete emotions.
7
Noel JP, Ishizawa Y, Patel SR, Eskandar EN, Wallace MT. Leveraging Nonhuman Primate Multisensory Neurons and Circuits in Assessing Consciousness Theory. J Neurosci 2019; 39:7485-7500. [PMID: 31358654] [PMCID: PMC6750944] [DOI: 10.1523/jneurosci.0934-19.2019]
Abstract
Both the global neuronal workspace (GNW) and integrated information theory (IIT) posit that highly complex and interconnected networks engender perceptual awareness. GNW specifies that activity recruiting frontoparietal networks will elicit a subjective experience, whereas IIT is more concerned with the functional architecture of networks than with activity within them. Here, we argue that according to IIT mathematics, circuits converging on integrative versus convergent yet non-integrative neurons should support a greater degree of consciousness. We test this hypothesis by analyzing a dataset of neuronal responses collected simultaneously from primary somatosensory cortex (S1) and ventral premotor cortex (vPM) in nonhuman primates presented with auditory, tactile, and audio-tactile stimuli as they were progressively anesthetized with propofol. We first describe the multisensory (audio-tactile) characteristics of S1 and vPM neurons (mean and dispersion tendencies, as well as noise correlations), and functionally label these neurons as convergent or integrative according to their spiking responses. Then, we characterize how these different pools of neurons behave as a function of consciousness. At odds with the IIT mathematics, the results suggest that convergent neurons more readily exhibit properties of consciousness (neural complexity and noise correlation) and are more impacted during the loss of consciousness than integrative neurons. Last, we provide support for the GNW by showing that neural ignition (i.e., same-trial coactivation of S1 and vPM) was more frequent in conscious than unconscious states. Overall, we contrast GNW and IIT within the same single-unit activity dataset, and find support for the GNW.

SIGNIFICANCE STATEMENT A number of prominent theories of consciousness exist, and several share strong commonalities, such as the central role they ascribe to integration. Despite the important and far-reaching consequences that a better understanding of consciousness promises to bring, for instance in diagnosing disorders of consciousness (e.g., coma, vegetative state, locked-in syndrome), these theories are seldom tested via invasive techniques (with high signal-to-noise ratios), and have never been directly confronted within a single dataset. Here, we first derive concrete and testable predictions from the global neuronal workspace and integrated information theory of consciousness. Then, we put these to the test by functionally labeling specific neurons as either convergent or integrative nodes, and examining the responses of these neurons during anesthetic-induced loss of consciousness.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, New York 10003
- Shaun R Patel
- Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts 02114
- Emad N Eskandar
- Leo M. Davidoff Department of Neurological Surgery, Albert Einstein College of Medicine, Bronx, New York 10461
- Mark T Wallace
- Department of Hearing and Speech, Vanderbilt University Medical School, Nashville, Tennessee 37235
- Department of Psychology, Vanderbilt University, Nashville, Tennessee 37235
- Department of Psychiatry and Behavioral Sciences, Vanderbilt Medical School, Nashville, Tennessee 37235
8
Benedek G, Keri S, Nagy A, Braunitzer G, Norita M. A multimodal pathway including the basal ganglia in the feline brain. Physiol Int 2019; 106:95-113. [PMID: 31271309] [DOI: 10.1556/2060.106.2019.09]
Abstract
The purpose of this paper is to give an overview of our present knowledge of the feline tecto-thalamo-basal ganglia cortical sensory pathway. We reviewed morphological and electrophysiological studies of the cortical areas located in the ventral bank of the anterior ectosylvian sulcus and the region of the insular cortex, as well as the suprageniculate nucleus of the thalamus, the caudate nucleus, and the substantia nigra. Microelectrode studies revealed common receptive field properties in all these structures. The receptive fields were extremely large and multisensory, with pronounced sensitivity to the motion of visual stimuli. They often demonstrated direction and velocity selectivity. A preference for small visual stimuli was also a frequent finding; however, orientation sensitivity was absent. It became obvious that the structures of the investigated sensory loop exhibit a unique kind of information processing, not found anywhere else in the feline visual system.
Affiliation(s)
- G Benedek
- Department of Physiology, University of Szeged, Szeged, Hungary
- S Keri
- Department of Physiology, University of Szeged, Szeged, Hungary; Nyirő Gyula Hospital, Laboratory for Perception & Cognition and Clinical Neuroscience, Budapest, Hungary
- A Nagy
- Department of Physiology, University of Szeged, Szeged, Hungary
- G Braunitzer
- Department of Anatomy, Niigata University, Niigata, Japan
- M Norita
- Department of Anatomy, Niigata University, Niigata, Japan
9
Cross-Modal Competition: The Default Computation for Multisensory Processing. J Neurosci 2018; 39:1374-1385. [PMID: 30573648] [DOI: 10.1523/jneurosci.1806-18.2018]
Abstract
Mature multisensory superior colliculus (SC) neurons integrate information across the senses to enhance their responses to spatiotemporally congruent cross-modal stimuli. The development of this neurotypic feature of SC neurons requires experience with cross-modal cues. In the absence of such experience, the response of an SC neuron to congruent cross-modal cues is no more robust than its response to the most effective component cue. This "default" or "naive" state is believed to be one in which cross-modal signals do not interact. The present results challenge this characterization by identifying interactions between visual-auditory signals in male and female cats reared without visual-auditory experience. By manipulating the relative effectiveness of the visual and auditory cross-modal cues presented to each of these naive neurons, an active competition between cross-modal signals was revealed. Although contrary to current expectations, this result is explained by a neuro-computational model in which the default interaction is mutual inhibition. These findings suggest that multisensory neurons at all maturational stages are capable of some form of multisensory integration, and use experience with cross-modal stimuli to transition from their initial state of competition to their mature state of cooperation. By doing so, they develop the ability to enhance the physiological salience of cross-modal events, thereby increasing their impact on the sensorimotor circuitry of the SC and the likelihood that biologically significant events will elicit SC-mediated overt behaviors.

SIGNIFICANCE STATEMENT The present results demonstrate that the default mode of multisensory processing in the superior colliculus is competition, not non-integration as previously characterized. A neuro-computational model explains how these competitive dynamics can be implemented via mutual inhibition, and how this default mode is superseded by the emergence of cooperative interactions during development.
10
Gharaei S, Arabzadeh E, Solomon SG. Integration of visual and whisker signals in rat superior colliculus. Sci Rep 2018; 8:16445. [PMID: 30401871] [PMCID: PMC6219574] [DOI: 10.1038/s41598-018-34661-8]
Abstract
Multisensory integration is a process by which signals from different sensory modalities are combined to facilitate the detection and localization of external events. One substrate for multisensory integration is the midbrain superior colliculus (SC), which plays an important role in orienting behavior. In rodent SC, the visual and somatosensory (whisker) representations are in approximate registration, but whether and how these signals interact is unclear. We measured spiking activity in the SC of anesthetized hooded rats during presentation of visual and whisker stimuli, delivered either simultaneously or in isolation. Visual responses were found in all layers, but were primarily located in superficial layers. Whisker-responsive sites were primarily found in intermediate layers. In single- and multi-unit recording sites, spiking activity was usually sensitive to only one modality when stimuli were presented in isolation. By contrast, we observed robust and primarily suppressive interactions when stimuli were presented simultaneously to both modalities. We conclude that while the visual and whisker representations in rat SC are partially overlapping, there is limited excitatory convergence onto individual sites. Multimodal integration may instead rely on suppressive interactions between modalities.
Affiliation(s)
- Saba Gharaei
- Discipline of Physiology, School of Medical Sciences, The University of Sydney, Sydney, Australia; Eccles Institute of Neuroscience, John Curtin School of Medical Research, The Australian National University, Canberra, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, The Australian National University Node, Canberra, Australia
- Ehsan Arabzadeh
- Eccles Institute of Neuroscience, John Curtin School of Medical Research, The Australian National University, Canberra, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, The Australian National University Node, Canberra, Australia
- Samuel G Solomon
- Discipline of Physiology, School of Medical Sciences, The University of Sydney, Sydney, Australia; Institute of Behavioural Neuroscience, University College London, London, UK
11
Crochet S, Lee SH, Petersen CCH. Neural Circuits for Goal-Directed Sensorimotor Transformations. Trends Neurosci 2018; 42:66-77. [PMID: 30201180] [DOI: 10.1016/j.tins.2018.08.011]
Abstract
Precisely wired neuronal circuits process sensory information in a learning- and context-dependent manner in order to govern behavior. Simple sensory decision-making tasks in rodents are now beginning to reveal the contributions of distinct cell types and brain regions participating in the conversion of sensory information into learned goal-directed motor output. Task learning is accompanied by target-specific routing of sensory information to specific downstream cortical regions, with higher-order cortical regions such as the posterior parietal cortex, medial prefrontal cortex, and hippocampus appearing to play important roles in learning- and context-dependent processing of sensory input. An important challenge for future research is to connect cell-type-specific activity in these brain regions with motor neurons responsible for action initiation.
Affiliation(s)
- Sylvain Crochet
- Laboratory of Sensory Processing, Brain Mind Institute, Faculty of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Seung-Hee Lee
- Laboratory of Sensory Processing, Brain Mind Institute, Faculty of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland; Department of Biological Sciences, KAIST, Daejeon, Republic of Korea
- Carl C H Petersen
- Laboratory of Sensory Processing, Brain Mind Institute, Faculty of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
12
Del Tufo SN, Frost SJ, Hoeft F, Cutting LE, Molfese PJ, Mason GF, Rothman DL, Fulbright RK, Pugh KR. Neurochemistry Predicts Convergence of Written and Spoken Language: A Proton Magnetic Resonance Spectroscopy Study of Cross-Modal Language Integration. Front Psychol 2018; 9:1507. [PMID: 30233445] [PMCID: PMC6131664] [DOI: 10.3389/fpsyg.2018.01507]
Abstract
Recent studies have provided evidence of associations between neurochemistry and reading (dis)ability (Pugh et al., 2014). Based on a long history of studies indicating that fluent reading entails the automatic convergence of the written and spoken forms of language, and on our recently proposed Neural Noise Hypothesis (Hancock et al., 2017), we hypothesized that individual differences in cross-modal integration would mediate, at least partially, the relationship between neurochemical concentrations and reading. Cross-modal integration was measured in 231 children using a two-alternative forced-choice cross-modal matching task with three language conditions (letters, words, and pseudowords) and two levels of difficulty within each language condition. Neurometabolite concentrations of choline (Cho), glutamate (Glu), gamma-aminobutyric acid (GABA), and N-acetyl-aspartate (NAA) were then measured in a subset of this sample (n = 70) with magnetic resonance spectroscopy (MRS). A structural equation mediation model revealed that the effect of cross-modal word matching mediated the relationship between increased Glu (which has been proposed to be an index of neural noise) and poorer reading ability. In addition, the effect of cross-modal word matching fully mediated a relationship between increased Cho and poorer reading ability. Multilevel mixed-effects models confirmed that lower Cho predicted faster cross-modal matching reaction times, specifically in the hard word condition. These Cho findings are consistent with previous work in both adults and children showing a negative association between Cho and reading ability. We also found two novel neurochemical relationships: lower GABA and higher NAA predicted faster cross-modal matching reaction times. We interpret these results within a biochemical framework in which the ability of neurochemistry to predict reading ability may be at least partially explained by cross-modal integration.
Collapse
Affiliation(s)
- Stephanie N Del Tufo
- Department of Special Education, Peabody College, Vanderbilt University, Nashville, TN, United States; Vanderbilt Brain Institute, Vanderbilt University School of Medicine, Nashville, TN, United States; Haskins Laboratories, New Haven, CT, United States
- Fumiko Hoeft
- Haskins Laboratories, New Haven, CT, United States; Department of Psychiatry, University of California, San Francisco, San Francisco, CA, United States
- Laurie E Cutting
- Department of Special Education, Peabody College, Vanderbilt University, Nashville, TN, United States; Vanderbilt Brain Institute, Vanderbilt University School of Medicine, Nashville, TN, United States; Haskins Laboratories, New Haven, CT, United States; Peabody College of Education and Human Development, Vanderbilt University, Nashville, TN, United States; Vanderbilt Kennedy Center, Vanderbilt University, Nashville, TN, United States
- Peter J Molfese
- Haskins Laboratories, New Haven, CT, United States; Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, United States
- Graeme F Mason
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, United States; Department of Psychiatry, Yale University School of Medicine, New Haven, CT, United States
- Douglas L Rothman
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, United States; Department of Biomedical Engineering, Yale University School of Medicine, New Haven, CT, United States
- Robert K Fulbright
- Haskins Laboratories, New Haven, CT, United States; Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, United States
- Kenneth R Pugh
- Haskins Laboratories, New Haven, CT, United States; Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, United States; Department of Psychological Sciences, University of Connecticut, Storrs, CT, United States
13
Yamasaki D, Miyoshi K, Altmann CF, Ashida H. Front-Presented Looming Sound Selectively Alters the Perceived Size of a Visual Looming Object. Perception 2018; 47:751-771. [PMID: 29783921 DOI: 10.1177/0301006618777708] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Despite accumulating evidence for the spatial rule, by which cross-modal interaction depends on the spatial consistency of the stimuli, it is still unclear whether 3D spatial consistency (i.e., front/rear of the body) of stimuli also regulates audiovisual interaction. We investigated how sounds with increasing/decreasing intensity (looming/receding sounds) presented from the front and rear space of the body affect the size perception of a dynamic visual object. Participants performed a size-matching task (Experiments 1 and 2) and a size-adjustment task (Experiment 3) with visual stimuli of increasing/decreasing diameter, while being exposed to a front- or rear-presented sound of increasing/decreasing intensity. Across these experiments, we demonstrated that only the front-presented looming sound caused overestimation of the size of the spatially consistent looming visual stimulus, but not of the spatially inconsistent or receding visual stimuli. The receding sound had no significant effect on vision. Our results reveal that a looming sound alters dynamic visual size perception depending on the consistency of the approaching quality and the front-rear spatial location of the audiovisual stimuli, suggesting that the human brain processes audiovisual inputs differently based on their 3D spatial consistency. This selective interaction between looming signals should contribute to faster detection of approaching threats. Our findings extend the spatial rule governing audiovisual interaction into 3D space.
Affiliation(s)
- Christian F Altmann
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, Japan
14
Newlands SD, Abbatematteo B, Wei M, Carney LH, Luan H. Convergence of linear acceleration and yaw rotation signals on non-eye movement neurons in the vestibular nucleus of macaques. J Neurophysiol 2018; 119:73-83. [PMID: 28978765 DOI: 10.1152/jn.00382.2017] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023] Open
Abstract
Roughly half of all vestibular nucleus neurons without eye movement sensitivity respond to both angular rotation and linear acceleration. Linear acceleration signals arise from otolith organs, and rotation signals arise from semicircular canals. In the vestibular nerve, these signals are carried by different afferents. Vestibular nucleus neurons represent the first point of convergence for these distinct sensory signals. This study systematically evaluated how rotational and translational signals interact in single neurons in the vestibular nuclei: multisensory integration at the first opportunity for convergence between these two independent vestibular sensory signals. Single-unit recordings were made from the vestibular nuclei of awake macaques during yaw rotation, translation in the horizontal plane, and combinations of rotation and translation at different frequencies. The overall response magnitude to combined translation and rotation was generally less than the sum of the magnitudes of the responses to the stimuli applied independently. However, we found that under conditions in which the peaks of the rotational and translational responses were coincident, these signals were approximately additive. With presentation of rotation and translation at different frequencies, rotation was attenuated more than translation, regardless of which was at a higher frequency. These data suggest a nonlinear interaction between these two sensory modalities in the vestibular nuclei, in which coincident peak responses are proportionally stronger than other, off-peak interactions. These results are similar to those reported for other forms of multisensory integration, such as audio-visual integration in the superior colliculus. NEW & NOTEWORTHY This is the first study to systematically explore the interaction of rotational and translational signals in the vestibular nuclei through independent manipulation. The results demonstrate nonlinear integration leading to maximum response amplitude when the timing and direction of peak rotational and translational responses are coincident.
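The timing dependence described here has a simple linear baseline worth keeping in mind: if two same-frequency sinusoidal response components are treated as phasors, their linear sum is exactly additive when the peaks coincide and sub-additive otherwise. A toy sketch (magnitudes and phase are arbitrary, not values from the study):

```python
import numpy as np

def combined_magnitude(a, b, phase_deg):
    """Magnitude of the linear sum of two same-frequency sinusoidal
    response components, represented as phasors."""
    return abs(a + b * np.exp(1j * np.deg2rad(phase_deg)))

rot, trans = 10.0, 6.0                            # arbitrary response magnitudes
aligned = combined_magnitude(rot, trans, 0.0)     # coincident peaks: 10 + 6 = 16
offset = combined_magnitude(rot, trans, 120.0)    # misaligned peaks: sub-additive
print(aligned, offset)
```

The study's point is that measured combined responses fall below even this linear prediction when the signals are misaligned, i.e., the interaction is genuinely nonlinear, whereas near coincident peaks summation is approximately additive.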
Affiliation(s)
- Shawn D Newlands
- Department of Otolaryngology, University of Rochester Medical Center, Rochester, New York; Department of Neuroscience, University of Rochester Medical Center, Rochester, New York
- Ben Abbatematteo
- Department of Biomedical Engineering, University of Rochester, Rochester, New York
- Min Wei
- Department of Otolaryngology, University of Rochester Medical Center, Rochester, New York
- Laurel H Carney
- Department of Biomedical Engineering, University of Rochester, Rochester, New York; Department of Neuroscience, University of Rochester Medical Center, Rochester, New York
- Hongge Luan
- Department of Otolaryngology, University of Rochester Medical Center, Rochester, New York
15
Xu J, Bi T, Keniston L, Zhang J, Zhou X, Yu L. Deactivation of Association Cortices Disrupted the Congruence of Visual and Auditory Receptive Fields in Superior Colliculus Neurons. Cereb Cortex 2017; 27:5568-5578. [PMID: 27797831 DOI: 10.1093/cercor/bhw324] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2016] [Indexed: 11/13/2022] Open
Abstract
Physiological and behavioral studies in cats show that corticotectal inputs play a critical role in the information-processing capabilities of neurons in the deeper layers of the superior colliculus (SC). In particular, sensory inputs from functionally related association cortices are critical for SC multisensory integration, but the mechanism underlying this influence is still unclear. Here, results demonstrate that deactivation of the relevant cortices can both dislocate SC visual and auditory spatial receptive fields (RFs) and decrease their overall size, reducing their alignment. Further analysis showed that this RF separation is significantly correlated with the decrement in neurons' multisensory enhancement and is most pronounced at low stimulus intensities. In addition, cortical deactivation could influence the degree of stimulus effectiveness, illustrating the means by which higher-order cortices may modify the multisensory activity of the SC.
Affiliation(s)
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Science, East China Normal University, Shanghai, 200062, China
| | - Tingting Bi
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Science, East China Normal University, Shanghai, 200062, China
| | - Les Keniston
- Department of Physical Therapy, University of Maryland Eastern Shore, Princess Anne, MD 21853, USA
| | - Jiping Zhang
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Science, East China Normal University, Shanghai, 200062, China
| | - Xiaoming Zhou
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Science, East China Normal University, Shanghai, 200062, China.,Collaborative Innovation Center for Brain Science, East China Normal University, Shanghai 200062, China
| | - Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Science, East China Normal University, Shanghai, 200062, China
| |
16
Spence C, Lee J, Van der Stoep N. Responding to sounds from unseen locations: crossmodal attentional orienting in response to sounds presented from the rear. Eur J Neurosci 2017; 51:1137-1150. [PMID: 28973789 DOI: 10.1111/ejn.13733] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2017] [Revised: 09/27/2017] [Accepted: 09/27/2017] [Indexed: 11/28/2022]
Abstract
To date, most of the research on spatial attention has focused on probing people's responses to stimuli presented in frontal space. That is, few researchers have attempted to assess what happens in the space that is currently unseen (essentially rear space). In a sense, then, 'out of sight' is, very much, 'out of mind'. In this review, we highlight what is presently known about the perception and processing of sensory stimuli (focusing on sounds) whose source is not currently visible. We briefly summarize known differences in the localizability of sounds presented from different locations in 3D space, and discuss the consequences for the crossmodal attentional and multisensory perceptual interactions taking place in various regions of space. The latest research now clearly shows that the kinds of crossmodal interactions that take place in rear space are very often different in kind from those that have been documented in frontal space. Developing a better understanding of how people respond to unseen sound sources in naturalistic environments by integrating findings emerging from multiple fields of research will likely lead to the design of better warning signals in the future. This review highlights the need for neuroscientists interested in spatial attention to spend more time researching what happens (in terms of the covert and overt crossmodal orienting of attention) in rear space.
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, OX1 3UD, UK
- Jae Lee
- Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, OX1 3UD, UK
- Nathan Van der Stoep
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
17
Ohshiro T, Angelaki DE, DeAngelis GC. A Neural Signature of Divisive Normalization at the Level of Multisensory Integration in Primate Cortex. Neuron 2017; 95:399-411.e8. [PMID: 28728025 DOI: 10.1016/j.neuron.2017.06.043] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2017] [Revised: 06/19/2017] [Accepted: 06/26/2017] [Indexed: 10/19/2022]
Abstract
Studies of multisensory integration by single neurons have traditionally emphasized empirical principles that describe nonlinear interactions between inputs from two sensory modalities. We previously proposed that many of these empirical principles could be explained by a divisive normalization mechanism operating in brain regions where multisensory integration occurs. This normalization model makes a critical diagnostic prediction: a non-preferred sensory input from one modality, which activates the neuron on its own, should suppress the response to a preferred input from another modality. We tested this prediction by recording from neurons in macaque area MSTd that integrate visual and vestibular cues regarding self-motion. We show that many MSTd neurons exhibit the diagnostic form of cross-modal suppression, whereas unisensory neurons in area MT do not. The normalization model also fits population responses better than a model based on subtractive inhibition. These findings provide strong support for a divisive normalization mechanism in multisensory integration.
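The diagnostic prediction can be reproduced in a minimal divisive-normalization sketch (population size, weights, and exponents below are illustrative assumptions, not the paper's fitted model): a weak input through a neuron's non-preferred modality adds little direct drive but inflates the normalization pool shared with the population, pushing the combined response below the preferred-modality response alone.

```python
import numpy as np

# Toy population of 100 neurons whose visual/vestibular weights trade off.
w_vis = np.linspace(0.0, 1.0, 100)
w_vest = np.linspace(1.0, 0.0, 100)

def responses(vis, vest, n=2.0, alpha=1.0):
    """Each neuron's weighted drive, raised to a power and divided by a
    normalization pool shared across the whole population."""
    drive = np.maximum(w_vis * vis + w_vest * vest, 0.0) ** n
    return drive / (alpha ** n + drive.mean())

i = 99  # a purely visual-preferring neuron (w_vis = 1, w_vest = 0)

r_vis_alone = responses(vis=1.0, vest=0.0)[i]
r_combined = responses(vis=1.0, vest=0.3)[i]

# The weak vestibular input adds no direct drive to neuron i, but it
# inflates the shared pool, so the combined response is suppressed.
print(r_vis_alone, r_combined)
```

Purely unisensory neurons (as in area MT) lack the cross-modal term in the pool, which is why the model predicts no such suppression there.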
Affiliation(s)
- Tomokazu Ohshiro
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14611, USA; Department of Physiology, Tohoku University School of Medicine, Sendai 980-8575, Japan
- Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14611, USA
18
The normal environment delays the development of multisensory integration. Sci Rep 2017; 7:4772. [PMID: 28684852 PMCID: PMC5500544 DOI: 10.1038/s41598-017-05118-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2016] [Accepted: 05/24/2017] [Indexed: 11/08/2022] Open
Abstract
Multisensory neurons in animals whose cross-modal experiences are compromised during early life fail to develop the ability to integrate information across those senses. Consequently, they lack the ability to increase the physiological salience of the events that provide the convergent cross-modal inputs. The present study demonstrates that superior colliculus (SC) neurons in animals whose visual-auditory experience is compromised early in life by noise-rearing can develop visual-auditory multisensory integration capabilities rapidly when periodically exposed to a single set of visual-auditory stimuli in a controlled laboratory paradigm. However, they remain compromised if their experiences are limited to a normal housing environment. These observations seem counterintuitive given that multisensory integrative capabilities ordinarily develop during early life in normal environments, in which a wide variety of sensory stimuli facilitate the functional organization of complex neural circuits at multiple levels of the neuraxis. However, the very richness and inherent variability of sensory stimuli in normal environments will lead to a less regular coupling of any given set of cross-modal cues than does the otherwise "impoverished" laboratory exposure paradigm. That this poses no significant problem for the neonate, but does for the adult, indicates a maturational shift in the requirements for the development of multisensory integration capabilities.
19
Kardamakis AA, Pérez-Fernández J, Grillner S. Spatiotemporal interplay between multisensory excitation and recruited inhibition in the lamprey optic tectum. eLife 2016; 5. [PMID: 27635636 PMCID: PMC5026466 DOI: 10.7554/elife.16472] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2016] [Accepted: 08/14/2016] [Indexed: 11/23/2022] Open
Abstract
Animals integrate the different senses to facilitate event-detection for navigation in their environment. In vertebrates, the optic tectum (superior colliculus) commands gaze shifts by synaptic integration of different sensory modalities. Recent work suggests that the tectum can elaborate gaze reorientation commands on its own, rather than merely acting as a relay from upstream/forebrain circuits to downstream premotor centers. We show that tectal circuits can perform multisensory computations independently and, hence, configure final motor commands. Single tectal neurons receive converging visual and electrosensory inputs, as investigated in the lamprey - a phylogenetically conserved vertebrate. When these two sensory inputs overlap in space and time, response enhancement of output neurons occurs locally in the tectum, whereas surrounding areas and temporally misaligned inputs are inhibited. Retinal and electrosensory afferents elicit local monosynaptic excitation, quickly followed by inhibition via recruitment of GABAergic interneurons. Multisensory inputs can thus regulate event-detection within the tectum through local inhibition without forebrain control. DOI: http://dx.doi.org/10.7554/eLife.16472.001 Many events occur around us simultaneously, which we detect through our senses. A critical task is to decide which of these events is the most important to look at in a given moment of time. This problem is solved by an ancient area of the brain called the optic tectum (known as the superior colliculus in mammals). The different senses are represented as superimposed maps in the optic tectum. Events that occur in different locations activate different areas of the map. Neurons in the optic tectum combine the responses from different senses to direct the animal's attention and increase how reliably important events are detected. If an event is simultaneously registered by two senses, then certain neurons in the optic tectum will enhance their activity. By contrast, if two senses provide conflicting information about how different events progress, then these same neurons will be silenced. While this phenomenon of 'multisensory integration' is well described, little is known about how the optic tectum performs this integration. Kardamakis, Pérez-Fernández and Grillner have now studied multisensory integration in fish called lampreys, which belong to the oldest group of backboned animals. These fish can navigate using electroreception - the ability to detect electrical signals from the environment. Experiments that examined the connections between neurons in the optic tectum and monitored their activity revealed a neural circuit that consists of two types of neurons: inhibitory interneurons, and projecting neurons that connect the optic tectum to different motor centers in the brainstem. The circuit contains neurons that can receive inputs from both vision and electroreception when these senses are both activated from the same point in space. Incoming signals from the two senses activate the areas on the sensory maps that correspond to the location where the event occurred. This triggers the activity of the interneurons, which immediately send 'stop' signals. Thus, while an area of the sensory map and its output neurons are activated, the surrounding areas of the tectum are inhibited. Overall, the findings presented by Kardamakis, Pérez-Fernández and Grillner suggest that the optic tectum can direct attention to a particular event without requiring input from other brain areas. This ability has most likely been preserved throughout evolution. Future studies will aim to determine how the commands generated by the optic tectum circuit are translated into movements. DOI: http://dx.doi.org/10.7554/eLife.16472.002
Affiliation(s)
- Sten Grillner
- Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
20
Grasso PA, Benassi M, Làdavas E, Bertini C. Audio-visual multisensory training enhances visual processing of motion stimuli in healthy participants: an electrophysiological study. Eur J Neurosci 2016; 44:2748-2758. [PMID: 26921844 DOI: 10.1111/ejn.13221] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2015] [Revised: 01/29/2016] [Accepted: 02/19/2016] [Indexed: 11/29/2022]
Abstract
Evidence from electrophysiological and imaging studies suggests that audio-visual (AV) stimuli presented in spatial coincidence enhance activity in the subcortical colliculo-dorsal extrastriate pathway. To test whether repetitive AV stimulation might specifically activate this neural circuit underlying multisensory integrative processes, electroencephalographic data were recorded before and after 2 h of AV training, during the execution of two lateralized visual tasks: a motion discrimination task, relying on activity in the colliculo-dorsal MT pathway, and an orientation discrimination task, relying on activity in the striate and early ventral extrastriate cortices. During training, participants were asked to detect and perform a saccade towards AV stimuli that were disproportionally allocated to one hemifield (the trained hemifield). Half of the participants underwent a training in which AV stimuli were presented in spatial coincidence, while the remaining half underwent a training in which AV stimuli were presented in spatial disparity (32°). Participants who received AV training with stimuli in spatial coincidence had a post-training enhancement of the anterior N1 component in the motion discrimination task, but only in response to stimuli presented in the trained hemifield. However, no effect was found in the orientation discrimination task. In contrast, participants who received AV training with stimuli in spatial disparity showed no effects on either task. The observed N1 enhancement might reflect enhanced discrimination for motion stimuli, probably due to increased activity in the colliculo-dorsal MT pathway induced by multisensory training.
Affiliation(s)
- Paolo A Grasso
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40127 Bologna, Italy; CsrNC, Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Viale Europa 980, 47521 Cesena, Italy
- Mariagrazia Benassi
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40127 Bologna, Italy
- Elisabetta Làdavas
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40127 Bologna, Italy; CsrNC, Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Viale Europa 980, 47521 Cesena, Italy
- Caterina Bertini
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40127 Bologna, Italy; CsrNC, Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Viale Europa 980, 47521 Cesena, Italy
21
Abstract
Cross-modal attention and multisensory integration are essential to how we perceive the world. Our most immediate sense of the environment is based on what we see and what we hear, so it is important to understand the interactions between visual and auditory inputs. Previous studies have shown that multisensory integration can be modulated by attention. However, how top-down attention is controlled or allocated across the sensory modalities remains unclear. In this study, we used functional MRI to measure the cortical areas activated by a cue-target spatial attention paradigm in both the visual and auditory modalities. Behavioral reaction times indicated that the two types of stimuli interact. The imaging results indicated that interactions between multisensory inputs can lead to enhancement or depression of the cortical response under top-down spatial attention. Moreover, the activation of the middle temporal gyrus and insula in tasks with irrelevant stimuli suggests that multisensory integration proceeds automatically.
22
When audiovisual correspondence disturbs visual processing. Exp Brain Res 2016; 234:1325-32. [PMID: 26884130 DOI: 10.1007/s00221-016-4591-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2015] [Accepted: 01/30/2016] [Indexed: 10/22/2022]
Abstract
Multisensory integration is known to create a more robust and reliable perceptual representation of one's environment. Specifically, a congruent auditory input can make a visual stimulus more salient, consequently enhancing the visibility and detection of the visual target. However, it remains largely unknown whether a congruent auditory input can also impair visual processing. In the current study, we demonstrate that temporally congruent auditory input disrupts visual processing, consequently slowing down visual target detection. More importantly, this cross-modal inhibition occurs only when the contrast of visual targets is high. When the contrast of visual targets is low, enhancement of visual target detection is observed, consistent with the prediction based on the principle of inverse effectiveness (PIE) in cross-modal integration. The switch of the behavioral effect of audiovisual interaction from benefit to cost further extends the PIE to encompass the suppressive cross-modal interaction.
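The principle of inverse effectiveness invoked here can be stated in one line: proportional multisensory enhancement, relative to the best unisensory response, is largest when the unisensory responses are weak. A sketch with made-up response values (not data from this study):

```python
def enhancement(v, a, av):
    """Multisensory enhancement (%) of the combined response
    relative to the best unisensory response."""
    best = max(v, a)
    return 100.0 * (av - best) / best

# Low-contrast condition: weak unisensory responses, superadditive combination.
low = enhancement(v=2.0, a=1.5, av=6.0)     # large proportional gain
# High-contrast condition: strong unisensory responses, sub-additive combination.
high = enhancement(v=20.0, a=15.0, av=22.0)  # small proportional gain
print(low, high)
```

A suppressive interaction, as reported here at high contrast, would correspond to a negative value of this index (the combined response falling below the best unisensory response).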
23
Zelic G, Mottet D, Lagarde J. Perceptuo-motor compatibility governs multisensory integration in bimanual coordination dynamics. Exp Brain Res 2015; 234:463-74. [DOI: 10.1007/s00221-015-4476-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2014] [Accepted: 10/15/2015] [Indexed: 11/30/2022]
24
Méndez-Balbuena I, Huidobro N, Silva M, Flores A, Trenado C, Quintanar L, Arias-Carrión O, Kristeva R, Manjarrez E. Effect of mechanical tactile noise on amplitude of visual evoked potentials: multisensory stochastic resonance. J Neurophysiol 2015; 114:2132-43. [PMID: 26156387 DOI: 10.1152/jn.00457.2015] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2015] [Accepted: 07/06/2015] [Indexed: 11/22/2022] Open
Abstract
The present investigation documents the electrophysiological occurrence of multisensory stochastic resonance in the human visual pathway elicited by tactile noise. We define multisensory stochastic resonance of brain evoked potentials as the phenomenon in which an intermediate level of input noise of one sensory modality enhances the brain evoked response of another sensory modality. Here we examined this phenomenon in visual evoked potentials (VEPs) modulated by the addition of tactile noise. Specifically, we examined whether a particular level of mechanical Gaussian noise applied to the index finger can improve the amplitude of the VEP. We compared the amplitude of the positive P100 VEP component between zero noise (ZN), optimal noise (ON), and high mechanical noise (HN). The data disclosed an inverted U-like graph for all the subjects, thus demonstrating the occurrence of a multisensory stochastic resonance in the P100 VEP.
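The inverted-U profile reported here can be reproduced with a generic threshold-detector toy model (a standard stochastic-resonance illustration, not the authors' analysis): a subthreshold periodic signal plus Gaussian noise passes through a hard threshold, and signal transmission is best at an intermediate noise level.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 5000)
signal = 0.8 * np.sin(2.0 * np.pi * t)   # subthreshold: never reaches 1.0 alone
threshold = 1.0

def transmission(noise_sd, n_trials=100):
    """Mean correlation between the thresholded noisy signal and the clean
    signal; trials with no threshold crossings contribute zero."""
    total = 0.0
    for _ in range(n_trials):
        noisy = signal + rng.normal(0.0, noise_sd, size=t.size)
        out = (noisy > threshold).astype(float)
        if out.std() > 0.0:
            total += np.corrcoef(out, signal)[0, 1]
    return total / n_trials

quality = {sd: transmission(sd) for sd in (0.05, 0.4, 2.0)}
# Intermediate noise transmits the subthreshold signal best (inverted U).
print(quality)
```

With too little noise the threshold is almost never crossed; with too much, crossings become indiscriminate; an intermediate level yields crossings time-locked to the signal's peaks, mirroring the ZN/ON/HN comparison in the abstract.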
Affiliation(s)
- Nayeli Huidobro
- Instituto de Fisiología, Benemérita Universidad Autónoma de Puebla, Puebla, Mexico
- Mayte Silva
- Instituto de Fisiología, Benemérita Universidad Autónoma de Puebla, Puebla, Mexico
- Amira Flores
- Instituto de Fisiología, Benemérita Universidad Autónoma de Puebla, Puebla, Mexico
- Carlos Trenado
- Institute of Clinical Neuroscience, Heinrich Heine University, Düsseldorf, Germany
- Luis Quintanar
- Facultad de Psicología, Benemérita Universidad Autónoma de Puebla, Puebla, Mexico
- Oscar Arias-Carrión
- Unidad de Trastornos del Movimiento y Sueño (TMS), Hospital General Dr. Manuel Gea González/IFC-UNAM, Mexico City, Mexico
- Rumyana Kristeva
- Department of Neurology, University of Freiburg, Freiburg, Germany
- Elias Manjarrez
- Instituto de Fisiología, Benemérita Universidad Autónoma de Puebla, Puebla, Mexico
25
Ursino M, Cuppini C, Magosso E. Neurocomputational approaches to modelling multisensory integration in the brain: A review. Neural Netw 2014; 60:141-65. [DOI: 10.1016/j.neunet.2014.08.003] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2014] [Revised: 08/05/2014] [Accepted: 08/07/2014] [Indexed: 10/24/2022]
26
Stone DB, Coffman BA, Bustillo JR, Aine CJ, Stephen JM. Multisensory stimuli elicit altered oscillatory brain responses at gamma frequencies in patients with schizophrenia. Front Hum Neurosci 2014; 8:788. [PMID: 25414652 PMCID: PMC4220133 DOI: 10.3389/fnhum.2014.00788] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2013] [Accepted: 09/17/2014] [Indexed: 12/21/2022] Open
Abstract
Deficits in auditory and visual unisensory responses are well documented in patients with schizophrenia; however, potential abnormalities elicited from multisensory audio-visual stimuli are less understood. Further, schizophrenia patients have shown abnormal patterns in task-related and task-independent oscillatory brain activity, particularly in the gamma frequency band. We examined oscillatory responses to basic unisensory and multisensory stimuli in schizophrenia patients (N = 46) and healthy controls (N = 57) using magnetoencephalography (MEG). Time-frequency decomposition was performed to determine regions of significant changes in gamma band power by group in response to unisensory and multisensory stimuli relative to baseline levels. Results showed significant behavioral differences between groups in response to unisensory and multisensory stimuli. In addition, time-frequency analysis revealed significant decreases and increases in gamma-band power in schizophrenia patients relative to healthy controls, which emerged both early and late over both sensory and frontal regions in response to unisensory and multisensory stimuli. Unisensory gamma-band power predicted multisensory gamma-band power differently by group. Furthermore, gamma-band power in these regions predicted performance in select measures of the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) test battery differently by group. These results reveal a unique pattern of task-related gamma-band power in schizophrenia patients relative to controls that may indicate reduced inhibition in combination with impaired oscillatory mechanisms in patients with schizophrenia.
Affiliation(s)
- David B Stone
- The Mind Research Network and Lovelace Biomedical and Environmental Research Institute, Albuquerque, NM, USA
- Brian A Coffman
- The Mind Research Network and Lovelace Biomedical and Environmental Research Institute, Albuquerque, NM, USA; Department of Psychology, Clinical Neuroscience Center, University of New Mexico, Albuquerque, NM, USA
- Juan R Bustillo
- Department of Psychiatry, Health Sciences Center, University of New Mexico, Albuquerque, NM, USA
- Cheryl J Aine
- Department of Radiology, Health Sciences Center, University of New Mexico, Albuquerque, NM, USA
- Julia M Stephen
- The Mind Research Network and Lovelace Biomedical and Environmental Research Institute, Albuquerque, NM, USA
27
Wallace MT, Stevenson RA. The construct of the multisensory temporal binding window and its dysregulation in developmental disabilities. Neuropsychologia 2014; 64:105-23. [PMID: 25128432 PMCID: PMC4326640 DOI: 10.1016/j.neuropsychologia.2014.08.005] [Citation(s) in RCA: 195] [Impact Index Per Article: 19.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2014] [Revised: 08/04/2014] [Accepted: 08/05/2014] [Indexed: 01/18/2023]
Abstract
Behavior, perception and cognition are strongly shaped by the synthesis of information across the different sensory modalities. Such multisensory integration often results in performance and perceptual benefits that reflect the additional information conferred by having cues from multiple senses providing redundant or complementary information. The spatial and temporal relationships of these cues provide powerful statistical information about how these cues should be integrated or "bound" in order to create a unified perceptual representation. Much recent work has examined the temporal factors that are integral in multisensory processing, much of it focused on the construct of the multisensory temporal binding window - the epoch of time within which stimuli from different modalities are likely to be integrated and perceptually bound. Emerging evidence suggests that this temporal window is altered in a series of neurodevelopmental disorders, including autism, dyslexia and schizophrenia. In addition to their role in sensory processing, these deficits in multisensory temporal function may play an important role in the perceptual and cognitive weaknesses that characterize these clinical disorders. Within this context, focus on improving the acuity of multisensory temporal function may have important implications for the amelioration of the "higher-order" deficits that serve as the defining features of these disorders.
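The temporal binding window is commonly quantified by fitting a Gaussian to simultaneity-judgment data and taking its width. The sketch below uses hypothetical response proportions and a log-space quadratic fit (a Gaussian is a parabola in log space); it is illustrative only, not the procedure used in the review.

```python
import numpy as np

# Hypothetical simultaneity-judgment data: proportion of "simultaneous"
# responses at each audiovisual stimulus-onset asynchrony (SOA, in ms)
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
p_sim = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.55, 0.20, 0.08])

# Quadratic fit to log(p_sim) recovers the Gaussian width without an
# iterative optimizer: log p = c2*x^2 + c1*x + c0, with c2 = -1/(2*sigma^2)
c2, c1, c0 = np.polyfit(soas, np.log(p_sim), 2)
sigma = np.sqrt(-1.0 / (2.0 * c2))                 # SD of the fitted Gaussian, ms
tbw = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma     # full width at half maximum
print(f"temporal binding window ~ {tbw:.0f} ms")
```

With these invented data the window comes out around 400 ms, in the range often reported for audiovisual simultaneity tasks; real studies fit per-participant psychometric functions rather than a single pooled curve.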
Affiliation(s)
- Mark T Wallace, Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN 37232, USA; Department of Hearing & Speech Sciences, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA
- Ryan A Stevenson, Department of Psychology, University of Toronto, Toronto, ON, Canada
28. Stein BE, Stanford TR, Rowland BA. Development of multisensory integration from the perspective of the individual neuron. Nat Rev Neurosci 2014;15:520-535. [PMID: 25158358] [DOI: 10.1038/nrn3742]
Abstract
The ability to use cues from multiple senses in concert is a fundamental aspect of brain function. It maximizes the brain’s use of the information available to it at any given moment and enhances the physiological salience of external events. Because each sense conveys a unique perspective of the external world, synthesizing information across senses affords computational benefits that cannot otherwise be achieved. Multisensory integration not only has substantial survival value but can also create unique experiences that emerge when signals from different sensory channels are bound together. However, neurons in a newborn’s brain are not capable of multisensory integration, and studies in the midbrain have shown that the development of this process is not predetermined. Rather, its emergence and maturation critically depend on cross-modal experiences that alter the underlying neural circuit in such a way that optimizes multisensory integrative capabilities for the environment in which the animal will function.
29. Kronschnabel J, Brem S, Maurer U, Brandeis D. The level of audiovisual print-speech integration deficits in dyslexia. Neuropsychologia 2014;62:245-261. [PMID: 25084224] [DOI: 10.1016/j.neuropsychologia.2014.07.024]
Abstract
The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual only and auditory only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e. different brain responses to congruent and incongruent stimuli, were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli are superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed comparing the effects detected by the two techniques at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters. No significant differences in basic measures of audiovisual integration emerged. Convergence of hemodynamic and electrophysiological signals appeared to be limited and mainly occurred for highly significant and large effects in visual cortices. The findings suggest efficient superior temporal tuning to audiovisual congruency in controls. In impaired readers, however, grapho-phonological conversion is effortful and inefficient, although basic audiovisual mechanisms seem intact. This unprecedented demonstration of audiovisual deficits in adolescent dyslexics provides critical evidence that the phonological deficit might be explained by impaired audiovisual integration at a phonetic level, especially for naturalistic and word-like stimulation.
Affiliation(s)
- Jens Kronschnabel, University Clinics of Child and Adolescent Psychiatry (UCCAP), University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Silvia Brem, University Clinics of Child and Adolescent Psychiatry (UCCAP), University of Zurich, Zurich, Switzerland
- Urs Maurer, Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland; Department of Psychology, University of Zurich, Zurich, Switzerland
- Daniel Brandeis, University Clinics of Child and Adolescent Psychiatry (UCCAP), University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland; Department of Child and Adolescent Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim/Heidelberg University, Mannheim, Germany; Center for Integrative Human Physiology, University of Zurich, Zurich, Switzerland
30. Makovac E, Gerbino W. Color selectivity of the spatial congruency effect: evidence from the focused attention paradigm. J Gen Psychol 2014;141:18-34. [PMID: 24838018] [DOI: 10.1080/00221309.2013.837025]
Abstract
The multisensory response enhancement (MRE), occurring when the response to a visual target integrated with a spatially congruent sound is stronger than the response to the visual target alone, is believed to be mediated by the superior colliculus (SC) (Stein & Meredith, 1993). Here, we used a focused attention paradigm to show that the spatial congruency effect occurs with red (SC-effective) but not blue (SC-ineffective) visual stimuli, when presented with spatially congruent sounds. To isolate the chromatic component of SC-ineffective targets and to demonstrate the selectivity of the spatial congruency effect we used the random luminance modulation technique (Experiment 1) and the tritanopic technique (Experiment 2). Our results indicate that the spatial congruency effect does not require the distribution of attention over different sensory modalities and provide correlational evidence that the SC mediates the effect.
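Multisensory response enhancement of the kind invoked here is conventionally quantified against the best unisensory response (the interactive index of Meredith & Stein). A minimal sketch with invented response values:

```python
def multisensory_enhancement(cm, visual, auditory):
    """Percent enhancement of the cross-modal (CM) response over the best
    unisensory response: ME = 100 * (CM - max(V, A)) / max(V, A)."""
    best = max(visual, auditory)
    return 100.0 * (cm - best) / best

# invented mean responses: cross-modal, visual-alone, auditory-alone
print(multisensory_enhancement(18.0, 10.0, 6.0))  # → 80.0
```

A positive index indicates enhancement (as for the red, SC-effective targets above); an index near zero or negative indicates no benefit from the spatially congruent sound.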
31. Porcu E, Keitel C, Müller MM. Visual, auditory and tactile stimuli compete for early sensory processing capacities within but not between senses. Neuroimage 2014;97:224-235. [PMID: 24736186] [DOI: 10.1016/j.neuroimage.2014.04.024]
Abstract
We investigated whether unattended visual, auditory and tactile stimuli compete for capacity-limited early sensory processing across senses. In three experiments, we probed competitive audio-visual, visuo-tactile and audio-tactile stimulus interactions. To this end, continuous visual, auditory and tactile stimulus streams ('reference' stimuli) were frequency-tagged to elicit steady-state responses (SSRs). These electrophysiological oscillatory brain responses indexed ongoing stimulus processing in corresponding senses. To induce competition, we introduced transient frequency-tagged stimuli in same and/or different senses ('competitors') during reference presentation. Participants performed a separate visual discrimination task at central fixation to control for attentional biases of sensory processing. A comparison of reference-driven SSR amplitudes between competitor-present and competitor-absent periods revealed reduced amplitudes when a competitor was presented in the same sensory modality as the reference. Reduced amplitudes indicated the competitor's suppressive influence on reference stimulus processing. Crucially, no such suppression was found when a competitor was presented in a modality different from the reference. These results strongly suggest that early sensory competition is exclusively modality-specific and does not extend across senses. We discuss consequences of these findings for modeling the neural mechanisms underlying intermodal attention.
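Frequency tagging as described above isolates each stream's SSR at its own tagging frequency, so competitor-induced suppression appears as a drop in spectral amplitude at that frequency. A minimal numpy sketch with simulated data; the 12 Hz tag, noise level, and 50% suppression are invented for illustration.

```python
import numpy as np

def ssr_amplitude(signal, fs, tag_freq):
    """Amplitude of the steady-state response at the tagging frequency."""
    amps = np.abs(np.fft.rfft(signal)) / (signal.size / 2)   # amplitude spectrum
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    return amps[np.argmin(np.abs(freqs - tag_freq))]         # nearest frequency bin

fs, dur = 500.0, 4.0                       # invented sampling rate and epoch length
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(1)
# reference stream tagged at 12 Hz; a same-modality competitor is modeled
# here as halving the reference-driven amplitude
alone = np.sin(2 * np.pi * 12 * t) + rng.normal(0, 0.5, t.size)
with_competitor = 0.5 * np.sin(2 * np.pi * 12 * t) + rng.normal(0, 0.5, t.size)
a_alone = ssr_amplitude(alone, fs, 12.0)
a_comp = ssr_amplitude(with_competitor, fs, 12.0)
print(a_alone > a_comp)                    # suppression = reduced 12 Hz amplitude
```

Because the 4 s epoch contains an integer number of 12 Hz cycles, the tag falls on an exact FFT bin and the amplitude estimate is unbiased by spectral leakage.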
Affiliation(s)
- Emanuele Porcu, Institut für Psychologie, Universität Leipzig, Neumarkt 9-19, 04109 Leipzig, Germany
- Christian Keitel, Institut für Psychologie, Universität Leipzig, Neumarkt 9-19, 04109 Leipzig, Germany; Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, G12 8QB Glasgow, UK
- Matthias M Müller, Institut für Psychologie, Universität Leipzig, Neumarkt 9-19, 04109 Leipzig, Germany
32. Xu J, Yu L, Rowland BA, Stanford TR, Stein BE. Noise-rearing disrupts the maturation of multisensory integration. Eur J Neurosci 2013;39:602-613. [PMID: 24251451] [DOI: 10.1111/ejn.12423]
Abstract
It is commonly believed that the ability to integrate information from different senses develops according to associative learning principles as neurons acquire experience with co-active cross-modal inputs. However, previous studies have not distinguished between requirements for co-activation versus co-variation. To determine whether cross-modal co-activation is sufficient for this purpose in visual-auditory superior colliculus (SC) neurons, animals were reared in constant omnidirectional noise. By masking most spatiotemporally discrete auditory experiences, the noise created a sensory landscape that decoupled stimulus co-activation and co-variance. Although a near-normal complement of visual-auditory SC neurons developed, the vast majority could not engage in multisensory integration, revealing that visual-auditory co-activation was insufficient for this purpose. That experience with co-varying stimuli is required for multisensory maturation is consistent with the role of the SC in detecting and locating biologically significant events, but it also seems likely that this is a general requirement for multisensory maturation throughout the brain.
Affiliation(s)
- Jinghong Xu, Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC 27157, USA
33. Ghose D, Wallace MT. Heterogeneity in the spatial receptive field architecture of multisensory neurons of the superior colliculus and its effects on multisensory integration. Neuroscience 2013;256:147-162. [PMID: 24183964] [DOI: 10.1016/j.neuroscience.2013.10.044]
Abstract
Multisensory integration has been widely studied in neurons of the mammalian superior colliculus (SC). This has led to the description of various determinants of multisensory integration, including those based on stimulus- and neuron-specific factors. The most widely characterized of these illustrate the importance of the spatial and temporal relationships of the paired stimuli as well as their relative effectiveness in eliciting a response in determining the final integrated output. Although these stimulus-specific factors have generally been considered in isolation (i.e., manipulating stimulus location while holding all other factors constant), they have an intrinsic interdependency that has yet to be fully elucidated. For example, changes in stimulus location will likely also impact both the temporal profile of response and the effectiveness of the stimulus. The importance of better describing this interdependency is further reinforced by the fact that SC neurons have large receptive fields, and that responses at different locations within these receptive fields are far from equivalent. To address these issues, the current study was designed to examine the interdependency between the stimulus factors of space and effectiveness in dictating the multisensory responses of SC neurons. The results show that neuronal responsiveness changes dramatically with changes in stimulus location - highlighting a marked heterogeneity in the spatial receptive fields of SC neurons. More importantly, this receptive field heterogeneity played a major role in the integrative product exhibited by stimulus pairings, such that pairings at weakly responsive locations of the receptive fields resulted in the largest multisensory interactions. Together these results provide greater insight into the interrelationship of the factors underlying multisensory integration in SC neurons, and may have important mechanistic implications for multisensory integration and the role it plays in shaping SC-mediated behaviors.
Affiliation(s)
- D Ghose, Department of Psychology, Vanderbilt University, Nashville, TN, United States; Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN, United States
- M T Wallace, Department of Psychology, Vanderbilt University, Nashville, TN, United States; Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN, United States; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States; Department of Psychiatry, Vanderbilt University, Nashville, TN, United States; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, United States
34. Spence C. Just how important is spatial coincidence to multisensory integration? Evaluating the spatial rule. Ann N Y Acad Sci 2013;1296:31-49. [DOI: 10.1111/nyas.12121]
Affiliation(s)
- Charles Spence, Department of Experimental Psychology, Oxford University
35. Shi Y, Apker G, Buneo CA. Multimodal representation of limb endpoint position in the posterior parietal cortex. J Neurophysiol 2013;109:2097-2107. [DOI: 10.1152/jn.00223.2012]
Abstract
Understanding the neural representation of limb position is important for comprehending the control of limb movements and the maintenance of body schema, as well as for the development of neuroprosthetic systems designed to replace lost limb function. Multiple subcortical and cortical areas contribute to this representation, but its multimodal basis has largely been ignored. Regarding the parietal cortex, previous results suggest that visual information about arm position is not strongly represented in area 5, although these results were obtained under conditions in which animals were not using their arms to interact with objects in their environment, which could have affected the relative weighting of relevant sensory signals. Here we examined the multimodal basis of limb position in the superior parietal lobule (SPL) as monkeys reached to and actively maintained their arm position at multiple locations in a frontal plane. On half of the trials both visual and nonvisual feedback of the endpoint of the arm were available, while on the other trials visual feedback was withheld. Many neurons were tuned to arm position, while a smaller number were modulated by the presence/absence of visual feedback. Visual modulation generally took the form of a decrease in both firing rate and variability with limb vision and was associated with more accurate decoding of position at the population level under these conditions. These findings support a multimodal representation of limb endpoint position in the SPL but suggest that visual signals are relatively weakly represented in this area, and only at the population level.
Affiliation(s)
- Ying Shi, School of Biological and Health Systems Engineering, Arizona State University, Tempe, Arizona
- Gregory Apker, School of Biological and Health Systems Engineering, Arizona State University, Tempe, Arizona
- Christopher A. Buneo, School of Biological and Health Systems Engineering, Arizona State University, Tempe, Arizona
36. Yu L, Xu J, Rowland BA, Stein BE. Development of cortical influences on superior colliculus multisensory neurons: effects of dark-rearing. Eur J Neurosci 2013;37:1594-1601. [PMID: 23534923] [DOI: 10.1111/ejn.12182]
Abstract
Rearing cats from birth to adulthood in darkness prevents neurons in the superior colliculus (SC) from developing the capability to integrate visual and non-visual (e.g. visual-auditory) inputs. Presumably, this developmental anomaly is due to a lack of experience with the combination of those cues, which is essential to form associative links between them. The visual-auditory multisensory integration capacity of SC neurons has also been shown to depend on the functional integrity of converging visual and auditory inputs from the ipsilateral association cortex. Disrupting these cortico-collicular projections at any stage of life results in a pattern of outcomes similar to those found after dark-rearing; SC neurons respond to stimuli in both sensory modalities, but cannot integrate the information they provide. Thus, it is possible that dark-rearing compromises the development of these descending tecto-petal connections and the essential influences they convey. However, the results of the present experiments, using cortical deactivation to assess the presence of cortico-collicular influences, demonstrate that dark-rearing does not prevent the association cortex from developing robust influences over SC multisensory responses. In fact, dark-rearing may increase their potency over that observed in normally-reared animals. Nevertheless, their influences are still insufficient to support SC multisensory integration. It appears that cross-modal experience shapes the cortical influence to selectively enhance responses to cross-modal stimulus combinations that are likely to be derived from the same event. In the absence of this experience, the cortex develops an indiscriminate excitatory influence over its multisensory SC target neurons.
Affiliation(s)
- Liping Yu, School of Life Science, East China Normal University, Shanghai, China
37. Cuppini C, Magosso E, Rowland B, Stein B, Ursino M. Hebbian mechanisms help explain development of multisensory integration in the superior colliculus: a neural network model. Biol Cybern 2012;106:691-713. [PMID: 23011260] [PMCID: PMC3552306] [DOI: 10.1007/s00422-012-0511-9]
Abstract
The superior colliculus (SC) integrates relevant sensory information (visual, auditory, somatosensory) from several cortical and subcortical structures to program orientation responses to external events. However, this capacity is not present at birth, and it is acquired only through interactions with cross-modal events during maturation. Mathematical models provide a quantitative framework, valuable in helping to clarify the specific neural mechanisms underlying the maturation of multisensory integration in the SC. We extended a neural network model of the adult SC (Cuppini et al., Front Integr Neurosci 4:1-15, 2010) to describe the development of this phenomenon starting from an immature state, based on known or suspected anatomy and physiology, in which: (1) AES afferents are present but weak, (2) responses are driven by non-AES afferents, and (3) visual inputs have only marginal spatial tuning. Sensory experience was modeled by repeatedly presenting modality-specific and cross-modal stimuli. Synapses in the network were modified by simple Hebbian learning rules. As a consequence of this exposure, (1) receptive fields shrank and came into spatial register, and (2) SC neurons gained the adult characteristic integrative properties: enhancement, depression, and inverse effectiveness. Importantly, the unique architecture of the model guided the development so that integration became dependent on the relationship between the cortical input and the SC. Manipulations of the statistics of the experience during development changed the integrative profiles of the neurons, and the results matched those of physiological studies well.
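The model's core idea, that simple Hebbian rules plus the statistics of exposure produce the adult integrative profile, can be caricatured in a few lines. The toy sketch below is not the Cuppini et al. network; the bounded Hebbian rule, learning rate, decay, and rearing statistics are all invented, and it shows only that cross-modal synapses strengthen under concordant exposure but not under decorrelated exposure.

```python
import numpy as np

def rear(p_concordant, n_trials=2000, lr=0.02, decay=0.8, seed=0):
    """Grow visual and auditory synapses onto one model SC unit with a
    bounded Hebbian rule; p_concordant is the probability that a given
    exposure is a spatiotemporally concordant cross-modal event."""
    rng = np.random.default_rng(seed)
    w = np.array([0.05, 0.05])                # weak initial [visual, auditory] weights
    for _ in range(n_trials):
        if rng.random() < p_concordant:       # concordant cross-modal event
            x = np.array([1.0, 1.0])
        else:                                 # independent unisensory events
            x = (rng.random(2) < 0.5).astype(float)
        y = w @ x                             # postsynaptic activity
        # Hebbian growth (lr * y * x) balanced by decay, bounded in [0, 1]
        w = np.clip(w + lr * (y * x - decay * w), 0.0, 1.0)
    return w

w_concordant = rear(0.8)   # reared with mostly concordant cross-modal stimuli
w_random = rear(0.0)       # reared with decorrelated stimuli only
print(w_concordant.sum() > w_random.sum())
```

Under concordant exposure the correlated inputs drive the postsynaptic unit together and both weights saturate; under decorrelated exposure the decay term dominates and the weights stay weak, echoing the noise-rearing and random-exposure results cited elsewhere in this list.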
Affiliation(s)
- C Cuppini, Department of Electronics, Computer Science and Systems, University of Bologna, Bologna, Italy
38. Yu L, Rowland BA, Xu J, Stein BE. Multisensory plasticity in adulthood: cross-modal experience enhances neuronal excitability and exposes silent inputs. J Neurophysiol 2012;109:464-474. [PMID: 23114212] [DOI: 10.1152/jn.00739.2012]
Abstract
Multisensory superior colliculus neurons in cats were found to retain substantial plasticity to short-term, site-specific experience with cross-modal stimuli well into adulthood. Following cross-modal exposure trials, these neurons substantially increased their sensitivity to the cross-modal stimulus configuration as well as to its individual component stimuli. In many cases, the exposure experience also revealed a previously ineffective or "silent" input channel, rendering it overtly responsive. These experience-induced changes required relatively few exposure trials and could be retained for more than 1 h. However, their induction was generally restricted to experience with cross-modal stimuli. Only rarely were they induced by exposure to a modality-specific stimulus and were never induced by stimulating a previously ineffective input channel. This short-term plasticity likely provides substantial benefits to the organism in dealing with ongoing and sequential events that take place at a given location in space and may reflect the ability of multisensory superior colliculus neurons to rapidly alter their response properties to accommodate to changes in environmental challenges and event probabilities.
Affiliation(s)
- Liping Yu, Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina 27157-1010, USA
39. Ghose D, Barnett ZP, Wallace MT. Impact of response duration on multisensory integration. J Neurophysiol 2012;108:2534-2544. [PMID: 22896723] [DOI: 10.1152/jn.00286.2012]
Abstract
Multisensory neurons in the superior colliculus (SC) have been shown to have large receptive fields that are heterogeneous in nature. These neurons have the capacity to integrate their different sensory inputs, a process that has been shown to depend on the physical characteristics of the stimuli that are combined (i.e., spatial and temporal relationship and relative effectiveness). Recent work has highlighted the interdependence of these factors in driving multisensory integration, adding a layer of complexity to our understanding of multisensory processes. In the present study our goal was to add to this understanding by characterizing how stimulus location impacts the temporal dynamics of multisensory responses in cat SC neurons. The results illustrate that locations within the spatial receptive fields (SRFs) of these neurons can be divided into those showing short-duration responses and long-duration response profiles. Most importantly, discharge duration appears to be a good determinant of multisensory integration, such that short-duration responses are typically associated with a high magnitude of multisensory integration (i.e., superadditive responses) while long-duration responses are typically associated with low integrative capacity. These results further reinforce the complexity of the integrative features of SC neurons and show that the large SRFs of these neurons are characterized by vastly differing temporal dynamics, dynamics that strongly shape the integrative capacity of these neurons.
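The superadditive responses referred to above are defined against the predicted sum of the unisensory responses: a cross-modal response exceeding the sum is superadditive, one below it subadditive. A minimal sketch with invented response values:

```python
def interaction_type(cm, v, a):
    """Classify a cross-modal (CM) response against the additive prediction,
    i.e., the sum of the visual-alone (V) and auditory-alone (A) responses."""
    if cm > v + a:
        return "superadditive"
    if cm < v + a:
        return "subadditive"
    return "additive"

# invented mean responses per stimulus condition (CM, V, A)
print(interaction_type(25, 10, 8))   # → superadditive
print(interaction_type(12, 10, 8))   # → subadditive
```

In the study's terms, pairings at short-duration response locations tended toward the superadditive case, while long-duration response locations tended toward the additive or subadditive cases.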
Affiliation(s)
- Dipanwita Ghose, Department of Psychology, Vanderbilt University, Nashville, Tennessee 37240, USA
40. Incorporating cross-modal statistics in the development and maintenance of multisensory integration. J Neurosci 2012;32:2287-2298. [PMID: 22396404] [DOI: 10.1523/jneurosci.4304-11.2012]
Abstract
Development of multisensory integration capabilities in superior colliculus (SC) neurons was examined in cats whose visual-auditory experience was restricted to a circumscribed period during early life (postnatal day 30 to 8 months). Animals were periodically exposed to visual and auditory stimuli appearing either randomly in space and time, or always in spatiotemporal concordance. At all other times animals were maintained in darkness. Physiological testing was initiated at ∼2 years of age. Exposure to random visual and auditory stimuli proved insufficient to spur maturation of the ability to integrate cross-modal stimuli, but exposure to spatiotemporally concordant cross-modal stimuli was highly effective. The multisensory integration capabilities of neurons in the latter group resembled those of normal animals and were retained for >16 months in the absence of subsequent visual-auditory experience. Furthermore, the neurons were capable of integrating stimuli having physical properties differing significantly from those in the exposure set. These observations suggest that acquiring the rudiments of multisensory integration requires little more than exposure to consistent relationships between the modality-specific components of a cross-modal event, and that continued experience with such events is not necessary for their maintenance. Apparently, the statistics of cross-modal experience early in life define the spatial and temporal filters that determine whether the components of cross-modal stimuli are to be integrated or treated as independent events, a crucial developmental process that determines the spatial and temporal rules by which cross-modal stimuli are integrated to enhance both sensory salience and the likelihood of eliciting an SC-mediated motor response.
41. Pluta SR, Rowland BA, Stanford TR, Stein BE. Alterations to multisensory and unisensory integration by stimulus competition. J Neurophysiol 2011;106:3091-3101. [PMID: 21957224] [DOI: 10.1152/jn.00509.2011]
Abstract
In environments containing sensory events at competing locations, selecting a target for orienting requires prioritization of stimulus values. Although the superior colliculus (SC) is causally linked to the stimulus selection process, the manner in which SC multisensory integration operates in a competitive stimulus environment is unknown. Here we examined how the activity of visual-auditory SC neurons is affected by placement of a competing target in the opposite hemifield, a stimulus configuration that would, in principle, promote interhemispheric competition for access to downstream motor circuitry. Competitive interactions between the targets were evident in how they altered unisensory and multisensory responses of individual neurons. Responses elicited by a cross-modal stimulus (multisensory responses) proved to be substantially more resistant to competitor-induced depression than were unisensory responses (evoked by the component modality-specific stimuli). Similarly, when a cross-modal stimulus served as the competitor, it exerted considerably more depression than did its individual component stimuli, in some cases producing more depression than predicted by their linear sum. These findings suggest that multisensory integration can help resolve competition among multiple targets by enhancing orientation to the location of cross-modal events while simultaneously suppressing orientation to events at alternate locations.
Affiliation(s)
- Scott R Pluta
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston-Salem, NC 27157, USA.
42
Hirokawa J, Sadakane O, Sakata S, Bosch M, Sakurai Y, Yamamori T. Multisensory information facilitates reaction speed by enlarging activity difference between superior colliculus hemispheres in rats. PLoS One 2011; 6:e25283. [PMID: 21966481 PMCID: PMC3180293 DOI: 10.1371/journal.pone.0025283] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2011] [Accepted: 08/31/2011] [Indexed: 11/18/2022] Open
Abstract
Animals can make faster behavioral responses to multisensory stimuli than to unisensory stimuli. The superior colliculus (SC), which receives multiple inputs from different sensory modalities, is considered to be involved in the initiation of motor responses. However, the mechanism by which multisensory information facilitates motor responses is not yet understood. Here, we demonstrate that multisensory information modulates competition among SC neurons to elicit faster responses. We conducted multiunit recordings from the SC of rats performing a two-alternative spatial discrimination task using auditory and/or visual stimuli. We found that a large population of SC neurons showed direction-selective activity before the onset of movement in response to the stimuli irrespective of stimulation modality. Trial-by-trial correlation analysis showed that the premovement activity of many SC neurons increased with faster reaction speed for the contraversive movement, whereas the premovement activity of another population of neurons decreased with faster reaction speed for the ipsiversive movement. When visual and auditory stimuli were presented simultaneously, the premovement activity of a population of neurons for the contraversive movement was enhanced, whereas the premovement activity of another population of neurons for the ipsiversive movement was depressed. Unilateral inactivation of SC using muscimol prolonged reaction times of contraversive movements, but it shortened those of ipsiversive movements. These findings suggest that the difference in activity between the SC hemispheres regulates the reaction speed of motor responses, and multisensory information enlarges the activity difference resulting in faster responses.
Affiliation(s)
- Junya Hirokawa
- Division of Brain Biology, National Institute for Basic Biology, Okazaki, Japan
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, United States of America
- Osamu Sadakane
- Division of Brain Biology, National Institute for Basic Biology, Okazaki, Japan
- Shuzo Sakata
- Center for Molecular and Behavioral Neuroscience, Rutgers, The State University of New Jersey, Newark, New Jersey, United States of America
- Strathclyde Institute of Pharmacy and Biomedical Sciences, University of Strathclyde, Glasgow, United Kingdom
- Miquel Bosch
- Division of Brain Biology, National Institute for Basic Biology, Okazaki, Japan
- The Picower Institute for Learning and Memory, RIKEN-MIT Neuroscience Research Center, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Yoshio Sakurai
- Department of Psychology, Kyoto University, Kyoto, Japan
- Core Research for Evolution Science and Technology, Japan Science and Technology Agency, Kawaguchi, Japan
- Tetsuo Yamamori
- Division of Brain Biology, National Institute for Basic Biology, Okazaki, Japan
43
Cuppini C, Stein BE, Rowland BA, Magosso E, Ursino M. A computational study of multisensory maturation in the superior colliculus (SC). Exp Brain Res 2011; 213:341-9. [PMID: 21556818 DOI: 10.1007/s00221-011-2714-z] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2010] [Accepted: 04/26/2011] [Indexed: 10/18/2022]
Abstract
Multisensory neurons in cat SC exhibit significant postnatal maturation. The first multisensory neurons to appear have large receptive fields (RFs) and cannot integrate information across sensory modalities. During the first several months of postnatal life, RFs contract, responses become more robust, and neurons develop the capacity for multisensory integration. Recent data suggest that these changes depend on both sensory experience and active inputs from association cortex. Here, we extend a computational model we developed (Cuppini et al., Front Integr Neurosci 4:6, 2010) using a limited set of biologically realistic assumptions to describe how this maturational process might take place. The model assumes that during early life, cortical-SC synapses are present but not active and that responses are driven by non-cortical inputs with very large RFs. Sensory experience is modeled by a "training phase" in which the network is repeatedly exposed to modality-specific and cross-modal stimuli at different locations. Cortical-SC synaptic weights are modified during this period as a result of Hebbian rules of potentiation and depression. The result is that RFs are reduced in size and neurons become capable of responding in adult-like fashion to modality-specific and cross-modal stimuli.
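The training dynamic described in this abstract can be illustrated with a toy simulation. The Python sketch below is not the published model: the learning rate, receptive-field width, and exposure count are invented for illustration, and the network is reduced to one postsynaptic neuron with a row of spatially tuned inputs. It shows the qualitative effect of a Hebbian training phase, namely that an initially flat, weak weight profile (a very large RF) contracts to a sharp peak at the experienced stimulus location.

```python
import numpy as np

n_loc = 21                 # discretized spatial locations
w = np.full(n_loc, 0.05)   # cortical-SC synapses: present but weak at "birth"
lr = 0.1                   # learning rate (hypothetical)

def training_step(w, stim_loc, width=1.0):
    # Gaussian input activity centred on the stimulated location
    x = np.exp(-0.5 * ((np.arange(n_loc) - stim_loc) / width) ** 2)
    y = w @ x  # postsynaptic SC activity
    # Hebbian rule: potentiate synapses coactive with the response,
    # depress synapses that are inactive while the neuron fires
    return np.clip(w + lr * y * (x - w), 0.0, 1.0)

# "Training phase": repeated exposure to stimuli at one location
for _ in range(300):
    w = training_step(w, stim_loc=10)
```

After training, the weight vector `w` peaks sharply at location 10 and is near zero elsewhere, the toy analogue of RF contraction.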
Affiliation(s)
- Cristiano Cuppini
- Department of Electronics, Computer Science and Systems, University of Bologna, Bologna, Italy.
44
A normalization model of multisensory integration. Nat Neurosci 2011; 14:775-82. [PMID: 21552274 PMCID: PMC3102778 DOI: 10.1038/nn.2815] [Citation(s) in RCA: 174] [Impact Index Per Article: 13.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2010] [Accepted: 03/21/2011] [Indexed: 11/08/2022]
Abstract
Responses of neurons that integrate multiple sensory inputs are traditionally characterized in terms of a set of empirical principles. However, a simple computational framework that accounts for these empirical features of multisensory integration has not been established. We propose that divisive normalization, acting at the stage of multisensory integration, can account for many of the empirical principles of multisensory integration shown by single neurons, such as the principle of inverse effectiveness and the spatial principle. This model, which uses a simple functional operation (normalization) for which there is considerable experimental support, also accounts for the recent observation that the mathematical rule by which multisensory neurons combine their inputs changes with cue reliability. The normalization model, which makes a strong testable prediction regarding cross-modal suppression, may therefore provide a simple unifying computational account of the important features of multisensory integration by neurons.
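The divisive-normalization account lends itself to a compact numerical illustration. The sketch below is a single-neuron caricature, not the published population model: the normalization pool is collapsed to the neuron's own drive, and all parameter values are hypothetical. It nonetheless reproduces the principle of inverse effectiveness, in which weak cue pairs yield proportionally larger multisensory enhancement than strong ones.

```python
def response(v, a, alpha=1.0, sigma=1.0, n=2.0):
    """Divisive normalization sketch: the linear drive from visual (v)
    and auditory (a) inputs is divided by pooled activity plus a
    semi-saturation constant sigma (all parameters hypothetical)."""
    drive = v + alpha * a
    return drive**n / (sigma**n + drive**n)

def enhancement(v, a):
    """Multisensory enhancement (%) relative to the best unisensory response."""
    best_uni = max(response(v, 0.0), response(0.0, a))
    return 100.0 * (response(v, a) - best_uni) / best_uni

weak = enhancement(0.2, 0.2)    # weak cues: large proportional enhancement
strong = enhancement(2.0, 2.0)  # strong cues: enhancement shrinks
```

Because the denominator saturates the response at high drive, enhancement is largest exactly where the cues are least effective on their own.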
45
Cuppini C, Magosso E, Ursino M. Organization, maturation, and plasticity of multisensory integration: insights from computational modeling studies. Front Psychol 2011; 2:77. [PMID: 21687448 PMCID: PMC3110383 DOI: 10.3389/fpsyg.2011.00077] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2010] [Accepted: 04/12/2011] [Indexed: 11/15/2022] Open
Abstract
In this paper, we present two neural network models – devoted to two specific and widely investigated aspects of multisensory integration – to demonstrate the potential of computational models for gaining insight into the neural mechanisms underlying the organization, development, and plasticity of multisensory integration in the brain. The first model considers visual–auditory interaction in a midbrain structure, the superior colliculus (SC). The model reproduces and explains the main physiological features of multisensory integration in SC neurons and describes how the SC's integrative capability – not present at birth – develops gradually during postnatal life depending on sensory experience with cross-modal stimuli. The second model tackles the problem of how tactile stimuli on a body part and visual (or auditory) stimuli close to the same body part are integrated in multimodal parietal neurons to form the perception of peripersonal (i.e., near) space. The model investigates how the extension of peripersonal space – where multimodal integration occurs – may be modified by experience, such as the use of a tool to interact with far space. The utility of the modeling approach rests on several aspects: (i) the two models, although devoted to different problems and simulating different brain regions, share common mechanisms (lateral inhibition and excitation, nonlinear neuron characteristics, recurrent connections, competition, and Hebbian rules of potentiation and depression) that may govern more generally the fusion of the senses in the brain and the learning and plasticity of multisensory integration; (ii) the models may help interpret behavioral and psychophysical responses in terms of neural activity and synaptic connections; (iii) the models can make testable predictions that help guide future experiments to validate, reject, or modify their main assumptions.
Affiliation(s)
- Cristiano Cuppini
- Department of Electronics, Computer Science and Systems, University of Bologna, Bologna, Italy
46
Stein BE, Rowland BA. Organization and plasticity in multisensory integration: early and late experience affects its governing principles. PROGRESS IN BRAIN RESEARCH 2011; 191:145-63. [PMID: 21741550 DOI: 10.1016/b978-0-444-53752-2.00007-2] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Neurons in the midbrain superior colliculus (SC) have the ability to integrate information from different senses to profoundly increase their sensitivity to external events. This not only enhances an organism's ability to detect and localize these events, but also to program appropriate motor responses to them. The survival value of this process of multisensory integration is self-evident, and its physiological and behavioral manifestations have been studied extensively in adult and developing cats and monkeys. These studies have revealed that, contrary to expectations based on some developmental theories, this process is not present in the newborn's brain. The data show that it is acquired only gradually during postnatal life as a consequence of at least two factors: the maturation of cooperative interactions between association cortex and the SC, and extensive experience with cross-modal cues. Using these factors, the brain is able to craft the underlying neural circuits and the fundamental principles that govern multisensory integration so that they are adapted to the ecological circumstances in which they will be used.
Affiliation(s)
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA.
47
Sabes PN. Sensory integration for reaching: models of optimality in the context of behavior and the underlying neural circuits. PROGRESS IN BRAIN RESEARCH 2011; 191:195-209. [PMID: 21741553 DOI: 10.1016/b978-0-444-53752-2.00004-7] [Citation(s) in RCA: 49] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Although multisensory integration has been well modeled at the behavioral level, the link between these behavioral models and the underlying neural circuits is still not clear. This gap is even greater for the problem of sensory integration during movement planning and execution. The difficulty lies in applying simple models of sensory integration to the complex computations that are required for movement control and to the large networks of brain areas that perform these computations. Here I review psychophysical, computational, and physiological work on multisensory integration during movement planning, with an emphasis on goal-directed reaching. I argue that sensory transformations must play a central role in any modeling effort. In particular, the statistical properties of these transformations factor heavily into the way in which downstream signals are combined. As a result, our models of optimal integration are only expected to apply "locally," that is, independently for each brain area. I suggest that local optimality can be reconciled with globally optimal behavior if one views the collection of parietal sensorimotor areas not as a set of task-specific domains, but rather as a palette of complex, sensorimotor representations that are flexibly combined to drive downstream activity and behavior.
Affiliation(s)
- Philip N Sabes
- Department of Physiology, Keck Center for Integrative Neuroscience, University of California, San Francisco, CA, USA.
48
Cuppini C, Ursino M, Magosso E, Rowland BA, Stein BE. An emergent model of multisensory integration in superior colliculus neurons. Front Integr Neurosci 2010; 4:6. [PMID: 20431725 PMCID: PMC2861478 DOI: 10.3389/fnint.2010.00006] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2009] [Accepted: 03/03/2010] [Indexed: 11/21/2022] Open
Abstract
Neurons in the cat superior colliculus (SC) integrate information from different senses to enhance their responses to cross-modal stimuli. These multisensory SC neurons receive multiple converging unisensory inputs from many sources; those received from association cortex are critical for the manifestation of multisensory integration. The mechanisms underlying this characteristic property of SC neurons are not completely understood, but can be clarified with the use of mathematical models and computer simulations. Thus the objective of the current effort was to present a plausible model that can explain the main physiological features of multisensory integration based on the current neurological literature regarding the influences received by SC from cortical and subcortical sources. The model assumes the presence of competitive mechanisms between inputs, nonlinearities in NMDA receptor responses, and provides a priori synaptic weights to mimic the normal responses of SC neurons. As a result, it provides a basis for understanding the dependence of multisensory enhancement on an intact association cortex, and simulates the changes in the SC response that occur during NMDA receptor blockade. Finally, it makes testable predictions about why significant response differences are obtained in multisensory SC neurons when they are confronted with pairs of cross-modal and within-modal stimuli. By postulating plausible biological mechanisms to complement those that are already known, the model provides a basis for understanding how SC neurons are capable of engaging in this remarkable process.
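One of the model's assumptions, a sigmoidal (NMDA-like) input-output nonlinearity, is by itself enough to reproduce superadditive enhancement for weak cues and subadditive combination for strong ones. The sketch below is a minimal stand-in for that assumption, not the published model; the threshold and slope values are hypothetical, chosen only so that weak stimuli fall on the accelerating part of the curve.

```python
import math

def sc_output(net_input, theta=4.0, slope=1.0):
    """Sigmoidal input-output function standing in for the NMDA-mediated
    nonlinearity the model assumes (theta and slope are hypothetical)."""
    return 1.0 / (1.0 + math.exp(-(net_input - theta) / slope))

# Weak cues sit below threshold, so the combined response exceeds the
# sum of the unisensory responses (superadditivity)...
weak_multi = sc_output(1.5 + 1.5)
weak_sum = sc_output(1.5) + sc_output(1.5)

# ...while strong cues drive the sigmoid into saturation, giving
# subadditive combination.
strong_multi = sc_output(4.0 + 4.0)
strong_sum = sc_output(4.0) + sc_output(4.0)
```

The same curve therefore yields both response regimes reported for multisensory SC neurons, depending only on stimulus effectiveness.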
Affiliation(s)
- Cristiano Cuppini
- Department of Electronics, Computer Science and Systems, University of Bologna, Bologna, Italy
49
Adult plasticity in multisensory neurons: short-term experience-dependent changes in the superior colliculus. J Neurosci 2010; 29:15910-22. [PMID: 20016107 DOI: 10.1523/jneurosci.4041-09.2009] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Multisensory neurons in the superior colliculus (SC) have the capability to integrate signals that belong to the same event, despite being conveyed by different senses. They develop this capability during early life as experience is gained with the statistics of cross-modal events. These adaptations prepare the SC to deal with the cross-modal events that are likely to be encountered throughout life. Here, we found that neurons in the adult SC can also adapt to experience with sequentially ordered cross-modal (visual-auditory or auditory-visual) cues, and that they do so over short periods of time (minutes), as if adapting to a particular stimulus configuration. This short-term plasticity was evident as a rapid increase in the magnitude and duration of responses to the first stimulus, and a shortening of the latency and increase in magnitude of the responses to the second stimulus when they are presented in sequence. The result was that the two responses appeared to merge. These changes were stable in the absence of experience with competing stimulus configurations, outlasted the exposure period, and could not be induced by equivalent experience with sequential within-modal (visual-visual or auditory-auditory) stimuli. A parsimonious interpretation is that the additional SC activity provided by the second stimulus became associated with, and increased the potency of, the afferents responding to the preceding stimulus. This interpretation is consistent with the principle of spike-timing-dependent plasticity, which may provide the basic mechanism for short term or long term plasticity and be operative in both the adult and neonatal SC.
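The spike-timing-dependent interpretation offered in this abstract corresponds to the classic asymmetric STDP window. The function below is a textbook-style sketch rather than anything fitted to these data; the amplitudes and time constant are illustrative only. Afferents active shortly before the postsynaptic SC response are potentiated, and those active shortly after are depressed.

```python
import math

def stdp_dw(dt_ms, a_plus=0.010, a_minus=0.012, tau_ms=20.0):
    """Weight change for a pre-post spike interval dt_ms = t_post - t_pre.
    Pre-before-post (dt_ms > 0) potentiates; post-before-pre depresses.
    Parameter values are illustrative only."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)
```

Under such a rule, a stimulus whose afferents reliably fire just before the response to a second stimulus would be strengthened, consistent with the merging of sequential responses described above.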
50
Stein BE, Perrault TJ, Stanford TR, Rowland BA. Postnatal experiences influence how the brain integrates information from different senses. Front Integr Neurosci 2009; 3:21. [PMID: 19838323 PMCID: PMC2762369 DOI: 10.3389/neuro.07.021.2009] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2009] [Accepted: 08/11/2009] [Indexed: 11/20/2022] Open
Abstract
Sensory processing disorder (SPD) is characterized by anomalous reactions to, and integration of, sensory cues. Although the underlying etiology of SPD is unknown, one brain region likely to reflect these sensory and behavioral anomalies is the superior colliculus (SC), a structure involved in the synthesis of information from multiple sensory modalities and the control of overt orientation responses. In the present review we describe normal functional properties of this structure, the manner in which its individual neurons integrate cues from different senses, and the overt SC-mediated behaviors that are believed to manifest this “multisensory integration.” Of particular interest here is how SC neurons develop their capacity to engage in multisensory integration during early postnatal life as a consequence of early sensory experience, and the intimate communication between cortex and the midbrain that makes this developmental process possible.
Affiliation(s)
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC, USA