1. Cross-Modal Competition: The Default Computation for Multisensory Processing. J Neurosci 2018; 39:1374-1385. [PMID: 30573648] [DOI: 10.1523/jneurosci.1806-18.2018]
Abstract
Mature multisensory superior colliculus (SC) neurons integrate information across the senses to enhance their responses to spatiotemporally congruent cross-modal stimuli. The development of this neurotypic feature of SC neurons requires experience with cross-modal cues. In the absence of such experience, the response of an SC neuron to congruent cross-modal cues is no more robust than its response to the most effective component cue. This "default" or "naive" state is believed to be one in which cross-modal signals do not interact. The present results challenge this characterization by identifying interactions between visual-auditory signals in male and female cats reared without visual-auditory experience. By manipulating the relative effectiveness of the visual and auditory cross-modal cues presented to each of these naive neurons, an active competition between cross-modal signals was revealed. Although contrary to current expectations, this result is explained by a neuro-computational model in which the default interaction is mutual inhibition. These findings suggest that multisensory neurons at all maturational stages are capable of some form of multisensory integration, and use experience with cross-modal stimuli to transition from their initial state of competition to their mature state of cooperation. By doing so, they develop the ability to enhance the physiological salience of cross-modal events, thereby increasing their impact on the sensorimotor circuitry of the SC and the likelihood that biologically significant events will elicit SC-mediated overt behaviors.

SIGNIFICANCE STATEMENT The present results demonstrate that the default mode of multisensory processing in the superior colliculus is competition, not non-integration as previously characterized.
A neuro-computational model explains how these competitive dynamics can be implemented via mutual inhibition, and how this default mode is superseded by the emergence of cooperative interactions during development.
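The competitive default state described in this abstract can be sketched numerically. The toy model below is an illustrative assumption, not the paper's actual model: each cross-modal input suppresses the other through a hypothetical inhibitory weight `w_inhib`, so the combined response never exceeds the stronger unisensory response.

```python
def sc_response(v_drive, a_drive, w_inhib=0.5):
    """Naive-state SC neuron sketch: mutual inhibition between cross-modal
    inputs. v_drive/a_drive are visual and auditory input strengths;
    w_inhib is a hypothetical coupling weight (illustrative value)."""
    v_net = max(v_drive - w_inhib * a_drive, 0.0)
    a_net = max(a_drive - w_inhib * v_drive, 0.0)
    return v_net + a_net

# Varying relative cue effectiveness reveals the competition: the
# multisensory response never exceeds the best unisensory response.
for v, a in [(1.0, 0.2), (1.0, 1.0), (0.3, 0.9)]:
    print(f"V={v} A={a}  best-unisensory={max(v, a):.2f}  "
          f"multisensory={sc_response(v, a):.2f}")
```

With equal inputs the combined response matches (but does not exceed) the best unisensory response, consistent with the abstract's description of the naive state.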
2. Development of the Mechanisms Governing Midbrain Multisensory Integration. J Neurosci 2018; 38:3453-3465. [PMID: 29496891] [DOI: 10.1523/jneurosci.2631-17.2018]
Abstract
The ability to integrate information across multiple senses enhances the brain's ability to detect, localize, and identify external events. This process has been well documented in single neurons in the superior colliculus (SC), which synthesize concordant combinations of visual, auditory, and/or somatosensory signals to enhance the vigor of their responses. This increases the physiological salience of crossmodal events and, in turn, the speed and accuracy of SC-mediated behavioral responses to them. However, this capability is not an innate feature of the circuit and only develops postnatally after the animal acquires sufficient experience with covariant crossmodal events to form links between their modality-specific components. Of critical importance in this process are tectopetal influences from association cortex. Recent findings suggest that, despite its intuitive appeal, a simple generic associative rule cannot explain how this circuit develops its ability to integrate those crossmodal inputs to produce enhanced multisensory responses. The present neurocomputational model explains how this development can be understood as a transition from a default state in which crossmodal SC inputs interact competitively to one in which they interact cooperatively. Crucial to this transition is the operation of a learning rule requiring coactivation among tectopetal afferents for engagement. The model successfully replicates findings of multisensory development in normal cats and cats of either sex reared with special experience. In doing so, it explains how the cortico-SC projections can use crossmodal experience to craft the multisensory integration capabilities of the SC and adapt them to the environment in which they will be used.

SIGNIFICANCE STATEMENT The brain's remarkable ability to integrate information across the senses is not present at birth, but typically develops in early life as experience with crossmodal cues is acquired.
Recent empirical findings suggest that the mechanisms supporting this development must be more complex than previously believed. The present work integrates these data with what is already known about the underlying circuit in the midbrain to create and test a mechanistic model of multisensory development. This model represents a novel and comprehensive framework that explains how midbrain circuits acquire multisensory experience and reveals how disruptions in this neurotypic developmental trajectory yield divergent outcomes that will affect the multisensory processing capabilities of the mature brain.
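The coactivation requirement described in this abstract can be sketched as a gated learning rule. The code below is a hypothetical illustration (constants and the decay term are assumptions, not taken from the model): cortico-SC weights potentiate only when tectopetal afferents from both modalities fire together with the SC neuron, and otherwise slowly depress.

```python
def update_weights(w, pre_v, pre_a, post, lr=0.1, decay=0.02, w_max=1.0):
    """Hypothetical coactivation-gated rule: potentiate both cortico-SC
    weights only when visual AND auditory afferents are active together
    with the postsynaptic SC neuron; otherwise depress toward zero.
    All constants are illustrative."""
    if pre_v > 0 and pre_a > 0 and post > 0:
        return (min(w[0] + lr * pre_v * post, w_max),
                min(w[1] + lr * pre_a * post, w_max))
    return (w[0] * (1 - decay), w[1] * (1 - decay))

# Repeated cross-modal exposure drives both weights toward saturation,
# modeling the transition from competition to cooperation; unisensory
# exposure alone would let them decay toward the naive state.
w = (0.1, 0.1)
for _ in range(50):
    w = update_weights(w, pre_v=1.0, pre_a=1.0, post=1.0)
print(w)
```

The gate (requiring both afferents plus the postsynaptic neuron) is what distinguishes this from a simple generic associative rule, matching the abstract's claim.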
3. Veale R, Hafed ZM, Yoshida M. How is visual salience computed in the brain? Insights from behaviour, neurobiology and modelling. Philos Trans R Soc Lond B Biol Sci 2017; 372:20160113. [PMID: 28044023] [PMCID: PMC5206280] [DOI: 10.1098/rstb.2016.0113]
Abstract
Inherent in visual scene analysis is a bottleneck associated with the need to sequentially sample locations with foveating eye movements. The concept of a 'saliency map' topographically encoding stimulus conspicuity over the visual scene has proven to be an efficient predictor of eye movements. Our work reviews insights into the neurobiological implementation of visual salience computation. We start by summarizing the role that different visual brain areas play in salience computation, whether at the level of feature analysis for bottom-up salience or at the level of goal-directed priority maps for output behaviour. We then delve into how a subcortical structure, the superior colliculus (SC), participates in salience computation. The SC represents a visual saliency map via a centre-surround inhibition mechanism in the superficial layers, which feeds into priority selection mechanisms in the deeper layers, thereby affecting saccadic and microsaccadic eye movements. Lateral interactions in the local SC circuit are particularly important for controlling active populations of neurons. This, in turn, might help explain long-range effects, such as those of peripheral cues on tiny microsaccades. Finally, we show how a combination of in vitro neurophysiology and large-scale computational modelling is able to clarify how salience computation is implemented in the local circuit of the SC.

This article is part of the themed issue 'Auditory and visual scene analysis'.
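The centre-surround inhibition mechanism this review attributes to the superficial SC is commonly modelled as a difference-of-Gaussians filter. A minimal one-dimensional sketch, with purely illustrative widths (`sigma_c`, `sigma_s` are assumptions, not fitted values):

```python
import numpy as np

def center_surround(feature_map, sigma_c=1.0, sigma_s=3.0):
    """Toy saliency computation: convolve a feature map with a
    difference-of-Gaussians kernel (narrow excitatory centre, broad
    inhibitory surround), mimicking lateral interactions in the SC."""
    x = np.arange(-10, 11)
    g = lambda s: np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
    dog = g(sigma_c) - g(sigma_s)
    return np.convolve(feature_map, dog, mode="same")

# An isolated stimulus is more conspicuous than one embedded among
# similar neighbours, which suppress each other via the surround.
scene = np.zeros(50)
scene[10] = 1.0               # isolated item
scene[[30, 33, 36]] = 1.0     # crowded items
sal = center_surround(scene)
print(sal[10], sal[30])
```

The isolated item at position 10 ends up with higher salience than the crowded item at position 30, the classic pop-out effect such maps are meant to capture.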
Affiliation(s)
- Richard Veale
- Department of System Neuroscience, National Institute for Physiological Sciences, Okazaki, Japan
- Ziad M Hafed
- Physiology of Active Vision Laboratory, Werner Reichardt Centre for Integrative Neuroscience, University of Tuebingen, Tuebingen, Germany
| | - Masatoshi Yoshida
- Department of System Neuroscience, National Institute for Physiological Sciences, Okazaki, Japan
- School of Life Science, The Graduate University for Advanced Studies, Hayama, Japan
4. Magosso E, Bertini C, Cuppini C, Ursino M. Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction. Neuropsychologia 2016; 91:120-140. [DOI: 10.1016/j.neuropsychologia.2016.07.015]
5. Kardamakis AA, Pérez-Fernández J, Grillner S. Spatiotemporal interplay between multisensory excitation and recruited inhibition in the lamprey optic tectum. eLife 2016; 5. [PMID: 27635636] [PMCID: PMC5026466] [DOI: 10.7554/elife.16472]
Abstract
Animals integrate the different senses to facilitate event-detection for navigation in their environment. In vertebrates, the optic tectum (superior colliculus) commands gaze shifts by synaptic integration of different sensory modalities. Recent work suggests that the tectum can elaborate gaze reorientation commands on its own, rather than merely acting as a relay from upstream/forebrain circuits to downstream premotor centers. We show that tectal circuits can perform multisensory computations independently and, hence, configure final motor commands. Single tectal neurons receive converging visual and electrosensory inputs, as investigated in the lamprey - a phylogenetically conserved vertebrate. When these two sensory inputs overlap in space and time, response enhancement of output neurons occurs locally in the tectum, whereas surrounding areas and temporally misaligned inputs are inhibited. Retinal and electrosensory afferents elicit local monosynaptic excitation, quickly followed by inhibition via recruitment of GABAergic interneurons. Multisensory inputs can thus regulate event-detection within the tectum through local inhibition without forebrain control. [DOI: 10.7554/eLife.16472.001]

Many events occur around us simultaneously, which we detect through our senses. A critical task is to decide which of these events is the most important to look at at any given moment. This problem is solved by an ancient area of the brain called the optic tectum (known as the superior colliculus in mammals). The different senses are represented as superimposed maps in the optic tectum, and events that occur in different locations activate different areas of the map. Neurons in the optic tectum combine the responses from different senses to direct the animal's attention and increase how reliably important events are detected. If an event is simultaneously registered by two senses, then certain neurons in the optic tectum will enhance their activity. By contrast, if two senses provide conflicting information about how different events progress, then these same neurons will be silenced.

While this phenomenon of 'multisensory integration' is well described, little is known about how the optic tectum performs it. Kardamakis, Pérez-Fernández and Grillner have now studied multisensory integration in fish called lampreys, which belong to the oldest group of backboned animals. These fish can navigate using electroreception - the ability to detect electrical signals from the environment. Experiments that examined the connections between neurons in the optic tectum and monitored their activity revealed a neural circuit consisting of two types of neurons: inhibitory interneurons, and projecting neurons that connect the optic tectum to different motor centers in the brainstem. The circuit contains neurons that receive inputs from both vision and electroreception when these senses are activated from the same point in space. Incoming signals from the two senses activate the areas on the sensory maps that correspond to the location where the event occurred. This triggers the activity of the interneurons, which immediately send 'stop' signals: while an area of the sensory map and its output neurons are activated, the surrounding areas of the tectum are inhibited.

Overall, these findings suggest that the optic tectum can direct attention to a particular event without requiring input from other brain areas, an ability that has most likely been preserved throughout evolution. Future studies will aim to determine how the commands generated by the optic tectum circuit are translated into movements. [DOI: 10.7554/eLife.16472.002]
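The excitation-then-recruited-inhibition motif this abstract describes can be caricatured with a temporal coincidence rule. Everything below is an illustrative assumption (the `window`, `exc`, and `inh` values are invented): inputs arriving within the inhibitory recruitment window sum before inhibition clamps the neuron, while misaligned inputs each arrive against already-recruited inhibition.

```python
def tectal_response(t_visual, t_electro, exc=1.0, inh=0.7, window=5.0):
    """Toy spatiotemporal-alignment rule: each afferent volley gives fast
    monosynaptic excitation, then recruits GABAergic inhibition after
    `window` ms (all constants illustrative)."""
    if abs(t_visual - t_electro) <= window:
        return 2 * exc - inh          # aligned: excitation sums, one inhibition
    return 2 * max(exc - inh, 0.0)    # misaligned: each volley faces inhibition

aligned = tectal_response(0.0, 3.0)
misaligned = tectal_response(0.0, 40.0)
print(aligned, misaligned)
```

Aligned inputs yield an enhanced response while temporally misaligned ones are suppressed, qualitatively matching the enhancement/inhibition pattern reported for lamprey tectal output neurons.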
Affiliation(s)
- Sten Grillner
- Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
6. Stein BE, Stanford TR, Rowland BA. Development of multisensory integration from the perspective of the individual neuron. Nat Rev Neurosci 2014; 15:520-35. [PMID: 25158358] [DOI: 10.1038/nrn3742]
Abstract
The ability to use cues from multiple senses in concert is a fundamental aspect of brain function. It maximizes the brain’s use of the information available to it at any given moment and enhances the physiological salience of external events. Because each sense conveys a unique perspective of the external world, synthesizing information across senses affords computational benefits that cannot otherwise be achieved. Multisensory integration not only has substantial survival value but can also create unique experiences that emerge when signals from different sensory channels are bound together. However, neurons in a newborn’s brain are not capable of multisensory integration, and studies in the midbrain have shown that the development of this process is not predetermined. Rather, its emergence and maturation critically depend on cross-modal experiences that alter the underlying neural circuit in such a way that optimizes multisensory integrative capabilities for the environment in which the animal will function.
8. Casey MC, Sowden PT. Modeling learned categorical perception in human vision. Neural Netw 2012; 33:114-26. [PMID: 22622262] [DOI: 10.1016/j.neunet.2012.05.001]
Abstract
A long-standing debate in cognitive neuroscience has been the extent to which perceptual processing is influenced by prior knowledge and experience with a task. A converging body of evidence now supports the view that a task does influence perceptual processing, leaving us with the challenge of understanding the locus of, and mechanisms underpinning, these influences. An exemplar of this influence is learned categorical perception (CP), in which there is superior perceptual discrimination of stimuli that are placed in different categories. Psychophysical experiments on humans have attempted to determine whether early cortical stages of visual analysis change as a result of learning a categorization task. However, while some results indicate that changes in visual analysis occur, the extent to which earlier stages of processing are changed is still unclear. To explore this issue, we develop a biologically motivated neural model of hierarchical vision processes consisting of a number of interconnected modules representing key stages of visual analysis, with each module learning to exhibit desired local properties through competition. With this system-level model, we evaluate whether a CP effect can be generated with task influence to only the later stages of visual analysis. Our model demonstrates that task learning in just the later stages is sufficient for the model to exhibit the CP effect, demonstrating the existence of a mechanism that requires only a high level of task influence. However, the effect generalizes more widely than is found with human participants, suggesting that changes to earlier stages of analysis may also be involved in the human CP effect, even if these are not fundamental to the development of CP. The model prompts a hybrid account of task-based influences on perception that involves both modifications to the use of the outputs from early perceptual analysis along with the possibility of changes to the nature of that early analysis itself.
9. Mathews Z, i Badia SB, Verschure PF. PASAR: An integrated model of prediction, anticipation, sensation, attention and response for artificial sensorimotor systems. Inf Sci (N Y) 2012. [DOI: 10.1016/j.ins.2011.09.042]
11. Cuppini C, Stein BE, Rowland BA, Magosso E, Ursino M. A computational study of multisensory maturation in the superior colliculus (SC). Exp Brain Res 2011; 213:341-9. [PMID: 21556818] [DOI: 10.1007/s00221-011-2714-z]
Abstract
Multisensory neurons in cat SC exhibit significant postnatal maturation. The first multisensory neurons to appear have large receptive fields (RFs) and cannot integrate information across sensory modalities. During the first several months of postnatal life RFs contract, responses become more robust and neurons develop the capacity for multisensory integration. Recent data suggest that these changes depend on both sensory experience and active inputs from association cortex. Here, we extend a computational model we developed (Cuppini et al., Front Integr Neurosci 2010; 4:6) using a limited set of biologically realistic assumptions to describe how this maturational process might take place. The model assumes that during early life, cortical-SC synapses are present but not active and that responses are driven by non-cortical inputs with very large RFs. Sensory experience is modeled by a "training phase" in which the network is repeatedly exposed to modality-specific and cross-modal stimuli at different locations. Cortical-SC synaptic weights are modified during this period as a result of Hebbian rules of potentiation and depression. The result is that RFs are reduced in size and neurons become capable of responding in adult-like fashion to modality-specific and cross-modal stimuli.
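The Hebbian potentiation/depression mechanism this abstract describes qualitatively can be sketched over a one-dimensional map of cortical-SC synapses. All constants (`lr`, `ltd`, `sigma`, the initial weights) are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def train(weights, stim_loc, lr=0.2, ltd=0.05, sigma=1.0):
    """One exposure: Hebbian potentiation of synapses near the stimulated
    location and depression of inactive ones; weights are clipped to
    [0, 1]. Constants are illustrative."""
    x = np.arange(len(weights))
    pre = np.exp(-(x - stim_loc) ** 2 / (2 * sigma**2))  # presynaptic drive
    return np.clip(weights + lr * pre - ltd * (1 - pre), 0.0, 1.0)

# Start with broad, weak weights (the large infant RF); repeated exposure
# to stimuli at location 10 contracts the effective RF around it.
w = np.full(21, 0.3)
for _ in range(30):
    w = train(w, stim_loc=10)
print(int((w > 0.5).sum()), w.max())   # RF width shrinks, center saturates
```

After training, only the few synapses around the trained location survive above threshold, reproducing the RF contraction the model is built to explain.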
Affiliation(s)
- Cristiano Cuppini
- Department of Electronics, Computer Science and Systems, University of Bologna, Bologna, Italy.
12. Bolognini N, Olgiati E, Rossetti A, Maravita A. Enhancing multisensory spatial orienting by brain polarization of the parietal cortex. Eur J Neurosci 2010; 31:1800-6. [PMID: 20584184] [DOI: 10.1111/j.1460-9568.2010.07211.x]
Abstract
Transcranial direct current stimulation (tDCS) is a noninvasive brain stimulation technique that induces polarity-specific excitability changes in the human brain, therefore altering physiological, perceptual and higher-order cognitive processes. Here we investigated the possibility of enhancing attentional orienting within and across different sensory modalities, namely visual and auditory, by polarization of the posterior parietal cortex (PPC), given the putative involvement of this area in both unisensory and multisensory spatial processing. In different experiments, we applied anodal or sham tDCS to the right PPC and, for control, anodal stimulation of the right occipital cortex. Using a redundant signal effect (RSE) task, we found that anodal tDCS over the right PPC significantly speeded up responses to contralateral targets, regardless of the stimulus modality. Furthermore, the effect was dependent on the nature of the audiovisual enhancement, being stronger when subserved by a probabilistic mechanism induced by blue visual stimuli, which probably involves processing in the PPC. Hence, up-regulating the level of excitability in the PPC by tDCS appears a successful approach for enhancing spatial orienting to unisensory and crossmodal stimuli. Moreover, audiovisual interactions mostly occurring at a cortical level can be selectively enhanced by anodal PPC tDCS, whereas multisensory integration of stimuli, which is also largely mediated at a subcortical level, appears less susceptible to polarization of the cortex.
Affiliation(s)
- Nadia Bolognini
- Department of Psychology, University of Milano-Bicocca, Viale dell'Innovazione 10, 20126 Milano, Italy.
13. Stein BE, Burr D, Constantinidis C, Laurienti PJ, Alex Meredith M, Perrault TJ, Ramachandran R, Röder B, Rowland BA, Sathian K, Schroeder CE, Shams L, Stanford TR, Wallace MT, Yu L, Lewkowicz DJ. Semantic confusion regarding the development of multisensory integration: a practical solution. Eur J Neurosci 2010; 31:1713-20. [PMID: 20584174] [PMCID: PMC3055172] [DOI: 10.1111/j.1460-9568.2010.07206.x]
Abstract
There is now a good deal of data from neurophysiological studies in animals and behavioral studies in human infants regarding the development of multisensory processing capabilities. Although the conclusions drawn from these different datasets sometimes appear to conflict, many of the differences are due to the use of different terms to mean the same thing and, more problematic, the use of similar terms to mean different things. Semantic issues are pervasive in the field and complicate communication among groups using different methods to study similar issues. Achieving clarity of communication among different investigative groups is essential for each to make full use of the findings of others, and an important step in this direction is to identify areas of semantic confusion. In this way investigators can be encouraged to use terms whose meaning and underlying assumptions are unambiguous because they are commonly accepted. Although this issue is of obvious importance to the large and very rapidly growing number of researchers working on multisensory processes, it is perhaps even more important to the non-cognoscenti. Those who wish to benefit from the scholarship in this field but are unfamiliar with the issues identified here are most likely to be confused by semantic inconsistencies. The current discussion attempts to document some of the more problematic of these, begin a discussion about the nature of the confusion and suggest some possible solutions.
Affiliation(s)
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC 27157-1010, USA.
14. Cuppini C, Ursino M, Magosso E, Rowland BA, Stein BE. An emergent model of multisensory integration in superior colliculus neurons. Front Integr Neurosci 2010; 4:6. [PMID: 20431725] [PMCID: PMC2861478] [DOI: 10.3389/fnint.2010.00006]
Abstract
Neurons in the cat superior colliculus (SC) integrate information from different senses to enhance their responses to cross-modal stimuli. These multisensory SC neurons receive multiple converging unisensory inputs from many sources; those received from association cortex are critical for the manifestation of multisensory integration. The mechanisms underlying this characteristic property of SC neurons are not completely understood, but can be clarified with the use of mathematical models and computer simulations. Thus the objective of the current effort was to present a plausible model that can explain the main physiological features of multisensory integration based on the current neurological literature regarding the influences received by SC from cortical and subcortical sources. The model assumes the presence of competitive mechanisms between inputs, nonlinearities in NMDA receptor responses, and provides a priori synaptic weights to mimic the normal responses of SC neurons. As a result, it provides a basis for understanding the dependence of multisensory enhancement on an intact association cortex, and simulates the changes in the SC response that occur during NMDA receptor blockade. Finally, it makes testable predictions about why significant response differences are obtained in multisensory SC neurons when they are confronted with pairs of cross-modal and within-modal stimuli. By postulating plausible biological mechanisms to complement those that are already known, the model provides a basis for understanding how SC neurons are capable of engaging in this remarkable process.
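The abstract's two key ingredients, a steep NMDA-like nonlinearity and the dependence of enhancement on association cortex, can be caricatured in a few lines. This is a sketch under stated assumptions (the sigmoid, its `gain` and `theta`, and the cortex-off readout rule are all illustrative, not the model's equations):

```python
import math

def sc_output(v, a, cortex_on=True, gain=6.0, theta=0.5):
    """Sketch: with association cortex intact, cross-modal inputs pool
    before a steep NMDA-like sigmoid; with cortex deactivated, each
    input is read out separately, so no cross-modal enhancement can
    emerge. All constants are illustrative."""
    f = lambda d: 1.0 / (1.0 + math.exp(-gain * (d - theta)))
    if cortex_on:
        return f(v + a)            # pooled drive -> superadditive response
    return max(f(v), f(a))         # no pooling -> best unisensory response

weak_v, weak_a = 0.3, 0.3
enhanced = sc_output(weak_v, weak_a)
summed = sc_output(weak_v, 0.0) + sc_output(0.0, weak_a)
print(enhanced > summed)   # prints True: superadditive enhancement
```

Because the sigmoid is steepest near threshold, two weakly effective inputs combined evoke a far-greater-than-additive response, mirroring the model's account of why enhancement is largest for weak stimuli and vanishes when cortical input is removed.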
Affiliation(s)
- Cristiano Cuppini
- Department of Electronics, Computer Science and Systems, University of Bologna Bologna, Italy
15. Adult plasticity in multisensory neurons: short-term experience-dependent changes in the superior colliculus. J Neurosci 2010; 29:15910-22. [PMID: 20016107] [DOI: 10.1523/jneurosci.4041-09.2009]
Abstract
Multisensory neurons in the superior colliculus (SC) have the capability to integrate signals that belong to the same event, despite being conveyed by different senses. They develop this capability during early life as experience is gained with the statistics of cross-modal events. These adaptations prepare the SC to deal with the cross-modal events that are likely to be encountered throughout life. Here, we found that neurons in the adult SC can also adapt to experience with sequentially ordered cross-modal (visual-auditory or auditory-visual) cues, and that they do so over short periods of time (minutes), as if adapting to a particular stimulus configuration. This short-term plasticity was evident as a rapid increase in the magnitude and duration of responses to the first stimulus, and a shortening of the latency and increase in magnitude of the responses to the second stimulus when the two were presented in sequence. The result was that the two responses appeared to merge. These changes were stable in the absence of experience with competing stimulus configurations, outlasted the exposure period, and could not be induced by equivalent experience with sequential within-modal (visual-visual or auditory-auditory) stimuli. A parsimonious interpretation is that the additional SC activity provided by the second stimulus became associated with, and increased the potency of, the afferents responding to the preceding stimulus. This interpretation is consistent with the principle of spike-timing-dependent plasticity, which may provide the basic mechanism for short-term or long-term plasticity and be operative in both the adult and neonatal SC.
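The spike-timing-dependent plasticity (STDP) interpretation offered at the end of this abstract can be sketched with the classic exponential STDP window. The window shape is standard, but every constant below (`a_plus`, `a_minus`, `tau`, the 10 ms lag, the starting weight) is an illustrative assumption:

```python
import math

def stdp(dt_ms, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Classic exponential STDP window: pre-before-post (dt_ms > 0)
    potentiates; post-before-pre depresses. Constants illustrative."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau)
    return -a_minus * math.exp(dt_ms / tau)

# Sequential exposure: SC spikes evoked by the second stimulus repeatedly
# follow spikes in the afferents carrying the first stimulus, so those
# synapses potentiate and the response to the first stimulus grows,
# qualitatively matching the observed merging of the two responses.
w_first = 0.5
for _ in range(100):           # 100 paired exposures, post lags pre by 10 ms
    w_first += stdp(10.0)
print(round(w_first, 2))
```

The same rule predicts that reversed or within-modal pairings, which do not produce the consistent pre-before-post ordering, would fail to strengthen these afferents, consistent with the controls reported here.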
16. Barutchu A, Danaher J, Crewther SG, Innes-Brown H, Shivdasani MN, Paolini AG. Audiovisual integration in noise by children and adults. J Exp Child Psychol 2010; 105:38-50. [DOI: 10.1016/j.jecp.2009.08.005]
17. Stein BE, Perrault TJ, Stanford TR, Rowland BA. Postnatal experiences influence how the brain integrates information from different senses. Front Integr Neurosci 2009; 3:21. [PMID: 19838323] [PMCID: PMC2762369] [DOI: 10.3389/neuro.07.021.2009]
Abstract
Sensory processing disorder (SPD) is characterized by anomalous reactions to, and integration of, sensory cues. Although the underlying etiology of SPD is unknown, one brain region likely to reflect these sensory and behavioral anomalies is the superior colliculus (SC), a structure involved in the synthesis of information from multiple sensory modalities and the control of overt orientation responses. In the present review we describe normal functional properties of this structure, the manner in which its individual neurons integrate cues from different senses, and the overt SC-mediated behaviors that are believed to manifest this “multisensory integration.” Of particular interest here is how SC neurons develop their capacity to engage in multisensory integration during early postnatal life as a consequence of early sensory experience, and the intimate communication between cortex and the midbrain that makes this developmental process possible.
Affiliation(s)
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine Winston-Salem, NC, USA
18. Multisensory integration in the superior colliculus requires synergy among corticocollicular inputs. J Neurosci 2009; 29:6580-92. [PMID: 19458228] [DOI: 10.1523/jneurosci.0525-09.2009]
Abstract
Influences from the visual (AEV), auditory (FAES), and somatosensory (SIV) divisions of the cat anterior ectosylvian sulcus (AES) play a critical role in rendering superior colliculus (SC) neurons capable of multisensory integration. However, it is not known whether this is accomplished via their independent sensory-specific action or via some cross-modal cooperative action that emerges as a consequence of their convergence on SC neurons. Using visual-auditory SC neurons as a model, we examined how selective and combined deactivation of FAES and AEV affected SC multisensory (visual-auditory) and unisensory (visual-visual) integration capabilities. As noted earlier, multisensory integration yielded SC responses that were significantly greater than those evoked by the most effective individual component stimulus. This multisensory "response enhancement" was more evident when the component stimuli were weakly effective. Conversely, unisensory integration was dominated by the lack of response enhancement. During cryogenic deactivation of FAES and/or AEV, the unisensory responses of SC neurons were only modestly affected; however, their multisensory response enhancement showed a significant downward shift and was eliminated. The shift was similar in magnitude for deactivation of either AES subregion and, in general, only marginally greater when both were deactivated simultaneously. These data reveal that SC multisensory integration is dependent on the cooperative action of distinct subsets of unisensory corticofugal afferents, afferents whose sensory combination matches the multisensory profile of their midbrain target neurons, and whose functional synergy is specific to rendering SC neurons capable of synthesizing information from those particular senses.
19
Bolognini N, Miniussi C, Savazzi S, Bricolo E, Maravita A. TMS modulation of visual and auditory processing in the posterior parietal cortex. Exp Brain Res 2009; 195:509-17. [DOI: 10.1007/s00221-009-1820-7]
|
20
|
Elliott T, Kuang X, Shadbolt NR, Zauner KP. Adaptation in multisensory neurons: impact on cross-modal enhancement. Network (Bristol, England) 2009; 20:1-31. [PMID: 19229731 DOI: 10.1080/09548980902751752]
Abstract
Adaptation is a ubiquitous property of sensory neurons. Multisensory neurons, receiving convergent input from different sensory modalities, also likely exhibit adaptation. The responses of multisensory superior colliculus neurons have been extensively studied, but the impact of adaptation on these responses has not been examined. Multisensory neurons in the superior colliculus exhibit cross-modal enhancement, an often non-linear and non-additive increase in response when a stimulus in one modality is paired with a stimulus in a different modality. We examine the possible impact of adaptation on cross-modal enhancement within the framework of a simple model of adaptation for a neuron employing a saturating, logistic response function. We consider how adaptation to an input's mean and standard deviation affects cross-modal enhancement, and also how the statistical correlations between two different modalities influence cross-modal enhancement. We determine the optimal bimodal stimuli to present to a bimodal neuron that evoke the largest changes in cross-modal enhancement under adaptation to input statistics. The model requires separate gains for each modality, unless the statistics specific to each modality have been standardised by prior adaptation in earlier, unisensory neurons. The model also predicts that increasing the correlation coefficient between two modalities reduces a multisensory neuron's overall gain.
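As a rough illustration of the model's ingredients, the sketch below pairs a saturating logistic response function (with a per-modality gain, as the abstract notes the model requires) with the conventional cross-modal enhancement index used in this literature, (CM − max(V, A)) / max(V, A) × 100. All function names and parameter values here are our own assumptions, not taken from the paper:

```python
import math

def logistic_response(drive, gain=1.0, threshold=1.0, r_max=10.0):
    # Saturating logistic response function; under adaptation, gain and
    # threshold would be adjusted to the input's mean and standard deviation.
    # All parameter values are illustrative assumptions.
    return r_max / (1.0 + math.exp(-gain * (drive - threshold)))

def enhancement(cm, v, a):
    # Conventional cross-modal enhancement index: percent increase of the
    # combined response (cm) over the best unisensory response.
    best = max(v, a)
    return 100.0 * (cm - best) / best

rv = logistic_response(1.0)         # visual drive alone
ra = logistic_response(0.8)         # auditory drive alone
rva = logistic_response(1.0 + 0.8)  # summed bimodal drive
print(round(enhancement(rva, rv, ra), 1))  # → 38.0
```

Because the response function saturates, the index computed this way shrinks as the component drives grow, consistent with enhancement being most evident for weakly effective stimuli.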
Affiliation(s)
- Terry Elliott
- Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton, SO17 1BJ, UK.
21
Abstract
This chapter reviews several highly convergent behavioral findings that provide strong evidence for the existence of multimodal integration systems subserving spatial representation in humans. These systems generally function through the multisensory coding of visuoauditory and visuotactile events but vary in their specific functional and anatomical characteristics. The chapter will also consider the adaptive advantages of multisensory integration systems; these systems might modulate the level of activation in cortical areas in short- and long-term ways, thereby providing a mechanism for permanent recovery from sensory and spatial deficits.
Affiliation(s)
- Elisabetta Làdavas
- Dipartimento di Psicologia, Università di Bologna, 40127 Bologna, Italy.
22
Alvarado JC, Rowland BA, Stanford TR, Stein BE. A neural network model of multisensory integration also accounts for unisensory integration in superior colliculus. Brain Res 2008; 1242:13-23. [PMID: 18486113 DOI: 10.1016/j.brainres.2008.03.074]
Abstract
Sensory integration is a characteristic feature of superior colliculus (SC) neurons. A recent neural network model of single-neuron integration derived a set of basic biological constraints sufficient to replicate a number of physiological findings pertaining to multisensory responses. The present study examined the accuracy of this model in predicting the responses of SC neurons to pairs of visual stimuli placed within their receptive fields. The accuracy of this model was compared to that of three other computational models (additive, averaging and maximum operator) previously used to fit these data. Each neuron's behavior was assessed by examining its mean responses to the component stimuli individually and together, and each model's performance was assessed to determine how close its prediction came to the actual mean response of each neuron and the magnitude of its predicted residual error. Predictions from the additive model significantly overshot the actual responses of SC neurons and predictions from the averaging model significantly undershot them. Only the predictions of the maximum operator and neural network model were not significantly different from the actual responses. However, the neural network model outperformed even the maximum operator model in predicting the responses of these neurons. The neural network model is derived from a larger model that also has substantial predictive power in multisensory integration, and provides a single computational vehicle for assessing the responses of SC neurons to different combinations of cross-modal and within-modal stimuli of different efficacies.
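The three descriptive models this study compares against the neural network model reduce to simple rules for predicting a combined response from the two component responses. A minimal sketch, with hypothetical response values chosen purely for illustration:

```python
def additive(v, a):
    # Additive model: combined response is the sum of the component
    # responses; the study found its predictions overshot SC responses.
    return v + a

def averaging(v, a):
    # Averaging model: the mean of the component responses;
    # its predictions undershot SC responses.
    return (v + a) / 2.0

def maximum_operator(v, a):
    # Maximum-operator model: the combined response equals the
    # response to the more effective component stimulus alone.
    return max(v, a)

# Hypothetical mean responses (impulses/trial) to each component stimulus.
v, a = 6.0, 4.0
print(additive(v, a))          # 10.0
print(averaging(v, a))         # 5.0
print(maximum_operator(v, a))  # 6.0
```

Each rule yields a point prediction per neuron, which can then be scored against the observed mean combined response via its residual error, as the study describes.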
Affiliation(s)
- Juan Carlos Alvarado
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC 27157, USA.
23
Stein BE, Stanford TR. Multisensory integration: current issues from the perspective of the single neuron. Nat Rev Neurosci 2008; 9:255-66. [PMID: 18354398 DOI: 10.1038/nrn2331]
Abstract
For thousands of years science philosophers have been impressed by how effectively the senses work together to enhance the salience of biologically meaningful events. However, they really had no idea how this was accomplished. Recent insights into the underlying physiological mechanisms reveal that, in at least one circuit, this ability depends on an intimate dialogue among neurons at multiple levels of the neuraxis; this dialogue cannot take place until long after birth and might require a specific kind of experience. Understanding the acquisition and usage of multisensory integration in the midbrain and cerebral cortex of mammals has been aided by a multiplicity of approaches. Here we examine some of the fundamental advances that have been made and some of the challenging questions that remain.
Affiliation(s)
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina 27157, USA.
24
Nodal FR, Bajo VM, Parsons CH, Schnupp JW, King AJ. Sound localization behavior in ferrets: comparison of acoustic orientation and approach-to-target responses. Neuroscience 2007; 154:397-408. [PMID: 18281159 DOI: 10.1016/j.neuroscience.2007.12.022]
Abstract
Auditory localization experiments typically either require subjects to judge the location of a sound source from a discrete set of response alternatives or involve measurements of the accuracy of orienting responses made toward the source location. To compare the results obtained by both methods, we trained ferrets by positive conditioning to stand on a platform at the center of a circular arena prior to stimulus presentation and then approach the source of a broadband noise burst delivered from 1 of 12 loudspeakers arranged at 30 degrees intervals in the horizontal plane. Animals were rewarded for making a correct choice. We also obtained a non-categorized measure of localization accuracy by recording head-orienting movements made during the first second following stimulus onset. The accuracy of the approach-to-target responses declined as the stimulus duration was reduced, particularly for lateral and posterior locations, although responses to sounds presented in the frontal region of space and directly behind the animal remained quite accurate. Head movements had a latency of approximately 200 ms and varied systematically in amplitude with stimulus direction. However, the final head bearing progressively undershot the target with increasing eccentricity and rarely exceeded 60 degrees to each side of the midline. In contrast to the approach-to-target responses, the accuracy of the head orienting responses did not change much with stimulus duration, suggesting that the improvement in percent correct scores with longer stimuli was due, at least in part, to re-sampling of the acoustical stimulus after the initial head turn had been made. Nevertheless, for incorrect trials, head orienting responses were more closely correlated with the direction approached by the animals than with the actual target direction, implying that at least part of the neural circuitry for translating sensory spatial signals into motor commands is shared by these two behaviors.
Affiliation(s)
- F R Nodal
- Department of Physiology, Anatomy and Genetics, Sherrington Building, University of Oxford, Parks Road, Oxford OX1 3PT, UK.
25
Tremblay C, Champoux F, Voss P, Bacon BA, Lepore F, Théoret H. Speech and non-speech audio-visual illusions: a developmental study. PLoS One 2007; 2:e742. [PMID: 17710142 PMCID: PMC1937019 DOI: 10.1371/journal.pone.0000742]
Abstract
It is well known that simultaneous presentation of incongruent audio and visual stimuli can lead to illusory percepts. Recent data suggest that distinct processes underlie non-specific intersensory speech as opposed to non-speech perception. However, the development of both speech and non-speech intersensory perception across childhood and adolescence remains poorly defined. Thirty-eight observers aged 5 to 19 were tested on the McGurk effect (an audio-visual illusion involving speech), the Illusory Flash effect and the Fusion effect (two audio-visual illusions not involving speech) to investigate the development of audio-visual interactions and contrast speech vs. non-speech developmental patterns. Whereas the strength of audio-visual speech illusions varied as a direct function of maturational level, performance on non-speech illusory tasks appeared to be homogeneous across all ages. These data support the existence of independent maturational processes underlying speech and non-speech audio-visual illusory effects.
Affiliation(s)
- Corinne Tremblay
- Department of Psychology, University of Montreal, Montreal, Canada
- Research Center, Sainte-Justine Hospital, Montreal, Canada
- François Champoux
- Speech Language Pathology and Audiology, University of Montreal, Montreal, Canada
- Patrice Voss
- Department of Psychology, University of Montreal, Montreal, Canada
- Benoit A. Bacon
- Department of Psychology, Bishop's University, Sherbrooke, Quebec, Canada
- Franco Lepore
- Department of Psychology, University of Montreal, Montreal, Canada
- Research Center, Sainte-Justine Hospital, Montreal, Canada
- Hugo Théoret
- Department of Psychology, University of Montreal, Montreal, Canada
- Research Center, Sainte-Justine Hospital, Montreal, Canada
- * To whom correspondence should be addressed.
26
Wallace MT, Carriere BN, Perrault TJ, Vaughan JW, Stein BE. The development of cortical multisensory integration. J Neurosci 2006; 26:11844-9. [PMID: 17108157 PMCID: PMC6674880 DOI: 10.1523/jneurosci.3295-06.2006]
Abstract
Although there are many perceptual theories that posit particular maturational profiles in higher-order (i.e., cortical) multisensory regions, our knowledge of multisensory development is primarily derived from studies of a midbrain structure, the superior colliculus. Therefore, the present study examined the maturation of multisensory processes in an area of cat association cortex [i.e., the anterior ectosylvian sulcus (AES)] and found that these processes are rudimentary during early postnatal life and develop only gradually thereafter. The AES comprises separate visual, auditory, and somatosensory regions, along with many multisensory neurons at the intervening borders between them. During early life, sensory responsiveness in AES appears in an orderly sequence. Somatosensory neurons are present at 4 weeks of age and are followed by auditory and multisensory (somatosensory-auditory) neurons. Visual neurons and visually responsive multisensory neurons are first seen at 12 weeks of age. The earliest multisensory neurons are strikingly immature, lacking the ability to synthesize the cross-modal information they receive. With postnatal development, multisensory integrative capacity matures. The delayed maturation of multisensory neurons and multisensory integration in AES suggests that the higher-order processes dependent on these circuits appear comparatively late in ontogeny.
Affiliation(s)
- Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, Tennessee 37232, USA.
27
Jiang W, Jiang H, Rowland BA, Stein BE. Multisensory orientation behavior is disrupted by neonatal cortical ablation. J Neurophysiol 2006; 97:557-62. [PMID: 16971678 DOI: 10.1152/jn.00591.2006]
Abstract
The integration of visual and auditory information can significantly amplify the sensory responses of superior colliculus (SC) neurons and the behaviors that depend on them. This response amplification depends on the development of SC inputs that are derived from two regions of cortex: the anterior ectosylvian sulcus (AES) and the rostral lateral suprasylvian sulcus (rLS). Neonatal ablation of these cortico-collicular areas has been shown to disrupt the development of the multisensory enhancement capabilities of SC neurons and the present results demonstrate that it also precludes the development of the normal multisensory enhancements in orientation behavior. Animals with neonatal ablation of AES and rLS were tested at maturity and found unable to benefit from the combination of visual and auditory cues in their efforts to localize targets in contralesional space. In contrast, their ipsilesional multisensory orientation capabilities were indistinguishable from those of normal animals. However, when only one of these cortical areas was removed during early life, later behavioral consequences were negligible. Whether similar compensatory processes would occur in adult animals remains to be determined. These observations, coupled with those from previous studies, also suggest that a surprisingly high proportion of SC neurons capable of multisensory integration must be present for orientation behavior benefits to be realized. Compensatory mechanisms can achieve this if early lesions spare either AES or rLS, but even the impressive plasticity of the neonatal brain cannot compensate for the early loss of both of them.
Affiliation(s)
- Wan Jiang
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC 27157-1010, USA
28
Bajo VM, Nodal FR, Bizley JK, Moore DR, King AJ. The ferret auditory cortex: descending projections to the inferior colliculus. Cereb Cortex 2006; 17:475-91. [PMID: 16581982 PMCID: PMC7116556 DOI: 10.1093/cercor/bhj164]
Abstract
Descending corticofugal projections are thought to play a critical role in shaping the responses of subcortical neurons. Here, we examine the origins and targets of ferret auditory corticocollicular projections. We show that the ectosylvian gyrus (EG), where the auditory cortex is located, can be subdivided into middle, anterior, and posterior regions according to the pattern of cytochrome oxidase staining and immunoreactivity for the neurofilament antibody SMI32. Injection of retrograde tracers in the inferior colliculus (IC) labeled large layer V pyramidal cells throughout the EG and adjacent sulci. Each region of the EG has a different pattern of descending projections. Neurons in the primary auditory fields in the middle EG project to the lateral nucleus (LN) of the ipsilateral IC and bilaterally to the dorsal cortex and dorsal part of the central nucleus (CN). The projection to these dorsomedial regions of the IC is predominantly ipsilateral and topographically organized. The secondary cortical fields in the posterior EG target the same midbrain areas but exclude the CN of the IC. A smaller projection to the ipsilateral LN also arises from the anterior EG, which is the only region of auditory cortex to target tegmental areas surrounding the IC, including the superior colliculus, periaqueductal gray, intercollicular tegmentum, and cuneiform nucleus. This pattern of corticocollicular connectivity is consistent with regional differences in physiological properties and provides another basis for subdividing ferret auditory cortex into functionally distinct areas.
Affiliation(s)
- Victoria M Bajo
- Department of Physiology, Anatomy and Genetics, Sherrington Building, University of Oxford, Parks Road, Oxford OX1 3PT, UK.