1
Keum D, Medina AE. The effect of developmental alcohol exposure on multisensory integration is larger in deeper cortical layers. Alcohol 2024:S0741-8329(24)00032-6. PMID: 38417561. DOI: 10.1016/j.alcohol.2024.02.006.
Abstract
Fetal Alcohol Spectrum Disorders (FASD) are among the most common causes of mental disability worldwide. Despite efforts to increase public awareness of the risks of drinking during pregnancy, epidemiological studies indicate a prevalence of 1-6% of all births. There is growing evidence that deficits in sensory processing may contribute to the social problems observed in FASD. Multisensory (MS) integration occurs when a combination of inputs from two sensory modalities leads to enhancement or suppression of neuronal firing. MS enhancement is usually linked to processes that facilitate cognition and reaction time, whereas MS suppression has been linked to filtering out unwanted sensory information. The rostral portion of the posterior parietal cortex (PPr) of the ferret is an area that shows robust visual-tactile integration and displays both MS enhancement and suppression. Recently, our lab demonstrated that ferrets exposed to alcohol during the "third trimester equivalent" of human gestation show less MS enhancement and more MS suppression in PPr than controls. Here we complement these findings by comparing in vivo electrophysiological recordings from channels located in shallow and deep cortical layers. We observed that while the effects of alcohol (less MS enhancement and more MS suppression) were found in all layers, the magnitude of these effects was more pronounced in putative layers V-VI. These findings extend our knowledge of the sensory deficits of FASD.
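The abstract describes enhancement and suppression only qualitatively. A standard index in this literature expresses the cross-modal response as a percent change relative to the best unisensory response; the abstract does not spell out the metric, so the function below is an illustrative sketch, not the authors' exact computation:

```python
def ms_index(cm_response: float, v_response: float, t_response: float) -> float:
    """Percent change of the cross-modal (CM) response relative to the best
    unisensory (visual or tactile) response. Positive values indicate
    multisensory enhancement; negative values indicate suppression."""
    best_unisensory = max(v_response, t_response)
    if best_unisensory <= 0:
        raise ValueError("best unisensory response must be positive")
    return 100.0 * (cm_response - best_unisensory) / best_unisensory

# A visual-tactile response of 15 spikes/s against a best unisensory
# response of 10 spikes/s gives +50% (enhancement); 8 spikes/s gives
# -20% (suppression).
```

Under such an index, the paper's finding would read as CM responses shifting toward negative values, more strongly in deep-layer channels.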
Affiliation(s)
- Dongil Keum
- Department of Pediatrics, University of Maryland School of Medicine, 655 Baltimore St., Baltimore, MD 21230
- Alexandre E Medina
- Department of Pediatrics, University of Maryland School of Medicine, 655 Baltimore St., Baltimore, MD 21230
2
Yu L, Xu J. The Development of Multisensory Integration at the Neuronal Level. Adv Exp Med Biol 2024; 1437:153-172. PMID: 38270859. DOI: 10.1007/978-981-99-7611-9_10.
Abstract
Multisensory integration is a fundamental function of the brain. In the typical adult, multisensory neurons' responses to paired multisensory (e.g., audiovisual) cues are significantly more robust than the corresponding best unisensory responses in many brain regions. Synthesizing sensory signals from multiple modalities can speed up sensory processing and improve the salience of outside events or objects. Despite its significance, multisensory integration is not a neonatal feature of the brain. Neurons' ability to effectively combine multisensory information does not appear abruptly but develops gradually during early postnatal life (requiring 4-12 weeks in cats). Multisensory experience is critical for this developmental process. If animals are deprived of normal visual scenes or sounds (and thus of the relevant multisensory experience), the development of the corresponding integrative ability is blocked until the appropriate multisensory experience is obtained. This chapter summarizes the extant literature on the development of multisensory integration (mainly using the cat superior colliculus as a model), sensory-deprivation-induced cross-modal plasticity, and how sensory experience (sensory exposure and perceptual learning) leads to plastic change and the modification of neural circuits in cortical and subcortical areas.
Affiliation(s)
- Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
3
Smyre SA, Bean NL, Stein BE, Rowland BA. Predictability alters multisensory responses by modulating unisensory inputs. Front Neurosci 2023; 17:1150168. PMID: 37065927. PMCID: PMC10090419. DOI: 10.3389/fnins.2023.1150168.
Abstract
The multisensory (deep) layers of the superior colliculus (SC) play an important role in detecting, localizing, and guiding orientation responses to salient events in the environment. Essential to this role is the ability of SC neurons to enhance their responses to events detected by more than one sensory modality, and to become desensitized ('attenuated' or 'habituated') or sensitized ('potentiated') to predictable events via modulatory dynamics. To identify the nature of these modulatory dynamics, we examined how the repetition of different sensory stimuli affected the unisensory and multisensory responses of neurons in the cat SC. Neurons were presented with 2 Hz stimulus trains of three identical visual, auditory, or combined visual-auditory stimuli, followed by a fourth stimulus that was either the same or different ('switch'). The modulatory dynamics proved to be sensory-specific: they did not transfer when the stimulus switched to another modality. However, they did transfer when switching from the visual-auditory stimulus train to either of its modality-specific component stimuli, and vice versa. These observations suggest that predictions, in the form of modulatory dynamics induced by stimulus repetition, are independently sourced from and applied to the modality-specific inputs to the multisensory neuron. This falsifies several plausible mechanisms for these modulatory dynamics: they neither produce general changes in the neuron's transform, nor are they dependent on the neuron's output.
4
Kane I, Hansen J, Lewis R. A novel, interactive game to improve understanding of respiratory control pathways in first-year medical students. Adv Physiol Educ 2022; 46:71-76. PMID: 34735305. DOI: 10.1152/advan.00078.2021.
Abstract
The physiology of respiration is a challenging subject for many medical students. To assist students, we have developed an active-learning game that physically places students within a model of the respiratory control pathway. Participants were provided with a vodcast describing the physiology of respiratory control and instructed to view it before the activity. Once in the classroom, groups of students sat at tables marked to represent components of the respiratory control pathway (e.g., apneustic center, diaphragm). Tables were connected with green and red ropes indicating excitatory and inhibitory effects, respectively. Students were presented with various scenarios (e.g., diabetic ketoacidosis) and asked to predict and illustrate each scenario's effect on subsequent steps in the respiratory pathway by waving the appropriate connecting rope. The next table would continue the pattern to simulate the collective physiological adaptation of the respiratory pathway. Thirty first-year medical students participated in this study. Following the activity, 25 of the 30 participants completed an optional survey, which aimed to assess the benefits of adding this activity to our first-year medical curriculum to build a foundational understanding of the physiology of respiration. Responses were overwhelmingly favorable, and participants reported that playing the game significantly improved their perceived understanding of respiratory control. All but one of the participants recommended using the activity in future classes. Because the small size of the study group may limit generalizability, larger-scale studies are planned.
Affiliation(s)
- Imogen Kane
- Department of Basic Medical Sciences, A.T. Still University, School of Osteopathic Medicine in Arizona, Mesa, Arizona
- Jeffrey Hansen
- Department of Basic Medical Sciences, A.T. Still University, School of Osteopathic Medicine in Arizona, Mesa, Arizona
- Robert Lewis
- Department of Basic Medical Sciences, A.T. Still University, School of Osteopathic Medicine in Arizona, Mesa, Arizona
5
Siemann JK, Veenstra-VanderWeele J, Wallace MT. Approaches to Understanding Multisensory Dysfunction in Autism Spectrum Disorder. Autism Res 2020; 13:1430-1449. PMID: 32869933. PMCID: PMC7721996. DOI: 10.1002/aur.2375.
Abstract
Abnormal sensory responses are a DSM-5 symptom of autism spectrum disorder (ASD), and research findings demonstrate altered sensory processing in ASD. Beyond difficulties with processing information within single sensory domains, including both hypersensitivity and hyposensitivity, difficulties in multisensory processing are emerging as a core focus in ASD. These difficulties may be targeted by treatment approaches such as "sensory integration," which is frequently applied in autism treatment but not yet based on clear evidence. Recently, psychophysical data have emerged demonstrating multisensory deficits in some children with ASD. Unlike deficits in social communication, which are best understood in humans, sensory and multisensory changes offer a tractable marker of circuit dysfunction that is more easily translated into animal model systems to probe the underlying neurobiological mechanisms. Paralleling experimental paradigms previously applied in humans and larger mammals, we and others have demonstrated that multisensory function can also be examined behaviorally in rodents. Here, we review the sensory and multisensory difficulties commonly found in ASD, examining laboratory findings that relate these observations across species. Next, we discuss the known neurobiology of multisensory integration, drawing largely on experimental work in larger mammals, and extensions of these paradigms into rodents. Finally, we describe emerging investigations into multisensory processing in genetic mouse models related to autism risk. By detailing findings from humans to mice, we highlight the advantage of multisensory paradigms that can be easily translated across species, as well as the potential for rodent experimental systems to reveal opportunities for novel treatments. LAY SUMMARY: Sensory and multisensory deficits are commonly found in ASD and may result in cascading effects that impact social communication. By using experiments similar to those in humans, we discuss how studies in animal models may allow an understanding of the brain mechanisms that underlie difficulties in multisensory integration, with the ultimate goal of developing new treatments. Autism Res 2020, 13: 1430-1449. © 2020 International Society for Autism Research, Wiley Periodicals, Inc.
Affiliation(s)
- Justin K Siemann
- Department of Biological Sciences, Vanderbilt University, Nashville, Tennessee, USA
- Jeremy Veenstra-VanderWeele
- Department of Psychiatry, Columbia University, Center for Autism and the Developing Brain, New York Presbyterian Hospital, and New York State Psychiatric Institute, New York, New York, USA
- Mark T Wallace
- Department of Psychiatry, Vanderbilt University, Nashville, Tennessee, USA
- Department of Psychology, Vanderbilt University, Nashville, Tennessee, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee, USA
- Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, Tennessee, USA
6
Stein BE, Rowland BA. Using superior colliculus principles of multisensory integration to reverse hemianopia. Neuropsychologia 2020; 141:107413. PMID: 32113921. DOI: 10.1016/j.neuropsychologia.2020.107413.
Abstract
The diversity of our senses conveys many advantages: it enables them to compensate for one another when needed, and the information they provide about a common event can be integrated to facilitate its processing and, ultimately, adaptive responses. These cooperative interactions are produced by multisensory neurons. A well-studied model in this context is the multisensory neuron in the output layers of the superior colliculus (SC). These neurons integrate and amplify their cross-modal (e.g., visual-auditory) inputs, thereby enhancing the physiological salience of the initiating event and the probability that it will elicit SC-mediated detection, localization, and orientation behavior. Repeated experience with the same visual-auditory stimulus can also increase the neuron's sensitivity to these individual inputs. This observation raised the possibility that such plasticity could be engaged to restore visual responsiveness when compromised. For example, unilateral lesions of visual cortex compromise the visual responsiveness of neurons in the multisensory output layers of the ipsilesional SC and produce profound contralesional blindness (hemianopia). The possibility that multisensory plasticity could restore the visual responses of these neurons, and thereby reverse blindness, was tested in the cat model of hemianopia. Hemianopic subjects were repeatedly presented with spatiotemporally congruent visual-auditory stimulus pairs in the blinded hemifield on a daily or weekly basis. After several weeks of this multisensory exposure paradigm, visual responsiveness was restored in SC neurons, and behavioral responses were elicited by visual stimuli in the previously blind hemifield. The constraints on the effectiveness of this procedure proved to be the same as those constraining SC multisensory plasticity: whereas repetition of a congruent visual-auditory stimulus was highly effective, exposure neither to its individual component stimuli nor to these stimuli in non-congruent configurations was effective. The restored visual responsiveness proved to be robust, highly competitive with that in the intact hemifield, and sufficient to support visual discrimination.
Affiliation(s)
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, USA
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, USA
7
Fu D, Weber C, Yang G, Kerzel M, Nan W, Barros P, Wu H, Liu X, Wermter S. What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective. Front Integr Neurosci 2020; 14:10. PMID: 32174816. PMCID: PMC7056875. DOI: 10.3389/fnint.2020.00010.
Abstract
Selective attention plays an essential role in acquiring and using information from the environment. Over the past 50 years, research on selective attention has been a central topic in cognitive science. Compared with unimodal studies, crossmodal studies are more complex but necessary to solve real-world challenges in both human experiments and computational modeling. Although a growing number of findings on crossmodal selective attention have shed light on humans' behavioral patterns and neural underpinnings, a much better understanding is still needed to yield the same benefit for intelligent computational agents. This article reviews studies of selective attention in unimodal visual and auditory as well as crossmodal audiovisual setups from the multidisciplinary perspectives of psychology and cognitive neuroscience, and evaluates different ways of simulating analogous mechanisms in computational models and robotics. We discuss the gaps between these fields in this interdisciplinary review and provide insights into how psychological findings and theories can be applied in artificial intelligence from different perspectives.
Affiliation(s)
- Di Fu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Cornelius Weber
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Guochun Yang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Matthias Kerzel
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Weizhi Nan
- Department of Psychology, Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, Guangzhou, China
- Pablo Barros
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Haiyan Wu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Xun Liu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Stefan Wermter
- Department of Informatics, University of Hamburg, Hamburg, Germany
8
Xu X, Hanganu-Opatz IL, Bieler M. Cross-Talk of Low-Level Sensory and High-Level Cognitive Processing: Development, Mechanisms, and Relevance for Cross-Modal Abilities of the Brain. Front Neurorobot 2020; 14:7. PMID: 32116637. PMCID: PMC7034303. DOI: 10.3389/fnbot.2020.00007.
Abstract
The emergence of cross-modal learning capabilities requires the interaction of neural areas accounting for sensory and cognitive processing. Convergence of multiple sensory inputs is observed in low-level sensory cortices, including primary somatosensory (S1), visual (V1), and auditory (A1) cortex, as well as in high-level areas such as the prefrontal cortex (PFC). Evidence shows that local neural activity and functional connectivity between sensory cortices participate in cross-modal processing. However, little is known about the functional interplay between the neural areas underlying sensory and cognitive processing that is required for cross-modal learning capabilities across the lifespan. Here we review our current knowledge of the interdependence of low- and high-level cortices for the emergence of cross-modal processing in rodents. First, we summarize the mechanisms underlying the integration of multiple senses and how cross-modal processing in primary sensory cortices might be modified by top-down modulation from the PFC. Second, we examine the critical factors and developmental mechanisms that account for the interaction between neuronal networks involved in sensory and cognitive processing. Finally, we discuss the applicability and relevance of cross-modal processing for brain-inspired intelligent robotics. An in-depth understanding of the factors and mechanisms controlling cross-modal processing might inspire the refinement of robotic systems by better mimicking neural computations.
Affiliation(s)
- Xiaxia Xu
- Developmental Neurophysiology, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Ileana L Hanganu-Opatz
- Developmental Neurophysiology, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Malte Bieler
- Laboratory for Neural Computation, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
9
Kim S, Kim J. Effects of Multimodal Association on Ambiguous Perception in Binocular Rivalry. Perception 2019; 48:796-819. DOI: 10.1177/0301006619867023.
Abstract
When the two eyes view dissimilar images, observers typically report an ambiguous percept known as binocular rivalry, in which subjective perception fluctuates between the two inputs. This perceptual instability often comprises periods of exclusive dominance of each image and a transition state, the piecemeal state, in which the two images are intermingled in a patchwork manner. Here, we investigated the effects of multimodal association with sensory-congruent, arbitrary, and reverse pairs on the piecemeal state, in order to see how each level of association affects ambiguous perception during binocular rivalry. To induce the multisensory associations, we designed a matching task with audiovisual feedback in which subjects were required to respond according to given pairing rules. We found that explicit audiovisual associations can substantially affect the piecemeal state during binocular rivalry, and that this congruency effect, which reduces the amount of visual ambiguity, originates primarily from explicit audiovisual association training rather than from common sensory features. Furthermore, when one stimulus is associated with multiple others, recent and preexisting associations work collectively to influence perceptual ambiguity during rivalry. Our findings show that learned multimodal association directly affects the temporal dynamics of ambiguous perception during binocular rivalry by modulating not only exclusive dominance but also the piecemeal state in a systematic manner.
Affiliation(s)
- Sungyong Kim
- Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Jeounghoon Kim
- Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- School of Humanities and Social Sciences, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
10
Lee ES, Lee JY, Kim GH, Jeon CJ. Identification of calretinin-expressing retinal ganglion cells projecting to the mouse superior colliculus. Cell Tissue Res 2018; 376:153-163. PMID: 30506393. DOI: 10.1007/s00441-018-2964-1.
Abstract
In mice, retinal ganglion cells (RGCs), which comprise around 30 subtypes, exclusively transmit retinal information to the relevant brain systems through parallel visual pathways. The superior colliculus (SC) receives the vast majority of this information from several RGC subtypes. The objective of the current study was to identify the types of calretinin (CR)-expressing RGCs that project to the SC in mice. To label RGCs, we performed immunohistochemistry for CR in the mouse retina after injection of the fluorescent tracer dextran into the mouse SC. Subsequently, neurons double-labeled for dextran and CR were iontophoretically injected with the lipophilic dye DiI to characterize their detailed morphological properties. Analysis of various morphological parameters, including dendritic arborization, dendritic field size, and stratification, indicated that, of the ten different types of CR-expressing RGCs in the retina, the double-labeled cells comprised at least eight types of RGCs projecting to the SC. These cells tended to have small-to-medium field sizes. However, except for dendritic field size, the cells did not exhibit consistent characteristics across the other morphometric parameters examined. The combination of tracer and single-cell injections after immunohistochemistry for a particular molecule provided valuable data confirming the presence of distinct RGC subtypes within multiple-labeled RGCs that project to specific brain regions.
Affiliation(s)
- Eun-Shil Lee
- Department of Biology, School of Life Sciences, BK 21 Plus KNU Creative BioResearch Group, College of Natural Sciences, and Brain Science and Engineering Institute, Kyungpook National University, Daegu 41566, South Korea
- Jea-Young Lee
- Center of Excellence for Aging and Brain Repair, USF Health, University of South Florida, Tampa, FL 33612, USA
- Gil Hyun Kim
- Department of Biology, School of Life Sciences, BK 21 Plus KNU Creative BioResearch Group, College of Natural Sciences, and Brain Science and Engineering Institute, Kyungpook National University, Daegu 41566, South Korea
- Chang-Jin Jeon
- Department of Biology, School of Life Sciences, BK 21 Plus KNU Creative BioResearch Group, College of Natural Sciences, and Brain Science and Engineering Institute, Kyungpook National University, Daegu 41566, South Korea
11
Mahoney JR, Verghese J. Visual-Somatosensory Integration and Quantitative Gait Performance in Aging. Front Aging Neurosci 2018; 10:377. PMID: 30538628. PMCID: PMC6277592. DOI: 10.3389/fnagi.2018.00377.
Abstract
Background: The ability to integrate information across sensory modalities is an integral aspect of mobility, yet the association between visual-somatosensory (VS) integration and gait performance has not been well established in aging. Methods: A total of 333 healthy older adults (mean age 76.53 ± 6.22 years; 53% female) completed a visual-somatosensory simple reaction time task and underwent quantitative gait assessment on an instrumented walkway. The magnitude of VS integration was assessed using probability models and then categorized into four integration classifications (superior, good, poor, or deficient). Cross-sectional associations of VS integration with three independent gait factors (Pace, Rhythm, and Variability, derived by factor analysis) were tested using linear regression. Given the overlap in neural circuitry necessary for both multisensory integration and goal-directed locomotion, we hypothesized that VS integration would be significantly associated with Pace but not Rhythm, which is a more automatic process controlled mainly through brainstem and spinal networks. Results: In keeping with our hypothesis, the magnitude of VS integration was a strong predictor of Pace (β = 0.12, p < 0.05) but not Rhythm (β = −0.01, p = 0.83) in fully adjusted models. While there was a trend toward an association of VS integration with Variability (β = −0.11, p = 0.051), post hoc testing of the individual gait variables loading highest on the Variability factor revealed that stride length variability (β = −0.13, p = 0.03), but not swing time variability (β = −0.08, p = 0.15), was significantly associated with the magnitude of VS integration. Of the cohort, 29% showed superior, 26% good, 29% poor, and 16% deficient VS integration effects. Conclusions: Worse VS integration in aging is associated with worse spatial, but not temporal, aspects of gait performance.
Affiliation(s)
- Jeannette R Mahoney
- Division of Cognitive and Motor Aging, Department of Neurology, Albert Einstein College of Medicine, Bronx, NY, United States
- Joe Verghese
- Division of Cognitive and Motor Aging, Department of Neurology, Albert Einstein College of Medicine, Bronx, NY, United States
- Division of Geriatrics, Department of Medicine, Albert Einstein College of Medicine, Bronx, NY, United States
12
Alcaro A, Carta S, Panksepp J. The Affective Core of the Self: A Neuro-Archetypical Perspective on the Foundations of Human (and Animal) Subjectivity. Front Psychol 2017; 8:1424. PMID: 28919868. PMCID: PMC5586212. DOI: 10.3389/fpsyg.2017.01424.
Abstract
Psychologists have usually considered the "Self" as an object of experience that appears when the individual perceives his or her existence within the conscious field. In accordance with this view, the self-representing capacity of the human mind has been related to corticolimbic learning processes taking place during individual development. On the other hand, Carl Gustav Jung considered the Self the core of our personality, in its conscious and unconscious aspects, as well as in its actual and potential forms. According to Jung, the Self originates from an inborn dynamic structure that integrates the essential drives of our "brain-mind," leading both to instinctual behavioral actions and to archetypal psychological experiences. Interestingly, recent neuroethological studies indicate that our subjective identity rests on ancient neuropsychic processes that humans share with other animals as part of their inborn constitutional repertoire. Indeed, brain activity within subcortical midline structures (SCMSs) is intrinsically related to the emergence of prototypical affective states that not only influence our behavior in a flexible way but also alter our conscious field, giving rise to specific feelings or moods, which constitute the first form of self-orientation in the world. Moreover, such affective dynamics play a central role in the organization of individual personality and in the evolution of all other (more sophisticated) psychological functions. Therefore, on the basis of the convergence between contemporary cutting-edge scientific research and some of Jung's psychological intuitions, we explore here the first neuroevolutionary layer of the human mind, which we call the affective core of the Self.
Affiliation(s)
- Antonio Alcaro
- Santa Lucia Foundation, European Centre for Brain Research, Rome, Italy
- Associazione Italiana Gestalt Analitica (AIGA), Rome, Italy
- Stefano Carta
- Department of Pedagogy, Psychology, and Philosophy, University of Cagliari, Cagliari, Italy
- Jaak Panksepp
- Department of Integrative Physiology and Neuroscience, College of Veterinary Medicine, Washington State University, Pullman, WA, United States
| |
13
Batmaz AU, de Mathelin M, Dresp-Langley B. Seeing virtual while acting real: Visual display and strategy effects on the time and precision of eye-hand coordination. PLoS One 2017; 12:e0183789. [PMID: 28859092 PMCID: PMC5578485 DOI: 10.1371/journal.pone.0183789]
Abstract
Effects of different visual displays on the time and precision of bare-handed or tool-mediated eye-hand coordination were investigated in a pick-and-place task with complete novices. All of them scored well above average in spatial perspective-taking ability and performed the task with their dominant hand. Two groups of novices, four men and four women in each group, had to place a small object in a precise order on the centre of five targets on a Real-world Action Field (RAF), as swiftly and as precisely as possible, using a tool or not (control). Each individual session consisted of four visual display conditions. The order of conditions was counterbalanced between individuals and sessions. Subjects looked at what their hands were doing 1) directly in front of them (“natural” top-down view), 2) in a top-down 2D fisheye view, 3) in a top-down undistorted 2D view, or 4) in a 3D stereoscopic top-down view (head-mounted OCULUS DK 2). Care was taken to ensure that object movements in all image conditions matched the real-world movements in time and space. One group looked at the 2D images with the monitor positioned sideways (sub-optimal); the other group looked at the monitor placed straight ahead of them (near-optimal). All image-viewing conditions had significantly detrimental effects on the time (seconds) and precision (pixels) of task execution when compared with “natural” direct viewing. More importantly, we found significant trade-offs between time and precision both between and within groups, and significant interactions between viewing conditions and manipulation conditions. The results shed new light on controversial findings concerning visual display effects on eye-hand coordination, and lead to the conclusion that differences in camera systems and in the adaptive strategies of novices are likely to explain them.
Affiliation(s)
- Anil U. Batmaz
- ICube Lab Robotics Department, University of Strasbourg, 1 Place de l'Hôpital, Strasbourg, France
- Michel de Mathelin
- ICube Lab Robotics Department, University of Strasbourg, 1 Place de l'Hôpital, Strasbourg, France
- Birgitta Dresp-Langley
- ICube Lab Cognitive Science Department, Centre National de la Recherche Scientifique, 1 Place de l'Hôpital, Strasbourg, France
14
Siemann JK, Muller CL, Forsberg CG, Blakely RD, Veenstra-VanderWeele J, Wallace MT. An autism-associated serotonin transporter variant disrupts multisensory processing. Transl Psychiatry 2017; 7:e1067. [PMID: 28323282 PMCID: PMC5416665 DOI: 10.1038/tp.2017.17]
Abstract
Altered sensory processing is observed in many children with autism spectrum disorder (ASD), with growing evidence that these impairments extend to the integration of information across the different senses (that is, multisensory function). The serotonin system has an important role in sensory development and function, and alterations of serotonergic signaling have been suggested to have a role in ASD. A gain-of-function coding variant in the serotonin transporter (SERT) associates with sensory aversion in humans, and when expressed in mice produces traits associated with ASD, including disruptions in social and communicative function and repetitive behaviors. The current study set out to test whether these mice also exhibit changes in multisensory function when compared with wild-type (WT) animals on the same genetic background. Mice were trained to respond to auditory and visual stimuli independently before being tested under visual, auditory and paired audiovisual (multisensory) conditions. WT mice exhibited significant gains in response accuracy under audiovisual conditions. In contrast, although the SERT mutant animals learned the auditory and visual tasks comparably to WT littermates, they failed to show behavioral gains under multisensory conditions. We believe these results provide the first behavioral evidence of multisensory deficits in a genetic mouse model related to ASD and implicate the serotonin system in multisensory processing and in the multisensory changes seen in ASD.
Affiliation(s)
- J K Siemann
- Neuroscience Program, Vanderbilt University, Nashville, TN, USA
- C L Muller
- Neuroscience Program, Vanderbilt University, Nashville, TN, USA
- C G Forsberg
- Department of Psychiatry, Vanderbilt University, Nashville, TN, USA
- R D Blakely
- Department of Psychiatry, Vanderbilt University, Nashville, TN, USA
- Silvio O. Conte Center for Neuroscience Research, Vanderbilt University, Nashville, TN, USA
- Department of Biomedical Science, Charles E. Schmidt College of Medicine, Jupiter, FL, USA
- Florida Atlantic University Brain Institute, Florida Atlantic University, Jupiter, FL, USA
- Department of Pharmacology, Vanderbilt University, Nashville, TN, USA
- J Veenstra-VanderWeele
- Silvio O. Conte Center for Neuroscience Research, Vanderbilt University, Nashville, TN, USA
- Department of Psychiatry, Sackler Institute for Developmental Psychobiology, Columbia University, New York, NY, USA
- Center for Autism and The Developing Brain, New York Presbyterian Hospital, New York, NY, USA
- New York State Psychiatric Institute, New York, NY, USA
- M T Wallace
- Department of Psychiatry, Vanderbilt University, Nashville, TN, USA
- Silvio O. Conte Center for Neuroscience Research, Vanderbilt University, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN, USA
15
Batmaz AU, de Mathelin M, Dresp-Langley B. Getting nowhere fast: trade-off between speed and precision in training to execute image-guided hand-tool movements. BMC Psychol 2016; 4:55. [PMID: 27842577 PMCID: PMC5109684 DOI: 10.1186/s40359-016-0161-0]
Abstract
Background: The speed and precision with which objects are moved by hand or by hand-tool interaction under image guidance depend on a specific type of visual and spatial sensorimotor learning. Novices have to learn to optimally control what their hands are doing in a real-world environment while looking at an image representation of the scene on a video monitor. Previous research has shown slower task execution times and lower performance scores under image guidance compared with situations of direct action viewing. The cognitive processes for overcoming this drawback through training are not yet understood. Methods: We investigated the effects of training on the time and precision of direct-view versus image-guided object positioning on targets of a Real-world Action Field (RAF). Two men and two women had to learn to perform the task as swiftly and as precisely as possible with their dominant hand, using a tool or not and wearing a glove or not. Individuals were trained in sessions of mixed trial blocks with no feedback. Results: As predicted, image guidance produced significantly slower times and lower precision in all trainees and sessions compared with direct viewing. With training, all trainees got faster in all conditions, but only one of them became reliably more precise in the image-guided conditions. Speed-accuracy trade-offs in the individual performance data show that the highest precision scores and the steepest learning curves, for both time and precision, were produced by the slowest starter. Fast starters produced consistently poorer precision scores in all sessions. The fastest starter showed no sign of stable precision learning, even after extended training. Conclusions: Performance evolution towards optimal precision is compromised when novices start by going as fast as they can. The findings have direct implications for individual skill monitoring in training programmes for image-guided technology applications with human operators.
Affiliation(s)
- Anil Ufuk Batmaz
- Laboratoire ICube UMR 7357 CNRS-University of Strasbourg, 2, rue Boussingault, 67000, Strasbourg, France
- Michel de Mathelin
- Laboratoire ICube UMR 7357 CNRS-University of Strasbourg, 2, rue Boussingault, 67000, Strasbourg, France
- Birgitta Dresp-Langley
- Laboratoire ICube UMR 7357 CNRS-University of Strasbourg, 2, rue Boussingault, 67000, Strasbourg, France
16
Wu C, Stefanescu RA, Martel DT, Shore SE. Listening to another sense: somatosensory integration in the auditory system. Cell Tissue Res 2015; 361:233-50. [PMID: 25526698 PMCID: PMC4475675 DOI: 10.1007/s00441-014-2074-7]
Abstract
Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body and the auditory cortex. In this review, we explore the process of multisensory integration from (1) anatomical (inputs and connections), (2) physiological (cellular responses), (3) functional and (4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing and offers a multisensory perspective regarding the understanding of sensory disorders.
Affiliation(s)
- Calvin Wu
- Department of Otolaryngology, Kresge Hearing Research Institute, University of Michigan, Ann Arbor, MI, 48109, USA
17
Siemann JK, Muller CL, Bamberger G, Allison JD, Veenstra-VanderWeele J, Wallace MT. A novel behavioral paradigm to assess multisensory processing in mice. Front Behav Neurosci 2015; 8:456. [PMID: 25628549 PMCID: PMC4290729 DOI: 10.3389/fnbeh.2014.00456]
Abstract
Human psychophysical and animal behavioral studies have illustrated the benefits that can be conferred by having information available from multiple senses. Given the central role of multisensory integration in perceptual and cognitive function, it is important to design behavioral paradigms for animal models that provide mechanistic insights into the neural bases of these multisensory processes. Prior studies have focused on large mammals, yet the mouse offers a host of advantages, most importantly the wealth of available genetic manipulations relevant to human disease. To begin to employ this model species for multisensory research, it is necessary first to establish and validate a robust behavioral assay for the mouse. Two common mouse strains (C57BL/6J and 129S6/SvEv) were first trained to respond to unisensory (visual and auditory) stimuli separately. Once trained, performance with paired audiovisual stimuli was examined with a focus on response accuracy and behavioral gain. Stimulus durations varied from 50 ms to 1 s in order to modulate the effectiveness of the stimuli and to determine whether the well-established "principle of inverse effectiveness" held in this model. Response accuracy in the multisensory condition was greater than for either unisensory condition at all stimulus durations, with significant gains observed at the 300 ms and 100 ms durations. Main effects of stimulus duration and stimulus modality, and a significant interaction between these factors, were observed. The greatest behavioral gain was seen for the 100 ms duration condition, with a trend toward larger behavioral gains as the stimuli became less effective (i.e., inverse effectiveness). These results are the first to validate the mouse as a species that shows demonstrable behavioral facilitation under multisensory conditions and provide a platform for future mechanistically directed studies examining the neural bases of multisensory integration.
Affiliation(s)
- Justin K Siemann
- Multisensory Research Laboratory, Neuroscience Program, Vanderbilt University, Nashville, TN, USA
- Gary Bamberger
- Computer Software Engineering Department, MED Associates Inc., St. Albans, VT, USA
- John D Allison
- Murine Neurobehavior Core, Vanderbilt University, Nashville, TN, USA
- Jeremy Veenstra-VanderWeele
- Center for Autism and the Developing Brain, and Department of Psychiatry, Sackler Institute for Developmental Psychobiology, Columbia University, New York, NY, USA
- Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA
18
Wallace MT, Stevenson RA. The construct of the multisensory temporal binding window and its dysregulation in developmental disabilities. Neuropsychologia 2014; 64:105-23. [PMID: 25128432 PMCID: PMC4326640 DOI: 10.1016/j.neuropsychologia.2014.08.005]
Abstract
Behavior, perception and cognition are strongly shaped by the synthesis of information across the different sensory modalities. Such multisensory integration often results in performance and perceptual benefits that reflect the additional information conferred by having cues from multiple senses providing redundant or complementary information. The spatial and temporal relationships of these cues provide powerful statistical information about how these cues should be integrated or "bound" in order to create a unified perceptual representation. Much recent work has examined the temporal factors that are integral in multisensory processing, with much of it focused on the construct of the multisensory temporal binding window - the epoch of time within which stimuli from different modalities are likely to be integrated and perceptually bound. Emerging evidence suggests that this temporal window is altered in a series of neurodevelopmental disorders, including autism, dyslexia and schizophrenia. In addition to their role in sensory processing, these deficits in multisensory temporal function may play an important role in the perceptual and cognitive weaknesses that characterize these clinical disorders. Within this context, focus on improving the acuity of multisensory temporal function may have important implications for the amelioration of the "higher-order" deficits that serve as the defining features of these disorders.
Affiliation(s)
- Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN 37232, USA; Department of Hearing & Speech Sciences, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA
- Ryan A Stevenson
- Department of Psychology, University of Toronto, Toronto, ON, Canada
19
Kristjánsson T, Thorvaldsson TP, Kristjánsson A. Divided multimodal attention: sensory trace and context coding strategies in spatially congruent auditory and visual presentation. Multisens Res 2014; 27:91-110. [PMID: 25296473 DOI: 10.1163/22134808-00002448]
Abstract
Previous research involving both unimodal and multimodal studies suggests that single-response change detection is a capacity-free process while a discriminatory up or down identification is capacity-limited. The trace/context model assumes that this reflects different memory strategies rather than inherent differences between identification and detection. To perform such tasks, one of two strategies is used, a sensory trace or a context coding strategy, and if one is blocked, people will automatically use the other. A drawback to most preceding studies is that stimuli are presented at separate locations, creating the possibility of a spatial confound, which invites alternative interpretations of the results. We describe a series of experiments, investigating divided multimodal attention, without the spatial confound. The results challenge the trace/context model. Our critical experiment involved a gap before a change in volume and brightness, which according to the trace/context model blocks the sensory trace strategy, simultaneously with a roaming pedestal, which should block the context coding strategy. The results clearly show that people can use strategies other than sensory trace and context coding in the tasks and conditions of these experiments, necessitating changes to the trace/context model.
20
Stein BE, Stanford TR, Rowland BA. Development of multisensory integration from the perspective of the individual neuron. Nat Rev Neurosci 2014; 15:520-35. [PMID: 25158358 DOI: 10.1038/nrn3742]
Abstract
The ability to use cues from multiple senses in concert is a fundamental aspect of brain function. It maximizes the brain’s use of the information available to it at any given moment and enhances the physiological salience of external events. Because each sense conveys a unique perspective of the external world, synthesizing information across senses affords computational benefits that cannot otherwise be achieved. Multisensory integration not only has substantial survival value but can also create unique experiences that emerge when signals from different sensory channels are bound together. However, neurons in a newborn’s brain are not capable of multisensory integration, and studies in the midbrain have shown that the development of this process is not predetermined. Rather, its emergence and maturation critically depend on cross-modal experiences that alter the underlying neural circuit in such a way that optimizes multisensory integrative capabilities for the environment in which the animal will function.
21
Dissociation of psychophysical and EEG steady-state response measures of cross-modal temporal correspondence for amplitude modulated acoustic and vibrotactile stimulation. Int J Psychophysiol 2013; 89:433-43. [PMID: 23770083 DOI: 10.1016/j.ijpsycho.2013.06.006]
Abstract
Research examining multisensory integration suggests that the correspondence of stimulus characteristics across modalities (cross-modal correspondence) can have a dramatic influence on both neurophysiological and perceptual responses to multimodal stimulation. The current study extends prior research by examining the cross-modal correspondence of amplitude modulation rate for simultaneous acoustic and vibrotactile stimulation using EEG and perceptual measures of sensitivity to amplitude modulation. To achieve this, psychophysical thresholds and steady-state responses (SSRs) were measured for acoustic and vibrotactile amplitude modulated (AM) stimulation for 21 and 40 Hz AM rates as a function of the cross-modal correspondence. The study design included three primary conditions to determine whether the changes in the SSR and psychophysical thresholds were due to the cross-modal temporal correspondence of amplitude modulated stimuli: NONE (AM in one modality only), SAME (the same AM rate for each modality) and DIFF (different AM rates for each modality). The results of the psychophysical analysis showed that AM detection thresholds for the simultaneous AM conditions (i.e., SAME and DIFF) were significantly higher (i.e., lower sensitivity) than AM detection thresholds for the stimulation of a single modality (i.e., NONE). SSR results showed significant effects of SAME and DIFF conditions on SSR activity. The different pattern of results for perceptual and SSR measures of cross-modal correspondence of AM rate indicates a dissociation between entrained cortical activity (i.e., SSR) and perception.
22
Dissociation of reach-related and visual signals in the human superior colliculus. Neuroimage 2013; 82:61-7. [PMID: 23727531 DOI: 10.1016/j.neuroimage.2013.05.101]
Abstract
Electrophysiological and micro-stimulation studies in non-human animal species indicated that the superior colliculus (SC) plays a role in the control of upper limb movements. In our previous work we found reach-related signals in the deep superior colliculus in humans. Here we show that signals at more dorsal locations are also correlated with the execution of arm movements. We instructed healthy participants to reach for visual targets presented either in the left or in the right visual hemifield during an fMRI measurement. Visual stimulation was dissociated from movement execution using a pro- and anti-reaching task. We thereby successfully differentiated between signals at these locations induced by the visual input of target presentation on the one hand and by the execution of arm movements on the other. Extending our previous report, the results of this study are in good agreement with the observed anatomical distribution of reach-related neurons in macaques. Evidently, reach-related signals can be found across a considerable depth range in humans as well.
23
McBride S, Huelse M, Lee M. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system. PLoS One 2013; 8:e54585. [PMID: 23437044 PMCID: PMC3577816 DOI: 10.1371/journal.pone.0054585]
Abstract
Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. Presenting these requirements clearly was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model in robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.
Affiliation(s)
- Sebastian McBride
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, United Kingdom
24
Abstract
Neurophysiological studies in nonhuman species indicated that neurons in the superior colliculus (SC) are involved in the control of upper limb movements. These findings suggested that the SC represents a crucial hub in a general sensorimotor network, including skeletomotor as well as oculomotor functions. In contrast to the SC in the various animal models, the human SC is largely unknown territory. In particular, it is unknown whether findings of reach-related activity in the nonhuman SC can be extrapolated to humans. Using fMRI we found signal increases at superficial/intermediate and deep locations of the SC during the execution of arm movements. In contrast, signals related to saccade execution were confined to the superficial and intermediate locations. Although targets for reaching were presented in the left and right hemifields under central fixation, we found a lateralization of reach-related signals with respect to the active arm. In contrast, saccade-related activity was bilateral, in agreement with the bilateral target presentation and the resulting directions of saccades. Our results suggest that the human SC contributes not only to the coordination of eye movements and spatial shifts of attention but also to the sensorimotor control of arm movements.
25
Xiao J, Zhou X, Jiang T, Zhi ZN, Li Q, Qu J, Chen JG. Unilateral cerebral ischemia inhibits optomotor responses of the ipsilateral eye in mice. J Integr Neurosci 2012; 11:193-200. [PMID: 22744825 DOI: 10.1142/s0219635212500148]
Abstract
A reduction in blood flow to the brain causes stroke and damage to neuronal networks. Cerebral ischemia is frequently associated with loss of visual functions. Because retinal and small cerebral vessels are vulnerable to similar risk factors, the loss of vision could result from concurrent retinal ischemia, and it is not clear whether visual functions may be inhibited by cerebral ischemia directly. In this study, the distal middle cerebral artery in the right hemisphere of mice was occluded to produce unilateral cerebral ischemia and subsequent infarction. The layer V neurons expressing yellow fluorescent protein (YFP) in transgenic B6.Cg-Tg(Thy1-YFPH)2Jrs/J mice disappeared in the motor and somatosensory cortex, but not in the visual area. The latencies of flash visual evoked potentials recorded from the two hemispheres were imbalanced, but did not differ markedly from the latencies recorded in controls. However, the optomotor responses of the ipsilateral eye were significantly reduced by 48 h after occlusion. Our results suggest that focal cerebral ischemia may inhibit optomotor responses of the ipsilateral eye in the absence of damage to the visual cortex. This study may provide a platform for further investigation of the relationship between cortical ischemia and visual function.
Affiliation(s)
- Jian Xiao
- State Key Laboratory Cultivation Base, Ministry of Health, School of Ophthalmology and Optometry, Wenzhou Medical College, Zhejiang, PR China
26
Engel A, Senkowski D, Schneider T. Multisensory Integration through Neural Coherence. Front Neurosci 2011. [DOI: 10.1201/9781439812174-10]
27
Perrault T, Rowland B, Stein B. The Organization and Plasticity of Multisensory Integration in the Midbrain. Front Neurosci 2011. [DOI: 10.1201/b11092-20]
28
Kajikawa Y, Falchier A, Musacchia G, Lakatos P, Schroeder C. Audiovisual Integration in Nonhuman Primates. Front Neurosci 2011. [DOI: 10.1201/9781439812174-8]
29
Engel A, Senkowski D, Schneider T. Multisensory Integration through Neural Coherence. Front Neurosci 2011. [DOI: 10.1201/b11092-10]
30
Kajikawa Y, Falchier A, Musacchia G, Lakatos P, Schroeder C. Audiovisual Integration in Nonhuman Primates. Front Neurosci 2011. [DOI: 10.1201/b11092-8]
31
Perrault T, Rowland B, Stein B. The Organization and Plasticity of Multisensory Integration in the Midbrain. Front Neurosci 2011. [DOI: 10.1201/9781439812174-20]
32
Innes-Brown H, Barutchu A, Shivdasani MN, Crewther DP, Grayden DB, Paolini AG. Susceptibility to the flash-beep illusion is increased in children compared to adults. Dev Sci 2011; 14:1089-99. [DOI: 10.1111/j.1467-7687.2011.01059.x]

33
Meredith MA, Kryklywy J, McMillan AJ, Malhotra S, Lum-Tai R, Lomber SG. Crossmodal reorganization in the early deaf switches sensory, but not behavioral roles of auditory cortex. Proc Natl Acad Sci U S A 2011; 108:8856-61. [PMID: 21555555 PMCID: PMC3102418 DOI: 10.1073/pnas.1018519108]
Abstract
It is well known that early disruption of sensory input from one modality can induce crossmodal reorganization of a deprived cortical area, resulting in compensatory abilities in the remaining senses. Compensatory effects, however, occur in selected cortical regions and it is not known whether such compensatory phenomena have any relation to the original function of the reorganized area. In the cortex of hearing cats, the auditory field of the anterior ectosylvian sulcus (FAES) is largely responsive to acoustic stimulation and its unilateral deactivation results in profound contralateral acoustic orienting deficits. Given these functional and behavioral roles, the FAES was studied in early-deafened cats to examine its crossmodal sensory properties as well as to assess the behavioral role of that reorganization. Recordings in the FAES of early-deafened adults revealed robust responses to visual stimulation as well as receptive fields that collectively represented the contralateral visual field. A second group of early-deafened cats was trained to localize visual targets in a perimetry array. In these animals, cooling loops were surgically placed on the FAES to reversibly deactivate the region, which resulted in substantial contralateral visual orienting deficits. These results demonstrate that crossmodal plasticity can substitute one sensory modality for another while maintaining the functional repertoire of the reorganized region.
Affiliation(s)
- M Alex Meredith
- Department of Anatomy and Neurobiology, Virginia Commonwealth University School of Medicine, Richmond, VA 23298, USA.

34
Raij T, Ahveninen J, Lin FH, Witzel T, Jääskeläinen IP, Letham B, Israeli E, Sahyoun C, Vasios C, Stufflebeam S, Hämäläinen M, Belliveau JW. Onset timing of cross-sensory activations and multisensory interactions in auditory and visual sensory cortices. Eur J Neurosci 2010; 31:1772-82. [PMID: 20584181 DOI: 10.1111/j.1460-9568.2010.07213.x]
Abstract
Here we report early cross-sensory activations and audiovisual interactions at the visual and auditory cortices using magnetoencephalography (MEG) to obtain accurate timing information. Data from an identical fMRI experiment were employed to support MEG source localization results. Simple auditory and visual stimuli (300-ms noise bursts and checkerboards) were presented to seven healthy humans. MEG source analysis suggested generators in the auditory and visual sensory cortices for both within-modality and cross-sensory activations. fMRI cross-sensory activations were strong in the visual but almost absent in the auditory cortex; this discrepancy with MEG possibly reflects the influence of acoustical scanner noise in fMRI. In the primary auditory cortices (Heschl's gyrus) the onset of activity to auditory stimuli was observed at 23 ms in both hemispheres, and to visual stimuli at 82 ms in the left and at 75 ms in the right hemisphere. In the primary visual cortex (Calcarine fissure) the activations to visual stimuli started at 43 ms and to auditory stimuli at 53 ms. Cross-sensory activations thus started later than sensory-specific activations, by 55 ms in the auditory cortex and by 10 ms in the visual cortex, suggesting that the origins of the cross-sensory activations may be in the primary sensory cortices of the opposite modality, with conduction delays (from one sensory cortex to another) of 30-35 ms. Audiovisual interactions started at 85 ms in the left auditory, 80 ms in the right auditory and 74 ms in the visual cortex, i.e., 3-21 ms after inputs from the two modalities converged.
Affiliation(s)
- Tommi Raij
- MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging, Bldg 149, 13 St, Charlestown, MA, USA.

35
Manger PR, Restrepo CE, Innocenti GM. The superior colliculus of the ferret: cortical afferents and efferent connections to dorsal thalamus. Brain Res 2010; 1353:74-85. [PMID: 20682301 DOI: 10.1016/j.brainres.2010.07.085]

36
Royal DW, Krueger J, Fister MC, Wallace MT. Adult plasticity of spatiotemporal receptive fields of multisensory superior colliculus neurons following early visual deprivation. Restor Neurol Neurosci 2010; 28:259-70. [PMID: 20404413 DOI: 10.3233/rnn-2010-0488]
Abstract
PURPOSE: Previous work has established that the integrative capacity of multisensory neurons in the superior colliculus (SC) matures over a protracted period of postnatal life (Wallace and Stein, 1997), and that the development of normal patterns of multisensory integration depends critically on early sensory experience (Wallace et al., 2004). Although these studies demonstrated the importance of early sensory experience in the creation of mature multisensory circuits, it remains unknown whether the reestablishment of sensory experience in adulthood can reverse these effects and restore integrative capacity.
METHODS: The current study tested this hypothesis in cats that were reared in absolute darkness until adulthood and then returned to a normal housing environment for an equivalent period of time. Single unit extracellular recordings targeted multisensory neurons in the deep layers of the SC, and analyses were focused on both conventional measures of multisensory integration and on more recently developed methods designed to characterize spatiotemporal receptive fields (STRF).
RESULTS: Analysis of the STRF structure and integrative capacity of multisensory SC neurons revealed significant modifications in the temporal response dynamics of multisensory responses (e.g., discharge durations, peak firing rates, and mean firing rates), as well as significant changes in rates of spontaneous activation and degrees of multisensory integration.
CONCLUSIONS: These results emphasize the importance of early sensory experience in the establishment of normal multisensory processing architecture and highlight the limited plastic potential of adult multisensory circuits.
Affiliation(s)
- David W Royal
- Kennedy Center for Research on Human Development, Nashville, Tennessee 37232, USA.

37
Stein BE, Stanford TR, Rowland BA. The neural basis of multisensory integration in the midbrain: its organization and maturation. Hear Res 2009; 258:4-15. [PMID: 19345256 PMCID: PMC2787841 DOI: 10.1016/j.heares.2009.03.012]
Abstract
Multisensory integration describes a process by which information from different sensory systems is combined to influence perception, decisions, and overt behavior. Despite a widespread appreciation of its utility in the adult, its developmental antecedents have received relatively little attention. Here we review what is known about the development of multisensory integration, with a focus on the circuitry and experiential antecedents of its development in the model system of the multisensory (i.e., deep) layers of the superior colliculus. Of particular interest here are two sets of experimental observations: (1) cortical influences appear essential for multisensory integration in the SC, and (2) postnatal experience guides its maturation. The current belief is that the experience normally gained during early life is instantiated in the cortico-SC projection, and that this is the primary route by which ecological pressures adapt SC multisensory integration to the particular environment in which it will be used.
Affiliation(s)
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Medical Center Blvd., Winston-Salem, NC 27157-1010, USA.

38
Stein BE, Perrault TJ, Stanford TR, Rowland BA. Postnatal experiences influence how the brain integrates information from different senses. Front Integr Neurosci 2009; 3:21. [PMID: 19838323 PMCID: PMC2762369 DOI: 10.3389/neuro.07.021.2009]
Abstract
Sensory processing disorder (SPD) is characterized by anomalous reactions to, and integration of, sensory cues. Although the underlying etiology of SPD is unknown, one brain region likely to reflect these sensory and behavioral anomalies is the superior colliculus (SC), a structure involved in the synthesis of information from multiple sensory modalities and the control of overt orientation responses. In the present review we describe normal functional properties of this structure, the manner in which its individual neurons integrate cues from different senses, and the overt SC-mediated behaviors that are believed to manifest this “multisensory integration.” Of particular interest here is how SC neurons develop their capacity to engage in multisensory integration during early postnatal life as a consequence of early sensory experience, and the intimate communication between cortex and the midbrain that makes this developmental process possible.
Affiliation(s)
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC, USA

39
Hollich G, Prince CG. Comparing infants' preference for correlated audiovisual speech with signal-level computational models. Dev Sci 2009; 12:379-87. [DOI: 10.1111/j.1467-7687.2009.00823.x]

40
Meehan SK, Staines WR. Task-relevance and temporal synchrony between tactile and visual stimuli modulates cortical activity and motor performance during sensory-guided movement. Hum Brain Mapp 2009; 30:484-96. [PMID: 18095277 DOI: 10.1002/hbm.20520]
Abstract
Sensory-guided movements require the analysis and integration of task-relevant sensory inputs from multiple modalities. This article sought to: (1) assess effects of intermodal temporal synchrony upon modulation of primary somatosensory cortex (S1) during continuous sensorimotor transformations, (2) identify cortical areas sensitive to temporal synchrony, and (3) provide further insight into the reduction of S1 activity during continuous vibrotactile tracking previously observed by our group (Meehan and Staines 2007: Brain Res 1138:148-158). Functional MRI was acquired while participants received simultaneous bimodal (visuospatial/vibrotactile) stimulation and continuously tracked random changes in one modality, by applying graded force to a force-sensing resistor. Effects of intermodal synchrony were investigated, unbeknownst to the participants, by varying temporal synchrony so that sensorimotor transformations dictated by the distracter modality either conflicted (low synchrony) or supplemented (high synchrony) those of the target modality. Temporal synchrony differentially influenced tracking performance dependent upon tracking modality. Physiologically, synchrony did not influence S1 activation; however, the insula and superior temporal gyrus were influenced regardless of tracking modality. The left temporal-parietal junction demonstrated increased activation during high synchrony specific to vibrotactile tracking. The superior parietal lobe and superior temporal gyrus demonstrated increased activation during low synchrony specific to visuospatial tracking. As previously reported, vibrotactile tracking resulted in decreased S1 activation relative to when it was task-irrelevant. We conclude that while temporal synchrony is represented at higher levels than S1, interactions between inter- and intramodal mechanisms determines sensory processing at the level of S1.
Affiliation(s)
- Sean K Meehan
- Department of Kinesiology, University of Waterloo, Waterloo, Ontario, Canada

41
Zahar Y, Reches A, Gutfreund Y. Multisensory enhancement in the optic tectum of the barn owl: spike count and spike timing. J Neurophysiol 2009; 101:2380-94. [PMID: 19261710 DOI: 10.1152/jn.91193.2008]
Abstract
Temporal and spatial correlations between auditory and visual stimuli facilitate the perception of unitary events and improve behavioral responses. However, it is not clear how combined visual and auditory information is processed in single neurons. Here we studied responses of multisensory neurons in the barn owl's optic tectum (the avian homologue of the superior colliculus) to visual, auditory, and bimodal stimuli. We specifically focused on responses to sequences of repeated stimuli. We first report that bimodal stimulation tends to elicit more spikes than in the responses to its unimodal components (a phenomenon known as multisensory enhancement). However, this tendency was found to be history-dependent; multisensory enhancement was mostly apparent in the first stimulus of the sequence and to a much lesser extent in the subsequent stimuli. Next, a vector-strength analysis was applied to quantify the phase locking of the responses to the stimuli. We report that in a substantial number of multisensory neurons responses to sequences of bimodal stimuli elicited spike trains that were better phase locked to the stimulus than spike trains elicited by stimulating with the unimodal counterparts (visual or auditory). We conclude that multisensory enhancement can be manifested in better phase locking to the stimulus as well as in more spikes.
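The vector-strength analysis named in this abstract is a standard measure of phase locking: each spike is projected onto the unit circle at its phase within the stimulus cycle, and the length of the mean resultant vector is taken. As an illustrative sketch (the function name and interface are mine, not taken from the paper):

```python
import math

def vector_strength(spike_times, period):
    """Mean resultant length of spike phases relative to a periodic stimulus.

    Each spike time t maps to phase 2*pi*(t mod period)/period; the returned
    value is |mean unit vector|: 1.0 for perfect phase locking, near 0 for
    spikes spread uniformly over the cycle.
    """
    if not spike_times:
        return 0.0
    phases = [2.0 * math.pi * (t % period) / period for t in spike_times]
    c = sum(math.cos(p) for p in phases)
    s = sum(math.sin(p) for p in phases)
    return math.hypot(c, s) / len(spike_times)
```

Spikes falling at exact multiples of the period give 1.0, while spikes evenly spaced across the cycle cancel and give a value near 0.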
Affiliation(s)
- Yael Zahar
- Dept. of Physiology and Biophysics, The Bruce Rappaport Medical School, The Technion, Haifa 31096, Israel

42
Shiraiwa T. Multimodal chemosensory integration through the maxillary palp in Drosophila. PLoS One 2008; 3:e2191. [PMID: 18478104 PMCID: PMC2364657 DOI: 10.1371/journal.pone.0002191]
Abstract
Drosophila melanogaster has an olfactory organ called the maxillary palp. It is smaller and numerically simpler than the antenna, and its specific role in behavior has long been unclear. Because of its proximity to the mouthparts, I explored the possibility of a role in taste behavior. The maxillary palp was found to be tuned to mediate odor-induced taste enhancement: a sucrose solution was more appealing when simultaneously presented with the odorant 4-methylphenol. The same result was observed with other odors that stimulate other types of olfactory receptor neuron in the maxillary palp. When an antennal olfactory receptor was genetically introduced in the maxillary palp, the fly interpreted a new odor as a sweet-enhancing smell. These results all point to taste enhancement as a function of the maxillary palp. It also opens the door for studying integration of multiple senses in a model organism.
Affiliation(s)
- Takashi Shiraiwa
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, Connecticut, United States of America.

43
Steenken R, Diederich A, Colonius H. Time course of auditory masker effects: tapping the locus of audiovisual integration? Neurosci Lett 2008; 435:78-83. [PMID: 18355963 DOI: 10.1016/j.neulet.2008.02.017]
Abstract
In a focused attention paradigm, saccadic reaction time (SRT) to a visual target tends to be shorter when an auditory accessory stimulus is presented in close temporal and spatial proximity. Observed SRT reductions typically diminish as spatial disparity between the stimuli increases. Here a visual target LED (500 ms duration) was presented above or below the fixation point and a simultaneously presented auditory accessory (2 ms duration) could appear at the same or the opposite vertical position. SRT enhancement was about 35 ms in the coincident and 10 ms in the disparate condition. In order to further probe the audiovisual integration mechanism, in addition to the auditory non-target an auditory masker (200 ms duration) was presented before, simultaneous to, or after the accessory stimulus. In all interstimulus interval (ISI) conditions, SRT enhancement went down both in the coincident and disparate configuration, but this decrement was fairly stable across the ISI values. If multisensory integration solely relied on a feed-forward process, one would expect a monotonic decrease of the masker effect with increasing ISI in the backward masking condition. It is therefore conceivable that the relatively high-energetic masker causes a broad excitatory response of SC neurons. During this state, the spatial audio-visual information from multisensory association areas is fed back and merged with the spatially unspecific excitation pattern induced by the masker. Assuming that a certain threshold of activation has to be achieved in order to generate a saccade in the correct direction, the blurred joint output of noise and spatial audio-visual information needs more time to reach this threshold prolonging SRT to an audio-visual object.
Affiliation(s)
- Rike Steenken
- Department of Psychology, University of Oldenburg, P.O. Box 2503, 26111 Oldenburg, Germany.

44
Stein BE, Stanford TR. Multisensory integration: current issues from the perspective of the single neuron. Nat Rev Neurosci 2008; 9:255-66. [PMID: 18354398 DOI: 10.1038/nrn2331]
Abstract
For thousands of years science philosophers have been impressed by how effectively the senses work together to enhance the salience of biologically meaningful events. However, they really had no idea how this was accomplished. Recent insights into the underlying physiological mechanisms reveal that, in at least one circuit, this ability depends on an intimate dialogue among neurons at multiple levels of the neuraxis; this dialogue cannot take place until long after birth and might require a specific kind of experience. Understanding the acquisition and usage of multisensory integration in the midbrain and cerebral cortex of mammals has been aided by a multiplicity of approaches. Here we examine some of the fundamental advances that have been made and some of the challenging questions that remain.
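In this single-neuron literature, multisensory enhancement and suppression (the effects also discussed in the head article above) are conventionally quantified against the best modality-specific response. A minimal sketch of that interactive index (the helper name is mine):

```python
def multisensory_enhancement(cm_response, unisensory_responses):
    """Percent interactive index: 100 * (CM - SMmax) / SMmax.

    CM is the neuron's response (e.g., mean spike count) to the cross-modal
    combination; SMmax is the largest response to any modality-specific
    component. Positive values indicate enhancement, negative suppression.
    """
    sm_max = max(unisensory_responses)
    if sm_max <= 0:
        raise ValueError("index requires a positive best unisensory response")
    return 100.0 * (cm_response - sm_max) / sm_max
```

For example, a combined response of 15 spikes against a best unisensory response of 10 gives +50% enhancement, while a combined response of 6 gives -40% suppression.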
Affiliation(s)
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina 27157, USA.

45
Valsecchi M, Turatto M. Microsaccadic responses in a bimodal oddball task. Psychol Res 2008; 73:23-33. [PMID: 18320216 DOI: 10.1007/s00426-008-0142-x]
Abstract
In a visual oddball task the presentation of rare targets induces a prolonged microsaccadic inhibition as compared to standards. Here, we replicated this effect also in the auditory modality. In addition, although auditory standards induced a more limited modulation of microsaccadic frequency as compared to visual standards, auditory oddballs induced a prolonged microsaccadic inhibition. With bimodal standard stimuli the microsaccadic response was determined by the attended modality, resembling that produced by attended unimodal stimuli. The present findings support the idea that the microsaccadic response to oddball and standard stimuli is partly driven by cognitive mechanisms common to both the visual and the auditory modality, and that microsaccades can be used as an implicit behavioral measure of ongoing cognitive processes.
Affiliation(s)
- Matteo Valsecchi
- Department of Cognitive Sciences and Education, University of Trento, Corso Bettini, 31, 38068, Rovereto, Italy.

46
Bresciani JP, Dammeier F, Ernst MO. Tri-modal integration of visual, tactile and auditory signals for the perception of sequences of events. Brain Res Bull 2008; 75:753-60. [PMID: 18394521 DOI: 10.1016/j.brainresbull.2008.01.009]
Abstract
We investigated the interactions between visual, tactile and auditory sensory signals for the perception of sequences of events. Sequences of flashes, taps and beeps were presented simultaneously. For each session, subjects were instructed to count the number of events presented in one modality (Target) and to ignore the stimuli presented in the other modalities (Background). The number of events presented in the background sequence could differ from the number of events in the target sequence. For each session, we quantified the Background-evoked bias by comparing subjects' responses with and without Background (Target presented alone). Nine combinations between vision, touch and audition were tested. In each session but two, the Background significantly biased the Target. Vision was the most susceptible to Background-evoked bias and the least efficient in biasing the other two modalities. By contrast, audition was the least susceptible to Background-evoked bias and the most efficient in biasing the other two modalities. These differences were strongly correlated to the relative reliability of each modality. In line with this, the evoked biases were larger when the Background consisted of two instead of only one modality. These results show that for the perception of sequences of events: (1) vision, touch and audition are automatically integrated; (2) the respective contributions of the three modalities to the integrated percept differ; (3) the relative contribution of each modality depends on its relative reliability (1/variability); (4) task-irrelevant stimuli have more weight when presented in two rather than only one modality.
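The reliability weighting described in this abstract (contribution proportional to 1/variability) matches the standard maximum-likelihood cue-combination rule, in which each modality's estimate is weighted by its inverse variance. A sketch under that assumption, not the authors' own analysis code:

```python
def fuse_cues(cues):
    """Inverse-variance (maximum-likelihood) fusion of sensory estimates.

    cues: list of (estimate, variance) pairs, one per modality. Each cue gets
    weight w_i = (1/var_i) / sum_j (1/var_j); the fused estimate is the
    weighted mean, and the fused variance 1 / sum_j (1/var_j) is never larger
    than that of the most reliable single cue.
    """
    inv_vars = [1.0 / var for _, var in cues]
    total = sum(inv_vars)
    fused = sum(x * w for (x, _), w in zip(cues, inv_vars)) / total
    return fused, 1.0 / total
```

Two equally reliable cues at 10 and 12 fuse to 11 with half the single-cue variance; making one cue noisier shifts the fused estimate toward the more reliable cue, consistent with the bias pattern the abstract reports.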
Affiliation(s)
- Jean-Pierre Bresciani
- Max-Planck-Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tuebingen, Germany

47
Chapman CR, Tuckett RP, Song CW. Pain and stress in a systems perspective: reciprocal neural, endocrine, and immune interactions. J Pain 2008; 9:122-45. [PMID: 18088561 PMCID: PMC2278005 DOI: 10.1016/j.jpain.2007.09.006]
Abstract
This paper advances a psychophysiological systems view of pain in which physical injury, or wounding, generates a complex stress response that extends beyond the nervous system and contributes to the experience of pain. Through a common chemical language comprising neurotransmitters, peptides, endocannabinoids, cytokines, and hormones, an ensemble of interdependent nervous, endocrine, and immune processes operates in concert to cope with the injury. These processes act as a single agent and comprise a supersystem. Acute pain in its multiple dimensions, and the related symptoms that commonly occur with it, are products of the supersystem. Chronic pain can develop as a result of unusual stress. Social stressors can compound the stress resulting from a wound or act alone to dysregulate the supersystem. When the supersystem suffers dysregulation, health, function, and sense of well-being suffer. Some chronic pain conditions are the product of supersystem dysregulation. Individuals vary and are vulnerable to dysregulation and dysfunction in particular organ systems due to the unique interactions of genetic, epigenetic and environmental factors, as well as the past experiences that characterize each person.
PERSPECTIVE: Acute tissue injury activates an ensemble of interdependent nervous, endocrine, and immune processes that operate in concert and comprise a supersystem. Some chronic pain conditions result from supersystem dysregulation. Individuals vary and are vulnerable to dysregulation due to the unique interactions of genetic, epigenetic, and environmental factors and past experiences that characterize each person. This perspective can potentially assist clinicians in assessing and managing chronic pain patients.
Affiliation(s)
- C Richard Chapman
- Pain Research Center, Department of Anesthesiology, University of Utah, Salt Lake City, Utah 84108, USA.

48
Ross LA, Saint-Amour D, Leavitt VM, Molholm S, Javitt DC, Foxe JJ. Impaired multisensory processing in schizophrenia: deficits in the visual enhancement of speech comprehension under noisy environmental conditions. Schizophr Res 2007; 97:173-83. [PMID: 17928202 DOI: 10.1016/j.schres.2007.08.008]
Abstract
BACKGROUND: Viewing a speaker's articulatory movements substantially improves a listener's ability to understand spoken words, especially under noisy environmental conditions. In this study we investigated the ability of patients with schizophrenia to integrate visual and auditory speech. Our objective was to determine to what extent they experience benefit from visual articulation and to detail under what listening conditions they might show the greatest impairments.
METHODS: We assessed the ability to recognize auditory and audiovisual speech in different levels of noise in 18 patients with schizophrenia and compared their performance with that of 18 healthy volunteers. We used a large set of monosyllabic words as our stimuli in order to more closely approximate performance in everyday situations.
RESULTS: Patients with schizophrenia showed deficits in their ability to derive benefit from visual articulatory motion. This impairment was most pronounced at signal-to-noise levels where multisensory gain is known to be maximal in healthy control subjects. A surprising finding was that despite known early auditory sensory processing deficits and reports of impairments in speech processing in schizophrenia, patients' performance in unisensory auditory speech perception remained fully intact.
CONCLUSIONS: Thus, the results showed a specific deficit in multisensory speech processing in the absence of any measurable deficit in unisensory speech processing and suggest that sensory integration dysfunction may be an important and, to date, rather overlooked aspect of schizophrenia.
Affiliation(s)
- Lars A Ross
- Program in Cognitive Neuroscience, Department of Psychology, The City College of City University of New York, 138th St. and Convent Avenue, New York, New York 10031, USA

49
Rowland BA, Stein BE. Multisensory integration produces an initial response enhancement. Front Integr Neurosci 2007; 1:4. [PMID: 18958232 PMCID: PMC2526011 DOI: 10.3389/neuro.07.004.2007]
Abstract
The brain has evolved the ability to integrate information across the senses in order to improve the detection and disambiguation of biologically significant events. This multisensory synthesis of information leads to faster (and more accurate) behavioral responses, yet the underlying neural mechanisms by which these responses are speeded are as yet unclear. The aim of these experiments was to evaluate the temporal properties of multisensory enhancement in the physiological responses of neurons in the superior colliculus (SC). Of specific interest was the temporal evolution of their responses to individual modality-specific stimuli as well as to cross-modal combinations of these stimuli. The results demonstrate that cross-modal stimuli typically elicit faster, more robust, and more reliable physiological responses than do their modality-specific component stimuli. Response measures sensitive to the time domain showed that these multisensory responses were enhanced from their very onset, and that the acceleration of the enhancement was greatest within the first 40ms (or 50% of the response). The latter half of the multisensory response was typically only as robust and informative as predicted by a linear combination of the unisensory component responses. These results may reveal some of the key physiological changes underlying many of the SC-mediated behavioral benefits of multisensory integration.
Affiliation(s)
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, USA

50
Bresciani JP, Ernst MO. Signal reliability modulates auditory-tactile integration for event counting. Neuroreport 2007; 18:1157-61. [PMID: 17589318 DOI: 10.1097/wnr.0b013e3281ace0ca]
Abstract
Sequences of auditory beeps and tactile taps were simultaneously presented and participants were instructed to focus on one of these modalities and to ignore the other. We tested whether (i) the two sensory channels bias one another and (ii) the interaction depends on the relative reliability of the channels. Audition biased tactile perception and touch biased auditory perception. Lowering the reliability of the auditory channel (i.e. the intensity of the beeps) decreased the effect of audition on touch and increased the effect of touch on audition. These results show that simultaneous auditory and tactile stimuli tend to be automatically integrated in a reliability-dependent manner.