1.
de Paz C, Travieso D. A direct comparison of sound and vibration as sources of stimulation for a sensory substitution glove. Cogn Res Princ Implic 2023; 8:41. PMID: 37402032. DOI: 10.1186/s41235-023-00495-w.
Abstract
Sensory substitution devices (SSDs) facilitate the detection of environmental information through enhancement of touch and/or hearing capabilities. Research has demonstrated that several tasks can be successfully completed using acoustic, vibrotactile, and multimodal devices. The suitability of a substituting modality is also mediated by the type of information required to perform the specific task. The present study tested the adequacy of touch and hearing in a grasping task by utilizing a sensory substitution glove. The substituting modalities signal, through increases in stimulation intensity, the distance between the fingers and the object. A psychophysical magnitude-estimation experiment was conducted: forty blindfolded sighted participants discriminated the intensity of vibrotactile and acoustic stimulation equivalently, although they experienced some difficulty with the more intense stimuli. Additionally, a grasping task involving cylindrical objects of varying diameter, distance, and orientation was performed, with thirty blindfolded sighted participants divided into vibration, sound, and multimodal groups. Performance was high (84% correct grasps), with equivalent success rates across groups. Movement variables showed more precision and confidence in the multimodal condition. In a questionnaire, the multimodal group indicated a preference for using a multimodal SSD in daily life and identified vibration as their primary source of stimulation. These results demonstrate that performance improves with specific-purpose SSDs when the information necessary for a task is identified and coupled with the delivered stimulation. Furthermore, they suggest that functional equivalence between substituting modalities is achievable when these prior steps are met.
Affiliation(s)
- Carlos de Paz
- Facultad de Psicología, Universidad Autónoma de Madrid, 28049, Madrid, Spain
- David Travieso
- Facultad de Psicología, Universidad Autónoma de Madrid, 28049, Madrid, Spain
2.
Csonka M, Mardmomen N, Webster PJ, Brefczynski-Lewis JA, Frum C, Lewis JW. Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain. Cereb Cortex Commun 2021; 2:tgab002. PMID: 33718874. PMCID: PMC7941256. DOI: 10.1093/texcom/tgab002.
Abstract
Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical "hubs") preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
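Activation likelihood estimation, as used in this meta-analysis, has a simple computational core: each reported focus is modeled as a 3D Gaussian probability blob, blobs are combined into a per-experiment modeled-activation map, and experiments are combined voxelwise as a probabilistic union. A toy sketch of that core (the grid size, FWHM value, and function names are illustrative assumptions, not the GingerALE implementation):

```python
import numpy as np

def ale_map(experiments, grid_shape=(20, 20, 20), fwhm_vox=3.0):
    """Toy activation likelihood estimate (ALE) on a voxel grid.

    `experiments` is a list of arrays of foci coordinates (in voxel units).
    Each focus becomes a 3D Gaussian 'modeled activation' blob; blobs within
    an experiment are combined by voxelwise maximum, and experiments are
    combined as a probabilistic union: ALE = 1 - prod(1 - MA_i).
    """
    # Convert full width at half maximum to the Gaussian's sigma
    sigma = fwhm_vox / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    # Coordinates of every voxel, shape (x, y, z, 3)
    grid = np.stack(
        np.meshgrid(*[np.arange(n) for n in grid_shape], indexing="ij"),
        axis=-1,
    ).astype(float)
    one_minus = np.ones(grid_shape)
    for foci in experiments:
        ma = np.zeros(grid_shape)  # modeled activation for this experiment
        for focus in np.atleast_2d(foci):
            d2 = np.sum((grid - focus) ** 2, axis=-1)
            ma = np.maximum(ma, np.exp(-d2 / (2.0 * sigma ** 2)))
        one_minus *= 1.0 - ma
    return 1.0 - one_minus
```

In the real procedure the resulting map is then thresholded against a null distribution of randomly placed foci; convergence across experiments, not any single study's peak, is what survives.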
Affiliation(s)
- Matt Csonka
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Nadia Mardmomen
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Paula J Webster
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Julie A Brefczynski-Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Chris Frum
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- James W Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
3.
Lloyd-Esenkaya T, Lloyd-Esenkaya V, O'Neill E, Proulx MJ. Multisensory inclusive design with sensory substitution. Cogn Res Princ Implic 2020; 5:37. PMID: 32770416. PMCID: PMC7415050. DOI: 10.1186/s41235-020-00240-7.
Abstract
Sensory substitution techniques are perceptual and cognitive phenomena used to represent one sensory form with an alternative. Current applications of sensory substitution techniques typically focus on the development of assistive technologies whereby visually impaired users can acquire visual information via auditory and tactile cross-modal feedback. Despite their evident success in scientific research and in furthering theory development in cognition, however, sensory substitution techniques have not yet gained widespread adoption within sensory-impaired populations. Here we argue that shifting the focus from assistive to mainstream applications may resolve some of the current issues regarding the use of sensory substitution devices to improve outcomes for those with disabilities. This article provides a tutorial guide on how to use research into multisensory processing and sensory substitution techniques from the cognitive sciences to design new inclusive cross-modal displays. A greater focus on developing inclusive mainstream applications could lead to innovative technologies that can be enjoyed by every person.
Affiliation(s)
- Tayfun Lloyd-Esenkaya
- Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Computer Science, University of Bath, Bath, UK
- Eamonn O'Neill
- Department of Computer Science, University of Bath, Bath, UK
- Michael J Proulx
- Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Psychology, University of Bath, Bath, UK
4.
Zerr M, Freihorst C, Schütz H, Sinke C, Müller A, Bleich S, Münte TF, Szycik GR. Brief Sensory Training Narrows the Temporal Binding Window and Enhances Long-Term Multimodal Speech Perception. Front Psychol 2019; 10:2489. PMID: 31749748. PMCID: PMC6848860. DOI: 10.3389/fpsyg.2019.02489.
Abstract
Our ability to integrate multiple sensory-based representations of our surroundings supplies us with a more holistic view of the world, and the nervous system uses complex algorithms to construct this coherent percept. One clue to solving this 'binding problem' lies in temporal characteristics: environmental information propagates at different speeds (e.g., sound versus electromagnetic waves) and incurs different sensory processing times, so the brain must flexibly adjust the temporal relationship of a stimulus pair derived from the same event. This tolerance can be conceptualized as the cross-modal temporal binding window (TBW). Several studies have shown the plasticity of the TBW and its importance for audio-visual illusions, synesthesia, and psychiatric disturbances. Using three audio-visual paradigms, we investigated how the length (short vs. long) and modality (uni- vs. multimodal) of a perceptual training aimed at reducing the TBW affect a healthy population. We also investigated the influence of the TBW on speech intelligibility, where participants had to integrate auditory and visual speech information from a videotaped speaker. We showed that simple sensory trainings can change the TBW and can optimize speech perception at a very naturalistic level. While training length had no differential effect on the malleability of the TBW, the multisensory trainings induced a significantly stronger narrowing of the TBW than their unisensory counterparts. Furthermore, a narrowing of the TBW was associated with better performance in speech perception, meaning that participants showed a greater capacity for integrating information from different sensory modalities when one modality was impaired. All effects persisted for at least seven days.
Our findings show the significance of multisensory temporal processing regarding ecologically valid measures and have important clinical implications for interventions that may be used to alleviate debilitating conditions (e.g., autism, schizophrenia), in which multisensory temporal function is shown to be impaired.
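The TBW in studies like this one is typically read off simultaneity-judgment data: the proportion of "simultaneous" responses is plotted against stimulus onset asynchrony (SOA) and a bell-shaped function is fitted, with the window width taken from the fitted spread. A minimal sketch of that step, assuming a Gaussian model fitted via a log-transform (the model choice and the FWHM-based width are common conventions, not this paper's exact procedure):

```python
import numpy as np

def fit_tbw(soas, p_simultaneous, floor=1e-6):
    """Fit log p = a*soa^2 + b*soa + c (a Gaussian in disguise) and return
    (point of subjective simultaneity, window width).

    Width is taken as the full width at half maximum, 2*sqrt(2*ln 2)*sigma.
    SOAs are in ms; p_simultaneous are response proportions per SOA.
    """
    soas = np.asarray(soas, dtype=float)
    p = np.asarray(p_simultaneous, dtype=float)
    mask = p > floor                       # log() needs strictly positive rates
    a, b, _ = np.polyfit(soas[mask], np.log(p[mask]), 2)
    sigma = np.sqrt(-1.0 / (2.0 * a))      # since a = -1/(2*sigma^2)
    pss = -b / (2.0 * a)                   # vertex of the fitted parabola
    return pss, 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
```

On data generated from a Gaussian centered at +20 ms with sigma = 120 ms, this recovers a PSS near 20 ms and a width near 283 ms; a training-induced narrowing of the TBW, as reported above, would show up as a smaller fitted width.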
Affiliation(s)
- Michael Zerr
- Department of Psychosomatic Medicine and Psychotherapy, Hannover Medical School, Hanover, Germany
- Christina Freihorst
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Helene Schütz
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Christopher Sinke
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Astrid Müller
- Department of Psychosomatic Medicine and Psychotherapy, Hannover Medical School, Hanover, Germany
- Stefan Bleich
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Thomas F Münte
- Department of Neurology, University of Lübeck, Lübeck, Germany; Institute of Psychology II, University of Lübeck, Lübeck, Germany
- Gregor R Szycik
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
5.
McCracken HS, Murphy BA, Glazebrook CM, Burkitt JJ, Karellas AM, Yielder PC. Audiovisual Multisensory Integration and Evoked Potentials in Young Adults With and Without Attention-Deficit/Hyperactivity Disorder. Front Hum Neurosci 2019; 13:95. PMID: 30941026. PMCID: PMC6433696. DOI: 10.3389/fnhum.2019.00095.
Abstract
The purpose of this study was to assess how young adults with attention-deficit/hyperactivity disorder (ADHD) process audiovisual (AV) multisensory stimuli using behavioral and neurological measures. Adults with a clinical diagnosis of ADHD (n = 10) and neurotypical controls (n = 11) completed a simple response-time task consisting of auditory, visual, and AV multisensory conditions. Continuous 64-electrode electroencephalography (EEG) was collected to assess neurological responses to each condition. The AV multisensory condition resulted in the shortest response times for both populations. Analysis using the race model (Miller, 1982) demonstrated that those with ADHD violated the race model earlier in the response, which may be a marker of impulsivity. EEG analysis revealed that both groups showed early multisensory integration (MSI) following multisensory stimulus onset. There were also significant group differences in event-related potentials (ERPs) in frontal, parietal, and occipital brain regions, regions reported to be altered in those with ADHD. This study presents results examining multisensory processing in adults with ADHD and can serve as a foundation for future ADHD research using developmental designs, as well as for the development of novel technological supports.
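The race-model test invoked here is concrete enough to sketch: Miller's (1982) inequality states that, under separate activation (no integration), the multisensory RT distribution can never exceed the sum of the unimodal distributions, F_AV(t) <= F_A(t) + F_V(t), so excesses at fast RTs are taken as evidence of integration. A minimal illustration using empirical CDFs (the quantile grid and function names are my own assumptions, not the authors' analysis code):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, quantiles=None):
    """Evaluate Miller's (1982) race-model inequality on reaction times.

    Returns (test times, F_AV - bound) evaluated at quantiles of the
    multisensory RT distribution, where bound = min(F_A + F_V, 1).
    Positive differences indicate race-model violation.
    """
    if quantiles is None:
        quantiles = np.arange(0.05, 1.0, 0.05)
    rt_a, rt_v, rt_av = (np.asarray(x, dtype=float) for x in (rt_a, rt_v, rt_av))
    ts = np.quantile(rt_av, quantiles)  # probe times from the AV distribution

    def ecdf(sample, t):
        # Empirical CDF of `sample` evaluated at each time in `t`
        return np.mean(sample[:, None] <= t[None, :], axis=0)

    f_av = ecdf(rt_av, ts)
    bound = np.minimum(ecdf(rt_a, ts) + ecdf(rt_v, ts), 1.0)
    return ts, f_av - bound
```

Positive values of the returned difference at early quantiles indicate violation; the finding above is that such violations appeared earlier in the response for the ADHD group.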
Affiliation(s)
- Heather S McCracken
- Faculty of Health Sciences, University of Ontario Institute of Technology, Oshawa, ON, Canada
- Bernadette A Murphy
- Faculty of Health Sciences, University of Ontario Institute of Technology, Oshawa, ON, Canada
- Cheryl M Glazebrook
- Faculty of Kinesiology and Recreation Management, University of Manitoba, Winnipeg, MB, Canada; Health, Leisure & Human Performance Institute, University of Manitoba, Winnipeg, MB, Canada
- James J Burkitt
- Faculty of Health Sciences, University of Ontario Institute of Technology, Oshawa, ON, Canada
- Antonia M Karellas
- Faculty of Health Sciences, University of Ontario Institute of Technology, Oshawa, ON, Canada
- Paul C Yielder
- Faculty of Health Sciences, University of Ontario Institute of Technology, Oshawa, ON, Canada; Faculty of Health, School of Medicine, Deakin University, Waurn Ponds, VIC, Australia
6.
Audiovisual Lexical Retrieval Deficits Following Left Hemisphere Stroke. Brain Sci 2018; 8:206. PMID: 30486517. PMCID: PMC6316523. DOI: 10.3390/brainsci8120206.
Abstract
Binding the sensory features of what we hear and see allows the formation of a coherent percept and access to semantics. Previous work on object naming has focused on visual confrontation naming, with limited research in nonverbal auditory or multisensory processing. To investigate the neural substrates and sensory effects of lexical retrieval, we evaluated healthy adults (n = 118) and left hemisphere stroke patients (LHD, n = 42) in naming manipulable objects across auditory (sound), visual (picture), and multisensory (audiovisual) conditions. LHD patients were divided into cortical, cortical–subcortical, or subcortical lesion groups (CO, CO–SC, SC), and specific lesion location was investigated in a predictive model. Subjects produced lower accuracy in auditory naming relative to the other conditions. Controls demonstrated greater naming accuracy and faster reaction times across all conditions compared to LHD patients. Naming across conditions was most severely impaired in CO patients. Both auditory and visual naming accuracy were impacted by temporal lobe involvement, although auditory naming was sensitive to lesions extending subcortically. Only controls demonstrated significant improvement over visual naming with the addition of auditory cues (i.e., the multisensory condition). Results support overlapping neural networks for visual and auditory modalities related to semantic integration in lexical retrieval and temporal lobe involvement, while multisensory integration was impacted by both occipital and temporal lobe lesions. The findings support modality specificity in naming and suggest that auditory naming is mediated by a distributed cortical–subcortical network overlapping with networks mediating spatiotemporal aspects of skilled movements that produce sound.
7.
Bremen P, Massoudi R, Van Wanrooij MM, Van Opstal AJ. Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man. Front Syst Neurosci 2017; 11:89. PMID: 29238295. PMCID: PMC5712580. DOI: 10.3389/fnsys.2017.00089.
Abstract
The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain.
Affiliation(s)
- Peter Bremen
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands; Department of Neuroscience, Erasmus Medical Center, Rotterdam, Netherlands
- Rooholla Massoudi
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands; Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, United Kingdom
- Marc M Van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- A J Van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
8.
Lange J, Kapala K, Krause H, Baumgarten TJ, Schnitzler A. Rapid temporal recalibration to visuo-tactile stimuli. Exp Brain Res 2017; 236:347-354. PMID: 29143125. PMCID: PMC5809529. DOI: 10.1007/s00221-017-5132-z.
Abstract
For a comprehensive understanding of the environment, the brain must constantly decide whether the incoming information originates from the same source and needs to be integrated into a coherent percept. This integration process is believed to be mediated by temporal integration windows. If presented with temporally asynchronous stimuli for a few minutes, the brain adapts to this new temporal relation by recalibrating the temporal integration windows. Such recalibration can occur even more rapidly after exposure to just a single trial of asynchronous stimulation. While rapid recalibration has been demonstrated for audio-visual stimuli, evidence for rapid recalibration of visuo-tactile stimuli is lacking. Here, we investigated rapid recalibration in the visuo-tactile domain. Subjects received visual and tactile stimuli with different stimulus onset asynchronies (SOA) and were asked to report whether the visuo-tactile stimuli were presented simultaneously. Our results demonstrate visuo-tactile rapid recalibration by revealing that subjects' simultaneity reports were modulated by the temporal order of stimulation in the preceding trial. This rapid recalibration effect, however, was only significant if the SOA in the preceding trial was smaller than 100 ms, while rapid recalibration could not be demonstrated for SOAs larger than 100 ms. Since rapid recalibration in the audio-visual domain has been demonstrated for SOAs larger than 100 ms, we propose that visuo-tactile recalibration works at shorter SOAs, and thus faster time scales than audio-visual rapid recalibration.
Affiliation(s)
- Joachim Lange
- Medical Faculty, Institute of Clinical Neuroscience and Medical Psychology, Heinrich Heine University, Düsseldorf, Germany
- Katharina Kapala
- Medical Faculty, Institute of Clinical Neuroscience and Medical Psychology, Heinrich Heine University, Düsseldorf, Germany
- Holger Krause
- Medical Faculty, Institute of Clinical Neuroscience and Medical Psychology, Heinrich Heine University, Düsseldorf, Germany
- Thomas J Baumgarten
- Medical Faculty, Institute of Clinical Neuroscience and Medical Psychology, Heinrich Heine University, Düsseldorf, Germany
- Alfons Schnitzler
- Medical Faculty, Institute of Clinical Neuroscience and Medical Psychology, Heinrich Heine University, Düsseldorf, Germany
9.
Costantini M, Migliorati D, Donno B, Sirota M, Ferri F. Expected but omitted stimuli affect crossmodal interaction. Cognition 2017; 171:52-64. PMID: 29107888. DOI: 10.1016/j.cognition.2017.10.016.
Abstract
One of the most important abilities of our brain is to integrate input from different sensory modalities to create a coherent representation of the environment. Does expectation affect such multisensory integration? In this paper, we tackled this issue by taking advantage of the crossmodal congruency effect (CCE). Participants made elevation judgments to a visual target while ignoring tactile distractors. We manipulated the expectation of the tactile distractor by pairing the tactile stimulus to the index finger with a high-frequency tone and the tactile stimulus to the thumb with a low-frequency tone in 80% of the trials. In the remaining trials we delivered the tone and the visual target, but the tactile distractor was omitted (Study 1). Results fully replicated the basic crossmodal congruency effect. Strikingly, the CCE was observed, though to a lesser degree, even when the tactile distractor was not presented but merely expected. The contingencies between tones and tactile distractors were reversed in a follow-up study (Study 2), and the effect was further tested in two conceptual replications using different combinations of stimuli (Studies 5 and 6). Two control studies ruled out alternative explanations of the observed effect that would not involve a role for tactile distractors (Studies 3 and 4). Two additional control studies unequivocally demonstrated the dependency of the CCE on the spatial and temporal expectation of the distractors (Studies 7 and 8). An internal small-scale meta-analysis showed that the crossmodal congruency effect with predicted distractors is a robust medium-sized effect. Our findings reveal that multisensory integration, one of the most basic and ubiquitous mechanisms for encoding external events, benefits from expectation of sensory input.
Affiliation(s)
- Marcello Costantini
- Centre for Brain Science, Department of Psychology, University of Essex, Colchester, UK; Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, Foundation University G. d'Annunzio, Chieti, Italy
- Daniele Migliorati
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, Foundation University G. d'Annunzio, Chieti, Italy
- Brunella Donno
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, Foundation University G. d'Annunzio, Chieti, Italy
- Miroslav Sirota
- Centre for Brain Science, Department of Psychology, University of Essex, Colchester, UK
- Francesca Ferri
- Centre for Brain Science, Department of Psychology, University of Essex, Colchester, UK
10.
Stevenson RA, Baum SH, Krueger J, Newhouse PA, Wallace MT. Links between temporal acuity and multisensory integration across life span. J Exp Psychol Hum Percept Perform 2017; 44:106-116. PMID: 28447850. DOI: 10.1037/xhp0000424.
Abstract
The temporal relationship between individual pieces of information from the different sensory modalities is one of the stronger cues for integrating such information into a unified perceptual gestalt, conveying numerous perceptual and behavioral advantages. Temporal acuity, however, varies greatly over the life span. It has previously been hypothesized that changes in temporal acuity in both development and healthy aging may thus play a key role in integrative abilities. This study tested the temporal acuity of 138 individuals ranging in age from 5 to 80 years. Temporal acuity and multisensory integration abilities were tested both within and across modalities (audition and vision) with simultaneity judgment and temporal order judgment tasks. We observed that temporal acuity, both within and across modalities, improved throughout development into adulthood and subsequently declined with healthy aging, as did the ability to integrate multisensory speech information. Of importance, throughout development, temporal acuity for simple stimuli (i.e., flashes and beeps) predicted individuals' abilities to integrate more complex speech information. In the aging population, however, although declining temporal acuity was accompanied by declines in integrative abilities, temporal acuity did not predict integration at the individual level. Together, these results suggest that the impact of temporal acuity on multisensory integration varies throughout the life span. Although the maturation of temporal acuity drives the rise of multisensory integrative abilities during development, it is unable to account for changes in integrative abilities in healthy aging. The differential relationships between age, temporal acuity, and multisensory integration suggest an important role for experience in these processes.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychology, Brain and Mind Institute, University of Western Ontario
- Sarah H Baum
- Department of Psychology, University of Washington
- Paul A Newhouse
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center
11.
Abstract
Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between 2 subsequently presented hand action sounds. Discrimination was most enhanced when the visual action “matched” the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.
12.
Mahoney JR, Molholm S, Butler JS, Sehatpour P, Gomez-Ramirez M, Ritter W, Foxe JJ. Keeping in touch with the visual system: spatial alignment and multisensory integration of visual-somatosensory inputs. Front Psychol 2015; 6:1068. PMID: 26300797. PMCID: PMC4525670. DOI: 10.3389/fpsyg.2015.01068.
Abstract
Correlated sensory inputs coursing along the individual sensory processing hierarchies arrive at multisensory convergence zones in cortex where inputs are processed in an integrative manner. The exact hierarchical level of multisensory convergence zones and the timing of their inputs are still under debate, although increasingly, evidence points to multisensory integration (MSI) at very early sensory processing levels. While MSI is said to be governed by stimulus properties including space, time, and magnitude, violations of these rules have been documented. The objective of the current study was to determine, both psychophysically and electrophysiologically, whether differential visual-somatosensory (VS) integration patterns exist for stimuli presented to the same versus opposite hemifields. Using high-density electrical mapping and complementary psychophysical data, we examined multisensory integrative processing for combinations of visual and somatosensory inputs presented to both left and right spatial locations. We assessed how early during sensory processing VS interactions were seen in the event-related potential and whether spatial alignment of the visual and somatosensory elements resulted in differential integration effects. Reaction times to all VS pairings were significantly faster than those to the unisensory conditions, regardless of spatial alignment, pointing to engagement of integrative multisensory processing in all conditions. In support, electrophysiological results revealed significant differences between multisensory simultaneous VS and summed V + S responses, regardless of the spatial alignment of the constituent inputs. Nonetheless, multisensory effects were earlier in the aligned conditions, and were found to be particularly robust in the case of right-sided inputs (beginning at just 55 ms). 
In contrast to previous work on audio-visual and audio-somatosensory inputs, the current work suggests a degree of spatial specificity to the earliest detectable multisensory integrative effects in response to VS pairings.
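The summed-versus-simultaneous comparison described above (multisensory VS response against the algebraic sum V + S) can be sketched in a few lines. This is an illustrative implementation only; the array layout and function name are assumptions, not the authors' pipeline:

```python
import numpy as np

def additive_msi_difference(erp_v, erp_s, erp_vs):
    """Additive-model comparison for ERP multisensory integration (sketch).

    Compare the ERP to a simultaneous visual-somatosensory pair (VS)
    against the sum of the unisensory ERPs (V + S). Inputs are
    (n_trials, n_timepoints) arrays on a common time base; the output is
    the mean difference wave VS - (V + S), where a reliable departure
    from zero marks a multisensory interaction.
    """
    erp_v, erp_s, erp_vs = (np.asarray(x, float) for x in (erp_v, erp_s, erp_vs))
    return erp_vs.mean(axis=0) - (erp_v.mean(axis=0) + erp_s.mean(axis=0))
```

In practice the difference wave would be evaluated statistically across participants and timepoints (e.g., pointwise tests with correction for multiple comparisons) rather than read off directly.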
Affiliation(s)
- Jeannette R Mahoney
- The Cognitive Neurophysiology Laboratory, The Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; Division of Cognitive and Motor Aging, Department of Neurology, Albert Einstein College of Medicine, New York, NY, USA
- Sophie Molholm
- The Cognitive Neurophysiology Laboratory, The Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center, Department of Pediatrics, Albert Einstein College of Medicine and Montefiore Medical Center, New York, NY, USA; The Dominick P. Purpura Department of Neuroscience, Rose F. Kennedy Center, Albert Einstein College of Medicine, New York, NY, USA
- John S Butler
- The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center, Department of Pediatrics, Albert Einstein College of Medicine and Montefiore Medical Center, New York, NY, USA
- Pejman Sehatpour
- The Cognitive Neurophysiology Laboratory, The Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Manuel Gomez-Ramirez
- The Cognitive Neurophysiology Laboratory, The Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Walter Ritter
- The Cognitive Neurophysiology Laboratory, The Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center, Department of Pediatrics, Albert Einstein College of Medicine and Montefiore Medical Center, New York, NY, USA
- John J Foxe
- The Cognitive Neurophysiology Laboratory, The Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center, Department of Pediatrics, Albert Einstein College of Medicine and Montefiore Medical Center, New York, NY, USA; The Dominick P. Purpura Department of Neuroscience, Rose F. Kennedy Center, Albert Einstein College of Medicine, New York, NY, USA
13
Deviance detection in auditory subcortical structures: what can we learn from neurochemistry and neural connectivity? Cell Tissue Res 2015; 361:215-32. [DOI: 10.1007/s00441-015-2134-7]
14
Stitt I, Galindo-Leon E, Pieper F, Hollensteiner KJ, Engler G, Engel AK. Auditory and visual interactions between the superior and inferior colliculi in the ferret. Eur J Neurosci 2015; 41:1311-20. [PMID: 25645363 DOI: 10.1111/ejn.12847]
Abstract
The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band-limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6-10 and 15-30 Hz. These visual LFP responses co-localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume-conducted signals from the neighboring SC. Visual responses in the IC occurred later than retinally driven superficial SC layers and earlier than deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space.
Affiliation(s)
- Iain Stitt
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Edgar Galindo-Leon
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Florian Pieper
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Karl J Hollensteiner
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Gerhard Engler
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
15
Wallace MT, Stevenson RA. The construct of the multisensory temporal binding window and its dysregulation in developmental disabilities. Neuropsychologia 2014; 64:105-23. [PMID: 25128432 PMCID: PMC4326640 DOI: 10.1016/j.neuropsychologia.2014.08.005]
Abstract
Behavior, perception and cognition are strongly shaped by the synthesis of information across the different sensory modalities. Such multisensory integration often results in performance and perceptual benefits that reflect the additional information conferred by having cues from multiple senses providing redundant or complementary information. The spatial and temporal relationships of these cues provide powerful statistical information about how these cues should be integrated or "bound" in order to create a unified perceptual representation. Much recent work has examined the temporal factors that are integral in multisensory processing, with many focused on the construct of the multisensory temporal binding window - the epoch of time within which stimuli from different modalities are likely to be integrated and perceptually bound. Emerging evidence suggests that this temporal window is altered in a series of neurodevelopmental disorders, including autism, dyslexia and schizophrenia. In addition to their role in sensory processing, these deficits in multisensory temporal function may play an important role in the perceptual and cognitive weaknesses that characterize these clinical disorders. Within this context, focus on improving the acuity of multisensory temporal function may have important implications for the amelioration of the "higher-order" deficits that serve as the defining features of these disorders.
Affiliation(s)
- Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN 37232, USA; Department of Hearing & Speech Sciences, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA
- Ryan A Stevenson
- Department of Psychology, University of Toronto, Toronto, ON, Canada
16
Connolly K. Multisensory perception as an associative learning process. Front Psychol 2014; 5:1095. [PMID: 25309498 PMCID: PMC4176039 DOI: 10.3389/fpsyg.2014.01095]
Abstract
Suppose that you are at a live jazz show. The drummer begins a solo. You see the cymbal jolt and you hear the clang. But in addition to seeing the cymbal jolt and hearing the clang, you are also aware that the jolt and the clang are part of the same event. Casey O’Callaghan (forthcoming) calls this awareness “intermodal feature binding awareness.” Psychologists have long assumed that multimodal perceptions such as this one are the result of an automatic feature binding mechanism (see Pourtois et al., 2000; Vatakis and Spence, 2007; Navarra et al., 2012). I present new evidence against this. I argue that there is no automatic feature binding mechanism that couples features like the jolt and the clang together. Instead, when you experience the jolt and the clang as part of the same event, this is the result of an associative learning process. The cymbal’s jolt and the clang are best understood as a single learned perceptual unit, rather than as automatically bound. I outline the specific learning process in perception called “unitization,” whereby we come to “chunk” the world into multimodal units. Unitization has never before been applied to multimodal cases. Yet I argue that this learning process can do the same work that intermodal binding would do, and that this issue has important philosophical implications. Specifically, whether we take multimodal cases to involve a binding mechanism or an associative process will have impact on philosophical issues from Molyneux’s question to the question of how active or passive we consider perception to be.
Affiliation(s)
- Kevin Connolly
- Philosophy and Institute for Research in Cognitive Science, University of Pennsylvania, Philadelphia, PA, USA
17
Identifying and quantifying multisensory integration: a tutorial review. Brain Topogr 2014; 27:707-30. [PMID: 24722880 DOI: 10.1007/s10548-014-0365-7]
Abstract
We process information from the world through multiple senses, and the brain must decide what information belongs together and what information should be segregated. One challenge in studying such multisensory integration is how to quantify the multisensory interactions, a challenge that is amplified by the host of methods that are now used to measure neural, behavioral, and perceptual responses. Many of the measures that have been developed to quantify multisensory integration (and which have been derived from single-unit analyses) have been applied to these different measures without much consideration for the nature of the process being studied. Here, we provide a review focused on the means by which experimenters quantify multisensory processes and integration across a range of commonly used experimental methodologies. We emphasize the most commonly employed measures, including single- and multiunit responses, local field potentials, functional magnetic resonance imaging, and electroencephalography, along with behavioral measures of detection, accuracy, and response times. In each section, we will discuss the different metrics commonly used to quantify multisensory interactions, including the rationale for their use, their advantages, and the drawbacks and caveats associated with them. Also discussed are possible alternatives to the most commonly used metrics.
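Among the metrics derived from single-unit analyses, the classic interaction index is the multisensory enhancement measure of Meredith and Stein: the percent change of the combined-stimulus response relative to the best unisensory response. A minimal sketch (function name and units are assumptions, not from this review):

```python
def multisensory_enhancement(multi_rate, uni_rates):
    """Single-unit multisensory interaction index (sketch, rates in Hz assumed):

        ME = 100 * (CM - SMmax) / SMmax

    where CM is the response to the combined stimulus and SMmax is the
    largest unisensory response. Positive values indicate multisensory
    enhancement, negative values response depression.
    """
    sm_max = max(uni_rates)
    return 100.0 * (multi_rate - sm_max) / sm_max
```

For example, a combined response of 15 Hz against unisensory responses of 10 Hz and 5 Hz yields 50% enhancement.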
18
Abstract
Individuals are constantly bombarded by sensory stimuli across multiple modalities that must be integrated efficiently. Multisensory integration (MSI) is said to be governed by stimulus properties including space, time, and magnitude. While there is a paucity of research detailing MSI in aging, we have demonstrated that older adults reveal the greatest reaction time (RT) benefit when presented with simultaneous visual-somatosensory (VS) stimuli. To our knowledge, the differential RT benefit of visual and somatosensory stimuli presented within and across spatial hemifields has not been investigated in aging. Eighteen older adults (Mean = 74 years; 11 female), who were determined to be non-demented and without medical or psychiatric conditions that may affect their performance, participated in this study. Participants received eight randomly presented stimulus conditions (four unisensory and four multisensory) and were instructed to make speeded foot-pedal responses as soon as they detected any stimulation, regardless of stimulus type and location of unisensory inputs. Results from a linear mixed effect model, adjusted for speed of processing and other covariates, revealed that RTs to all multisensory pairings were significantly faster than those elicited to averaged constituent unisensory conditions (p < 0.01). Similarly, race model violation did not differ based on unisensory spatial location (p = 0.41). In summary, older adults demonstrate significant VS multisensory RT effects to stimuli both within and across spatial hemifields.
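The race-model violation test this abstract refers to can be sketched as follows. This is illustrative only (quantile probes and the function name are assumptions, not the authors' analysis code): Miller's inequality bounds the multisensory reaction-time CDF by the sum of the unisensory CDFs, and integration is inferred where the empirical multisensory CDF exceeds that bound.

```python
import numpy as np

def race_model_violation(rt_v, rt_s, rt_vs, probs=(0.05, 0.15, 0.25, 0.35)):
    """Probe Miller's race-model inequality (sketch):

        P(RT_VS <= t) <= P(RT_V <= t) + P(RT_S <= t)

    The bound is evaluated at early quantiles of the multisensory RT
    distribution, where violations typically occur. Positive returned
    values are violations, i.e. evidence of multisensory integration
    beyond statistical facilitation.
    """
    rt_v, rt_s, rt_vs = (np.sort(np.asarray(x, float)) for x in (rt_v, rt_s, rt_vs))
    t = np.quantile(rt_vs, probs)                       # probe times (ms)
    cdf = lambda rts: (rts[:, None] <= t).mean(axis=0)  # empirical CDFs at t
    bound = np.minimum(cdf(rt_v) + cdf(rt_s), 1.0)      # race-model bound
    return cdf(rt_vs) - bound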
19
Learning to associate auditory and visual stimuli: behavioral and neural mechanisms. Brain Topogr 2013; 28:479-93. [PMID: 24276220 DOI: 10.1007/s10548-013-0333-7]
Abstract
The ability to effectively combine sensory inputs across modalities is vital for acquiring a unified percept of events. For example, watching a hammer hit a nail while simultaneously identifying the sound as originating from the event requires the ability to identify spatio-temporal congruencies and statistical regularities. In this study, we applied a reaction time and hazard function measure known as capacity (e.g., Townsend and Ashby, Cognitive Theory, pp. 200-239, 1978) to quantify the extent to which observers learn paired associations between simple auditory and visual patterns in a model-theoretic manner. As expected, results showed that learning was associated with an increase in accuracy, but more significantly, an increase in capacity. The aim of this study was to associate capacity measures of multisensory learning, with neural based measures, namely mean global field power (GFP). We observed a co-variation between an increase in capacity, and a decrease in GFP amplitude as learning occurred. This suggests that capacity constitutes a reliable behavioral index of efficient energy expenditure in the neural domain.
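The Townsend-style workload-capacity measure invoked here can be estimated from reaction-time distributions roughly as follows (a sketch under assumed conventions, not the authors' implementation):

```python
import numpy as np

def capacity_coefficient(rt_a, rt_b, rt_ab, t):
    """Workload capacity coefficient (sketch):

        C(t) = H_AB(t) / (H_A(t) + H_B(t))

    where the integrated hazard H(t) = -log S(t) is estimated from the
    empirical survivor function S(t) of each reaction-time distribution.
    C(t) > 1 (supercapacity) is the signature of efficient combination,
    e.g. after audiovisual associative learning; C(t) < 1 indicates
    limited capacity.
    """
    t = np.atleast_1d(np.asarray(t, float))
    def H(rts):
        s = (np.asarray(rts, float)[:, None] > t).mean(axis=0)  # survivor fn
        return -np.log(np.clip(s, 1e-9, 1.0))                   # integrated hazard
    return H(rt_ab) / np.clip(H(rt_a) + H(rt_b), 1e-9, None)
```

With identical single-source and redundant-target RT distributions, the numerator equals each unisensory hazard, so C(t) = 0.5; learning would be read off as C(t) rising toward and beyond 1 across sessions.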
20
Olcese U, Iurilli G, Medini P. Cellular and synaptic architecture of multisensory integration in the mouse neocortex. Neuron 2013; 79:579-93. [PMID: 23850594 DOI: 10.1016/j.neuron.2013.06.010]
Abstract
Multisensory integration (MI) is crucial for sensory processing, but it is unclear how MI is organized in cortical microcircuits. Whole-cell recordings in a mouse visuotactile area located between primary visual and somatosensory cortices revealed that spike responses were less bimodal than synaptic responses but displayed larger multisensory enhancement. MI was layer and cell type specific, with multisensory enhancement being rare in the major class of inhibitory interneurons and in the output infragranular layers. Optogenetic manipulation of parvalbumin-positive interneuron activity revealed that the scarce MI of interneurons enables MI in neighboring pyramids. Finally, single-cell resolution calcium imaging revealed a gradual merging of modalities: unisensory neurons had higher densities toward the borders of the primary cortices, but were located in unimodal clusters in the middle of the cortical area. These findings reveal the role of different neuronal subcircuits in the synaptic process of MI in the rodent parietal cortex.
Affiliation(s)
- Umberto Olcese
- Department of Neuroscience and Brain Technologies, Istituto Italiano di Tecnologia, Via Morego 30, 16163 Genova, Italy
21
Stevenson RA, Wallace MT. Multisensory temporal integration: task and stimulus dependencies. Exp Brain Res 2013; 227:249-61. [PMID: 23604624 PMCID: PMC3711231 DOI: 10.1007/s00221-013-3507-3]
Abstract
The ability of human sensory systems to integrate information across the different modalities provides a wide range of behavioral and perceptual benefits. This integration process is dependent upon the temporal relationship of the different sensory signals, with stimuli occurring close together in time typically resulting in the largest behavior changes. The range of temporal intervals over which such benefits are seen is typically referred to as the temporal binding window (TBW). Given the importance of temporal factors in multisensory integration under both normal and atypical circumstances such as autism and dyslexia, the TBW has been measured with a variety of experimental protocols that differ according to criterion, task, and stimulus type, making comparisons across experiments difficult. In the current study, we attempt to elucidate the role that these various factors play in the measurement of this important construct. The results show a strong effect of stimulus type, with the TBW assessed with speech stimuli being both larger and more symmetrical than that seen using simple and complex non-speech stimuli. These effects are robust across task and statistical criteria and are highly consistent within individuals, suggesting substantial overlap in the neural and cognitive operations that govern multisensory temporal processes.
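One common way to operationalize the TBW from a simultaneity-judgment task is the span of SOAs over which the response curve stays above a criterion fraction of its peak; the abstract's point is precisely that such criterion and task choices change the estimate. A minimal sketch (criterion value, grid resolution, and function name are assumptions):

```python
import numpy as np

def binding_window_width(soas, p_sync, criterion=0.75):
    """Estimate a temporal binding window width (sketch).

    soas:   stimulus onset asynchronies in ms (audio minus visual),
            strictly increasing.
    p_sync: proportion of 'synchronous' judgments at each SOA.

    The window is the span of SOAs where the linearly interpolated curve
    stays at or above `criterion` times its peak; the returned width (ms)
    depends on that criterion, one of the analysis choices the paper
    shows the TBW estimate is sensitive to.
    """
    soas = np.asarray(soas, float)
    p = np.asarray(p_sync, float)
    thresh = criterion * p.max()
    fine = np.linspace(soas.min(), soas.max(), 2001)  # dense SOA grid
    pf = np.interp(fine, soas, p)                     # interpolated curve
    above = fine[pf >= thresh]
    return above.max() - above.min()
```

Asymmetry (e.g., wider tolerance for auditory-lagging offsets with speech) can be read off by comparing the window's extent on either side of 0 ms.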
Affiliation(s)
- Ryan A Stevenson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 7110 MRB III BioSci Bldg, 465 21st Ave South, Nashville, TN 37232, USA
22
Stevenson RA, Wilson MM, Powers AR, Wallace MT. The effects of visual training on multisensory temporal processing. Exp Brain Res 2013; 225:479-89. [PMID: 23307155 PMCID: PMC3606590 DOI: 10.1007/s00221-012-3387-y]
Abstract
The importance of multisensory integration for human behavior and perception is well documented, as is the impact that temporal synchrony has on driving such integration. Thus, the more temporally coincident two sensory inputs from different modalities are, the more likely they will be perceptually bound. This temporal integration process is captured by the construct of the temporal binding window-the range of temporal offsets within which an individual is able to perceptually bind inputs across sensory modalities. Recent work has shown that this window is malleable and can be narrowed via a multisensory perceptual feedback training process. In the current study, we seek to extend this by examining the malleability of the multisensory temporal binding window through changes in unisensory experience. Specifically, we measured the ability of visual perceptual feedback training to induce changes in the multisensory temporal binding window. Visual perceptual training with feedback successfully improved temporal visual processing, and more importantly, this visual training increased the temporal precision across modalities, which manifested as a narrowing of the multisensory temporal binding window. These results are the first to establish the ability of unisensory temporal training to modulate multisensory temporal processes, findings that can provide mechanistic insights into multisensory integration and which may have a host of practical applications.
Affiliation(s)
- Ryan A Stevenson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Medical Research Building III, Suite 7110C, Nashville, TN, USA
23
Foxworthy WA, Allman BL, Keniston LP, Meredith MA. Multisensory and unisensory neurons in ferret parietal cortex exhibit distinct functional properties. Eur J Neurosci 2012; 37:910-23. [PMID: 23279600 DOI: 10.1111/ejn.12085]
Abstract
Despite the fact that unisensory and multisensory neurons are comingled in every neural structure in which they have been identified, no systematic comparison of their response features has been conducted. Towards that goal, the present study was designed to examine and compare measures of response magnitude, latency, duration and spontaneous activity in unisensory and bimodal neurons from the ferret parietal cortex. Using multichannel single-unit recording, bimodal neurons were observed to demonstrate significantly higher response levels and spontaneous discharge rates than did their unisensory counterparts. These results suggest that, rather than merely reflect different connectional arrangements, unisensory and multisensory neurons are likely to differ at the cellular level. Thus, it can no longer be assumed that the different populations of bimodal and unisensory neurons within a neural region respond similarly to a given external stimulus.
Affiliation(s)
- W Alex Foxworthy
- Department of Anatomy and Neurobiology, Virginia Commonwealth University School of Medicine, Richmond, VA 23298, USA
24
Stevenson RA, Fister JK, Barnett ZP, Nidiffer AR, Wallace MT. Interactions between the spatial and temporal stimulus factors that influence multisensory integration in human performance. Exp Brain Res 2012; 219:121-37. [PMID: 22447249 DOI: 10.1007/s00221-012-3072-1]
Abstract
In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.
Affiliation(s)
- Ryan A Stevenson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
25
Stevenson RA, Zemtsov RK, Wallace MT. Individual differences in the multisensory temporal binding window predict susceptibility to audiovisual illusions. J Exp Psychol Hum Percept Perform 2012; 38:1517-29. [PMID: 22390292 DOI: 10.1037/a0027339]
Abstract
Human multisensory systems are known to bind inputs from the different sensory modalities into a unified percept, a process that leads to measurable behavioral benefits. This integrative process can be observed through multisensory illusions, including the McGurk effect and the sound-induced flash illusion, both of which demonstrate the ability of one sensory modality to modulate perception in a second modality. Such multisensory integration is highly dependent upon the temporal relationship of the different sensory inputs, with perceptual binding occurring within a limited range of asynchronies known as the temporal binding window (TBW). Previous studies have shown that this window is highly variable across individuals, but it is unclear how these variations in the TBW relate to an individual's ability to integrate multisensory cues. Here we provide evidence linking individual differences in multisensory temporal processes to differences in the individual's audiovisual integration of illusory stimuli. Our data provide strong evidence that the temporal processing of multiple sensory signals and the merging of multiple signals into a single, unified perception, are highly related. Specifically, the width of the right side of an individual's TBW, where the auditory stimulus follows the visual, is significantly correlated with the strength of illusory percepts, as indexed via both an increase in the strength of binding of synchronous sensory signals and an improvement in correctly dissociating asynchronous signals. These findings are discussed in terms of their possible neurobiological basis, relevance to the development of sensory integration, and possible importance for clinical conditions in which there is growing evidence that multisensory integration is compromised.
Affiliation(s)
- Ryan A Stevenson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center
26
Stevenson RA, Bushmakin M, Kim S, Wallace MT, Puce A, James TW. Inverse effectiveness and multisensory interactions in visual event-related potentials with audiovisual speech. Brain Topogr 2012; 25:308-26. [PMID: 22367585 DOI: 10.1007/s10548-012-0220-7]
Abstract
In recent years, it has become evident that neural responses previously considered to be unisensory can be modulated by sensory input from other modalities. In this regard, visual neural activity elicited to viewing a face is strongly influenced by concurrent incoming auditory information, particularly speech. Here, we applied an additive-factors paradigm aimed at quantifying the impact that auditory speech has on visual event-related potentials (ERPs) elicited to visual speech. These multisensory interactions were measured across parametrically varied stimulus salience, quantified in terms of signal to noise, to provide novel insights into the neural mechanisms of audiovisual speech perception. First, we measured a monotonic increase of the amplitude of the visual P1-N1-P2 ERP complex during a spoken-word recognition task with increases in stimulus salience. ERP component amplitudes varied directly with stimulus salience for visual, audiovisual, and summed unisensory recordings. Second, we measured changes in multisensory gain across salience levels. During audiovisual speech, the P1 and P1-N1 components exhibited less multisensory gain relative to the summed unisensory components with reduced salience, while N1-P2 amplitude exhibited greater multisensory gain as salience was reduced, consistent with the principle of inverse effectiveness. The amplitude interactions were correlated with behavioral measures of multisensory gain across salience levels as measured by response times, suggesting that change in multisensory gain associated with unisensory salience modulations reflects an increased efficiency of visual speech processing.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
27
Yuan X, Li B, Bi C, Yin H, Huang X. Audiovisual temporal recalibration: space-based versus context-based. Perception 2012; 41:1218-33. [PMID: 23469702 DOI: 10.1068/p7243]
Abstract
Recalibration of perceived simultaneity is widely accepted to minimise delays between multisensory signals owing to different physical and neural conduction times. With concurrent exposure, temporal recalibration can be anchored either contextually or spatially. Context-based recalibration has recently been described in detail, but evidence for space-based recalibration is scarce, and the competition between these two reference frames remains unclear. Here, participants watched two distinct blob-and-tone pairs that alternated laterally, one asynchronous and the other synchronous, and then judged their perceived simultaneity and temporal order as the pairs swapped positions and varied in timing. For low-level stimuli with abundant auditory location cues, space-based aftereffects were significantly more apparent (8.3%) than context-based aftereffects (4.2%); without such auditory cues, space-based aftereffects were less apparent (4.4%) and numerically smaller than context-based aftereffects (6.0%). These results suggest that stimulus level and auditory location cues are both determinants of the recalibration frame. Joint judgments and a simple reaction time task further revealed that criteria for perceived simultaneity versus successiveness shifted profoundly across adaptations without accompanying changes in perceptual latency, implying that criterion shifts, rather than perceptual latency changes, account for both space-based and context-based temporal recalibration.
Affiliation(s)
- Xiangyong Yuan
- Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing 400715, China
- Faculty of Psychology, Southwest University, Chongqing 400715, China
- Baolin Li
- Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing 400715, China
- Faculty of Psychology, Southwest University, Chongqing 400715, China
- Cuihua Bi
- Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing 400715, China
- Faculty of Psychology, Southwest University, Chongqing 400715, China
- Huazhan Yin
- School of Education, Key Lab of Applied Psychology, Chongqing Normal University, Chongqing 400715, China
- Xiting Huang
- Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing 400715, China
- Faculty of Psychology, Southwest University, Chongqing 400715, China
28
Abstract
Supramodal representation of emotion and its neural substrates have recently attracted attention as a marker of social cognition. However, the question of whether perceptual integration of facial and vocal emotions takes place in primary sensory areas, multimodal cortices, or in affective structures remains unanswered. Using novel computer-generated stimuli, we combined emotional faces and voices in congruent and incongruent ways and assessed functional brain data (fMRI) during an emotional classification task. Both congruent and incongruent audiovisual stimuli evoked larger responses in thalamus and superior temporal regions compared with unimodal conditions. Congruent emotions were characterized by activation in amygdala, insula, ventral posterior cingulate (vPCC), temporo-occipital, and auditory cortices; incongruent emotions activated a frontoparietal network and bilateral caudate nucleus, indicating a greater processing load in working memory and emotion-encoding areas. The vPCC alone exhibited differential reactions to congruency and incongruency for all emotion categories and can thus be considered a central structure for supramodal representation of complex emotional information. Moreover, the left amygdala reflected supramodal representation of happy stimuli. These findings document that emotional information does not merge at the perceptual audiovisual integration level in unimodal or multimodal areas, but in vPCC and amygdala.
29
Nozaradan S, Peretz I, Mouraux A. Steady-state evoked potentials as an index of multisensory temporal binding. Neuroimage 2011; 60:21-8. [PMID: 22155324] [DOI: 10.1016/j.neuroimage.2011.11.065]
Abstract
Temporal congruency promotes perceptual binding of multisensory inputs. Here, we used EEG frequency-tagging to track cortical activities elicited by auditory and visual inputs separately, in the form of steady-state evoked potentials (SS-EPs). We tested whether SS-EPs could reveal a dynamic coupling of cortical activities related to the binding of auditory and visual inputs conveying synchronous vs. non-synchronous temporal periodicities, or beats. The temporally congruent audiovisual condition elicited markedly enhanced auditory and visual SS-EPs, as compared to the incongruent condition. Furthermore, an increased inter-trial phase coherence of both SS-EPs was observed in that condition. Taken together, these observations indicate that temporal congruency enhances the processing of multisensory inputs at sensory-specific stages of cortical processing, possibly through a dynamic binding by synchrony of the elicited activities and/or improved dynamic attending. Moreover, we show that EEG frequency-tagging with SS-EPs constitutes an effective tool to explore the neural dynamics of multisensory integration in the human brain.
Affiliation(s)
- Sylvie Nozaradan
- Institute of Neuroscience (IoNS), Université catholique de Louvain (UCL), Belgium
30
Marchant JL, Ruff CC, Driver J. Audiovisual synchrony enhances BOLD responses in a brain network including multisensory STS while also enhancing target-detection performance for both modalities. Hum Brain Mapp 2011; 33:1212-24. [PMID: 21953980] [PMCID: PMC3498728] [DOI: 10.1002/hbm.21278]
Abstract
The brain seeks to combine related inputs from different senses (e.g., hearing and vision), via multisensory integration. Temporal information can indicate whether stimuli in different senses are related or not. A recent human fMRI study (Noesselt et al. [2007]: J Neurosci 27:11431-11441) used auditory and visual trains of beeps and flashes with erratic timing, manipulating whether auditory and visual trains were synchronous or unrelated in temporal pattern. A region of superior temporal sulcus (STS) showed higher BOLD signal for the synchronous condition. But this could not be related to performance, and it remained unclear if the erratic, unpredictable nature of the stimulus trains was important. Here we compared synchronous audiovisual trains to asynchronous trains, while using a behavioral task requiring detection of higher-intensity target events in either modality. We further varied whether the stimulus trains had predictable temporal pattern or not. Synchrony (versus lag) between auditory and visual trains enhanced behavioral sensitivity (d') to intensity targets in either modality, regardless of predictable versus unpredictable patterning. The analogous contrast in fMRI revealed BOLD increases in several brain areas, including the left STS region reported by Noesselt et al. [2007: J Neurosci 27:11431-11441]. The synchrony effect on BOLD here correlated with the subject-by-subject impact on performance. Predictability of temporal pattern did not affect target detection performance or STS activity, but did lead to an interaction with audiovisual synchrony for BOLD in inferior parietal cortex.
Affiliation(s)
- Jennifer L Marchant
- Wellcome Trust Centre for Neuroimaging at UCL, Institute of Neurology, University College London, London, WC1N 3BG, United Kingdom.
31
Mahoney JR, Li PCC, Oh-Park M, Verghese J, Holtzer R. Multisensory integration across the senses in young and old adults. Brain Res 2011; 1426:43-53. [PMID: 22024545] [DOI: 10.1016/j.brainres.2011.09.017]
Abstract
Stimuli are processed concurrently across multiple sensory inputs. Here we directly compared the effect of multisensory integration (MSI) on reaction time across three paired sensory inputs in eighteen young (M=19.17 years) and eighteen old (M=76.44 years) individuals. Participants were determined to be non-demented and without any medical or psychiatric conditions that would affect their performance. Participants responded to randomly presented unisensory (auditory, visual, somatosensory) stimuli and three paired sensory inputs consisting of auditory-somatosensory (AS), auditory-visual (AV), and visual-somatosensory (VS) stimuli. Results revealed that reaction time (RT) to all multisensory pairings was significantly faster than that elicited by the constituent unisensory conditions across age groups; findings that could not be accounted for by simple probability summation. Both young and old participants responded fastest to multisensory pairings containing somatosensory input. Compared to younger adults, older adults demonstrated a significantly greater RT benefit when processing concurrent VS information. In terms of co-activation, older adults demonstrated a significant increase in the magnitude of visual-somatosensory co-activation (i.e., multisensory integration), while younger adults demonstrated a significant increase in the magnitude of auditory-visual and auditory-somatosensory co-activation. This study provides the first evidence in support of the facilitative effect of pairing somatosensory with visual stimuli in older adults.
Affiliation(s)
- Jeannette R Mahoney
- Ferkauf Graduate School of Psychology, Albert Einstein College of Medicine, Bronx, NY, USA
32
Sarko D, Nidiffer A, Powers A III, Ghose D, Hillock-Dunn R, Fister M, Krueger J, Wallace M. Spatial and Temporal Features of Multisensory Processes. Front Neurosci 2011. [DOI: 10.1201/9781439812174-15]
33
Sarko D, Nidiffer A, Powers A III, Ghose D, Hillock-Dunn R, Fister M, Krueger J, Wallace M. Spatial and Temporal Features of Multisensory Processes. Front Neurosci 2011. [DOI: 10.1201/b11092-15]
34
Franciotti R, Brancucci A, Della Penna S, Onofrj M, Tommasi L. Neuromagnetic responses reveal the cortical timing of audiovisual synchrony. Neuroscience 2011; 193:182-92. [PMID: 21787844] [DOI: 10.1016/j.neuroscience.2011.07.018]
Abstract
Multisensory processing involving visual and auditory inputs is modulated by their relative temporal offsets. In order to assess whether multisensory integration alters the activation timing of primary visual and auditory cortices as a function of the temporal offsets between auditory and visual stimuli, a task was designed in which subjects had to judge the perceptual simultaneity of the onset of visual stimuli and brief acoustic tones. These were presented repeatedly with three different inter-stimulus intervals that were chosen to meet three perceptual conditions: (1) physical synchrony perceived as synchrony by subjects (SYNC); (2) physical asynchrony perceived as asynchrony (ASYNC); (3) physical asynchrony perceived ambiguously (AMB, i.e. 50% perceived as synchrony, 50% as asynchrony). Magnetoencephalographic activity was recorded during crossmodal sessions and unimodal control sessions. The activation of primary visual and auditory cortices peaked at a longer latency for the crossmodal conditions as compared to the unimodal conditions. Moreover, the latency in the auditory cortex was longer in the SYNC than in the ASYNC condition, whereas in the visual cortex the latency in the AMB condition was longer than in the ASYNC condition. These findings suggest that multisensory processing affects temporal dynamics as early as primary cortices, that such activity can differ regionally, and that it can be sensitive to the temporal offsets of multisensory inputs. In addition, in the AMB condition the conscious awareness of asynchrony might be associated with a later activation of the primary auditory cortex.
Affiliation(s)
- R Franciotti
- Department of Neuroscience and Imaging, G. d'Annunzio University, Chieti, Italy.
35
Banks MI, Uhlrich DJ, Smith PH, Krause BM, Manning KA. Descending projections from extrastriate visual cortex modulate responses of cells in primary auditory cortex. Cereb Cortex 2011; 21:2620-38. [PMID: 21471557] [DOI: 10.1093/cercor/bhr048]
Abstract
Primary sensory cortical responses are modulated by the presence or expectation of related sensory information in other modalities, but the sources of multimodal information and the cellular locus of this integration are unclear. We investigated the modulation of neural responses in the murine primary auditory cortical area Au1 by extrastriate visual cortex (V2). Projections from V2 to Au1 terminated in a classical descending/modulatory pattern, with highest density in layers 1, 2, 5, and 6. In brain slices, whole-cell recordings revealed long latency responses to stimulation in V2L that could modulate responses to subsequent white matter (WM) stimuli at latencies of 5-20 ms. Calcium responses imaged in Au1 cell populations showed that preceding WM with V2L stimulation modulated WM responses, with both summation and suppression observed. Modulation of WM responses was most evident for near-threshold WM stimuli. These data indicate that corticocortical projections from V2 contribute to multimodal integration in primary auditory cortex.
Affiliation(s)
- Matthew I Banks
- Department of Anesthesiology, University of Wisconsin, Madison, WI 53706, USA.
36
Royal DW, Krueger J, Fister MC, Wallace MT. Adult plasticity of spatiotemporal receptive fields of multisensory superior colliculus neurons following early visual deprivation. Restor Neurol Neurosci 2010; 28:259-70. [PMID: 20404413] [DOI: 10.3233/rnn-2010-0488]
Abstract
PURPOSE: Previous work has established that the integrative capacity of multisensory neurons in the superior colliculus (SC) matures over a protracted period of postnatal life (Wallace and Stein, 1997), and that the development of normal patterns of multisensory integration depends critically on early sensory experience (Wallace et al., 2004). Although these studies demonstrated the importance of early sensory experience in the creation of mature multisensory circuits, it remains unknown whether the reestablishment of sensory experience in adulthood can reverse these effects and restore integrative capacity.
METHODS: The current study tested this hypothesis in cats that were reared in absolute darkness until adulthood and then returned to a normal housing environment for an equivalent period of time. Single unit extracellular recordings targeted multisensory neurons in the deep layers of the SC, and analyses were focused on both conventional measures of multisensory integration and on more recently developed methods designed to characterize spatiotemporal receptive fields (STRF).
RESULTS: Analysis of the STRF structure and integrative capacity of multisensory SC neurons revealed significant modifications in the temporal response dynamics of multisensory responses (e.g., discharge durations, peak firing rates, and mean firing rates), as well as significant changes in rates of spontaneous activation and degrees of multisensory integration.
CONCLUSIONS: These results emphasize the importance of early sensory experience in the establishment of normal multisensory processing architecture and highlight the limited plastic potential of adult multisensory circuits.
Affiliation(s)
- David W Royal
- Kennedy Center for Research on Human Development, Nashville, Tennessee 37232, USA.
37
Froyen D, van Atteveldt N, Blomert L. Exploring the Role of Low Level Visual Processing in Letter-Speech Sound Integration: A Visual MMN Study. Front Integr Neurosci 2010; 4:9. [PMID: 20428501] [PMCID: PMC2859813] [DOI: 10.3389/fnint.2010.00009]
Abstract
In contrast with, for example, audiovisual speech, the relation between the visual and auditory properties of letters and speech sounds is artificial and learned only by explicit instruction. The arbitrariness of the audiovisual link, together with the widespread usage of letter-speech sound pairs in alphabetic languages, makes these audiovisual objects a unique subject for crossmodal research. Brain imaging evidence has indicated that heteromodal areas in superior temporal cortex, as well as modality-specific auditory cortex, are involved in letter-speech sound processing. The role of low level visual areas, however, remains unclear. In this study the visual counterpart of the auditory mismatch negativity (MMN) is used to investigate the influence of speech sounds on letter processing. Letter and non-letter deviants were infrequently presented in a train of standard letters, either in isolation or simultaneously with speech sounds. Although previous findings showed that letters systematically modulate speech sound processing (reflected by auditory MMN amplitude modulation), the reverse does not seem to hold: our results did not show evidence for an automatic influence of speech sounds on letter processing (no visual MMN amplitude modulation). This apparent asymmetric recruitment of low level sensory cortices during letter-speech sound processing contrasts with the symmetric involvement of these cortices in audiovisual speech processing, and is possibly due to the arbitrary nature of the link between letters and speech sounds.
Affiliation(s)
- Dries Froyen
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience & Maastricht Brain Imaging Institute, Maastricht University, Maastricht, Netherlands
- Nienke van Atteveldt
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience & Maastricht Brain Imaging Institute, Maastricht University, Maastricht, Netherlands
- Division of Child and Adolescent Psychiatry, Columbia University College of Physicians and Surgeons, New York, NY, USA
- Leo Blomert
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience & Maastricht Brain Imaging Institute, Maastricht University, Maastricht, Netherlands
38
Cuppini C, Ursino M, Magosso E, Rowland BA, Stein BE. An emergent model of multisensory integration in superior colliculus neurons. Front Integr Neurosci 2010; 4:6. [PMID: 20431725] [PMCID: PMC2861478] [DOI: 10.3389/fnint.2010.00006]
Abstract
Neurons in the cat superior colliculus (SC) integrate information from different senses to enhance their responses to cross-modal stimuli. These multisensory SC neurons receive multiple converging unisensory inputs from many sources; those received from association cortex are critical for the manifestation of multisensory integration. The mechanisms underlying this characteristic property of SC neurons are not completely understood, but can be clarified with the use of mathematical models and computer simulations. Thus the objective of the current effort was to present a plausible model that can explain the main physiological features of multisensory integration based on the current neurological literature regarding the influences received by SC from cortical and subcortical sources. The model assumes the presence of competitive mechanisms between inputs, nonlinearities in NMDA receptor responses, and provides a priori synaptic weights to mimic the normal responses of SC neurons. As a result, it provides a basis for understanding the dependence of multisensory enhancement on an intact association cortex, and simulates the changes in the SC response that occur during NMDA receptor blockade. Finally, it makes testable predictions about why significant response differences are obtained in multisensory SC neurons when they are confronted with pairs of cross-modal and within-modal stimuli. By postulating plausible biological mechanisms to complement those that are already known, the model provides a basis for understanding how SC neurons are capable of engaging in this remarkable process.
Affiliation(s)
- Cristiano Cuppini
- Department of Electronics, Computer Science and Systems, University of Bologna, Bologna, Italy
39
Budinger E, Scheich H. Anatomical connections suitable for the direct processing of neuronal information of different modalities via the rodent primary auditory cortex. Hear Res 2009; 258:16-27. [DOI: 10.1016/j.heares.2009.04.021]
40
Krueger J, Royal DW, Fister MC, Wallace MT. Spatial receptive field organization of multisensory neurons and its impact on multisensory interactions. Hear Res 2009; 258:47-54. [PMID: 19698773] [DOI: 10.1016/j.heares.2009.08.003]
Abstract
Previous work has established that the spatial receptive fields (SRFs) of multisensory neurons in the cerebral cortex are strikingly heterogeneous, and that SRF architecture plays an important deterministic role in sensory responsiveness and multisensory integrative capacities. The initial part of this contribution serves to review these findings detailing the key features of SRF organization in cortical multisensory populations by highlighting work from the cat anterior ectosylvian sulcus (AES). In addition, we have recently conducted parallel studies designed to examine SRF architecture in the classic model for multisensory studies, the cat superior colliculus (SC), and we present some of the preliminary observations from the SC here. An examination of individual SC neurons revealed marked similarities between their unisensory (i.e., visual and auditory) SRFs, as well as between these unisensory SRFs and the multisensory SRF. Despite these similarities within individual neurons, different SC neurons had SRFs that ranged from a single area of greatest activation (hot spot) to multiple and spatially discrete hot spots. Similar to cortical multisensory neurons, the interactive profile of SC neurons was correlated strongly to SRF architecture, closely following the principle of inverse effectiveness. Thus, large and often superadditive multisensory response enhancements were typically seen at SRF locations where visual and auditory stimuli were weakly effective. Conversely, subadditive interactions were seen at SRF locations where stimuli were highly effective. Despite the unique functions characteristic of cortical and subcortical multisensory circuits, our results suggest a strong mechanistic interrelationship between SRF microarchitecture and integrative capacity.
Affiliation(s)
- Juliane Krueger
- Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN 37232, USA.
41
Multisensory integration in the superior colliculus requires synergy among corticocollicular inputs. J Neurosci 2009; 29:6580-92. [PMID: 19458228] [DOI: 10.1523/jneurosci.0525-09.2009]
Abstract
Influences from the visual (AEV), auditory (FAES), and somatosensory (SIV) divisions of the cat anterior ectosylvian sulcus (AES) play a critical role in rendering superior colliculus (SC) neurons capable of multisensory integration. However, it is not known whether this is accomplished via their independent sensory-specific action or via some cross-modal cooperative action that emerges as a consequence of their convergence on SC neurons. Using visual-auditory SC neurons as a model, we examined how selective and combined deactivation of FAES and AEV affected SC multisensory (visual-auditory) and unisensory (visual-visual) integration capabilities. As noted earlier, multisensory integration yielded SC responses that were significantly greater than those evoked by the most effective individual component stimulus. This multisensory "response enhancement" was more evident when the component stimuli were weakly effective. Conversely, unisensory integration was dominated by the lack of response enhancement. During cryogenic deactivation of FAES and/or AEV, the unisensory responses of SC neurons were only modestly affected; however, their multisensory response enhancement showed a significant downward shift and was eliminated. The shift was similar in magnitude for deactivation of either AES subregion and, in general, only marginally greater when both were deactivated simultaneously. These data reveal that SC multisensory integration is dependent on the cooperative action of distinct subsets of unisensory corticofugal afferents, afferents whose sensory combination matches the multisensory profile of their midbrain target neurons, and whose functional synergy is specific to rendering SC neurons capable of synthesizing information from those particular senses.
42
Trojan J, Getzmann S, Möller J, Kleinböhl D, Hölzl R. Tactile-auditory saltation: spatiotemporal integration across sensory modalities. Neurosci Lett 2009; 460:156-60. [PMID: 19477228] [DOI: 10.1016/j.neulet.2009.05.053]
Abstract
The perceptual phenomenon of sensory saltation involves the systematic displacement of a target stimulus (the attractee) towards a subsequent stimulus (the attractant) that occurs closely in time and space. Here, we demonstrate the existence of cross-modal tactile-auditory saltation. Tactile stimuli were delivered to the forehead, and spatially congruent stereophonic auditory stimuli were presented via headphones, to a total of 20 participants. After a reference stimulus at one of five spatial positions, the attractee was presented at a fixed position, followed by the attractant at a different fixed position with a delay of 81, 121, or 181 ms. Participants rated whether the attractee was perceived to the left or right of the reference in 2 uni-modal and 2 cross-modal (different reference/attractee vs. attractant mode) configurations. Saltation was present in all uni- and cross-modal configurations at an attractee-attractant delay of 81 ms. Overall displacements were stronger at delays of 81 ms than at delays of 121 ms, and tactile attractants generally induced stronger displacements than auditory attractants. The results indicate the existence of cross-modal tactile-auditory saltation, suggesting that the saltation phenomenon could serve as a powerful approach for examining multi-modal sensory representations in future studies.
Affiliation(s)
- Jörg Trojan
- Otto Selz Institute for Applied Psychology - Mannheim Centre for Work and Health, University of Mannheim, 68131 Mannheim, Germany.
43
Polley DB, Hillock AR, Spankovich C, Popescu MV, Royal DW, Wallace MT. Development and plasticity of intra- and intersensory information processing. J Am Acad Audiol 2009; 19:780-98. [PMID: 19358458] [DOI: 10.3766/jaaa.19.10.6]
Abstract
The functional architecture of sensory brain regions reflects an ingenious biological solution to the competing demands of a continually changing sensory environment. While they are malleable, they have the constancy necessary to support a stable sensory percept. How does the functional organization of sensory brain regions contend with these antithetical demands? Here we describe the functional organization of auditory and multisensory (i.e., auditory-visual) information processing in three sensory brain structures: (1) a low-level unisensory cortical region, the primary auditory cortex (A1); (2) a higher-order multisensory cortical region, the anterior ectosylvian sulcus (AES); and (3) a multisensory subcortical structure, the superior colliculus (SC). We then present a body of work that characterizes the ontogenic expression of experience-dependent influences on the operations performed by the functional circuits contained within these regions. We will present data to support the hypothesis that the competing demands for plasticity and stability are addressed through a developmental transition in operational properties of functional circuits from an initially labile mode in the early stages of postnatal development to a more stable mode in the mature brain that retains the capacity for plasticity under specific experiential conditions. Finally, we discuss parallels between the central tenets of functional organization and plasticity of sensory brain structures drawn from animal studies and a growing literature on human brain plasticity and the potential applicability of these principles to the audiology clinic.
Affiliation(s)
- Daniel B Polley
- Vanderbilt Bill Wilkerson Center for Otolaryngology and Communication Sciences, Department of Hearing and Speech Sciences, Vanderbilt Kennedy Center for Human Development, Vanderbilt University Medical School, USA.
44
Meftah EM, Bourgeon S, Chapman CE. Instructed Delay Discharge in Primary and Secondary Somatosensory Cortex Within the Context of a Selective Attention Task. J Neurophysiol 2009; 101:2649-67. [DOI: 10.1152/jn.91121.2008]
Abstract
The neuronal mechanisms that contribute to tactile perception were studied using single-unit recordings from the cutaneous hand representation of primate primary (S1) and secondary (S2) somatosensory cortex. This study followed up on our recent observation that S1 and S2 neurons developed a sustained change in discharge during the instruction period of a directed-attention task. We determined the extent to which the symbolic light cues, which signaled the modality (tactile, visual) to attend and discriminate, elicited changes in discharge rate during the instructed delay (ID) period of the attention task and the functional importance of this discharge. ID responses, consisting of a sustained increase or decrease in discharge during the 2-s instruction period, were present in about 40% of the neurons in S1 and S2. ID responses in both cortical regions were very similar in most respects (frequency, sign, latency, amplitude), suggesting a common source. A major difference, however, was related to attentional modulation during the ID period: attentional influences were almost entirely restricted to S2 and these effects were always superimposed on the ID response (additive effect). These findings suggest that the underlying mechanisms for ID discharge and attention are independent. ID discharge significantly modified the initial response to the standard stimuli (competing texture and visual stimuli), usually enhancing responsiveness. We also showed that tactile detection in humans is enhanced during the ID period. Together, the results suggest that ID discharge represents a priming mechanism that prepares cortical areas to receive and process sensory inputs.
|
45
|
Royal DW, Carriere BN, Wallace MT. Spatiotemporal architecture of cortical receptive fields and its impact on multisensory interactions. Exp Brain Res 2009; 198:127-36. [PMID: 19308362 DOI: 10.1007/s00221-009-1772-y] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2008] [Accepted: 03/05/2009] [Indexed: 11/29/2022]
Abstract
Recent electrophysiology studies have suggested that neuronal responses to multisensory stimuli may possess a unique temporal signature. To evaluate this temporal dynamism, unisensory and multisensory spatiotemporal receptive fields (STRFs) of neurons in the cortex of the cat anterior ectosylvian sulcus were constructed. Analyses revealed that the multisensory STRFs of these neurons differed significantly from the component unisensory STRFs and their linear summation. Most notably, multisensory responses were found to have higher peak firing rates, shorter response latencies, and longer discharge durations. More importantly, multisensory STRFs were characterized by two distinct temporal phases of enhanced integration that reflected the shorter response latencies and longer discharge durations. These findings further our understanding of the temporal architecture of cortical multisensory processing, and thus provide important insights into the possible functional role(s) played by multisensory cortex in spatially directed perceptual processes.
Affiliation(s)
- David W Royal
- Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN 37232, USA.
|
46
|
Stein BE, Stanford TR. Multisensory integration: current issues from the perspective of the single neuron. Nat Rev Neurosci 2008; 9:255-66. [PMID: 18354398 DOI: 10.1038/nrn2331] [Citation(s) in RCA: 899] [Impact Index Per Article: 56.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
For thousands of years science philosophers have been impressed by how effectively the senses work together to enhance the salience of biologically meaningful events. However, they really had no idea how this was accomplished. Recent insights into the underlying physiological mechanisms reveal that, in at least one circuit, this ability depends on an intimate dialogue among neurons at multiple levels of the neuraxis; this dialogue cannot take place until long after birth and might require a specific kind of experience. Understanding the acquisition and usage of multisensory integration in the midbrain and cerebral cortex of mammals has been aided by a multiplicity of approaches. Here we examine some of the fundamental advances that have been made and some of the challenging questions that remain.
Affiliation(s)
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina 27157, USA.
|
47
|
Carriere BN, Royal DW, Wallace MT. Spatial heterogeneity of cortical receptive fields and its impact on multisensory interactions. J Neurophysiol 2008; 99:2357-68. [PMID: 18287544 DOI: 10.1152/jn.01386.2007] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Investigations of multisensory processing at the level of the single neuron have illustrated the importance of the spatial and temporal relationship of the paired stimuli and their relative effectiveness in determining the product of the resultant interaction. Although these principles provide a good first-order description of the interactive process, they were derived by treating space, time, and effectiveness as independent factors. In the anterior ectosylvian sulcus (AES) of the cat, previous work hinted that the spatial receptive field (SRF) architecture of multisensory neurons might play an important role in multisensory processing due to differences in the vigor of responses to identical stimuli placed at different locations within the SRF. In this study the impact of SRF architecture on cortical multisensory processing was investigated using semichronic single-unit electrophysiological experiments targeting a multisensory domain of the cat AES. The visual and auditory SRFs of AES multisensory neurons exhibited striking response heterogeneity, with SRF architecture appearing to play a major role in the multisensory interactions. The deterministic role of SRF architecture was tightly coupled to the manner in which stimulus location modulated the responsiveness of the neuron. Thus multisensory stimulus combinations at weakly effective locations within the SRF resulted in large (often superadditive) response enhancements, whereas combinations at more effective spatial locations resulted in smaller (additive/subadditive) interactions. These results provide important insights into the spatial organization and processing capabilities of cortical multisensory neurons, features that may provide important clues as to the functional roles played by this area in spatially directed perceptual processes.
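The superadditive/additive/subadditive labels used in this abstract follow the standard comparison of a combined-modality response against the sum of the unisensory responses, and multisensory enhancement is conventionally indexed against the best unisensory response. The sketch below illustrates this classification with hypothetical firing rates (spikes/s); the numbers and the `tolerance` parameter are illustrative assumptions, not data or thresholds from the study.

```python
def interaction_type(visual, auditory, combined, tolerance=0.05):
    """Label a multisensory response relative to the unisensory sum.

    A tolerance band around strict additivity avoids labeling tiny
    deviations as super- or subadditive.
    """
    unisensory_sum = visual + auditory
    if combined > unisensory_sum * (1 + tolerance):
        return "superadditive"
    if combined < unisensory_sum * (1 - tolerance):
        return "subadditive"
    return "additive"


def enhancement_index(visual, auditory, combined):
    """Percent multisensory enhancement relative to the best unisensory response."""
    best = max(visual, auditory)
    return 100.0 * (combined - best) / best


# Weakly effective stimulus location: large, superadditive enhancement.
print(interaction_type(2.0, 3.0, 9.0))    # superadditive
print(enhancement_index(2.0, 3.0, 9.0))   # → 200.0

# Strongly effective stimulus location: smaller, roughly additive interaction.
print(interaction_type(10.0, 12.0, 22.0))  # additive
```

This mirrors the pattern reported in the abstract: weakly effective locations within the SRF yield large (often superadditive) enhancements, while more effective locations yield additive or subadditive interactions.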
Affiliation(s)
- Brian N Carriere
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC 27157-1010, USA.
|
48
|
Kayser C, Logothetis NK. Do early sensory cortices integrate cross-modal information? Brain Struct Funct 2007; 212:121-32. [PMID: 17717687 DOI: 10.1007/s00429-007-0154-0] [Citation(s) in RCA: 188] [Impact Index Per Article: 11.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2007] [Accepted: 07/14/2007] [Indexed: 10/23/2022]
Abstract
Our different senses provide complementary evidence about the environment, and their interaction often aids behavioral performance or alters the quality of the sensory percept. A traditional view defers the merging of sensory information to higher association cortices, and posits that a large part of the brain can be reduced to a collection of unisensory systems that can be studied in isolation. Recent studies, however, challenge this view and suggest that cross-modal interactions can already occur in areas hitherto regarded as unisensory. We review results from functional imaging and electrophysiology exemplifying cross-modal interactions that occur early during the evoked response, and at the earliest stages of sensory cortical processing. Although anatomical studies revealed several potential origins of these cross-modal influences, there is as yet no clear relation between particular functional observations and specific anatomical connections. In addition, our view of sensory integration at the neuronal level has been shaped by many studies on subcortical model systems of sensory integration; yet the patterns of cross-modal interaction in cortex deviate from these model systems in several ways. Consequently, future studies on cortical sensory integration need to move beyond the descriptive level and incorporate cross-modal influences into models of the organization of sensory processing. Only then will we be able to determine whether early cross-modal interactions truly merit the label sensory integration, and how they increase a sensory system's ability to scrutinize its environment and finally aid behavior.
Affiliation(s)
- Christoph Kayser
- Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany.
|
49
|
Ethofer T, Pourtois G, Wildgruber D. Investigating audiovisual integration of emotional signals in the human brain. PROGRESS IN BRAIN RESEARCH 2006; 156:345-61. [PMID: 17015090 DOI: 10.1016/s0079-6123(06)56019-4] [Citation(s) in RCA: 77] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Humans can communicate their emotional state via facial expression and affective prosody. This chapter reviews behavioural, neuroanatomical, electrophysiological and neuroimaging studies pertaining to audiovisual integration of emotional communicative signals. Particular emphasis is given to neuroimaging studies using positron emission tomography (PET) or functional magnetic resonance imaging (fMRI). Conjunction analyses, interaction analyses, correlation analyses between haemodynamic responses and behavioural effects, and connectivity analyses have been employed to analyse neuroimaging data. There is no general agreement as to which of these approaches can be considered "optimal" for classifying brain regions as multisensory. We argue that these approaches provide complementary information, as they assess different aspects of multisensory integration of emotional information. Assets and drawbacks of the different analysis types are discussed and demonstrated on the basis of one fMRI data set.
Affiliation(s)
- Thomas Ethofer
- Section of Experimental MR of the CNS, Department of Neuroradiology, Otfried-Müller-Str. 51, University of Tübingen, 72076 Tübingen, Germany.
|
50
|
Murray MM, Molholm S, Michel CM, Heslenfeld DJ, Ritter W, Javitt DC, Schroeder CE, Foxe JJ. Grabbing Your Ear: Rapid Auditory–Somatosensory Multisensory Interactions in Low-level Sensory Cortices Are Not Constrained by Stimulus Alignment. Cereb Cortex 2004; 15:963-74. [PMID: 15537674 DOI: 10.1093/cercor/bhh197] [Citation(s) in RCA: 303] [Impact Index Per Article: 15.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Multisensory interactions are observed in species from single-cell organisms to humans. Important early work was primarily carried out in the cat superior colliculus, and a set of critical parameters for their occurrence was defined. Primary among these were temporal synchrony and spatial alignment of bisensory inputs. Here, we assessed whether spatial alignment was also a critical parameter for the temporally earliest multisensory interactions that are observed in lower-level sensory cortices of the human. While multisensory interactions in humans have been shown behaviorally for spatially disparate stimuli (e.g. the ventriloquist effect), it is not clear whether such effects are due to early sensory-level integration or later perceptual-level processing. In the present study, we used psychophysical and electrophysiological indices to show that auditory-somatosensory interactions in humans occur via the same early sensory mechanism both when stimuli are in and out of spatial register. Subjects detected multisensory events more rapidly than unisensory events. At just 50 ms post-stimulus, neural responses to the multisensory 'whole' were greater than the summed responses from the constituent unisensory 'parts'. For all spatial configurations, this effect followed from a modulation of the strength of brain responses, rather than the activation of regions specifically responsive to multisensory pairs. Using the local auto-regressive average source estimation, we localized the initial auditory-somatosensory interactions to auditory association areas contralateral to the side of somatosensory stimulation. Thus, multisensory interactions can occur across wide peripersonal spatial separations remarkably early in sensory processing and in cortical regions traditionally considered unisensory.
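Behavioral facilitation of the kind described here (faster detection of multisensory than unisensory events) is commonly tested against Miller's race-model inequality, which bounds the redundant-target reaction-time CDF by the sum of the unisensory CDFs. The sketch below is a minimal illustration of that general analysis with hypothetical reaction times (ms); it is not the study's actual procedure or data.

```python
def ecdf(sample, t):
    """Empirical probability of a response having occurred by time t."""
    return sum(rt <= t for rt in sample) / len(sample)


def race_model_violated(rt_uni1, rt_uni2, rt_multi, times):
    """True if the multisensory RT distribution exceeds the race-model
    bound min(1, F1(t) + F2(t)) at any probed time point, i.e. the
    speed-up cannot be explained by probability summation alone."""
    for t in times:
        bound = min(1.0, ecdf(rt_uni1, t) + ecdf(rt_uni2, t))
        if ecdf(rt_multi, t) > bound:
            return True
    return False


# Hypothetical reaction times for illustration only.
auditory = [260, 280, 300, 320, 340]
somatosensory = [270, 290, 310, 330, 350]
bimodal = [200, 210, 220, 240, 260]  # markedly faster than either alone

print(race_model_violated(auditory, somatosensory, bimodal,
                          range(180, 360, 10)))  # True
```

A violation of the bound at early time points is the usual behavioral signature taken to indicate genuine integration rather than independent parallel processing of the two modalities.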
Affiliation(s)
- Micah M Murray
- The Cognitive Neurophysiology Lab, Nathan S. Kline Institute for Psychiatric Research, Program in Cognitive Neuroscience and Schizophrenia, 140 Old Orangeburg Road, Orangeburg, NY 10962, USA
|