1
Böing S, Van der Stigchel S, Van der Stoep N. The impact of acute asymmetric hearing loss on multisensory integration. Eur J Neurosci 2024; 59:2373-2390. [PMID: 38303554] [DOI: 10.1111/ejn.16263]
Abstract
Humans have the remarkable ability to integrate information from different senses, which greatly facilitates the detection, localization and identification of events in the environment. About 466 million people worldwide suffer from hearing loss. Yet, the impact of hearing loss on how the senses work together is rarely investigated. Here, we investigate how a common sensory impairment, asymmetric conductive hearing loss (AHL), alters the way our senses interact by examining human orienting behaviour with normal hearing (NH) and acute AHL. This type of hearing loss disrupts auditory localization. We hypothesized that this creates a conflict between auditory and visual spatial estimates and alters how auditory and visual inputs are integrated to facilitate multisensory spatial perception. We analysed the spatial and temporal properties of saccades to auditory, visual and audiovisual stimuli before and after plugging the right ear of participants. Both spatial and temporal aspects of multisensory integration were affected by AHL. Compared with NH, AHL caused participants to make slow, inaccurate and imprecise saccades towards auditory targets. Surprisingly, increased weight on visual input resulted in accurate audiovisual localization with AHL. This came at a cost: saccade latencies for audiovisual targets increased significantly. The larger the auditory localization errors, the less participants were able to benefit from audiovisual integration in terms of saccade latency. Our results indicate that observers immediately change sensory weights to effectively deal with acute AHL and preserve audiovisual accuracy in a way that cannot be fully explained by statistical models of optimal cue integration.
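The "statistical models of optimal cue integration" referenced here typically mean reliability-weighted (maximum-likelihood) averaging of the unisensory spatial estimates. A minimal sketch of that benchmark follows; it is not the authors' analysis, and all values (localization SDs, target position, auditory bias) are hypothetical.

```python
import numpy as np

# Hypothetical unisensory localization noise (deg); auditory noise inflated to
# mimic degraded localization under asymmetric hearing loss
sigma_a = 12.0   # auditory localization SD
sigma_v = 2.0    # visual localization SD

# Reliability (inverse-variance) weights of the standard MLE integration model
w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
w_v = 1 - w_a

# Predicted audiovisual estimate for a target at 10 deg with a 6-deg auditory bias
x_a, x_v = 16.0, 10.0                      # hypothetical unisensory estimates (deg)
x_av = w_a * x_a + w_v * x_v               # reliability-weighted fused estimate
sigma_av = np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))

print(f"weights: auditory = {w_a:.2f}, visual = {w_v:.2f}")
print(f"predicted AV estimate: {x_av:.1f} deg, predicted AV SD: {sigma_av:.2f} deg")
```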
Affiliation(s)
- Sanne Böing
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Stefan Van der Stigchel
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Nathan Van der Stoep
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
2
Marusic U, Mahoney JR. Editorial: The intersection of cognitive, motor, and sensory processing in aging: links to functional outcomes, volume II. Front Aging Neurosci 2024; 15:1340547. [PMID: 38239490] [PMCID: PMC10794332] [DOI: 10.3389/fnagi.2023.1340547]
Affiliation(s)
- Uros Marusic
- Institute for Kinesiology Research, Science and Research Centre Koper, Koper, Slovenia
- Department of Health Sciences, Alma Mater Europaea - ECM, Maribor, Slovenia
- Jeannette R. Mahoney
- Division of Cognitive and Motor Aging, Albert Einstein College of Medicine, Bronx, NY, United States
3
Dixon SC, Calder BJ, Lilya SM, Davies BM, Martin A, Peterson M, Hansen JM, Suli A. Valproic acid affects neurogenesis during early optic tectum development in zebrafish. Biol Open 2023; 12:286129. [PMID: 36537579] [PMCID: PMC9916031] [DOI: 10.1242/bio.059567]
Abstract
The mammalian superior colliculus and its non-mammalian homolog, the optic tectum (OT), are midbrain structures that integrate multimodal sensory inputs and guide non-voluntary movements in response to prevalent stimuli. Recent studies have implicated this structure as a possible site affected in autism spectrum disorder (ASD). Interestingly, fetal exposure to valproic acid (VPA) has also been associated with an increased risk of ASD in humans and animal models. Therefore, we took the approach of determining the effects of VPA treatment on zebrafish OT development as a first step in identifying the mechanisms that allow its formation. We describe normal OT development during the first 5 days of development and show that in VPA-treated embryos, neuronal specification and neuropil formation were delayed. VPA treatment was most detrimental during the first 3 days of development and did not appear to be linked to oxidative stress. In conclusion, our work provides a foundation for research into mechanisms driving OT development, as well as the relationship between the OT, VPA, and ASD. This article has an associated First Person interview with one of the co-first authors of the paper.
Affiliation(s)
- Sierra C. Dixon
- Department of Cell Biology and Physiology, Brigham Young University, Provo, UT 84602, USA
- Bailey J. Calder
- Department of Cell Biology and Physiology, Brigham Young University, Provo, UT 84602, USA
- Shane M. Lilya
- Department of Cell Biology and Physiology, Brigham Young University, Provo, UT 84602, USA
- Brandon M. Davies
- Department of Cell Biology and Physiology, Brigham Young University, Provo, UT 84602, USA
- Annalie Martin
- Department of Cell Biology and Physiology, Brigham Young University, Provo, UT 84602, USA
- Maggie Peterson
- Department of Cell Biology and Physiology, Brigham Young University, Provo, UT 84602, USA
- Jason M. Hansen
- Department of Cell Biology and Physiology, Brigham Young University, Provo, UT 84602, USA
- Arminda Suli
- Department of Cell Biology and Physiology, Brigham Young University, Provo, UT 84602, USA. Author for correspondence.
4
Gabriel GA, Harris LR, Henriques DYP, Pandi M, Campos JL. Multisensory visual-vestibular training improves visual heading estimation in younger and older adults. Front Aging Neurosci 2022; 14:816512. [PMID: 36092809] [PMCID: PMC9452741] [DOI: 10.3389/fnagi.2022.816512]
Abstract
Self-motion perception (e.g., when walking/driving) relies on the integration of multiple sensory cues including visual, vestibular, and proprioceptive signals. Changes in the efficacy of multisensory integration have been observed in older adults (OA), which can sometimes lead to errors in perceptual judgments and have been associated with functional declines such as increased falls risk. The objectives of this study were to determine whether passive, visual-vestibular self-motion heading perception could be improved by providing feedback during multisensory training, and whether training-related effects might be more apparent in OAs vs. younger adults (YA). We also investigated the extent to which training might transfer to improved standing-balance. OAs and YAs were passively translated and asked to judge their direction of heading relative to straight-ahead (left/right). Each participant completed three conditions: (1) vestibular-only (passive physical motion in the dark), (2) visual-only (cloud-of-dots display), and (3) bimodal (congruent vestibular and visual stimulation). Measures of heading precision and bias were obtained for each condition. Over the course of 3 days, participants were asked to make bimodal heading judgments and were provided with feedback (“correct”/“incorrect”) on 900 training trials. Post-training, participants’ biases, and precision in all three sensory conditions (vestibular, visual, bimodal), and their standing-balance performance, were assessed. Results demonstrated improved overall precision (i.e., reduced JNDs) in heading perception after training. Pre- vs. post-training difference scores showed that improvements in JNDs were only found in the visual-only condition. Particularly notable is that 27% of OAs initially could not discriminate their heading at all in the visual-only condition pre-training, but subsequently obtained thresholds in the visual-only condition post-training that were similar to those of the other participants. While OAs seemed to show optimal integration pre- and post-training (i.e., did not show significant differences between predicted and observed JNDs), YAs only showed optimal integration post-training. There were no significant effects of training for bimodal or vestibular-only heading estimates, nor standing-balance performance. These results indicate that it may be possible to improve unimodal (visual) heading perception using a multisensory (visual-vestibular) training paradigm. The results may also help to inform interventions targeting tasks for which effective self-motion perception is important.
Affiliation(s)
- Grace A. Gabriel
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Laurence R. Harris
- Department of Psychology, York University, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Denise Y. P. Henriques
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Kinesiology, York University, Toronto, ON, Canada
- Maryam Pandi
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Jennifer L. Campos
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Correspondence: Jennifer L. Campos
5
Vidal M, Vitu F. Multisensory temporal binding induces an illusory gap/overlap that reduces the expected audiovisual interactions on saccades but not manual responses. PLoS One 2022; 17:e0266468. [PMID: 35390067] [PMCID: PMC8989229] [DOI: 10.1371/journal.pone.0266468]
Abstract
Throughout the day, humans react to multisensory events conveying both visual and auditory signals by rapidly reorienting their gaze. Several studies showed that sounds can impact the latency of visually guided saccades depending on when and where they are delivered. We found that unlocalized beeps delivered near the onset time of a visual target reduce latencies, more for early beeps and less for late beeps; however, this modulation is far weaker than for perceptual temporal judgments. Here we tested our previous assumption that beeps shift the perceived timing of target onset and result in two competing effects on saccade latencies: a multisensory modulation in line with the expected perceptual effect and an illusory gap/overlap effect, resulting from target appearance being perceived later/closer in time than fixation offset and shortening/lengthening saccade latencies. Gap/overlap effects involve an oculomotor component associated with neuronal activity in the superior colliculus (SC), a multisensory subcortical structure devoted to sensory-motor transformation. We therefore predicted that the interfering illusory gap/overlap effect would be weaker for manual responses, which involve distinct multisensory areas. In three experiments we manipulated the delay between target onset and an irrelevant auditory beep (stimulus onset asynchrony; SOA) and between target onset and fixation offset (real gap/overlap). Targets appeared left/right of fixation and participants were instructed to make quick saccades or button presses towards the targets. Adding a real overlap/gap (50% of SOA) compensated for the illusory gap/overlap by increasing the beep-related modulation of saccade latencies across the entire SOA range, whereas it barely affected manual responses. However, although auditory and gap/overlap effects modulated saccade latencies in similar ways, these were additive and could saturate, suggesting that they reflect independent mechanisms. Therefore, multisensory temporal binding affects perception and oculomotor control differently, likely due to the implication of the SC in saccade programming and multisensory integration.
Affiliation(s)
- Manuel Vidal
- Institut de Neurosciences de la Timone, UMR 7289, CNRS, Aix-Marseille Université, France
- Laboratoire de Psychologie Cognitive, UMR 7290, CNRS, Aix-Marseille Université, France
- Françoise Vitu
- Laboratoire de Psychologie Cognitive, UMR 7290, CNRS, Aix-Marseille Université, France
6
Van der Stoep N, Van der Smagt MJ, Notaro C, Spock Z, Naber M. The additive nature of the human multisensory evoked pupil response. Sci Rep 2021; 11:707. [PMID: 33436889] [PMCID: PMC7803952] [DOI: 10.1038/s41598-020-80286-1]
Abstract
Pupillometry has received increased interest for its usefulness in measuring various sensory processes as an alternative to behavioural assessments. This is also apparent for multisensory investigations. Studies of the multisensory pupil response, however, have produced conflicting results. Some studies observed super-additive multisensory pupil responses, indicative of multisensory integration (MSI). Others observed additive multisensory pupil responses even though reaction time (RT) measures were indicative of MSI. Therefore, in the present study, we investigated the nature of the multisensory pupil response by combining methodological approaches of previous studies while using supra-threshold stimuli only. In two experiments, we presented observers with auditory and visual stimuli that evoked an onset response (be it constriction or dilation) in a simple detection task and a change detection task. In both experiments, the RT data indicated MSI as shown by race model inequality violation. Still, the multisensory pupil response in both experiments could best be explained by linear summation of the unisensory pupil responses. We conclude that the multisensory pupil response for supra-threshold stimuli is additive in nature and cannot be used as a measure of MSI, as only a departure from additivity can unequivocally demonstrate an interaction between the senses.
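Additivity in this context means the audiovisual pupil trace is well described by the simple sum of the unisensory traces, and only a departure from that sum would signal a multisensory interaction. A minimal sketch of such a comparison on synthetic trial-averaged traces (illustrative only, not the authors' pipeline; all trace shapes and noise levels are assumptions):

```python
import numpy as np

# Hypothetical baseline-corrected, trial-averaged pupil traces (arbitrary units),
# sampled at 50 Hz for 2 s after stimulus onset
t = np.arange(0, 2, 0.02)
pupil_a = 0.10 * np.exp(-(t - 0.8) ** 2 / 0.1)    # auditory-evoked response
pupil_v = 0.15 * np.exp(-(t - 0.9) ** 2 / 0.1)    # visual-evoked response
rng = np.random.default_rng(0)
pupil_av = pupil_a + pupil_v + rng.normal(0, 0.005, t.size)  # "observed" AV trace

# Additive prediction: linear summation of the unisensory responses
additive_prediction = pupil_a + pupil_v

# A systematic positive residual would indicate super-additivity; here the
# residual is just noise, i.e., the additive account suffices
residual = pupil_av - additive_prediction
print(f"mean residual: {residual.mean():+.4f} (expected ~0 under additivity)")
```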
Affiliation(s)
- Nathan Van der Stoep
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Langeveld Building, Room H0.26, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- M J Van der Smagt
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Langeveld Building, Room H0.26, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- C Notaro
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Langeveld Building, Room H0.26, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Z Spock
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Langeveld Building, Room H0.26, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- M Naber
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Langeveld Building, Room H0.26, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
7
Visually guided saccades and acoustic distractors: no evidence for the remote distractor effect or global effect. Exp Brain Res 2020; 239:59-66. [PMID: 33098653] [DOI: 10.1007/s00221-020-05959-9]
Abstract
A remote visual distractor increases saccade reaction time (RT) to a visual target and may reflect the time required to resolve conflict between target- and distractor-related information within a common retinotopic representation in the superior colliculus (SC) (i.e., the remote distractor effect: RDE). Notably, because the SC serves as a sensorimotor interface it is possible that the RDE may be associated with the pairing of an acoustic distractor with a visual target; that is, the conflict related to saccade generation signals may be sensory-independent. To address that issue, we employed a traditional RDE experiment involving a visual target and visual proximal and remote distractors (Experiment 1) and an experiment wherein a visual target was presented with acoustic proximal and remote distractors (Experiment 2). As well, Experiments 1 and 2 employed no-distractor trials. Experiment 1 RTs elicited a reliable RDE, whereas Experiment 2 RTs for proximal and remote distractors were shorter than their no distractor counterparts. Accordingly, findings demonstrate that the RDE is sensory specific and arises from conflicting visual signals within a common retinotopic map. As well, Experiment 2 findings indicate that an acoustic distractor supports an intersensory facilitation that optimizes oculomotor planning.
8
de Boer MJ, Başkent D, Cornelissen FW. Eyes on Emotion: Dynamic Gaze Allocation During Emotion Perception From Speech-Like Stimuli. Multisens Res 2020; 34:17-47. [PMID: 33706278] [DOI: 10.1163/22134808-bja10029]
Abstract
The majority of emotional expressions used in daily communication are multimodal and dynamic in nature. Consequently, one would expect that human observers utilize specific perceptual strategies to process emotions and to handle the multimodal and dynamic nature of emotions. However, our present knowledge on these strategies is scarce, primarily because most studies on emotion perception have not fully covered this variation, and instead used static and/or unimodal stimuli with few emotion categories. To resolve this knowledge gap, the present study examined how dynamic emotional auditory and visual information is integrated into a unified percept. Since there is a broad spectrum of possible forms of integration, both eye movements and accuracy of emotion identification were evaluated while observers performed an emotion identification task in one of three conditions: audio-only, visual-only video, or audiovisual video. In terms of adaptations of perceptual strategies, eye movement results showed a shift in fixations toward the eyes and away from the nose and mouth when audio is added. Notably, in terms of task performance, audio-only performance was mostly significantly worse than video-only and audiovisual performances, but performance in the latter two conditions was often not different. These results suggest that individuals flexibly and momentarily adapt their perceptual strategies to changes in the available information for emotion recognition, and these changes can be comprehensively quantified with eye tracking.
Affiliation(s)
- Minke J de Boer
- Research School of Behavioural and Cognitive Neurosciences (BCN), University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Department of Otorhinolaryngology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Deniz Başkent
- Research School of Behavioural and Cognitive Neurosciences (BCN), University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Department of Otorhinolaryngology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Frans W Cornelissen
- Research School of Behavioural and Cognitive Neurosciences (BCN), University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
9
Human Mirror Neuron System Based Alarms in the Cockpit: A Neuroergonomic Evaluation. Appl Psychophysiol Biofeedback 2020; 46:29-42. [PMID: 32602072] [DOI: 10.1007/s10484-020-09481-0]
Abstract
Controlled Flight Into Terrain (CFIT) events still remain among the deadliest accidents in aviation. When facing the possible occurrence of such an event, pilots have to immediately react to the ground proximity alarm ("Pull Up" alarm) in order to avoid the impending collision. However, the pilots' reaction to this alarm is not always optimal. This may be at least partly due to the low visual saliency of the current alarm and the deleterious effects of stress that impair the pilot's reactions. In the present study, two experiments (in a laboratory and in a flight simulator) were conducted to (1) investigate whether hand gesture videos (a hand pulling back the sidestick) can trigger brainwave frequencies related to the mirror neuron system; (2) determine whether enhancing the visual characteristics of the "Pull Up" alarm could improve pilots' response times. Electrophysiological results suggest that hand gesture videos attracted more of the participants' attention (greater alpha desynchronization in the parieto-occipital area) and possibly triggered greater activity of the mirror neuron system (greater mu and beta desynchronizations at central electrodes). Results obtained in the flight simulator revealed that enhancing the visual characteristics of the original "Pull Up" alarm improved the pilots' reaction times. However, no significant difference in reaction times between an enlarged "Pull Up" inscription and the hand gesture video was found. Further work is needed to determine whether mirror neuron system-based alarms could bring benefits for flight safety; in particular, these alarms should be assessed in a high-stress context.
10
Abstract
Behavior is readily classified into patterns of movements with inferred common goals: actions. Goals may be discrete; movements are continuous. Through the careful study of isolated movements in laboratory settings, or via introspection, it has become clear that animals can exhibit exquisite graded specification of their movements. Moreover, graded control can be as fundamental to success as the selection of which action to perform under many naturalistic scenarios: a predator adjusting its speed to intercept moving prey, or a tool-user exerting the perfect amount of force to complete a delicate task. The basal ganglia are a collection of nuclei in vertebrates that extend from the forebrain (telencephalon) to the midbrain (mesencephalon), constituting a major descending extrapyramidal pathway for control over midbrain and brainstem premotor structures. Here we discuss how this pathway contributes to the continuous specification of movements that endows our voluntary actions with vigor and grace.
Affiliation(s)
- Junchol Park
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147, USA
- Luke T Coddington
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147, USA
- Joshua T Dudman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147, USA
11
Judging Relative Onsets and Offsets of Audiovisual Events. Vision (Basel) 2020; 4:vision4010017. [PMID: 32138261] [PMCID: PMC7157228] [DOI: 10.3390/vision4010017]
Abstract
This study assesses the fidelity with which people can make temporal order judgments (TOJ) between auditory and visual onsets and offsets. Using an adaptive staircase task administered to a large sample of young adults, we find that the ability to judge temporal order varies widely among people, with notable difficulty created when auditory events closely follow visual events. Those findings are interpretable within the context of an independent channels model. Visual onsets and offsets can be difficult to localize in time when they occur within the temporal neighborhood of sound onsets or offsets.
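Audiovisual temporal order judgments of this kind are commonly summarised by fitting a cumulative Gaussian to the proportion of one response type across stimulus onset asynchronies, yielding a point of subjective simultaneity (PSS) and a JND. The sketch below illustrates that generic fit on made-up data; it is not the adaptive staircase procedure used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical stimulus onset asynchronies (ms; negative = audio leads) and the
# proportion of "visual first" responses at each SOA
soa = np.array([-200, -100, -50, 0, 50, 100, 200])
p_visual_first = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.98])

def cum_gauss(x, pss, sigma):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa, p_visual_first, p0=(0, 50))
jnd = sigma * norm.ppf(0.75)   # distance from the 50% to the 75% point

print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```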
12
Elshout JA, Van der Stoep N, Nijboer TCW, Van der Stigchel S. Motor congruency and multisensory integration jointly facilitate visual information processing before movement execution. Exp Brain Res 2020; 238:667-673. [PMID: 32036413] [PMCID: PMC7080670] [DOI: 10.1007/s00221-019-05714-9]
Abstract
Attention allows us to select important sensory information and enhances sensory information processing. Attention and our motor system are tightly coupled: attention is shifted to the target location before a goal-directed eye- or hand movement is executed. Congruent eye-hand movements to the same target can boost the effect of this pre-movement shift of attention. Moreover, visual information processing can be enhanced by, for example, auditory input presented in spatial and temporal proximity of visual input via multisensory integration (MSI). In this study, we investigated whether the combination of MSI and motor congruency can synergistically enhance visual information processing beyond what can be observed using motor congruency alone. Participants performed congruent eye- and hand movements during a 2-AFC visual discrimination task. The discrimination target was presented in the planning phase of the movements at the movement target location or a movement irrelevant location. Three conditions were compared: (1) a visual target without sound, (2) a visual target with sound spatially and temporally aligned (MSI) and (3) a visual target with sound temporally misaligned (no MSI). Performance was enhanced at the movement-relevant location when congruent motor actions and MSI coincide compared to the other conditions. Congruence in the motor system and MSI together therefore lead to enhanced sensory information processing beyond the effects of motor congruency alone, before a movement is executed. Such a synergy implies that the boost of attention previously observed for the independent factors is not at ceiling level, but can be increased even further when the right conditions are met.
Affiliation(s)
- J A Elshout
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- N Van der Stoep
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- T C W Nijboer
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Center of Excellence for Rehabilitation Medicine, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht University and De Hoogstraat Rehabilitation, 3583 TM, Utrecht, The Netherlands
- S Van der Stigchel
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
13
Denervaud S, Gentaz E, Matusz PJ, Murray MM. Multisensory Gains in Simple Detection Predict Global Cognition in Schoolchildren. Sci Rep 2020; 10:1394. [PMID: 32019951] [PMCID: PMC7000735] [DOI: 10.1038/s41598-020-58329-4]
Abstract
The capacity to integrate information from different senses is central for coherent perception across the lifespan from infancy onwards. Later in life, multisensory processes are related to cognitive functions, such as speech or social communication. During learning, multisensory processes can in fact enhance subsequent recognition memory for unisensory objects. These benefits can even be predicted; adults' recognition memory performance is shaped by earlier responses in the same task to multisensory - but not unisensory - information. Everyday environments where learning occurs, such as classrooms, are inherently multisensory in nature. Multisensory processes may therefore scaffold healthy cognitive development. Here, we provide the first evidence of a predictive relationship between multisensory benefits in simple detection and higher-level cognition that is present already in schoolchildren. Multiple regression analyses indicated that the extent to which a child (N = 68; aged 4.5-15 years) exhibited multisensory benefits on a simple detection task not only predicted benefits on a continuous recognition task involving naturalistic objects (p = 0.009), even when controlling for age, but also predicted working memory scores (p = 0.023) and fluid intelligence scores (p = 0.033) as measured using age-standardised test batteries. By contrast, gains in unisensory detection did not show significant prediction of any of the above global cognition measures. Our findings show that low-level multisensory processes predict higher-order memory and cognition already during childhood, even if still subject to ongoing maturation. These results call for revision of traditional models of cognitive development (and likely also education) to account for the role of multisensory processing, while also opening exciting opportunities to facilitate early learning through multisensory programs. More generally, these data suggest that a simple detection task could provide direct insights into the integrity of global cognition in schoolchildren and could be further developed as a readily-implemented and cost-effective screening tool for neurodevelopmental disorders, particularly in cases when standard neuropsychological tests are infeasible or unavailable.
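The predictor used in analyses like these is typically the relative multisensory benefit, i.e., the RT gain of the audiovisual condition over the faster unisensory condition, entered into a regression with age as a covariate. A hedged sketch of that kind of computation on simulated values (plain least squares, not the authors' exact models; all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 68
age = rng.uniform(4.5, 15, n)                           # years
rt_a = rng.normal(650, 60, n)                           # auditory-only mean RT (ms)
rt_v = rng.normal(620, 60, n)                           # visual-only mean RT (ms)
rt_av = np.minimum(rt_a, rt_v) - rng.normal(40, 15, n)  # audiovisual mean RT (ms)

# Relative multisensory benefit: RT gain over the faster unisensory condition
ms_benefit = (np.minimum(rt_a, rt_v) - rt_av) / np.minimum(rt_a, rt_v)

# Hypothetical age-standardised working-memory scores that track the benefit
wm_score = 100 + 80 * ms_benefit + rng.normal(0, 10, n)

# Ordinary least squares: wm_score ~ intercept + age + ms_benefit
X = np.column_stack([np.ones(n), age, ms_benefit])
beta, *_ = np.linalg.lstsq(X, wm_score, rcond=None)
print(f"benefit coefficient (controlling for age): {beta[2]:.1f}")
```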
Affiliation(s)
- Solange Denervaud
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland
- The Center for Affective Sciences (CISA), University of Geneva, Geneva, Switzerland
- Edouard Gentaz
- The Center for Affective Sciences (CISA), University of Geneva, Geneva, Switzerland
- Faculty of Psychology and Educational Sciences (FAPSE), University of Geneva, Geneva, Switzerland
- Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Information Systems Institute at the University of Applied Sciences Western Switzerland (HES-SO Valais), 3960, Sierre, Switzerland
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Ophthalmology, Fondation Asile des aveugles and University of Lausanne, Lausanne, Switzerland
- Sensory, Cognitive and Perceptual Neuroscience Section, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland
14
Van der Stoep N, Van der Stigchel S, Van Engelen RC, Biesbroek JM, Nijboer TCW. Impairments in Multisensory Integration after Stroke. J Cogn Neurosci 2019; 31:885-899. [PMID: 30883294] [DOI: 10.1162/jocn_a_01389]
Abstract
The integration of information from multiple senses leads to a plethora of behavioral benefits, most predominantly to faster and better detection, localization, and identification of events in the environment. Although previous studies of multisensory integration (MSI) in humans have provided insights into the neural underpinnings of MSI, studies of MSI at a behavioral level in individuals with brain damage are scarce. Here, a well-known psychophysical paradigm (the redundant target paradigm) was employed to quantify MSI in a group of stroke patients. The relation between MSI and lesion location was analyzed using lesion subtraction analysis. Twenty-one patients with ischemic infarctions and 14 healthy control participants responded to auditory, visual, and audiovisual targets in the left and right visual hemifield. Responses to audiovisual targets were faster than to unisensory targets. This could be due to MSI or statistical facilitation. Comparing the audiovisual RTs to the winner of a race between unisensory signals allowed us to determine whether participants could integrate auditory and visual information. The results indicated that (1) 33% of the patients showed an impairment in MSI; (2) patients with MSI impairment had left hemisphere and brainstem/cerebellar lesions; and (3) the left caudate, left pallidum, left putamen, left thalamus, left insula, left postcentral and precentral gyrus, left central opercular cortex, left amygdala, and left OFC were more often damaged in patients with MSI impairments. These results are the first to demonstrate the impact of brain damage on MSI in stroke patients using a well-established psychophysical paradigm.
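The race-model comparison described here commonly uses Miller's (1982) inequality: under a race between independent channels, the audiovisual RT distribution can never exceed the sum of the unisensory distributions at any time point, so an excess indicates integration rather than statistical facilitation. A minimal check on hypothetical RT samples (not the patients' data or the authors' exact implementation):

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of RTs evaluated at times t."""
    return np.searchsorted(np.sort(rts), t, side="right") / len(rts)

rng = np.random.default_rng(1)
rt_a = rng.normal(420, 50, 200)             # auditory-only RTs (ms), hypothetical
rt_v = rng.normal(400, 50, 200)             # visual-only RTs (ms), hypothetical
rt_av = rng.normal(340, 45, 200)            # audiovisual RTs (ms), hypothetical

t = np.linspace(200, 600, 81)
miller_bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
violation = ecdf(rt_av, t) - miller_bound   # > 0 means the race model is violated

verdict = "violated, consistent with integration" if violation.max() > 0 else "not violated"
print(f"max violation of the Miller bound: {violation.max():.3f} ({verdict})")
```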
Affiliation(s)
- Tanja C W Nijboer
- Helmholtz Institute, Utrecht University
- Brain Center Rudolph Magnus, University Medical Center, Utrecht University
- Center for Brain Rehabilitation Medicine, Utrecht Medical Center, Utrecht University
15
Sürig R, Bottari D, Röder B. Transfer of Audio-Visual Temporal Training to Temporal and Spatial Audio-Visual Tasks. Multisens Res 2018; 31:556-578. [PMID: 31264612] [DOI: 10.1163/22134808-00002611]
Abstract
Temporal and spatial characteristics of sensory inputs are fundamental to multisensory integration because they provide probabilistic information as to whether or not multiple sensory inputs belong to the same event. The multisensory temporal binding window defines the time range within which two stimuli of different sensory modalities are merged into one percept and has been shown to depend on training. The aim of the present study was to evaluate the role of the training procedure for improving multisensory temporal discrimination and to test for a possible transfer of training to other multisensory tasks. Participants were trained over five sessions in a two-alternative forced-choice simultaneity judgment task. The task difficulty of each trial was either at each participant's threshold (adaptive group) or randomly chosen (control group). A possible transfer of improved multisensory temporal discrimination on multisensory binding was tested with a redundant signal paradigm in which the temporal alignment of auditory and visual stimuli was systematically varied. Moreover, the size of the spatial audio-visual ventriloquist effect was assessed. Adaptive training resulted in faster improvements compared to the control condition. Transfer effects were found for both tasks: The processing speed of auditory inputs and the size of the ventriloquist effect increased in the adaptive group following the training. We suggest that the relative precision of the temporal and spatial features of a cross-modal stimulus is weighted during multisensory integration. Thus, changes in the precision of temporal processing are expected to enhance the likelihood of multisensory integration for temporally aligned cross-modal stimuli.
Affiliation(s)
- Ralf Sürig
- Biological Psychology and Neuropsychology, University of Hamburg, Von Melle Park 11, 20146 Hamburg, Germany
- Davide Bottari
- Biological Psychology and Neuropsychology, University of Hamburg, Von Melle Park 11, 20146 Hamburg, Germany
- IMT School for Advanced Studies Lucca, Lucca, Italy
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Von Melle Park 11, 20146 Hamburg, Germany
16
Bremen P, Massoudi R, Van Wanrooij MM, Van Opstal AJ. Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man. Front Syst Neurosci 2017; 11:89. [PMID: 29238295] [PMCID: PMC5712580] [DOI: 10.3389/fnsys.2017.00089]
Abstract
The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain.
Affiliation(s)
- Peter Bremen
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Department of Neuroscience, Erasmus Medical Center, Rotterdam, Netherlands
- Rooholla Massoudi
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, United Kingdom
- Marc M Van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- A J Van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
17
Kim JH, Yang X. Applying fractal analysis to pupil dilation for measuring complexity in a process monitoring task. Appl Ergon 2017; 65:61-69. [PMID: 28802461] [DOI: 10.1016/j.apergo.2017.06.002]
Abstract
This laboratory experiment was designed to use fractal dimension as a new method to analyze pupil dilation to evaluate the level of complexity in a multitasking environment. By using the eye-head integrated tracking system, we collected both pupil responses and head positions while participants conducted both a process monitoring task and Multi-Attribute Task Battery (MATB-II) tasks. There was a significant effect of scenario complexity on a composite index of multitasking performance (Low Complexity > High Complexity). The fractal dimension of pupil dilation was also significantly influenced by complexity. The results clearly showed that the correlation between pupil dilation and multitasking performance was stronger when the pupil data was analyzed by using the fractal dimension method. The participants showed a higher fractal dimension when they performed a low complexity multitasking scenario. The findings of this research help us to advance our understanding of how to evaluate the complexity level of real-world applications by using pupillary responses.
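Fractal dimension of a pupil-diameter time series is often estimated with Higuchi's method, which regresses log mean curve length against log time scale. The abstract does not state which estimator the authors used, so the following is a generic sketch on a synthetic signal; the kmax setting and the simulated trace are assumptions.

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Estimate the Higuchi fractal dimension of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    n = x.size
    ks = np.arange(1, kmax + 1)
    lengths = []
    for k in ks:
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)                 # subsampled series x[m], x[m+k], ...
            if idx.size < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((idx.size - 1) * k)    # Higuchi normalisation factor
            lk.append(dist * norm / k)
        lengths.append(np.mean(lk))
    # Fractal dimension = slope of log(L(k)) against log(1/k)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope

# Synthetic "pupil diameter" trace: slow oscillation plus noise, sampled at 60 Hz
rng = np.random.default_rng(2)
t = np.arange(0, 60, 1 / 60)
pupil = 4.0 + 0.3 * np.sin(0.1 * 2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
print(f"Higuchi fractal dimension: {higuchi_fd(pupil):.2f}")
```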
Affiliation(s)
- Jung Hyup Kim
- Department of Industrial and Manufacturing Systems Engineering, University of Missouri, Columbia, MO, 65211, USA
- Xiaonan Yang
- Department of Industrial and Manufacturing Systems Engineering, University of Missouri, Columbia, MO, 65211, USA
18
Thomas RL, Nardini M, Mareschal D. The impact of semantically congruent and incongruent visual information on auditory object recognition across development. J Exp Child Psychol 2017; 162:72-88. [PMID: 28595113] [DOI: 10.1016/j.jecp.2017.04.020]
Abstract
The ability to use different sensory signals in conjunction confers numerous advantages on perception. Multisensory perception in adults is influenced by factors beyond low-level stimulus properties such as semantic congruency. Sensitivity to semantic relations has been shown to emerge early in development; however, less is known about whether implementation of these associations changes with development or whether development in the representations themselves might modulate their influence. Here, we used a Stroop-like paradigm that requires participants to identify an auditory stimulus while ignoring a visual stimulus. Prior research shows that in adults visual distractors have more impact on processing of auditory objects than vice versa; however, this pattern appears to be inverted early in development. We found that children from 8 years of age (and adults) gain a speed advantage from semantically congruent visual information and are disadvantaged by semantically incongruent visual information. At 6 years of age, children gain a speed advantage for semantically congruent visual information but are not disadvantaged by semantically incongruent visual information (as compared with semantically unrelated visual information). Both children and adults were influenced by associations between auditory and visual stimuli, which they had been exposed to on only 12 occasions during the learning phase of the study. Adults showed a significant speed advantage over children for well-established associations but showed no such advantage for newly acquired pairings. This suggests that the influence of semantic associations on multisensory processing does not change with age but rather these associations become more robust and, in turn, more influential.
Affiliation(s)
- Rhiannon L Thomas
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths College, University of London, London SE14 6NW, UK; Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck College, University of London, London WC1E 7HX, UK
- Marko Nardini
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck College, University of London, London WC1E 7HX, UK; Department of Psychology, University of Durham, Durham DH1 3LE, UK
- Denis Mareschal
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck College, University of London, London WC1E 7HX, UK
19
Abstract
The use of separate multisensory signals is often beneficial. A prominent example is the speed-up of responses to two redundant signals relative to the components, which is known as the redundant signals effect (RSE). A convenient explanation for the effect is statistical facilitation, which is inherent in the basic architecture of race models (Raab, 1962, Trans. N. Y. Acad. Sci. 24, 574–590). However, this class of models has been largely rejected in multisensory research, which we think results from an ambiguity in definitions and misinterpretations of the influential race model test (Miller, 1982, Cogn. Psychol. 14, 247–279). To resolve these issues, we here discuss four main items. First, we clarify definitions and ask how successful models of perceptual decision making can be extended from uni- to multisensory decisions. Second, we review the race model test and emphasize elements leading to confusion with its interpretation. Third, we introduce a new approach to study the RSE. As a major change of direction, our working hypothesis is that the basic race model architecture is correct even if the race model test seems to suggest otherwise. Based on this approach, we argue that understanding the variability of responses is the key to understanding the RSE. Finally, we highlight the critical role of model testability to advance research on multisensory decisions. Despite being largely rejected, it should be recognized that race models, as part of a broader class of parallel decision models, demonstrate, in fact, a convincing explanatory power in a range of experimental paradigms. To improve research consistency in the future, we conclude with a short checklist for RSE studies.
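Statistical facilitation in a race architecture follows directly from taking the minimum of two finishing-time distributions: the redundant-signal response is triggered by whichever channel finishes first, so mean RT drops without any cross-channel interaction. A toy simulation of this point under an independent-channels assumption (illustrative numbers only, not a model from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Hypothetical channel finishing times (ms) for auditory and visual processing
t_a = rng.normal(420, 60, n)
t_v = rng.normal(400, 60, n)

# Race architecture: on redundant (audiovisual) trials the faster channel wins
t_av = np.minimum(t_a, t_v)

# The redundant-signal mean is faster than either unisensory mean even though
# the channels never interact -- this is statistical facilitation
print(f"mean A: {t_a.mean():.0f} ms, mean V: {t_v.mean():.0f} ms, "
      f"mean AV (race): {t_av.mean():.0f} ms")
```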
Affiliation(s)
- Thomas U. Otto
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, UK
- Pascal Mamassian
- Laboratoire des Systèmes Perceptifs (CNRS UMR 8248), Ecole Normale Supérieure — PSL Research University, Paris, France
20
Kurela L, Wallace M. Serotonergic Modulation of Sensory and Multisensory Processing in Superior Colliculus. Multisens Res 2017. [DOI: 10.1163/22134808-00002552]
Abstract
The ability to integrate information across the senses is vital for coherent perception of and interaction with the world. While much is known regarding the organization and function of multisensory neurons within the mammalian superior colliculus (SC), very little is understood at a mechanistic level. One open question in this regard is the role of neuromodulatory networks in shaping multisensory responses. While the SC receives substantial serotonergic projections from the raphe nuclei, and serotonergic receptors are distributed throughout the SC, the potential role of serotonin (5-HT) signaling in multisensory function is poorly understood. To begin to fill this knowledge void, the current study provides physiological evidence for the influences of 5-HT signaling on auditory, visual and audiovisual responses of individual neurons in the intermediate and deep layers of the SC, with a focus on the 5HT2a receptor. Using single-unit extracellular recordings in combination with pharmacological methods, we demonstrate that alterations in 5HT2a receptor signaling change receptive field (RF) architecture as well as responsivity and integrative abilities of SC neurons when assessed at the level of the single neuron. In contrast, little change was seen in the local field potential (LFP). These results are the first to implicate the serotonergic system in multisensory processing, and are an important step toward understanding how modulatory networks mediate multisensory integration in the SC.
Affiliation(s)
- LeAnne R. Kurela
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN 37232, USA
- Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN 37232, USA
- Department of Hearing & Speech Sciences, Vanderbilt University, Nashville, TN 37232, USA
- Department of Psychology, Vanderbilt University, Nashville, TN 37232, USA
- Department of Psychiatry, Vanderbilt University, Nashville, TN 37232, USA
21
Ramkhalawansingh R, Keshavarz B, Haycock B, Shahab S, Campos JL. Examining the Effect of Age on Visual–Vestibular Self-Motion Perception Using a Driving Paradigm. Perception 2016; 46:566-585. [DOI: 10.1177/0301006616675883]
Abstract
Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual–vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual–vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.
Affiliation(s)
- Robert Ramkhalawansingh
- Department of Psychology, University of Toronto, Canada; Toronto Rehabilitation Institute, University Health Network, Canada
- Behrang Keshavarz
- Toronto Rehabilitation Institute, University Health Network, Canada; Department of Psychology, Ryerson University
- Bruce Haycock
- Toronto Rehabilitation Institute, University Health Network, Canada; Institute for Aerospace Studies, University of Toronto, Canada
- Saba Shahab
- Faculty of Medicine, University of Toronto, Canada
- Jennifer L. Campos
- Toronto Rehabilitation Institute, University Health Network, Canada; Department of Psychology, University of Toronto, Canada
22
Steinweg B, Mast FW. Semantic incongruity influences response caution in audio-visual integration. Exp Brain Res 2016; 235:349-363. [PMID: 27734118] [DOI: 10.1007/s00221-016-4796-0]
Abstract
Multisensory stimulus combinations trigger shorter reaction times (RTs) than individual single-modality stimuli. It has been suggested that this inter-sensory facilitation effect is found exclusively for semantically congruent stimuli, because incongruity would prevent multisensory integration. Here we provide evidence that the effect of incongruity is due to a change in response caution rather than prevention of stimulus integration. In two experiments, participants performed two-alternative forced-choice decision tasks in which they categorized auditory stimuli, visual stimuli or audio-visual stimulus pairs. The pairs were either semantically congruent (e.g. ambulance image and horn sound) or incongruent (e.g. ambulance image and bell sound). Shorter RTs and violations of the race model inequality on congruent trials are in accordance with previous studies. However, Bayesian hierarchical drift diffusion analyses contradict former co-activation-based explanations of the effects of congruency. Instead, they show that longer RTs on incongruent compared to congruent trials are most likely the result of an incongruity caution effect: more cautious response behaviour in the face of semantically incongruent sensory input. Further, they show that response caution can be adjusted on a trial-by-trial basis depending on incoming information. Finally, stimulus modality influenced non-cognitive components of the response. We suggest that the combined stimulus energy from simultaneously presented stimuli reduces encoding time.
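In diffusion-model terms, response caution corresponds to boundary separation: widening the decision boundaries slows responses and reduces errors even when drift rate (evidence quality) is unchanged. The toy Euler simulation below illustrates that idea; it is a generic sketch, not the hierarchical Bayesian drift diffusion analysis reported in the paper, and all parameter values are made up.

```python
import numpy as np

def simulate_ddm(drift, boundary, n_trials=2000, dt=0.002, noise=1.0, seed=4):
    """Simulate first-passage times of a symmetric two-boundary diffusion process."""
    rng = np.random.default_rng(seed)
    rts = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary / 2:              # absorbing boundaries at +/- boundary/2
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
    return np.array(rts)

# Same evidence quality (drift rate), different response caution (boundary separation)
rt_low_caution = simulate_ddm(drift=2.0, boundary=1.0)
rt_high_caution = simulate_ddm(drift=2.0, boundary=1.6)

print(f"mean RT with low caution:  {rt_low_caution.mean():.3f} s")
print(f"mean RT with high caution: {rt_high_caution.mean():.3f} s")
```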
Affiliation(s)
- Benjamin Steinweg
- Department of Psychology, University of Bern, Fabrikstrasse 8, 3012 Bern, Switzerland
- Center for Cognition, Learning and Memory, University of Bern, Bern, Switzerland
- Fred W Mast
- Department of Psychology, University of Bern, Fabrikstrasse 8, 3012 Bern, Switzerland
- Center for Cognition, Learning and Memory, University of Bern, Bern, Switzerland
23
Abstract
Binocular processing was investigated using a quantitative, process-oriented metatheory of response times. The analyses are not confined to particular distributional assumptions or specific models. Upper and lower performance boundaries for probability summation in parallel processing are defined and compared with observed distributions of reaction times using a variety of dichoptic stimuli. Performance that exceeds the upper bound strongly suggests facilitatory convergence between the two eyes (binocular channel summation). Performance below the lower bound suggests that inputs to the two eyes are processed serially. The results indicate that binocular channel summation in subjects with normal stereo vision requires targets of the same luminance polarity (paired increments or decrements) in corresponding retinal locations. When corresponding retinal locations are stimulated with opposing luminance polarities (increment to one eye, decrement to the other), performance is consistent with probability summation, indicating that parallel ON and OFF pathways remain segregated at least to the level of binocular fusion. Further analyses of data from a stereo-blind observer suggest serial processing of binocular inputs.
24
25
Cecere R, Gross J, Thut G. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality. Eur J Neurosci 2016; 43:1561-8. [PMID: 27003546] [PMCID: PMC4915493] [DOI: 10.1111/ejn.13242]
Abstract
The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even when the latter was trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding mechanisms for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to the engagement of different multisensory sampling mechanisms depending on the leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration.
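A common way to quantify a temporal binding window of the kind trained here is to fit a bell-shaped curve to the proportion of "simultaneous" judgements across stimulus onset asynchronies (SOAs) and read off its width. The sketch below fits a simple Gaussian with scipy; the SOAs and response proportions are invented for illustration, and real analyses often use asymmetric fits to capture the auditory-leading vs. visual-leading difference.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, mu, sigma):
    """Proportion of 'simultaneous' judgements as a function of SOA (ms)."""
    return amp * np.exp(-(soa - mu) ** 2 / (2 * sigma ** 2))

# Illustrative data: negative SOAs = auditory leading, positive = visual leading.
soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
p_simultaneous = np.array([0.05, 0.20, 0.60, 0.85, 0.95, 0.90, 0.75, 0.40, 0.10])

(amp, mu, sigma), _ = curve_fit(gaussian, soas, p_simultaneous,
                                p0=[1.0, 0.0, 100.0])
print(f"window centre: {mu:.0f} ms, width (SD): {sigma:.0f} ms")
```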
Collapse
Affiliation(s)
- Roberto Cecere
- Centre for Cognitive Neuroimaging (CCNi), Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, G12 8QB, Glasgow, UK
| | - Joachim Gross
- Centre for Cognitive Neuroimaging (CCNi), Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, G12 8QB, Glasgow, UK
| | - Gregor Thut
- Centre for Cognitive Neuroimaging (CCNi), Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, G12 8QB, Glasgow, UK
| |
Collapse
|
26
|
Diederich A, Colonius H, Kandil FI. Prior knowledge of spatiotemporal configuration facilitates crossmodal saccadic response. Exp Brain Res 2016; 234:2059-2076. [DOI: 10.1007/s00221-016-4609-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2015] [Accepted: 02/23/2016] [Indexed: 10/22/2022]
|
27
|
Interactions between space and effectiveness in human multisensory performance. Neuropsychologia 2016; 88:83-91. [PMID: 26826522 DOI: 10.1016/j.neuropsychologia.2016.01.031] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2015] [Revised: 12/30/2015] [Accepted: 01/26/2016] [Indexed: 11/23/2022]
Abstract
Several stimulus factors are important in multisensory integration, including the spatial and temporal relationships of the paired stimuli as well as their effectiveness. Changes in these factors have been shown to dramatically change the nature and magnitude of multisensory interactions. Typically, these factors are considered in isolation, although there is a growing appreciation for the fact that they are likely to be strongly interrelated. Here, we examined interactions between two of these factors - spatial location and effectiveness - in dictating performance in the localization of an audiovisual target. A psychophysical experiment was conducted in which participants reported the perceived location of visual flashes and auditory noise bursts presented alone and in combination. Stimuli were presented at four spatial locations relative to fixation (0°, 30°, 60°, 90°) and at two intensity levels (high, low). Multisensory combinations were always spatially coincident and of the matching intensity (high-high or low-low). In responding to visual stimuli alone, localization accuracy decreased and response times (RTs) increased as stimuli were presented at more eccentric locations. In responding to auditory stimuli, performance was poorest at the 30° and 60° locations. For both visual and auditory stimuli, accuracy was greater and RTs were faster for more intense stimuli. For responses to visual-auditory stimulus combinations, performance enhancements were found at locations in which the unisensory performance was lowest, results concordant with the concept of inverse effectiveness. RTs for these multisensory presentations frequently violated race-model predictions, implying integration of these inputs, and a significant location-by-intensity interaction was observed. Performance gains under multisensory conditions were larger as stimuli were positioned at more peripheral locations, and this increase was most pronounced for the low-intensity conditions. These results provide strong support that the effects of stimulus location and effectiveness on multisensory integration are interdependent, with both contributing to the overall effectiveness of the stimuli in driving the resultant multisensory response.
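Inverse effectiveness of the kind reported here is commonly expressed as the multisensory gain relative to the best unisensory response. A minimal sketch with invented accuracy values follows; the function name and all numbers are illustrative only.

```python
def multisensory_gain(p_audiovisual, p_auditory, p_visual):
    """Relative multisensory enhancement over the best unisensory response (%)."""
    best_unisensory = max(p_auditory, p_visual)
    return 100.0 * (p_audiovisual - best_unisensory) / best_unisensory

# Illustrative localization accuracies at a peripheral, low-intensity condition
# versus a central, high-intensity condition.
print(multisensory_gain(0.90, 0.55, 0.60))   # weak unisensory performance: large gain
print(multisensory_gain(0.97, 0.90, 0.92))   # strong unisensory performance: small gain
```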
Collapse
|
28
|
Zelic G, Mottet D, Lagarde J. Perceptuo-motor compatibility governs multisensory integration in bimanual coordination dynamics. Exp Brain Res 2015; 234:463-74. [DOI: 10.1007/s00221-015-4476-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2014] [Accepted: 10/15/2015] [Indexed: 11/30/2022]
|
29
|
Blais M, Albaret JM, Tallet J. Is there a link between sensorimotor coordination and inter-manual coordination? Differential effects of auditory and/or visual rhythmic stimulations. Exp Brain Res 2015; 233:3261-9. [PMID: 26238405 DOI: 10.1007/s00221-015-4394-6] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2015] [Accepted: 07/22/2015] [Indexed: 12/29/2022]
Abstract
The purpose of this study was to test how the sensory modality of rhythmic stimuli affects the production of bimanual coordination patterns. To this aim, participants had to synchronize the taps of their two index fingers with auditory and visual stimuli presented separately (auditory or visual) or simultaneously (audio-visual). This kind of task requires two levels of coordination: (1) sensorimotor coordination, which can be measured by the mean asynchrony between the beat of the stimulus and the corresponding tap and by mean asynchrony stability, and (2) inter-manual coordination, which can be assessed by the accuracy and stability of the relative phase between the right-hand and left-hand taps. Previous studies show that sensorimotor coordination is better during synchronization with auditory or audio-visual metronomes than with a visual metronome, but it is not known whether inter-manual coordination is affected by stimulation modality. To answer this question, 13 participants were required to tap their index fingers in synchrony with the beat of auditory and/or visual stimuli specifying three coordination patterns: two preferred in-phase and anti-phase patterns and a non-preferred intermediate pattern. A first main result demonstrated that in-phase tapping had the best inter-manual stability, but the worst asynchrony stability. The second main finding revealed that, for all patterns, audio-visual stimulation improved the stability of sensorimotor coordination but not of inter-manual coordination. The combination of visual and auditory modalities results in multisensory integration, which improves sensorimotor coordination but not inter-manual coordination. Both results suggest that there is a dissociation between processes underlying sensorimotor synchronization (anticipation or reactivity) and processes underlying inter-manual coordination (motor control). This finding opens new perspectives for separately evaluating the possible sensorimotor and inter-manual coordination deficits present in movement disorders.
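The two levels of coordination described here can be quantified with simple circular statistics: mean asynchrony and its variability for sensorimotor coordination, and the stability (mean resultant length) of the right-left relative phase for inter-manual coordination. The sketch below uses invented tap times purely to illustrate the computations; it is not the authors' analysis pipeline.

```python
import numpy as np

def circular_stability(angles_rad):
    """Mean resultant length R (1 = perfectly stable phase, 0 = uniform)."""
    return np.abs(np.mean(np.exp(1j * angles_rad)))

# Placeholder data: metronome beats every 500 ms, plus right/left tap times (ms).
beats = np.arange(0, 10000, 500.0)
right_taps = beats - 30 + np.random.default_rng(2).normal(0, 15, beats.size)
left_taps  = right_taps + 250 + np.random.default_rng(3).normal(0, 20, beats.size)

# Sensorimotor coordination: asynchrony between right-hand taps and the beat.
asynchrony = right_taps - beats
print(f"mean asynchrony: {asynchrony.mean():.1f} ms, SD: {asynchrony.std():.1f} ms")

# Inter-manual coordination: relative phase of left-hand taps within the tapping cycle.
period = 500.0
rel_phase = 2 * np.pi * ((left_taps - right_taps) % period) / period
print(f"relative-phase stability R: {circular_stability(rel_phase):.2f}")
```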
Collapse
Affiliation(s)
- Mélody Blais
- Laboratory PRISSMH-LAPMA (EA 4651), University of Paul Sabatier Toulouse 3, 31062, Toulouse, France
| | - Jean-Michel Albaret
- Laboratory PRISSMH-LAPMA (EA 4651), University of Paul Sabatier Toulouse 3, 31062, Toulouse, France
| | - Jessica Tallet
- Laboratory PRISSMH-LAPMA (EA 4651), University of Paul Sabatier Toulouse 3, 31062, Toulouse, France.
| |
Collapse
|
30
|
Makovac E, Buonocore A, McIntosh RD. Audio-visual integration and saccadic inhibition. Q J Exp Psychol (Hove) 2015; 68:1295-305. [PMID: 25599266 DOI: 10.1080/17470218.2014.979210] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Saccades operate a continuous selection between competing targets at different locations. This competition has been mostly investigated in the visual context, and it is well known that a visual distractor can interfere with a saccade toward a visual target. Here, we investigated whether multimodal, audio-visual targets confer stronger resilience against visual distraction. Saccades to audio-visual targets had shorter latencies than saccades to unisensory stimuli. This facilitation exceeded the level that could be explained by simple probability summation, indicating that multisensory integration had occurred. The magnitude of inhibition induced by a visual distractor was comparable for saccades to unisensory and multisensory targets, but the duration of the inhibition was shorter for multimodal targets. We conclude that multisensory integration can allow a saccade plan to be reestablished more rapidly following saccadic inhibition.
Collapse
Affiliation(s)
- Elena Makovac
- Human Cognitive Neuroscience, Psychology, University of Edinburgh, Edinburgh, UK
| | | | | |
Collapse
|
31
|
Brandwein A, Foxe J, Butler J, Frey H, Bates J, Shulman L, Molholm S. Neurophysiological indices of atypical auditory processing and multisensory integration are associated with symptom severity in autism. J Autism Dev Disord 2015; 45:230-44. [PMID: 25245785 PMCID: PMC4289100 DOI: 10.1007/s10803-014-2212-9] [Citation(s) in RCA: 121] [Impact Index Per Article: 13.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
Atypical processing and integration of sensory inputs are hypothesized to play a role in unusual sensory reactions and social-cognitive deficits in autism spectrum disorder (ASD). Reports on the relationship between objective metrics of sensory processing and clinical symptoms, however, are surprisingly sparse. Here we examined the relationship between neurophysiological assays of sensory processing and (1) autism severity and (2) sensory sensitivities, in individuals with ASD aged 6-17. Multiple linear regression indicated significant associations between neural markers of auditory processing and multisensory integration, and autism severity. No such relationships were apparent for clinical measures of visual/auditory sensitivities. These data support that aberrant early sensory processing contributes to autism symptoms, and reveal the potential of electrophysiology to objectively subtype autism.
Collapse
Affiliation(s)
- A.B. Brandwein
- Department of Pediatrics, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children’s Evaluation and Rehabilitation Center (CERC), Albert Einstein College of Medicine, 1225 Morris Park Avenue, Bronx, NY 10461, USA
- Department of Neuroscience, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children’s Evaluation and Rehabilitation Center (CERC), Albert Einstein College of Medicine, 1225 Morris Park Avenue, Bronx, NY 10461, USA
- The Graduate Center of the City University of New York, New York, NY 10016, USA
| | - J.J. Foxe
- Department of Pediatrics, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children’s Evaluation and Rehabilitation Center (CERC), Albert Einstein College of Medicine, 1225 Morris Park Avenue, Bronx, NY 10461, USA
- Department of Neuroscience, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children’s Evaluation and Rehabilitation Center (CERC), Albert Einstein College of Medicine, 1225 Morris Park Avenue, Bronx, NY 10461, USA
- The Graduate Center of the City University of New York, New York, NY 10016, USA
- The Cognitive Neurophysiology Laboratory, The Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA
| | - J.S. Butler
- Department of Pediatrics, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children’s Evaluation and Rehabilitation Center (CERC), Albert Einstein College of Medicine, 1225 Morris Park Avenue, Bronx, NY 10461, USA
- Department of Neuroscience, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children’s Evaluation and Rehabilitation Center (CERC), Albert Einstein College of Medicine, 1225 Morris Park Avenue, Bronx, NY 10461, USA
| | - H.P. Frey
- Department of Pediatrics, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children’s Evaluation and Rehabilitation Center (CERC), Albert Einstein College of Medicine, 1225 Morris Park Avenue, Bronx, NY 10461, USA
- Department of Neuroscience, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children’s Evaluation and Rehabilitation Center (CERC), Albert Einstein College of Medicine, 1225 Morris Park Avenue, Bronx, NY 10461, USA
| | - J.C. Bates
- Department of Pediatrics, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children’s Evaluation and Rehabilitation Center (CERC), Albert Einstein College of Medicine, 1225 Morris Park Avenue, Bronx, NY 10461, USA
| | - L. Shulman
- Department of Pediatrics, Children’s Evaluation and Rehabilitation Center (CERC), Albert Einstein College of Medicine, 1165 Morris Park Avenue, Bronx, NY 10461, USA
| | - S. Molholm
- Department of Pediatrics, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children’s Evaluation and Rehabilitation Center (CERC), Albert Einstein College of Medicine, 1225 Morris Park Avenue, Bronx, NY 10461, USA
- Department of Neuroscience, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children’s Evaluation and Rehabilitation Center (CERC), Albert Einstein College of Medicine, 1225 Morris Park Avenue, Bronx, NY 10461, USA
- The Graduate Center of the City University of New York, New York, NY 10016, USA
| |
Collapse
|
32
|
Schilling TM, Larra MF, Deuter CE, Blumenthal TD, Schächinger H. Rapid cortisol enhancement of psychomotor and startle reactions to side-congruent stimuli in a focused cross-modal choice reaction time paradigm. Eur Neuropsychopharmacol 2014; 24:1828-35. [PMID: 25262177 DOI: 10.1016/j.euroneuro.2014.09.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/25/2014] [Revised: 08/18/2014] [Accepted: 09/03/2014] [Indexed: 10/24/2022]
Abstract
The stress hormone cortisol has been shown to affect the hemodynamic activity of human brain structures, presumably via a nongenomic mechanism. However, the behavioral implications of this finding remain unknown. In a placebo-controlled, blinded, cross-over design, the rapid effects of IV hydrocortisone (5 mg) on cross-modal integration of simultaneous, unilateral visual and acoustic signals were studied in a challenging startle and reaction time (RT) paradigm. On two separate days 1 week apart, 24 male volunteers responded by button push to either up- or down-pointing triangles presented in random sequence in the periphery of one of the visual hemi-fields. Visual targets were accompanied by unilateral acoustic startle noise bursts, presented at the same or opposite side. Saccadic latency, manual RT, and startle eye blink responses were recorded. Faster manual reactions and increased startle eye blink responses were observed 11-20 min after hydrocortisone administration when visual targets and unilateral acoustic startle noises were presented in the same sensory hemi-field, but not when presented in opposite sensory hemi-fields. Our results suggest that a nongenomic, cortisol-sensitive mechanism enhances psychomotor and startle reactions when stimuli occur in the same sensory hemi-field. Such basic cognitive effects of cortisol may serve rapid adaptation and protection against danger stimuli in stressful contexts.
Collapse
Affiliation(s)
- Thomas M Schilling
- Institute of Psychobiology, Division of Clinical Psychophysiology, University of Trier, Johanniterufer 15, D-54290 Trier, Germany.
| | - Mauro F Larra
- Institute of Psychobiology, Division of Clinical Psychophysiology, University of Trier, Johanniterufer 15, D-54290 Trier, Germany
| | - Christian E Deuter
- Institute of Psychobiology, Division of Clinical Psychophysiology, University of Trier, Johanniterufer 15, D-54290 Trier, Germany
| | - Terry D Blumenthal
- Department of Psychology, Wake Forest University, Winston-Salem, NC 27109, USA
| | - Hartmut Schächinger
- Institute of Psychobiology, Division of Clinical Psychophysiology, University of Trier, Johanniterufer 15, D-54290 Trier, Germany
| |
Collapse
|
33
|
Wallace MT, Stevenson RA. The construct of the multisensory temporal binding window and its dysregulation in developmental disabilities. Neuropsychologia 2014; 64:105-23. [PMID: 25128432 PMCID: PMC4326640 DOI: 10.1016/j.neuropsychologia.2014.08.005] [Citation(s) in RCA: 195] [Impact Index Per Article: 19.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2014] [Revised: 08/04/2014] [Accepted: 08/05/2014] [Indexed: 01/18/2023]
Abstract
Behavior, perception and cognition are strongly shaped by the synthesis of information across the different sensory modalities. Such multisensory integration often results in performance and perceptual benefits that reflect the additional information conferred by having cues from multiple senses providing redundant or complementary information. The spatial and temporal relationships of these cues provide powerful statistical information about how these cues should be integrated or "bound" in order to create a unified perceptual representation. Much recent work has examined the temporal factors that are integral in multisensory processing, with many focused on the construct of the multisensory temporal binding window - the epoch of time within which stimuli from different modalities are likely to be integrated and perceptually bound. Emerging evidence suggests that this temporal window is altered in a series of neurodevelopmental disorders, including autism, dyslexia and schizophrenia. In addition to their role in sensory processing, these deficits in multisensory temporal function may play an important role in the perceptual and cognitive weaknesses that characterize these clinical disorders. Within this context, a focus on improving the acuity of multisensory temporal function may have important implications for the amelioration of the "higher-order" deficits that serve as the defining features of these disorders.
Collapse
Affiliation(s)
- Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN 37232, USA; Department of Hearing & Speech Sciences, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA.
| | - Ryan A Stevenson
- Department of Psychology, University of Toronto, Toronto, ON, Canada
| |
Collapse
|
34
|
Slugocki C, Trainor LJ. Cortical indices of sound localization mature monotonically in early infancy. Eur J Neurosci 2014; 40:3608-19. [PMID: 25308742 DOI: 10.1111/ejn.12741] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2013] [Accepted: 09/01/2014] [Indexed: 11/28/2022]
Affiliation(s)
- Christopher Slugocki
- Department of Psychology, Neuroscience & Behaviour, McMaster University, 1280 Main Street West, Hamilton, ON L8S 4L8, Canada
| | - Laurel J. Trainor
- Department of Psychology, Neuroscience & Behaviour, McMaster University, 1280 Main Street West, Hamilton, ON L8S 4L8, Canada
| |
Collapse
|
35
|
Andrade GN, Molholm S, Butler JS, Brandwein AB, Walkley SU, Foxe JJ. Atypical multisensory integration in Niemann-Pick type C disease - towards potential biomarkers. Orphanet J Rare Dis 2014; 9:149. [PMID: 25239094 PMCID: PMC4173006 DOI: 10.1186/s13023-014-0149-x] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2014] [Accepted: 09/16/2014] [Indexed: 11/15/2022] Open
Abstract
Background: Niemann-Pick type C (NPC) is an autosomal recessive disease in which cholesterol and glycosphingolipids accumulate in lysosomes due to aberrant cell-transport mechanisms. It is characterized by progressive and ultimately terminal neurological disease, but both pre-clinical studies and direct human trials are underway to test the safety and efficacy of cholesterol-clearing compounds, with good success already observed in animal models. Key to assessing the effectiveness of interventions in patients, however, is the development of objective neurobiological outcome measures. Multisensory integration mechanisms present as an excellent candidate since they necessarily rely on the fidelity of long-range neural connections between the respective sensory cortices (e.g. the auditory and visual systems).
Methods: A simple way to test the integrity of the multisensory system is to ask whether individuals respond faster to the occurrence of a bisensory event than they do to the occurrence of either of the unisensory constituents alone. Here, we presented simple auditory, visual, and audio-visual stimuli in random sequence. Participants responded as fast as possible with a button push. One 11-year-old and two 14-year-old boys with NPC participated in the experiment and their results were compared to those of 35 age-matched neurotypical boys.
Results: Reaction times (RTs) to the stimuli when presented simultaneously were significantly faster than when they were presented alone in the neurotypical children, a facilitation that could not be accounted for by probability summation, as evidenced by violation of the so-called 'race' model. In stark contrast, the NPC boys showed no such speeding, despite the fact that their unisensory RTs fell within the distribution of RTs observed in the neurotypicals.
Conclusions: These results uncover a previously undescribed deficit in multisensory integrative abilities in NPC, with implications for ongoing treatment of the clinical symptoms of these children. They also suggest that multisensory processes may represent a good candidate biomarker against which to test the efficacy of therapeutic interventions.
Electronic supplementary material: The online version of this article (doi:10.1186/s13023-014-0149-x) contains supplementary material, which is available to authorized users.
Collapse
Affiliation(s)
| | | | | | | | | | - John J Foxe
- Department of Pediatrics, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Albert Einstein College of Medicine & Montefiore Medical Center, Van Etten Building - Wing 1C, 1225 Morris Park Avenue, Bronx, NY 10461, USA.
| |
Collapse
|
36
|
Stein BE, Stanford TR, Rowland BA. Development of multisensory integration from the perspective of the individual neuron. Nat Rev Neurosci 2014; 15:520-35. [PMID: 25158358 DOI: 10.1038/nrn3742] [Citation(s) in RCA: 211] [Impact Index Per Article: 21.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
The ability to use cues from multiple senses in concert is a fundamental aspect of brain function. It maximizes the brain’s use of the information available to it at any given moment and enhances the physiological salience of external events. Because each sense conveys a unique perspective of the external world, synthesizing information across senses affords computational benefits that cannot otherwise be achieved. Multisensory integration not only has substantial survival value but can also create unique experiences that emerge when signals from different sensory channels are bound together. However, neurons in a newborn’s brain are not capable of multisensory integration, and studies in the midbrain have shown that the development of this process is not predetermined. Rather, its emergence and maturation critically depend on cross-modal experiences that alter the underlying neural circuit in such a way that optimizes multisensory integrative capabilities for the environment in which the animal will function.
Collapse
|
37
|
Abstract
Our environment is richly structured, with objects producing correlated information within and across sensory modalities. A prominent challenge faced by our perceptual system is to learn such regularities. Here, we examined statistical learning and addressed learners' ability to track transitional probabilities between elements in the auditory and visual modalities. Specifically, we investigated whether cross-modal information affects statistical learning within a single modality. Participants were familiarized with a statistically structured modality (e.g., either audition or vision) accompanied by different types of cues in a second modality (e.g., vision or audition). The results revealed that statistical learning within either modality is affected by cross-modal information, with learning being enhanced or reduced according to the type of cue provided in the second modality.
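The transitional probabilities tracked in such statistical-learning studies are simply P(next | current) estimated from adjacent elements of the familiarization stream. A minimal sketch with a toy sequence follows; the sequence and its "word" structure are invented purely for illustration.

```python
from collections import Counter

def transitional_probabilities(sequence):
    """P(next | current) estimated from adjacent pairs in a sequence."""
    pair_counts = Counter(zip(sequence[:-1], sequence[1:]))
    first_counts = Counter(sequence[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy familiarization stream built from two 'words' (AB and CD).
stream = list("ABCDABABCDCDAB")
probs = transitional_probabilities(stream)
print(probs[("A", "B")])   # within-word transition: high
print(probs[("B", "C")])   # between-word transition: lower
```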
Collapse
|
38
|
A neurocomputational analysis of the sound-induced flash illusion. Neuroimage 2014; 92:248-66. [DOI: 10.1016/j.neuroimage.2014.02.001] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2013] [Revised: 01/14/2014] [Accepted: 02/01/2014] [Indexed: 11/18/2022] Open
|
39
|
Jaekl P, Pérez-Bellido A, Soto-Faraco S. On the 'visual' in 'audio-visual integration': a hypothesis concerning visual pathways. Exp Brain Res 2014; 232:1631-8. [PMID: 24699769 DOI: 10.1007/s00221-014-3927-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2013] [Accepted: 03/19/2014] [Indexed: 11/28/2022]
Abstract
It is now widely accepted that crossmodal interactions can enhance sensory processing. Such benefits are often exemplified by the neural response amplification reported in physiological studies conducted with animals, which parallels behavioural demonstrations of sound-driven improvement in visual tasks in humans. Yet, a good deal of controversy still surrounds the nature and interpretation of these human psychophysical studies. Here, we consider the interpretation of crossmodal enhancement findings in the light of the functional as well as anatomical specialization of the magno- and parvocellular visual pathways, whose paramount relevance has been well established in visual research but often overlooked in crossmodal research. We contend that a more explicit consideration of this important visual division may resolve some current controversies and help optimize the design of future crossmodal research.
Collapse
Affiliation(s)
- Philip Jaekl
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA
| | | | | |
Collapse
|
40
|
Barense MD, Erez J, Ma H, Cusack R. Resources required for processing ambiguous complex features in vision and audition are modality specific. Cogn Affect Behav Neurosci 2014; 14:336-353. [PMID: 24022792 DOI: 10.3758/s13415-013-0207-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Processing multiple complex features to create cohesive representations of objects is an essential aspect of both the visual and auditory systems. It is currently unclear whether these processes are entirely modality specific or whether there are amodal processes that contribute to complex object processing in both vision and audition. We investigated this using a dual-stream target detection task in which two concurrent streams of novel visual or auditory stimuli were presented. We manipulated the degree to which each stream taxed processing conjunctions of complex features. In two experiments, we found that concurrent visual tasks that both taxed conjunctive processing strongly interfered with each other but that concurrent auditory and visual tasks that both taxed conjunctive processing did not. These results suggest that resources for processing conjunctions of complex features within vision and audition are modality specific.
Collapse
|
41
|
Rowland BA, Stein BE. A model of the temporal dynamics of multisensory enhancement. Neurosci Biobehav Rev 2013; 41:78-84. [PMID: 24374382 DOI: 10.1016/j.neubiorev.2013.12.003] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2013] [Revised: 11/04/2013] [Accepted: 12/10/2013] [Indexed: 11/29/2022]
Abstract
The senses transduce different forms of environmental energy, and the brain synthesizes information across them to enhance responses to salient biological events. We hypothesize that the potency of multisensory integration is attributable to the convergence of independent and temporally aligned signals derived from cross-modal stimulus configurations onto multisensory neurons. The temporal profile of multisensory integration in neurons of the deep superior colliculus (SC) is consistent with this hypothesis. The responses of these neurons to visual, auditory, and combinations of visual-auditory stimuli reveal that multisensory integration takes place in real-time; that is, the input signals are integrated as soon as they arrive at the target neuron. Interactions between cross-modal signals may appear to reflect linear or nonlinear computations on a moment-by-moment basis, the aggregate of which determines the net product of multisensory integration. Modeling observations presented here suggest that the early nonlinear components of the temporal profile of multisensory integration can be explained with a simple spiking neuron model, and do not require more sophisticated assumptions about the underlying biology. A transition from nonlinear "super-additive" computation to linear, additive computation can be accomplished via scaled inhibition. The findings provide a set of design constraints for artificial implementations seeking to exploit the basic principles and potency of biological multisensory integration in contexts of sensory substitution or augmentation.
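The claim that a simple spiking neuron model suffices is easy to illustrate in a generic way: a leaky integrate-and-fire unit whose individual inputs are subthreshold but whose combined input is suprathreshold yields super-additive spike counts without any dedicated multisensory machinery. The sketch below is such a generic illustration under arbitrary, assumed parameters; it is not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(4)

def spike_count(i_visual, i_auditory, n_trials=500, n_steps=300,
                tau=20.0, dt=1.0, threshold=1.0, noise=0.05):
    """Mean spike count of a leaky integrate-and-fire unit driven by two inputs."""
    counts = np.zeros(n_trials)
    for trial in range(n_trials):
        v, spikes = 0.0, 0
        for _ in range(n_steps):
            v += (-v + i_visual + i_auditory) / tau * dt \
                 + noise * np.sqrt(dt) * rng.standard_normal()
            if v >= threshold:
                spikes += 1
                v = 0.0              # reset after a spike
        counts[trial] = spikes
    return counts.mean()

v_only  = spike_count(0.7, 0.0)      # subthreshold on its own
a_only  = spike_count(0.0, 0.7)      # subthreshold on its own
av_pair = spike_count(0.7, 0.7)      # combined input exceeds threshold
print(f"V alone: {v_only:.2f}, A alone: {a_only:.2f}, AV: {av_pair:.2f}")
print("superadditive" if av_pair > v_only + a_only else "additive or sub-additive")
```

Adding an inhibitory term that grows with the total input would push the combined response back toward additivity, in the spirit of the scaled inhibition mentioned above.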
Collapse
Affiliation(s)
| | - Barry E Stein
- Wake Forest School of Medicine, Winston-Salem, NC 27157, United States.
| |
Collapse
|
42
|
Ghose D, Wallace MT. Heterogeneity in the spatial receptive field architecture of multisensory neurons of the superior colliculus and its effects on multisensory integration. Neuroscience 2013; 256:147-62. [PMID: 24183964 DOI: 10.1016/j.neuroscience.2013.10.044] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2013] [Revised: 10/08/2013] [Accepted: 10/22/2013] [Indexed: 11/15/2022]
Abstract
Multisensory integration has been widely studied in neurons of the mammalian superior colliculus (SC). This has led to the description of various determinants of multisensory integration, including those based on stimulus- and neuron-specific factors. The most widely characterized of these illustrate the importance of the spatial and temporal relationships of the paired stimuli as well as their relative effectiveness in eliciting a response in determining the final integrated output. Although these stimulus-specific factors have generally been considered in isolation (i.e., manipulating stimulus location while holding all other factors constant), they have an intrinsic interdependency that has yet to be fully elucidated. For example, changes in stimulus location will likely also impact both the temporal profile of response and the effectiveness of the stimulus. The importance of better describing this interdependency is further reinforced by the fact that SC neurons have large receptive fields, and that responses at different locations within these receptive fields are far from equivalent. To address these issues, the current study was designed to examine the interdependency between the stimulus factors of space and effectiveness in dictating the multisensory responses of SC neurons. The results show that neuronal responsiveness changes dramatically with changes in stimulus location - highlighting a marked heterogeneity in the spatial receptive fields of SC neurons. More importantly, this receptive field heterogeneity played a major role in the integrative product exhibited by stimulus pairings, such that pairings at weakly responsive locations of the receptive fields resulted in the largest multisensory interactions. Together these results provide greater insight into the interrelationship of the factors underlying multisensory integration in SC neurons, and may have important mechanistic implications for multisensory integration and the role it plays in shaping SC-mediated behaviors.
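The "integrative product" at a given receptive-field location is conventionally summarized by the interactive index, i.e. the percentage change of the multisensory response relative to the best unisensory response. A minimal sketch with invented spike counts follows; the values are illustrative, not data from the study.

```python
def interactive_index(multisensory, best_unisensory):
    """Percent enhancement of the multisensory response over the best unisensory one."""
    return 100.0 * (multisensory - best_unisensory) / best_unisensory

# Illustrative mean spike counts at two receptive-field locations.
print(interactive_index(multisensory=6.0, best_unisensory=2.0))    # weakly responsive site: +200%
print(interactive_index(multisensory=14.0, best_unisensory=12.0))  # strongly responsive site: ~+17%
```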
Collapse
Affiliation(s)
- D Ghose
- Department of Psychology, Vanderbilt University, Nashville, TN, United States; Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN, United States.
| | - M T Wallace
- Department of Psychology, Vanderbilt University, Nashville, TN, United States; Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN, United States; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States; Department of Psychiatry, Vanderbilt University, Nashville, TN, United States; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, United States
| |
Collapse
|
43
|
Mossbridge JA, Grabowecky M, Suzuki S. Seeing the song: left auditory structures may track auditory-visual dynamic alignment. PLoS One 2013; 8:e77201. [PMID: 24194873 PMCID: PMC3806747 DOI: 10.1371/journal.pone.0077201] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2013] [Accepted: 09/08/2013] [Indexed: 11/18/2022] Open
Abstract
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.
Collapse
Affiliation(s)
- Julia A. Mossbridge
- Department of Psychology, Northwestern University, Evanston, Illinois, United States of America
| | - Marcia Grabowecky
- Department of Psychology, Northwestern University, Evanston, Illinois, United States of America
- Interdepartmental Neuroscience Program, Northwestern University, Evanston, Illinois, United States of America
| | - Satoru Suzuki
- Department of Psychology, Northwestern University, Evanston, Illinois, United States of America
- Interdepartmental Neuroscience Program, Northwestern University, Evanston, Illinois, United States of America
| |
Collapse
|
44
|
Abstract
The combined use of multisensory signals is often beneficial. Based on neuronal recordings in the superior colliculus of cats, three basic rules were formulated to describe the effectiveness of multisensory signals: the enhancement of neuronal responses to multisensory compared with unisensory signals is largest when signals occur at the same location ("spatial rule"), when signals are presented at the same time ("temporal rule"), and when signals are rather weak ("principle of inverse effectiveness"). These rules are also considered with respect to multisensory benefits as observed with behavioral measures, but do they capture these benefits best? To uncover the principles that rule benefits in multisensory behavior, we here investigated the classical redundant signal effect (RSE; i.e., the speedup of response times in multisensory compared with unisensory conditions) in humans. Based on theoretical considerations using probability summation, we derived two alternative principles to explain the effect. First, the "principle of congruent effectiveness" states that the benefit in multisensory behavior (here the speedup of response times) is largest when behavioral performance in corresponding unisensory conditions is similar. Second, the "variability rule" states that the benefit is largest when performance in corresponding unisensory conditions is unreliable. We then tested these predictions in two experiments, in which we manipulated the relative onset and the physical strength of distinct audiovisual signals. Our results, which are based on a systematic analysis of response time distributions, show that the RSE follows these principles very well, thereby providing compelling evidence in favor of probability summation as the underlying combination rule.
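Under probability summation, the predicted redundant-signals RT distribution is that of the minimum of the two unisensory RTs, so the expected speedup is largest when the two unisensory distributions overlap (similar performance) and when they are variable. A minimal simulation illustrating the first of these principles is sketched below; all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def race_benefit(mean_a, mean_v, sd=40.0, n=100_000):
    """Mean RT speedup (ms) predicted by an independent race between two channels."""
    rt_a = rng.normal(mean_a, sd, n)
    rt_v = rng.normal(mean_v, sd, n)
    redundant = np.minimum(rt_a, rt_v)        # the faster channel wins on each trial
    return min(mean_a, mean_v) - redundant.mean()

print(f"similar unisensory RTs:    {race_benefit(300, 300):.1f} ms benefit")
print(f"dissimilar unisensory RTs: {race_benefit(300, 380):.1f} ms benefit")
```

Increasing `sd` while holding the means fixed likewise enlarges the predicted benefit, which corresponds to the variability rule described above.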
Collapse
|
45
|
Spence C. Just how important is spatial coincidence to multisensory integration? Evaluating the spatial rule. Ann N Y Acad Sci 2013; 1296:31-49. [DOI: 10.1111/nyas.12121] [Citation(s) in RCA: 115] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Affiliation(s)
- Charles Spence
- Department of Experimental Psychology, Oxford University
| |
Collapse
|
46
|
Van Barneveld DCPBM, Van Wanrooij MM. The influence of static eye and head position on the ventriloquist effect. Eur J Neurosci 2013; 37:1501-10. [PMID: 23463919 DOI: 10.1111/ejn.12176] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2012] [Revised: 12/20/2012] [Accepted: 01/30/2013] [Indexed: 11/28/2022]
Abstract
Orienting responses to audiovisual events have shorter reaction times and better accuracy and precision when images and sounds in the environment are aligned in space and time. How the brain constructs an integrated audiovisual percept is a computational puzzle because the auditory and visual senses are represented in different reference frames: the retina encodes visual locations with respect to the eyes, whereas the sound localisation cues are referenced to the head. In the well-known ventriloquist effect, the auditory spatial percept of the ventriloquist's voice is attracted toward the synchronous visual image of the dummy, but does this visual bias on sound localisation operate in a common reference frame by correctly taking into account eye and head position? Here we studied this question by independently varying initial eye and head orientations, and the amount of audiovisual spatial mismatch. Human subjects pointed head and/or gaze to auditory targets in elevation, and were instructed to ignore co-occurring visual distracters. The results demonstrate that different initial head and eye orientations are accurately and appropriately incorporated into an audiovisual response. Effectively, sounds and images are perceptually fused according to their physical locations in space, independent of an observer's point of view. Implications for neurophysiological findings and modelling efforts that aim to reconcile sensory and motor signals for goal-directed behaviour are discussed.
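Fusing an eye-centred visual estimate with a head-centred auditory estimate in a common frame only requires adding the relevant orientation signals before combining the estimates. The sketch below shows that bookkeeping for elevation, with arbitrary example angles in degrees and an assumed fixed visual weight; it is a simplified illustration, not the model used in the study.

```python
def fuse_audiovisual(sound_re_head, image_re_eye, eye_in_head, head_in_space,
                     w_visual=0.8):
    """Combine auditory (head-centred) and visual (eye-centred) elevation estimates
    in world coordinates, weighting the (usually more reliable) visual estimate."""
    sound_in_space = sound_re_head + head_in_space
    image_in_space = image_re_eye + eye_in_head + head_in_space
    return w_visual * image_in_space + (1 - w_visual) * sound_in_space

# Example: eyes rotated 10 deg up in the head, head pitched 5 deg up in space.
print(fuse_audiovisual(sound_re_head=15.0, image_re_eye=12.0,
                       eye_in_head=10.0, head_in_space=5.0))
```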
Collapse
Affiliation(s)
- Denise C P B M Van Barneveld
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, P.O. Box 9010, 6500 GL, Nijmegen, The Netherlands
| | | |
Collapse
|
47
|
Modeling Multisensory Processes in Saccadic Responses. Front Neurosci 2013. [DOI: 10.1201/9781439812174-18] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] Open
|
48
|
Impact of the spatial congruence of redundant targets on within-modal and cross-modal integration. Exp Brain Res 2012. [DOI: 10.1007/s00221-012-3308-0] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
49
|
Ghose D, Barnett ZP, Wallace MT. Impact of response duration on multisensory integration. J Neurophysiol 2012; 108:2534-44. [PMID: 22896723 DOI: 10.1152/jn.00286.2012] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Multisensory neurons in the superior colliculus (SC) have been shown to have large receptive fields that are heterogeneous in nature. These neurons have the capacity to integrate their different sensory inputs, a process that has been shown to depend on the physical characteristics of the stimuli that are combined (i.e., spatial and temporal relationship and relative effectiveness). Recent work has highlighted the interdependence of these factors in driving multisensory integration, adding a layer of complexity to our understanding of multisensory processes. In the present study our goal was to add to this understanding by characterizing how stimulus location impacts the temporal dynamics of multisensory responses in cat SC neurons. The results illustrate that locations within the spatial receptive fields (SRFs) of these neurons can be divided into those showing short-duration responses and long-duration response profiles. Most importantly, discharge duration appears to be a good determinant of multisensory integration, such that short-duration responses are typically associated with a high magnitude of multisensory integration (i.e., superadditive responses) while long-duration responses are typically associated with low integrative capacity. These results further reinforce the complexity of the integrative features of SC neurons and show that the large SRFs of these neurons are characterized by vastly differing temporal dynamics, dynamics that strongly shape the integrative capacity of these neurons.
Collapse
Affiliation(s)
- Dipanwita Ghose
- Department of Psychology, Vanderbilt University, Nashville, Tennessee 37240, USA.
| | | | | |
Collapse
|
50
|
Esposito A, Esposito AM. On the recognition of emotional vocal expressions: motivations for a holistic approach. Cogn Process 2012; 13 Suppl 2:541-50. [PMID: 22872508 DOI: 10.1007/s10339-012-0516-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2012] [Accepted: 07/11/2012] [Indexed: 10/28/2022]
Abstract
Human beings seem to be able to recognize emotions from speech very well, and information communication technology aims to implement machines and agents that can do the same. However, to automatically recognize affective states from speech signals, it is necessary to solve two main technological problems. The first concerns the identification of effective and efficient processing algorithms capable of capturing emotional acoustic features from speech sentences. The second focuses on finding computational models able to classify a given set of emotional states with an accuracy approaching that of human listeners. This paper surveys these topics and provides some insights for a holistic approach to the automatic analysis, recognition and synthesis of affective states.
Collapse
Affiliation(s)
- Anna Esposito
- Department of Psychology, Second University of Naples, Caserta, Italy.
| | | |
Collapse
|