1. Cary E, Lahdesmaki I, Badde S. Audiovisual simultaneity windows reflect temporal sensory uncertainty. Psychon Bull Rev 2024;31:2170-2179. PMID: 38388825; PMCID: PMC11543760; DOI: 10.3758/s13423-024-02478-4.
Abstract
The ability to judge the temporal alignment of visual and auditory information is a prerequisite for multisensory integration and segregation. However, each temporal measurement is subject to error. Thus, when judging whether a visual and auditory stimulus were presented simultaneously, observers must rely on a subjective decision boundary to distinguish between measurement error and truly misaligned audiovisual signals. Here, we tested whether these decision boundaries are relaxed with increasing temporal sensory uncertainty, i.e., whether participants make the same type of adjustment an ideal observer would make. Participants judged the simultaneity of audiovisual stimulus pairs with varying temporal offset, while being immersed in different virtual environments. To obtain estimates of participants' temporal sensory uncertainty and simultaneity criteria in each environment, an independent-channels model was fitted to their simultaneity judgments. In two experiments, participants' simultaneity decision boundaries were predicted by their temporal uncertainty, which varied unsystematically with the environment. Hence, observers used a flexibly updated estimate of their own audiovisual temporal uncertainty to establish subjective criteria of simultaneity. This finding implies that, under typical circumstances, audiovisual simultaneity windows reflect an observer's cross-modal temporal uncertainty.
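The independent-channels model referred to above can be made concrete with a short fit. Below is a minimal sketch, assuming simulated response counts and standard SciPy routines: the internal audiovisual asynchrony measurement is modeled as the true SOA plus Gaussian noise (spread sigma, latency shift mu), and "simultaneous" is reported whenever the measurement falls within a criterion window [-c, +c]. The data values and parameter names are illustrative, not the authors' code or dataset.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Independent-channels simultaneity-judgment model: the internal SOA
# measurement is the true SOA plus Gaussian noise (SD sigma, shift mu);
# "simultaneous" is reported when the measurement falls within [-c, +c].
def p_simultaneous(soa, mu, sigma, c):
    return norm.cdf((c - soa - mu) / sigma) - norm.cdf((-c - soa - mu) / sigma)

def neg_log_likelihood(params, soa, n_simult, n_trials):
    mu, sigma, c = params
    p = np.clip(p_simultaneous(soa, mu, sigma, c), 1e-9, 1 - 1e-9)
    return -np.sum(n_simult * np.log(p) + (n_trials - n_simult) * np.log(1 - p))

# Hypothetical data: counts of "simultaneous" responses per SOA (ms).
soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])
n_trials = np.full(soa.shape, 40)
n_simult = np.array([2, 6, 18, 32, 38, 33, 20, 8, 3])

fit = minimize(neg_log_likelihood, x0=[0.0, 100.0, 150.0],
               args=(soa, n_simult, n_trials),
               bounds=[(-200, 200), (10, 500), (10, 500)])
mu_hat, sigma_hat, c_hat = fit.x
print(f"latency shift mu = {mu_hat:.1f} ms, "
      f"uncertainty sigma = {sigma_hat:.1f} ms, criterion c = {c_hat:.1f} ms")
```

In this parameterization, the fitted c is the subjective simultaneity boundary and sigma the temporal sensory uncertainty; the study's question is whether c tracks sigma across environments.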
Affiliation(s)
- Emma Cary: Department of Psychology, Tufts University, Medford, MA 02155, USA
- Ilona Lahdesmaki: Department of Psychology, Tufts University, Medford, MA 02155, USA
- Stephanie Badde: Department of Psychology, Tufts University, Medford, MA 02155, USA
2. Mafi F, Tang MF, Afarinesh MR, Ghasemian S, Sheibani V, Arabzadeh E. Temporal order judgment of multisensory stimuli in rat and human. Front Behav Neurosci 2023;16:1070452. PMID: 36710957; PMCID: PMC9879721; DOI: 10.3389/fnbeh.2022.1070452.
Abstract
We do not fully understand the resolution at which temporal information is processed by different species. Here, we employed a temporal order judgment (TOJ) task in rats and humans to test the temporal precision with which these species can detect the order of presentation of simple stimuli across the two modalities of vision and audition. Both species reported the order of audiovisual stimuli presented from a central location at a range of stimulus onset asynchronies (SOAs). While both species could reliably distinguish the temporal order of stimuli based on their sensory content (i.e., the modality label), rats outperformed humans at short SOAs (less than 100 ms), whereas humans outperformed rats at long SOAs (greater than 100 ms). Moreover, rats produced faster responses than humans. The reaction time data further revealed key differences in the decision process between the two species: at longer SOAs, reaction times increased in rats but decreased in humans. Finally, drift-diffusion modeling allowed us to isolate the contributions of parameters including evidence accumulation rate, lapse rate, and bias to the sensory decision. Consistent with the psychophysical findings, the model revealed higher temporal sensitivity and a higher lapse rate in rats than in humans. These findings suggest that the two species applied different strategies for making perceptual decisions in the context of a multimodal TOJ task.
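The drift-diffusion account mentioned above can be illustrated with a plain NumPy simulation. This is a generic sketch of the model class under assumed parameter values (drift rate scaling with SOA, boundary separation, non-decision time, lapse rate), not the authors' fitting code.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm_trial(drift, boundary, non_decision, lapse, dt=1e-3, noise=1.0):
    """One TOJ trial: evidence drifts toward +/- boundary; the sign of the
    crossed boundary gives the choice (1 = 'audio first', 0 = 'visual first')."""
    if rng.random() < lapse:                 # lapse: random guess, arbitrary RT
        return rng.integers(0, 2), rng.uniform(0.2, 1.5)
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return int(x > 0), t + non_decision

# Drift scales with SOA: larger asynchronies give stronger order evidence.
for soa in (25, 100, 400):                   # ms, audio leading
    trials = [simulate_ddm_trial(drift=0.01 * soa, boundary=1.0,
                                 non_decision=0.3, lapse=0.05)
              for _ in range(500)]
    acc = np.mean([choice for choice, _ in trials])
    rt = np.mean([t for _, t in trials])
    print(f"SOA {soa:3d} ms: accuracy {acc:.2f}, mean RT {rt:.3f} s")
```

Fitting rather than simulating such a model recovers the accumulation rate, lapse, and bias parameters the abstract refers to.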
Affiliation(s)
- Fatemeh Mafi: Neuroscience Research Center and Cognitive Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Matthew F. Tang: Eccles Institute of Neuroscience, John Curtin School of Medical Research, The Australian National University, Canberra, ACT, Australia
- Mohammad Reza Afarinesh: Neuroscience Research Center and Cognitive Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Sadegh Ghasemian: Neuroscience Research Center and Cognitive Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Vahid Sheibani: Neuroscience Research Center and Cognitive Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Ehsan Arabzadeh: Neuroscience Research Center and Cognitive Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran; Eccles Institute of Neuroscience, John Curtin School of Medical Research, The Australian National University, Canberra, ACT, Australia
3. He Y, Yang T, He C, Sun K, Guo Y, Wang X, Bai L, Xue T, Xu T, Guo Q, Liao Y, Liu X, Wu S. Effects of audiovisual interactions on working memory: Use of the combined N-back + Go/NoGo paradigm. Front Psychol 2023;14:1080788. PMID: 36874804; PMCID: PMC9982107; DOI: 10.3389/fpsyg.2023.1080788.
Abstract
Background: Approximately 94% of the sensory information humans acquire originates from the visual and auditory channels. Such information can be temporarily stored and processed in working memory, but this system has limited capacity. Working memory plays an important role in higher cognitive functions and is controlled by the central executive function. Therefore, elucidating the influence of the central executive function on information processing in working memory, such as in audiovisual integration, is of great scientific and practical importance.
Purpose: This study used a paradigm combining N-back and Go/NoGo tasks, with simple Arabic numerals as stimuli, to investigate the effects of cognitive load (modulated by varying the magnitude of N) and audiovisual integration on the central executive function of working memory, as well as their interaction.
Methods: Sixty college students aged 17-21 years performed both unimodal and bimodal tasks evaluating the central executive function of working memory. The order of the three cognitive tasks was pseudorandomized, and a Latin square design was used to account for order effects. Working memory performance, i.e., reaction time and accuracy, was compared between unimodal and bimodal tasks with repeated-measures analysis of variance (ANOVA).
Results: As cognitive load increased, the presence of auditory stimuli interfered with visual working memory with a moderate to large effect size; likewise, the presence of visual stimuli interfered with auditory working memory with a moderate to large effect size.
Conclusion: Our study supports the theory of competing resources, i.e., that visual and auditory information interfere with each other, with the magnitude of this interference primarily related to cognitive load.
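The repeated-measures ANOVA described in the Methods can be sketched as follows, assuming a long-format table of simulated accuracy scores and the AnovaRM routine from statsmodels; the column names and factor levels are illustrative, not the study's data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)

# Hypothetical long-format data: accuracy per subject, modality, and load.
rows = []
for subj in range(60):
    for modality in ("unimodal", "bimodal"):
        for load in ("1-back", "2-back"):
            base = 0.92 if load == "1-back" else 0.80
            penalty = 0.05 if (modality == "bimodal" and load == "2-back") else 0.0
            rows.append({"subject": subj, "modality": modality, "load": load,
                         "accuracy": base - penalty + rng.normal(0, 0.04)})
df = pd.DataFrame(rows)

# Two within-subject factors; the modality-by-load interaction indexes
# load-dependent audiovisual interference.
result = AnovaRM(df, depvar="accuracy", subject="subject",
                 within=["modality", "load"]).fit()
print(result)
```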
Affiliation(s)
- Yang He: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Tianqi Yang: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Chunyan He: Department of Nursing, Fourth Military Medical University, Xi'an, China
- Kewei Sun: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Yaning Guo: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Xiuchao Wang: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Lifeng Bai: Faculty of Humanities and Social Sciences, Aviation University of Air Force, Changchun, China
- Ting Xue: Faculty of Humanities and Social Sciences, Aviation University of Air Force, Changchun, China
- Tao Xu: Psychology Section, Secondary Sanatorium of Air Force Healthcare Center for Special Services, Hangzhou, China
- Qingjun Guo: Psychology Section, Secondary Sanatorium of Air Force Healthcare Center for Special Services, Hangzhou, China
- Yang Liao: Air Force Medical Center, Air Force Medical University, Beijing, China
- Xufeng Liu: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Shengjun Wu: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
4. Carlini A, Bigand E. Does Sound Influence Perceived Duration of Visual Motion? Front Psychol 2021;12:751248. PMID: 34925155; PMCID: PMC8675101; DOI: 10.3389/fpsyg.2021.751248.
Abstract
Multimodal perception is a key factor in obtaining a rich and meaningful representation of the world. However, how individual stimuli combine to determine the overall percept remains an open question. The present work investigates the effect of sound on the bimodal perception of motion. In a time reproduction task, participants were presented with a moving visual target paired with a concurrent sound. Particular attention was paid to the structure of both the auditory and the visual stimuli. Four different laws of motion were tested for the visual target, one of which was biological. Nine different sound profiles were tested, ranging from a simple constant sound to more variable and complex pitch profiles, always presented synchronously with the motion. Participants' responses show that constant sounds produce the worst duration estimation performance, even worse than the silent condition, whereas more complex sounds yield significantly better performance. The structure of the visual stimulus and that of the auditory stimulus appear to affect performance independently. Biological motion provides the best performance, while constant-velocity motion provides the worst. The results clearly show that a concurrent sound influences the unified perception of motion, and that the type and magnitude of the bias depend on the structure of the sound stimulus. Contrary to expectations, the best performance is not produced by the simplest stimuli, but rather by more complex stimuli that are richer in information.
Affiliation(s)
- Alessandro Carlini: Laboratory for Research on Learning and Development, CNRS UMR 5022, University of Burgundy, Dijon, France
- Emmanuel Bigand: Laboratory for Research on Learning and Development, CNRS UMR 5022, University of Burgundy, Dijon, France
5. Chau E, Murray CA, Shams L. Hierarchical drift diffusion modeling uncovers multisensory benefit in numerosity discrimination tasks. PeerJ 2021;9:e12273. PMID: 34760356; PMCID: PMC8556708; DOI: 10.7717/peerj.12273.
Abstract
Studies of accuracy and reaction time in decision making often observe a speed-accuracy tradeoff, in which either accuracy or reaction time is sacrificed for the other. While this effect may mask certain multisensory benefits in performance when accuracy and reaction time are measured separately, drift diffusion models (DDMs) are able to consider both simultaneously. However, DDMs are often limited by the large sample sizes required for reliable parameter estimation. One solution to this restriction is hierarchical Bayesian estimation of DDM parameters. Here, we use hierarchical drift diffusion models (HDDMs) to reveal a multisensory advantage in auditory-visual numerosity discrimination tasks. By fitting this model to a modestly sized dataset, we also demonstrate that large sample sizes are not necessary for reliable parameter estimation.
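A minimal sketch of the hierarchical fitting step, assuming the Python hddm package and a trial-level table with the rt, response, and subj_idx columns that package expects; the condition coding and file name are hypothetical, not the authors' dataset.

```python
import hddm

# Hypothetical CSV with one row per trial: subj_idx, rt (s), response (0/1),
# and condition ("auditory", "visual", "audiovisual").
data = hddm.load_csv("numerosity_trials.csv")

# Hierarchical model: subject-level parameters are drawn from group-level
# distributions, which is what makes modest sample sizes workable.
# Letting drift rate v depend on condition tests for a multisensory benefit.
model = hddm.HDDM(data, depends_on={"v": "condition"})
model.sample(2000, burn=500)

stats = model.gen_stats()
print(stats.loc[["v(auditory)", "v(visual)", "v(audiovisual)"]])
```

A higher group-level drift rate in the audiovisual condition than in either unisensory condition would be the multisensory advantage the abstract describes.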
Affiliation(s)
- Edwin Chau: Department of Mathematics, University of California, Los Angeles, Los Angeles, California, USA
- Carolyn A. Murray: Department of Psychology, University of California, Los Angeles, Los Angeles, California, USA
- Ladan Shams: Department of Psychology, BioEngineering, and Interdepartmental Neuroscience Program, University of California, Los Angeles, Los Angeles, California, USA
6. Horsfall RP. Narrowing of the Audiovisual Temporal Binding Window Due To Perceptual Training Is Specific to High Visual Intensity Stimuli. Iperception 2021;12:2041669520978670. PMID: 33680418; PMCID: PMC7897829; DOI: 10.1177/2041669520978670.
Abstract
The temporal binding window (TBW), which reflects the range of temporal offsets over which audiovisual stimuli are combined to form a singular percept, can be reduced through training. Our research aimed to investigate whether training-induced reductions in TBW size transfer across stimulus intensities. A total of 32 observers performed simultaneity judgements at two visual intensities with a fixed auditory intensity, before and after receiving audiovisual TBW training at just one of these two intensities. We show that training individuals with a high-intensity visual stimulus reduced the size of the TBW for bright stimuli, but this improvement did not transfer to dim stimuli. The reduction in TBW can be explained by shifts in decision criteria. Those trained with the dim visual stimuli, however, showed no reduction in TBW. Our main finding is that perceptual improvements following training are specific to high-intensity stimuli, potentially highlighting limitations of proposed TBW training procedures.
Affiliation(s)
- Ryan P. Horsfall: Division of Neuroscience & Experimental Psychology, University of Manchester, Manchester M13 9PL, United Kingdom
7. Chien SE, Chen YC, Matsumoto A, Yamashita W, Shih KT, Tsujimura SI, Yeh SL. The modulation of background color on perceiving audiovisual simultaneity. Vision Res 2020;172:1-10. PMID: 32388209; DOI: 10.1016/j.visres.2020.04.009.
Abstract
Perceiving simultaneity is critical in integrating visual and auditory signals that give rise to a unified perception. We examined whether background color modulates people's perception of audiovisual simultaneity. Two hypotheses were proposed and examined: (1) the red-impairment hypothesis: visual processing speed deteriorates when viewing a red background because the magnocellular system is inhibited by red light; and (2) the blue-enhancement hypothesis: the detection of both visual and auditory signals is enhanced when viewing a blue background because it stimulates the blue-light sensitive intrinsically photosensitive retinal ganglion cells (ipRGCs), which trigger a higher alert state. Participants were exposed to different backgrounds while performing an audiovisual simultaneity judgment (SJ) task: a flash and a beep were presented at pre-designated stimulus onset asynchronies (SOAs) and participants judged whether or not the two stimuli were presented simultaneously. Experiment 1 demonstrated a shift of the point of subjective simultaneity (PSS) toward the visual-leading condition in the red compared to the blue background when the flash was presented in the periphery. In Experiment 2, the stimulation of ipRGCs was specifically manipulated to test the blue-enhancement hypothesis. The results showed no support for this hypothesis, perhaps due to top-down cortical modulations. Taken together, the shift of PSS toward the visual-leading condition in the red background was attributed to impaired visual processing speed with respect to auditory processing speed, caused by the inhibition of the magnocellular system under red light.
Affiliation(s)
- Sung-En Chien: Department of Psychology, National Taiwan University, Taipei, Taiwan
- Yi-Chuan Chen: Department of Medicine, Mackay Medical College, New Taipei City, Taiwan
- Akiko Matsumoto: Faculty of Science and Engineering, Kagoshima University, Kagoshima, Japan
- Wakayo Yamashita: Faculty of Science and Engineering, Kagoshima University, Kagoshima, Japan
- Kuang-Tsu Shih: Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan
- Sei-Ichi Tsujimura: Faculty of Design and Architecture, Nagoya City University, Nagoya, Japan
- Su-Ling Yeh: Department of Psychology, National Taiwan University, Taipei, Taiwan; Graduate Institute of Brain and Mind Sciences, National Taiwan University, Taipei, Taiwan; Neurobiology and Cognitive Science Center, National Taiwan University, Taipei, Taiwan; Center for Artificial Intelligence and Advanced Robotics, National Taiwan University, Taipei, Taiwan; Center for the Advanced Study in the Behavioral Sciences, Stanford University, USA
8. Rapid recalibration to audiovisual asynchrony follows the physical, not the perceived, temporal order. Atten Percept Psychophys 2018;80:2060-2068. PMID: 29968078; DOI: 10.3758/s13414-018-1540-9.
Abstract
In natural scenes, audiovisual events deriving from the same source are synchronized at their origin. However, from the perspective of the observer, there are likely to be significant multisensory delays due to physical and neural latencies. Fortunately, our brain appears to compensate for the resulting latency differences by rapidly adapting to asynchronous audiovisual events by shifting the point of subjective synchrony (PSS) in the direction of the leading modality of the most recent event. Here we examined whether it is the perceived modality order of this prior lag or its physical order that determines the direction of the subsequent rapid recalibration. On each experimental trial, a brief tone pip and flash were presented across a range of stimulus onset asynchronies (SOAs). The participants' task alternated over trials: On adaptor trials, audition either led or lagged vision with fixed SOAs, and participants judged the order of the audiovisual event; on test trials, the SOA as well as the modality order varied randomly, and participants judged whether or not the event was synchronized. For test trials, we showed that the PSS shifted in the direction of the physical rather than the perceived (reported) modality order of the preceding adaptor trial. These results suggest that rapid temporal recalibration is determined by the physical timing of the preceding events, not by one's prior perceptual decisions.
9. Sanders P, Thompson B, Corballis P, Searchfield G. On the Timing of Signals in Multisensory Integration and Crossmodal Interactions: a Scoping Review. Multisens Res 2019;32:533-573. PMID: 31137004; DOI: 10.1163/22134808-20191331.
Abstract
A scoping review was undertaken to explore research investigating early interactions and integration of auditory and visual stimuli in the human brain. The focus was on methods used to study low-level multisensory temporal processing using simple stimuli in humans, and how this research has informed our understanding of multisensory perception. The study of multisensory temporal processing probes how the relative timing between signals affects perception. Several tasks, illusions, computational models, and neuroimaging techniques were identified in the literature search. Research into early audiovisual temporal processing in special populations was also reviewed. Recent research has continued to provide support for early integration of crossmodal information. These early interactions can influence higher-level factors, and vice versa. Temporal relationships between auditory and visual stimuli influence multisensory perception, and likely play a substantial role in solving the 'correspondence problem' (how the brain determines which sensory signals belong together, and which should be segregated).
Affiliation(s)
- Philip Sanders
- 1Section of Audiology, University of Auckland, Auckland, New Zealand
- 2Centre for Brain Research, University of Auckland, New Zealand
- 3Brain Research New Zealand - Rangahau Roro Aotearoa, New Zealand
| | - Benjamin Thompson
- 2Centre for Brain Research, University of Auckland, New Zealand
- 4School of Optometry and Vision Science, University of Auckland, Auckland, New Zealand
- 5School of Optometry and Vision Science, University of Waterloo, Waterloo, Canada
| | - Paul Corballis
- 2Centre for Brain Research, University of Auckland, New Zealand
- 6Department of Psychology, University of Auckland, Auckland, New Zealand
| | - Grant Searchfield
- 1Section of Audiology, University of Auckland, Auckland, New Zealand
- 2Centre for Brain Research, University of Auckland, New Zealand
- 3Brain Research New Zealand - Rangahau Roro Aotearoa, New Zealand
| |
10. Bazilinskyy P, de Winter J. Crowdsourced Measurement of Reaction Times to Audiovisual Stimuli With Various Degrees of Asynchrony. Hum Factors 2018;60:1192-1206. PMID: 30036098; PMCID: PMC6207992; DOI: 10.1177/0018720818787126.
Abstract
Objective: This study was designed to replicate past research concerning reaction times to audiovisual stimuli with different stimulus onset asynchrony (SOA), using a large sample of crowdsourcing respondents.
Background: Research has shown that reaction times are fastest when an auditory and a visual stimulus are presented simultaneously, and that SOA causes an increase in reaction time, with the size of the increase depending on stimulus intensity. Past research on audiovisual SOA has been conducted with small numbers of participants.
Method: Participants (N = 1,823) each performed 176 reaction time trials consisting of 29 SOA levels and three visual intensity levels, using CrowdFlower, with a compensation of US$0.20 per participant. Results were verified with a local web-in-lab study (N = 34).
Results: The results replicated past research, with a V shape of mean reaction time as a function of SOA, the V shape being stronger for lower-intensity visual stimuli. The level of SOA affected mainly the right side of the reaction time distribution, whereas the fastest 5% was hardly affected. The variability of reaction times was higher for the crowdsourcing study than for the web-in-lab study.
Conclusion: Crowdsourcing is a promising medium for reaction time research involving small temporal differences in stimulus presentation. The observed effects of SOA can be explained by an independent-channels mechanism, as well as by some participants not perceiving the auditory or visual stimulus, hardware variability, misinterpretation of the task instructions, or lapses in attention.
Application: The obtained knowledge on the distribution of reaction times may benefit the design of warning systems.
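The independent-channels explanation in the Conclusion can be illustrated with a short race simulation: each modality races to detection, and the response is triggered by whichever channel finishes first. The latency distributions below are invented for illustration, not fitted to the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

def mean_rt(soa):
    """Race between independent channels. soa = visual onset minus auditory
    onset in ms; RT is measured from the first stimulus onset."""
    a_onset, v_onset = max(-soa, 0), max(soa, 0)
    a_finish = a_onset + rng.normal(160, 30, n)   # auditory detection latency
    v_finish = v_onset + rng.normal(190, 40, n)   # visual detection latency
    return np.minimum(a_finish, v_finish).mean()

for soa in (-300, -100, 0, 100, 300):
    print(f"SOA {soa:+4d} ms -> mean RT {mean_rt(soa):6.1f} ms")
# Mean RT is smallest near SOA = 0 and rises for larger offsets: the V shape.
```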
Affiliation(s)
- Pavlo Bazilinskyy: Department of BioMechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Mekelweg 2, 2628 CD Delft, the Netherlands
11. Audiovisual integration in depth: multisensory binding and gain as a function of distance. Exp Brain Res 2018;236:1939-1951. PMID: 29700577; PMCID: PMC6010498; DOI: 10.1007/s00221-018-5274-7.
Abstract
The integration of information across sensory modalities depends on the spatiotemporal characteristics of the stimuli that are paired. Despite large variation in the distance over which events occur in our environment, relatively little is known about how stimulus-observer distance affects multisensory integration. Prior work has suggested that exteroceptive stimuli are integrated over larger temporal intervals in near relative to far space, and that larger multisensory facilitations are evident in far relative to near space. Here, we sought to examine the interrelationship between these previously established distance-related features of multisensory processing. Participants performed an audiovisual simultaneity judgment and a redundant target task in near and far space, while audiovisual stimuli were presented at a range of temporal delays (i.e., stimulus onset asynchronies). In line with the previous findings, temporal acuity was poorer in near relative to far space. Furthermore, reaction times to asynchronously presented audiovisual targets suggested a temporal window for fast detection, a range of stimulus asynchronies that was also larger in near as compared to far space. However, the range of reaction times over which multisensory response enhancement was observed was limited to a restricted range of relatively small (i.e., 150 ms) asynchronies and did not differ significantly between near and far space. Furthermore, for synchronous presentations, these distance-related (i.e., near vs. far) modulations in temporal acuity and multisensory gain correlated negatively at the individual-subject level. Thus, the findings support the conclusion that multisensory temporal binding and gain are asymmetrically modulated as a function of distance from the observer, and specify that this relationship holds specifically for temporally synchronous audiovisual stimulus presentations.
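One standard way to quantify the multisensory response enhancement discussed above is the race-model (Miller) inequality test, sketched here on simulated unisensory and audiovisual reaction times; it is a generic illustration, not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Hypothetical RT samples (ms) for auditory, visual, and audiovisual targets.
rt_a = rng.normal(260, 45, n)
rt_v = rng.normal(290, 50, n)
rt_av = rng.normal(225, 40, n)   # faster than either unisensory condition

# Miller's race-model inequality: for every t,
#   P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t).
# Violations (the audiovisual CDF exceeding the bound) indicate integration
# beyond what a race between independent channels can produce.
ts = np.linspace(100, 400, 61)
cdf = lambda rt, t: np.mean(rt[:, None] <= t, axis=0)
bound = np.minimum(cdf(rt_a, ts) + cdf(rt_v, ts), 1.0)
violation = cdf(rt_av, ts) - bound
print(f"max race-model violation: {violation.max():.3f} "
      f"at t = {ts[violation.argmax()]:.0f} ms")
```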
12. Schumann F, O'Regan JK. Sensory augmentation: integration of an auditory compass signal into human perception of space. Sci Rep 2017;7:42197. PMID: 28195187; PMCID: PMC5307328; DOI: 10.1038/srep42197.
Abstract
Bio-mimetic approaches to restoring sensory function show great promise in that they rapidly produce perceptual experience, but they have the disadvantage of being invasive. In contrast, sensory substitution approaches are non-invasive but may lead to cognitive rather than perceptual experience. Here we introduce a new non-invasive approach that, like bio-mimetic techniques, leads to fast and truly perceptual experience. Instead of building on existing circuits at the neural level, as done in bio-mimetics, we piggy-back on sensorimotor contingencies at the stimulus level. We convey head orientation to geomagnetic North, a reliable spatial relation not normally sensed by humans, by mimicking the sensorimotor contingencies of distal sounds via head-related transfer functions. We demonstrate rapid and long-lasting integration into the perception of self-rotation. Short training with amplified or reduced rotation gain in the magnetic signal can expand or compress the perceived extent of vestibular self-rotation, even when the magnetic signal is absent in the test. We argue that it is the reliability of the magnetic signal that allows vestibular spatial recalibration, and the coding scheme mimicking the sensorimotor contingencies of distal sounds that permits fast integration. Hence we propose that contingency-mimetic feedback has great potential for creating sensory augmentation devices that achieve fast and genuinely perceptual experiences.
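The coding scheme described above can be caricatured in a few lines: the bearing of geomagnetic North relative to the head is rendered as a distal sound whose interaural cues update with head movement, so turning the head changes the cues exactly as a real external source would. The ITD/ILD approximation below is a stand-in for the head-related transfer functions used in the study; all constants and names are illustrative.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, average head radius for the Woodworth ITD model

def compass_cues(head_yaw_deg, north_bearing_deg=0.0):
    """Return (itd_seconds, ild_db) placing a virtual sound source at
    geomagnetic North, given the current head yaw. As the head turns,
    the cues change as they would for a real distal source, which is
    the sensorimotor contingency the device mimics."""
    azimuth = np.deg2rad((north_bearing_deg - head_yaw_deg + 180) % 360 - 180)
    # Woodworth approximation for the interaural time difference.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth + np.sin(azimuth))
    # Crude level difference, largest for sources at +/-90 degrees.
    ild = 10.0 * np.sin(azimuth)
    return itd, ild

for yaw in (0, 45, 90, 180):
    itd, ild = compass_cues(yaw)
    print(f"head yaw {yaw:3d} deg: ITD {itd * 1e6:+7.1f} us, ILD {ild:+5.1f} dB")
```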
Affiliation(s)
- Frank Schumann: Laboratoire Psychologie de la Perception, CNRS UMR 8242, Université Paris Descartes, Paris, France
- J. Kevin O'Regan: Laboratoire Psychologie de la Perception, CNRS UMR 8242, Université Paris Descartes, Paris, France
13. Multisensory integration is independent of perceived simultaneity. Exp Brain Res 2016;235:763-775. DOI: 10.1007/s00221-016-4822-2.
14.
Abstract
We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.
15. Zeki S. Multiple asynchronous stimulus- and task-dependent hierarchies (STDH) within the visual brain's parallel processing systems. Eur J Neurosci 2016;44:2515-2527. DOI: 10.1111/ejn.13270.
Affiliation(s)
- Semir Zeki: Wellcome Laboratory of Neurobiology, University College London, London WC1E 6BT, UK
16. Noel JP, Lukowska M, Wallace M, Serino A. Multisensory simultaneity judgment and proximity to the body. J Vis 2016;16(3):21. PMID: 26891828; PMCID: PMC4777235; DOI: 10.1167/16.3.21.
Abstract
The integration of information across different sensory modalities is known to be dependent upon the statistical characteristics of the stimuli to be combined. For example, the spatial and temporal proximity of stimuli are important determinants with stimuli that are close in space and time being more likely to be bound. These multisensory interactions occur not only for singular points in space/time, but over “windows” of space and time that likely relate to the ecological statistics of real-world stimuli. Relatedly, human psychophysical work has demonstrated that individuals are highly prone to judge multisensory stimuli as co-occurring over a wide range of time—a so-called simultaneity window (SW). Similarly, there exists a spatial representation of peripersonal space (PPS) surrounding the body in which stimuli related to the body and to external events occurring near the body are highly likely to be jointly processed. In the current study, we sought to examine the interaction between these temporal and spatial dimensions of multisensory representation by measuring the SW for audiovisual stimuli through proximal–distal space (i.e., PPS and extrapersonal space). Results demonstrate that the audiovisual SWs within PPS are larger than outside PPS. In addition, we suggest that this effect is likely due to an automatic and additional computation of these multisensory events in a body-centered reference frame. We discuss the current findings in terms of the spatiotemporal constraints of multisensory interactions and the implication of distinct reference frames on this process.
17. Cecere R, Gross J, Thut G. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality. Eur J Neurosci 2016;43:1561-1568. PMID: 27003546; PMCID: PMC4915493; DOI: 10.1111/ejn.13242.
Abstract
The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration.
Affiliation(s)
- Roberto Cecere: Centre for Cognitive Neuroimaging (CCNi), Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK
- Joachim Gross: Centre for Cognitive Neuroimaging (CCNi), Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK
- Gregor Thut: Centre for Cognitive Neuroimaging (CCNi), Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK