1. Halow SJ, Hamilton A, Folmer E, MacNeilage PR. Impaired stationarity perception is associated with increased virtual reality sickness. J Vis 2023; 23:7. PMID: 38127329; PMCID: PMC10750839; DOI: 10.1167/jov.23.14.7.
Abstract
Stationarity perception refers to the ability to accurately perceive the surrounding visual environment as world-fixed during self-motion. Perception of stationarity depends on mechanisms that evaluate the congruence between retinal/oculomotor signals and head movement signals. In a series of psychophysical experiments, we systematically varied the congruence between retinal/oculomotor and head movement signals to find the range of visual gains that is compatible with perception of a stationary environment. On each trial, human subjects wearing a head-mounted display executed a yaw head movement and reported whether the visual gain was perceived to be too slow or too fast. A psychometric fit to the data across trials reveals the visual gain most compatible with stationarity (a measure of accuracy) and the sensitivity to visual gain manipulation (a measure of precision). Across experiments, we varied (1) the spatial frequency of the visual stimulus, (2) the retinal location of the visual stimulus (central vs. peripheral), and (3) fixation behavior (scene-fixed vs. head-fixed). Stationarity perception is most precise and accurate during scene-fixed fixation. Effects of spatial frequency and retinal stimulus location become evident during head-fixed fixation, when retinal image motion is increased. Virtual reality sickness, assessed using the Simulator Sickness Questionnaire, covaries with perceptual performance: decreased accuracy is associated with an increase in the nausea subscore, while decreased precision is associated with an increase in the oculomotor and disorientation subscores.
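The psychometric procedure this abstract describes, fitting a function to too-slow/too-fast judgments to extract an accuracy estimate (the gain most compatible with stationarity) and a precision estimate (the slope), can be sketched as follows. This is a minimal illustration assuming a cumulative-Gaussian psychometric function; the gain levels and response proportions are invented, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Invented data: visual gains tested and proportion of "too fast" responses.
gains = np.array([0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3])
p_too_fast = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.93, 0.98])

def psychometric(gain, mu, sigma):
    """Cumulative Gaussian: P("too fast") as a function of visual gain."""
    return norm.cdf(gain, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, gains, p_too_fast, p0=[1.0, 0.1])

# mu: gain most compatible with stationarity (accuracy);
# sigma: spread of the fit, i.e., inverse sensitivity to gain changes (precision).
print(f"PSE (stationarity gain): {mu:.3f}, slope parameter: {sigma:.3f}")
```

A mu near 1.0 would indicate accurate stationarity perception; a smaller sigma indicates greater precision.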
Affiliation(s)
- Allie Hamilton
- University of Nevada, Reno, Psychology, Reno, Nevada, USA
- Eelke Folmer
- University of Nevada, Reno, Computer Science, Reno, Nevada, USA
2. Wong SW, Crowe P. Visualisation ergonomics and robotic surgery. J Robot Surg 2023; 17:1873-1878. PMID: 37204648; PMCID: PMC10492791; DOI: 10.1007/s11701-023-01618-7.
Abstract
Stereopsis may be an advantage of robotic surgery. Perceived robotic ergonomic advantages in visualisation include better exposure, three-dimensional vision, surgeon camera control, and line of sight screen location. Other ergonomic factors relating to visualisation include stereo-acuity, vergence-accommodation mismatch, visual-perception mismatch, visual-vestibular mismatch, visuospatial ability, visual fatigue, and visual feedback to compensate for lack of haptic feedback. Visual fatigue symptoms may be related to dry eye or accommodative/binocular vision stress. Digital eye strain can be measured by questionnaires and objective tests. Management options include treatment of dry eye, correction of refractive error, and management of accommodation and vergence anomalies. Experienced robotic surgeons can use visual cues like tissue deformation and surgical tool information as surrogates for haptic feedback.
Affiliation(s)
- Shing Wai Wong
- Department of General Surgery, Prince of Wales Hospital, Sydney, NSW, Australia.
- Randwick Campus, School of Clinical Medicine, The University of New South Wales, Sydney, NSW, Australia.
- Philip Crowe
- Department of General Surgery, Prince of Wales Hospital, Sydney, NSW, Australia.
- Randwick Campus, School of Clinical Medicine, The University of New South Wales, Sydney, NSW, Australia.
3. Thurley K. Naturalistic neuroscience and virtual reality. Front Syst Neurosci 2022; 16:896251. PMID: 36467978; PMCID: PMC9712202; DOI: 10.3389/fnsys.2022.896251.
Abstract
Virtual reality (VR) is one of the techniques that have become particularly popular in neuroscience over the past few decades. VR experiments feature a closed loop between sensory stimulation and behavior: participants interact with the stimuli rather than just passively perceiving them. Several senses can be stimulated at once, and both large-scale environments and social interactions can be simulated. All of this makes VR experiences more natural than those in traditional lab paradigms. Compared with field research, a VR simulation is highly controllable and reproducible, as required of a laboratory technique used in the search for neural correlates of perception and behavior. VR is therefore considered a middle ground between ecological validity and experimental control. In this review, I explore the potential of VR in eliciting naturalistic perception and behavior in humans and non-human animals. In this context, I give an overview of recent virtual reality approaches used in neuroscientific research.
Affiliation(s)
- Kay Thurley
- Faculty of Biology, Ludwig-Maximilians-Universität München, Munich, Germany
- Bernstein Center for Computational Neuroscience Munich, Munich, Germany
4. Chung W, Barnett-Cowan M. Influence of Sensory Conflict on Perceived Timing of Passive Rotation in Virtual Reality. Multisens Res 2022; 35:1-23. PMID: 35477696; DOI: 10.1163/22134808-bja10074.
Abstract
Integration of incoming sensory signals from multiple modalities is central to self-motion perception. With the emergence of consumer virtual reality (VR), it is increasingly common to experience a mismatch in sensory feedback regarding motion when using immersive displays. In this study, we explored whether introducing discrepancies between vestibular and visual motion would influence the perceived timing of self-motion. Participants performed a series of temporal-order judgements between an auditory tone and a passive whole-body rotation on a motion platform, accompanied by visual feedback in a virtual environment presented through a head-mounted display. Sensory conflict was induced by altering the speed and direction with which the visual scene updated relative to the observer's physical rotation. Perceived timing of the rotation did not differ without vision, with congruent visual feedback, or when the visual scene updated more slowly than the physical rotation. However, perceived timing was significantly further from zero when the direction of the visual motion was incongruent with the rotation. These findings demonstrate a potential interaction between visual and vestibular signals in the temporal perception of self-motion. Additionally, we recorded cybersickness ratings and found that sickness severity was significantly greater when visual motion was present and incongruent with the physical motion. This supports previous research on cybersickness and sensory conflict theory, in which a mismatch between visual and vestibular signals increases the likelihood of sickness symptoms.
Affiliation(s)
- William Chung
- Department of Kinesiology, University of Waterloo, Waterloo, Ontario, Canada
5. Chaudhary S, Saywell N, Taylor D. The Differentiation of Self-Motion From External Motion Is a Prerequisite for Postural Control: A Narrative Review of Visual-Vestibular Interaction. Front Hum Neurosci 2022; 16:697739. PMID: 35210998; PMCID: PMC8860980; DOI: 10.3389/fnhum.2022.697739.
Abstract
The visual system perceives environmental stimuli and interacts with the other sensory systems to generate the visual and postural responses that maintain postural stability. Although three sensory systems (visual, vestibular, and somatosensory) work concurrently to maintain postural control, the interaction between the visual and vestibular systems is vital for differentiating self-motion from external motion. The visual system influences postural control by playing a key role in perceiving the information required for this differentiation. Its main afferent information consists of optic flow and retinal slip, which lead to the generation of visual and postural responses. Visual fixations interact with this afferent information and with the vestibular system to maintain visual and postural stability. This review synthesizes the roles of the visual system and its interaction with the vestibular system in maintaining postural stability.
6. Moura Neto ED, Fonseca BHDS, Rocha DS, Souza LAPSD, Abdalla DR, Viana DA, Luvizutto GJ. Additional acute effects of virtual reality head-mounted displays on balance outcomes in non-disabled individuals: a proof-of-concept study. Motriz: Revista de Educação Física 2022. DOI: 10.1590/s1980-657420220006721.
7. Jeong D, Han SH, Jeong DY, Kwon K, Choi S. Investigating 4D movie audiences' emotional responses to motion effects and empathy. Computers in Human Behavior 2021. DOI: 10.1016/j.chb.2021.106797.
8. Boon MY, Asper LJ, Chik P, Alagiah P, Ryan M. Treatment and compliance with virtual reality and anaglyph-based training programs for convergence insufficiency. Clin Exp Optom 2021; 103:870-876. DOI: 10.1111/cxo.13057.
Affiliation(s)
- Mei Ying Boon
- School of Optometry and Vision Science, The University of New South Wales, Sydney, Australia
- Lisa J Asper
- School of Optometry and Vision Science, The University of New South Wales, Sydney, Australia
- Peiting Chik
- School of Optometry and Vision Science, The University of New South Wales, Sydney, Australia
- Piranaa Alagiah
- School of Optometry and Vision Science, The University of New South Wales, Sydney, Australia
- Malcolm Ryan
- Department of Computing, Macquarie University, Sydney, Australia
9. Elsayed M, Kadom N, Ghobadi C, Strauss B, Al Dandan O, Aggarwal A, Anzai Y, Griffith B, Lazarow F, Straus CM, Safdar NM. Virtual and augmented reality: potential applications in radiology. Acta Radiol 2020; 61:1258-1265. PMID: 31928346; DOI: 10.1177/0284185119897362.
Abstract
The modern-day radiologist must be adept at image interpretation, and the one who most successfully leverages new technologies may provide the highest value to patients, clinicians, and trainees. Applications of virtual reality (VR) and augmented reality (AR) have the potential to revolutionize how imaging information is applied in clinical practice and how radiologists practice. This review provides an overview of VR and AR, highlights current applications, and discusses future developments and limitations hindering adoption.
Affiliation(s)
- Mohammad Elsayed
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Nadja Kadom
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Comeron Ghobadi
- Department of Radiology, The University of Chicago Pritzker School of Medicine, IL, USA
- Benjamin Strauss
- Department of Radiology, The University of Chicago Pritzker School of Medicine, IL, USA
- Omran Al Dandan
- Department of Radiology, Imam Abdulrahman Bin Faisal University College of Medicine, Dammam, Eastern Province, Saudi Arabia
- Abhimanyu Aggarwal
- Department of Radiology, Eastern Virginia Medical School, Norfolk, VA, USA
- Yoshimi Anzai
- Department of Radiology and Imaging Sciences, University of Utah School of Medicine, Salt Lake City, Utah, USA
- Brent Griffith
- Department of Radiology, Henry Ford Health System, Detroit, MI, USA
- Frances Lazarow
- Department of Radiology, Eastern Virginia Medical School, Norfolk, VA, USA
- Christopher M Straus
- Department of Radiology, The University of Chicago Pritzker School of Medicine, IL, USA
- Nabile M Safdar
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
10. Garzorz I, Deroy O. Why There Is a Vestibular Sense, or How Metacognition Individuates the Senses. Multisens Res 2020; 34:261-280. PMID: 33706282; DOI: 10.1163/22134808-bja10026.
Abstract
Should the vestibular system be counted as a sense? This basic conceptual question remains surprisingly controversial. While it is possible to distinguish specific vestibular organs, it is not clear that this suffices to identify a genuine vestibular sense, because of the supposed absence of a distinctive vestibular personal-level manifestation. The vestibular organs instead contribute to more general multisensory representations, whose names still suggest a distinct 'sensory' contribution. The vestibular case offers a good example of the challenge of individuating the senses when multisensory interactions are the norm, neurally, representationally, and phenomenally. Here, we propose that an additional metacognitive criterion can be used to single out a distinct sense, besides the existence of specific organs and despite the fact that the information coming from these organs is integrated with other sensory information. We argue that human perceivers can monitor information coming from distinct organs, despite its integration, as exhibited and measured through metacognitive performance. Based on the vestibular case, we suggest that metacognitive awareness of the information coming from sensory organs constitutes a new criterion for individuating a sense through both physiological and personal criteria. This way of individuating the senses accommodates both the specialised nature of sensory receptors and the intricate multisensory character of neural processes and experience, while maintaining the idea that each sense contributes something special to how we monitor the world and ourselves at the subjective level.
Affiliation(s)
- Isabelle Garzorz
- Faculty of Philosophy and Philosophy of Science, Ludwig Maximilian University, Munich, Germany; German Center for Vertigo and Balance Disorders (DSGZ), University Hospital of Munich, Ludwig Maximilian University, Munich, Germany
- Ophelia Deroy
- Faculty of Philosophy and Philosophy of Science, Ludwig Maximilian University, Munich, Germany; Munich Center for Neuroscience, Ludwig Maximilian University, Munich, Germany; Institute of Philosophy, School of Advanced Study, University of London, London, UK
11. Treleaven J, Joloud V, Nevo Y, Radcliffe C, Ryder M. Normative Responses to Clinical Tests for Cervicogenic Dizziness: Clinical Cervical Torsion Test and Head-Neck Differentiation Test. Phys Ther 2020; 100:192-200. PMID: 31584656; DOI: 10.1093/ptj/pzz143.
Abstract
BACKGROUND: The clinical diagnosis of cervicogenic dizziness (CGD) is challenging because of a lack of sensitive and specific diagnostic tests. It is vital for clinicians to know normative responses to suggested clinical tests to help them develop the method and interpretation of these tests and maximize their diagnostic value for CGD.
OBJECTIVE: The purpose of the study was to determine normative responses to the clinical application of the cervical torsion test and the head-neck differentiation test, with consideration of different age groups and sex.
DESIGN: This was an observational study.
METHODS: One hundred forty-seven people who were healthy and asymptomatic served as controls and performed both tests, which involved 3 components: cervical torsion, cervical rotation, and en bloc rotation (head and trunk rotation together).
RESULTS: Thirty-five (23.81%) of the 147 participants reported some symptoms (mild dizziness, visual disturbances, unusual eye movements on opening eyes after the test, motion sickness, or nausea) on 1 or more of the 3 test components in either test. The specificity when using a positive response to torsion alone (ie, a negative response to the rotation or en bloc component) was high (for the cervical torsion test, 98.64%; for the head-neck differentiation test, 89.8%), as participants with likely global sensorimotor sensitivity were eliminated. The combined specificity was 100%, as no participants presented with exclusive positive torsion results in both tests. Age and sex did not influence the results.
LIMITATIONS: There were several examiners who were not blinded.
CONCLUSIONS: Confirmation of the high specificity of these clinical tests with the method used in this study to conduct and interpret the results will allow future research to determine the sensitivity of these clinical measures in a population with CGD and specificity in those with dizziness of other origins.
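As an arithmetic aside, specificity here is the fraction of asymptomatic controls without an exclusive positive torsion response. The true-negative counts below are reconstructed from the reported percentages and the 147 participants (an assumption made for illustration; the paper reports only the percentages):

```python
n = 147  # asymptomatic controls

# True-negative counts reconstructed from the reported specificities (assumption).
tn_torsion = 145    # cervical torsion test
tn_headneck = 132   # head-neck differentiation test

spec_torsion = tn_torsion / n    # specificity = TN / all disease-free participants
spec_headneck = tn_headneck / n
print(f"cervical torsion test specificity: {spec_torsion:.2%}")          # ≈ 98.64%
print(f"head-neck differentiation test specificity: {spec_headneck:.1%}")  # ≈ 89.8%
```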
Affiliation(s)
- Julia Treleaven
- Division of Physiotherapy, The Neck Pain and Whiplash Research Unit, School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane 4072, Queensland, Australia
- Vladimir Joloud
- Division of Physiotherapy, The Neck Pain and Whiplash Research Unit, School of Health and Rehabilitation Sciences, The University of Queensland
- Yoav Nevo
- Division of Physiotherapy, The Neck Pain and Whiplash Research Unit, School of Health and Rehabilitation Sciences, The University of Queensland
- Clare Radcliffe
- Division of Physiotherapy, The Neck Pain and Whiplash Research Unit, School of Health and Rehabilitation Sciences, The University of Queensland
- Mollie Ryder
- Division of Physiotherapy, The Neck Pain and Whiplash Research Unit, School of Health and Rehabilitation Sciences, The University of Queensland
12. Moroz M, Garzorz I, Folmer E, MacNeilage P. Sensitivity to Visual Speed Modulation in Head-Mounted Displays Depends on Fixation. Displays 2019; 58:12-19. PMID: 32863474; PMCID: PMC7454227; DOI: 10.1016/j.displa.2018.09.001.
Abstract
A primary cause of simulator sickness in head-mounted displays (HMDs) is conflict between the visual scene displayed to the user and the visual scene expected by the brain when the user's head is in motion. It is useful to measure perceptual sensitivity to visual speed modulation in HMDs because conditions that minimize this sensitivity may prove less likely to elicit simulator sickness. In prior research, we measured sensitivity to visual gain modulation during slow, passive, full-body yaw rotations and observed that sensitivity was reduced when subjects fixated a head-fixed target compared with when they fixated a scene-fixed target. In the current study, we investigated whether this pattern of results persists when (1) movements are faster, active head turns, and (2) visual stimuli are presented on an HMD rather than on a monitor. Subjects wore an Oculus Rift CV1 HMD and viewed a 3D scene of white points on a black background. On each trial, subjects moved their head from a central position to face a 15° eccentric target. During the head movement they fixated a point that was either head-fixed or scene-fixed, depending on condition. They then reported if the visual scene motion was too fast or too slow. Visual speed on subsequent trials was modulated according to a staircase procedure to find the speed increment that was just noticeable. Sensitivity to speed modulation during active head movement was reduced during head-fixed fixation, similar to what we observed during passive whole-body rotation. We conclude that fixation of a head-fixed target is an effective way to reduce sensitivity to visual speed modulation in HMDs, and may also be an effective strategy to reduce susceptibility to simulator sickness.
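The staircase logic described in this abstract can be sketched with a simulated observer. The 1-up/1-down rule, step size, and observer model below are illustrative assumptions, not the study's parameters; a 1-up/1-down rule converges on the speed increment detected half the time.

```python
import random

random.seed(1)

def observer_detects(increment, jnd=0.15):
    """Toy observer: detection probability grows linearly with increment size."""
    return random.random() < min(1.0, increment / (2 * jnd))

increment = 0.5   # starting speed increment (arbitrary units)
step = 0.05
reversals = []
last_direction = None

while len(reversals) < 8:
    # 1-up/1-down rule: step down after a detection, up after a miss.
    direction = -1 if observer_detects(increment) else +1
    if last_direction is not None and direction != last_direction:
        reversals.append(increment)
    increment = max(0.01, increment + direction * step)
    last_direction = direction

threshold = sum(reversals) / len(reversals)  # average of reversal points
print(f"estimated just-noticeable increment: {threshold:.3f}")
```

Averaging the reversal points is a common way to read a threshold off a staircase; the estimate should settle near the simulated observer's just-noticeable difference.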
Affiliation(s)
- Matthew Moroz
- Department of Psychology, University of Nevada, Reno
- Isabelle Garzorz
- Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München
- Eelke Folmer
- Department of Computer Science, University of Nevada, Reno
13. Garzorz IT, MacNeilage PR. Towards dynamic modeling of visual-vestibular conflict detection. Prog Brain Res 2019; 248:277-284. PMID: 31239138; PMCID: PMC7162554; DOI: 10.1016/bs.pbr.2019.03.018.
Abstract
Visual-vestibular mismatch is a common occurrence, with causes ranging from vehicular travel, to vestibular dysfunction, to virtual reality displays. Behavioral and physiological consequences of this mismatch include adaptation of reflexive eye movements, oscillopsia, vertigo, and nausea. Despite this significance, we still do not have a good understanding of how the nervous system evaluates visual-vestibular conflict. Here we review research that quantifies perceptual sensitivity to visual-vestibular conflict and factors that mediate this sensitivity, such as noise on visual and vestibular sensory estimates. We emphasize that dynamic modeling methods are necessary to investigate how the nervous system monitors conflict between time-varying visual and vestibular signals, and we present a simple example of a drift-diffusion model for visual-vestibular conflict detection. The model makes predictions for detection of conflict arising from changes in both visual gain and latency. We conclude with discussion of topics for future research.
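A minimal sketch of a drift-diffusion conflict detector of the kind the abstract describes: evidence for a mismatch accumulates at a rate set by the gain discrepancy and is corrupted by sensory noise, and conflict is "detected" when the accumulator crosses a bound. The sinusoidal head-velocity profile, noise level, and bound are illustrative assumptions, not the paper's model parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def detects_conflict(visual_gain, n_steps=500, dt=0.01, noise=0.3, bound=1.5):
    """Accumulate noisy evidence for a visual-vestibular gain mismatch."""
    head_vel = np.sin(np.linspace(0, np.pi, n_steps))   # one smooth yaw head turn
    drift = abs(visual_gain - 1.0) * head_vel           # conflict-driven drift rate
    increments = drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n_steps)
    return np.any(np.cumsum(increments) >= bound)       # bound crossing = detection

# Detection rate should grow with the size of the gain manipulation.
rates = {g: np.mean([detects_conflict(g) for _ in range(200)]) for g in (1.0, 1.5, 3.0)}
for gain, rate in rates.items():
    print(f"visual gain {gain}: detection rate {rate:.2f}")
```

The same structure extends naturally to latency conflicts by shifting the visual velocity trace in time before computing the drift term.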
Affiliation(s)
- Isabelle T Garzorz
- German Center for Vertigo and Balance Disorders, University Hospital of Munich, Munich, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilian University, Munich, Germany
- Paul R MacNeilage
- Department of Psychology, Cognitive and Brain Sciences, University of Nevada, Reno, NV, United States
14. Perdreau F, Cooke JRH, Koppen M, Medendorp WP. Causal inference for spatial constancy across whole body motion. J Neurophysiol 2019; 121:269-284. PMID: 30461369; DOI: 10.1152/jn.00473.2018.
Abstract
The brain uses self-motion information to internally update egocentric representations of locations of remembered world-fixed visual objects. If a discrepancy is observed between this internal update and reafferent visual feedback, this could be either due to an inaccurate update or because the object has moved during the motion. To optimally infer the object's location it is therefore critical for the brain to estimate the probabilities of these two causal structures and accordingly integrate and/or segregate the internal and sensory estimates. To test this hypothesis, we designed a spatial updating task involving passive whole body translation. Participants, seated on a vestibular sled, had to remember the world-fixed position of a visual target. Immediately after the translation, the reafferent visual feedback was provided by flashing a second target around the estimated "updated" target location, and participants had to report the initial target location. We found that the participants' responses were systematically biased toward the position of the second target for relatively small but not for large differences between the "updated" and the second target location. This pattern was better captured by a Bayesian causal inference model than by alternative models that would always either integrate or segregate the internally updated target location and the visual feedback. Our results suggest that the brain implicitly represents the posterior probability that the internally updated estimate and the visual feedback come from a common cause and uses this probability to weigh the two sources of information in mediating spatial constancy across whole body motion. NEW & NOTEWORTHY When we move, egocentric representations of object locations require internal updating to keep them in register with their true world-fixed locations. How does this mechanism interact with reafferent visual input, given that objects typically do not disappear from view? Here we show that the brain implicitly represents the probability that both types of information derive from the same object and uses this probability to weigh their contribution for achieving spatial constancy across whole body motion.
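The causal-inference computation the model class implies can be sketched as follows: weigh the integrated (fused) and segregated (internally updated) estimates by the posterior probability of a common cause, which falls off with disparity. The noise parameters, prior probability of a common cause, and uniform flash-location prior under the independent-causes hypothesis are all illustrative assumptions, not the study's fitted values.

```python
import numpy as np

def report_location(x_updated, x_flash, sigma_u=2.0, sigma_v=1.0,
                    p_common=0.5, flash_range=20.0):
    """Bayesian causal inference sketch: model-average the fused and
    segregated estimates by the posterior probability of a common cause."""
    disparity = x_updated - x_flash
    var_sum = sigma_u**2 + sigma_v**2

    # Likelihood of the observed disparity under each causal structure.
    like_common = np.exp(-disparity**2 / (2 * var_sum)) / np.sqrt(2 * np.pi * var_sum)
    like_separate = 1.0 / flash_range   # flash anywhere in a broad range (assumption)
    post_common = (p_common * like_common /
                   (p_common * like_common + (1 - p_common) * like_separate))

    # Reliability-weighted fusion if common cause; updated estimate otherwise.
    w_flash = sigma_u**2 / var_sum
    fused = (1 - w_flash) * x_updated + w_flash * x_flash
    return post_common * fused + (1 - post_common) * x_updated

near = report_location(0.0, 1.0)   # small disparity: report pulled toward the flash
far = report_location(0.0, 8.0)    # large disparity: sources mostly segregated
print(f"report for nearby flash: {near:.2f}, for distant flash: {far:.2f}")
```

This reproduces the qualitative result: a bias toward the second target for small disparities that vanishes for large ones.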
Affiliation(s)
- Florian Perdreau
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- James R H Cooke
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Mathieu Koppen
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- W Pieter Medendorp
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
15. A virtual reality approach identifies flexible inhibition of motion aftereffects induced by head rotation. Behav Res Methods 2018; 51:96-107. PMID: 30187432; DOI: 10.3758/s13428-018-1116-6.
Abstract
As we move in space, our retinae receive motion signals from two causes: those resulting from motion in the world and those resulting from self-motion. Mounting evidence has shown that vestibular self-motion signals interact with visual motion processing profoundly. However, most contemporary methods arguably lack portability and generality and are incapable of providing measurements during locomotion. Here we developed a virtual reality approach, combining a three-space sensor with a head-mounted display, to quantitatively manipulate the causality between retinal motion and head rotations in the yaw plane. Using this system, we explored how self-motion affected visual motion perception, particularly the motion aftereffect (MAE). Subjects watched gratings presented on a head-mounted display. The gratings drifted at the same velocity as head rotations, with the drifting direction being identical, opposite, or perpendicular to the direction of head rotations. We found that MAE lasted a significantly shorter time when subjects' heads rotated than when their heads were kept still. This effect was present regardless of the drifting direction of the gratings, and was also observed during passive head rotations. These findings suggest that the adaptation to retinal motion is suppressed by head rotations. Because the suppression was also found during passive head movements, it should result from visual-vestibular interaction rather than from efference copy signals. Such visual-vestibular interaction is more flexible than has previously been thought, since the suppression could be observed even when the retinal motion direction was perpendicular to head rotations. Our work suggests that a virtual reality approach can be applied to various studies of multisensory integration and interaction.
16. Shayman CS, Seo JH, Oh Y, Lewis RF, Peterka RJ, Hullar TE. Relationship between vestibular sensitivity and multisensory temporal integration. J Neurophysiol 2018; 120:1572-1577. PMID: 30020839; DOI: 10.1152/jn.00379.2018.
Abstract
A single event can generate asynchronous sensory cues due to variable encoding, transmission, and processing delays. To be interpreted as being associated in time, these cues must occur within a limited time window, referred to as a "temporal binding window" (TBW). We investigated the hypothesis that vestibular deficits could disrupt temporal visual-vestibular integration by determining the relationships between vestibular threshold and TBW in participants with normal vestibular function and with vestibular hypofunction. Vestibular perceptual thresholds to yaw rotation were characterized and compared with the TBWs obtained from participants who judged whether a suprathreshold rotation occurred before or after a brief visual stimulus. Vestibular thresholds ranged from 0.7 to 16.5 deg/s and TBWs ranged from 13.8 to 395 ms. Among all participants, TBW and vestibular thresholds were well correlated (R² = 0.674, P < 0.001), with vestibular-deficient patients having higher thresholds and wider TBWs. Participants reported that the rotation onset needed to lead the light flash by an average of 80 ms for the visual and vestibular cues to be perceived as occurring simultaneously. The wide TBWs in vestibular-deficient participants compared with normal functioning participants indicate that peripheral sensory loss can lead to abnormal multisensory integration. A reduced ability to temporally combine sensory cues appropriately may provide a novel explanation for some symptoms reported by patients with vestibular deficits. Even among normal functioning participants, a high correlation between TBW and vestibular thresholds was observed, suggesting that these perceptual measurements are sensitive to small differences in vestibular function. NEW & NOTEWORTHY While spatial visual-vestibular integration has been well characterized, the temporal integration of these cues is not well understood. The relationship between sensitivity to whole body rotation and duration of the temporal window of visual-vestibular integration was examined using psychophysical techniques. These parameters were highly correlated for those with normal vestibular function and for patients with vestibular hypofunction. Reduced temporal integration performance in patients with vestibular hypofunction may explain some symptoms associated with vestibular loss.
Affiliation(s)
- Corey S Shayman, Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon
- Jae-Hyun Seo, Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon; Department of Otolaryngology-Head and Neck Surgery, The Catholic University of Korea, Seoul, Republic of Korea
- Yonghee Oh, Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon
- Richard F Lewis, Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts; Department of Neurology, Harvard Medical School, Boston, Massachusetts; Jenks Vestibular Physiology Laboratory, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts
- Robert J Peterka, National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, Oregon; Department of Neurology, Oregon Health and Science University, Portland, Oregon
- Timothy E Hullar, Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon
|
17
|
Optic flow detection is not influenced by visual-vestibular congruency. PLoS One 2018; 13:e0191693. [PMID: 29352317 PMCID: PMC5774822 DOI: 10.1371/journal.pone.0191693] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2017] [Accepted: 01/09/2018] [Indexed: 12/02/2022] Open
Abstract
Optic flow patterns generated by self-motion relative to the stationary environment result in congruent visual-vestibular self-motion signals. Incongruent signals can arise due to object motion, vestibular dysfunction, or artificial stimulation, but these situations are less common. Hence, we are predominantly exposed to congruent rather than incongruent visual-vestibular stimulation. If the brain takes advantage of this probabilistic association, we expect observers to be more sensitive to visual optic flow that is congruent with ongoing vestibular stimulation. We tested this expectation by measuring the motion coherence threshold, that is, the percentage of signal versus noise dots necessary to detect an optic flow pattern. Observers seated on a hexapod motion platform in front of a screen experienced two sequential intervals: one contained optic flow with a given motion coherence, and the other contained noise dots only. Observers had to indicate which interval contained the optic flow pattern. The motion coherence threshold was measured for detection of laminar and radial optic flow during leftward/rightward and fore/aft linear self-motion, respectively. We observed no dependence of coherence thresholds on vestibular congruency for either radial or laminar optic flow. Prior studies using similar methods reported both decreases and increases in coherence thresholds in response to congruent vestibular stimulation; our results confirm neither of these prior reports. While methodological differences may explain the diversity of results, another possibility is that motion coherence thresholds are mediated by neural populations that are either not modulated by vestibular stimulation or that are modulated in a manner that does not depend on congruency.
|
18
|
Greenlee MW. Self-Motion Perception: Ups and Downs of Multisensory Integration and Conflict Detection. Curr Biol 2017; 27:R1006-R1007. [PMID: 28950080 DOI: 10.1016/j.cub.2017.07.050] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
A new study indicates that, in humans, eye movements play an important role in self-motion perception, in particular in integrating information from the visual and vestibular systems and detecting possible conflicts between them.
Affiliation(s)
- Mark W Greenlee, Institute for Experimental Psychology, University of Regensburg, 93053 Regensburg, Germany
|