1. Zari G, Condino S, Cutolo F, Ferrari V. Magic Leap 1 versus Microsoft HoloLens 2 for the Visualization of 3D Content Obtained from Radiological Images. Sensors (Basel) 2023;23:3040. PMID: 36991751; PMCID: PMC10054537; DOI: 10.3390/s23063040.
Abstract
The adoption of extended reality solutions is growing rapidly in the healthcare world. Augmented reality (AR) and virtual reality (VR) interfaces can bring advantages in various medical-health sectors; it is thus not surprising that the medical mixed reality (MR) market is among the fastest-growing ones. The present study reports a comparison between two of the most popular MR head-mounted displays, Magic Leap 1 and Microsoft HoloLens 2, for the visualization of 3D medical imaging data. We evaluated the functionalities and performance of both devices through a user study in which surgeons and residents assessed the visualization of 3D computer-generated anatomical models. The digital content was obtained through a dedicated medical imaging suite (the Verima imaging suite) developed by the Italian start-up Witapp s.r.l. According to our performance analysis in terms of frame rate, there are no significant differences between the two devices. The surgical staff expressed a clear preference for Magic Leap 1, particularly for its better visualization quality and ease of interaction with the 3D virtual content. Nonetheless, even though the questionnaire results were slightly more positive for Magic Leap 1, the spatial understanding of the 3D anatomical model, in terms of depth relations and spatial arrangement, was evaluated positively for both devices.
Affiliation(s)
- Giulia Zari: Information Engineering Department, University of Pisa, Via Girolamo Caruso 16, 56122 Pisa, Italy
- Sara Condino: Information Engineering Department, University of Pisa, Via Girolamo Caruso 16, 56122 Pisa, Italy; EndoCAS Center, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Fabrizio Cutolo: Information Engineering Department, University of Pisa, Via Girolamo Caruso 16, 56122 Pisa, Italy; EndoCAS Center, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Vincenzo Ferrari: Information Engineering Department, University of Pisa, Via Girolamo Caruso 16, 56122 Pisa, Italy; EndoCAS Center, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
2. Ceccariglia F, Cercenelli L, Badiali G, Marcelli E, Tarsitano A. Application of Augmented Reality to Maxillary Resections: A Three-Dimensional Approach to Maxillofacial Oncologic Surgery. J Pers Med 2022;12:2047. PMID: 36556268; PMCID: PMC9785494; DOI: 10.3390/jpm12122047.
Abstract
Although virtual reality, augmented reality, and mixed reality have been emerging methodologies for several years, only now have technological and scientific advances made them suitable for revolutionizing clinical care and medical settings through the provision of advanced features and improved healthcare services. Over the past fifteen years, tools and applications using augmented reality (AR) have been designed and tested in the context of various surgical and medical disciplines, including maxillofacial surgery. The purpose of this paper is to show how a marker-less AR guidance system using the Microsoft® HoloLens 2 can be applied in mandible and maxillary demolition surgery to guide maxillary osteotomies. We describe three mandibular and maxillary oncologic resections performed during 2021 with AR support. In these three patients, we applied a marker-less tracking method based on recognition of the patient's facial profile. Using HoloLens 2 smart glasses, the surgeon could see the virtual surgical planning superimposed on the patient's anatomy. We showed that performing osteotomies under AR guidance is feasible, as demonstrated by comparison with osteotomies performed using CAD-CAM cutting guides. The technology has advantages and disadvantages; further research is needed to improve the stability and robustness of the marker-less tracking method applied to patient face recognition.
Affiliation(s)
- Francesco Ceccariglia (corresponding author; Tel.: +39-051-2144197): Oral and Maxillo-Facial Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Via Albertoni 15, 40138 Bologna, Italy; Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Science, University of Bologna, 40138 Bologna, Italy
- Laura Cercenelli: eDimes Lab-Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Giovanni Badiali: Oral and Maxillo-Facial Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Via Albertoni 15, 40138 Bologna, Italy; Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Science, University of Bologna, 40138 Bologna, Italy
- Emanuela Marcelli: eDimes Lab-Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Achille Tarsitano: Oral and Maxillo-Facial Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Via Albertoni 15, 40138 Bologna, Italy; Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Science, University of Bologna, 40138 Bologna, Italy
3. Pagano TP, dos Santos LL, Santos VR, Sá PHM, Bonfim YDS, Paranhos JVD, Ortega LL, Nascimento LFS, Santos A, Rönnau MM, Winkler I, Nascimento EGS. Remote Heart Rate Prediction in Virtual Reality Head-Mounted Displays Using Machine Learning Techniques. Sensors (Basel) 2022;22:9486. PMID: 36502188; PMCID: PMC9738680; DOI: 10.3390/s22239486.
Abstract
Head-mounted displays are virtual reality devices that may be equipped with sensors and cameras to measure a wearer's heart rate through facial regions. Heart rate is an essential body signal that can be used to remotely monitor users in a variety of situations. No existing study predicts heart rate using only these isolated facial regions; an adaptation is therefore required for beats-per-minute prediction. Likewise, there are no datasets containing only the eye and lower-face regions, necessitating the development of a simulation mechanism. This work aims to remotely estimate heart rate from the facial regions that can be captured by the cameras of a head-mounted display, using the state-of-the-art EVM-CNN and Meta-rPPG techniques. We developed a region-of-interest extractor to simulate a dataset from a head-mounted display device using stabilizer and video magnification techniques. We then combined a support vector machine with FaceMesh to determine the regions of interest, and adapted the photoplethysmography and beats-per-minute signal predictions to work with the other techniques. We observed an improvement of 188.88% for EVM and 55.93% for Meta-rPPG. Both models were able to predict heart rate using only facial regions as input. Moreover, the adapted Meta-rPPG technique outperformed the original work, whereas the EVM adaptation produced comparable results for the photoplethysmography signal.
Affiliation(s)
- Tiago Palma Pagano: Computational Modeling Department, SENAI CIMATEC University Center, Salvador 41650-010, Bahia, Brazil
- Lucas Lisboa dos Santos: Computational Modeling Department, SENAI CIMATEC University Center, Salvador 41650-010, Bahia, Brazil
- Victor Rocha Santos: Computational Modeling Department, SENAI CIMATEC University Center, Salvador 41650-010, Bahia, Brazil
- Paulo H. Miranda Sá: Computational Modeling Department, SENAI CIMATEC University Center, Salvador 41650-010, Bahia, Brazil
- Yasmin da Silva Bonfim: Computational Modeling Department, SENAI CIMATEC University Center, Salvador 41650-010, Bahia, Brazil
- Lucas Lemos Ortega: Computational Modeling Department, SENAI CIMATEC University Center, Salvador 41650-010, Bahia, Brazil
- Alexandre Santos: HP Inc. Brazil R&D, Porto Alegre 90619-900, Rio Grande do Sul, Brazil
- Ingrid Winkler: Department of Management and Industrial Technology, SENAI CIMATEC University Center, Salvador 41650-010, Bahia, Brazil
- Erick G. Sperandio Nascimento: Department of Management and Industrial Technology, SENAI CIMATEC University Center, Salvador 41650-010, Bahia, Brazil; Faculty of Engineering and Physical Sciences, School of Computer Science and Electronic Engineering, Surrey Institute for People-Centred AI, University of Surrey, Guildford GU2 7XH, UK
4. Kim YS, Won J, Jang SW, Ko J. Effects of Cybersickness Caused by Head-Mounted Display-Based Virtual Reality on Physiological Responses: Cross-sectional Study. JMIR Serious Games 2022;10:e37938. PMID: 36251360; PMCID: PMC9623462; DOI: 10.2196/37938.
Abstract
Background: Although more people are experiencing cybersickness due to the popularization of virtual reality (VR), no official standard for the cause and reduction of cybersickness exists to date. One of the main reasons is that an objective method to assess cybersickness has not been established. Resolving this requires research on evaluating cybersickness via physiological responses that can be measured in real time. Since research on deriving physiological responses that can assess cybersickness is at an early stage, further studies examining various physiological responses are needed.
Objective: This study analyzed the effects of cybersickness caused by head-mounted display-based VR on physiological responses.
Methods: We developed content that provided users with a first-person view of an aircraft that moved (with translation and combined rotation) over a city along a predetermined trajectory. In the experiment, participants' cybersickness and physiological responses were measured. Cybersickness was assessed with the Simulator Sickness Questionnaire (SSQ). The measured physiological responses were heart rate, blood pressure, body temperature, and cortisol level.
Results: All SSQ scores increased significantly (all P<.05) when participants experienced cybersickness. Heart rate and cortisol level increased significantly (P=.01 and P=.001, respectively). Body temperature also increased, but the difference was not statistically significant (P=.02). Systolic and diastolic blood pressure decreased significantly (P=.001).
Conclusions: Based on our analysis, (1) cybersickness causes significant disorientation, and research on this topic should focus on factors that affect disorientation; and (2) heart rate and cortisol level are the physiological responses suitable for measuring cybersickness.
Affiliation(s)
- Yoon Sang Kim: BioComputing Lab, Institute for Bio-engineering Application Technology, Department of Computer Science and Engineering, Korea University of Technology and Education, Cheonan-si, Republic of Korea
- JuHye Won: BioComputing Lab, Department of Computer Science and Engineering, Korea University of Technology and Education, Cheonan-si, Republic of Korea
- Seong-Wook Jang: Assistive Technology Research Team for Independent Living, National Rehabilitation Institute, Seoul, Republic of Korea
- Junho Ko: AirPlug Ltd, Seoul, Republic of Korea
5. Ortet CP, Veloso AI, Vale Costa L. Cycling through 360° Virtual Reality Tourism for Senior Citizens: Empirical Analysis of an Assistive Technology. Sensors (Basel) 2022;22:6169. PMID: 36015929; PMCID: PMC9413856; DOI: 10.3390/s22166169.
Abstract
In recent years, there has been renewed interest in using virtual reality (VR) to (re)create different scenarios and environments with interactive and immersive experiences. Although VR has been popular in the tourism sector for reconfiguring tourists' relationships with places and overcoming mobility restrictions, its usage in senior cyclotourism has been understudied. VR is suggested to positively impact tourism promotion, cycling simulation, and active and healthy ageing through physical and mental rehabilitation. The purpose of this study is to assess senior citizens' perceived experience of and attitudes toward a designed 360° VR cyclotouristic experiment using a head-mounted display (HMD) in a laboratory context. A total of 76 participants aged between 50 and 97 years were involved in convergent parallel mixed-method research, and data were collected using a questionnaire based on the technology acceptance model as well as the researchers' field notes. Findings suggest that 360° VR with an HMD can be an effective assistive technology to foster senior cyclotourism by promoting tourism sites, simulating the pedaling effect of cycling, and improving senior citizens' general wellbeing and independence through physical and mental rehabilitation.
6. Martingano AJ, Brown E, Telaak SH, Dolwick AP, Persky S. Cybersickness Variability by Race: Findings From 6 Studies and a Mini Meta-analysis. J Med Internet Res 2022;24:e36843. PMID: 35648477; PMCID: PMC9201708; DOI: 10.2196/36843.
Abstract
Background: With the influx of medical virtual reality (VR) technologies, cybersickness has transitioned from a nuisance experienced during leisure activities to a potential safety and efficacy concern for patients and clinicians. To improve health equity, it is important to understand any potential differences in cybersickness propensity among demographic groups, including racial groups.
Objective: This study aims to explore whether cybersickness propensity differs across racial groups.
Methods: We collected self-reported cybersickness ratings from 6 racially diverse independent samples within 1 laboratory group (N=931). In these studies, participants were asked to perform tasks in VR such as traversing environments, pointing at and selecting objects, and interacting with virtual humans.
Results: Significant racial differences in cybersickness were found in 50% (3/6) of the studies. A mini meta-analysis revealed that, on average, Black participants reported approximately one-third of a standard deviation less cybersickness than White participants (Cohen d=−0.31; P<.001), regardless of the nature of the VR experience. There was no overall difference in reported cybersickness between Asian and White participants (Cohen d=−0.11; P=.51).
Conclusions: Racial differences in cybersickness indicate that researchers, practitioners, and regulators should consider patient demographics when evaluating VR health intervention outcomes. These findings lay the groundwork for future studies that may explore racial differences in cybersickness directly.
Affiliation(s)
- Alison Jane Martingano: Social and Behavioral Research Branch, National Human Genome Research Institute, National Institutes of Health, Bethesda, MD, United States
- Ellenor Brown: Office of Science and Engineering Laboratories, US Food and Drug Administration, Silver Spring, MD, United States
- Sydney H Telaak: Social and Behavioral Research Branch, National Human Genome Research Institute, National Institutes of Health, Bethesda, MD, United States
- Alexander P Dolwick: Social and Behavioral Research Branch, National Human Genome Research Institute, National Institutes of Health, Bethesda, MD, United States
- Susan Persky: Social and Behavioral Research Branch, National Human Genome Research Institute, National Institutes of Health, Bethesda, MD, United States
7. Weber A, Werth J, Epro G, Friemert D, Hartmann U, Lambrianides Y, Seeley J, Nickel P, Karamanidis K. Head-Mounted and Hand-Held Displays Diminish the Effectiveness of Fall-Resisting Skills. Sensors (Basel) 2022;22:344. PMID: 35009886; DOI: 10.3390/s22010344.
Abstract
Use of head-mounted displays (HMDs) and hand-held displays (HHDs) may affect the effectiveness of stability control mechanisms and impair resistance to falls. This study aimed to examine whether the ability to control stability during locomotion is diminished while using HMDs and HHDs. Fourteen healthy adults (21–46 years) were assessed under single-task (no display) and dual-task (spatial 2-n-back presented on the HMD or the HHD) conditions while performing various locomotor tasks. An optical motion capture system and two force plates were used to assess locomotor stability using an inverted pendulum model. For perturbed standing, 57% of the participants were not able to maintain stability by counter-rotation actions when using either display, compared to the single-task condition. Furthermore, around 80% of participants (dual-task) compared to 50% (single-task) showed a negative margin of stability (i.e., an unstable body configuration) during recovery for perturbed walking due to a diminished ability to increase their base of support effectively. However, no evidence was found for HMDs or HHDs affecting stability during unperturbed locomotion. In conclusion, additional cognitive resources required for dual-tasking, using either display, are suggested to result in delayed response execution for perturbed standing and walking, consequently diminishing participants’ ability to use stability control mechanisms effectively and increasing the risk of falls.
8. Skjoldborg NM, Bender PK, Jensen de López KM. The Efficacy of Head-Mounted-Display Virtual Reality Intervention to Improve Life Skills of Individuals with Autism Spectrum Disorders: A Systematic Review. Neuropsychiatr Dis Treat 2022;18:2295-2310. PMID: 36281222; PMCID: PMC9586887; DOI: 10.2147/ndt.s331990.
Abstract
Challenges in life skills in individuals with autism spectrum disorders (ASD) are associated with dependency on others and increased isolation from peers. In recent years, interventions using virtual reality (VR) technology have been proposed to improve life skills in ASD populations. This systematic review evaluates the efficacy of VR interventions delivered via head-mounted displays (HMD) for improving life skills in individuals with ASD. Several databases were searched, and a narrative synthesis was conducted to examine the findings of the included studies. Eight studies, including a total of 58 participants, were deemed relevant for this systematic review. The methodological quality of the included studies was assessed using critical appraisal tools. Results were generally positive, with one study reporting statistically significant results and one study reporting no change in abilities; the remaining six studies reported varying degrees of life skill improvement. The studies were characterized by methodological issues, such as very small sample sizes. The findings of this systematic review indicate some potential for HMD VR interventions in improving life skills in individuals with ASD. However, the review also highlights the current lack of methodologically strong study designs, which prohibits firm conclusions. Findings are discussed regarding methodological recommendations for further research as well as practical implications for life skills interventions for individuals with ASD.
Affiliation(s)
- Nikki M Skjoldborg: Center for Developmental & Applied Psychological Science, Department of Communication and Psychology, Aalborg University, Aalborg, Denmark
- Patrick K Bender: Center for Developmental & Applied Psychological Science, Department of Communication and Psychology, Aalborg University, Aalborg, Denmark
- Kristine M Jensen de López: Center for Developmental & Applied Psychological Science, Department of Communication and Psychology, Aalborg University, Aalborg, Denmark
9. Lauer L, Altmeyer K, Malone S, Barz M, Brünken R, Sonntag D, Peschel M. Investigating the Usability of a Head-Mounted Display Augmented Reality Device in Elementary School Children. Sensors (Basel) 2021;21:6623. PMID: 34640942; PMCID: PMC8512836; DOI: 10.3390/s21196623.
Abstract
Augmenting reality via head-mounted displays (HMD-AR) is an emerging technology in education. The interactivity provided by HMD-AR devices is particularly promising for learning, but presents a challenge to human activity recognition, especially with children. Recent technological advances regarding speech and gesture recognition concerning Microsoft’s HoloLens 2 may address this prevailing issue. In a within-subjects study with 47 elementary school children (2nd to 6th grade), we examined the usability of the HoloLens 2 using a standardized tutorial on multimodal interaction in AR. The overall system usability was rated “good”. However, several behavioral metrics indicated that specific interaction modes differed in their efficiency. The results are of major importance for the development of learning applications in HMD-AR as they partially deviate from previous findings. In particular, the well-functioning recognition of children’s voice commands that we observed represents a novelty. Furthermore, we found different interaction preferences in HMD-AR among the children. We also found the use of HMD-AR to have a positive effect on children’s activity-related achievement emotions. Overall, our findings can serve as a basis for determining general requirements, possibilities, and limitations of the implementation of educational HMD-AR environments in elementary school classrooms.
Affiliation(s)
- Luisa Lauer (corresponding author; Tel.: +49-681-302-71397): Department of Physics, Campus C6.3, Saarland University, 66123 Saarbrücken, Germany
- Kristin Altmeyer: Department of Education, Campus A4.2, Saarland University, 66123 Saarbrücken, Germany
- Sarah Malone: Department of Education, Campus A4.2, Saarland University, 66123 Saarbrücken, Germany
- Michael Barz: German Research Center for Artificial Intelligence (DFKI), Interactive Machine Learning Department, Stuhlsatzenhausweg 3, Saarland Informatics Campus D3_2, 66123 Saarbrücken, Germany; Applied Artificial Intelligence, Oldenburg University, Marie-Curie-Str. 1, 26129 Oldenburg, Germany
- Roland Brünken: Department of Education, Campus A4.2, Saarland University, 66123 Saarbrücken, Germany
- Daniel Sonntag: German Research Center for Artificial Intelligence (DFKI), Interactive Machine Learning Department, Stuhlsatzenhausweg 3, Saarland Informatics Campus D3_2, 66123 Saarbrücken, Germany; Applied Artificial Intelligence, Oldenburg University, Marie-Curie-Str. 1, 26129 Oldenburg, Germany
- Markus Peschel: Department of Physics, Campus C6.3, Saarland University, 66123 Saarbrücken, Germany
10. Kim J, Cha J, Kim S. Hands-Free User Interface for VR Headsets Based on In Situ Facial Gesture Sensing. Sensors (Basel) 2020;20:7206. PMID: 33339247; DOI: 10.3390/s20247206.
Abstract
The typical configuration of virtual reality (VR) devices consists of a head-mounted display (HMD) and handheld controllers. As such, these units have limited utility in tasks that require hands-free operation, such as surgical operations or assembly work in cyberspace. We propose a user interface for a VR headset based on the wearer's facial gestures for hands-free interaction, similar to a touch interface. By sensing and recognizing the expressions associated with intentional in situ movements of the user's facial muscles, we define a set of commands that combine predefined facial gestures with head movements. This is achieved by utilizing six pairs of infrared (IR) photocouplers positioned at the foam interface of an HMD. We demonstrate the usability and report on the user experience as well as the performance of the proposed command set using an experimental VR game without any additional controllers. We obtained recognition accuracy of more than 99% for each facial gesture throughout the three steps of experimental tests. The proposed input interface is a cost-effective and efficient solution that enables hands-free operation of a VR headset, similar to the touch-screen experience of a smartphone, using only the IR photocouplers built into the foam interface.
11. Grassini S, Laumann K, Rasmussen Skogstad M. The Use of Virtual Reality Alone Does Not Promote Training Performance (but Sense of Presence Does). Front Psychol 2020;11:1743. PMID: 32765384; PMCID: PMC7379892; DOI: 10.3389/fpsyg.2020.01743.
Abstract
Virtual reality (VR) offers novel ways to develop skills and learning. This technology can enhance the way we educate and train professionals by being potentially more effective and cost-efficient and by reducing training-related risks. However, the potential benefits of virtual training assume that the trained skills transfer to the real world. Nevertheless, in the current published scientific literature, there is limited empirical evidence linking VR use to better learning. The present investigation aimed to explore the use of VR as a tool for training procedural skills and to compare this modality with traditional instruction methods. To investigate skill development under the two forms of training, participants were randomly divided into two groups. The first group received training through an instructional video, while the second group trained in VR. After the training session, the participants performed the trained task in a real setting, and task performance was measured. Subsequently, participants' experienced sense of presence and simulator sickness (SS) were measured with self-report questionnaires. There were no significant differences between groups for any of the performance measures, and there was no gender effect on performance. Importantly, the results of the present study indicate that a high sense of presence during the VR simulation might contribute to increased skill learning. These findings can serve as a starting point for further exploration of VR as a tool for skill development.
Affiliation(s)
- Simone Grassini: Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
- Karin Laumann: Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
- Martin Rasmussen Skogstad: Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway; NTNU Social Research, Studio Apertura, Trondheim, Norway
12. Southworth MK, Silva JNA, Blume WM, Van Hare GF, Dalal AS, Silva JR. Performance Evaluation of Mixed Reality Display for Guidance During Transcatheter Cardiac Mapping and Ablation. IEEE J Transl Eng Health Med 2020;8:1900810. PMID: 32742821; PMCID: PMC7390021; DOI: 10.1109/jtehm.2020.3007031.
Abstract
Cardiac electrophysiology procedures present the physician with a wealth of 3D information, typically displayed on fixed 2D monitors. New developments in wearable mixed reality displays offer the potential to simplify and enhance 3D visualization while providing hands-free, dynamic control of devices within the procedure room.
Objective: This work aims to evaluate the performance and quality of a mixed reality system designed for intraprocedural use in cardiac electrophysiology.
Method: The performance criteria of the Enhanced Electrophysiology Visualization and Interaction System (ĒLVIS) mixed reality system, including image quality, hardware performance, and usability, were evaluated using existing display validation procedures adapted to the electrophysiology-specific use case. Additional performance and user validation were performed through a 10-patient, in-human observational study, the Engineering ĒLVIS (E2) Study.
Results: The ĒLVIS system achieved acceptable frame rate, latency, and battery runtime with acceptable dynamic range and depth distortion as well as minimal geometric distortion. Bench testing results corresponded with physician feedback in the observational study, and potential improvements in geometric understanding were noted.
Conclusion: The ĒLVIS system, based on current commercially available mixed reality hardware, is capable of meeting the hardware performance, image quality, and usability requirements of an electroanatomic mapping display for intraprocedural, real-time use in electrophysiology procedures. Verifying off-the-shelf mixed reality hardware for specific clinical uses can accelerate the adoption of this transformative technology and provide novel visualization, understanding, and control of clinically relevant data in real time.
Affiliation(s)
- Jennifer N. Avari Silva
- SentiAR, Inc., St. Louis, MO 63108, USA
- Department of Pediatrics, School of Medicine, Washington University in St. Louis, St. Louis, MO 63130, USA
- George F. Van Hare
- Department of Pediatrics, School of Medicine, Washington University in St. Louis, St. Louis, MO 63130, USA
- Aarti S. Dalal
- Department of Pediatrics, School of Medicine, Washington University in St. Louis, St. Louis, MO 63130, USA
- Jonathan R. Silva
- SentiAR, Inc., St. Louis, MO 63108, USA
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
13
Riecke BE, Jordan JD. Comparing the effectiveness of different displays in enhancing illusions of self-movement (vection). Front Psychol 2015; 6:713. [PMID: 26082735 PMCID: PMC4450174 DOI: 10.3389/fpsyg.2015.00713] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Received: 12/11/2014] [Accepted: 05/13/2015] [Indexed: 11/13/2022] Open
Abstract
Illusions of self-movement (vection) can be used in virtual reality (VR) and other applications to give users the embodied sensation that they are moving when physical movement is unfeasible or too costly. Whereas a large body of vection literature has studied how various parameters of the presented visual stimulus affect vection, little is known about how different display types might affect vection. As a step toward addressing this gap, we conducted three experiments to compare vection and usability parameters between commonly used VR displays, ranging from stereoscopic projection and 3D TV to a high-end head-mounted display (HMD, NVIS SX111) and a recent low-cost HMD (Oculus Rift). The last experiment also compared these two HMDs in their native full field of view (FOV) and a reduced, matched FOV of 72° × 45°. Participants moved along linear and curvilinear paths in the virtual environment, reported vection onset time, and rated vection intensity at the end of each trial. In addition, user ratings on immersion, motion sickness, vection, and overall preference were recorded retrospectively and compared between displays. Unexpectedly, there were no significant effects of display on vection measures. Reducing the FOV for the HMDs (from full to 72° × 45°) decreased vection onset latencies but did not affect vection intensity. As predicted, curvilinear paths yielded earlier and more intense vection. Although vection has often been proposed to predict or even cause motion sickness, we observed no correlation for any of the displays studied. In conclusion, perceived self-motion and other user experience measures proved surprisingly tolerant toward changes in display type as long as the FOV was roughly matched. This suggests that display choice for vection research and VR applications can be largely based on other considerations as long as the provided FOV is sufficiently large.
Affiliation(s)
- Bernhard E Riecke
- iSpace Lab, School of Interactive Arts and Technology, Simon Fraser University, Surrey, BC, Canada
- Jacqueline D Jordan
- iSpace Lab, School of Interactive Arts and Technology, Simon Fraser University, Surrey, BC, Canada
14
Lin Q, Xu Z, Li B, Baucom R, Poulose B, Landman BA, Bodenheimer RE. Immersive Virtual Reality for Visualization of Abdominal CT. Proc SPIE Int Soc Opt Eng 2013; 8673. [PMID: 24386547 DOI: 10.1117/12.2008050] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Indexed: 11/14/2022]
Abstract
Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high-fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure, in which images are viewed as stacks of two-dimensional slices or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical applications. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allows users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.
Affiliation(s)
- Qiufeng Lin
- Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Zhoubing Xu
- Electrical Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Bo Li
- Electrical Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Rebeccah Baucom
- General Surgery, Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Benjamin Poulose
- General Surgery, Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Bennett A Landman
- Electrical Engineering, Vanderbilt University, Nashville, TN 37235, USA