1. Cheng A, Fijacko N, Lockey A, Greif R, Abelairas-Gomez C, Gosak L, Lin Y. Use of augmented and virtual reality in resuscitation training: A systematic review. Resusc Plus 2024;18:100643. [PMID: 38681058] [PMCID: PMC11053298] [DOI: 10.1016/j.resplu.2024.100643]
Abstract
Objectives To evaluate the effectiveness of augmented reality (AR) and virtual reality (VR), compared with other instructional methods, for basic and advanced life support training.
Methods This systematic review was part of the continuous evidence evaluation process of the International Liaison Committee on Resuscitation (ILCOR); it was reported according to the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines and registered with PROSPERO (CRD42023376751). MEDLINE, EMBASE, and SCOPUS were searched from inception to January 16, 2024. We included all published studies comparing virtual or augmented reality with other methods of resuscitation training, evaluating knowledge acquisition and retention, skill acquisition and retention, skill performance in real resuscitation, willingness to help, bystander CPR rate, and patient survival.
Results Our initial literature search identified 1807 citations. After removing duplicates, reviewing the titles and abstracts of the remaining 1301 articles, completing full-text review of 74 articles, and searching the reference lists of relevant articles, 19 studies were identified for analysis. AR was used in 4 studies to provide real-time feedback during CPR, demonstrating improved CPR performance compared with groups trained with no feedback, but no difference compared with other sources of CPR feedback. VR use in resuscitation training was explored in 15 studies, with the majority of studies that assessed CPR skills favoring other interventions over VR or showing no difference between groups.
Conclusion Augmented and virtual reality can be used to support resuscitation training of laypeople and healthcare professionals; however, current evidence does not clearly demonstrate a consistent benefit compared with other methods of training.
Affiliation(s)
- Adam Cheng: Department of Pediatrics and Emergency Medicine, Cumming School of Medicine, University of Calgary, KidSIM-ASPIRE Simulation Research Program, Alberta Children’s Hospital, Canada
- Nino Fijacko: Faculty of Health Sciences, University of Maribor, Maribor University Medical Centre, Maribor, Slovenia
- Andrew Lockey: Emergency Department, Calderdale & Huddersfield NHS Trust, Halifax, UK; School of Human and Health Sciences, University of Huddersfield, Huddersfield, UK
- Robert Greif: University of Bern, Bern, Switzerland; School of Medicine, Sigmund Freud University Vienna, Vienna, Austria
- Cristian Abelairas-Gomez: Faculty of Education Sciences and CLINURSID Research Group, Universidade de Santiago de Compostela, Santiago de Compostela, Spain; Simulation and Intensive Care Unit of Santiago (SICRUS) Research Group, Health Research Institute of Santiago, University Hospital of Santiago de Compostela-CHUS, Santiago de Compostela, Spain
- Lucija Gosak: Faculty of Health Sciences, University of Maribor, Maribor, Slovenia
- Yiqun Lin: KidSIM-ASPIRE Simulation Research Program, Alberta Children’s Hospital, University of Calgary, Canada
- the Education Implementation Team Task Force of the International Liaison Committee on Resuscitation (ILCOR)
2. Connolly M, Iohom G, O'Brien N, Volz J, O'Muircheartaigh A, Serchan P, Biculescu A, Gadre KG, Soare C, Griseto L, Shorten G. Delivering clinical tutorials to medical students using the Microsoft HoloLens 2: A mixed-methods evaluation. BMC Med Educ 2024;24:498. [PMID: 38704522] [PMCID: PMC11070104] [DOI: 10.1186/s12909-024-05475-2]
Abstract
BACKGROUND Mixed reality offers potential educational advantages for the delivery of clinical teaching. Holographic artefacts can be rendered within a shared learning environment using devices such as the Microsoft HoloLens 2. In addition to facilitating remote access to clinical events, mixed reality may provide a means of sharing mental models, including the vertical and horizontal integration of curricular elements at the bedside. This study aimed to evaluate the feasibility of delivering clinical tutorials using the Microsoft HoloLens 2 and the learning efficacy achieved.
METHODS Following receipt of institutional ethical approval, tutorials on preoperative anaesthetic history taking and upper airway examination were facilitated by a tutor who wore the HoloLens device. The tutor interacted face to face with a patient, and two-way audio-visual interaction with groups of students located in a separate tutorial room was facilitated using the HoloLens 2 and Microsoft Teams. Holographic functions were employed by the tutor. The tutor completed the System Usability Scale; the tutor, technical facilitator, patients, and students provided quantitative and qualitative feedback; and three students participated in semi-structured feedback interviews. Students completed pre-tutorial, post-tutorial, and end-of-year examinations on the tutorial topics.
RESULTS Twelve patients and 78 students participated across 12 separate tutorials. Five students did not complete the examinations and were excluded from efficacy calculations. Student feedback contained 90 positive comments, including praise for the technology's ability to broadcast the tutor's point of vision, and 62 negative comments, in which students noted issues with audio-visual quality and concerns that the tutorial was not as beneficial as traditional in-person clinical tutorials. The technology and tutorial structure were viewed favourably by the tutor, facilitator, and patients. Significant improvement was observed between students' pre- and post-tutorial MCQ scores (mean 59.2% vs 84.7%, p < 0.001).
CONCLUSIONS This study demonstrates the feasibility of using the HoloLens 2 to facilitate remote bedside tutorials that incorporate holographic learning artefacts. Students' examination performance supports substantial learning of the tutorial topics, and the tutorial structure was agreeable to students, patients, and tutor. Our results support the feasibility of offering effective clinical teaching and learning opportunities using the HoloLens 2. However, the technical limitations and costs of the device are significant, and further research comparing this tutorial format with in-person tutorials is required before wider roll-out of the technology can be recommended.
Affiliation(s)
- Murray Connolly: Cork University Hospital and University College Cork, Cork, Ireland
- Gabriella Iohom: Cork University Hospital and University College Cork, Cork, Ireland
- Corina Soare: Cork University Hospital and University College Cork, Cork, Ireland
- George Shorten: Cork University Hospital and University College Cork, Cork, Ireland
3. Strauss DJ, Francis AL, Vibell J, Corona-Strauss FI. The role of attention in immersion: The two-competitor model. Brain Res Bull 2024;210:110923. [PMID: 38462137] [DOI: 10.1016/j.brainresbull.2024.110923]
Abstract
Currently, we face exponentially increasing interest in immersion, especially sensory-driven immersion, due mainly to the rapid development of ideas and business models centered around a digital virtual universe, as well as the increasing availability of affordable immersive technologies for education, communication, and entertainment. However, a clear definition of 'immersion', in terms of established neurocognitive concepts and measurable properties, remains elusive, slowing research on the human side of immersive interfaces. To address this problem, we propose a conceptual, taxonomic model of attention in immersion. We argue that (a) modeling immersion theoretically, as well as studying it experimentally, requires a detailed characterization of the role of attention in immersion, even though (b) attention, while necessary, cannot be a sufficient condition for defining immersion. Our broader goal is to characterize immersion in terms compatible with established psychophysiological measures that could then, in principle, be used for the assessment and eventually the optimization of an immersive experience. We start from the perspective that immersion requires the projection of attention onto an induced reality, and we build on accepted taxonomies of different modes of attention to develop our two-competitor model. The two-competitor model allows for a quantitative implementation, has a straightforward graphical interpretation, and highlights the important link between different modes of attention and affect in studying immersion.
Affiliation(s)
- Daniel J Strauss: Systems Neuroscience & Neurotechnology Unit, Faculty of Medicine, Saarland University & School of Engineering, htw saar, Homburg/Saar, Germany
- Alexander L Francis: Speech Perception & Cognitive Effort Lab, Dept. of Speech, Language & Hearing Sciences, Purdue University, West Lafayette, IN, USA
- Jonas Vibell: Brain & Behavior Lab, Dept. of Psychology, University of Hawai'i at Manoa, Honolulu, HI, USA
- Farah I Corona-Strauss: Systems Neuroscience & Neurotechnology Unit, Faculty of Medicine, Saarland University & School of Engineering, htw saar, Homburg/Saar, Germany
4. Schweizer F, Willinger L, Oberhoffer-Fritz R, Müller J, Jonas S, Reimer LM. KIJANI: Designing a Physical Activity Promoting Collaborative Augmented Reality Game. Stud Health Technol Inform 2024;313:113-120. [PMID: 38682514] [DOI: 10.3233/shti240021]
Abstract
BACKGROUND There is an increased need for physical activity among children and adolescents. KIJANI, a mobile augmented reality game, is designed to increase physical activity through gamified exercises.
OBJECTIVES The primary aim of this study was to gather feedback on the design and implementation of potentially physical-activity-increasing features in KIJANI.
METHODS A mixed-methods study (n=13) evaluated newly implemented game design features quantitatively, by measuring physical activity, and qualitatively, through participant feedback.
RESULTS Preliminary results are limited and require further study. Participant feedback shows a positive trend and highlights the game's potential effectiveness.
CONCLUSION KIJANI shows potential for increasing physical activity among children and adolescents through gamified exercise. Future work will refine the game based on user feedback and the findings presented in related work; the game's long-term impact remains to be explored.
Affiliation(s)
- Florian Schweizer: School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
- Laura Willinger: Chair of Preventive Pediatrics, Technical University of Munich, Munich, Germany
- Jan Müller: Chair of Preventive Pediatrics, Technical University of Munich, Munich, Germany
- Stephan Jonas: Institute for Digital Medicine, University Hospital Bonn, Bonn, Germany
- Lara Marie Reimer: School of Computation, Information and Technology, Technical University of Munich, Munich, Germany; Institute for Digital Medicine, University Hospital Bonn, Bonn, Germany
5. Hoogendoorn EM, Geerse DJ, van Dam AT, Stins JF, Roerdink M. Gait-modifying effects of augmented-reality cueing in people with Parkinson's disease. Front Neurol 2024;15:1379243. [PMID: 38654737] [PMCID: PMC11037397] [DOI: 10.3389/fneur.2024.1379243]
Abstract
Introduction External cueing can improve gait in people with Parkinson's disease (PD), but there is a need for wearable, personalized, and flexible cueing techniques that can exploit the power of action-relevant visual cues. Augmented reality (AR) involving headsets or glasses is a promising technology in this regard. This study examines the gait-modifying effects of real-world and AR cueing in people with PD.
Methods Twenty-one people with PD performed walking tasks augmented with either real-world or AR cues imposing changes in gait speed, step length, crossing step length, and step height. Two AR headsets differing in AR field-of-view (AR-FOV) size were used to evaluate potential effects of AR-FOV size on the gait-modifying effects of AR cues, as well as on the head orientation required for interacting with them.
Results Participants modified their gait speed, step length, and crossing step length significantly in response to changes in both real-world and AR cues, with step lengths also being statistically equivalent to those imposed. Due to technical issues, step-height modulation could not be analyzed. AR-FOV size had no significant effect on gait modifications, although small differences in head orientation were observed between AR headsets when interacting with nearby objects.
Conclusion People with PD can modify their gait to AR cues as effectively as to real-world cues with state-of-the-art AR headsets, for which AR-FOV size is no longer a limiting factor. Future studies are warranted to explore the merit of a library of cue modalities and individually tailored AR cueing for facilitating gait in real-world environments.
Affiliation(s)
- Eva M. Hoogendoorn: Department of Human Movement Sciences, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam Movement Sciences, Amsterdam, Netherlands
6. Kolpan KE, Vadala J, Dhanaliwala A, Chao T. Utilizing augmented reality for reconstruction of fractured, fragmented and damaged craniofacial remains in forensic anthropology. Forensic Sci Int 2024;357:111995. [PMID: 38513528] [DOI: 10.1016/j.forsciint.2024.111995]
Abstract
Forensic anthropologists are often confronted with human remains that have been damaged by trauma, fire, or postmortem taphonomic alteration, frequently resulting in the fracture and fragmentation of skeletal elements. The augmented reality (AR) technology introduced in this paper builds on familiar 3D visualization methods, using them to create three-dimensional holographic meshes of skeletal fragments that can be manipulated, tagged, and examined by the user. Here, CT scans, neural radiance field (NeRF) artificial intelligence software, and Unreal Engine production software are used to construct a three-dimensional holographic image that can be manipulated with HoloLens™ technology to analyze fracture margins and reconstruct craniofacial elements without damaging fragile remains through excessive handling. This gives forensic anthropologists a means of assessing aspects of the biological profile and traumatic injuries without risking further damage to the skeleton. It can also be used by students and professional anthropologists to practice refitting before reconstructing craniofacial fragments, if refitting is necessary. Additionally, the holographic images can be used to explain complicated concepts in a courtroom without the emotional response associated with using bony elements as courtroom exhibits.
Affiliation(s)
- Katharine E Kolpan: Department of Culture, Society and Justice, University of Idaho, 875 Perimeter Drive, Moscow, ID 83844, USA
- Jeffrey Vadala: Penn Neurology Virtual Reality Laboratory, University of Pennsylvania, Richards Medical Laboratories, 3700 Hamilton Walk, Philadelphia, PA 19104, USA
- Ali Dhanaliwala: Department of Radiology, Penn Presbyterian Medical Center, 51 N. 39th Street, Philadelphia, PA 19104, USA
- Tiffany Chao: Department of Otorhinolaryngology - Head and Neck Surgery, Penn Presbyterian Medical Center, 51 N. 39th Street, Philadelphia, PA 19104, USA
7. Leung R, Shi G. Building Your Future Holographic Mentor: Can We Use Mixed Reality Holograms for Visual Spatial Motor Skills Acquisition in Surgical Education? Surg Innov 2024;31:82-91. [PMID: 37916497] [PMCID: PMC10773164] [DOI: 10.1177/15533506231211844]
Abstract
Learning surgical skills requires critical visual-spatial motor skills. Current learning methods rely on costly and limited in-person teaching, supplemented by videos, textbooks, and cadaveric labs. Increasingly limited healthcare resources and in-person training opportunities have led to growing concerns about trainees' skill acquisition. Recent mixed reality (MR) devices offer an attractive solution to these resource barriers by providing three-dimensional holographic representations of reality that mimic in-person experiences in a portable, individualized, and cost-effective form. We developed and evaluated two holographic MR models to explore the feasibility of visual-spatial motor skill acquisition from technical development, learning, and usability perspectives. In the first model, a pair of holographic hands was projected in front of the trainee, and participants were evaluated on their ability to learn complex hand motions in comparison with traditional video- and apprenticeship-based learning. The second model displayed a 3D holographic model of the middle and inner ear with labeled anatomical structures that users could explore, and user-experience feedback was obtained. Our studies demonstrated that scores between MR and apprenticeship learning were comparable. All participants felt MR was an effective learning tool, and most noted that the MR models were better than existing didactic methods of learning. Identified advantages of MR included true 3D spatial representation, improved visualization of smaller structures in detail by upscaling the models, and improved interactivity. Our results demonstrate that holographic learning can mimic in-person learning for visual-spatial motor skills and could be an effective new form of self-directed apprenticeship learning.
Affiliation(s)
- Regina Leung: Division of Plastic and Reconstructive Surgery, Western University, London, Canada
- Ge Shi: Division of General Surgery, Western University, London, Canada
8. Fijačko N, Metličar Š, Kleesiek J, Egger J, Chang TP. Virtual Reality, Augmented Reality, Augmented Virtuality, or Mixed Reality in cardiopulmonary resuscitation: Which Extended Reality am I using for teaching adult basic life support? Resuscitation 2023;192:109973. [PMID: 37730097] [DOI: 10.1016/j.resuscitation.2023.109973]
Affiliation(s)
- Nino Fijačko: University of Maribor, Faculty of Health Sciences, Maribor, Slovenia; ERC Research Net, Niel, Belgium; Maribor University Medical Centre, Maribor, Slovenia
- Špela Metličar: University of Maribor, Faculty of Health Sciences, Maribor, Slovenia; Medical Dispatch Centre Maribor, University Clinical Centre Ljubljana, Ljubljana, Slovenia
- Jens Kleesiek: Institute for Artificial Intelligence in Medicine, Essen University Hospital, Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, Essen, Germany; Department of Physics, TU Dortmund University, Dortmund, Germany; German Cancer Consortium, Essen, Germany
- Jan Egger: Institute for Artificial Intelligence in Medicine, Essen University Hospital, Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, Essen, Germany; Center for Virtual and Extended Reality in Medicine, Essen University Hospital, Essen, Germany
- Todd P Chang: Children's Hospital Los Angeles, Las Madrinas Simulation Center, Los Angeles, CA, USA
9. Zhang X, Keller A, Armand M, Gomez AM. Feasibility Study of Using Augmented Mirrors for Alignment Task during Orthopaedic Procedures in Mixed Reality. IEEE Int Symp Mixed Augment Real Adjunct (ISMAR-Adjunct) 2023;2023:650-651. [PMID: 38566770] [PMCID: PMC10986428] [DOI: 10.1109/ismar-adjunct60411.2023.00139]
Abstract
Accurate depth estimation poses a significant challenge in egocentric augmented reality (AR), particularly for precision-dependent tasks in the medical field, such as needle or tool insertion during percutaneous procedures. Augmented mirrors (AMs) offer a unique solution to this problem by providing additional non-egocentric viewpoints that enhance spatial understanding of an AR scene. Despite the perceptual advantages of AMs, their practical utility has yet to be thoroughly tested. In this work, we present results from a pilot study involving five participants tasked with simulating epidural injection procedures in an AR environment, both with and without the aid of an AM. Our findings indicate that using an AM reduces mental effort while improving alignment accuracy. These results highlight the potential of AMs as a powerful tool for AR-enabled medical procedures, setting the stage for future exploration involving medical professionals.
10. Cade AE, Stevens K, Lee A, Baptista L. Differences in learning retention and experience of augmented reality notes compared to traditional paper notes in a chiropractic technique course: A randomized trial. J Chiropr Educ 2023;37:137-150. [PMID: 37270710] [DOI: 10.7899/jce-21-33]
Abstract
OBJECTIVE To investigate whether a written guide or an augmented reality (AR) guide improves free recall of diversified chiropractic adjusting technique, and to capture participants' impressions of the study in a poststudy questionnaire.
METHODS Thirty-eight chiropractic students were evaluated for recall of diversified listings (a nomenclature denoting vertebral malposition and correction) before and after reviewing either an AR or a written guide. The vertebral segments used were C7 and T6. Two randomized groups reviewed an original course written guide (n = 18) or a new AR guide (n = 20). A Wilcoxon-Mann-Whitney test (C7) and a t test (T6) compared group differences in reevaluation scores. A poststudy questionnaire captured participants' impressions of the study.
RESULTS Neither group showed significant differences in free recall scores after reviewing the guides for C7 or T6. The poststudy questionnaire suggested several strategies to improve current teaching material, such as adding more detail to the written guides and organizing content into smaller blocks.
CONCLUSION Use of an AR or written guide does not appear to change participants' free recall when used to review diversified technique listings. The poststudy questionnaire was useful for identifying strategies to improve currently used teaching material.
11. Haouchine N, Dorent R, Juvekar P, Torio E, Wells WM, Kapur T, Golby AJ, Frisken S. Learning Expected Appearances for Intraoperative Registration during Neurosurgery. Med Image Comput Comput Assist Interv 2023;14228:227-237. [PMID: 38371724] [PMCID: PMC10870253] [DOI: 10.1007/978-3-031-43996-4_22]
Abstract
We present a novel method for intraoperative patient-to-image registration by learning Expected Appearances. Our method uses preoperative imaging to synthesize patient-specific expected views through a surgical microscope for a predicted range of transformations, and estimates the camera pose by minimizing the dissimilarity between the intraoperative 2D view through the optical microscope and the synthesized expected texture. In contrast to conventional methods, our approach transfers the processing tasks to the preoperative stage, thereby reducing the impact of the low-resolution, distorted, and noisy intraoperative images that often degrade registration accuracy. We applied our method in the context of neuronavigation during brain surgery and evaluated it on synthetic data and on retrospective data from 6 clinical cases. Our method outperformed state-of-the-art methods and achieved accuracies that met current clinical standards.
Affiliation(s)
- Nazim Haouchine: Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Reuben Dorent: Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Parikshit Juvekar: Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Erickson Torio: Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- William M Wells: Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA; Massachusetts Institute of Technology, Cambridge, MA, USA
- Tina Kapur: Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Alexandra J Golby: Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Sarah Frisken: Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
12. Hong J, Kong HJ. Digital Therapeutic Exercises Using Augmented Reality Glasses for Frailty Prevention among Older Adults. Healthc Inform Res 2023;29:343-351. [PMID: 37964456] [PMCID: PMC10651397] [DOI: 10.4258/hir.2023.29.4.343]
Abstract
OBJECTIVES The objective of this study was to investigate the effects of a digital therapeutic exercise platform, accessed through augmented reality (AR) glasses, for pre-frail or frail elderly individuals. A tablet-based exercise program was used for the control group, and a non-inferiority assessment was employed.
METHODS The participants were older adult women aged 65 years and older residing in Incheon, South Korea. The digital therapeutic exercise program, involving AR glasses or tablet-based exercise, was administered twice a week for 12 weeks, with gradually increasing exercise duration. Statistical analysis used the t-test and the Wilcoxon rank sum test for the non-inferiority assessment.
RESULTS In the primary efficacy assessment of change in lower limb strength, the intervention group (mean change, 5.46) was non-inferior to the control group (mean change, 4.83), with a between-group mean difference of 0.63 (95% confidence interval, -2.33 to 3.58). Changes in body composition and physical fitness-related variables did not differ significantly between the groups. However, the intervention group demonstrated a significantly greater increase in cardiorespiratory endurance (p < 0.005) and a significantly larger decrease in the frailty index (p < 0.001).
CONCLUSIONS An AR-based digital therapeutic program contributed significantly and positively to improved cardiovascular endurance and reduced indicators of aging among older adults. These findings underscore the value of digital therapeutics in mitigating the effects of aging.
Affiliation(s)
- Jeeyoung Hong: Exercise Prescription Research Institute, Kongju National University, Kongju, Korea; Biomedical Research Institute, Seoul National University Hospital, Seoul, Korea
- Hyoun-Joong Kong: Department of Transdisciplinary Medicine and Innovative Medical Technology Research Institute, Seoul National University Hospital, Seoul, Korea; Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, Korea
13. Shidende D, Kessel T, Treydte A, Moebs S. A Systematic Literature Review of Accessibility Evaluation Methods for Augmented Reality Applications. Stud Health Technol Inform 2023;306:575-582. [PMID: 37638964] [DOI: 10.3233/shti230681]
Abstract
Augmented reality is becoming increasingly significant in people's everyday lives across different sectors. For users with disabilities in particular, augmented reality can serve both as an instructional tool and as an assistive technology, making it a vital tool for this group. For such an important tool, it is essential to understand how applications are evaluated in order to improve them and extend their accessibility. To that end, a systematic literature review of peer-reviewed articles published between 2012 and 2022 was conducted to discover which methods, metrics, and tools/techniques researchers use during the accessibility evaluation of augmented reality applications. The PRISMA methodology allowed us to identify, screen, and include 60 articles from three databases. The findings show that most researchers use task scenarios as the method, qualitative feedback as the metric, and questionnaires as the tool to collect data for accessibility evaluation. Conclusions and directions for future studies are also discussed.
Affiliation(s)
- Deogratias Shidende: Baden-Württemberg Cooperative State University (DHBW) Heidenheim, Germany; University of Hohenheim, Stuttgart, Germany
- Thomas Kessel: Baden-Württemberg Cooperative State University (DHBW) Stuttgart, Germany
- Sabine Moebs: Baden-Württemberg Cooperative State University (DHBW) Heidenheim, Germany
14
Golomingi R, Dobay A, Franckenberg S, Ebert L, Sieberth T. Augmented reality in forensics and forensic medicine - Current status and future prospects. Sci Justice 2023; 63:451-455. PMID: 37453776. DOI: 10.1016/j.scijus.2023.04.009.
Abstract
Forensic investigations require a wide variety of knowledge and expertise from each specialist involved. With increasing digitization and advanced technical possibilities, the traditional setup of a computer with a screen for visualization and a mouse and keyboard for interaction has limitations, especially when visualizing content in relation to the real world. Augmented reality (AR) can be used in such instances to support investigators in various tasks at the scene as well as later in the investigation process. In this article, we present current applications of AR in forensics and forensic medicine, the technological basics of AR, and the advantages that AR brings to forensic investigations. Furthermore, we take a brief look at other fields of application and at future developments of AR in forensics.
Affiliation(s)
- Raffael Golomingi: 3D Center Zurich, Institute of Forensic Medicine, University of Zurich, 8057 Zurich, Switzerland
- Akos Dobay: 3D Center Zurich, Institute of Forensic Medicine, University of Zurich, 8057 Zurich, Switzerland
- Sabine Franckenberg: 3D Center Zurich, Institute of Forensic Medicine, University of Zurich, 8057 Zurich, Switzerland; Diagnostic and Interventional Radiology, University Hospital Zurich, 8091 Zurich, Switzerland
- Lars Ebert: 3D Center Zurich, Institute of Forensic Medicine, University of Zurich, 8057 Zurich, Switzerland
- Till Sieberth: 3D Center Zurich, Institute of Forensic Medicine, University of Zurich, 8057 Zurich, Switzerland
15
Koo K, Park T, Jeong H, Khang S, Koh CS, Park M, Kim MJ, Jung HH, Shin J, Kim KW, Lee J. Simulation Method for the Physical Deformation of a Three-Dimensional Soft Body in Augmented Reality-Based External Ventricular Drainage. Healthc Inform Res 2023; 29:218-227. PMID: 37591677. PMCID: PMC10440195. DOI: 10.4258/hir.2023.29.3.218.
Abstract
OBJECTIVES Intraoperative navigation reduces the risk of major complications and increases the likelihood of optimal surgical outcomes. This paper presents an augmented reality (AR)-based simulation technique for ventriculostomy that visualizes brain deformations caused by the movements of a surgical instrument in a three-dimensional brain model. This is achieved by utilizing a position-based dynamics (PBD) physical deformation method on a preoperative brain image. METHODS An infrared camera-based AR surgical environment aligns the real-world space with a virtual space and tracks the surgical instruments. For a realistic representation and reduced simulation computation load, a hybrid geometric model is employed, which combines a high-resolution mesh model and a multiresolution tetrahedron model. Collision handling is executed when a collision between the brain and surgical instrument is detected. Constraints are used to preserve the properties of the soft body and ensure stable deformation. RESULTS The experiment was conducted once in a phantom environment and once in an actual surgical environment. The tasks of inserting the surgical instrument into the ventricle using only the navigation information presented through the smart glasses and verifying the drainage of cerebrospinal fluid were evaluated. These tasks were successfully completed, as indicated by the drainage, and the deformation simulation speed averaged 18.78 fps. CONCLUSIONS This experiment confirmed that the AR-based method for external ventricular drain surgery was beneficial to clinicians.
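Position-based dynamics (PBD), the deformation method named above, works by iteratively projecting constraints directly on particle positions rather than integrating forces. As a hedged illustration (the paper's actual solver uses a hybrid mesh/tetrahedron model with collision handling, which is far more involved), a single PBD distance-constraint projection between two particles can be sketched as:

```python
def project_distance_constraint(p1, p2, rest_len, w1=1.0, w2=1.0, stiffness=1.0):
    """One PBD distance-constraint projection: move two particles along
    their connecting axis so their separation approaches rest_len.
    w1, w2 are inverse masses (w=0 pins a particle in place)."""
    dx = [a - b for a, b in zip(p1, p2)]
    dist = sum(c * c for c in dx) ** 0.5
    if dist == 0.0 or w1 + w2 == 0.0:
        return p1, p2  # degenerate: nothing to project
    # Positive when stretched, negative when compressed.
    corr = stiffness * (dist - rest_len) / (dist * (w1 + w2))
    p1 = [a - w1 * corr * c for a, c in zip(p1, dx)]
    p2 = [b + w2 * corr * c for b, c in zip(p2, dx)]
    return p1, p2
```

In a full solver this projection runs over all constraints for several iterations per frame, which is what lets the cited system sustain interactive rates (the paper reports ~18.78 fps for the deformation simulation).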
Affiliation(s)
- Kyoyeong Koo: School of Computer Science and Engineering, Soongsil University, Seoul, Korea
- Taeyong Park: Department of Biomedical Informatics, Hallym University Medical Center, Anyang, Korea
- Heeryeol Jeong: School of Computer Science and Engineering, Soongsil University, Seoul, Korea
- Seungwoo Khang: School of Computer Science and Engineering, Soongsil University, Seoul, Korea
- Chin Su Koh: Department of Neurosurgery, Yonsei University College of Medicine, Seoul, Korea
- Minkyung Park: Department of Neurosurgery, Yonsei University College of Medicine, Seoul, Korea; Brain Korea 21 PLUS Project for Medical Science and Brain Research Institute, Yonsei University College of Medicine, Seoul, Korea
- Myung Ji Kim: Department of Neurosurgery, Korea University Ansan Hospital, Ansan, Korea
- Hyun Ho Jung: Department of Neurosurgery, Yonsei University College of Medicine, Seoul, Korea
- Juneseuk Shin: Department of Systems Management Engineering, Sungkyunkwan University, Suwon, Korea
- Kyung Won Kim: Department of Radiology & Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Jeongjin Lee: School of Computer Science and Engineering, Soongsil University, Seoul, Korea; iAID Inc., Seoul, Korea
16
Nikitas C, Kikidis D, Pardalis A, Tsoukatos M, Papadopoulou S, Bibas A, Bamiou DE. Head mounted display effect on vestibular rehabilitation exercises performance. J Frailty Sarcopenia Falls 2023; 8:66-73. PMID: 37275662. PMCID: PMC10233325. DOI: 10.22540/jfsf-08-066.
Abstract
Objectives Vestibular rehabilitation clinical guidelines document the additional benefit offered by Mixed Reality environments in reducing symptoms and improving balance in peripheral vestibular hypofunction. The HOLOBalance platform offers vestibular rehabilitation exercises in an Augmented Reality (AR) environment, projecting them using a low-cost Head Mounted Display. This pilot study investigates the effect of the AR equipment on performance in three of the commonest vestibular rehabilitation exercises. Methods Twenty-five healthy adults (12/25 women) participated, executing the predetermined exercises with or without the AR equipment. Results A statistically significant difference was found only in the frequency of head movements in the yaw plane during execution of a vestibular adaptation exercise (0.97 Hz; 95% CI (0.56, 1.39), p<0.001). In terms of difficulty of exercise execution, use of the equipment led to statistically significant differences in the vestibular-oculomotor adaptation exercise in the pitch plane (OR=3.64, 95% CI (-0.22, 7.50), p=0.049) and in the standing exercise (OR=28.28, 95% CI (23.6, 32.96), p=0.0001). Conclusion The use of AR equipment in vestibular rehabilitation protocols should be adapted to clinicians' needs.
Affiliation(s)
- Christos Nikitas: Department of Otorhinolaryngology, Head and Neck Surgery, National and Kapodistrian University of Athens, Hippocrateion General Hospital, Athens, Greece
- Dimitris Kikidis: Department of Otorhinolaryngology, Head and Neck Surgery, National and Kapodistrian University of Athens, Hippocrateion General Hospital, Athens, Greece
- Athanasios Pardalis: Unit of Medical Technology and Intelligent Information Systems, Department of Materials Science and Engineering, University of Ioannina, Ioannina, Greece
- Michalis Tsoukatos: Department of Otorhinolaryngology, Head and Neck Surgery, National and Kapodistrian University of Athens, Hippocrateion General Hospital, Athens, Greece
- Sofia Papadopoulou: Department of Otorhinolaryngology, Head and Neck Surgery, National and Kapodistrian University of Athens, Hippocrateion General Hospital, Athens, Greece
- Athanasios Bibas: Department of Otorhinolaryngology, Head and Neck Surgery, National and Kapodistrian University of Athens, Hippocrateion General Hospital, Athens, Greece
- Doris E. Bamiou: Ear Institute, University College London, London, United Kingdom; Biomedical Research Centre Hearing and Deafness, University College London Hospitals, London, United Kingdom
17
Schneider C, Rameder P, Kolmann P, Trukeschitz B. Remote Assistance for Home Care Workers: Concept and Technical Implementation at a Glance. Stud Health Technol Inform 2023; 301:39-47. PMID: 37172150. DOI: 10.3233/shti230009.
Abstract
BACKGROUND Long-term care faces severe challenges on the supply side (shortages of formal and informal carers) as well as on the demand side (an increasing number of care-dependent people). To cope with these challenges, new forms of support for the professional care network are needed. OBJECTIVES This paper describes the concept and implementation of a Remote Care Assist (RCA) service, consisting of a web application for the Care Expert Center (CXC) and Remote Support (RS) applications for the HoloLens 2 as well as for Android and iOS smartphones. METHODS Using the evidence-based and user-centred innovation process (EUIP), a Remote Care Assist service was conceptualized and implemented for home care service settings in three European countries. RESULTS After five iterations within two phases of the EUIP, the final feature set of the RCA service was determined and implemented. CONCLUSION By working closely with the target group, it was possible to identify potential hurdles and additional requirements, such as a well-thought-out interaction concept for the HoloLens and good organizational embedding of the service.
Affiliation(s)
- Cornelia Schneider: University of Applied Sciences Wiener Neustadt, Wiener Neustadt, Austria
- Philipp Rameder: University of Applied Sciences Wiener Neustadt, Wiener Neustadt, Austria
- Philipp Kolmann: University of Applied Sciences Wiener Neustadt, Wiener Neustadt, Austria
18
Yoo I, Kong HJ, Joo H, Choi Y, Kim SW, Lee KE, Hong J. User Experience of Augmented Reality Glasses-based Tele-Exercise in Elderly Women. Healthc Inform Res 2023; 29:161-167. PMID: 37190740. DOI: 10.4258/hir.2023.29.2.161.
Abstract
OBJECTIVES The purpose of this study was to identify any difference in user experience between tablet- and augmented reality (AR) glasses-based tele-exercise programs in elderly women. METHODS Participants in the AR group (n = 14) connected Nreal glasses with smartphones to display a pre-recorded exercise program, while each member of the tablet group (n = 13) participated in the same exercise program using an all-in-one personal computer. The program included sitting or standing on a chair, bare-handed calisthenics, and muscle strengthening using an elastic band. The exercise movements were presented first for the upper and then the lower extremities, and the total exercise time was 40 minutes (5 minutes of warm-up exercises, 30 minutes of main exercises, and 5 minutes of cool-down exercises). To evaluate the user experience, a questionnaire consisting of a 7-point Likert scale was used as a measurement tool. In addition, the Wilcoxon rank-sum test was used to assess differences between the two groups. RESULTS Of the six user experience scales, attractiveness (p = 0.114), stimulation (p = 0.534), and novelty (p = 0.916) did not differ significantly between the groups. However, efficiency (p = 0.006), perspicuity (p = 0.008), and dependability (p = 0.049) did vary significantly between groups. CONCLUSIONS When developing an AR glasses-based exercise program for the elderly, the efficiency, clarity, and stability of the program must be considered to meet the participants' needs.
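Ordinal Likert-scale responses like these are commonly compared between two independent groups with the Wilcoxon rank-sum (Mann-Whitney U) test; in practice one would call a statistics library such as `scipy.stats.mannwhitneyu`, but the statistic itself reduces to a rank computation. A minimal pure-Python sketch of the U statistic (illustrative only, not the authors' analysis code):

```python
def midranks(values):
    """Assign ranks 1..n, averaging ranks over ties (midranks)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x vs. sample y."""
    ranks = midranks(list(x) + list(y))
    r1 = sum(ranks[: len(x)])  # rank sum of the first sample
    return r1 - len(x) * (len(x) + 1) / 2
```

The p-values reported in the abstract would then come from comparing U against its null distribution (or a normal approximation with a tie correction), which a library routine handles.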
Affiliation(s)
- Inhwa Yoo: Medical Big Data Research Center, Seoul National University College of Medicine, Seoul, Korea
- Hyoun-Joong Kong: Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, Korea; Department of Medicine, Seoul National University Hospital, Seoul, Korea
- Hyunjin Joo: Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, Korea
- Yeonjin Choi: Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, Korea
- Suk Wha Kim: Medical Big Data Research Center, Seoul National University College of Medicine, Seoul, Korea; Department of Plastic and Reconstructive Surgery, CHA Bundang Medical Center, CHA University School of Medicine, Seongnam, Korea
- Kyu Eun Lee: Medical Big Data Research Center, Seoul National University College of Medicine, Seoul, Korea; Department of Surgery, Seoul National University College of Medicine, Seoul, Korea
- Jeeyoung Hong: Medical Big Data Research Center, Seoul National University College of Medicine, Seoul, Korea
19
Mangano FG, Admakin O, Lerner H, Mangano C. Artificial Intelligence and Augmented Reality for Guided Implant Surgery Planning: a Proof of Concept. J Dent 2023; 133:104485. PMID: 36965859. DOI: 10.1016/j.jdent.2023.104485.
Abstract
PURPOSE To present a novel protocol for authentic three-dimensional (3D) planning of dental implants using artificial intelligence (AI) and augmented reality (AR). METHODS The novel protocol consists of (1) 3D data acquisition with an intraoral scanner (IOS) and cone-beam computed tomography (CBCT); (2) application of AI for CBCT segmentation to obtain standard tessellation language (STL) models and automatic alignment with IOS models; (3) loading of selected STL models into the AR system and surgical planning with holograms; (4) surgical guide design with open-source computer-assisted design (CAD) software; and (5) surgery on the patient. RESULTS This novel protocol is effective and time-efficient when used for planning simple cases of static guided implant surgery in the partially edentulous patient. The clinician can plan the implants in an authentic 3D environment without using any radiological guided surgery software. The precision of implant placement appears clinically acceptable, with minor deviations. CONCLUSIONS AI and AR technologies can be successfully used for planning guided implant surgery, enabling authentic 3D planning that may replace conventional guided surgery software. However, further clinical studies are needed to validate this protocol. STATEMENT OF CLINICAL RELEVANCE The combined use of AI and AR may change the perspectives of modern guided implant surgery, enabling authentic 3D planning that may replace conventional guided surgery software.
Affiliation(s)
- Francesco Guido Mangano: Department of Pediatric, Preventive Dentistry and Orthodontics, Sechenov First State Medical University, Moscow, Russian Federation; Honorary Professor in Restorative Dental Sciences, Faculty of Dentistry, The University of Hong Kong, China
- Oleg Admakin: Department of Pediatric, Preventive Dentistry and Orthodontics, Sechenov First State Medical University, Moscow, Russian Federation
- Henriette Lerner: Academic Teaching and Research Institution of Johann Wolfgang Goethe University, Frankfurt, Germany
20
Farronato M, Torres A, Pedano MS, Jacobs R. Novel method for augmented reality guided endodontics: an in vitro study. J Dent 2023:104476. PMID: 36905949. DOI: 10.1016/j.jdent.2023.104476.
Abstract
OBJECTIVE The aim of this study was to evaluate the accuracy of a novel augmented reality (AR) method for guided endodontic access cavity preparation in 3D-printed jaws. METHODS Two operators with different levels of experience in endodontics performed pre-planned, virtually guided access cavities through a novel markerless AR system, developed by some of the authors, on three sets of jaw models printed with a 3D printer (Objet Connex 350, Stratasys) and mounted on a phantom. After the treatment, a post-operative high-resolution CBCT scan (NewTom VGI Evo, Cefla) was taken of each model and registered to the pre-operative model. All access cavities were then digitally reconstructed by filling the cavity area using 3D medical software (3-Matic 15.0, Materialise). For the anterior teeth and premolars, the deviation at the coronal and apical entry points as well as the angular deviation of the access cavity were compared to the virtual plan. For the molars, the deviation at the coronal entry point was compared to the virtual plan. Additionally, the surface area of all access cavities at the entry point was measured and compared to the virtual plan. Descriptive statistics were performed for each parameter, with 95% confidence intervals. RESULTS A total of 90 access cavities were drilled to a depth of 4 mm inside the tooth. The mean deviation for the anterior teeth and premolars was 0.51 mm at the entry point and 0.77 mm at the apical point, with a mean angular deviation of 8.5° and a mean surface overlap of 57%. The mean deviation for the molars at the entry point was 0.63 mm, with a mean surface overlap of 82%. CONCLUSION The use of AR as a digital guide for endodontic access cavity drilling on different teeth showed promising results and may have potential for clinical use. However, further development and research are needed before in vivo validation to overcome the limitations of the study.
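The accuracy metrics reported here (entry-point deviation in millimetres and angular deviation in degrees between the planned and drilled cavity axes) are straightforward to compute once planned and post-operative geometry are registered in the same coordinate frame. A hypothetical sketch of the two metrics (illustrative only, not the study's own code):

```python
import math

def point_deviation(planned, actual):
    """Euclidean distance (e.g. in mm) between planned and drilled
    entry points given as (x, y, z) tuples."""
    return math.dist(planned, actual)

def angular_deviation(axis_planned, axis_actual):
    """Angle in degrees between planned and drilled cavity axes,
    given as direction vectors (need not be unit length)."""
    dot = sum(a * b for a, b in zip(axis_planned, axis_actual))
    na = math.sqrt(sum(a * a for a in axis_planned))
    nb = math.sqrt(sum(b * b for b in axis_actual))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cosang = max(-1.0, min(1.0, dot / (na * nb)))
    return math.degrees(math.acos(cosang))
```

Averaging these per-cavity values over the 90 drilled cavities would yield summary figures of the kind reported (e.g. mean entry-point deviation 0.51 mm, mean angular deviation 8.5°).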
21
Afrashtehfar KI, Yang JW, Al-Sammarraie A, Chen H, Saeed MH. Pre-clinical undergraduate students' perspectives on the adoption of virtual and augmented reality to their dental learning experience: A one-group pre- and post-test design protocol. F1000Res 2023; 10:473. PMID: 36703700. PMCID: PMC9837452. DOI: 10.12688/f1000research.53059.2.
Abstract
Background: We live in a time when traditional education has rapidly incorporated online modalities due to recent SARS-CoV-2 (COVID-19) safety measures such as social distancing. Despite these challenges, health education constantly strives to implement the best available technologies for effective, deep student learning. Virtual reality (VR) and augmented reality (AR) in the dental pre-clinical stage may help stimulate students to better understand the foundational material prescribed in the curriculum. Most visual material available to students is still mainly based on 2D graphics. Thus, this study will attempt to evaluate students' perceptions of implementing VR/AR technologies in the learning setting. Methods: A single-group pretest-posttest design will be implemented in which students will be exposed to VR/AR and fill out two questionnaires, one before and one after the exposure. Conclusions: This project is intended to start once institutional ethical approval is obtained. It is expected that the analysis from the current project will provide recommendations to improve the pre-clinical experience in the students' academic curriculum. The recommendations will be provided in the form of at least three scientific publications, one for each subject area intended to be evaluated (i.e., head and neck anatomy, dental anatomy, and removable prosthodontics).
Affiliation(s)
- Kelvin I. Afrashtehfar: Department of Clinical Sciences, College of Dentistry, Ajman University, Ajman City, PO Box 346, United Arab Emirates; Department of Reconstructive Dentistry & Gerodontology, School of Dental Medicine, Faculty of Medicine, Universität Bern, 3010 Bern, Switzerland
- Jing-Wen Yang: Department of Prosthodontics, Peking University School and Hospital of Stomatology, National Engineering Laboratory for Digital and Material Technology of Stomatology, Research Center of Engineering and Technology for Digital Dentistry of Ministry of Health, Beijing Key Laboratory of Digital Stomatology, Beijing, 100081, China
- A. Al-Sammarraie: Department of Clinical Sciences, College of Dentistry, Ajman University, Ajman City, PO Box 346, United Arab Emirates
- Hui Chen: Division of Restorative Dental Sciences, Faculty of Dentistry, The University of Hong Kong, Prince Philip Dental Hospital, Sai Ying Pun, Hong Kong SAR, China
- Musab H. Saeed: Department of Clinical Sciences, College of Dentistry, Ajman University, Ajman City, PO Box 346, United Arab Emirates
22
Wenk N, Penalver-Andres J, Buetler KA, Nef T, Müri RM, Marchal-Crespo L. Effect of immersive visualization technologies on cognitive load, motivation, usability, and embodiment. Virtual Real 2023; 27:307-331. PMID: 36915633. PMCID: PMC9998603. DOI: 10.1007/s10055-021-00565-8.
Abstract
Virtual reality (VR) is a promising tool to promote motor (re)learning in healthy users and brain-injured patients. However, in current VR-based motor training, movements that users perform in three-dimensional space are usually visualized on computer screens, televisions, or projection systems, which lack depth cues (2D screens) and thus display information using only monocular depth cues. The reduced depth cues and the visuospatial transformation from movements performed in three-dimensional space to their two-dimensional, indirect visualization on a 2D screen may add cognitive load, reducing VR usability, especially in users with cognitive impairments. These 2D screens might further reduce learning outcomes if they limit users' motivation and embodiment, factors previously associated with better motor performance. The goal of this study was to evaluate the potential benefits of more immersive technologies using head-mounted displays (HMDs). As a first step towards potential clinical implementation, we ran an experiment with 20 healthy participants who simultaneously performed a 3D motor reaching task and a cognitive counting task using: (1) an (immersive) VR (IVR) HMD, (2) an augmented reality (AR) HMD, and (3) a computer screen (2D screen). In a previous analysis, we reported improved movement quality when movements were visualized with IVR rather than a 2D screen. Here, we present results from the analysis of questionnaires evaluating whether the visualization technology impacted users' cognitive load, motivation, technology usability, and embodiment. Reports on cognitive load did not differ across visualization technologies. However, IVR was more motivating and usable than AR and the 2D screen, and both IVR and AR reached higher embodiment levels than the 2D screen. Our results support our previous finding that IVR HMDs seem to be more suitable than the common 2D screens employed in VR-based therapy when training 3D movements. For AR, it is still unknown whether the absence of benefit over the 2D screen is due to the visualization technology per se or to technical limitations specific to the device.
Affiliation(s)
- N. Wenk: Motor Learning and Neurorehabilitation Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- J. Penalver-Andres: Motor Learning and Neurorehabilitation Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- K. A. Buetler: Motor Learning and Neurorehabilitation Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- T. Nef: Gerontechnology & Rehabilitation, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- R. M. Müri: Gerontechnology & Rehabilitation, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland; Department of Neurology, University Neurorehabilitation, University Hospital Bern (Inselspital), University of Bern, Bern, Switzerland
- L. Marchal-Crespo: Motor Learning and Neurorehabilitation Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland; Department of Cognitive Robotics, Delft University of Technology, Delft, The Netherlands
23
Cerci P, Kendirlinan R, Dalgıç CT. The perspective of allergy and immunology specialists on the innovations of metaverse: A survey study. Allergol Immunopathol (Madr) 2023; 51:186-193. PMID: 37169577. DOI: 10.15586/aei.v51i3.829.
Abstract
BACKGROUND New technologies have resulted in dramatic shifts in the field of medicine, and it stands to reason that the metaverse will also affect the practice of allergy and immunology. This study aimed to determine the attitudes of allergists and raise awareness about metaverse applications in allergy and immunology. METHODS A nationwide survey-based study was conducted in Turkey. First, a 28-item questionnaire was developed and sent to Turkish allergists. After completing the first questionnaire, the participants were asked to watch a 5-min informative video about the metaverse. Lastly, a second survey was conducted to evaluate changes in the participants' views. RESULTS A total of 148 allergists in Turkey participated in the survey. After watching the video, there was a significant increase in the importance that participants attributed to the use of virtual reality and augmented reality applications in the field of immunology and allergy (P < 0.05). Additionally, there was a significant increase in the percentage of participants who thought that metaverse applications could be integrated into the existing system and who said that this possibility excited them (P < 0.05). There was also a significant increase in the percentage of participants who thought this innovative technology could be helpful in patient examination, student and physician education, allergy testing, and patient education (P < 0.05). CONCLUSIONS Our results demonstrate that providing information to professionals working in the field can positively influence physicians' views on the potential of the metaverse as a valuable tool in the field of immunology and allergy.
Affiliation(s)
- Pamir Cerci: Division of Immunology and Allergy, Department of Internal Medicine, Eskisehir City Hospital, Eskisehir, Turkey
- Resat Kendirlinan: Division of Immunology and Allergy, Department of Chest Diseases, Izmir Atatürk Education and Research Hospital, Izmir, Turkey
- Ceyda Tunakan Dalgıç: Division of Immunology and Allergy, Department of Internal Medicine, Ege University Medical Faculty, Izmir, Turkey
24
Mulyani EY, Jus’at I, Sumaedi S. The effect of Augmented-Reality media-based health education on healthy lifestyle knowledge, attitude, and healthy lifestyle behaviors among pregnant women during COVID-19 pandemic in Jakarta, Indonesia. Digit Health 2023; 9:20552076231167255. PMID: 37051566. PMCID: PMC10084582. DOI: 10.1177/20552076231167255.
Abstract
Pregnancy is a critical period, and pregnant women need to adopt healthy lifestyle behaviors to ensure good fetal development. During the COVID-19 pandemic, Augmented Reality (AR) media could be used in health education for pregnant women; however, there is a lack of research investigating the effect of AR media in this setting. Therefore, this research aimed to investigate the impact of AR media use on healthy lifestyle knowledge, attitude, and behaviors among pregnant women during the COVID-19 pandemic. This longitudinal cohort study involved 86 pregnant women aged 18-45 years. The subjects received health education interventions using AR media for 5 months. Data were collected pre- and post-intervention through a questionnaire survey. Changes in the subjects' healthy lifestyle knowledge, attitude, and behaviors were analyzed using the t-test. The results show that the use of AR media in health education significantly improved the subjects' scores for healthy lifestyle knowledge (5.0 ± 10.9; p < .05) and behaviors (9.7 ± 17.5; p < .05). However, the score for attitude did not improve significantly (0.3 ± 7.1; p ≥ .05). These results provide evidence of the importance of using AR media in health education for pregnant women during the COVID-19 pandemic.
Affiliation(s)
- Erry Y Mulyani
- Department of Nutritional Science, Faculty of Health Sciences, Universitas Esa Unggul, Jakarta, Indonesia
- Erry Y Mulyani, Department of Nutritional Science, Faculty of Health Sciences, Universitas Esa Unggul, Jl Arjuna Utara Tol Tomang, Kebon Jeruk Jakarta Barat, Indonesia.
- Idrus Jus’at
- Department of Nutritional Science, Faculty of Health Sciences, Universitas Esa Unggul, Jakarta, Indonesia
- Sik Sumaedi
- Quality Management Research Group, Research Center for Testing Technology and Standards, National Research and Innovation Agency (BRIN), South Tangerang, Indonesia
25
Kögl FV, Léger É, Haouchine N, Torio E, Juvekar P, Navab N, Kapur T, Pieper S, Golby A, Frisken S. A Tool-free Neuronavigation Method based on Single-view Hand Tracking. Comput Methods Biomech Biomed Eng Imaging Vis 2022; 11:1307-1315. [PMID: 37457380 PMCID: PMC10348700 DOI: 10.1080/21681163.2022.2163428]
Abstract
This work presents a novel tool-free neuronavigation method that can be used with a single commodity RGB camera. Compared with freehand craniotomy placement methods, the proposed system is more intuitive and less error prone. It also has several advantages over standard neuronavigation platforms. First, it has a much lower cost, since it does not require an optical tracking camera or electromagnetic field generator, typically the most expensive parts of a neuronavigation system, making it far more accessible. Second, it requires minimal setup, so it can be performed at the bedside and in circumstances where a standard neuronavigation system is impractical. The system relies on machine-learning-based hand pose estimation that acts as a proxy for optical tool tracking, enabling a 3D-3D pre-operative to intra-operative registration. Qualitative assessment from clinical users showed that the concept is clinically relevant. Quantitative assessment showed that, on average, a target registration error (TRE) of 1.3 cm can be achieved. Furthermore, the system is framework-agnostic, so future improvements to hand-tracking frameworks would translate directly into higher accuracy.
Affiliation(s)
- Fryderyk Victor Kögl
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Étienne Léger
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Nazim Haouchine
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Erickson Torio
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Parikshit Juvekar
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Nassir Navab
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Whiting School of Engineering, Johns Hopkins University, Baltimore, USA
- Tina Kapur
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Steve Pieper
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Isomics, Inc., Cambridge, MA, USA
- Alexandra Golby
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Sarah Frisken
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
26
Navab N, Martin-Gomez A, Seibold M, Sommersperger M, Song T, Winkler A, Yu K, Eck U. Medical Augmented Reality: Definition, Principle Components, Domain Modeling, and Design-Development-Validation Process. J Imaging 2022; 9:jimaging9010004. [PMID: 36662102 PMCID: PMC9866223 DOI: 10.3390/jimaging9010004]
Abstract
Three decades after the first work on Medical Augmented Reality (MAR) was presented to the international community, and ten years after the deployment of the first MAR solutions into operating rooms, its exact definition, basic components, systematic design, and validation still lack a detailed discussion. This paper defines the basic components of any Augmented Reality (AR) solution and extends them to exemplary Medical Augmented Reality Systems (MARS). We use some of the original MARS applications developed at the Chair for Computer Aided Medical Procedures, deployed into medical schools for teaching anatomy and into operating rooms for telemedicine and surgical guidance over the last decades, to identify the corresponding basic components. The paper does not discuss all past or existing solutions; rather, it aims to define the principal components, discuss the particular domain modeling for MAR and its design-development-validation process, and provide exemplary cases drawn from past in-house developments of such solutions.
Affiliation(s)
- Nassir Navab
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Alejandro Martin-Gomez
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Matthias Seibold
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, CH-8008 Zurich, Switzerland
- Michael Sommersperger
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Tianyu Song
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Alexander Winkler
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Department of General, Visceral, and Transplant Surgery, Ludwig-Maximilians-University Hospital, DE-80336 Munich, Germany
- Kevin Yu
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- medPhoton GmbH, AT-5020 Salzburg, Austria
- Ulrich Eck
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
27
Zhang G, Bartels J, Martin-Gomez A, Armand M. Towards Reducing Visual Workload in Surgical Navigation: Proof-of-concept of an Augmented Reality Haptic Guidance System. Comput Methods Biomech Biomed Eng Imaging Vis 2022; 11:1073-1080. [PMID: 38487569 PMCID: PMC10938944 DOI: 10.1080/21681163.2022.2152372]
Abstract
The integration of navigation capabilities into the operating room has enabled surgeons to take on more precise procedures guided by a pre-operative plan. Traditionally, navigation information based on this plan is presented on monitors in the surgical theater, which forces the surgeon to look away from the surgical area frequently. Alternative technologies, such as augmented reality, have enabled surgeons to visualize navigation information in situ. However, burdening the visual field with additional information can be distracting. In this work, we propose integrating haptic feedback into a surgical tool handle to enable surgical guidance. This reduces the amount of visual information, freeing surgeons to maintain visual attention on the patient and the surgical site. To investigate the feasibility of this guidance paradigm, we conducted a pilot study with six subjects. Participants traced paths, pinpointed locations, and matched alignments with a mock surgical tool featuring a novel haptic handle. We collected quantitative data, tracking users' accuracy and time to completion as well as subjective cognitive load. Our results show that haptic feedback can guide participants using a tool to sub-millimeter and sub-degree accuracy with little training. Participants were able to match a location with an average error of 0.82 mm, desired pivot alignments with an average error of 0.83°, and desired rotations to 0.46°.
Affiliation(s)
- Gesiren Zhang
- Biomechanical- and Image-Guided Surgical Systems (BIGSS) Lab, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Jan Bartels
- Biomechanical- and Image-Guided Surgical Systems (BIGSS) Lab, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Alejandro Martin-Gomez
- Biomechanical- and Image-Guided Surgical Systems (BIGSS) Lab, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Mehran Armand
- Biomechanical- and Image-Guided Surgical Systems (BIGSS) Lab, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, MD, USA
28
Sommer F, Hussain I, Kirnaz S, Goldberg J, McGrath L, Navarro-Ramirez R, Waterkeyn F, Schmidt F, Gadjradj PS, Härtl R. Safety and Feasibility of Augmented Reality Assistance in Minimally Invasive and Open Resection of Benign Intradural Extramedullary Tumors. Neurospine 2022; 19:501-512. [PMID: 36203278 PMCID: PMC9537853 DOI: 10.14245/ns.2244222.111]
Abstract
OBJECTIVE Surgical resection of benign intradural extramedullary tumors (BIETs) is effective for appropriately selected patients. Minimally invasive surgical (MIS) techniques have been described for successful resection of BIETs while minimizing soft tissue injury. Augmented reality (AR) is a promising new technology that allows accurate intraoperative localization from the skin through the intradural compartment. We present a case series evaluating the timing, steps, and accuracy with which this technology can enhance BIET resection. METHODS A protocol for MIS and open AR-guided BIET resection was developed and applied to determine feasibility. The tumor is marked on diagnostic magnetic resonance imaging (MRI) using AR software. Intraoperatively, the planning MRI is fused with the intraoperative computed tomography. The position and size of the tumor are projected into the surgical microscope, directly into the surgeon's field of view. Intraoperative orientation is performed exclusively via navigation and AR projection. Demographic and perioperative factors were collected. RESULTS Eight patients were enrolled. The average operative time was 128 ± 8 minutes for MIS cases and 206 ± 55 minutes for open cases. Estimated intraoperative blood loss was 97 ± 77 mL in MIS and 240 ± 206 mL in open procedures. AR tumor location and margins were considered sufficiently precise by the surgeon in every case. Neither correction of the approach trajectory nor ultrasound assistance to localize the tumor was necessary in any case. No intraoperative complications were observed. CONCLUSION Current findings suggest that AR may be a feasible technique for tumor localization in MIS and open resection of benign spinal extramedullary tumors.
Affiliation(s)
- Fabian Sommer
- Department of Neurosurgery, Weill Cornell Medicine, New York Presbyterian Hospital/Och Spine, New York, NY, USA
- Ibrahim Hussain
- Department of Neurosurgery, Weill Cornell Medicine, New York Presbyterian Hospital/Och Spine, New York, NY, USA
- Sertac Kirnaz
- Department of Neurosurgery, Weill Cornell Medicine, New York Presbyterian Hospital/Och Spine, New York, NY, USA
- Jacob Goldberg
- Department of Neurosurgery, Weill Cornell Medicine, New York Presbyterian Hospital/Och Spine, New York, NY, USA
- Lynn McGrath
- Department of Neurosurgery, Weill Cornell Medicine, New York Presbyterian Hospital/Och Spine, New York, NY, USA
- Rodrigo Navarro-Ramirez
- Department of Neurosurgery, Weill Cornell Medicine, New York Presbyterian Hospital/Och Spine, New York, NY, USA
- Francois Waterkeyn
- Department of Neurosurgery, Weill Cornell Medicine, New York Presbyterian Hospital/Och Spine, New York, NY, USA
- Franziska Schmidt
- Department of Neurosurgery, Weill Cornell Medicine, New York Presbyterian Hospital/Och Spine, New York, NY, USA
- Pravesh Shankar Gadjradj
- Department of Neurosurgery, Weill Cornell Medicine, New York Presbyterian Hospital/Och Spine, New York, NY, USA
- Roger Härtl
- Department of Neurosurgery, Weill Cornell Medicine, New York Presbyterian Hospital/Och Spine, New York, NY, USA
- Corresponding Author: Roger Härtl, Department of Neurosurgery, New York-Presbyterian Hospital, 525 E 68th Street, Box 99, New York, New York 10065, USA
29
Koulouris D, Gallos P, Menychtas A, Maglogiannis I. Exploiting Augmented Reality and Computer Vision for Healthcare Education: The Case of Pharmaceutical Substances Visualization and Information Retrieval. Stud Health Technol Inform 2022; 298:87-91. [PMID: 36073462 DOI: 10.3233/shti220913]
Abstract
Augmented Reality (AR) is already used as a primary visualization and user-interaction tool in several scientific and business areas. At the same time, new AR technologies and frameworks considerably facilitate both the development of innovative applications and their wide adoption in many domains of everyday life. In healthcare, AR solutions make use of mobile or wearable devices and glasses to support, among other things, education and the training of healthcare professionals. This paper presents a prototype mHealth app for education, which uses AR and computer vision technologies to recognize pharmaceutical substances on drug packaging. The conceptual design of the system includes three main components, responsible for (a) text recognition, (b) drug identification, and (c) AR operations for interactivity. The prototype application is available on Android and iOS and has been evaluated in real-world scenarios. The camera and screen of the mobile phone handle the text recognition and AR operations, which eliminates the need for special equipment, while PubChem and 3D model databases provide the assets required for drug identification and AR visualizations. The results highlight the value of AR for educational purposes, especially when combined with advanced image recognition technologies to build interactive AR encyclopedias.
Affiliation(s)
- Dionysios Koulouris
- Computational Biomedicine Research Lab, Department of Digital Systems, University of Piraeus, Greece
- Parisis Gallos
- Computational Biomedicine Research Lab, Department of Digital Systems, University of Piraeus, Greece
- Andreas Menychtas
- Computational Biomedicine Research Lab, Department of Digital Systems, University of Piraeus, Greece
- Ilias Maglogiannis
- Computational Biomedicine Research Lab, Department of Digital Systems, University of Piraeus, Greece
30
Jeffers JM, Schreurs BA, Dean JL, Scott B, Canares T, Tackett S, Smith B, Billings E, Billioux V, Sampathkumar HD, Kleinman K. Paediatric chest compression performance improves via novel augmented-reality cardiopulmonary resuscitation feedback system: A mixed-methods pilot study in a simulation-based setting. Resusc Plus 2022; 11:100273. [PMID: 35844631 PMCID: PMC9283661 DOI: 10.1016/j.resplu.2022.100273]
Abstract
Aim More than 20,000 children experience a cardiac arrest event each year in the United States, and most do not survive. High-quality cardiopulmonary resuscitation (CPR) has been associated with improved outcomes, yet adherence to guidelines is poor. We developed and tested an augmented reality head-mounted display chest compression (CC) feedback system (AR-CPR) designed to provide real-time CC feedback and guidance. Methods We conducted an unblinded randomized crossover simulation-based study to determine whether AR-CPR changes a user's CC performance. A convenience sample of healthcare providers who perform CC on children was included. Subjects performed three two-minute cycles of CC during a simulated 18-minute paediatric cardiac arrest and were randomized to use AR-CPR in either the second or third CC cycle. Afterwards, subjects participated in a qualitative interview about their experience with AR-CPR and offered criticisms and suggestions for future development. Results There were 34 subjects recruited: 16 were randomly assigned to have AR-CPR in cycle two (Group A) and 18 in cycle three (Group B). There were no between-group differences in CC performance in cycle one (baseline). In cycle two, subjects in Group A had 73% (SD 18%) perfect CC epochs compared to 17% (SD 26%) in Group B (p < 0.001). Overall, subjects enjoyed using AR-CPR and felt it improved their CC performance. Conclusion This novel AR-CPR feedback system produced CC performance significantly closer to CC guidelines. Numerous hardware, software, and user-interface improvements were made during this pilot study.
Affiliation(s)
- Justin M. Jeffers
- Department of Paediatrics, The Johns Hopkins University, Bloomberg Children’s Center, 1800 Orleans St., Baltimore, MD 21287, United States
- Corresponding author at: Bloomberg Children’s Center, 1800 Orleans St, Suite G-1509, United States.
- Blake A. Schreurs
- The Johns Hopkins University Applied Physics Laboratory, LLC, The Johns Hopkins University, 11100 Johns Hopkins Rd, Laurel, MD 20723, United States
- James L. Dean
- The Johns Hopkins University Applied Physics Laboratory, LLC, The Johns Hopkins University, 11100 Johns Hopkins Rd, Laurel, MD 20723, United States
- Brandon Scott
- The Johns Hopkins University Applied Physics Laboratory, LLC, The Johns Hopkins University, 11100 Johns Hopkins Rd, Laurel, MD 20723, United States
- Therese Canares
- Department of Paediatrics, The Johns Hopkins University, Bloomberg Children’s Center, 1800 Orleans St., Baltimore, MD 21287, United States
- Sean Tackett
- Biostatistics, Epidemiology, and Data Management Core, Johns Hopkins Bayview Medical Center, Baltimore, MD 21224, United States
- Brittany Smith
- Department of Paediatrics, The Johns Hopkins University, Bloomberg Children’s Center, 1800 Orleans St., Baltimore, MD 21287, United States
- Emma Billings
- Department of Paediatrics, The Johns Hopkins University, Bloomberg Children’s Center, 1800 Orleans St., Baltimore, MD 21287, United States
- Veena Billioux
- Department of Paediatrics, The Johns Hopkins University, Bloomberg Children’s Center, 1800 Orleans St., Baltimore, MD 21287, United States
- Harshini D. Sampathkumar
- Department of International Health, Johns Hopkins University School of Public Health, 615 N Wolfe St, Baltimore, MD 21205, United States
- Keith Kleinman
- Department of Paediatrics, The Johns Hopkins University, Bloomberg Children’s Center, 1800 Orleans St., Baltimore, MD 21287, United States
31
Coughlan JM, Biggs B, Shen H. Non-Visual Access to an Interactive 3D Map. Comput Help People Spec Needs 2022; 13341:253-260. [PMID: 36108327 PMCID: PMC9467469 DOI: 10.1007/978-3-031-08648-9_29]
Abstract
Maps are indispensable for helping people learn about unfamiliar environments and plan trips. While tactile (2D) and 3D maps offer non-visual map access to people who are blind or visually impaired (BVI), this access is greatly enhanced by adding interactivity to the maps: when the user points at a feature of interest on the map, the name and other information about the feature are read aloud. We explore how the use of an interactive 3D map of a playground, containing over seventy play structures and other features, affects spatial learning and cognition. Specifically, we performed experiments in which four blind participants answered questions about the map to evaluate their grasp of three types of spatial knowledge: landmark, route, and survey. The results of these experiments demonstrate that participants are able to acquire this knowledge, most of which would be inaccessible without the interactivity of the map.
Affiliation(s)
- James M Coughlan
- The Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA
- Brandon Biggs
- The Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA
- Georgia Institute of Technology, Atlanta, GA, USA
- Huiying Shen
- The Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA
32
Neves Lopes V, Dantas I, Barbosa JP, Barbosa J. Telestration in the Teaching of Basic Surgical Skills: A Randomized Trial. J Surg Educ 2022; 79:1031-1042. [PMID: 35331681 DOI: 10.1016/j.jsurg.2022.02.013]
Abstract
OBJECTIVE To evaluate how an affordable course using telestration with augmented reality compares with the traditional teaching of basic surgical skills. DESIGN Prospective, randomized and blinded study. SETTING Faculty of Medicine of Porto University. PARTICIPANTS AND METHODS Twenty medical students without any experience in basic surgical skills were randomized into two learning groups: telestration and traditional teaching (on-site mentoring). Five types of sutures were taught: the single interrupted, the cruciate mattress, the horizontal mattress, the vertical mattress and the simple continuous sutures. Data were obtained on the time taken to learn each technique and to perform each exercise without any support from the faculty, the tension of the suture, the quality of the procedure using a modified Objective Structured Assessment of Technical Skills, and participants' answers to a Likert questionnaire regarding their learning experience, confidence, and self-evaluation. RESULTS Trainees in the telestration group were globally faster when performing independently (1393.40 [SD 288.89] vs 1679.00 [SD 328.22] seconds, p = 0.04), particularly during the cruciate mattress suture (235.50 [SD 61.81] vs 290.00 [SD 68.77] seconds, p = 0.05) and the simple continuous suture (492.40 [SD 87.49] vs 630.30 [SD 132.34] seconds, p = 0.01). The time needed for students to learn the procedures was similar between the groups. There were also no statistically significant differences in the quality of the surgical gesture, tension of the suture, self-evaluation or confidence. CONCLUSIONS A basic surgical skills course using telestration through a head-mounted device with augmented reality capabilities can be a viable alternative to traditional teaching, considering time and quality of the gesture. Though costs can discourage the use of this technology in basic procedures, free software may make it an affordable option in the context of distance learning.
Affiliation(s)
- Vítor Neves Lopes
- Department of General Surgery, University Hospital Center of São João, Porto, Portugal; Faculty of Medicine, University of Porto, Porto, Portugal
- Isabel Dantas
- Faculty of Medicine, University of Porto, Porto, Portugal
- José Barbosa
- Department of General Surgery, University Hospital Center of São João, Porto, Portugal; Faculty of Medicine, University of Porto, Porto, Portugal
33
Othman SB, Zgaya H, Vasseur M, Décaudin B, Odou P, Hammadi S. Introducing Augmented Reality Technique to Enhance the Preparation Circuit of Injectable Chemotherapy Drugs. Stud Health Technol Inform 2022; 290:474-478. [PMID: 35673060 DOI: 10.3233/shti220121]
Abstract
Chemotherapy preparations are often complex and subject to a strict regulatory context, and existing control methods are often limited to Double Visual Control (DVC). In this paper, the preparation circuit of chemotherapy drugs is evaluated through data collection and statistical analysis in order to highlight the difficulties encountered. The results regarding preparation and control times and the number of task interruptions highlight the unreliability of DVC and its impact on processing time. As a solution, we propose a decision support system, "Smart Prep", based on Augmented Reality (AR), co-developed and commercialized by the Faculty of Pharmacy of Lille, Ecole Centrale de Lille and the company Computer Engineering. This system supports step-by-step preparation of chemotherapy drugs, provides traceability of the preparation steps, and reduces task interruptions.
Affiliation(s)
- Sarah Ben Othman
- Univ. Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France
- Hayfa Zgaya
- Univ. Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France
- Michèle Vasseur
- Lille University Hospital - Institut de Pharmacie, F-59000 Lille, France
- Pascal Odou
- Univ. Lille, CHU Lille, ULR 7365 - GRITA, F-59000 Lille, France
- Slim Hammadi
- Univ. Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France
34
Abstract
BACKGROUND Postural imbalance can be used for the early detection of age-related diseases or for monitoring the course of disease treatment; in monitoring especially, frequent balance measurement is crucial. Currently, this is mainly done through regular in-person examinations by a physician, and feedback between examinations is often missing. OBJECTIVES This paper proposes mBalance, a mobile application that uses the Romberg test to detect postural imbalance. mBalance provides a camera-based, low-cost approach to measuring imbalance frequently at home using mobile devices. METHODS Imbalance-detection accuracy and usability were evaluated in two separate studies with 31 and 30 participants, respectively. RESULTS mBalance correctly detected imbalance with a sensitivity of 80% and a specificity of 87%. The study found good usability with no significant problems. CONCLUSION Overall, this study addresses postural imbalance detection by digitizing a validated balance test into an easy-to-use mobile application.
Affiliation(s)
- Lara Marie Reimer
- Technical University of Munich, Garching, Germany
- University Hospital Bonn, Bonn, Germany
35
Tan SY, Tay NNW. Integrating augmented reality technology in education: vector personal computer augmented reality. F1000Res 2022; 10:987. [PMID: 37767360 PMCID: PMC10520515 DOI: 10.12688/f1000research.72948.2]
Abstract
Background: Educators often face difficulties in explaining abstract concepts such as vectors, and during the ongoing coronavirus disease 2019 (COVID-19) pandemic, fully online classes have posed additional challenges to conventional teaching methods. Visualizing a vector concept in more than 2 dimensions is a particular problem: although Microsoft PowerPoint can integrate animation, the illustration remains 2-dimensional. Augmented reality (AR) technology is recommended to aid educators and students in teaching and learning vectors, namely via a vector personal computer augmented reality system (VPCAR), to fulfil the demand for tools that support the learning and teaching of vectors. Methods: A PC learning module for vectors was developed in a 3-dimensional coordinate system using AR technology. Purposive sampling was applied to obtain feedback from educators and students in Malaysia through an online survey. The supportiveness of VPCAR was rated on six items (attractiveness, easiness, visualization, conceptual understanding, inspiration and helpfulness) using 5-point Likert-type scales, and findings are presented descriptively and graphically. Results: Both students and educators adapted to the new technology easily and provided strongly positive feedback, showing a left-skewed, J-shaped distribution for each measurement item. The distributions differed significantly between students and educators, with educators reporting higher supportiveness than students. This study introduced a PC learning module rather than a mobile app, as students mostly use laptops to attend online classes and educators also use other IT tools in their teaching. Conclusions: Based on these findings, VPCAR shows good prospects for supporting educators and students in online teaching and learning. However, the findings may not be generalizable to all students and educators in Malaysia, as purposive sampling was applied. Further studies may focus on government-funded schools using the newly developed VPCAR system, which is the novelty of this study.
Collapse
Affiliation(s)
- Sin Yin Tan
- Faculty of Information Science and Technology, Multimedia University, Melaka, 75450, Malaysia
- Noel Nuo Wi Tay
- Faculty of Information Science and Technology, Multimedia University, Melaka, 75450, Malaysia
36
Ryan GV, Callaghan S, Rafferty A, Higgins MF, Mangina E, McAuliffe F. Learning Outcomes of Immersive Technologies in Health Care Student Education: Systematic Review of the Literature. J Med Internet Res 2022; 24:e30082. [PMID: 35103607 PMCID: PMC8848248 DOI: 10.2196/30082]
Abstract
BACKGROUND There is a lack of evidence in the literature regarding the learning outcomes of immersive technologies as educational tools for teaching university-level health care students. OBJECTIVE The aim of this review is to assess the learning outcomes of immersive technologies compared with traditional learning modalities with regard to knowledge and the participants' learning experience in medical, midwifery, and nursing preclinical university education. METHODS A systematic review was conducted according to the Cochrane Collaboration guidelines. Randomized controlled trials comparing traditional learning methods with virtual, augmented, or mixed reality for the education of medicine, nursing, or midwifery students were evaluated. The identified studies were screened by 2 authors independently. Disagreements were discussed with a third reviewer. The quality of evidence was assessed using the Medical Education Research Study Quality Instrument (MERSQI). The review protocol was registered with PROSPERO (International Prospective Register of Systematic Reviews) in April 2020. RESULTS Of 15,627 studies, 29 (0.19%) randomized controlled trials (N=2722 students) were included and evaluated using the MERSQI tool. Knowledge gain was found to be equal when immersive technologies were compared with traditional learning modalities; however, the learning experience increased with immersive technologies. The mean MERSQI score was 12.64 (SD 1.6), the median was 12.50, and the mode was 13.50. Immersive technology was predominantly used to teach clinical skills (15/29, 52%), and virtual reality (22/29, 76%) was the most commonly used form of immersive technology. Knowledge was the primary outcome in 97% (28/29) of studies. Approximately 66% (19/29) of studies used validated instruments and scales to assess secondary learning outcomes, including satisfaction, self-efficacy, engagement, and perceptions of the learning experience. 
Of the 29 studies, 19 (66%) included medical students (1706/2722, 62.67%), 8 (28%) included nursing students (727/2722, 26.71%), and 2 (7%) included both medical and nursing students (289/2722, 10.62%). There were no studies involving midwifery students. The studies were based on the following disciplines: anatomy, basic clinical skills and history-taking skills, neurology, respiratory medicine, acute medicine, dermatology, communication skills, internal medicine, and emergency medicine. CONCLUSIONS Virtual, augmented, and mixed reality play an important role in the education of preclinical medical and nursing university students. When compared with traditional educational modalities, the learning gain is equal with immersive technologies. Learning outcomes such as student satisfaction, self-efficacy, and engagement all increase with the use of immersive technology, suggesting that it is an optimal tool for education.
Affiliation(s)
- Grace V Ryan
- Perinatal Research Centre, Obstetrics and Gynaecology, School of Medicine, University College Dublin, Dublin, Ireland
- Shauna Callaghan
- Perinatal Research Centre, Obstetrics and Gynaecology, School of Medicine, University College Dublin, Dublin, Ireland
- Anthony Rafferty
- Perinatal Research Centre, Obstetrics and Gynaecology, School of Medicine, University College Dublin, Dublin, Ireland
- Mary F Higgins
- Perinatal Research Centre, Obstetrics and Gynaecology, School of Medicine, University College Dublin, Dublin, Ireland
- Eleni Mangina
- School of Computer Science, University College Dublin, Dublin, Ireland
- Fionnuala McAuliffe
- Perinatal Research Centre, Obstetrics and Gynaecology, School of Medicine, University College Dublin, Dublin, Ireland
37
Makarov I, Bakhanova M, Nikolenko S, Gerasimova O. Self-supervised recurrent depth estimation with attention mechanisms. PeerJ Comput Sci 2022; 8:e865. [PMID: 35494794 PMCID: PMC9044223 DOI: 10.7717/peerj-cs.865]
Abstract
Depth estimation is an essential task for many computer vision applications, especially in autonomous driving, where safety is paramount. Depth can be estimated not only with traditional supervised learning but also via a self-supervised approach that relies on camera motion and does not require ground-truth depth maps. Recently, major improvements have been introduced to make self-supervised depth prediction more precise. However, most existing approaches still focus on single-frame depth estimation, even in the self-supervised setting. Since most methods can operate on frame sequences, we believe that the quality of current models can be significantly improved with the help of information about previous frames. In this work, we study different ways of integrating recurrent blocks and attention mechanisms into a common self-supervised depth estimation pipeline. We propose a set of modifications that utilize temporal information from previous frames and provide new neural network architectures for monocular depth estimation in a self-supervised manner. Our experiments on the KITTI dataset show that the proposed modifications can be an effective tool for exploiting temporal information in a depth prediction pipeline.
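The self-supervised signal referred to in this abstract is typically a photometric reprojection error between a target frame and a neighboring frame warped via the predicted depth and camera motion. As an illustrative sketch only (not the authors' implementation), the loss is commonly a weighted mix of an SSIM-style structural term and an L1 term; the version below uses global image statistics rather than the usual local windows to keep the example short:

```python
import numpy as np

def photometric_loss(target, reprojected, alpha=0.85, eps=1e-6):
    """Simplified photometric error for self-supervised depth training:
    a weighted mix of a (global, windowless) SSIM term and L1.
    `target` and `reprojected` are float arrays with values in [0, 1]."""
    l1 = np.abs(target - reprojected).mean()
    mu_t, mu_r = target.mean(), reprojected.mean()
    var_t, var_r = target.var(), reprojected.var()
    cov = ((target - mu_t) * (reprojected - mu_r)).mean()
    ssim = ((2 * mu_t * mu_r + eps) * (2 * cov + eps)) / (
        (mu_t ** 2 + mu_r ** 2 + eps) * (var_t + var_r + eps)
    )
    # SSIM is in [-1, 1]; (1 - ssim) / 2 maps it to a [0, 1] dissimilarity.
    return alpha * (1.0 - ssim) / 2.0 + (1.0 - alpha) * l1

frame = np.random.default_rng(0).random((8, 8))
loss_same = photometric_loss(frame, frame)        # near zero for identical frames
loss_diff = photometric_loss(frame, 1.0 - frame)  # larger for mismatched frames
```

In a real pipeline this error would be computed per pixel over the reprojected image and minimized jointly with a depth-smoothness regularizer; the recurrent and attention blocks the paper studies change the depth network, not this loss.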
Affiliation(s)
- Ilya Makarov
- HSE University, Moscow, Russia
- Artificial Intelligence Research Institute (AIRI), Moscow, Russia
- Big Data Research Center, National University of Science and Technology MISIS, Moscow, Russia
- Sergey Nikolenko
- Steklov Institute of Mathematics at St. Petersburg, St. Petersburg, Russia
- St. Petersburg State University, St. Petersburg, Russia
38
Neidhardt A, Schneiderwind C, Klein F. Perceptual Matching of Room Acoustics for Auditory Augmented Reality in Small Rooms - Literature Review and Theoretical Framework. Trends Hear 2022; 26:23312165221092919. [PMID: 35505625 PMCID: PMC9073123 DOI: 10.1177/23312165221092919]
Abstract
For the realization of auditory augmented reality (AAR), it is important that the room acoustical properties of the virtual elements are perceived in agreement with the acoustics of the actual environment. This perceptual matching of room acoustics is the subject reviewed in this paper. Realizations of AAR that fulfill the listeners’ expectations were achieved based on pre-characterization of the room acoustics, for example, by measuring acoustic impulse responses or creating detailed room models for acoustic simulations. For future applications, the goal is to realize an online adaptation in (close to) real-time. Perfect physical matching is hard to achieve with these practical constraints. For this reason, an understanding of the essential psychoacoustic cues is of interest and will help to explore options for simplifications. This paper reviews a broad selection of previous studies and derives a theoretical framework to examine possibilities for psychoacoustical optimization of room acoustical matching.
39
Luzzi S, Giotta Lucifero A, Baldoncini M, Del Maestro M, Galzio R. Postcentral Gyrus High-Grade Glioma: Maximal Safe Anatomical Resection Guided by Augmented Reality with Fiber Tractography and Fluorescein. World Neurosurg 2021:S1878-8750(21)01917-3. [PMID: 34968755 DOI: 10.1016/j.wneu.2021.12.072]
40
Ramesh PV, Aji K, Joshua T, Ramesh SV, Ray P, Raj PM, Ramesh MK, Rajasekaran R. Immersive photoreal new-age innovative gameful pedagogy for e-ophthalmology with 3D augmented reality. Indian J Ophthalmol 2021; 70:275-280. [PMID: 34937254 PMCID: PMC8917591 DOI: 10.4103/ijo.ijo_2133_21]
Abstract
Augmented reality (AR) has come a long way from a science-fiction concept to a science-based reality. AR is a view of the real, physical world in which elements are enhanced by computer-generated input. AR is available on mobile handsets, which constitute an essential e-learning platform. The use of an e-ophthalmology platform with AR will pave the way for new-age gameful pedagogy. In this manuscript, we present a newly developed AR program named “Eye MG AR” to simplify the learning of ophthalmic concepts and to serve as a new-age immersive 3D pedagogical tool for gameful learning.
Affiliation(s)
- Prasanna V Ramesh
- Medical Officer, Department of Glaucoma and Research, Mahathma Centre of Moving Images, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- K Aji
- Optometrist, Department of Optometry and Visual Science, Mahathma Centre of Moving Images, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Tensingh Joshua
- Head of the Department and 3D Generalist, Mahathma Centre of Moving Images, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Shruthy V Ramesh
- Medical Officer, Department of Cataract and Refractive Surgery, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Prajnya Ray
- Optometrist, Department of Optometry and Visual Science, Mahathma Centre of Moving Images, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Pragash M Raj
- Consultant, Mahathma Centre of Moving Images, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Meena K Ramesh
- Head of the Department of Cataract and Refractive Surgery, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Ramesh Rajasekaran
- Chief Medical Officer, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
41
Maule L, Zanetti M, Luchetti A, Tomasin P, Dallapiccoa M, Covre N, Guandalini G, De Cecco M. Wheelchair Driving Strategies: a comparison between standard joystick and gaze-based control. Assist Technol 2021; 35:180-192. [PMID: 34871532 DOI: 10.1080/10400435.2021.2009593]
Abstract
This paper evaluates and compares the driving performance achieved with a power wheelchair using a standard joystick versus a novel gaze-based technology. The gaze-based interface, called RoboEYE, involves a novel paradigm of computer interaction that uses input from an eye tracker as a continuous control signal for wheelchair navigation. A pool of 36 subjects tested both technologies in a circuit designed around the Wheelchair Skill Test. The experimental analysis evaluated specific motion metrics and used questionnaires to collect information about perceived feelings and mental workload. The joystick proved to be the better driving interface, being more accurate and efficient than the gaze-based solution. However, the differences in driving kinematics were small and can be considered negligible from an operational point of view, so the gaze-based interface offers a driving experience similar to that achievable with the joystick. Testers reported no particular stress, fatigue, or frustration when switching from one interface to the other. These elements suggest that the proposed gaze-based solution is an appropriate alternative for a technology transition driven by a pathological change in the user's condition.
42
Abele ND, Kluth K. [Interaction-ergonomic design and compatibility of AR-supported information displays using the example of a head-mounted display for industrial set-up processes]. Z Arb Wiss 2021; 76:303-317. [PMID: 34703075 PMCID: PMC8531579 DOI: 10.1007/s41449-021-00286-3]
Abstract
In the course of technological progress, analog solutions are increasingly being replaced by digital ones. In this context, novel visualization options such as augmented reality (AR) are coming to the fore. However, interacting with AR-based information displays can lead to performance losses and increased physical and mental strain if the design is incompatible. Because of rapid technical development, existing ergonomic and user-centred guidelines as well as legal regulations can only partially support the design and usability of such systems. Using a practical example, this paper examines the extent to which a cyber-physical set-up application in the form of a head-mounted display meets the applicable interaction-ergonomic and compatibility-related standards in the context of industrial tasks and gesture-controlled, binocular AR systems. Practical relevance: The presented results play an important role for practice, as the proposed system is intended to support workers in complex set-up and assembly processes. Based on these findings, and with the help of further optimization, the aim is a process-reliable, value-adding use of AR systems in industrial environments with minimal operator strain.
Affiliation(s)
- Nils Darwin Abele
- Fachgebiet Arbeitswissenschaft/Ergonomie, Universität Siegen, Paul-Bonatz-Straße 9-11, 57076 Siegen, Germany
- Karsten Kluth
- Fachgebiet Arbeitswissenschaft/Ergonomie, Universität Siegen, Paul-Bonatz-Straße 9-11, 57076 Siegen, Germany
43
Ghaednia H, Fourman MS, Lans A, Detels K, Dijkstra H, Lloyd S, Sweeney A, Oosterhoff JHF, Schwab JH. Augmented and virtual reality in spine surgery, current applications and future potentials. Spine J 2021; 21:1617-25. [PMID: 33774210 DOI: 10.1016/j.spinee.2021.03.018]
Abstract
BACKGROUND CONTEXT The field of artificial intelligence (AI) is rapidly advancing, especially with recent improvements in deep learning (DL) techniques. Augmented reality (AR) and virtual reality (VR) are finding their place in healthcare, and spine surgery is no exception. The unique capabilities and advantages of AR and VR devices include their low cost, flexible integration with other technologies, user-friendly features, and application in navigation systems, which make them beneficial across different aspects of spine surgery. Despite the use of AR for pedicle screw placement, targeted cervical foraminotomy, bone biopsy, osteotomy planning, and percutaneous intervention, the current applications of AR and VR in spine surgery remain limited. PURPOSE The primary goal of this study was to provide spine surgeons and clinical researchers with general information about the current applications, future potential, and accessibility of AR and VR systems in spine surgery. STUDY DESIGN/SETTING We reviewed the titles of more than 250 journal papers from Google Scholar and PubMed using the search terms augmented reality, virtual reality, spine surgery, and orthopaedic, from which 89 related papers were selected for abstract review. Finally, the full text of 67 papers was analyzed and reviewed. METHODS The papers were divided into four groups: technological papers, applications in surgery, applications in spine education and training, and general applications in orthopaedics. A team of two reviewers performed the paper reviews and a thorough web search to ensure that the most updated state of the art in each of the four groups was captured in the review. RESULTS In this review we discuss the current state of the art in AR and VR hardware, their preoperative applications, and their surgical applications in spine surgery. Finally, we discuss the future potential of AR and VR and their integration with AI, robotic surgery, gaming, and wearables. CONCLUSIONS AR and VR are promising technologies that will soon become part of the standard of care in spine surgery.
44
Höhler C, Rasamoel ND, Rohrbach N, Hansen JP, Jahn K, Hermsdörfer J, Krewer C. The impact of visuospatial perception on distance judgment and depth perception in an Augmented Reality environment in patients after stroke: an exploratory study. J Neuroeng Rehabil 2021; 18:127. [PMID: 34419086 PMCID: PMC8379833 DOI: 10.1186/s12984-021-00920-5]
Abstract
BACKGROUND Augmented reality (AR)-based interventions are applied in neurorehabilitation with increasing frequency. Depth perception is required for the intended interaction within AR environments. Until now, however, it has been unclear whether patients after stroke with impaired visuospatial perception (VSP) are able to perceive depth in the AR environment. METHODS Different aspects of VSP (stereovision and spatial localization/visuoconstruction) were assessed in 20 patients after stroke (mean age: 64 ± 14 years) and 20 healthy subjects (HS, mean age: 28 ± 8 years) using clinical tests. The group of HS was recruited to assess the validity of the developed AR tasks in testing stereovision. To measure perception of holographic objects, three distance judgment tasks and one three-dimensionality task were designed. The effect of impaired stereovision on performance in each AR task was analyzed, and AR task performance was modeled on aspects of VSP using separate regression analyses for HS and for patients. RESULTS In HS, stereovision had a significant effect on performance in all AR distance judgment tasks (p = 0.021, p = 0.002, p = 0.046) and in the three-dimensionality task (p = 0.003). Individual quality of stereovision significantly predicted accuracy in each distance judgment task and was highly related to the ability to perceive holograms as three-dimensional (p = 0.001). In stroke survivors, impaired stereovision had a specific deteriorating effect on only one distance judgment task (p = 0.042), whereas the three-dimensionality task was unaffected (p = 0.317). Regression analyses confirmed that patients' quality of stereovision had no impact on AR task performance, while spatial localization/visuoconstruction significantly predicted the accuracy of distance estimation of geometric objects in two AR tasks. CONCLUSION Impairments in VSP reduce the ability to estimate distance and to perceive three-dimensionality in an AR environment. While stereovision is key for task performance in HS, spatial localization/visuoconstruction is predominant in patients. Since impairments in VSP are present after stroke, these findings might be crucial when AR is applied for neurorehabilitative treatment. To maximize therapy outcomes, the design of AR games should be adapted to patients' impaired VSP. Trial registration: The trial was not registered, as it was an observational study.
Affiliation(s)
- Chiara Höhler
- Technical University of Munich, Georg-Brauchle Ring 60/62, 80992, Munich, Germany
- Schoen Clinic Bad Aibling, Kolbermoorer Strasse 72, 83043, Bad Aibling, Germany
- Nils David Rasamoel
- Technical University of Denmark, Anker Engelunds Vej 1, 2800, Kgs. Lyngby, Denmark
- Nina Rohrbach
- Technical University of Munich, Georg-Brauchle Ring 60/62, 80992, Munich, Germany
- John Paulin Hansen
- Technical University of Denmark, Anker Engelunds Vej 1, 2800, Kgs. Lyngby, Denmark
- Klaus Jahn
- Schoen Clinic Bad Aibling, Kolbermoorer Strasse 72, 83043, Bad Aibling, Germany
- Ludwig-Maximilians University of Munich, University Hospital Grosshadern, Marchioninistrasse 15, 81377, Munich, Germany
- Joachim Hermsdörfer
- Technical University of Munich, Georg-Brauchle Ring 60/62, 80992, Munich, Germany
- Carmen Krewer
- Technical University of Munich, Georg-Brauchle Ring 60/62, 80992, Munich, Germany
- Schoen Clinic Bad Aibling, Kolbermoorer Strasse 72, 83043, Bad Aibling, Germany
45
Affiliation(s)
- Wilfredo López-Ojeda
- Veterans Affairs Mid-Atlantic Mental Illness Research, Education, and Clinical Center, and Research and Academic Affairs Service Line, W.G. Hefner Veterans Affairs Medical Center, Salisbury, N.C. (López-Ojeda, Hurley); Department of Psychiatry and Behavioral Medicine, Wake Forest School of Medicine, Winston-Salem, N.C. (López-Ojeda); Departments of Psychiatry and Radiology, Wake Forest School of Medicine, Winston-Salem, N.C. (Hurley); and Menninger Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston (Hurley)
- Robin A Hurley
- Veterans Affairs Mid-Atlantic Mental Illness Research, Education, and Clinical Center, and Research and Academic Affairs Service Line, W.G. Hefner Veterans Affairs Medical Center, Salisbury, N.C. (López-Ojeda, Hurley); Department of Psychiatry and Behavioral Medicine, Wake Forest School of Medicine, Winston-Salem, N.C. (López-Ojeda); Departments of Psychiatry and Radiology, Wake Forest School of Medicine, Winston-Salem, N.C. (Hurley); and Menninger Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston (Hurley)
46
Rad AA, Vardanyan R, Lopuszko A, Alt C, Stoffels I, Schmack B, Ruhparwar A, Zhigalov K, Zubarevich A, Weymann A. Virtual and Augmented Reality in Cardiac Surgery. Braz J Cardiovasc Surg 2021; 37:123-127. [PMID: 34236814 PMCID: PMC8973146 DOI: 10.21470/1678-9741-2020-0511]
Abstract
Virtual and augmented reality can be defined as a three-dimensional simulation of the real world that allows the user to interact with it directly. Over the years, virtual reality has gained great popularity in medicine and is currently being adopted for a wide range of purposes. Owing to its dynamic anatomical nature, the permanent drive towards decreasing invasiveness, and the striving for innovation, cardiac surgery presents a unique environment for virtual reality. Despite substantial research limitations in cardiac surgery, the current literature has shown great applicability of this technology and promising opportunities.
Affiliation(s)
- Robert Vardanyan
- Faculty of Medicine, Imperial College London, London, United Kingdom
- Aleksandra Lopuszko
- Faculty of Medicine, Barts and The London School of Medicine and Dentistry, London, United Kingdom
- Christina Alt
- Department of Dermatology, University of Duisburg-Essen, Essen, Germany
- Ingo Stoffels
- Department of Dermatology, University of Duisburg-Essen, Essen, Germany
- Bastian Schmack
- Department of Thoracic and Cardiovascular Surgery, West German Heart and Vascular Center, University of Duisburg-Essen, Essen, Germany
- Arjang Ruhparwar
- Department of Thoracic and Cardiovascular Surgery, West German Heart and Vascular Center, University of Duisburg-Essen, Essen, Germany
- Konstantin Zhigalov
- Department of Thoracic and Cardiovascular Surgery, West German Heart and Vascular Center, University of Duisburg-Essen, Essen, Germany
- Alina Zubarevich
- Department of Thoracic and Cardiovascular Surgery, West German Heart and Vascular Center, University of Duisburg-Essen, Essen, Germany
- Alexander Weymann
- Department of Thoracic and Cardiovascular Surgery, West German Heart and Vascular Center, University of Duisburg-Essen, Essen, Germany
47
Bauerfeind K, Drüke J, Schneider J, Haar A, Bendewald L, Baumann M. Navigating with Augmented Reality - How does it affect drivers' mental load? Appl Ergon 2021; 94:103398. [PMID: 33721620 DOI: 10.1016/j.apergo.2021.103398]
Abstract
Drivers have been shown to easily understand augmented reality (AR) information, and they are expected to benefit from AR information especially in ambiguous navigation tasks. This driving simulator study examined differences in mental load while navigating in an urban area with ambiguous intersection situations (N = 59). The navigation information was presented to the driver through a head-up display (HUD): either a conventional HUD or an AR display, which relates information to the surroundings. Additionally, the driver had to solve a non-driving-related task (NDRT), an auditory cognitive-spatial task. Results showed that while driving with the AR display, participants performed better in the NDRT, indicating a reduced mental load compared with the conventional HUD. Participants drove on average 3 km/h slower with the conventional HUD, showing compensatory behaviour.
Affiliation(s)
- Julia Drüke
- Volkswagen AG, Group Innovation, HMI Augmentation, Wolfsburg, Germany
- Jens Schneider
- Volkswagen AG, Technical Development, Infotainment Services, Wolfsburg, Germany
- Adrian Haar
- Volkswagen AG, Group Innovation, HMI Augmentation, Wolfsburg, Germany
- Lennart Bendewald
- Volkswagen AG, Technical Development, HMI Domain Concepts, Wolfsburg, Germany
- Martin Baumann
- University of Ulm, Institute of Psychology and Education Dept., Human Factors, Ulm, Germany
48
Iqbal H, Tatti F, Rodriguez Y Baena F. Augmented reality in robotic assisted orthopaedic surgery: A pilot study. J Biomed Inform 2021; 120:103841. [PMID: 34146717 DOI: 10.1016/j.jbi.2021.103841]
Abstract
BACKGROUND The research and development of augmented reality (AR) technologies in surgical applications has seen an evolution of the traditional user interfaces (UIs) used by clinicians when conducting robot-assisted orthopaedic surgeries. The typical UI for such systems relies on surgeons managing 3D medical imaging data in the 2D space of a touchscreen monitor located away from the operating site. Conversely, AR can provide a composite view overlaying the real surgical scene with co-located virtual holographic representations of medical data, leading to a more immersive and intuitive operator experience. MATERIALS AND METHODS This work explores the integration of AR within an orthopaedic setting by capturing and replicating the UI of an existing surgical robot within an AR head-mounted display worn by the clinician. The resulting mixed-reality workflow enabled users to simultaneously view the operating site and real-time holographic operating informatics when carrying out a robot-assisted patellofemoral arthroplasty (PFA). Ten surgeons were recruited to test the impact of the AR system on procedure completion time and operating surface roughness. RESULTS AND DISCUSSION The integration of AR did not appear to require subjects to significantly alter their surgical techniques, as demonstrated by a statistically nonsignificant mean increase in operating time (+0.778 s, p = 0.488) and no significant change in mean surface roughness (p = 0.274). Additionally, a post-operative survey indicated a positive consensus on the usability of the AR system without noticeable physical distress such as eyestrain or fatigue. CONCLUSIONS Overall, these results demonstrate a successful integration of AR technologies within the framework of an existing robot-assisted surgical platform, with no significant negative effects on two quantitative metrics of surgical performance and a positive outcome on user-centric and ergonomic evaluation criteria.
Affiliation(s)
- Hisham Iqbal
- Mechatronics in Medicine Laboratory, Imperial College London, London, UK
- Fabio Tatti
- Mechatronics in Medicine Laboratory, Imperial College London, London, UK
49
Finn E, Kuusinen J. Innovation Through Universal Design in Agile UX Software Development Teams. A Collaborative Case Study of an Under Graduate AR Tourist Guide Project. Stud Health Technol Inform 2021; 282:252-8. [PMID: 34085973 DOI: 10.3233/SHTI210401]
Abstract
The objective of this study was to design an AR tourist guide mobile app within an academic teaching framework facilitating collaborative (e.g., with external commercial partners), cooperative (i.e., with external academic experts), and user-centred design (UCD). The tourist guide app, VisitAR, is a digitized tour application that presents information in the form of landmarks and information windows. VisitAR provides a seamless real-time walking experience by using the user's location to trigger pop-up information windows while walking at Carlingford, Ireland. Application testing was completed using several usability evaluation methods: technical field testing, living-lab testing including thinking aloud, usability focus-group testing, and usability analysis. As a result, teaching UD within an experiential living lab provides a more realistic design context, addressing realistic UX and SD and allowing the deployment of potentially commercially viable solutions that address the needs of a more diverse range of end users. As part of this case study, both qualitative and quantitative data related to UX, usability, and SD from each stage of development were evaluated.
|
50
|
Vortmann LM, Knychalla J, Annerer-Walcher S, Benedek M, Putze F. Imaging Time Series of Eye Tracking Data to Classify Attentional States. Front Neurosci 2021; 15:664490. [PMID: 34121994 PMCID: PMC8193942 DOI: 10.3389/fnins.2021.664490] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Accepted: 05/03/2021] [Indexed: 12/25/2022] Open
Abstract
Several previous studies have shown that conclusions about the human mental state can be drawn from eye gaze behavior. For this reason, eye tracking recordings are suitable as input data for attentional state classifiers. In current state-of-the-art studies, the extracted eye tracking feature set usually consists of descriptive statistics about specific eye movement characteristics (i.e., fixations, saccades, blinks, vergence, and pupil dilation). We suggest an Imaging Time Series approach for eye tracking data, followed by classification using a convolutional neural network, to improve classification accuracy. We compared multiple algorithms that used the one-dimensional statistical summary feature set as input with two different implementations of the newly suggested method, on three data sets that target different aspects of attention. The results show that our two-dimensional image features with the convolutional neural network outperform the classical classifiers for most analyses, especially regarding generalization over participants and tasks. We conclude that current attentional state classifiers based on eye tracking can be optimized by adjusting the feature set while requiring less feature engineering; our future work will focus on a more detailed investigation of this approach for other scenarios and data sets.
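The Imaging Time Series idea in this abstract can be illustrated with the Gramian Angular Field, a common encoding that turns a 1-D signal (such as a gaze-coordinate trace) into an n x n image a CNN can consume. This NumPy sketch shows the general technique; it is not necessarily the exact variant the authors used.

```python
import numpy as np

def gramian_angular_field(x, summation=True):
    """Encode a 1-D time series as a Gramian Angular Field image."""
    x = np.asarray(x, dtype=float)
    # Rescale to [-1, 1] so arccos is defined (assumes a non-constant series).
    x_scaled = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))  # polar angle per sample
    if summation:
        return np.cos(phi[:, None] + phi[None, :])  # summation field (GASF)
    return np.sin(phi[:, None] - phi[None, :])      # difference field (GADF)

# A gaze trace of length n becomes an n x n image suitable for a CNN input.
series = np.sin(np.linspace(0, 2 * np.pi, 64))
image = gramian_angular_field(series)
```

Each pixel (i, j) captures the angular relation between samples i and j, so temporal correlations become spatial structure that convolutional filters can pick up.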
Affiliation(s)
- Lisa-Marie Vortmann
- Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- Jannes Knychalla
- Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- Mathias Benedek
- Creative Cognition Lab, Institute of Psychology, University of Graz, Graz, Austria
- Felix Putze
- Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
|