1. Diegritz C, Fotiadou C, Fleischer F, Reymus M. Tooth Anatomy Inspector: A comprehensive assessment of an extended reality (XR) application designed for teaching and learning of root canal anatomy by students. Int Endod J 2024; 57:1682-1688. PMID: 39046181; DOI: 10.1111/iej.14124.
Abstract
AIM To develop and evaluate a suitable software application for mobile devices designed for teaching root canal anatomy to undergraduate students in an informative and engaging manner. METHODOLOGY Extracted human teeth were scanned by μCT and digitized by conversion into STL files. An extended reality (XR) application illustrating the root canal anatomy of the scanned teeth was developed. Prior to deployment, undergraduate dental students were asked, on a voluntary basis, about their expectations regarding an educational application on tooth anatomy. After a testing phase of the application on a mobile device and within a virtual reality environment, a subsequent evaluation was conducted to assess their overall experience in relation to their initial expectations. Data were analysed using the Kolmogorov-Smirnov test and the Mann-Whitney U test. The level of significance was set at α = .05. RESULTS The application met the expectations of the students in all categories (p = .466-.731). Furthermore, it was evaluated as user-friendly (98.2%) and highly motivating for learning more about root canal anatomy (100%). CONCLUSION Given the overwhelmingly positive reception from undergraduate dental students, the application emerges as a promising supplementary teaching method for the endodontic curriculum.
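A minimal sketch of the statistical comparison this abstract describes, assuming SciPy; the rating values below are invented placeholders, not study data. A normality check motivates the non-parametric choice, and a two-sided Mann-Whitney U test at α = .05 compares expectation against post-use evaluation.

```python
# Hedged sketch of the abstract's expectation-vs-experience comparison.
import numpy as np
from scipy import stats

expectation = np.array([4, 5, 3, 4, 4, 5, 2, 4])  # hypothetical pre-use Likert ratings
experience = np.array([5, 4, 4, 4, 5, 5, 3, 4])   # hypothetical post-use Likert ratings

# Kolmogorov-Smirnov test against a fitted normal motivates the non-parametric test
ks = stats.kstest(expectation, "norm", args=(expectation.mean(), expectation.std()))

# Two-sided Mann-Whitney U at alpha = .05; p >= .05 -> no expectation/experience gap
u, p = stats.mannwhitneyu(expectation, experience, alternative="two-sided")
print(f"KS p = {ks.pvalue:.3f}; U = {u:.0f}, p = {p:.3f}")
```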
Affiliation(s)
- Christian Diegritz
- Department of Conservative Dentistry and Periodontology, LMU University Hospital, LMU Munich, Munich, Germany
- Christina Fotiadou
- Department of Conservative Dentistry and Periodontology, LMU University Hospital, LMU Munich, Munich, Germany
- Felix Fleischer
- Department of Conservative Dentistry and Periodontology, University Hospital Innsbruck, Innsbruck, Austria
- Marcel Reymus
- Department of Conservative Dentistry and Periodontology, LMU University Hospital, LMU Munich, Munich, Germany
2. Chiossi F, Trautmannsheimer I, Ou C, Gruenefeld U, Mayer S. Searching Across Realities: Investigating ERPs and Eye-Tracking Correlates of Visual Search in Mixed Reality. IEEE Trans Vis Comput Graph 2024; 30:6997-7007. PMID: 39264778; DOI: 10.1109/tvcg.2024.3456172.
Abstract
Mixed Reality allows us to integrate virtual and physical content into users' environments seamlessly. Yet, how this fusion affects perceptual and cognitive resources and our ability to find virtual or physical objects remains uncertain. Displaying virtual and physical information simultaneously might lead to divided attention and increased visual complexity, impacting users' visual processing, performance, and workload. In a visual search task, we asked participants to locate virtual and physical objects in Augmented Reality and Augmented Virtuality to understand the effects on performance. We evaluated search efficiency and attention allocation for virtual and physical objects using event-related potentials, fixation and saccade metrics, and behavioral measures. We found that users were more efficient in identifying objects in Augmented Virtuality, while virtual objects gained saliency in Augmented Virtuality. This suggests that visual fidelity might increase the perceptual load of the scene. Reduced amplitude of the distractor positivity ERP and fixation patterns supported improved distractor suppression and search efficiency in Augmented Virtuality. We discuss design implications for mixed reality adaptive systems based on physiological inputs for interaction.
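Fixation metrics of the kind this abstract reports are commonly derived from raw gaze samples with a dispersion-threshold (I-DT) algorithm; the sketch below shows that generic approach. The dispersion and duration thresholds are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch: dispersion-threshold (I-DT) fixation detection from gaze samples.
import numpy as np

def idt_fixations(x, y, t, max_dispersion=1.0, min_duration=0.1):
    """Return (start_time, end_time) fixations from gaze samples.

    x, y: gaze position in degrees of visual angle; t: timestamps in seconds.
    A window counts as a fixation if (max-min of x) + (max-min of y) stays
    below max_dispersion for at least min_duration.
    """
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i
        while j < n and t[j] - t[i] < min_duration:  # initial window of min duration
            j += 1
        if j >= n:
            break
        w = slice(i, j + 1)
        disp = (x[w].max() - x[w].min()) + (y[w].max() - y[w].min())
        if disp <= max_dispersion:
            while j + 1 < n:  # grow window while dispersion stays under threshold
                w = slice(i, j + 2)
                if (x[w].max() - x[w].min()) + (y[w].max() - y[w].min()) > max_dispersion:
                    break
                j += 1
            fixations.append((t[i], t[j]))
            i = j + 1
        else:
            i += 1
    return fixations
```

Saccade counts and amplitudes then fall out of the gaps between consecutive detected fixations.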
3. Edalati S, Slobin J, Harsinay A, Vasan V, Taha MA, Del Signore A, Govindaraj S, Iloreta AM. Augmented and Virtual Reality Applications in Rhinology: A Scoping Review. Laryngoscope 2024; 134:4433-4440. PMID: 38924127; DOI: 10.1002/lary.31602.
Abstract
OBJECTIVES Virtual reality (VR) and augmented reality (AR) are innovative technologies that have a wide range of potential applications in the health care industry. The aim of this study was to investigate the body of research on AR and VR applications in rhinology by performing a scoping review. DATA SOURCES PubMed, Scopus, and Embase. REVIEW METHODS According to PRISMA-ScR guidelines, a scoping review of literature on the application of AR and/or VR in the context of rhinology was conducted using PubMed, Scopus, and Embase. RESULTS Forty-nine articles from 1996 to 2023 met the criteria for review. Five broad types of AR and/or VR applications were found: preoperative, intraoperative, training/education, feasibility, and technical. The following clinical domains were recognized: craniovertebral surgery, nasal endoscopy, transsphenoidal surgery, skull base surgery, endoscopic sinus surgery, and sinonasal malignancies. CONCLUSION AR and VR have comprehensive applications in rhinology. AR for surgical navigation may have the most emerging potential in skull base surgery and endoscopic sinus surgery. VR can be utilized as an engaging training tool for surgeons and residents and as a distraction analgesia for patients undergoing office-based procedures. Additional research is essential to further understand the tangible effects of these technologies on measurable clinical results.
Affiliation(s)
- Shaun Edalati
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Jacqueline Slobin
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Ariel Harsinay
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Vikram Vasan
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Mohamed A Taha
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Anthony Del Signore
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Satish Govindaraj
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Alfred Marc Iloreta
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
4. Crouse MD. Learning to direct attention: Consequences for procedural task training programs. Acta Psychol (Amst) 2024; 250:104502. PMID: 39326200; DOI: 10.1016/j.actpsy.2024.104502.
Abstract
Procedural training programs such as augmented and virtual reality programs often present cues that direct trainees' attention to particular locations and/or items to facilitate learning. However, the impact of different types of cues on trainees' learning is poorly understood. For example, cues that indicate the location of to-be-pressed buttons might cause a trainee to focus on button locations rather than their icons. If the trainee later needs to use a differently-arranged interface, they may be unable to complete the tasks and may need retraining. The current study trained people with either location cues or icon cues and then had them perform the same tasks with a rearranged layout. The results indicate that what a trainee learns is affected by the type of cue and the type of icons in the interface. When the interface contained icons that represented their function, participants trained with location cues had poorer accuracy and reported higher difficulty using the interface than participants trained with icon cues, suggesting that icon cues may lead to greater learning than location cues. Both groups, though, maintained similar accuracy when the interface was rearranged, indicating that both learned button icons. When the interface contained abstract icons, participants trained with icon cues were able to maintain higher accuracy with the rearranged interface than participants trained with location cues, suggesting they had greater knowledge of button icons. This finding indicates that designers of procedural training programs should consider how cue type could impact a trainee's learning, particularly with abstract icons.
Affiliation(s)
- Monique D Crouse
- Department of Psychology, University of California, Santa Cruz, USA.
5. Cannon PC, Setia SA, Klein-Gardner S, Kavoussi NL, Webster RJ, Herrell SD. Are 3D Image Guidance Systems Ready for Use? A Comparative Analysis of 3D Image Guidance Implementations in Minimally Invasive Partial Nephrectomy. J Endourol 2024; 38:395-407. PMID: 38251637; PMCID: PMC10979686; DOI: 10.1089/end.2023.0059.
Abstract
Introduction: Three-dimensional image-guided surgical (3D-IGS) systems for minimally invasive partial nephrectomy (MIPN) can potentially improve the efficiency and accuracy of intraoperative anatomical localization and tumor resection. This review seeks to analyze the current state of research regarding 3D-IGS, including the evaluation of clinical outcomes, system functionality, and qualitative insights regarding 3D-IGS's impact on surgical procedures. Methods: We have systematically reviewed the clinical literature pertaining to 3D-IGS deployed for MIPN. For inclusion, studies must produce a patient-specific 3D anatomical model from two-dimensional imaging. Data extracted from the studies include clinical results, registration (alignment of the 3D model to the surgical scene) method used, limitations, and data types reported. A subset of studies was qualitatively analyzed through an inductive coding approach to identify major themes and subthemes across the studies. Results: Twenty-five studies were included in the review. Eight (32%) studies reported clinical results that point to 3D-IGS improving multiple surgical outcomes. Manual registration was the most utilized (48%). Soft tissue deformation was the most cited limitation among the included studies. Many studies reported qualitative statements regarding surgeon accuracy improvement, but quantitative surgeon accuracy data were not reported. During the qualitative analysis, six major themes emerged across the nine applicable studies. They are as follows: 3D-IGS is necessary, 3D-IGS improved surgical outcomes, researcher/surgeon confidence in 3D-IGS system, enhanced surgeon ability/accuracy, anatomical explanation for qualitative assessment, and claims without data or reference to support. Conclusions: Currently, clinical outcomes are the main source of quantitative data available to point to 3D-IGS's efficacy. However, the literature qualitatively suggests the benefit of accurate 3D-IGS for robotic partial nephrectomy.
Affiliation(s)
- Piper C. Cannon
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Shaan A. Setia
- Department of Urologic Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Stacy Klein-Gardner
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Nicholas L. Kavoussi
- Department of Urologic Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Robert J. Webster
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- S. Duke Herrell
- Department of Urologic Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
6. Hults CM, Ding Y, Xie GG, Raja R, Johnson W, Lee A, Simons DJ. Inattentional blindness in medicine. Cogn Res Princ Implic 2024; 9:18. PMID: 38536589; PMCID: PMC10973299; DOI: 10.1186/s41235-024-00537-x.
Abstract
People often fail to notice unexpected stimuli when their attention is directed elsewhere. Most studies of this "inattentional blindness" have been conducted using laboratory tasks with little connection to real-world performance. Medical case reports document examples of missed findings in radiographs and CT images, unintentionally retained guidewires following surgery, and additional conditions being overlooked after making initial diagnoses. These cases suggest that inattentional blindness might contribute to medical errors, but relatively few studies have directly examined inattentional blindness in realistic medical contexts. We review the existing literature, much of which focuses on the use of augmented reality aids or inspection of medical images. Although these studies suggest a role for inattentional blindness in errors, most of the studies do not provide clear evidence that these errors result from inattentional blindness as opposed to other mechanisms. We discuss the design, analysis, and reporting practices that can make the contributions of inattentional blindness unclear, and we describe guidelines for future research in medicine and similar contexts that could provide clearer evidence for the role of inattentional blindness.
Affiliation(s)
- Connor M Hults
- University of Illinois at Urbana-Champaign, Champaign, USA
- Yifan Ding
- University of Illinois at Urbana-Champaign, Champaign, USA
- Geneva G Xie
- University of Illinois at Urbana-Champaign, Champaign, USA
- Rishi Raja
- University of Illinois at Urbana-Champaign, Champaign, USA
- Alexis Lee
- University of Illinois at Urbana-Champaign, Champaign, USA
7.
Abstract
BACKGROUND In recent years, numerous innovative yet challenging surgeries, such as minimally invasive procedures, have introduced an overwhelming amount of new technology, increasing the cognitive load on surgeons and potentially diluting their attention. Cognitive support technologies (CSTs) have been in development to reduce surgeons' cognitive load and minimize errors, but despite high demand they still lack a systematic review. METHODS PubMed, Web of Science, and IEEE Xplore were searched for literature published up to May 21, 2021. Studies that aimed at reducing the cognitive load of surgeons were included. Studies that contained an experimental trial with real patients and real surgeons were prioritized, although phantom and animal studies were also included. Major outcomes assessed included surgical error, anatomical localization accuracy, total procedural time, and patient outcome. RESULTS A total of 37 studies were included. Overall, the implementation of CSTs yielded better surgical performance than traditional methods. Most studies reported decreased error rates and increased efficiency. In terms of accuracy, most CSTs identified anatomical markers with over 90% accuracy and an error margin below 5 mm. Most studies reported a decrease in surgical time, although some differences were statistically insignificant. DISCUSSION CSTs have been shown to reduce the mental workload of surgeons. However, the limited ergonomic design of current CSTs has hindered their widespread use in the clinical setting. Overall, more clinical data on actual patients are needed to provide concrete evidence before the ubiquitous implementation of CSTs.
Affiliation(s)
- Zhong Shi Zhang
- Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- Yun Wu
- Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- Bin Zheng
- Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
8. Cullen HJ, Paterson HM, Dutton TS, van Golde C. A survey of what legal populations believe and know about inattentional blindness and visual detection. PLoS One 2024; 19:e0296489. PMID: 38180989; PMCID: PMC10769081; DOI: 10.1371/journal.pone.0296489.
Abstract
Inattentional blindness refers to when people fail to notice obvious and unexpected events because their attention is elsewhere. Existing research suggests that inattentional blindness is a poorly understood concept that violates beliefs commonly held by the public about vision and attention. Given that legal cases may involve individuals who have experienced inattentional blindness, it is important to understand the beliefs that legal populations and members of the community hold about inattentional blindness, and their general familiarity and experience with the concept. Australian police officers (n = 94) and lawyers (n = 98), along with psychology students (n = 99) and community members (n = 100), completed a survey in which they: a) stated whether an individual would have noticed an event in six legal vignettes, b) rated whether factors would make an individual more, less, or just as likely to notice an unexpected event, c) reported their familiarity with and personal experiences of inattentional blindness, and d) indicated whether they believed individuals could make themselves more likely to notice unexpected events. Respondents in all populations frequently responded "yes" to detecting the unexpected event in most legal vignettes. They also held misconceptions about some factors (expertise and threat) that would influence the noticing of unexpected events. Additionally, personal experiences with inattentional blindness were commonly reported. Finally, respondents provided strategies for making oneself more likely to notice unexpected events, despite a lack of evidence to support them. Overall, these findings provide direction for where education and training could be targeted to address misconceptions about inattentional blindness held by legal populations, which may lead to improved decision-making in legal settings.
Affiliation(s)
- Hayley J. Cullen
- School of Psychology, The University of Sydney, Sydney, Australia
- School of Psychological Sciences, The University of Newcastle, Newcastle, Australia
- School of Psychological Sciences, Macquarie University, Sydney, Australia
- Celine van Golde
- School of Psychology, The University of Sydney, Sydney, Australia
9. Matinfar S, Salehi M, Suter D, Seibold M, Dehghani S, Navab N, Wanivenhaus F, Fürnstahl P, Farshad M, Navab N. Sonification as a reliable alternative to conventional visual surgical navigation. Sci Rep 2023; 13:5930. PMID: 37045878; PMCID: PMC10097653; DOI: 10.1038/s41598-023-32778-z.
Abstract
Despite the undeniable advantages of image-guided surgical assistance systems in terms of accuracy, such systems have not yet fully met surgeons' needs or expectations regarding usability, time efficiency, and integration into the surgical workflow. On the other hand, perceptual studies have shown that presenting independent but causally correlated information via multimodal feedback involving different sensory modalities can improve task performance. This article investigates an alternative method for computer-assisted surgical navigation, introduces a novel sonification methodology for navigated pedicle screw placement, and discusses advanced solutions based on multisensory feedback. The proposed method sonifies alignment tasks in four degrees of freedom (DOF) using frequency modulation synthesis. We compared the accuracy and execution time of the proposed sonification method with those of visual navigation, which is currently considered the state of the art. We conducted a phantom study in which 17 surgeons executed the pedicle screw placement task in the lumbar spine, guided by either the proposed sonification-based method or the traditional visual navigation method. The results demonstrated that the proposed method is as accurate as the state of the art while decreasing the surgeon's need to focus on visual navigation displays, allowing a natural focus on surgical tools and the targeted anatomy during task execution.
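As a rough illustration of parameter sonification through frequency modulation synthesis, the sketch below renders a tone whose modulation index grows with a single alignment error, so the timbre roughens as the tool drifts off target. The mapping, carrier, and modulator frequencies are invented for illustration and are not the published four-DOF design.

```python
# Hedged sketch: FM synthesis with a 1-DOF error mapped to the modulation index.
import numpy as np

def fm_tone(error_mm, duration=0.25, fs=44100, fc=440.0, fm=110.0):
    """Render an FM tone; larger error -> larger modulation index -> harsher timbre."""
    t = np.arange(int(duration * fs)) / fs
    index = np.clip(error_mm, 0.0, 10.0)  # modulation index driven by alignment error
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

samples = fm_tone(error_mm=2.5)  # e.g. write to a WAV file or stream to audio out
```

A continuous timbral parameter like the modulation index suits guidance tasks because it changes audibly and smoothly with error, without requiring the listener to look anywhere.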
Affiliation(s)
- Sasan Matinfar
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748 Munich, Germany
- Nuklearmedizin rechts der Isar, Technical University of Munich, 81675 Munich, Germany
- Mehrdad Salehi
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748 Munich, Germany
- Daniel Suter
- Department of Orthopaedics, Balgrist University Hospital, 8008 Zurich, Switzerland
- Matthias Seibold
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748 Munich, Germany
- Research in Orthopedic Computer Science (ROCS), Balgrist University Hospital, University of Zurich, Balgrist Campus, 8008 Zurich, Switzerland
- Shervin Dehghani
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748 Munich, Germany
- Nuklearmedizin rechts der Isar, Technical University of Munich, 81675 Munich, Germany
- Navid Navab
- Topological Media Lab, Concordia University, Montreal, H3G 2W1, Canada
- Florian Wanivenhaus
- Department of Orthopaedics, Balgrist University Hospital, 8008 Zurich, Switzerland
- Philipp Fürnstahl
- Research in Orthopedic Computer Science (ROCS), Balgrist University Hospital, University of Zurich, Balgrist Campus, 8008 Zurich, Switzerland
- Mazda Farshad
- Department of Orthopaedics, Balgrist University Hospital, 8008 Zurich, Switzerland
- Nassir Navab
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748 Munich, Germany
10. Ceccariglia F, Cercenelli L, Badiali G, Marcelli E, Tarsitano A. Application of Augmented Reality to Maxillary Resections: A Three-Dimensional Approach to Maxillofacial Oncologic Surgery. J Pers Med 2022; 12:2047. PMID: 36556268; PMCID: PMC9785494; DOI: 10.3390/jpm12122047.
Abstract
Although virtual reality, augmented reality, and mixed reality have been emerging methodologies for several years, only recent technological and scientific advances have made them suitable for revolutionizing clinical care and medical settings through advanced features and improved healthcare services. Over the past fifteen years, tools and applications using augmented reality (AR) have been designed and tested in various surgical and medical disciplines, including maxillofacial surgery. The purpose of this paper is to show how a marker-less AR guidance system using the Microsoft® HoloLens 2 can be applied in mandibular and maxillary resection surgery to guide maxillary osteotomies. We describe three mandibular and maxillary oncologic resections performed during 2021 with AR support. In these three patients, we applied a marker-less tracking method based on recognition of the patient's facial profile. The surgeon, wearing HoloLens 2 smart glasses, could see the virtual surgical plan superimposed on the patient's anatomy. We showed that performing osteotomies under AR guidance is feasible, as demonstrated by comparison with osteotomies performed using CAD-CAM cutting guides. This technology has advantages and disadvantages; however, further research is needed to improve the stability and robustness of the marker-less tracking method applied to patient face recognition.
Affiliation(s)
- Francesco Ceccariglia
- Oral and Maxillo-Facial Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Via Albertoni 15, 40138 Bologna, Italy
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Science, University of Bologna, 40138 Bologna, Italy
- Correspondence: Tel.: +39-051-2144197
- Laura Cercenelli
- eDimes Lab-Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Giovanni Badiali
- Oral and Maxillo-Facial Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Via Albertoni 15, 40138 Bologna, Italy
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Science, University of Bologna, 40138 Bologna, Italy
- Emanuela Marcelli
- eDimes Lab-Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Achille Tarsitano
- Oral and Maxillo-Facial Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Via Albertoni 15, 40138 Bologna, Italy
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Science, University of Bologna, 40138 Bologna, Italy
11. Zhang G, Bartels J, Martin-Gomez A, Armand M. Towards Reducing Visual Workload in Surgical Navigation: Proof-of-concept of an Augmented Reality Haptic Guidance System. Comput Methods Biomech Biomed Eng Imaging Vis 2022; 11:1073-1080. PMID: 38487569; PMCID: PMC10938944; DOI: 10.1080/21681163.2022.2152372.
Abstract
The integration of navigation capabilities into the operating room has enabled surgeons to take on more precise procedures guided by a pre-operative plan. Traditionally, navigation information based on this plan is presented on monitors in the surgical theater, but these monitors force the surgeon to frequently look away from the surgical area. Alternative technologies, such as augmented reality, have enabled surgeons to visualize navigation information in situ. However, burdening the visual field with additional information can be distracting. In this work, we propose integrating haptic feedback into a surgical tool handle to enable surgical guidance. This reduces the amount of visual information, freeing surgeons to maintain visual attention on the patient and the surgical site. To investigate the feasibility of this guidance paradigm, we conducted a pilot study with six subjects. Participants traced paths, pinpointed locations, and matched alignments with a mock surgical tool featuring a novel haptic handle. We collected quantitative data tracking users' accuracy and time to completion, as well as subjective cognitive load. Our results show that haptic feedback can guide participants using a tool to sub-millimeter and sub-degree accuracy with only little training. Participants were able to match a location with an average error of 0.82 mm, desired pivot alignments with an average error of 0.83°, and desired rotations to 0.46°.
Affiliation(s)
- Gesiren Zhang
- Biomechanical- and Image-Guided Surgical Systems (BIGSS) Lab, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Jan Bartels
- Biomechanical- and Image-Guided Surgical Systems (BIGSS) Lab, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Alejandro Martin-Gomez
- Biomechanical- and Image-Guided Surgical Systems (BIGSS) Lab, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Mehran Armand
- Biomechanical- and Image-Guided Surgical Systems (BIGSS) Lab, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, MD, USA
12. Ward M, Helton WS. More or less? Improving monocular head mounted display assisted visual search by reducing guidance precision. Appl Ergon 2022; 102:103720. PMID: 35247830; DOI: 10.1016/j.apergo.2022.103720.
Abstract
OBJECTIVE To test six different methods of directing a user's attention in a peripheral head mounted display assisted visual search task. BACKGROUND Each shift of attention between virtual information and the environment has a cost. The faster a user can process a guiding cue and the fewer times they need to return to it, the more efficient that cue will be at directing attention. The most effective method, creating a visual effect at the location of the target, is not suitable for peripheral head mounted displays. This study tests alternative guiding cues better suited to these devices. METHOD Participants searched for a singleton target hidden among 299 distractors while directed by one of six device-delivered guiding cues. Search times were recorded. RESULTS A static region map was the most efficient and most preferred cue. Static and dynamic directional cues were also effective in comparison to non-guided search. Cues designed to work solely within the participants' peripheral vision were relatively ineffective. CONCLUSION Guidance cues that direct a user's attention to targets within the real environment do not need to lead precisely to the target. It is instead more efficient to lead the user to the general vicinity of the target quickly and then have them revert to their natural visual search behaviour. APPLICATION This finding is broadly useful when assisting visual search tasks with handheld or worn devices that do not cover the user's full field of view. PRÉCIS This study tested six methods of guiding attention in a peripheral head-mounted display assisted visual search task, comparing static, dynamic, and peripheral-vision endogenous cues to targets, and found a simple static map cue to be both fastest and most preferred by users.
Affiliation(s)
- Matthew Ward
- University of Canterbury, Christchurch, New Zealand
- William S Helton
- University of Canterbury, Christchurch, New Zealand
- George Mason University, Fairfax, USA
13. Cao S, Wei X, Hu J, Zhang H. Which Seat Facilitates the Detection of Off-Seat Behaviours? An Inattentional Blindness Test on Location Effect in the Classroom. Front Psychol 2022; 13:899696. PMID: 35846683; PMCID: PMC9281894; DOI: 10.3389/fpsyg.2022.899696.
Abstract
Off-seat behaviour refers to students leaving their seats and walking out of a classroom without the teacher noticing. This behaviour occurs in special education among students with certain special needs and can lead to serious safety problems. This study carried out an inattentional blindness test to explore whether the location of seats in a classroom affects teachers' detection of off-seat behaviours. The participants were 126 pre-service teachers (mean age = 18.72 ± 0.72 years; 92% female) who performed the primary task of counting students raising their hands while the disappearance of one of the students was introduced as an unexpected event. The results show that teachers noticed the “missing student” more readily in peripheral seats than in central ones. Meanwhile, seats to the left and below were more likely to be ignored than those to the right and above. These results suggest a location effect in the classroom associated with teachers' attention to off-seat behaviour. The study has implications for classroom management: arranging students' seats appropriately can help increase teachers' identification of this hazard.
Affiliation(s)
- Shuqin Cao
- School of Special Education, Zhejiang Normal University, Hangzhou, China
- Xiuying Wei
- School of Special Education, Zhejiang Normal University, Hangzhou, China
- Zi Jinghua School, Hangzhou, China
- Jiangbo Hu (corresponding author)
- Hangzhou Preschool Teachers College, Zhejiang Normal University, Hangzhou, China
- Hui Zhang (corresponding author)
- School of Special Education, Zhejiang Normal University, Hangzhou, China
14. Augmented Reality in Arthroplasty: An Overview of Clinical Applications, Benefits, and Limitations. J Am Acad Orthop Surg 2022; 30:e760-e768. PMID: 35245236; DOI: 10.5435/jaaos-d-21-00964.
Abstract
Augmented reality (AR) is a natural extension of computer-assisted surgery whereby a computer-generated image is superimposed on the surgeon's field of vision to assist in the planning and execution of the procedure. This emerging technology shows great potential in the field of arthroplasty, improving efficiency, limb alignment, and implant position. AR has shown the capacity to build on computer navigation systems while providing more elaborate information in a streamlined workflow to the user. This review investigates the current uses of AR in the field of arthroplasty and discusses outcomes, limitations, and potential future directions.
15. Role of Augmented Reality in Changing Consumer Behavior and Decision Making: Case of Pakistan. Sustainability 2021. DOI: 10.3390/su132414064.
Abstract
Marketers and advertisers ignore new technology and diverse marketing tactics when attempting to increase product exposure, customer engagement, customer behavior, and buying intention in fashion accessory marketplaces in developing countries. This research sought to discover how the Augmented Reality (AR) experience influences consumer behavior, buying intention, and pleasure when purchasing a fashion item in developing countries. The study employs positivist ideas to investigate the connections between various factors, believing that reality is unwavering, stable, and static. Cross-sectional data were gathered through experiential marketing following stimulus exposure. The study developed a within-group experimental design drawing on business innovation models, for instance the uses-and-gratifications and user-experience models. User experience is disclosed by its defining characteristics: hedonic quality (identification and stimulation), aesthetic quality, and pragmatic quality. After an enhanced user experience, users have a more favorable attitude toward purchasing; in addition, pleasure from using the application directly impacts buying intention. It was also shown that familiarity with AR apps impacts user experience and attitude. The novelty of this research is multifarious. First, a smart lab was used as a marketing technology to explore a virtual mirror of Ray-Ban products. Second, the augmented reality experiential marketing activities were developed bearing in mind four aspects of the user experience: haptic, hedonic, aesthetic, and pragmatic. The experience should be functional, simple to learn and use, symmetrical, pleasant, and appealing, while fulfilling the unconscious emotional elements of a customer's purchase. This is the first known study in Pakistan to evaluate the influence of augmented reality on consumer proficiency and its consequent effects on attitude and satisfaction for fashion accessory brands. The research also advances the notion that application familiarity is the most important moderator between attitude and an augmented reality-enriched user experience, contradicting prior studies that focus on gender and age. The findings have theoretical implications for future researchers, who may wish to replicate the proposed final model for fashion brands in developed and developing countries, and managerial implications for brand and marketing managers, who could incorporate the recommendations of this study into their marketing strategies.
16. Wriessnegger SC, Raggam P, Kostoglou K, Müller-Putz GR. Mental State Detection Using Riemannian Geometry on Electroencephalogram Brain Signals. Front Hum Neurosci 2021; 15:746081. PMID: 34899215; PMCID: PMC8663761; DOI: 10.3389/fnhum.2021.746081.
Abstract
The goal of this study was to implement a Riemannian geometry (RG)-based algorithm to detect high mental workload (MWL) and mental fatigue (MF) using task-induced electroencephalogram (EEG) signals. In order to elicit high MWL and MF, the participants performed a cognitively demanding task in the form of the letter n-back task. We analyzed the time-varying characteristics of the EEG band power (BP) features in the theta and alpha frequency bands at different task conditions and cortical areas by employing an RG-based framework. MWL and MF were considered too high when the Riemannian distances of the task-run EEG reached or surpassed the threshold of the baseline EEG. The results of this study showed a BP increase in the theta and alpha frequency bands with increasing experiment duration, indicating elevated MWL and MF that hinder the participants' task performance. High MWL and MF were detected in 8 out of 20 participants. The Riemannian distances also showed a steady increase toward the threshold with increasing experiment duration, with most detections occurring toward the end of the experiment. To support our findings, subjective ratings (questionnaires concerning fatigue and workload levels) and behavioral measures (performance accuracies and response times) were also considered.
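A minimal sketch of the distance-to-baseline rule described above, assuming the pyriemann library; the shrinkage covariance estimator, the epoch shapes, and the max-of-baseline threshold rule are illustrative assumptions, not the authors' published settings.

```python
# Hedged sketch: flag high workload/fatigue when the Riemannian distance of a
# task epoch's covariance from the baseline mean reaches a baseline threshold.
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.utils.mean import mean_riemann
from pyriemann.utils.distance import distance_riemann

def detect_overload(baseline_epochs, task_epochs):
    """Epochs: arrays of shape (n_epochs, n_channels, n_samples), band-passed EEG."""
    cov = Covariances(estimator="lwf")              # shrinkage covariance per epoch
    base_covs = cov.fit_transform(baseline_epochs)
    task_covs = cov.transform(task_epochs)

    ref = mean_riemann(base_covs)                   # Riemannian mean of baseline
    base_d = np.array([distance_riemann(c, ref) for c in base_covs])
    threshold = base_d.max()                        # assumed rule: max baseline distance

    task_d = np.array([distance_riemann(c, ref) for c in task_covs])
    return task_d >= threshold                      # True -> MWL/MF flagged as too high
```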
Affiliation(s)
- Selina C Wriessnegger
- Institute of Neural Engineering, Graz University of Technology, Graz, Austria
- BioTechMed-Graz, Graz, Austria
- Philipp Raggam
- Research Group Neuroinformatics, Faculty of Computer Science, University of Vienna, Vienna, Austria
- Department of Neurology and Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Kyriaki Kostoglou
- Institute of Neural Engineering, Graz University of Technology, Graz, Austria
- Gernot R Müller-Putz
- Institute of Neural Engineering, Graz University of Technology, Graz, Austria
- BioTechMed-Graz, Graz, Austria
17. Matias J, Belletier C, Izaute M, Lutz M, Silvert L. The role of perceptual and cognitive load on inattentional blindness: A systematic review and three meta-analyses. Q J Exp Psychol (Hove) 2021; 75:1844-1875. PMID: 34802311; DOI: 10.1177/17470218211064903.
Abstract
The inattentional blindness phenomenon refers to situations in which a visible but unexpected stimulus remains consciously unnoticed by observers. This phenomenon is classically explained as the consequence of insufficient attention, because attentional resources are already engaged elsewhere or vary between individuals. However, this attentional-resources view is broad and often imprecise regarding the variety of attentional models, the different pools of resources that can be involved in attentional tasks, and the heterogeneity of the experimental paradigms. Our aim was to investigate whether a classic theoretical model of attention, namely the Load Theory, could account for a large range of empirical findings in this field by distinguishing the role of perceptual and cognitive resources in attentional selection and attentional capture by irrelevant stimuli. As this model has been mostly built on implicit measures of distractor interference, it is unclear whether its predictions also hold when explicit and subjective awareness of an unexpected stimulus is concerned. Therefore, we conducted a systematic review and meta-analyses of inattentional blindness studies investigating the role of perceptual and/or cognitive resources. The results reveal that, in line with the perceptual account of the Load Theory, inattentional blindness significantly increases with the perceptual load of the task. However, the cognitive account of this theory is not clearly supported by the empirical findings analysed here. Furthermore, the interaction between perceptual and cognitive load on inattentional blindness remains understudied. Theoretical implications for the Load Theory are discussed, notably regarding the difference between attentional capture and subjective awareness paradigms, and further research directions are provided.
Affiliation(s)
- Jérémy Matias
- Laboratoire de Psychologie Sociale et Cognitive (LAPSCO), Université Clermont Auvergne-CNRS, Clermont-Ferrand, France
- Clément Belletier
- Laboratoire de Psychologie Sociale et Cognitive (LAPSCO), Université Clermont Auvergne-CNRS, Clermont-Ferrand, France
- Marie Izaute
- Laboratoire de Psychologie Sociale et Cognitive (LAPSCO), Université Clermont Auvergne-CNRS, Clermont-Ferrand, France
- Matthieu Lutz
- Innovation Procédés Industriels, Michelin Recherche et Développement, Clermont-Ferrand, France
- Laetitia Silvert
- Laboratoire de Psychologie Sociale et Cognitive (LAPSCO), Université Clermont Auvergne-CNRS, Clermont-Ferrand, France
18. Perceptual and cognitive processes in augmented reality - comparison between binocular and monocular presentations. Atten Percept Psychophys 2021; 84:490-508. PMID: 34426931; PMCID: PMC8888418; DOI: 10.3758/s13414-021-02346-6.
Abstract
In the present study, we investigated the difference between monocular augmented reality (AR) and binocular AR in terms of perception and cognition by using a task that combines the flanker task with the oddball task. A right- or left-facing arrowhead was presented as a central stimulus at the central vision, and participants were instructed to press a key only when the direction in which the arrowhead faced was a target. In a small number of trials, arrowheads that were facing in the same or opposite direction (flanker stimuli) were presented beside the central stimulus binocularly or monocularly as an AR image. In the binocular condition, the flanker stimuli were presented to both eyes, and, in the monocular condition, only to the dominant eye. The results revealed that participants could respond faster in the binocular condition than in the monocular one; however, only when the flanker stimuli were in the opposite direction was the response faster in the monocular condition. Moreover, the results of event-related brain potentials (ERPs) showed that all stimuli were processed in both the monocular and the binocular conditions in the perceptual stage; however, the influence of the flanker stimuli was attenuated in the monocular condition in the cognitive stage. The influence of flanker stimuli might be more unstable in the monocular condition than in the binocular condition, but more precise examination should be conducted in a future study.
19. Maharjan N, Alsadoon A, Prasad PWC, Abdullah S, Rashid TA. A novel visualization system of using augmented reality in knee replacement surgery: Enhanced bidirectional maximum correntropy algorithm. Int J Med Robot 2021; 17:e2223. PMID: 33421286; DOI: 10.1002/rcs.2223.
Abstract
BACKGROUND AND AIM Image registration and alignment are the main limitations of augmented reality (AR)-based knee replacement surgery. This research aims to decrease the registration error, eliminate outcomes that are trapped in local minima so as to improve alignment, handle occlusion, and maximize the overlapping parts. METHODOLOGY A markerless image registration method was used for AR-based knee replacement surgery to guide and visualize the surgical operation. A weighted least squares algorithm was used to enhance stereo camera-based tracking by filling border occlusion in the right-to-left direction and non-border occlusion in the left-to-right direction. RESULTS The proposed system improved video precision to an alignment error of 0.57-0.61 mm. Furthermore, with the use of bidirectional (forward and backward) cloud points, the number of image registration iterations was decreased, which also improved the processing time: video frames were processed at 7.4-11.74 frames per second. CONCLUSIONS The proposed system focuses on overcoming the misalignment caused by patient movement and on enhancing AR visualization during knee replacement surgery. It proved reliable and favourable, eliminating alignment error by ascertaining the optimal rigid transformation between two cloud points and removing outliers and non-Gaussian noise. The proposed AR system helps in accurate visualization and navigation of knee anatomy such as the femur, tibia, cartilage, and blood vessels.
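For orientation, the sketch below shows the standard closed-form (SVD/Kabsch) solution for the optimal rigid transformation between corresponding point clouds, wrapped in a simple residual-trimming loop as an illustrative stand-in for outlier handling; the paper's enhanced bidirectional maximum correntropy algorithm is a more robust refinement of this least-squares baseline and is not reproduced here.

```python
# Hedged baseline: least-squares rigid registration of corresponding 3D points.
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t with Q ~ R @ P + t; P, Q are (n, 3) correspondences."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                       # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def trimmed_registration(P, Q, keep=0.8, iters=5):
    """Re-fit on the best-matching fraction of points to shed gross outliers."""
    idx = np.arange(len(P))
    for _ in range(iters):
        R, t = rigid_transform(P[idx], Q[idx])
        residuals = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        idx = np.argsort(residuals)[: int(keep * len(P))]
    return R, t
```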
Affiliation(s)
- Nitish Maharjan
- School of Computing and Mathematics, Charles Sturt University (CSU), Sydney Campus, Wagga Wagga, Australia
- Abeer Alsadoon
- School of Computing and Mathematics, Charles Sturt University (CSU), Sydney Campus, Wagga Wagga, Australia
- School of Computer Data and Mathematical Sciences, University of Western Sydney (UWS), Sydney, Australia
- School of Information Technology, Southern Cross University (SCU), Sydney, Australia
- Asia Pacific International College (APIC), Information Technology Department, Sydney, Australia
- Kent Institute Australia, Sydney, Australia
- P W C Prasad
- School of Computing and Mathematics, Charles Sturt University (CSU), Sydney Campus, Wagga Wagga, Australia
- Salma Abdullah
- Department of Computer Engineering, University of Technology, Baghdad, Iraq
- Tarik A Rashid
- Asia Pacific International College (APIC), Information Technology Department, Sydney, Australia
20.
Abstract
Current developments in the field of extended reality (XR) could prove useful for optimizing surgical workflows, time effectiveness, and postoperative outcome. Although still primarily a subject of research, XR technologies are rapidly improving and approaching feasibility for broad clinical application. The surgical fields of application of XR technologies are currently primarily training, preoperative planning, and intraoperative assistance. For all three areas, products already exist (some clinically approved) and technical feasibility studies have been conducted. In teaching, the use of XR can already be considered fundamentally practical and meaningful but still needs to be evaluated in large multicenter studies. In preoperative planning, XR can also offer advantages, although technical limitations often impede routine use. For intraoperative use, informative evaluation studies are largely lacking, so a meaningful assessment is not yet possible. Furthermore, assessments of cost-effectiveness are lacking in all three areas. Despite the lack of high-quality evaluation of their practical and clinical use, XR technologies enable demonstrable advantages in surgical workflows. New concepts for effective interaction with XR media also need to be developed. Further research progress and technical developments in the field can be expected in the future.
Affiliation(s)
- Christoph Rüger
- Department of Surgery, Experimental Surgery, Campus Charité Mitte | Campus Virchow-Klinikum, Charité - Universitätsmedizin Berlin, Augustenburger Platz 1, 13353 Berlin, Germany
- Simon Moosburner
- Department of Surgery, Experimental Surgery, Campus Charité Mitte | Campus Virchow-Klinikum, Charité - Universitätsmedizin Berlin, Augustenburger Platz 1, 13353 Berlin, Germany
- Igor M Sauer
- Department of Surgery, Experimental Surgery, Campus Charité Mitte | Campus Virchow-Klinikum, Charité - Universitätsmedizin Berlin, Augustenburger Platz 1, 13353 Berlin, Germany
- Matters of Activity. Image Space Material, Berlin, Germany
21. Kortschot SW, Jamieson GA. Classification of Attentional Tunneling Through Behavioral Indices. Hum Factors 2020; 62:973-986. PMID: 31260334; DOI: 10.1177/0018720819857266.
Abstract
OBJECTIVE The objective of this study was to develop a machine learning classifier to infer attentional tunneling through behavioral indices. This research serves as a proof of concept for a method for inferring operator state to trigger adaptations to user interfaces. BACKGROUND Adaptive user interfaces adapt their information content or configuration to changes in operating context. Operator attentional states represent a promising class of triggers for these adaptations. Behavioral indices may be a viable alternative to physiological correlates for triggering interface adaptations based on attentional state. METHOD A visual search task sought to induce attentional tunneling in participants. We analyzed user interaction under tunnel and non-tunnel conditions to determine whether the paradigm was successful. We then examined the performance trade-offs stemming from attentional tunnels. Finally, we developed a machine learning classifier to identify patterns of interaction characteristics associated with attentional tunnels. RESULTS The experimental paradigm successfully induced attentional tunnels. Attentional tunnels were shown to improve performance when information appeared within them, but to hinder performance when it appeared outside. Participants were found to be more tunneled in their second tunnel trial relative to their first. Our classifier achieved a classification accuracy similar to comparable studies (area under curve = 0.74). CONCLUSION Behavioral indices can be used to infer attentional tunneling. There is a performance trade-off from attentional tunneling, suggesting the opportunity for adaptive systems. APPLICATION This research applies to adaptive automation aimed at managing operator attention in information-dense work domains.
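A generic sketch of the classification step this abstract reports: a model trained on interaction features to predict tunnel/non-tunnel labels, scored by area under the ROC curve (the study reports AUC = 0.74). The feature set, model choice, and data below are placeholders; the paper's actual behavioral indices and classifier are not specified in this abstract.

```python
# Hedged sketch: classifying attentional tunneling from behavioral indices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))      # placeholder interaction features per trial
y = rng.integers(0, 2, size=200)   # placeholder tunnel (1) / non-tunnel (0) labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print(f"AUC = {roc_auc_score(y, proba):.2f}")  # compare against the reported 0.74
```

Cross-validated probabilities are used so the AUC reflects held-out performance rather than training fit, which matters if such a classifier is to trigger interface adaptations online.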
22. Laverdière C, Corban J, Khoury J, Ge SM, Schupbach J, Harvey EJ, Reindl R, Martineau PA. Augmented reality in orthopaedics. Bone Joint J 2019; 101-B:1479-1488. DOI: 10.1302/0301-620x.101b12.bjj-2019-0315.r1.
Abstract
Aims Computer-based applications are increasingly being used by orthopaedic surgeons in their clinical practice. With the integration of technology in surgery, augmented reality (AR) may become an important tool for surgeons in the future. By superimposing a digital image on a user's view of the physical world, this technology shows great promise in orthopaedics. The aim of this review is to investigate the current and potential uses of AR in orthopaedics. Materials and Methods A systematic review of the PubMed, MEDLINE, and Embase databases up to January 2019 using the keywords 'orthopaedic' OR 'orthopedic AND augmented reality' was performed by two independent reviewers. Results A total of 41 publications were included after screening. Applications were divided by subspecialty: spine (n = 15), trauma (n = 16), arthroplasty (n = 3), oncology (n = 3), and sports (n = 4). Of these, 12 were clinical in nature. AR-based technologies have a wide variety of applications, including direct visualization of radiological images by overlaying them on the patient, intraoperative guidance using preoperative plans projected onto real anatomy, hands-free real-time access to operating room resources, and support for telemedicine and education. Conclusion There is increasing interest in AR among orthopaedic surgeons. Although studies show similar or better outcomes with AR compared with traditional techniques, many challenges need to be addressed before this technology is ready for widespread use.
Affiliation(s)
- Carl Laverdière
- Department of Orthopedic Surgery, McGill University Health Centre, Montreal, Canada
- Jason Corban
- Department of Orthopedic Surgery, McGill University Health Centre, Montreal, Canada
- Jason Khoury
- Department of Orthopedic Surgery, McGill University Health Centre, Montreal, Canada
- Susan Mengxiao Ge
- Department of Orthopedic Surgery, McGill University Health Centre, Montreal, Canada
- Justin Schupbach
- Department of Orthopedic Surgery, McGill University Health Centre, Montreal, Canada
- Edward J. Harvey
- Department of Orthopedic Surgery, McGill University Health Centre, Montreal, Canada
- Rudy Reindl
- Department of Orthopedic Surgery, McGill University Health Centre, Montreal, Canada
- Paul A. Martineau
- Department of Orthopedic Surgery, McGill University Health Centre, Montreal, Canada
Collapse
|
23
|
Perfect Registration Leads to Imperfect Performance: A Randomized Trial of Multimodal Intraoperative Image Guidance. Ann Surg 2019; 269:236-242. [PMID: 29727330 DOI: 10.1097/sla.0000000000002793] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
OBJECTIVE To compare surgical safety and efficiency of 2 image guidance modalities, perfect augmented reality (AR) and side-by-side unregistered image guidance (IG), against a no guidance control (NG), when performing a simulated laparoscopic cholecystectomy (LC). BACKGROUND Image guidance using AR offers the potential to improve understanding of subsurface anatomy, with positive ramifications for surgical safety and efficiency. No intra-abdominal study has demonstrated any advantage for the technology. Perfect AR cannot be provided in the operative setting in a patient; however, it can be generated in the simulated setting. METHODS Thirty-six experienced surgeons performed a baseline LC using the LapMentor simulator before randomization to 1 of 3 study arms: AR, IG, or NG. Each performed 3 further LC. Safety and efficiency-related simulator metrics, and task workload (SURG-TLX) were collected. RESULTS The IG group had a shorter total instrument path length and fewer movements than NG and AR groups. Both IG and NG took a significantly shorter time than AR to complete dissection of Calot triangle. Use of IG and AR resulted in significantly fewer perforations and serious complications than the NG group. IG had significantly fewer perforations and serious complications than the AR group. Compared with IG, AR guidance was found to be significantly more distracting. CONCLUSION Side-by-side unregistered image guidance (IG) improved safety and surgical efficiency in a simulated setting when compared with AR or NG. IG provides a more tangible opportunity for integrating image guidance into existing surgical workflow as well as delivering the safety and efficiency benefits desired.
Collapse
|
24
|
Kitamura A, Kinosada Y, Shinohara K. Monocular Presentation Attenuates Change Blindness During the Use of Augmented Reality. Front Psychol 2019; 10:1688. [PMID: 31417452 PMCID: PMC6684742 DOI: 10.3389/fpsyg.2019.01688] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2019] [Accepted: 07/04/2019] [Indexed: 11/13/2022] Open
Abstract
Augmented reality (AR) is an emerging technology in which information is superimposed onto the real world directly in front of observers. AR images may act as distractors because they lie inside the observer's field of view and may cause observers to overlook important information in the real world. This kind of overlooking of events or objects is known as "change blindness": a distractor may cause someone to overlook a change between an original image and a modified image. In the present study, we investigated whether change blindness occurs when AR is used and whether the AR presentation method influences change blindness. An AR image was presented binocularly or monocularly as a distractor in a typical flicker paradigm. In the binocular presentation, the AR image was presented to both of the participants' eyes, so the condition did not differ from the typical flicker paradigm. By contrast, in the monocular presentation, the AR image was presented to only one eye. We therefore hypothesized that if participants could observe the real-world image through the eye to which the AR image was not presented, change blindness would be avoided because the moment of change itself could be observed. In addition, the luminance of the AR image was expected to influence how easily the real world could be observed, because the AR image is somewhat translucent. Hence, the AR distractor had three luminance conditions (high, medium, and low), and we compared how many alternations were needed to detect changes among the conditions. Results revealed that more alternations were needed in the binocular presentation and in the high-luminance condition. However, in all luminance conditions of the monocular presentation, the number of alternations needed to detect the change did not differ significantly from that when no AR distractor was presented. These results indicate that monocular presentation can attenuate change blindness, possibly because the observer's visual attention is automatically attracted to the location where the change occurs.
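For readers unfamiliar with the flicker paradigm mentioned above, a minimal sketch of its control flow follows: the original and modified images alternate (with blanks between) until the observer reports the change, and the number of alternations is the dependent measure. The timings and the simulated observer below are illustrative assumptions, not the study's parameters.

```python
# Minimal sketch of a flicker-paradigm trial: alternate original and modified
# images (blanks between) until the observer reports the change. Timings and
# the simulated observer are illustrative assumptions.
import itertools
import random

def run_flicker_trial(detect, max_alternations=60):
    frames = itertools.cycle(["original", "modified"])
    for alternation in range(1, max_alternations + 1):
        frame = next(frames)
        # In a real experiment: draw `frame` (~240 ms), then a blank (~80 ms),
        # with the AR distractor rendered binocularly or monocularly on top.
        if detect(frame, alternation):
            return alternation
    return None  # change never detected within the trial limit

random.seed(1)
# Simulated observer whose detection probability grows with exposure.
n = run_flicker_trial(lambda frame, alt: random.random() < 0.05 * alt)
print("alternations needed to detect the change:", n)
```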
Collapse
Affiliation(s)
- Akihiko Kitamura
- Applied Cognitive Psychology Lab, Graduate School of Human Sciences, Osaka University, Suita, Japan
| | - Yasunori Kinosada
- Faculty of Informatics, Shizuoka Institute of Science and Technology, Fukuroi, Japan
| | - Kazumitsu Shinohara
- Applied Cognitive Psychology Lab, Graduate School of Human Sciences, Osaka University, Suita, Japan
| |
Collapse
|
25
|
Chen T, Wei G, Xu L, Shi W, Xu Y, Zhu Y, Hayashi Y, Oda H, Oda M, Hu Y, Yu J, Jiang Z, Li G, Mori K. A deformable model for navigated laparoscopic gastrectomy based on finite elemental method. MINIM INVASIV THER 2019; 29:210-216. [PMID: 31187660 DOI: 10.1080/13645706.2019.1625926] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Background: Accurate registration for surgical navigation of laparoscopic surgery is highly challenging due to vessel deformation. Here, we describe the design of a deformable model with improved matching accuracy by applying the finite element method (FEM). Material and methods: ANSYS software was used to simulate an FEM model of the vessel after pull-up based on laparoscopic gastrectomy requirements. The central line of the FEM model and the central line of the ground truth were drawn and compared. Based on the material and parameters determined from the animal experiment, a perigastric vessel FEM model of a gastric cancer patient was created, and its accuracy in a laparoscopic gastrectomy surgical scene was evaluated. Results: In the animal experiment, the FEM model created with Ogden foam material exhibited better results. The average distance between the two central lines was 6.5 mm, and the average distance between their closest points was 3.8 mm. In the laparoscopic gastrectomy surgical scene, the FEM model and the true artery deformation demonstrated good coincidence. Conclusion: In this study, a deformable vessel model based on FEM was constructed using preoperative CT images to improve matching accuracy and to supply a reference for further research on deformation matching to facilitate laparoscopic gastrectomy navigation.
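A small sketch of the centre-line comparison metric reported above (average distance between the closest points of two central lines), computed on synthetic data; the geometry below is a placeholder, not the study's vessel data.

```python
# Sketch of the centre-line comparison above: mean closest-point distance
# between two sampled central lines. Coordinates are synthetic, in mm.
import numpy as np

def mean_closest_point_distance(line_a: np.ndarray, line_b: np.ndarray) -> float:
    """Mean, over points of line_a, of the distance to the nearest point of line_b."""
    diffs = line_a[:, None, :] - line_b[None, :, :]   # (Na, Nb, 3) pairwise offsets
    dists = np.linalg.norm(diffs, axis=2)             # pairwise distances
    return float(dists.min(axis=1).mean())

t = np.linspace(0.0, 60.0, 100)                       # 60 mm of vessel
truth = np.column_stack([t, 10.0 * np.sin(t / 20.0), np.zeros_like(t)])
model = truth + np.array([0.0, 3.0, 2.0])             # offset mimicking model error

print(f"mean closest-point distance: {mean_closest_point_distance(model, truth):.1f} mm")
```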
Collapse
Affiliation(s)
- Tao Chen
- Department of General Surgery, Nanfang Hospital, Guangdong Provincial Engineering Technology Research Center of Minimally Invasive Surgery, Southern Medical University, Guangzhou, Guangdong Province, China; Graduate School of Informatics, Nagoya University, Nagoya, Japan
| | - Guodong Wei
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
| | - Lili Xu
- Medical Image Center, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong Province, China
| | - Weili Shi
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
| | - Yikai Xu
- Medical Image Center, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong Province, China
| | - Yongyi Zhu
- Medical Image Center, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong Province, China
| | - Yuichiro Hayashi
- Graduate School of Informatics, Nagoya University, Nagoya, Japan
| | - Hirohisa Oda
- Graduate School of Informatics, Nagoya University, Nagoya, Japan
| | - Masahiro Oda
- Graduate School of Informatics, Nagoya University, Nagoya, Japan
| | - Yanfeng Hu
- Department of General Surgery, Nanfang Hospital, Guangdong Provincial Engineering Technology Research Center of Minimally Invasive Surgery, Southern Medical University, Guangzhou, Guangdong Province, China
| | - Jiang Yu
- Department of General Surgery, Nanfang Hospital, Guangdong Provincial Engineering Technology Research Center of Minimally Invasive Surgery, Southern Medical University, Guangzhou, Guangdong Province, China
| | - Zhengang Jiang
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
| | - Guoxin Li
- Department of General Surgery, Nanfang Hospital, Guangdong Provincial Engineering Technology Research Center of Minimally Invasive Surgery, Southern Medical University, Guangzhou, Guangdong Province, China
| | - Kensaku Mori
- Graduate School of Informatics, Nagoya University, Nagoya, Japan
| |
Collapse
|
26
|
Abstract
BACKGROUND Despite great advances in the development of hardware and software components, surgical navigation systems have only seen limited use in current clinical settings due to their reported complexity, difficulty of integration into clinical workflows, and questionable advantages over traditional imaging modalities. OBJECTIVES Development of augmented reality (AR) visualization for surgical navigation without the need for infrared (IR) tracking markers, and comparison of the navigation system to conventional imaging. MATERIAL AND METHODS A novel navigation system combining a cone beam computed tomography (CBCT) capable C-arm with a red-green-blue depth (RGBD) camera. The device was tested by Kirschner wire (K-wire) placement in phantoms, and the required operating time, number of fluoroscopic images, and overall radiation dose were compared to conventional x-ray imaging. RESULTS We found a significant reduction in the required time, number of fluoroscopic images, and overall radiation dose with 3D AR navigation in comparison to x-ray imaging. CONCLUSION Our AR navigation using RGBD cameras offers a flexible and intuitive visualization of the operating field for navigated osteosynthesis without IR tracking markers, enabling surgeons to complete operations more quickly and with a lower radiation exposure to the patient and surgical staff.
Collapse
|
27
|
Supporting mandibular resection with intraoperative navigation utilizing augmented reality technology – A proof of concept study. J Craniomaxillofac Surg 2019; 47:854-859. [DOI: 10.1016/j.jcms.2019.03.004] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2018] [Revised: 01/30/2019] [Accepted: 03/04/2019] [Indexed: 11/22/2022] Open
|
28
|
Pascale MT, Sanderson P, Liu D, Mohamed I, Brecknell B, Loeb RG. The Impact of Head-Worn Displays on Strategic Alarm Management and Situation Awareness. HUMAN FACTORS 2019; 61:537-563. [PMID: 30608190 DOI: 10.1177/0018720818814969] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
OBJECTIVE To investigate whether head-worn displays (HWDs) help mobile participants make better alarm management decisions and achieve better situation awareness than alarms alone. BACKGROUND Patient alarms occur frequently in hospitals but often do not require clinical intervention. Clinicians may become desensitized to alarms and fail to respond to clinically relevant alarms. HWDs could make patient information continuously accessible, support situation awareness, and help clinicians prioritize alarms. METHOD Experiment 1 (n = 76) tested whether nonclinicians monitoring simulated patients benefited from vital sign information continuously displayed on an HWD while they performed a secondary calculation task. Experiment 2 (n = 13) tested, across three separate experimental sessions, how effectively nursing trainees monitored simulated patients' vital signs under three different display conditions while they assessed a simulated patient. RESULTS In Experiment 1, participants who had access to continuous patient information on an HWD responded to clinically important alarms 25.9% faster and were 6.7 times less likely to miss alarms compared to participants who only heard alarms. In Experiment 2, participants using an HWD answered situation awareness questions 18.9% more accurately overall than when they used alarms only. However, the effect was significant in only two of the three experimental sessions. CONCLUSION HWDs may help users maintain continuous awareness of multiple remote processes without affecting their performance on ongoing tasks. APPLICATION The outcomes may apply to contexts where access to continuous streams of information from remote locations is useful, such as patient monitoring or clinical supervision.
Collapse
|
29
|
Pietruski P, Majak M, Świątek-Najwer E, Żuk M, Popek M, Jaworowski J, Mazurek M. Supporting fibula free flap harvest with augmented reality: A proof-of-concept study. Laryngoscope 2019; 130:1173-1179. [PMID: 31132152 DOI: 10.1002/lary.28090] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2019] [Revised: 04/23/2019] [Accepted: 05/13/2019] [Indexed: 12/30/2022]
Abstract
OBJECTIVE To analyze a novel navigation system utilizing augmented reality (AR) as a supporting method for fibula free flap (FFF) harvest and fabrication. METHODS A total of 126 simulated osteotomies supported with a cutting guide or one of two AR-based intraoperative navigation modules, simple AR (sAR) or navigated AR (nAR), were carried out on 18 identical models of the fibula (42 osteotomies per method). After fusing postoperative computed tomography scans of the operated fibulas with the virtual surgical plan based on preoperative images, the objective outcomes, angular deviations from the planned osteotomy trajectory (°) and deviations of control points marked on the trajectory (mm), were determined. RESULTS All analyzed methods provided similar accuracy of assisted osteotomies. The only significant difference concerned angular deviation in the sagittal plane, which was smaller after the cutting guide-assisted procedures than after the application of sAR and nAR (4.1 ± 2.29 vs. 5.08 ± 3.64 degrees, P = 0.031, and 4.1 ± 2.29 vs. 4.97 ± 2.91 degrees, P = 0.002, respectively). Mean deviation of control points after the cutting guide-assisted procedures was 2.76 ± 1.06 mm, as compared with 2.67 ± 1.09 mm for sAR and 2.95 ± 1.11 mm for nAR. CONCLUSION Our study demonstrated that both novel AR-based methods provided accuracy of assisted harvesting and contouring of the FFF similar to that of the cutting guides. This, together with the acceptability of the concept to clinicians, justifies their further development and evaluation in preclinical settings. LEVEL OF EVIDENCE N/A. Laryngoscope, 130:1173-1179, 2020.
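The two accuracy metrics above can be sketched as follows: angular deviation as the angle between planned and achieved cutting-plane normals, and control-point deviation as the Euclidean distance between corresponding points. All numbers below are synthetic placeholders, not the study's measurements.

```python
# Sketch of the osteotomy accuracy metrics: angular deviation between planned
# and achieved cutting planes, and mean control-point deviation. Synthetic data.
import numpy as np

def angular_deviation_deg(n_planned: np.ndarray, n_achieved: np.ndarray) -> float:
    """Angle (degrees) between two plane normals, ignoring normal orientation."""
    n1 = n_planned / np.linalg.norm(n_planned)
    n2 = n_achieved / np.linalg.norm(n_achieved)
    return float(np.degrees(np.arccos(np.clip(abs(n1 @ n2), -1.0, 1.0))))

planned_normal = np.array([0.0, 0.0, 1.0])
achieved_normal = np.array([0.05, 0.02, 1.0])            # slightly tilted cut
print(f"angular deviation: {angular_deviation_deg(planned_normal, achieved_normal):.1f} deg")

planned_pts = np.array([[0, 0, 0], [10, 0, 0], [20, 0, 0]], float)  # mm
achieved_pts = planned_pts + np.array([1.2, -0.8, 0.5])             # uniform offset
deviation = np.linalg.norm(achieved_pts - planned_pts, axis=1).mean()
print(f"mean control-point deviation: {deviation:.2f} mm")
```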
Collapse
Affiliation(s)
- Piotr Pietruski
- Department of Applied Pharmacy and Bioengineering, Medical University of Warsaw, Warsaw, Poland
| | - Marcin Majak
- Department of Biomedical Engineering, Mechatronics and Theory of Mechanisms, Wroclaw University of Technology, Wroclaw, Poland; Department of Radiology, Medical Centre of Postgraduate Education, Gruca Orthopaedic and Trauma Teaching Hospital, Otwock, Poland
| | - Ewelina Świątek-Najwer
- Department of Biomedical Engineering, Mechatronics and Theory of Mechanisms, Wroclaw University of Technology, Wroclaw, Poland
| | - Magdalena Żuk
- Department of Biomedical Engineering, Mechatronics and Theory of Mechanisms, Wroclaw University of Technology, Wroclaw, Poland
| | - Michał Popek
- Department of Biomedical Engineering, Mechatronics and Theory of Mechanisms, Wroclaw University of Technology, Wroclaw, Poland
| | - Janusz Jaworowski
- Department of Applied Pharmacy and Bioengineering, Medical University of Warsaw, Warsaw, Poland; Timeless Plastic Surgery Clinic, Warsaw, Poland
| | - Maciej Mazurek
- Department of Applied Pharmacy and Bioengineering, Medical University of Warsaw, Warsaw, Poland
| |
Collapse
|
30
|
|
31
|
|
32
|
Augmented visualization with depth perception cues to improve the surgeon's performance in minimally invasive surgery. Med Biol Eng Comput 2018; 57:995-1013. [PMID: 30511205 DOI: 10.1007/s11517-018-1929-6] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2018] [Accepted: 11/03/2018] [Indexed: 01/14/2023]
Abstract
Minimally invasive techniques, such as laparoscopy and radiofrequency ablation of tumors, bring important advantages to surgery: by minimizing incisions on the patient's body, they can reduce the hospitalization period and the risk of postoperative complications. Unfortunately, they come with drawbacks for surgeons, who have only a restricted, indirect view of the operation area through the 2D images provided by a camera inserted in the body. Augmented reality provides an "X-ray vision" of the patient's anatomy by visualizing the internal organs, freeing surgeons from the task of mentally mapping content from CT images onto the operative scene. We present a navigation system that supports surgeons in the preoperative and intraoperative phases, and an augmented reality system that superimposes virtual organs on the patient's body together with depth and distance information. We implemented a combination of visual and audio cues allowing the surgeon to improve intervention precision and avoid the risk of damaging anatomical structures. The test scenarios demonstrated the efficacy and accuracy of the system. Moreover, tests in the operating room suggested some modifications to the tracking system to make it more robust with respect to occlusions. Graphical Abstract: Augmented visualization in minimally invasive surgery.
Collapse
|
33
|
Jayender J, Xavier B, King F, Hosny A, Black D, Pieper S, Tavakkoli A. A Novel Mixed Reality Navigation System for Laparoscopy Surgery. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2018; 11073:72-80. [PMID: 31098598 PMCID: PMC6512867 DOI: 10.1007/978-3-030-00937-3_9] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
OBJECTIVE To design and validate a novel mixed reality head-mounted display for intraoperative surgical navigation. DESIGN A mixed reality navigation for laparoscopic surgery (MRNLS) system using a head-mounted display (HMD) was developed to integrate the displays from a laparoscope, navigation system, and diagnostic imaging to provide context-specific information to the surgeon. Immersive auditory feedback was also provided to the user. Sixteen surgeons were recruited to quantify the differential improvement in performance based on the mode of guidance provided to the user: laparoscopic navigation with CT guidance (LN-CT) versus MRNLS. The users performed three tasks: (1) standard peg transfer, (2) radiolabeled peg identification and transfer, and (3) radiolabeled peg identification and transfer through sensitive wire structures. RESULTS For the more complex task of peg identification and transfer, significant improvements were observed in time to completion, kinematics such as mean velocity, and task load index subscales of mental demand and effort when using the MRNLS (p < 0.05) compared to the current standard of LN-CT. For the final task of peg identification and transfer through sensitive structures, time taken to complete the task and frustration were significantly lower for MRNLS compared to the LN-CT approach. CONCLUSIONS A novel mixed reality navigation for laparoscopic surgery (MRNLS) has been designed and validated. The ergonomics of laparoscopic procedures could be improved while minimizing the need for additional monitors in the operating room.
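A brief sketch of how the kinematic metrics named above (path length, mean velocity) might be computed from a tracked instrument-tip trajectory; the trajectory and sampling rate below are assumptions for illustration, not the study's data.

```python
# Sketch: instrument kinematics from a tracked tip trajectory (synthetic data).
import numpy as np

def path_length_mm(positions: np.ndarray) -> float:
    """Total path length: sum of distances between consecutive samples."""
    return float(np.linalg.norm(np.diff(positions, axis=0), axis=1).sum())

hz = 30.0                                    # assumed tracker sampling rate
t = np.arange(0.0, 10.0, 1.0 / hz)           # 10 s of motion
tip = np.column_stack([50 * np.cos(t), 50 * np.sin(t), 5 * t])  # mm, helical path

length = path_length_mm(tip)
mean_velocity = length / (t[-1] - t[0])      # mm/s over the task duration
print(f"path length: {length:.0f} mm, mean velocity: {mean_velocity:.1f} mm/s")
```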
Collapse
Affiliation(s)
- Jagadeesan Jayender
- Brigham and Women's Hospital, Boston, MA 02115, USA
- Harvard Medical School, Boston, MA 02115, USA
| | | | | | - Ahmed Hosny
- Boston Medical School, Boston, MA 02115, USA
| | | | | | - Ali Tavakkoli
- Brigham and Women's Hospital, Boston, MA 02115, USA
- Harvard Medical School, Boston, MA 02115, USA
| |
Collapse
|
34
|
Hettig J, Engelhardt S, Hansen C, Mistelbauer G. AR in VR: assessing surgical augmented reality visualizations in a steerable virtual reality environment. Int J Comput Assist Radiol Surg 2018; 13:1717-1725. [DOI: 10.1007/s11548-018-1825-4] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2018] [Accepted: 07/05/2018] [Indexed: 12/28/2022]
|
35
|
Bong JH, Song HJ, Oh Y, Park N, Kim H, Park S. Endoscopic navigation system with extended field of view using augmented reality technology. Int J Med Robot 2017; 14. [DOI: 10.1002/rcs.1886] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2017] [Revised: 10/30/2017] [Accepted: 11/21/2017] [Indexed: 11/11/2022]
Affiliation(s)
- Jae Hwan Bong
- Department of Mechanical Engineering, Korea University, Seoul, Korea
| | | | - Yoojin Oh
- Department of Mechanical Engineering, Korea University, Seoul, Korea
| | - Namji Park
- Department of Biomedical Engineering, Columbia University, New York, United States
| | - Hyungmin Kim
- Center for Bionics, Korea Institute of Science and Technology, Seoul, Korea
| | - Shinsuk Park
- Department of Mechanical Engineering, Korea University, Seoul, Korea
| |
Collapse
|
36
|
Black D, Hahn HK, Kikinis R, Wårdell K, Haj-Hosseini N. Auditory display for fluorescence-guided open brain tumor surgery. Int J Comput Assist Radiol Surg 2017; 13:25-35. [PMID: 28929305 DOI: 10.1007/s11548-017-1667-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2017] [Accepted: 09/07/2017] [Indexed: 02/02/2023]
Abstract
PURPOSE Protoporphyrin IX (PpIX) fluorescence allows discrimination of tumor and normal brain tissue during neurosurgery. A handheld fluorescence (HHF) probe can be used for spectroscopic measurement of 5-ALA-induced PpIX to enable objective detection, compared to visual evaluation of fluorescence. However, current technology requires that the surgeon either views the measured values on a screen or employs an assistant to verbally relay the values. An auditory feedback system was developed and evaluated for communicating measured fluorescence intensity values directly to the surgeon. METHODS The auditory display was programmed to map the values measured by the HHF probe to the playback of tones that represented three fluorescence intensity ranges and one error signal. Ten persons with no previous knowledge of the application took part in a laboratory evaluation. After a brief training period, participants performed measurements on a tray of 96 wells of liquid fluorescence phantom and verbally stated the perceived measurement values for each well. The latency and accuracy of the participants' verbal responses were recorded. Long-term memorization of the sound mapping was evaluated in a second set of 10 participants 2-3 and 7-12 days after training. RESULTS The participants identified the played tone accurately for 98% of measurements after training. The median response time to verbally identify the played tones was 2 pulses. No correlation was found between the latency and accuracy of the responses, and no significant correlation between the responses and the participants' musical proficiency was observed. Responses in the memory test were 100% accurate. CONCLUSION The employed auditory display was shown to be intuitive, easy to learn and remember, fast to recognize, and accurate in providing users with measurements of fluorescence intensity or an error signal. The results of this work establish a basis for implementing and further evaluating auditory displays in clinical scenarios involving fluorescence guidance and other areas in which a categorized auditory display could be useful.
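The categorical mapping described above (three intensity ranges plus an error signal, each mapped to a tone) can be sketched as a simple lookup; the thresholds and frequencies below are illustrative assumptions, not the study's calibrated values.

```python
# Sketch of a categorized auditory display: bin a fluorescence intensity into
# three ranges or an error state and map each to a tone. Thresholds and
# frequencies are illustrative placeholders.
def tone_for_measurement(intensity):
    if intensity is None or intensity < 0:
        return ("error", 220.0)        # error signal
    if intensity < 0.2:
        return ("low", 330.0)          # low-fluorescence tone
    if intensity < 0.6:
        return ("medium", 440.0)
    return ("high", 660.0)

for value in [None, 0.05, 0.4, 0.9]:
    label, freq_hz = tone_for_measurement(value)
    print(f"{value!r:>6} -> {label} tone at {freq_hz} Hz")
```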
Collapse
Affiliation(s)
- David Black
- Medical Image Computing, University of Bremen, Bremen, Germany.
- Jacobs University, Bremen, Germany.
- Fraunhofer MEVIS, Bremen, Germany.
| | - Horst K Hahn
- Jacobs University, Bremen, Germany
- Fraunhofer MEVIS, Bremen, Germany
| | - Ron Kikinis
- Medical Image Computing, University of Bremen, Bremen, Germany
- Fraunhofer MEVIS, Bremen, Germany
- Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA
| | - Karin Wårdell
- Department of Biomedical Engineering, Linköping University, Linköping, Sweden
| | - Neda Haj-Hosseini
- Department of Biomedical Engineering, Linköping University, Linköping, Sweden
| |
Collapse
|
37
|
Detmer FJ, Hettig J, Schindele D, Schostak M, Hansen C. Virtual and Augmented Reality Systems for Renal Interventions: A Systematic Review. IEEE Rev Biomed Eng 2017; 10:78-94. [PMID: 28885161 DOI: 10.1109/rbme.2017.2749527] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
PURPOSE Many virtual and augmented reality systems have been proposed to support renal interventions. This paper reviews such systems employed in the treatment of renal cell carcinoma and renal stones. METHODS A systematic literature search was performed. Inclusion criteria were virtual and augmented reality systems for radical or partial nephrectomy and renal stone treatment, excluding systems solely developed or evaluated for training purposes. RESULTS In total, 52 research papers were identified and analyzed. Most of the identified literature (87%) deals with systems for renal cell carcinoma treatment. About 44% of the systems have already been employed in clinical practice, but only 20% in studies with ten or more patients. The main challenges remaining for future research include accounting for organ movement and deformation, human factors issues, and the conduct of large clinical studies. CONCLUSION Augmented and virtual reality systems have the potential to improve safety and outcomes of renal interventions. In the last ten years, many technical advances have led to more sophisticated systems, which are already applied in clinical practice. Further research is required to cope with the current limitations of virtual and augmented reality assistance in clinical environments.
Collapse
|
38
|
Aghajani H, Garbey M, Omurtag A. Measuring Mental Workload with EEG+fNIRS. Front Hum Neurosci 2017; 11:359. [PMID: 28769775 PMCID: PMC5509792 DOI: 10.3389/fnhum.2017.00359] [Citation(s) in RCA: 111] [Impact Index Per Article: 13.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2017] [Accepted: 06/23/2017] [Indexed: 01/21/2023] Open
Abstract
We studied the capability of a hybrid functional neuroimaging technique to quantify human mental workload (MWL). We used electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) as imaging modalities with 17 healthy subjects performing the letter n-back task, a standard experimental paradigm related to working memory (WM). The level of MWL was parametrically changed by varying n from 0 to 3. Nineteen EEG channels covered the whole head, and 19 fNIRS channels were located on the forehead to cover the brain region most dominantly involved in WM. Grand block averaging of the recorded signals revealed specific behaviors of the oxygenated-hemoglobin level during changes in the level of MWL. A machine learning approach was utilized to detect the level of MWL. We extracted different features from the EEG, fNIRS, and EEG+fNIRS signals as biomarkers of MWL and fed them to a linear support vector machine (SVM) as training and test sets. These features were selected based on their sensitivity to changes in the level of MWL according to the literature. We introduced a new category of features within the fNIRS and EEG+fNIRS systems. In addition, the performance level of each feature category was systematically assessed, as was the effect of the number of features and window size on classification performance. The SVM classifier was used to discriminate between different combinations of cognitive states in both binary- and multi-class settings. In addition to the cross-validated performance level of the classifier, other metrics such as sensitivity, specificity, and predictive values were calculated for a comprehensive assessment of the classification system. The hybrid (EEG+fNIRS) system had an accuracy that was significantly higher than that of either EEG or fNIRS alone. Our results suggest that EEG+fNIRS features combined with a classifier are capable of robustly discriminating among various levels of MWL, and that EEG+fNIRS should be preferred to EEG or fNIRS alone in developing passive BCIs and other applications that need to monitor users' MWL.
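A minimal sketch of the hybrid classification idea above: EEG and fNIRS feature vectors are concatenated and fed to a linear SVM, with cross-validated accuracy as the metric. The features here are random placeholders, so accuracy will sit near chance; the paper's actual features and epoch counts differ.

```python
# Sketch: hybrid EEG+fNIRS workload classification with a linear SVM.
# All feature values and labels are random placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_epochs = 120
eeg_features = rng.normal(size=(n_epochs, 19 * 4))    # e.g. band powers x 19 channels
fnirs_features = rng.normal(size=(n_epochs, 19 * 2))  # e.g. HbO/HbR slopes x 19 channels
X = np.hstack([eeg_features, fnirs_features])         # concatenated hybrid vector
y = rng.integers(0, 4, n_epochs)                      # n-back level 0-3 (placeholder)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
print(f"cross-validated accuracy: {acc:.2f}")  # ~0.25 here, since labels are random
```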
Collapse
Affiliation(s)
- Haleh Aghajani
- Department of Biomedical Engineering, University of Houston, Houston, TX, United States
| | - Marc Garbey
- Center for Computational Surgery, Department of Surgery, Research Institute, Houston Methodist, Houston, TX, United States
| | - Ahmet Omurtag
- Department of Biomedical Engineering, University of Houston, Houston, TX, United States
| |
Collapse
|
39
|
A machine learning approach for real-time modelling of tissue deformation in image-guided neurosurgery. Artif Intell Med 2017; 80:39-47. [DOI: 10.1016/j.artmed.2017.07.004] [Citation(s) in RCA: 42] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2016] [Revised: 05/19/2017] [Accepted: 07/06/2017] [Indexed: 12/21/2022]
|
40
|
Augmented Reality in Neurosurgery: A Review of Current Concepts and Emerging Applications. Can J Neurol Sci 2017; 44:235-245. [PMID: 28434425 DOI: 10.1017/cjn.2016.443] [Citation(s) in RCA: 65] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Augmented reality (AR) superimposes computer-generated virtual objects onto the user's view of the real world. Among medical disciplines, neurosurgery has long been at the forefront of image-guided surgery, and it continues to push the frontiers of AR technology in the operating room. In this systematic review, we explore the history of AR in neurosurgery and examine the literature on current neurosurgical applications of AR. Significant challenges to surgical AR exist, including compounded sources of registration error, impaired depth perception, visual and tactile temporal asynchrony, and operator inattentional blindness. Nevertheless, the ability to accurately display multiple three-dimensional datasets congruently over the area where they are most useful, coupled with future advances in imaging, registration, display technology, and robotic actuation, portend a promising role for AR in the neurosurgical operating room.
Collapse
|
41
|
The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017; 37:66-90. [DOI: 10.1016/j.media.2017.01.007] [Citation(s) in RCA: 183] [Impact Index Per Article: 22.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2016] [Revised: 01/16/2017] [Accepted: 01/23/2017] [Indexed: 12/27/2022]
|
42
|
Black D, Hettig J, Luz M, Hansen C, Kikinis R, Hahn H. Auditory feedback to support image-guided medical needle placement. Int J Comput Assist Radiol Surg 2017; 12:1655-1663. [PMID: 28213646 DOI: 10.1007/s11548-017-1537-1] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2016] [Accepted: 02/01/2017] [Indexed: 11/27/2022]
Abstract
PURPOSE During medical needle placement using image-guided navigation systems, the clinician must concentrate on a screen. To reduce the clinician's visual reliance on the screen, this work proposes an auditory feedback method, either stand-alone or in support of visual feedback, for placing the navigated medical instrument, in this case a needle. METHODS An auditory synthesis model using pitch comparison and stereo panning parameter mapping was developed to augment or replace visual feedback for navigated needle placement. In contrast to existing approaches, which augment but still require a visual display, this method allows view-free needle placement. An evaluation with 12 novice participants compared both auditory and combined audiovisual feedback against the existing visual method. RESULTS Using the combined audiovisual display, participants showed similar task completion times and reported similar subjective workload and accuracy while viewing the screen less than with the conventional visual method. Auditory feedback alone led to higher task completion times and subjective workload compared to both combined and visual feedback. CONCLUSION Audiovisual feedback shows promising results and establishes a basis for applying auditory feedback as a supplement to visual information in other navigated interventions, especially those for which viewing the patient is beneficial or necessary.
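A sketch of a pitch-and-panning parameter mapping in the spirit of the method above, assuming (purely for illustration) that lateral offset from the planned path drives stereo pan and remaining depth to target drives pitch; the ranges and saturation limits are invented, not the paper's synthesis model.

```python
# Sketch: map needle-tip error to audio parameters. Lateral offset -> stereo
# pan; remaining depth to target -> pitch. All ranges are invented assumptions.
def audio_params(lateral_offset_mm: float, depth_to_target_mm: float):
    # Pan: -1 (full left) .. +1 (full right), saturating at +/-10 mm offset.
    pan = max(-1.0, min(1.0, lateral_offset_mm / 10.0))
    # Pitch: rises from 220 Hz (30 mm away) to 880 Hz (on target).
    depth = max(0.0, min(30.0, depth_to_target_mm))
    pitch_hz = 880.0 - (880.0 - 220.0) * depth / 30.0
    return pan, pitch_hz

for offset, depth in [(-8.0, 25.0), (0.0, 10.0), (3.0, 0.0)]:
    pan, pitch = audio_params(offset, depth)
    print(f"offset {offset:+.0f} mm, depth {depth:.0f} mm -> pan {pan:+.2f}, pitch {pitch:.0f} Hz")
```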
Collapse
Affiliation(s)
- David Black
- Jacobs University, Bremen, Germany.
- Medical Image Computing, University of Bremen, Bremen, Germany.
- Fraunhofer MEVIS, Bremen, Germany.
| | - Julian Hettig
- Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Magdeburg, Germany
| | - Maria Luz
- Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Magdeburg, Germany
| | - Christian Hansen
- Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Magdeburg, Germany
| | - Ron Kikinis
- Medical Image Computing, University of Bremen, Bremen, Germany
- Fraunhofer MEVIS, Bremen, Germany
- Surgical Planning Laboratory, Brigham and Women's Hospital, Boston, MA, USA
| | - Horst Hahn
- Jacobs University, Bremen, Germany
- Fraunhofer MEVIS, Bremen, Germany
| |
Collapse
|
43
|
Aneurysm Surgery with Preoperative Three-Dimensional Planning in a Virtual Reality Environment: Technique and Outcome Analysis. World Neurosurg 2016; 96:489-499. [DOI: 10.1016/j.wneu.2016.08.124] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2016] [Revised: 08/27/2016] [Accepted: 08/30/2016] [Indexed: 11/22/2022]
|
44
|
Image-guided interventions and computer-integrated therapy: Quo vadis? Med Image Anal 2016; 33:56-63. [PMID: 27373146 DOI: 10.1016/j.media.2016.06.004] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2016] [Revised: 05/26/2016] [Accepted: 06/06/2016] [Indexed: 11/21/2022]
Abstract
Significant efforts have been dedicated to minimizing invasiveness associated with surgical interventions, most of which have been possible thanks to the developments in medical imaging, surgical navigation, visualization and display technologies. Image-guided interventions have promised to dramatically change the way therapies are delivered to many organs. However, in spite of the development of many sophisticated technologies over the past two decades, other than some isolated examples of successful implementations, minimally invasive therapy is far from enjoying the wide acceptance once envisioned. This paper provides a large-scale overview of the state-of-the-art developments, identifies several barriers thought to have hampered the wider adoption of image-guided navigation, and suggests areas of research that may potentially advance the field.
Collapse
|
45
|
Dixon BJ, Chan H, Daly MJ, Qiu J, Vescan A, Witterick IJ, Irish JC. Three-dimensional virtual navigation versus conventional image guidance: A randomized controlled trial. Laryngoscope 2016; 126:1510-5. [PMID: 27075606 DOI: 10.1002/lary.25882] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2015] [Revised: 12/12/2015] [Accepted: 12/29/2015] [Indexed: 11/10/2022]
Abstract
OBJECTIVES/HYPOTHESIS Providing image guidance in a 3-dimensional (3D) format, visually more in keeping with the operative field, could potentially reduce workload and lead to faster and more accurate navigation. We wished to assess a 3D virtual-view surgical navigation prototype in comparison to a traditional 2D system. METHODS Thirty-seven otolaryngology surgeons and trainees completed a randomized crossover navigation exercise on a cadaver model. Each subject identified three sinonasal landmarks with 3D virtual (3DV) image guidance and three landmarks with conventional cross-sectional computed tomography (CT) image guidance. Subjects were randomized with regard to which side and display type was tested initially. Accuracy, task completion time, and task workload were recorded. RESULTS Display type did not influence accuracy (P > 0.2) or efficiency (P > 0.3) for any of the six landmarks investigated. Pooled landmark data revealed a trend of improved accuracy in the 3DV group by 0.44 millimeters (95% confidence interval [0.00-0.88]). High-volume surgeons were significantly faster (P < 0.01) and had reduced workload scores in all domains (P < 0.01), but they were no more accurate (P > 0.28). CONCLUSION Real-time 3D image guidance did not influence accuracy, efficiency, or task workload when compared to conventional triplanar image guidance. The subtle pooled accuracy advantage for the 3DV view is unlikely to be of clinical significance. Experience level was strongly correlated to task completion time and workload but did not influence accuracy. LEVEL OF EVIDENCE N/A. Laryngoscope, 126:1510-1515, 2016.
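As a side note on the pooled-accuracy trend reported above, the sketch below shows how a 95% confidence interval for a mean accuracy difference between two display conditions can be computed; the data and group sizes are synthetic placeholders, not the study's measurements.

```python
# Sketch: 95% CI for the mean accuracy difference between two display types.
# Error values and group sizes below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
err_2d = rng.normal(3.0, 1.2, 100)    # localization error (mm), conventional view
err_3dv = rng.normal(2.6, 1.2, 100)   # localization error (mm), 3D virtual view

diff = err_2d.mean() - err_3dv.mean()
se = np.sqrt(err_2d.var(ddof=1) / err_2d.size + err_3dv.var(ddof=1) / err_3dv.size)
low, high = stats.t.interval(0.95, df=err_2d.size + err_3dv.size - 2, loc=diff, scale=se)
print(f"mean difference: {diff:.2f} mm, 95% CI [{low:.2f}, {high:.2f}]")
```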
Collapse
Affiliation(s)
- Benjamin J Dixon
- Department of Surgery, University of Melbourne, St Vincent's Hospital and Peter MacCallum Cancer Centre, Melbourne, Australia
| | - Harley Chan
- Ontario Cancer Institute, University Health Network, Toronto, Ontario, Canada
| | - Michael J Daly
- Ontario Cancer Institute, University Health Network, Toronto, Ontario, Canada; Institute of Medical Science, Princess Margaret Hospital, University Health Network, Toronto, Ontario, Canada
| | - Jimmy Qiu
- Ontario Cancer Institute, University Health Network, Toronto, Ontario, Canada
| | - Allan Vescan
- Department of Otolaryngology-Head and Neck Surgery, Mount Sinai Hospital, University of Toronto, Toronto, Ontario, Canada
| | - Ian J Witterick
- Departments of Surgical Oncology, University Health Network, Toronto, Ontario, Canada; Otolaryngology-Head and Neck Surgery, Ontario Cancer Institute, University Health Network, Toronto, Ontario, Canada; Department of Otolaryngology-Head and Neck Surgery, Mount Sinai Hospital, University of Toronto, Toronto, Ontario, Canada
| | - Jonathan C Irish
- Departments of Surgical Oncology, University Health Network, Toronto, Ontario, Canada; Otolaryngology-Head and Neck Surgery, Ontario Cancer Institute, University Health Network, Toronto, Ontario, Canada
| |
Collapse
|
46
|
Bridging the gap between formal and experience-based knowledge for context-aware laparoscopy. Int J Comput Assist Radiol Surg 2016; 11:881-8. [DOI: 10.1007/s11548-016-1379-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2016] [Accepted: 03/07/2016] [Indexed: 10/22/2022]
|
47
|
A wearable navigation display can improve attentiveness to the surgical field. Int J Comput Assist Radiol Surg 2016; 11:1193-200. [DOI: 10.1007/s11548-016-1372-9] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2016] [Accepted: 02/26/2016] [Indexed: 11/25/2022]
|
48
|
Malthouse T, Kasivisvanathan V, Raison N, Lam W, Challacombe B. The future of partial nephrectomy. Int J Surg 2016; 36:560-567. [PMID: 26975430 DOI: 10.1016/j.ijsu.2016.03.024] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2016] [Accepted: 03/10/2016] [Indexed: 12/29/2022]
Abstract
Innovation in recent times has accelerated due to factors such as the globalization of communication, but there are also more barriers and safeguards in place than ever before as we strive to streamline the process. From the first planned partial nephrectomy, completed in 1887, it took over a century for the procedure to become recommended practice for small renal tumours. At present, the identified areas for improvement and innovation are 1) preserving renal parenchyma, 2) optimising pre-operative eGFR, and 3) reducing global warm ischaemia time; all three are statistically significant predictors of post-operative renal function. Urologists have a proud history of embracing innovation and have experimented with different clamping techniques for the renal vasculature, image guidance in robotics, renal hypothermia, lasers, and new robots under development. The da Vinci system may soon no longer hold a monopoly on this market, as novel technology emerges with added features, such as haptic feedback, at reduced cost. As ever, our predictions of the future may well fall wide of the mark, but in order to progress one must open the mind to possibilities that already exist, as the evolution of existing technology often appears to be a revolution in hindsight.
Collapse
Affiliation(s)
- Theo Malthouse
- Guy's and St Thomas' NHS Foundation Trust, Great Maze Pond, London SE1 9RT, United Kingdom.
| | - Veeru Kasivisvanathan
- University College London Hospital, 235 Euston Rd, Fitzrovia, London NW1 2BU, United Kingdom
| | - Nicholas Raison
- King's College Hospital NHS Foundation Trust, Denmark Hill, London SE5 9RS, United Kingdom
| | - Wayne Lam
- Guy's and St Thomas' NHS Foundation Trust, Great Maze Pond, London SE1 9RT, United Kingdom
| | - Ben Challacombe
- Guy's and St Thomas' NHS Foundation Trust, Great Maze Pond, London SE1 9RT, United Kingdom
| |
Collapse
|
49
|
|
50
|
Fallavollita P, Wang L, Weidert S, Navab N. Augmented Reality in Orthopaedic Interventions and Education. ACTA ACUST UNITED AC 2015. [DOI: 10.1007/978-3-319-23482-3_13] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/18/2023]
|