1. Zhang H, Killeen BD, Ku Y, Seenivasan L, Zhao Y, Liu M, Yang Y, Gu S, Martin‐Gomez A, Taylor RH, Osgood G, Unberath M. StraightTrack: Towards mixed reality navigation system for percutaneous K-wire insertion. Healthc Technol Lett 2024;11:355-364. PMID: 39720744; PMCID: PMC11665788; DOI: 10.1049/htl2.12103.
Abstract
In percutaneous pelvic trauma surgery, accurate placement of Kirschner wires (K-wires) is crucial to ensure effective fracture fixation and avoid complications due to breaching the cortical bone along an unsuitable trajectory. Surgical navigation via mixed reality (MR) can help achieve precise wire placement in a low-profile form factor. Current approaches in this domain are as yet unsuitable for real-world deployment because they fall short of guaranteeing accurate visual feedback due to uncontrolled bending of the wire. To ensure accurate feedback, StraightTrack, an MR navigation system designed for percutaneous wire placement in complex anatomy, is introduced. StraightTrack features a marker body equipped with a rigid access cannula that mitigates wire bending due to interactions with soft tissue and a covered bony surface. Integrated with an optical see-through head-mounted display capable of tracking the cannula body, StraightTrack offers real-time 3D visualization and guidance without external trackers, which are prone to losing line-of-sight. In phantom experiments with two experienced orthopedic surgeons, StraightTrack improves wire placement accuracy, achieving the ideal trajectory within 5.26 ± 2.29 mm and 2.88 ± 1.49°, compared to over 12.08 mm and 4.07° for comparable methods. As MR navigation systems continue to mature, StraightTrack realizes their potential for internal fracture fixation and other percutaneous orthopedic procedures.
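The reported accuracy couples a translational offset in millimetres with an angular deviation in degrees between the achieved and ideal wire trajectories. The paper's evaluation code is not reproduced here; the sketch below shows one conventional way such a metric can be computed, and all names are illustrative assumptions.

```python
import numpy as np

def trajectory_error(entry_ideal, dir_ideal, entry_achieved, dir_achieved):
    """Translational (mm) and angular (deg) deviation between two K-wire trajectories.

    entry_*: 3D entry points on the bone surface (mm); dir_*: direction vectors.
    """
    dir_ideal = dir_ideal / np.linalg.norm(dir_ideal)
    dir_achieved = dir_achieved / np.linalg.norm(dir_achieved)
    # Distance between entry points, a common surrogate for entry-point accuracy.
    translational = np.linalg.norm(entry_achieved - entry_ideal)
    # Angle between the two wire axes, insensitive to direction sign.
    cos_angle = np.clip(abs(dir_ideal @ dir_achieved), 0.0, 1.0)
    angular = np.degrees(np.arccos(cos_angle))
    return translational, angular

# Example: a wire entering 3 mm lateral to plan and tilted by roughly 2.9 degrees.
t, a = trajectory_error(np.array([0., 0., 0.]), np.array([0., 0., 1.]),
                        np.array([3., 0., 0.]), np.array([0.05, 0., 1.]))
```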
Affiliation(s)
- Han Zhang
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Yu‐Chun Ku
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Yuxuan Zhao
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Mingxu Liu
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Yue Yang
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Suxi Gu
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Greg Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Medicine, Baltimore, Maryland, USA
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
2. Killeen BD, Zhang H, Wang LJ, Liu Z, Kleinbeck C, Rosen M, Taylor RH, Osgood G, Unberath M. Stand in surgeon's shoes: virtual reality cross-training to enhance teamwork in surgery. Int J Comput Assist Radiol Surg 2024;19:1213-1222. PMID: 38642297; PMCID: PMC11178441; DOI: 10.1007/s11548-024-03138-7.
Abstract
PURPOSE Teamwork in surgery depends on a shared mental model of success, i.e., a common understanding of objectives in the operating room. A shared model leads to increased engagement among team members and is associated with fewer complications and overall better outcomes for patients. However, clinical training typically focuses on role-specific skills, leaving individuals to acquire a shared model indirectly through on-the-job experience. METHODS We investigate whether virtual reality (VR) cross-training, i.e., exposure to other roles, can enhance a shared mental model for non-surgeons more directly. Our study focuses on X-ray guided pelvic trauma surgery, a procedure where successful communication depends on the shared model between the surgeon and a C-arm technologist. We present a VR environment supporting both roles and evaluate a cross-training curriculum in which non-surgeons swap roles with the surgeon. RESULTS Exposure to the surgical task resulted in higher engagement with the C-arm technologist role in VR, as measured by the mental demand and effort expended by participants (p < 0.001). It also had a significant effect on non-surgeons' mental model of the overall task; novice participants' estimation of the mental demand and effort required for the surgeon's task increases after training, while their perception of overall performance decreases (p < 0.05), indicating a gap in understanding based solely on observation. This phenomenon was also present for a professional C-arm technologist. CONCLUSION Until now, VR applications for clinical training have focused on virtualizing existing curricula. We demonstrate how novel approaches which are not possible outside of a virtual environment, such as role swapping, may enhance the shared mental model of surgical teams by contextualizing each individual's role within the overall task in a time- and cost-efficient manner. As workflows grow increasingly sophisticated, we see VR curricula as being able to directly foster a shared model for success, ultimately benefiting patient outcomes through more effective teamwork in surgery.
Affiliation(s)
- Han Zhang
- Johns Hopkins University, Baltimore, MD, 21218, USA
- Liam J Wang
- Johns Hopkins University, Baltimore, MD, 21218, USA
- Zixuan Liu
- Johns Hopkins University, Baltimore, MD, 21218, USA
- Constantin Kleinbeck
- Johns Hopkins University, Baltimore, MD, 21218, USA
- Friedrich-Alexander-Universität, Erlangen, Germany
- Greg Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Medicine, Baltimore, MD, 21218, USA
3. Völk C, Bernhard L, Völk D, Weiten M, Wilhelm D, Biberthaler P. [Mobile C-arm: radiation exposure and workflow killer? Potential of an innovative assistance system for intraoperative positioning]. Unfallchirurgie (Heidelberg, Germany) 2023;126:928-934. PMID: 37878125; DOI: 10.1007/s00113-023-01380-3.
Abstract
Despite its versatile applicability, the intraoperative use of a mobile C‑arm is often problematic and potentially associated with increased radiation exposure for both the patient and the personnel. In particular, correct positioning for adequate imaging can become a problem, as the nonsterile circulating nurse has to coordinate the various maneuvers together with the surgeon without having a good view of the surgical field. The sluggishness of the equipment and the intraoperative setting (sterile borders, additional hardware, etc.) pose further challenges. A light detection and ranging (LIDAR)-based assistance system has shown promise in an initial series of experimental trials for providing accurate and intuitive repositioning support. For this purpose, the sensors are attached to the C‑arm base unit and enable navigation of the device in the operating room to a stored target position using a simultaneous localization and mapping (SLAM) algorithm. This system has the potential to improve the workflow and reduce radiation exposure. The advantages over other experimental approaches are the lack of external hardware and the ease of use without isolating the operator from the rest of the operating room environment; however, the suitability for daily use in the presence of additional interfering factors should be verified in further studies.
Affiliation(s)
- Christopher Völk
- Klinik und Poliklinik für Unfallchirurgie, Klinikum rechts der Isar der TU München, Ismaningerstr. 22, 81675, Munich, Germany
- Lukas Bernhard
- Forschungsgruppe MITI, Klinikum rechts der Isar der TU München, Munich, Germany
- Dominik Völk
- Klinik und Poliklinik für Unfallchirurgie, Klinikum rechts der Isar der TU München, Ismaningerstr. 22, 81675, Munich, Germany
- Dirk Wilhelm
- Forschungsgruppe MITI, Klinikum rechts der Isar der TU München, Munich, Germany
- Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar der TU München, Munich, Germany
- Peter Biberthaler
- Klinik und Poliklinik für Unfallchirurgie, Klinikum rechts der Isar der TU München, Ismaningerstr. 22, 81675, Munich, Germany
4. Killeen BD, Gao C, Oguine KJ, Darcy S, Armand M, Taylor RH, Osgood G, Unberath M. An autonomous X-ray image acquisition and interpretation system for assisting percutaneous pelvic fracture fixation. Int J Comput Assist Radiol Surg 2023;18:1201-1208. PMID: 37213057; PMCID: PMC11002911; DOI: 10.1007/s11548-023-02941-y.
Abstract
PURPOSE Percutaneous fracture fixation involves multiple X-ray acquisitions to determine adequate tool trajectories in bony anatomy. In order to reduce time spent adjusting the X-ray imager's gantry, avoid excess acquisitions, and anticipate inadequate trajectories before penetrating bone, we propose an autonomous system for intra-operative feedback that combines robotic X-ray imaging and machine learning for automated image acquisition and interpretation, respectively. METHODS Our approach reconstructs an appropriate trajectory in a two-image sequence, where the optimal second viewpoint is determined based on analysis of the first image. A deep neural network is responsible for detecting the tool and corridor, here a K-wire and the superior pubic ramus, respectively, in these radiographs. The reconstructed corridor and K-wire pose are compared to determine likelihood of cortical breach, and both are visualized for the clinician in a mixed reality environment that is spatially registered to the patient and delivered by an optical see-through head-mounted display. RESULTS We assess the upper bounds on system performance through in silico evaluation across 11 CTs with fractures present, in which the corridor and K-wire are adequately reconstructed. In post hoc analysis of radiographs across 3 cadaveric specimens, our system determines the appropriate trajectory to within 2.8 ± 1.3 mm and 2.7 ± 1.8°. CONCLUSION An expert user study with an anthropomorphic phantom demonstrates how our autonomous, integrated system requires fewer images and less movement to guide and confirm adequate placement compared to current clinical practice. Code and data are available.
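The abstract describes comparing the reconstructed corridor and K-wire pose to judge the likelihood of cortical breach. One simple way to formalize that check, assuming the corridor is modelled as a cylinder of known radius and length, is sketched below; this is an illustrative geometric test, not the paper's actual model.

```python
import numpy as np

def breaches_corridor(wire_p, wire_d, corr_p, corr_d, corr_radius, corr_length, n=100):
    """True if a straight K-wire (point wire_p, direction wire_d) exits a cylindrical
    corridor (axis point corr_p, direction corr_d, radius and length in mm)."""
    corr_d = corr_d / np.linalg.norm(corr_d)
    wire_d = wire_d / np.linalg.norm(wire_d)
    for t in np.linspace(0.0, corr_length, n):
        pt = wire_p + t * wire_d                      # point along the wire
        v = pt - corr_p
        axial = v @ corr_d                            # depth along the corridor axis
        radial = np.linalg.norm(v - axial * corr_d)   # distance from the corridor axis
        if 0.0 <= axial <= corr_length and radial > corr_radius:
            return True                               # wire leaves the cortical boundary
    return False
```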
Affiliation(s)
- Cong Gao
- Johns Hopkins University, Baltimore, MD 21210, USA
- Sean Darcy
- Johns Hopkins University, Baltimore, MD 21210, USA
- Mehran Armand
- Johns Hopkins University, Baltimore, MD 21210, USA
- Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, USA
- Greg Osgood
- Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, USA
5. Gu W, Knopf J, Cast J, Higgins LD, Knopf D, Unberath M. Nail it! Vision-based drift correction for accurate mixed reality surgical guidance. Int J Comput Assist Radiol Surg 2023. PMID: 37231201; DOI: 10.1007/s11548-023-02950-x.
Abstract
PURPOSE Mixed reality-guided surgery through head-mounted displays (HMDs) is gaining interest among surgeons. However, precise tracking of HMDs relative to the surgical environment is crucial for successful outcomes. Without fiducial markers, spatial tracking of the HMD suffers from millimeter- to centimeter-scale drift, resulting in misaligned visualization of registered overlays. Methods and workflows capable of automatically correcting for drift after patient registration are essential to assuring accurate execution of surgical plans. METHODS We present a mixed reality surgical navigation workflow that continuously corrects for drift after patient registration using only image-based methods. We demonstrate its feasibility and capabilities using the Microsoft HoloLens on glenoid pin placement in total shoulder arthroplasty. A phantom study was conducted involving five users with each user placing pins on six glenoids of different deformity, followed by a cadaver study by an attending surgeon. RESULTS In both studies, all users were satisfied with the registration overlay before drilling the pin. Postoperative CT scans showed 1.5 mm error in entry point deviation and 2.4° error in pin orientation on average in the phantom study and 2.5 mm and 1.5° in the cadaver study. A trained user takes around 90 s to complete the workflow. Our method also outperformed HoloLens native tracking in drift correction. CONCLUSION Our findings suggest that image-based drift correction can provide mixed reality environments precisely aligned with patient anatomy, enabling pin placement with consistently high accuracy. These techniques constitute a next step toward purely image-based mixed reality surgical guidance, without requiring patient markers or external tracking hardware.
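Conceptually, drift is a small rigid transform that accumulates on top of the initial patient registration and is re-estimated from image measurements. A minimal bookkeeping sketch, with names that are assumptions rather than the paper's API:

```python
import numpy as np

def overlay_pose(T_hmd_from_world, T_world_from_anatomy, T_correction=np.eye(4)):
    """Pose of the registered anatomy in the HMD frame, with an image-based
    drift correction composed on top of the initial registration."""
    return T_correction @ T_hmd_from_world @ T_world_from_anatomy

# After re-detecting a registered landmark in the HMD camera stream, the residual
# between its predicted and observed pose yields an updated T_correction.
```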
Affiliation(s)
- Wenhao Gu
- Johns Hopkins University, Baltimore, MD, USA
- John Cast
- Johns Hopkins University, Baltimore, MD, USA
- David Knopf
- Arthrex Inc., 1 Arthrex Way, Naples, FL, USA
6. Remote Interactive Surgery Platform (RISP): proof of concept for an augmented-reality-based platform for surgical telementoring. J Imaging 2023;9(3):56. PMID: 36976107; PMCID: PMC10054087; DOI: 10.3390/jimaging9030056.
Abstract
The “Remote Interactive Surgery Platform” (RISP) is an augmented reality (AR)-based platform for surgical telementoring. It builds upon recent advances of mixed reality head-mounted displays (MR-HMD) and associated immersive visualization technologies to assist the surgeon during an operation. It enables an interactive, real-time collaboration with a remote consultant by sharing the operating surgeon’s field of view through the Microsoft (MS) HoloLens2 (HL2). Development of the RISP started during the Medical Augmented Reality Summer School 2021 and is currently still ongoing. It currently includes features such as three-dimensional annotations, bidirectional voice communication and interactive windows to display radiographs within the sterile field. This manuscript provides an overview of the RISP and preliminary results regarding its annotation accuracy and user experience measured with ten participants.
7. Ma L, Huang T, Wang J, Liao H. Visualization, registration and tracking techniques for augmented reality guided surgery: a review. Phys Med Biol 2023;68. PMID: 36580681; DOI: 10.1088/1361-6560/acaf23.
Abstract
Augmented reality (AR) surgical navigation has developed rapidly in recent years. This paper reviews and analyzes the visualization, registration, and tracking techniques used in AR surgical navigation systems, as well as the application of these AR systems in different surgical fields. The types of AR visualization are divided into two categories: in situ visualization and non-in situ visualization. AR visualization can render a wide variety of content. The registration methods include manual registration, point-based registration, surface registration, marker-based registration, and calibration-based registration. The tracking methods consist of self-localization, tracking with integrated cameras, external tracking, and hybrid tracking. Moreover, we describe the applications of AR in surgical fields. However, most AR applications were evaluated through model experiments and animal experiments, and there are relatively few clinical experiments, indicating that current AR navigation methods are still in the early stage of development. Finally, we summarize the contributions and challenges of AR in the surgical fields, as well as the future development trend. Despite the fact that AR-guided surgery has not yet reached clinical maturity, we believe that if the current development trend continues, it will soon reveal its clinical utility.
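Of the registration families listed, paired-point (point-based) rigid registration is the most widely used and has a closed-form SVD solution (the Arun/Umeyama method). A self-contained sketch of that standard solution:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping Nx3 src points onto dst points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])      # avoid reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```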
Affiliation(s)
- Longfei Ma
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Tianqi Huang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Jie Wang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
8. Fan X, Zhu Q, Tu P, Joskowicz L, Chen X. A review of advances in image-guided orthopedic surgery. Phys Med Biol 2023;68. PMID: 36595258; DOI: 10.1088/1361-6560/acaae9.
Abstract
Orthopedic surgery remains technically demanding due to the complex anatomical structures and cumbersome surgical procedures. The introduction of image-guided orthopedic surgery (IGOS) has significantly decreased the surgical risk and improved the operation results. This review focuses on the application of recent advances in artificial intelligence (AI), deep learning (DL), augmented reality (AR) and robotics in image-guided spine surgery, joint arthroplasty, fracture reduction and bone tumor resection. For the pre-operative stage, key technologies of AI and DL based medical image segmentation, 3D visualization and surgical planning procedures are systematically reviewed. For the intra-operative stage, the development of novel image registration, surgical tool calibration and real-time navigation are reviewed. Furthermore, the combination of the surgical navigation system with AR and robotic technology is also discussed. Finally, the current issues and prospects of the IGOS system are discussed, with the goal of establishing a reference and providing guidance for surgeons, engineers, and researchers involved in the research and development of this area.
Affiliation(s)
- Xingqi Fan
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Qiyang Zhu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Puxun Tu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Leo Joskowicz
- School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
9. Killeen BD, Winter J, Gu W, Martin-Gomez A, Taylor RH, Osgood G, Unberath M. Mixed reality interfaces for achieving desired views with robotic X-ray systems. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022;11:1130-1135. PMID: 37555199; PMCID: PMC10406465; DOI: 10.1080/21681163.2022.2154272.
Abstract
Robotic X-ray C-arm imaging systems can precisely achieve any position and orientation relative to the patient. Informing the system, however, what pose exactly corresponds to a desired view is challenging. Currently these systems are operated by the surgeon using joysticks, but this interaction paradigm is not necessarily effective because users may be unable to efficiently actuate more than a single axis of the system simultaneously. Moreover, novel robotic imaging systems, such as the Brainlab Loop-X, allow for independent source and detector movements, adding even more complexity. To address this challenge, we consider complementary interfaces for the surgeon to command robotic X-ray systems effectively. Specifically, we consider three interaction paradigms: (1) the use of a pointer to specify the principal ray of the desired view relative to the anatomy, (2) the same pointer, but combined with a mixed reality environment to synchronously render digitally reconstructed radiographs from the tool's pose, and (3) the same mixed reality environment but with a virtual X-ray source instead of the pointer. Initial human-in-the-loop evaluation with an attending trauma surgeon indicates that mixed reality interfaces for robotic X-ray system control are promising and may contribute to substantially reducing the number of X-ray images acquired solely during "fluoro hunting" for the desired view or standard plane.
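In the pointer-based paradigm, the tool defines the principal ray of the desired view, and the robotic imaging system must then place its source and detector along that ray. The sketch below shows that conversion geometrically for a simple isocentric C-arm with known source-to-isocenter and source-to-detector distances; the function name and default distances are illustrative assumptions.

```python
import numpy as np

def carm_pose_from_ray(ray_origin, ray_dir, isocenter, d_src_iso=600.0, d_src_det=1000.0):
    """Place source and detector so the view's principal ray matches the pointer ray.

    ray_origin, ray_dir: pointer tip and direction in world coordinates (mm).
    isocenter: the point the C-arm orbits about; kept on the ray's line here.
    """
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    # Project the isocenter onto the pointer ray so the principal ray passes through it.
    center = ray_origin + ((isocenter - ray_origin) @ ray_dir) * ray_dir
    source = center - d_src_iso * ray_dir        # X-ray source behind the anatomy
    detector = source + d_src_det * ray_dir      # detector on the far side of the patient
    return source, detector
```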
Affiliation(s)
- Benjamin D Killeen
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Jonas Winter
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Wenhao Gu
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Alejandro Martin-Gomez
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Russell H Taylor
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Greg Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Mathias Unberath
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
10. Kausch L, Thomas S, Kunze H, Norajitra T, Klein A, Ayala L, El Barbari J, Mandelka E, Privalov M, Vetter S, Mahnken A, Maier-Hein L, Maier-Hein K. C-arm positioning for standard projections during spinal implant placement. Med Image Anal 2022;81:102557. DOI: 10.1016/j.media.2022.102557.
11. Birlo M, Edwards PJE, Clarkson M, Stoyanov D. Utility of optical see-through head mounted displays in augmented reality-assisted surgery: a systematic review. Med Image Anal 2022;77:102361. PMID: 35168103; PMCID: PMC10466024; DOI: 10.1016/j.media.2022.102361.
Abstract
This article presents a systematic review of optical see-through head mounted display (OST-HMD) usage in augmented reality (AR) surgery applications from 2013 to 2020. Articles were categorised by: OST-HMD device, surgical speciality, surgical application context, visualisation content, experimental design and evaluation, accuracy and human factors of human-computer interaction. 91 articles fulfilled all inclusion criteria. Some clear trends emerge. The Microsoft HoloLens increasingly dominates the field, with orthopaedic surgery being the most popular application (28.6%). By far the most common surgical context is surgical guidance (n=58) and segmented preoperative models dominate visualisation (n=40). Experiments mainly involve phantoms (n=43) or system setup (n=21), with patient case studies ranking third (n=19), reflecting the comparative infancy of the field. Experiments cover issues from registration to perception with very different accuracy results. Human factors emerge as significant to OST-HMD utility. Some factors are addressed by the systems proposed, such as attention shift away from the surgical site and mental mapping of 2D images to 3D patient anatomy. Other persistent human factors remain or are caused by OST-HMD solutions, including ease of use, comfort and spatial perception issues. The significant upward trend in published articles is clear, but such devices are not yet established in the operating room and clinical studies showing benefit are lacking. A focused effort addressing technical registration and perceptual factors in the lab coupled with design that incorporates human factors considerations to solve clear clinical problems should ensure that the significant current research efforts will succeed.
Affiliation(s)
- Manuel Birlo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
- P J Eddie Edwards
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
- Matthew Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
12. Bernhard L, Völk C, Völk D, Rothmeyer F, Xu Z, Ostler D, Biberthaler P, Wilhelm D. RAY-POS: a LIDAR-based assistance system for intraoperative repositioning of mobile C-arms without external aids. Int J Comput Assist Radiol Surg 2022;17:719-729. PMID: 35195830; PMCID: PMC8948129; DOI: 10.1007/s11548-022-02571-w.
Abstract
PURPOSE In current clinical practice, intraoperative repositioning of mobile C-arms is challenging due to a lack of visual cues and efficient guiding tools. This can be detrimental to the surgical workflow and lead to additional radiation burdens for both patient and personnel. To overcome this problem, we present our novel approach Lidar-based X-ray Positioning for Mobile C-arms (RAY-POS) for assisting circulating nurses during intraoperative C-arm repositioning without requiring external aids. METHODS RAY-POS consists of a localization module and a graphical user interface for guiding the user back to a previously recorded C-Arm position. We conducted a systematic comparison of simultaneous localization and mapping (SLAM) algorithms using different attachment positions of light detection and ranging (LIDAR) sensors to benchmark localization performance within the operating room (OR). For two promising combinations, we conducted further end-to-end repositioning tests within a realistic OR setup. RESULTS SLAM algorithm gmapping with a LIDAR sensor mounted 40 cm above the C-arm's horizontal unit performed best regarding localization accuracy and long-term stability. The distribution of the repositioning error yielded an effective standard deviation of 7.61 mm. CONCLUSION We conclude that a proof-of-concept for LIDAR-based C-arm repositioning without external aids has been achieved. In future work, we mainly aim at extending the capabilities of our system and evaluating the usability together with clinicians.
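The system guides the circulating nurse back to a previously stored C-arm pose in the SLAM map frame, so the residual after repositioning reduces to a planar translation plus a heading difference. A minimal sketch of that error computation, assuming poses are parameterized as (x, y, yaw):

```python
import numpy as np

def repositioning_error(saved, current):
    """saved/current: (x, y, yaw) of the C-arm base in the SLAM map frame (m, m, rad)."""
    dx, dy = current[0] - saved[0], current[1] - saved[1]
    dyaw = (current[2] - saved[2] + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    return np.hypot(dx, dy), dyaw                                   # translation, heading error
```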
Affiliation(s)
- Lukas Bernhard
- Klinikum Rechts Der Isar der Technischen Universität München, Research Group MITI, Munich, Germany
- Christopher Völk
- Department of Trauma Surgery, Klinikum Rechts Der Isar der Technischen Universität München, Munich, Germany
- Dominik Völk
- Department of Trauma Surgery, Klinikum Rechts Der Isar der Technischen Universität München, Munich, Germany
- Florian Rothmeyer
- Technische Universität München, Chair of Materials Handling, Material Flow, Logistics, Munich, Germany
- Zhencan Xu
- Klinikum Rechts Der Isar der Technischen Universität München, Research Group MITI, Munich, Germany
- Daniel Ostler
- Klinikum Rechts Der Isar der Technischen Universität München, Research Group MITI, Munich, Germany
- Peter Biberthaler
- Department of Trauma Surgery, Klinikum Rechts Der Isar der Technischen Universität München, Munich, Germany
- Dirk Wilhelm
- Klinikum Rechts Der Isar der Technischen Universität München, Research Group MITI, Munich, Germany
- Department of Surgery, Klinikum Rechts Der Isar der Technischen Universität München, Munich, Germany
13. Ha J, Parekh P, Gamble D, Masters J, Jun P, Hester T, Daniels T, Halai M. Opportunities and challenges of using augmented reality and heads-up display in orthopaedic surgery: a narrative review. J Clin Orthop Trauma 2021;18:209-215. PMID: 34026489; PMCID: PMC8131920; DOI: 10.1016/j.jcot.2021.04.031.
Abstract
BACKGROUND & AIM Utilization of augmented reality (AR) and heads-up displays (HUD) to aid orthopaedic surgery has the potential to benefit surgeons and patients alike through improved accuracy, safety, and educational benefits. With the COVID-19 pandemic, the opportunity for adoption of novel technology is more relevant. The aims are to assess the technology available, to understand the current evidence regarding the benefit and to consider challenges to implementation in clinical practice. METHODS & RESULTS PRISMA guidelines were used to filter the literature. Of 1004 articles returned, the following exclusion criteria were applied: (1) reviews/commentaries, (2) studies unrelated to orthopaedic surgery, and (3) use of AR wearables other than visual aids, leaving 42 papers for review. This review illustrates benefits including enhanced accuracy, reduced time of surgery, reduced radiation exposure and educational benefits. CONCLUSION Whilst there are obstacles to overcome, there are already reports of technology being used. As with all novel technologies, a greater understanding of the learning curve is crucial, in addition to shielding our patients from this learning curve. Improvements in usability and implementing surgeons' specific needs should increase uptake.
Affiliation(s)
- Joon Ha
- Queen Elizabeth Hospital, London, UK
- James Masters
- Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences (NDORMS), UK
- Peter Jun
- University of Alberta, Edmonton, Canada
- Mansur Halai
- St Michael's Hospital, University of Toronto, Canada
14. The effect of artificial X-rays on C-arm positioning performance in a simulated orthopaedic surgical setting. Int J Comput Assist Radiol Surg 2020;16:11-22. PMID: 33146849; DOI: 10.1007/s11548-020-02280-2.
Abstract
PURPOSE We designed an Artificial X-ray Imaging System (AXIS) that generates simulated fluoroscopic X-ray images on the fly and assessed its utility in improving C-arm positioning performance by C-arm users with little or no C-arm experience. METHODS The AXIS system was comprised of an optical tracking system to monitor C-arm movement, a manikin, a reference CT volume registered to the manikin, and a Digitally Reconstructed Radiograph algorithm to generate live simulated fluoroscopic images. A user study was conducted with 30 participants who had little or no C-arm experience. Each participant carried out four tasks using a real C-arm: an introduction session, an AXIS-guided set of pelvic imaging tasks, a non-AXIS guided set of pelvic imaging tasks, and a questionnaire. For each imaging task, the participant replicated a set of three target X-ray images by taking real radiographs of a manikin with a C-arm. The number of X-rays required, task time, and C-arm positioning accuracy were recorded. RESULTS We found a significant 53% decrease in the number of X-rays used and a moderate 10-26% improvement in lateral C-arm axis positioning accuracy without requiring more time to complete the tasks when the participants were guided by artificial X-rays. The questionnaires showed that the participants felt significantly more confident in their C-arm positioning ability when they were guided by AXIS. They rated the usefulness of AXIS as very good to excellent, and the realism and accuracy of AXIS as good to very good. CONCLUSION Novice users working with a C-arm machine supplemented with the ability to generate simulated X-ray images could successfully accomplish positioning tasks in a simulated surgical setting using markedly fewer X-ray images than when unassisted. In future work, we plan to determine whether such a system can produce similar results in the live operating room without lengthening surgical procedures.
15. Early feasibility studies of augmented reality navigation for lateral skull base surgery. Otol Neurotol 2020;41:883-888. DOI: 10.1097/mao.0000000000002724.
16. Automatic intraoperative optical coherence tomography positioning. Int J Comput Assist Radiol Surg 2020;15:781-789. PMID: 32242299; PMCID: PMC7261282; DOI: 10.1007/s11548-020-02135-w.
Abstract
Purpose Intraoperative optical coherence tomography (iOCT) was recently introduced as a new modality for ophthalmic surgeries. It provides real-time cross-sectional information at a very high resolution. However, properly positioning the scan location during surgery is cumbersome and time-consuming, as a surgeon needs both hands for surgery. The goal of the present study is to present a method to automatically position an iOCT scan on an anatomy of interest in the context of anterior segment surgeries. Methods First, a voice recognition algorithm using a context-free grammar is used to obtain the desired pose from the surgeon. Then, the limbus circle is detected in the microscope image and the iOCT scan is placed accordingly in the X-Y plane. Next, an iOCT sweep in the Z direction is conducted and the scan is placed to centre the topmost structure. Finally, the position is fine-tuned using semantic segmentation and a rule-based system. Results The logic to position the scan location on various anatomies was evaluated on ex vivo porcine eyes (10 eyes for corneal apex and 7 eyes for cornea, sclera and iris). The mean Euclidean distance (± standard deviation) was 76.7 (± 59.2) pixels and 0.298 (± 0.229) mm. The mean execution time (± standard deviation) for the four anatomies was 15 (± 1.2) s. The scans have a size of 1024 by 1024 pixels. The method was implemented on a Carl Zeiss OPMI LUMERA 700 with RESCAN 700. Conclusion The present study introduces a method to fully automatically position an iOCT scanner. Providing the possibility of changing the OCT scan location via voice commands removes the burden of manual device manipulation from surgeons. This in turn allows them to keep their focus on the surgical task at hand and therefore increase the acceptance of iOCT in the operating room.
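The abstract does not state how the limbus circle is detected in the microscope image; a common off-the-shelf choice for this kind of task is a Hough circle transform, sketched below with placeholder parameters that are not the authors' values.

```python
import cv2

def detect_limbus(frame_bgr):
    """Return (cx, cy, r) of the most salient circle in a microscope frame, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                    # suppress specular highlights
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=200,
                               param1=100, param2=50, minRadius=80, maxRadius=300)
    if circles is None:
        return None
    cx, cy, r = circles[0, 0]                         # strongest detection
    return float(cx), float(cy), float(r)
```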
17. Park BJ, Hunt SJ, Martin C, Nadolski GJ, Wood BJ, Gade TP. Augmented and mixed reality: technologies for enhancing the future of IR. J Vasc Interv Radiol 2020;31:1074-1082. PMID: 32061520; DOI: 10.1016/j.jvir.2019.09.020.
Abstract
Augmented and mixed reality are emerging interactive and display technologies. These technologies are able to merge virtual objects, in either 2 or 3 dimensions, with the real world. Image guidance is the cornerstone of interventional radiology. With augmented or mixed reality, medical imaging can be more readily accessible or displayed in actual 3-dimensional space during procedures to enhance guidance, at times when this information is most needed. In this review, the current state of these technologies is addressed followed by a fundamental overview of their inner workings and challenges with 3-dimensional visualization. Finally, current and potential future applications in interventional radiology are highlighted.
Affiliation(s)
- Brian J Park
- Department of Interventional Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104
- Stephen J Hunt
- Department of Interventional Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104
- Charles Martin
- Department of Interventional Radiology, Cleveland Clinic, Cleveland, Ohio
- Gregory J Nadolski
- Department of Interventional Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104
- Bradford J Wood
- Interventional Radiology, National Institutes of Health, Bethesda, Maryland
- Terence P Gade
- Department of Interventional Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104
18. Laverdière C, Corban J, Khoury J, Ge SM, Schupbach J, Harvey EJ, Reindl R, Martineau PA. Augmented reality in orthopaedics. Bone Joint J 2019;101-B:1479-1488. DOI: 10.1302/0301-620x.101b12.bjj-2019-0315.r1.
Abstract
Aims Computer-based applications are increasingly being used by orthopaedic surgeons in their clinical practice. With the integration of technology in surgery, augmented reality (AR) may become an important tool for surgeons in the future. By superimposing a digital image on a user’s view of the physical world, this technology shows great promise in orthopaedics. The aim of this review is to investigate the current and potential uses of AR in orthopaedics. Materials and Methods A systematic review of the PubMed, MEDLINE, and Embase databases up to January 2019 using the keywords ‘orthopaedic’ OR ‘orthopedic AND augmented reality’ was performed by two independent reviewers. Results A total of 41 publications were included after screening. Applications were divided by subspecialty: spine (n = 15), trauma (n = 16), arthroplasty (n = 3), oncology (n = 3), and sports (n = 4). Out of these, 12 were clinical in nature. AR-based technologies have a wide variety of applications, including direct visualization of radiological images by overlaying them on the patient and intraoperative guidance using preoperative plans projected onto real anatomy, enabling hands-free real-time access to operating room resources, and promoting telemedicine and education. Conclusion There is an increasing interest in AR among orthopaedic surgeons. Although studies show similar or better outcomes with AR compared with traditional techniques, many challenges need to be addressed before this technology is ready for widespread use. Cite this article: Bone Joint J 2019;101-B:1479–1488
Affiliation(s)
- Carl Laverdière
- Department of Orthopedic Surgery, McGill University Health Centre, Montreal, Canada
- Jason Corban
- Department of Orthopedic Surgery, McGill University Health Centre, Montreal, Canada
- Jason Khoury
- Department of Orthopedic Surgery, McGill University Health Centre, Montreal, Canada
- Susan Mengxiao Ge
- Department of Orthopedic Surgery, McGill University Health Centre, Montreal, Canada
- Justin Schupbach
- Department of Orthopedic Surgery, McGill University Health Centre, Montreal, Canada
- Edward J. Harvey
- Department of Orthopedic Surgery, McGill University Health Centre, Montreal, Canada
- Rudy Reindl
- Department of Orthopedic Surgery, McGill University Health Centre, Montreal, Canada
- Paul A. Martineau
- Department of Orthopedic Surgery, McGill University Health Centre, Montreal, Canada
19. Unberath M, Zaech JN, Gao C, Bier B, Goldmann F, Lee SC, Fotouhi J, Taylor R, Armand M, Navab N. Enabling machine learning in X-ray-based procedures via realistic simulation of image formation. Int J Comput Assist Radiol Surg 2019;14:1517-1528. PMID: 31187399; PMCID: PMC7297499; DOI: 10.1007/s11548-019-02011-2.
Abstract
PURPOSE Machine learning-based approaches now outperform competing methods in most disciplines relevant to diagnostic radiology. Image-guided procedures, however, have not yet benefited substantially from the advent of deep learning, in particular because images for procedural guidance are not archived and thus unavailable for learning, and even if they were available, annotations would be a severe challenge due to the vast amounts of data. In silico simulation of X-ray images from 3D CT is an interesting alternative to using true clinical radiographs since labeling is comparably easy and potentially readily available. METHODS We extend our framework for fast and realistic simulation of fluoroscopy from high-resolution CT, called DeepDRR, with tool modeling capabilities. The framework is publicly available, open source, and tightly integrated with the software platforms native to deep learning, i.e., Python, PyTorch, and PyCuda. DeepDRR relies on machine learning for material decomposition and scatter estimation in 3D and 2D, respectively, but uses analytic forward projection and noise injection to ensure acceptable computation times. On two X-ray image analysis tasks, namely (1) anatomical landmark detection and (2) segmentation and localization of robot end-effectors, we demonstrate that convolutional neural networks (ConvNets) trained on DeepDRRs generalize well to real data without re-training or domain adaptation. To this end, we use the exact same training protocol to train ConvNets on naïve and DeepDRRs and compare their performance on data of cadaveric specimens acquired using a clinical C-arm X-ray system. RESULTS Our findings are consistent across both considered tasks. All ConvNets performed similarly well when evaluated on the respective synthetic testing set. However, when applied to real radiographs of cadaveric anatomy, ConvNets trained on DeepDRRs significantly outperformed ConvNets trained on naïve DRRs ([Formula: see text]). CONCLUSION Our findings for both tasks are positive and promising. Combined with complementary approaches, such as image style transfer, the proposed framework for fast and realistic simulation of fluoroscopy from CT contributes to promoting the implementation of machine learning in X-ray-guided procedures. This paradigm shift has the potential to revolutionize intra-operative image analysis to simplify surgical workflows.
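DeepDRR combines learned material decomposition and scatter estimation with analytic forward projection. The core of any DRR generator is that projection step: casting a ray from the X-ray source through each detector pixel and integrating attenuation through the CT volume. A deliberately simplified, single-material sketch of that idea (not the DeepDRR implementation) is shown below; array layouts and units are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def drr(volume_mu, spacing, source, pixel_centers, n_samples=256):
    """Line-integral DRR: volume_mu holds attenuation (1/mm) on a voxel grid,
    spacing is the voxel size (mm), source the 3D source position (mm),
    pixel_centers an (H, W, 3) array of detector pixel positions (mm)."""
    h, w, _ = pixel_centers.shape
    image = np.zeros((h, w), dtype=np.float32)
    for i in range(h):
        for j in range(w):
            # Sample points uniformly between the source and this detector pixel.
            pts = np.linspace(source, pixel_centers[i, j], n_samples)        # (n, 3) mm
            step = np.linalg.norm(pixel_centers[i, j] - source) / (n_samples - 1)
            vox = (pts / spacing).T                                          # voxel coordinates
            mu = map_coordinates(volume_mu, vox, order=1, mode="constant")   # trilinear lookup
            image[i, j] = np.exp(-np.sum(mu) * step)                         # Beer-Lambert law
    return image
```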
Affiliation(s)
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Jan-Nico Zaech
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Cong Gao
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Bastian Bier
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Florian Goldmann
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Sing Chun Lee
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Javad Fotouhi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Russell Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Mehran Armand
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA
- Nassir Navab
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
20. Fotouhi J, Unberath M, Song T, Hajek J, Lee SC, Bier B, Maier A, Osgood G, Armand M, Navab N. Co-localized augmented human and X-ray observers in collaborative surgical ecosystem. Int J Comput Assist Radiol Surg 2019;14:1553-1563. PMID: 31350704; DOI: 10.1007/s11548-019-02035-8.
Abstract
PURPOSE Image-guided percutaneous interventions are safer alternatives to conventional orthopedic and trauma surgeries. To advance surgical tools in complex bony structures during these procedures with confidence, a large number of images is acquired. While image-guidance is the de facto standard to guarantee acceptable outcome, when these images are presented on monitors far from the surgical site the information content cannot be associated easily with the 3D patient anatomy. METHODS In this article, we propose a collaborative augmented reality (AR) surgical ecosystem to jointly co-localize the C-arm X-ray and surgeon viewer. The technical contributions of this work include (1) joint calibration of a visual tracker on a C-arm scanner and its X-ray source via a hand-eye calibration strategy, and (2) inside-out co-localization of human and X-ray observers in shared tracking and augmentation environments using vision-based simultaneous localization and mapping. RESULTS We present a thorough evaluation of the hand-eye calibration procedure. Results suggest convergence when using 50 pose pairs or more. The mean translation and rotation errors at convergence are 5.7 mm and [Formula: see text], respectively. Further, user-in-the-loop studies were conducted to estimate the end-to-end target augmentation error. The mean distance between landmarks in real and virtual environment was 10.8 mm. CONCLUSIONS The proposed AR solution provides a shared augmented experience between the human and X-ray viewer. The collaborative surgical AR system has the potential to simplify hand-eye coordination for surgeons or intuitively inform C-arm technologists for prospective X-ray view-point planning.
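Jointly calibrating the visual tracker and the X-ray source is a classic hand-eye (AX = XB) problem. The paper describes its own strategy; purely for illustration, OpenCV ships a standard solver that consumes paired tracker and camera poses, wrapped below in a hypothetical helper.

```python
import cv2
import numpy as np

def hand_eye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Solve AX = XB for the camera-to-gripper (here: X-ray source to tracker)
    transform from paired pose observations, using OpenCV's Tsai solver.
    Each argument is a list of 3x3 rotations or 3x1 translations; the abstract
    suggests 50 or more pose pairs for convergence."""
    R, t = cv2.calibrateHandEye(R_gripper2base, t_gripper2base,
                                R_target2cam, t_target2cam,
                                method=cv2.CALIB_HAND_EYE_TSAI)
    T = np.eye(4)                      # assemble a homogeneous 4x4 transform
    T[:3, :3], T[:3, 3] = R, t.ravel()
    return T
```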
Affiliation(s)
- Javad Fotouhi
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, USA
- Mathias Unberath
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, USA
- Tianyu Song
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
- Jonas Hajek
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Sing Chun Lee
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, USA
- Bastian Bier
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Greg Osgood
- Department of Orthopedic Surgery, Johns Hopkins Hospital, Baltimore, USA
- Mehran Armand
- Applied Physics Laboratory, Johns Hopkins University, Baltimore, USA
- Department of Orthopedic Surgery, Johns Hopkins Hospital, Baltimore, USA
- Nassir Navab
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, USA
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
21. Chytas D, Malahias MA, Nikolaou VS. Augmented reality in orthopedics: current state and future directions. Front Surg 2019;6:38. PMID: 31316995; PMCID: PMC6610425; DOI: 10.3389/fsurg.2019.00038.
Abstract
Augmented reality (AR) comprises special hardware and software used to present computer-processed imaging data to the surgeon in real time, so that real-life objects are combined with computer-generated images. AR technology has recently gained increasing interest in surgical practice. Preclinical research has provided substantial evidence that AR might be a useful tool for intra-operative guidance and decision-making. AR has been applied to a wide spectrum of orthopedic procedures, such as tumor resection, fracture fixation, arthroscopy, and component alignment in total joint arthroplasty. The present study aimed to summarize the current state of the application of AR in orthopedics, at the preclinical and clinical levels, providing future directions and perspectives concerning potential further benefits from this technology.
Affiliation(s)
- Dimitrios Chytas
- 2nd Orthopaedic Department, School of Medicine, National and Kapodistrian University of Athens, Athens, Greece
- Vasileios S. Nikolaou
- 2nd Orthopaedic Department, School of Medicine, National and Kapodistrian University of Athens, Athens, Greece
22. Fotouhi J, Unberath M, Song T, Gu W, Johnson A, Osgood G, Armand M, Navab N. Interactive Flying Frustums (IFFs): spatially aware surgical data visualization. Int J Comput Assist Radiol Surg 2019;14:913-922. PMID: 30863981; DOI: 10.1007/s11548-019-01943-z.
Abstract
PURPOSE As the trend toward minimally invasive and percutaneous interventions continues, the importance of appropriate surgical data visualization becomes more evident. Ineffective interventional data display techniques yield poor ergonomics that hinder hand-eye coordination and promote frustration, which can compromise on-task performance and may even contribute to adverse outcomes. A very common example of ineffective visualization is monitors attached to the base of mobile C-arm X-ray systems. METHODS We present a spatially and imaging geometry-aware paradigm for visualization of fluoroscopic images using Interactive Flying Frustums (IFFs) in a mixed reality environment. We exploit the fact that the C-arm imaging geometry can be modeled as a pinhole camera giving rise to an 11-degree-of-freedom view frustum on which the X-ray image can be translated while remaining valid. Visualizing IFFs to the surgeon in an augmented reality environment intuitively unites the virtual 2D X-ray image plane and the real 3D patient anatomy. To achieve this visualization, the surgeon and C-arm are tracked relative to the same coordinate frame using image-based localization and mapping, with the augmented reality environment being delivered to the surgeon via a state-of-the-art optical see-through head-mounted display. RESULTS The root-mean-squared error of C-arm source tracking after hand-eye calibration was determined as [Formula: see text] and [Formula: see text] in rotation and translation, respectively. Finally, we demonstrated the application of spatially aware data visualization for internal fixation of pelvic fractures and percutaneous vertebroplasty. CONCLUSION Our spatially aware approach to transmission image visualization effectively unites patient anatomy with X-ray images by enabling spatial image manipulation that abides by image formation. Our proof-of-principle findings indicate potential applications for surgical tasks that mostly rely on orientational information such as placing the acetabular component in total hip arthroplasty, making us confident that the proposed augmented reality concept can pave the way for improving surgical performance and visuo-motor coordination in fluoroscopy-guided surgery.
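The key observation is that the C-arm behaves as a pinhole camera, so the acquired X-ray remains geometrically valid anywhere on its viewing frustum: the image plane can slide along the principal ray as long as it is rescaled accordingly. A small sketch of that relationship with generic pinhole intrinsics (the symbols are not the paper's notation):

```python
import numpy as np

def image_plane_corners(K, width, height, depth):
    """3D corners, in the source/camera frame, of the X-ray image placed at a
    given depth along the frustum of a pinhole model with intrinsics K."""
    corners_px = np.array([[0, 0, 1], [width, 0, 1],
                           [width, height, 1], [0, height, 1]], dtype=float)
    rays = (np.linalg.inv(K) @ corners_px.T).T      # back-project the pixel corners
    return depth * rays / rays[:, 2:3]              # scale so each corner sits at z = depth

# Sliding the image toward the anatomy just means re-evaluating at a smaller depth;
# the pixel-to-ray mapping, and hence the image content, stays geometrically valid.
```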
Affiliation(s)
- Javad Fotouhi
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Mathias Unberath
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Tianyu Song
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Wenhao Gu
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Alex Johnson
- Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Greg Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Mehran Armand
- Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Nassir Navab
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany