1
Liebmann F, von Atzigen M, Stütz D, Wolf J, Zingg L, Suter D, Cavalcanti NA, Leoty L, Esfandiari H, Snedeker JG, Oswald MR, Pollefeys M, Farshad M, Fürnstahl P. Automatic registration with continuous pose updates for marker-less surgical navigation in spine surgery. Med Image Anal 2024; 91:103027. [PMID: 37992494] [DOI: 10.1016/j.media.2023.103027]
Abstract
Established surgical navigation systems for pedicle screw placement have been proven to be accurate, but still reveal limitations in registration or surgical guidance. Registration of preoperative data to the intraoperative anatomy remains a time-consuming, error-prone task that includes exposure to harmful radiation. Surgical guidance through conventional displays has well-known drawbacks, as information cannot be presented in-situ and from the surgeon's perspective. Consequently, radiation-free and more automatic registration methods with subsequent surgeon-centric navigation feedback are desirable. In this work, we present a marker-less approach that automatically solves the registration problem for lumbar spinal fusion surgery in a radiation-free manner. A deep neural network was trained to segment the lumbar spine and simultaneously predict its orientation, yielding an initial pose for preoperative models, which is then refined for each vertebra individually and updated in real-time with GPU acceleration while handling surgeon occlusions. Intuitive surgical guidance is provided through integration into an augmented reality-based navigation system. The registration method was verified on a public dataset with a median of 100% successful registrations, a median target registration error of 2.7 mm, a median screw trajectory error of 1.6°, and a median screw entry point error of 2.3 mm. Additionally, the whole pipeline was validated in an ex-vivo surgery, yielding 100% screw accuracy and a median target registration error of 1.0 mm. Our results meet clinical demands and emphasize the potential of RGB-D data for fully automatic registration approaches in combination with augmented reality guidance.
Affiliation(s)
- Florentin Liebmann
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland; Laboratory for Orthopaedic Biomechanics, ETH Zurich, Zurich, Switzerland
- Marco von Atzigen
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland; Laboratory for Orthopaedic Biomechanics, ETH Zurich, Zurich, Switzerland
- Dominik Stütz
- Computer Vision and Geometry Group, ETH Zurich, Zurich, Switzerland
- Julian Wolf
- Product Development Group, ETH Zurich, Zurich, Switzerland
- Lukas Zingg
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Daniel Suter
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Nicola A Cavalcanti
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland; Department of Orthopedics, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Laura Leoty
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Hooman Esfandiari
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Jess G Snedeker
- Laboratory for Orthopaedic Biomechanics, ETH Zurich, Zurich, Switzerland
- Martin R Oswald
- Computer Vision and Geometry Group, ETH Zurich, Zurich, Switzerland; Computer Vision Lab, University of Amsterdam, Amsterdam, Netherlands
- Marc Pollefeys
- Computer Vision and Geometry Group, ETH Zurich, Zurich, Switzerland; Microsoft Mixed Reality and AI Zurich Lab, Zurich, Switzerland
- Mazda Farshad
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Philipp Fürnstahl
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
2
Condino S, Cutolo F, Carbone M, Cercenelli L, Badiali G, Montemurro N, Ferrari V. Registration Sanity Check for AR-guided Surgical Interventions: Experience From Head and Face Surgery. IEEE J Transl Eng Health Med 2023; 12:258-267. [PMID: 38410181] [PMCID: PMC10896424] [DOI: 10.1109/jtehm.2023.3332088]
Abstract
Achieving and maintaining proper image registration accuracy is an open challenge of image-guided surgery. This work explores and assesses the efficacy of a registration sanity check method for augmented reality-guided navigation (AR-RSC), based on the visual inspection of virtual 3D models of landmarks. We analyze the AR-RSC sensitivity and specificity by recruiting 36 subjects to assess the registration accuracy of a set of 114 AR images generated from camera images acquired during an AR-guided orthognathic intervention. Translational or rotational errors of known magnitude, up to ±1.5 mm/±15.5°, were artificially added to the image set in order to simulate different registration errors. This study analyzes the performance of AR-RSC when varying (1) the virtual models selected for misalignment evaluation (e.g., the model of brackets, incisor teeth, and gingival margins in our experiment), (2) the type (translation/rotation) of registration error, and (3) the level of user experience in using AR technologies. Results show that: 1) the sensitivity and specificity of the AR-RSC depend on the virtual models (globally, a median true positive rate of up to 79.2% was reached with brackets, and a median true negative rate of up to 64.3% with incisor teeth), 2) there are error components that are more difficult to identify visually, and 3) the level of user experience does not affect the method's performance. In conclusion, the proposed AR-RSC, tested also in the operating room, could represent an efficient method to monitor and optimize registration accuracy during the intervention, but special attention should be paid to the selection of the AR data chosen for the visual inspection of registration accuracy.
Affiliation(s)
- Sara Condino
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Fabrizio Cutolo
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Marina Carbone
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Laura Cercenelli
- EDIMES Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Giovanni Badiali
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Nicola Montemurro
- Department of Neurosurgery, Azienda Ospedaliera Universitaria Pisana (AOUP), 56127 Pisa, Italy
- Vincenzo Ferrari
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
3
Daher M, Ghanimeh J, Otayek J, Ghoul A, Bizdikian AJ, EL Abiad R. Augmented reality and shoulder replacement: a state-of-the-art review article. JSES Rev Rep Tech 2023; 3:274-278. [PMID: 37588507] [PMCID: PMC10426657] [DOI: 10.1016/j.xrrt.2023.01.008]
Abstract
Since the introduction of total shoulder arthroplasty, failure rates, which may be due to component malpositioning, have pushed the field to improve this surgery by creating new perioperative techniques and tools. Augmented reality, a tool newly adopted in orthopedic surgery, can help bypass this problem and reduce the failure rates faced in shoulder replacement surgeries. Although this technology has revolutionized orthopedic surgery and helped improve the accuracy of shoulder prosthesis component positioning, it still has limitations, such as inaccurate superimposition, that should be addressed before it becomes standard usage.
Affiliation(s)
- Mohammad Daher
- Hotel Dieu de France, Saint Joseph University, Beirut, Lebanon
- Joe Ghanimeh
- Lebanese American University Medical Center Rizk Hospital, Beirut, Lebanon
- Joeffroy Otayek
- Lebanese American University Medical Center Rizk Hospital, Beirut, Lebanon
- Ali Ghoul
- Hotel Dieu de France, Saint Joseph University, Beirut, Lebanon
- Rami EL Abiad
- Hotel Dieu de France, Saint Joseph University, Beirut, Lebanon
4
Usevitch DE, Bronheim RS, Reyes MC, Babilonia C, Margalit A, Jain A, Armand M. Review of Enhanced Handheld Surgical Drills. Crit Rev Biomed Eng 2023; 51:29-50. [PMID: 37824333] [PMCID: PMC10874117] [DOI: 10.1615/critrevbiomedeng.2023049106]
Abstract
The handheld drill has been used as a conventional surgical tool for centuries. Alongside the recent successes of surgical robots, the development of new and enhanced medical drills has improved surgeon ability without requiring the high cost and time-consuming setup that plague medical robotic systems. This work provides an overview of enhanced handheld surgical drill research, focusing on systems that include some form of image guidance and do not require additional hardware to physically support or guide drilling. Drills are reviewed by main contribution, divided into audio-, visual-, and hardware-enhanced systems. A vision for future work to enhance handheld drilling systems is also discussed.
Affiliation(s)
- David E. Usevitch
- Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, MD, United States
- Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Rachel S. Bronheim
- Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Miguel C. Reyes
- Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Carlos Babilonia
- Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Adam Margalit
- Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Amit Jain
- Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Mehran Armand
- Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, MD, United States
- Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
5
Navab N, Martin-Gomez A, Seibold M, Sommersperger M, Song T, Winkler A, Yu K, Eck U. Medical Augmented Reality: Definition, Principle Components, Domain Modeling, and Design-Development-Validation Process. J Imaging 2022; 9:4. [PMID: 36662102] [PMCID: PMC9866223] [DOI: 10.3390/jimaging9010004]
Abstract
Three decades after the first set of work on Medical Augmented Reality (MAR) was presented to the international community, and ten years after the deployment of the first MAR solutions into operating rooms, its exact definition, basic components, systematic design, and validation still lack a detailed discussion. This paper defines the basic components of any Augmented Reality (AR) solution and extends them to exemplary Medical Augmented Reality Systems (MARS). We use some of the original MARS applications developed at the Chair for Computer Aided Medical Procedures, deployed over the last decades into medical schools for teaching anatomy and into operating rooms for telemedicine and surgical guidance, to identify the corresponding basic components. In this regard, the paper does not discuss all past or existing solutions; it aims only at defining the principle components, discussing the particular domain modeling for MAR and its design-development-validation process, and providing exemplary cases through past in-house developments of such solutions.
Affiliation(s)
- Nassir Navab
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Alejandro Martin-Gomez
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Matthias Seibold
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, CH-8008 Zurich, Switzerland
- Michael Sommersperger
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Tianyu Song
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Alexander Winkler
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Department of General, Visceral, and Transplant Surgery, Ludwig-Maximilians-University Hospital, DE-80336 Munich, Germany
- Kevin Yu
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- medPhoton GmbH, AT-5020 Salzburg, Austria
- Ulrich Eck
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
6
Gupta A, Ambade R. From Diagnosis to Therapy: The Role of Virtual and Augmented Reality in Orthopaedic Trauma Surgery. Cureus 2022; 14:e29099. [PMID: 36249662] [PMCID: PMC9557249] [DOI: 10.7759/cureus.29099]
Abstract
By reducing procedure-related problems, advancements in computer-assisted surgery (CAS) and surgical training aim to boost operative precision and enhance patient safety. Orthopaedic training and practice have started to change as a result of the incorporation of reality technologies like virtual reality (VR), augmented reality (AR), and mixed reality (MR) into CAS. Today's trainees can engage in realistic and highly involved operational simulations without supervision. With the coronavirus disease 2019 (COVID-19) pandemic, there is a greater need for breakthrough technology adoption. VR is an interactive technology that enables personalised care and could support successful patient-centered rehabilitation. It is a valid and trustworthy evaluation method for determining joint range of motion, function, and balance in physical rehabilitation. It may make it possible to customise care, encourage patients, boost compliance, and track their advancement. AR supplementation in orthopaedic surgery has shown promising results in pre-clinical settings, with improvements in surgical accuracy and reproducibility, decreased operating times, and less radiation exposure. As little patient observation is needed, this may lessen the workload clinicians must bear. The ability to use it for home-based therapy is often available commercially as well. The objectives of this review are to evaluate the technology available, comprehend the available evidence regarding the benefit, and take into account implementation problems in clinical practice. The use of this technology, its practical and moral ramifications, and how it will affect orthopaedic doctors and their patients are also covered. This review offers a current and thorough analysis of the reality technologies and their uses in orthopaedic surgery.
7
Sheth N, Vagdargi P, Sisniega A, Uneri A, Osgood G, Siewerdsen JH. Preclinical evaluation of a prototype freehand drill video guidance system for orthopedic surgery. J Med Imaging (Bellingham) 2022; 9:045004. [PMID: 36046335] [PMCID: PMC9411797] [DOI: 10.1117/1.jmi.9.4.045004]
Abstract
Purpose: Internal fixation of pelvic fractures is a challenging task requiring the placement of instrumentation within complex three-dimensional bone corridors, typically guided by fluoroscopy. We report a system for two- and three-dimensional guidance using a drill-mounted video camera and fiducial markers with evaluation in first preclinical studies. Approach: The system uses a camera affixed to a surgical drill and multimodality (optical and radio-opaque) markers for real-time trajectory visualization in fluoroscopy and/or CT. Improvements to a previously reported prototype include hardware components (mount, camera, and fiducials) and software (including a system for detecting marker perturbation) to address practical requirements necessary for translation to clinical studies. Phantom and cadaver experiments were performed to quantify the accuracy of video-fluoroscopy and video-CT registration, the ability to detect marker perturbation, and the conformance in placing guidewires along realistic pelvic trajectories. The performance was evaluated in terms of geometric accuracy and conformance within bone corridors. Results: The studies demonstrated successful guidewire delivery in a cadaver, with a median entry point error of 1.00 mm (1.56 mm IQR) and median angular error of 1.94 deg (1.23 deg IQR). Such accuracy was sufficient to guide K-wire placement through five of the six trajectories investigated with a strong level of conformance within bone corridors. The sixth case demonstrated a cortical breach due to extrema in the registration error. The system was able to detect marker perturbations and alert the user to potential registration issues. Feasible workflows were identified for orthopedic-trauma scenarios involving emergent cases (with no preoperative imaging) or cases with preoperative CT. Conclusions: A prototype system for guidewire placement was developed providing guidance that is potentially compatible with orthopedic-trauma workflow. First preclinical (cadaver) studies demonstrated accurate guidance of K-wire placement in pelvic bone corridors and the ability to automatically detect perturbations that degrade registration accuracy. The preclinical prototype demonstrated performance and utility supporting translation to clinical studies.
Collapse
Affiliation(s)
- Niral Sheth
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Prasad Vagdargi
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
- Alejandro Sisniega
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Gregory Osgood
- Johns Hopkins Medicine, Department of Orthopedic Surgery, Baltimore, Maryland, United States
- Jeffrey H. Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
8
Augmented Reality in Orthopedic Surgery and Its Application in Total Joint Arthroplasty: A Systematic Review. Appl Sci (Basel) 2022. [DOI: 10.3390/app12105278]
Abstract
The development of augmented reality (AR) and its application in total joint arthroplasty (TJA) aims at improving accuracy and precision in implant component positioning, hopefully leading to improved outcomes and survivorship. However, this field is far from being thoroughly explored. We therefore performed a systematic review of the literature in order to examine the application, the results, and the different AR systems available in TJA. A systematic review of the literature according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines was performed. A comprehensive search of PubMed, MEDLINE, EMBASE, and the Cochrane Database of Systematic Reviews was conducted for English articles on the application of augmented reality in total joint arthroplasty, using various combinations of keywords from the inception of the database to 31 March 2022. Accuracy was intended as the mean error from the targeted positioning angle and compared as mean values and standard deviations. In all, 14 articles met the inclusion criteria. Among them, four studies reported on the application of AR in total knee arthroplasty, six studies on total hip arthroplasty, three studies on reverse shoulder arthroplasty, and one study on total elbow arthroplasty. Nine of the included studies were preclinical (sawbones or cadaveric), while five reported results of AR's clinical application. The main common feature was high accuracy and precision when implant positioning was compared with preoperatively targeted angles, with errors ≤2 mm and/or ≤2°. Despite the promising results in terms of increased accuracy and precision, this technology is far from being widely adopted in daily clinical practice. However, the recent exponential growth in machine learning techniques and technologies may eventually resolve the ongoing limitations, including depth perception and high system complexity, favorably encouraging the widespread usage of AR systems.
9
Tu P, Qin C, Guo Y, Li D, Lungu AJ, Wang H, Chen X. Ultrasound image guided and mixed reality-based surgical system with real-time soft tissue deformation computing for robotic cervical pedicle screw placement. IEEE Trans Biomed Eng 2022; 69:2593-2603. [PMID: 35157575] [DOI: 10.1109/tbme.2022.3150952]
Abstract
Cervical pedicle screw (CPS) placement surgery remains technically demanding due to the complicated anatomy with neurovascular structures. State-of-the-art surgical navigation or robotic systems still suffer from problems of hand-eye coordination and soft tissue deformation. In this study, we aim at tracking intraoperative soft tissue deformation, constructing a virtual-physical fusion surgical scene, and integrating them into the robotic system for CPS placement surgery. Firstly, we propose a real-time deformation computation method based on a prior shape model and intraoperative partial information acquired from ultrasound images. According to the generated posterior shape, the structure representation of the deformed target tissue is updated continuously. Secondly, a hand-tremble compensation method is proposed to improve the accuracy and robustness of the virtual-physical calibration procedure, and a mixed reality based surgical scene is further constructed for CPS placement surgery. Thirdly, we integrate the soft tissue deformation method and the virtual-physical fusion method into our previously proposed surgical robotic system, and the surgical workflow for CPS placement surgery is introduced. We conducted phantom and animal experiments to evaluate the feasibility and accuracy of the proposed system. Our system yielded a mean surface distance error of 1.52 ± 0.43 mm for soft tissue deformation computing, and an average distance deviation of 1.04 ± 0.27 mm for CPS placement. The results demonstrate that our system has substantial potential for clinical application and promotes the efficiency and safety of CPS placement surgery.
10
Augmented Reality (AR) in Orthopedics: Current Applications and Future Directions. Curr Rev Musculoskelet Med 2021; 14:397-405. [PMID: 34751894] [DOI: 10.1007/s12178-021-09728-1]
Abstract
PURPOSE OF REVIEW Imaging technologies (X-ray, CT, MRI, and ultrasound) have revolutionized orthopedic surgery, allowing for the more efficient diagnosis, monitoring, and treatment of musculoskeletal ailments. The current review investigates recent literature surrounding the impact of augmented reality (AR) imaging technologies on orthopedic surgery. In particular, it investigates the impact that AR technologies may have on provider cognitive burden, operative times, occupational radiation exposure, and surgical precision and outcomes. RECENT FINDINGS Many AR technologies have been shown to lower provider cognitive burden and reduce operative time and radiation exposure while improving surgical precision in pre-clinical cadaveric and sawbones models. So far, only a few platforms focusing on pedicle screw placement have been approved by the FDA. These technologies have been implemented clinically with mixed results when compared to traditional free-hand approaches. It remains to be seen if current AR technologies can deliver upon their multitude of promises, and the ability to do so seems contingent upon continued technological progress. Additionally, the impact of these platforms will likely be highly conditional on clinical indication and provider type. It remains unclear if AR will be broadly accepted and utilized or if it will be reserved for niche indications where it adds significant value. One thing is clear: orthopedics' high utilization of pre- and intra-operative imaging, combined with the relative ease of tracking rigid structures like bone as compared to soft tissues, has made it the clear beachhead market for AR technologies in medicine.
11
Maleki M, Tehrani AF, Aray A, Ranjbar M. Intramedullary nail holes laser indicator, a non-invasive technique for interlocking of intramedullary nails. Sci Rep 2021; 11:21166. [PMID: 34707138] [PMCID: PMC8551185] [DOI: 10.1038/s41598-021-00382-8]
Abstract
Interlocking of intramedullary nails is a challenging procedure in orthopedic trauma surgery. Numerous methods have been described to facilitate this process, but they either expose the patient and surgical team to X-rays or involve trial and error. An accurate and non-invasive method has been developed to easily interlock intramedullary nails. By transmitting safe visible light inside the nail, a drilling position appears, which is used to drill the bone toward the nail hole. The wavelength of this light was obtained from ex-vivo spectroscopy on biological tissues and has optimal transmission, reflectance, and absorption properties. Moreover, animal and human experiments were performed to evaluate the performance of the proposed system. Ex-vivo performance experiments were performed successfully on two groups of cow and sheep samples. The output parameters were procedure time and drilling quality; there was a significant difference between the two groups in procedure time (P < 0.05), but no significant difference was observed in drilling quality (P > 0.05). Moreover, an in-vivo performance experiment was performed successfully on a middle-aged man. To compare the provided method with the targeting-arm and free-hand techniques, two human experiments were performed on a middle-aged and a young man. The results indicate the advantage of the proposed technique in procedure time (P < 0.05), while the drilling quality is equal to that of the free-hand technique (P = 0.05). The intramedullary nail holes laser indicator is a safe and accurate method that reduces surgical time and simplifies the process. This new technology makes it easier to interlock the intramedullary nail and can have good clinical applications.
Affiliation(s)
- Mohammadreza Maleki
- Department of Mechanical Engineering, Isfahan University of Technology, 84156-83111, Isfahan, Iran
- Alireza Fadaei Tehrani
- Department of Mechanical Engineering, Isfahan University of Technology, 84156-83111, Isfahan, Iran
- Ayda Aray
- Department of Physics, Isfahan University of Technology, 84156-83111, Isfahan, Iran
- Mehdi Ranjbar
- Department of Physics, Isfahan University of Technology, 84156-83111, Isfahan, Iran
12
Augmented and virtual reality in spine surgery, current applications and future potentials. Spine J 2021; 21:1617-1625. [PMID: 33774210] [DOI: 10.1016/j.spinee.2021.03.018]
Abstract
BACKGROUND CONTEXT The field of artificial intelligence (AI) is rapidly advancing, especially with recent improvements in deep learning (DL) techniques. Augmented (AR) and virtual reality (VR) are finding their place in healthcare, and spine surgery is no exception. The unique capabilities and advantages of AR and VR devices include their low cost, flexible integration with other technologies, user-friendly features and their application in navigation systems, which makes them beneficial across different aspects of spine surgery. Despite the use of AR for pedicle screw placement, targeted cervical foraminotomy, bone biopsy, osteotomy planning, and percutaneous intervention, the current applications of AR and VR in spine surgery remain limited. PURPOSE The primary goal of this study was to provide the spine surgeons and clinical researchers with the general information about the current applications, future potentials, and accessibility of AR and VR systems in spine surgery. STUDY DESIGN/SETTING We reviewed titles of more than 250 journal papers from google scholar and PubMed with search words: augmented reality, virtual reality, spine surgery, and orthopaedic, out of which 89 related papers were selected for abstract review. Finally, full text of 67 papers were analyzed and reviewed. METHODS The papers were divided into four groups: technological papers, applications in surgery, applications in spine education and training, and general application in orthopaedic. A team of two reviewers performed paper reviews and a thorough web search to ensure the most updated state of the art in each of four group is captured in the review. RESULTS In this review we discuss the current state of the art in AR and VR hardware, their preoperative applications and surgical applications in spine surgery. Finally, we discuss the future potentials of AR and VR and their integration with AI, robotic surgery, gaming, and wearables. 
CONCLUSIONS AR and VR are promising technologies that will soon become part of the standard of care in spine surgery.
|
13
|
Hu X, Baena FRY, Cutolo F. Head-Mounted Augmented Reality Platform for Markerless Orthopaedic Navigation. IEEE J Biomed Health Inform 2021; 26:910-921. [PMID: 34115600 DOI: 10.1109/jbhi.2021.3088442] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Visual augmented reality (AR) has the potential to improve the accuracy, efficiency and reproducibility of computer-assisted orthopaedic surgery (CAOS). AR head-mounted displays (HMDs) further allow non-eye-shift target observation and an egocentric view. Recently, a markerless tracking and registration (MTR) algorithm was proposed to avoid the artificial markers that are conventionally pinned into the target anatomy for tracking, as their use prolongs surgical workflow, introduces human-induced errors, and necessitates additional surgical invasion in patients. However, such an MTR-based method has neither been explored for surgical applications nor integrated into current AR HMDs, making ergonomic HMD-based markerless AR CAOS navigation hard to achieve. To these aims, we present a versatile, device-agnostic and accurate HMD-based AR platform. Our software platform, supporting both video see-through (VST) and optical see-through (OST) modes, integrates two proposed fast calibration procedures using a specially designed calibration tool. According to the camera-based evaluation, our AR platform achieves a display error of 6.31 ± 2.55 arcmin for VST and 7.72 ± 3.73 arcmin for OST. A proof-of-concept markerless surgical navigation system to assist in femoral bone drilling was then developed based on the platform and Microsoft HoloLens 1. According to the user study, both VST and OST markerless navigation systems are reliable, with the OST system providing the best usability. The measured navigation error is 4.90 ± 1.04 mm, 5.96 ± 2.22° for the VST system and 4.36 ± 0.80 mm, 5.65 ± 1.42° for the OST system.
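Display error here is reported in arcminutes, a natural unit for HMD overlays: a fixed lateral misalignment subtends a smaller angle the farther away it is viewed. A minimal sketch of that conversion (illustrative only; the viewing distance in the usage note is an arbitrary example, not a value from the paper):

```python
import math

def overlay_error_arcmin(offset_mm: float, viewing_distance_mm: float) -> float:
    """Angular error, in arcminutes, subtended by a lateral overlay
    offset seen from a given viewing distance (1 degree = 60 arcmin)."""
    angle_deg = math.degrees(math.atan2(offset_mm, viewing_distance_mm))
    return angle_deg * 60.0
```

For example, a 1 mm overlay offset viewed from 500 mm corresponds to roughly 6.9 arcmin, the same order of magnitude as the display errors reported above.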
|
14
|
Tu P, Gao Y, Lungu AJ, Li D, Wang H, Chen X. Augmented reality based navigation for distal interlocking of intramedullary nails utilizing Microsoft HoloLens 2. Comput Biol Med 2021; 133:104402. [PMID: 33895460 DOI: 10.1016/j.compbiomed.2021.104402] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Revised: 03/24/2021] [Accepted: 04/11/2021] [Indexed: 11/19/2022]
Abstract
BACKGROUND AND OBJECTIVE The distal interlocking of intramedullary nail remains a technically demanding procedure. Existing augmented reality based solutions still suffer from hand-eye coordination problem, prolonged operation time, and inadequate resolution. In this study, an augmented reality based navigation system for distal interlocking of intramedullary nail is developed using Microsoft HoloLens 2, the state-of-the-art optical see-through head-mounted display. METHODS A customized registration cube is designed to assist surgeons with better depth perception when performing registration procedures. During drilling, surgeons can obtain accurate and in-situ visualization of intramedullary nail and drilling path, and dynamic navigation is enabled. An intraoperative warning system is proposed to provide intuitive feedback of real-time deviations and electromagnetic disturbances. RESULTS The preclinical phantom experiment showed that the reprojection errors along the X, Y, and Z axes were 1.55 ± 0.27 mm, 1.71 ± 0.40 mm, and 2.84 ± 0.78 mm, respectively. The end-to-end evaluation method indicated the distance error was 1.61 ± 0.44 mm, and the 3D angle error was 1.46 ± 0.46°. A cadaver experiment was also conducted to evaluate the feasibility of the system. CONCLUSION Our system has potential advantages over the 2D-screen based navigation system and the pointing device based navigation system in terms of accuracy and time consumption, and has tremendous application prospects.
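The distance and 3D angle errors quoted above compare an achieved drilling trajectory against the planned one. A minimal sketch of how such errors can be computed from entry points and direction vectors (illustrative only; not the authors' implementation):

```python
import numpy as np

def trajectory_errors(planned_entry, planned_dir, actual_entry, actual_dir):
    """Return (entry-point distance error in the input units,
    3D angle error in degrees) between two drilling trajectories."""
    p_dir = np.asarray(planned_dir, float)
    a_dir = np.asarray(actual_dir, float)
    p_dir = p_dir / np.linalg.norm(p_dir)
    a_dir = a_dir / np.linalg.norm(a_dir)
    dist_err = float(np.linalg.norm(np.asarray(actual_entry, float)
                                    - np.asarray(planned_entry, float)))
    cos_a = float(np.clip(p_dir @ a_dir, -1.0, 1.0))  # guard arccos domain
    return dist_err, float(np.degrees(np.arccos(cos_a)))
```

With a planned trajectory along z and an achieved entry 1 mm away whose drill axis is tilted 45° toward y, this returns an entry error of 1.0 and an angle error of 45°.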
Affiliation(s)
- Puxun Tu
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Yao Gao
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Abel J Lungu
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Dongyuan Li
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Huixiang Wang
- Department of Orthopedics, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China.
| | - Xiaojun Chen
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China.
|
15
|
Casari FA, Navab N, Hruby LA, Kriechling P, Nakamura R, Tori R, de Lourdes Dos Santos Nunes F, Queiroz MC, Fürnstahl P, Farshad M. Augmented Reality in Orthopedic Surgery Is Emerging from Proof of Concept Towards Clinical Studies: a Literature Review Explaining the Technology and Current State of the Art. Curr Rev Musculoskelet Med 2021; 14:192-203. [PMID: 33544367 PMCID: PMC7990993 DOI: 10.1007/s12178-021-09699-3] [Citation(s) in RCA: 50] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 01/08/2021] [Indexed: 02/07/2023]
Abstract
PURPOSE OF REVIEW Augmented reality (AR) is becoming increasingly popular in modern-day medicine. Computer-driven tools are progressively integrated into clinical and surgical procedures. The purpose of this review was to provide a comprehensive overview of the current technology and its challenges based on recent literature, mainly focusing on clinical, cadaver, and innovative sawbone studies in the field of orthopedic surgery. The most relevant literature was selected according to clinical relevance and innovation and is summarized. RECENT FINDINGS Augmented reality applications in orthopedic surgery are increasingly reported. In this review, we summarize basic principles of AR including data preparation, visualization, and registration/tracking and present recently published clinical applications in the areas of spine, osteotomies, arthroplasty, trauma, and orthopedic oncology. Higher accuracy in surgical execution, reduction of radiation exposure, and decreased surgery time are major findings presented in the literature. In light of the tremendous progress of technological developments in modern-day medicine and the growing number of research groups working on the implementation of AR in routine clinical procedures, we expect AR technology to soon be implemented as a standard device in orthopedic surgery.
Affiliation(s)
- Fabio A Casari
- Department of Orthopedic Surgery, Balgrist University Hospital, University of Zurich, Zurich, Switzerland.
- ROCS, Research in Orthopedic Computer Science, Balgrist Campus, University of Zurich, Forchstrasse 340, 8008, Zürich, Switzerland.
| | - Nassir Navab
- Computer Aided Medical Procedures (CAMP), Technische Universität München, Munich, Germany
- Computer Aided Medical Procedures (CAMP), Johns Hopkins University, Baltimore, MD, USA
| | - Laura A Hruby
- Department of Orthopedic Surgery, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Department of Orthopaedics and Trauma Surgery, Medical University of Vienna, Vienna, Austria
| | - Philipp Kriechling
- Department of Orthopedic Surgery, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
| | - Ricardo Nakamura
- Computer Engineering and Digital Systems Department, Escola Politécnica, Universidade de São Paulo, São Paulo, SP, Brazil
| | - Romero Tori
- Computer Engineering and Digital Systems Department, Escola Politécnica, Universidade de São Paulo, São Paulo, SP, Brazil
| | | | - Marcelo C Queiroz
- Orthopedics and Traumatology Department, Faculty of Medical Sciences of Santa Casa de Sao Paulo, Sao Paulo, SP, Brazil
| | - Philipp Fürnstahl
- ROCS, Research in Orthopedic Computer Science, Balgrist Campus, University of Zurich, Forchstrasse 340, 8008, Zürich, Switzerland
| | - Mazda Farshad
- Department of Orthopedic Surgery, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
|
16
|
Vagdargi P, Sheth N, Sisniega A, Uneri A, De Silva T, Osgood GM, Siewerdsen JH. Drill-mounted video guidance for orthopaedic trauma surgery. J Med Imaging (Bellingham) 2021; 8:015002. [PMID: 33604409 DOI: 10.1117/1.jmi.8.1.015002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Accepted: 01/19/2021] [Indexed: 11/14/2022] Open
Abstract
Purpose: Percutaneous fracture fixation is a challenging procedure that requires accurate interpretation of fluoroscopic images to insert guidewires through narrow bone corridors. We present a guidance system with a video camera mounted onboard the surgical drill to achieve real-time augmentation of the drill trajectory in fluoroscopy and/or CT. Approach: The camera was mounted on the drill and calibrated with respect to the drill axis. Markers identifiable in both video and fluoroscopy are placed about the surgical field and co-registered by feature correspondences. If available, a preoperative CT can also be co-registered by 3D-2D image registration. Real-time guidance is achieved by virtual overlay of the registered drill axis on fluoroscopy or in CT. Performance was evaluated in terms of target registration error (TRE), conformance within clinically relevant pelvic bone corridors, and runtime. Results: Registration of the drill axis to fluoroscopy demonstrated a median TRE of 0.9 mm and 2.0 deg when solved with two views (e.g., anteroposterior and lateral) and five markers visible in both video and fluoroscopy, more than sufficient to provide Kirschner wire (K-wire) conformance within common pelvic bone corridors. Registration accuracy was reduced when solved with a single fluoroscopic view (TRE = 3.4 mm and 2.7 deg) but was still sufficient for K-wire conformance within pelvic bone corridors. Registration was robust with as few as four markers visible within the field of view. Runtime of the initial implementation allowed fluoroscopy overlay and/or 3D CT navigation with freehand manipulation of the drill at up to 10 frames/s. Conclusions: A drill-mounted video guidance system was developed to assist with K-wire placement. The overall workflow is compatible with fluoroscopically guided orthopaedic trauma surgery and does not require markers to be placed in preoperative CT. The initial prototype demonstrates accuracy and runtime that could improve the accuracy of K-wire placement, motivating future work toward translation to clinical studies.
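Marker correspondences define a rigid registration, and TRE is then measured as the residual distance at a target point not used to solve it. A sketch assuming known 3D-3D point correspondences (the classic least-squares Kabsch method; the paper's video/fluoroscopy registration is a more involved 3D-2D problem):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with R @ s + t ~= d (Kabsch)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def target_registration_error(R, t, target_src, target_dst):
    """Distance between the mapped target and its true position."""
    return float(np.linalg.norm(R @ np.asarray(target_src, float) + t
                                - np.asarray(target_dst, float)))
```

On noiseless correspondences the transform is recovered exactly and the TRE at a held-out target is zero; with real marker noise, the TRE quantifies how the residual propagates to the surgical target.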
Affiliation(s)
- Prasad Vagdargi
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
| | - Niral Sheth
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
| | - Alejandro Sisniega
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
| | - Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
| | - Tharindu De Silva
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
| | - Greg M Osgood
- Johns Hopkins Medicine, Department of Orthopaedic Surgery, Baltimore, Maryland, United States
| | - Jeffrey H Siewerdsen
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States.,Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
|
17
|
Fotouhi J, Mehrfard A, Song T, Johnson A, Osgood G, Unberath M, Armand M, Navab N. Development and Pre-Clinical Analysis of Spatiotemporal-Aware Augmented Reality in Orthopedic Interventions. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:765-778. [PMID: 33166252 PMCID: PMC8317976 DOI: 10.1109/tmi.2020.3037013] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Suboptimal interaction with patient data and challenges in mastering 3D anatomy based on ill-posed 2D interventional images are essential concerns in image-guided therapies. Augmented reality (AR) has been introduced in operating rooms over the last decade; however, in image-guided interventions it has often only been considered a visualization device improving traditional workflows. As a consequence, the technology has gained only a minimum of the maturity it requires to redefine new procedures, user interfaces, and interactions. The main contribution of this paper is to reveal how exemplary workflows are redefined by taking full advantage of head-mounted displays when entirely co-registered with the imaging system at all times. The awareness of the system of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. Our system achieved an error of 4.76 ± 2.91 mm for placing a K-wire in a fracture management procedure, and yielded errors of 1.57 ± 1.16° and 1.46 ± 1.00° in the abduction and anteversion angles, respectively, for total hip arthroplasty (THA). We compared the results with the outcomes from baseline standard operative and non-immersive AR procedures, which had yielded errors of [4.61 mm, 4.76°, 4.77°] and [5.13 mm, 1.78°, 1.43°], respectively, for wire placement, and abduction and anteversion during THA. We hope that our holistic approach towards improving the interface of surgery not only augments the surgeon's capabilities but also the surgical team's experience in carrying out an effective intervention with reduced complications, and provides novel approaches to documenting procedures for training purposes.
|
18
|
Abstract
Augmented reality (AR) technology enhances a user's perception through the superimposition of digital information on physical images while still allowing for interaction with the physical world. The tracking, data processing, and display technology of traditional computer-assisted surgery (CAS) navigation have the potential to be consolidated to an AR headset equipped with high-fidelity cameras, microcomputers, and optical see-through lenses that create digital holographic images. This article evaluates AR applications specific to total knee arthroplasty, total hip arthroplasty, and the opportunities for AR to enhance arthroplasty education and professional development.
|
19
|
Haiderbhai M, Ledesma S, Navab N, Fallavollita P. Generating X-ray Images from Point Clouds Using Conditional Generative Adversarial Networks. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2020:1588-1591. [PMID: 33018297 DOI: 10.1109/embc44109.2020.9175420] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Simulating medical images such as X-rays is of key interest for reducing radiation in non-diagnostic visualization scenarios. Past state-of-the-art methods utilize ray tracing, which relies on 3D models. To our knowledge, no approach exists for cases where point clouds from depth cameras and other sensors are the only input modality. We propose a method for estimating an X-ray image from a generic point cloud using a conditional generative adversarial network (CGAN). We train a CGAN pix2pix to translate point cloud images into X-ray images using a dataset created inside our custom synthetic data generator. Additionally, point clouds of multiple densities are examined to determine the effect of density on the image translation problem. The results from the CGAN show that this type of network can predict X-ray images from point clouds. Higher point cloud densities outperformed the two lowest point cloud densities. However, the networks trained with high-density point clouds did not exhibit a significant difference when compared with the networks trained with medium densities. We show that CGANs can be applied to image translation problems in the medical domain and demonstrate the feasibility of using this approach when 3D models are not available. Further work includes overcoming the occlusion and quality limitations of the generic approach and applying CGANs to other medical image translation problems.
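The input to such an image-translation network is typically a 2D rendering of the point cloud. A minimal sketch of orthographic splatting of a cloud into a binary occupancy image of the kind a pix2pix-style generator could consume (resolution and bounds are arbitrary illustration values, not from the paper):

```python
import numpy as np

def splat_orthographic(points, res=64, lo=-1.0, hi=1.0):
    """Project 3D points along z into a res x res binary occupancy image."""
    img = np.zeros((res, res), dtype=np.float32)
    pts = np.asarray(points, float)
    uv = np.round((pts[:, :2] - lo) / (hi - lo) * (res - 1)).astype(int)
    keep = ((uv >= 0) & (uv < res)).all(axis=1)   # drop out-of-bounds points
    img[uv[keep, 1], uv[keep, 0]] = 1.0           # row = y, col = x
    return img
```

Denser clouds fill more pixels of this image, which is one way to picture the density effect the authors examine: sparser inputs give the generator less structure to condition on.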
|
20
|
Sakai D, Joyce K, Sugimoto M, Horikita N, Hiyama A, Sato M, Devitt A, Watanabe M. Augmented, virtual and mixed reality in spinal surgery: A real-world experience. J Orthop Surg (Hong Kong) 2020; 28:2309499020952698. [PMID: 32909902 DOI: 10.1177/2309499020952698] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/16/2022] Open
Abstract
This review aims to identify the role of augmented, virtual or mixed reality (AR, VR or MR) technologies in the setting of spinal surgery. The authors address the challenges surrounding the implementation of this technology in the operating room. A technical standpoint addresses the efficacy of these imaging modalities based on the current literature in the field. Ultimately, these technologies must be cost-effective to ensure widespread adoption. This may be achieved through reduced surgical times and a decreased incidence of post-operative complications and revisions while maintaining an equivalent safety profile to alternative surgical approaches. While current studies focus mainly on the successful placement of pedicle screws via AR-guided instrumentation, a wider scope of procedures may be assisted using AR, VR or MR technology once efficacy and safety have been validated. These emerging technologies offer a significant advantage in the guidance of complex procedures that require high precision and accuracy using minimally invasive interventions.
Affiliation(s)
- Daisuke Sakai
- Department of Orthopaedic Surgery, Surgical Science, Tokai University School of Medicine, Isehara, Kanagawa, Japan
| | - Kieran Joyce
- SFI Research Centre for Medical Devices, National University of Ireland, Galway, Ireland
- Department of Orthopaedic Surgery, School of Medicine, National University of Ireland, Galway, Ireland
| | - Maki Sugimoto
- Innovation Lab, Teikyo University Okinaga Research Institute, Tokyo, Japan
| | - Natsumi Horikita
- Department of Orthopaedic Surgery, Surgical Science, Tokai University School of Medicine, Isehara, Kanagawa, Japan
| | - Akihiko Hiyama
- Department of Orthopaedic Surgery, Surgical Science, Tokai University School of Medicine, Isehara, Kanagawa, Japan
| | - Masato Sato
- Department of Orthopaedic Surgery, Surgical Science, Tokai University School of Medicine, Isehara, Kanagawa, Japan
| | - Aiden Devitt
- Department of Orthopaedic Surgery, School of Medicine, National University of Ireland, Galway, Ireland
| | - Masahiko Watanabe
- Department of Orthopaedic Surgery, Surgical Science, Tokai University School of Medicine, Isehara, Kanagawa, Japan
|
21
|
Jud L, Fotouhi J, Andronic O, Aichmair A, Osgood G, Navab N, Farshad M. Applicability of augmented reality in orthopedic surgery - A systematic review. BMC Musculoskelet Disord 2020; 21:103. [PMID: 32061248 PMCID: PMC7023780 DOI: 10.1186/s12891-020-3110-2] [Citation(s) in RCA: 86] [Impact Index Per Article: 17.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/12/2019] [Accepted: 02/03/2020] [Indexed: 02/07/2023] Open
Abstract
BACKGROUND Computer-assisted solutions are continuously changing surgical practice. One of the most disruptive technologies among the computer-integrated surgical techniques is Augmented Reality (AR). While Augmented Reality is increasingly used in several medical specialties, its potential benefit in orthopedic surgery is not yet clear. The purpose of this article is to provide a systematic review of the current state of knowledge and the applicability of AR in orthopedic surgery. METHODS A systematic search of the following three databases was performed: "PubMed", "Cochrane Library" and "Web of Science". The systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and has been published and registered in the international prospective register of systematic reviews (PROSPERO). RESULTS 31 studies and reports are included and classified into the following categories: Instrument/Implant Placement, Osteotomies, Tumor Surgery, Trauma, and Surgical Training and Education. Quality assessment could be performed in 18 studies. Among the clinical studies, there were six case series with an average score of 90% and one case report, which scored 81% according to the Joanna Briggs Institute Critical Appraisal Checklist (JBI CAC). The 11 cadaveric studies scored 81% according to the QUACS scale (Quality Appraisal for Cadaveric Studies). CONCLUSION This manuscript provides 1) a summary of the current state of knowledge and research on Augmented Reality in orthopedic surgery as presented in the literature, and 2) a discussion by the authors presenting the key remarks required for the seamless integration of Augmented Reality into future surgical practice. TRIAL REGISTRATION PROSPERO registration number: CRD42019128569.
Affiliation(s)
- Lukas Jud
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, Forchstrasse 340, 8008 Zürich, Switzerland
| | - Javad Fotouhi
- Computer Aided Medical Procedure, Johns Hopkins University, 3400 N Charles Street, Baltimore, 21210 USA
| | - Octavian Andronic
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, Forchstrasse 340, 8008 Zürich, Switzerland
| | - Alexander Aichmair
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, Forchstrasse 340, 8008 Zürich, Switzerland
| | - Greg Osgood
- Johns Hopkins Hospital, Department of Orthopedics Surgery, 1800 Orleans Street, Baltimore, 21287 USA
| | - Nassir Navab
- Computer Aided Medical Procedure, Johns Hopkins University, 3400 N Charles Street, Baltimore, 21210 USA
- Computer Aided Medical Procedure, Technical University of Munich, Boltzmannstrasse 3, 85748 Munich, Germany
| | - Mazda Farshad
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, Forchstrasse 340, 8008 Zürich, Switzerland
|
22
|
Hosseinian S, Arefi H, Navab N. Toward an End-to-End Calibration for Mobile C-Arm in Combination with a Depth Sensor for Surgical Augmented Reality Applications. SENSORS 2019; 20:s20010036. [PMID: 31861606 PMCID: PMC6982695 DOI: 10.3390/s20010036] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/31/2019] [Revised: 12/11/2019] [Accepted: 12/13/2019] [Indexed: 11/18/2022]
Abstract
C-arm X-ray imaging is commonly applied in operating rooms for guiding orthopedic surgeries. Augmenting C-arm X-ray images with Augmented Reality (AR) during surgery is an efficient way to facilitate procedures for surgeons. However, accurate calibration for C-arm-based surgical AR is essential and still challenging due to the limitations of C-arm imaging systems, such as the instability of C-arm calibration parameters and the narrow field of view. We extend existing methods using a depth camera and propose a new calibration procedure consisting of calibration of the C-arm imaging system and 3D/2D calibration of an RGB-D camera to the C-arm system, a method that achieves reliable data and promising accuracy while remaining consistent with standard surgical protocols. For the calibration procedure, we apply bundle adjustment equations with a 3D-designed Lego multi-modal phantom, in contrast to previous methods in which planar calibration phantoms were applied. Using our method, the X-ray image was visualized upon the 3D data with a mean overlay error of 1.03 mm. The evaluations showed that the proposed calibration procedure provides promising accuracy for AR surgeries and improves the flexibility and robustness of existing C-arm calibration methods for surgical augmented reality (using a C-arm and an RGB-D sensor). Moreover, the results showed the efficiency of our method in compensating for the effects of C-arm movement on the calibration parameters. The obtained overlay error was improved for non-zero rotational movement of the C-arm by using a virtual detector.
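Bundle adjustment in a calibration like this minimizes the reprojection residual of phantom landmarks under a camera model of the imaging chain. A sketch of that residual under a simple pinhole assumption, x ~ K[R | t]X (a simplification of a full C-arm model; the intrinsics in the usage note are made-up illustration values):

```python
import numpy as np

def mean_reprojection_error(K, R, t, pts3d, pts2d):
    """Mean 2D distance between projected 3D landmarks and their
    detected image positions under a pinhole model x ~ K [R | t] X."""
    P = K @ np.hstack([R, np.asarray(t, float).reshape(3, 1)])
    X = np.hstack([np.asarray(pts3d, float),
                   np.ones((len(pts3d), 1))])      # homogeneous coordinates
    proj = (P @ X.T).T
    proj = proj[:, :2] / proj[:, 2:3]              # perspective divide
    return float(np.linalg.norm(proj - np.asarray(pts2d, float),
                                axis=1).mean())
```

For example, with focal length 800 px and principal point (320, 240), a landmark at (0, 0, 2) projects to (320, 240); a bundle adjustment would vary K, R and t (and, jointly, the phantom geometry) to drive this mean residual down.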
Affiliation(s)
- Sahar Hosseinian
- School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran 1439957131, Iran;
| | - Hossein Arefi
- School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran 1439957131, Iran;
- Correspondence:
| | - Nassir Navab
- Chair for Computer Aided Medical Procedures & Augmented Reality, Faculty of Computer Science, Technical University of Munich, Boltzmannstr. 3, 85748 Garching b. Munich, Germany;
|
23
|
Fotouhi J, Unberath M, Song T, Hajek J, Lee SC, Bier B, Maier A, Osgood G, Armand M, Navab N. Co-localized augmented human and X-ray observers in collaborative surgical ecosystem. Int J Comput Assist Radiol Surg 2019; 14:1553-1563. [PMID: 31350704 DOI: 10.1007/s11548-019-02035-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2019] [Accepted: 07/18/2019] [Indexed: 10/26/2022]
Abstract
PURPOSE Image-guided percutaneous interventions are safer alternatives to conventional orthopedic and trauma surgeries. To advance surgical tools in complex bony structures during these procedures with confidence, a large number of images is acquired. While image guidance is the de facto standard to guarantee an acceptable outcome, when these images are presented on monitors far from the surgical site, the information content cannot easily be associated with the 3D patient anatomy. METHODS In this article, we propose a collaborative augmented reality (AR) surgical ecosystem to jointly co-localize the C-arm X-ray and surgeon viewer. The technical contributions of this work include (1) joint calibration of a visual tracker on a C-arm scanner and its X-ray source via a hand-eye calibration strategy, and (2) inside-out co-localization of human and X-ray observers in shared tracking and augmentation environments using vision-based simultaneous localization and mapping. RESULTS We present a thorough evaluation of the hand-eye calibration procedure. Results suggest convergence when using 50 pose pairs or more. The mean translation and rotation errors at convergence are 5.7 mm and [Formula: see text], respectively. Further, user-in-the-loop studies were conducted to estimate the end-to-end target augmentation error. The mean distance between landmarks in the real and virtual environments was 10.8 mm. CONCLUSIONS The proposed AR solution provides a shared augmented experience between the human and X-ray viewer. The collaborative surgical AR system has the potential to simplify hand-eye coordination for surgeons or intuitively inform C-arm technologists for prospective X-ray view-point planning.
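Hand-eye calibration solves AX = XB for the unknown tracker-to-source transform X from paired relative motions A and B. For the rotation part, the rotation axes of each motion pair are related by a_i = R_X b_i, which can be solved in least squares. A sketch of that rotation-only step (a full solver, e.g. Tsai-Lenz, also recovers the translation; this is illustrative, not the authors' code):

```python
import numpy as np

def rotation_axis(R):
    """Unit rotation axis of a rotation matrix (angle not 0 or pi)."""
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return w / np.linalg.norm(w)

def hand_eye_rotation(As, Bs):
    """Least-squares R_X with axis(A_i) ~= R_X @ axis(B_i) for all pairs."""
    a = np.array([rotation_axis(A) for A in As])
    b = np.array([rotation_axis(B) for B in Bs])
    H = b.T @ a                                    # Kabsch on the axis sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # keep a proper rotation
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

At least two motion pairs with non-parallel rotation axes are needed for a unique solution, which is consistent with accuracy improving as more pose pairs are collected.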
Affiliation(s)
- Javad Fotouhi
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA. .,Department of Computer Science, Johns Hopkins University, Baltimore, USA.
| | - Mathias Unberath
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA.,Department of Computer Science, Johns Hopkins University, Baltimore, USA
| | - Tianyu Song
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
| | - Jonas Hajek
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA.,Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Sing Chun Lee
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA.,Department of Computer Science, Johns Hopkins University, Baltimore, USA
| | - Bastian Bier
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA.,Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Greg Osgood
- Department of Orthopedic Surgery, Johns Hopkins Hospital, Baltimore, USA
| | - Mehran Armand
- Applied Physics Laboratory, Johns Hopkins University, Baltimore, USA.,Department of Orthopedic Surgery, Johns Hopkins Hospital, Baltimore, USA
| | - Nassir Navab
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA.,Department of Computer Science, Johns Hopkins University, Baltimore, USA.,Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
|
24
|
Azizi Koutenaei B, Fotouhi J, Alambeigi F, Wilson E, Guler O, Oetgen M, Cleary K, Navab N. Radiation-free methods for navigated screw placement in slipped capital femoral epiphysis surgery. Int J Comput Assist Radiol Surg 2019; 14:2199-2210. [PMID: 31321601 DOI: 10.1007/s11548-019-02026-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2018] [Accepted: 07/03/2019] [Indexed: 10/26/2022]
Abstract
PURPOSE For orthopedic procedures, surgeons utilize intra-operative medical images such as fluoroscopy to plan screw placement and accurately position the guide wire along the intended trajectory. The number of fluoroscopic images needed depends on the complexity of the case and the skill of the surgeon. Since more fluoroscopic images lead to more exposure and a higher radiation dose for both surgeon and patient, a solution that decreases the number of fluoroscopic images would be an improvement in clinical care. METHODS This article describes and compares three different novel navigation methods and techniques for screw placement using an attachable Inertial Measurement Unit device or a robotic arm. These methods provide projection and visualization of the surgical tool trajectory during the slipped capital femoral epiphysis procedure. RESULTS These techniques resulted in faster and more efficient preoperative calibration and set-up times compared to other intra-operative navigation systems in our phantom study. We conducted an experiment using 120 model bones to measure the accuracy of the methods. CONCLUSION In conclusion, these approaches have the potential to improve the accuracy of surgical tool navigation and decrease the number of required X-ray images without any change in the clinical workflow. The results also show a 65% decrease in total error compared to the conventional manual approach.
Affiliation(s)
- Bamshad Azizi Koutenaei
- Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Informatics, Technical University of Munich (TUM), Munich, Germany; Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington, DC, USA
- Javad Fotouhi
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Farshid Alambeigi
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Mathew Oetgen
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington, DC, USA
- Kevin Cleary
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington, DC, USA
- Nassir Navab
- Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Informatics, Technical University of Munich (TUM), Munich, Germany; Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
25
Si WX, Liao XY, Qian YL, Sun HT, Chen XD, Wang Q, Heng PA. Assessing performance of augmented reality-based neurosurgical training. Vis Comput Ind Biomed Art 2019; 2:6. [PMID: 32240415] [PMCID: PMC7099548] [DOI: 10.1186/s42492-019-0015-8]
Abstract
This paper presents a novel augmented reality (AR)-based neurosurgical training simulator which provides a natural way for surgeons to learn neurosurgical skills. Surgical simulation with bimanual haptic interaction is integrated into this work to provide a simulated environment in which users receive holographic guidance for pre-operative training. To achieve AR guidance, the simulator must precisely overlay the 3D anatomical information of the hidden target organs onto the patient, as in real surgery. To this end, patient-specific anatomical structures are reconstructed from segmented brain magnetic resonance imaging. We propose a registration method for precise mapping between the virtual and real information. In addition, the simulator provides bimanual haptic interaction in a holographic environment to mimic real brain tumor resection. In this study, we conduct an AR-based guidance validation and a user study on the developed simulator, which demonstrate the high accuracy of our AR-based neurosurgery simulator, as well as the AR guidance mode's potential to improve neurosurgery by simplifying the operation, reducing its difficulty, shortening the operation time, and increasing its precision.
Affiliation(s)
- Wei-Xin Si
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Avenue, Shenzhen University Town, Shenzhen, 518055, China
- Xiang-Yun Liao
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Avenue, Shenzhen University Town, Shenzhen, 518055, China
- Yin-Ling Qian
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Avenue, Shenzhen University Town, Shenzhen, 518055, China
- Hai-Tao Sun
- Department of Neurosurgery, Zhujiang Hospital, Southern Medical University, Guangzhou, 510282, China
- Xiang-Dong Chen
- E.N.T. Department, Shenzhen University General Hospital, Shenzhen, 518055, China
- Qiong Wang
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Avenue, Shenzhen University Town, Shenzhen, 518055, China
- Pheng Ann Heng
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Avenue, Shenzhen University Town, Shenzhen, 518055, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
26
Auloge P, Cazzato RL, Ramamurthy N, de Marini P, Rousseau C, Garnon J, Charles YP, Steib JP, Gangi A. Augmented reality and artificial intelligence-based navigation during percutaneous vertebroplasty: a pilot randomised clinical trial. Eur Spine J 2019; 29:1580-1589. [DOI: 10.1007/s00586-019-06054-6]
27
Chytas D, Malahias MA, Nikolaou VS. Augmented Reality in Orthopedics: Current State and Future Directions. Front Surg 2019; 6:38. [PMID: 31316995] [PMCID: PMC6610425] [DOI: 10.3389/fsurg.2019.00038]
Abstract
Augmented reality (AR) comprises special hardware and software used to offer computer-processed imaging data to the surgeon in real time, so that real-life objects are combined with computer-generated images. AR technology has recently gained increasing interest in surgical practice. Preclinical research has provided substantial evidence that AR might be a useful tool for intra-operative guidance and decision-making. AR has been applied to a wide spectrum of orthopedic procedures, such as tumor resection, fracture fixation, arthroscopy, and component alignment in total joint arthroplasty. The present study aimed to summarize the current state of the application of AR in orthopedics, at the preclinical and clinical levels, providing future directions and perspectives concerning potential further benefits from this technology.
Affiliation(s)
- Dimitrios Chytas
- 2nd Orthopaedic Department, School of Medicine, National and Kapodistrian University of Athens, Athens, Greece
- Vasileios S. Nikolaou
- 2nd Orthopaedic Department, School of Medicine, National and Kapodistrian University of Athens, Athens, Greece
28
Abstract
BACKGROUND One of the main challenges for modern surgery is the effective use of the many available imaging modalities and diagnostic methods. Augmented reality systems can be used in the future to blend patient and planning information into the view of surgeons, which can improve the efficiency and safety of interventions. OBJECTIVE In this article we present five visualization methods for integrating augmented reality displays into medical procedures and explain their advantages and disadvantages. MATERIAL AND METHODS Based on an extensive literature review, the various existing approaches for integrating augmented reality displays into medical procedures are divided into five categories, and the most important research results for each approach are presented. RESULTS A large number of mixed and augmented reality solutions for medical interventions have been developed as research prototypes; however, only very few systems have been tested on patients. CONCLUSION In order to integrate mixed and augmented reality displays into medical practice, highly specialized solutions need to be developed. Such systems must comply with requirements regarding accuracy, fidelity, ergonomics, and seamless integration into the surgical workflow.
Affiliation(s)
- Ulrich Eck
- Lehrstuhl für Informatikanwendungen in der Medizin, Technische Universität München, Boltzmannstr. 3, 85748, Garching bei München, Germany
- Alexander Winkler
- Lehrstuhl für Informatikanwendungen in der Medizin, Technische Universität München, Boltzmannstr. 3, 85748, Garching bei München, Germany
29
Weidert S, Wang L, Landes J, Sandner P, Suero EM, Navab N, Kammerlander C, Euler E, Heide A. Video-augmented fluoroscopy for distal interlocking of intramedullary nails decreased radiation exposure and surgical time in a bovine cadaveric setting. Int J Med Robot 2019; 15:e1995. [DOI: 10.1002/rcs.1995]
Affiliation(s)
- Simon Weidert
- Department of General, Trauma and Reconstructive Surgery, Hospital of the University of Munich, Munich, Germany
- Lejing Wang
- Chair for Computer Aided Medical Procedures & Augmented Reality, Technical University of Munich, Munich, Germany
- Juergen Landes
- Klinik für Orthopädie und Unfallchirurgie, Isar Klinikum, Munich, Germany
- Philipp Sandner
- Frankfurt School Blockchain Center, Frankfurt School of Finance & Management, Frankfurt, Germany
- Eduardo M. Suero
- Department of General, Trauma and Reconstructive Surgery, Hospital of the University of Munich, Munich, Germany
- Nassir Navab
- Chair for Computer Aided Medical Procedures & Augmented Reality, Technical University of Munich, Munich, Germany
- Christian Kammerlander
- Department of General, Trauma and Reconstructive Surgery, Hospital of the University of Munich, Munich, Germany
- Ekkehard Euler
- Department of General, Trauma and Reconstructive Surgery, Hospital of the University of Munich, Munich, Germany
- Anna Heide
- Department of General, Trauma and Reconstructive Surgery, Hospital of the University of Munich, Munich, Germany
30
Jiang T, Zhu M, Chai G, Li Q. Precision of a Novel Craniofacial Surgical Navigation System Based on Augmented Reality Using an Occlusal Splint as a Registration Strategy. Sci Rep 2019; 9:501. [PMID: 30679507] [PMCID: PMC6345963] [DOI: 10.1038/s41598-018-36457-2]
Abstract
The authors have developed a novel augmented reality (AR)-based navigation system (NS) for craniofacial surgery. In this study, the authors aimed to measure the precision of the system and further analyze the primary influencing factors of the precision. The drilling of holes into the mandibles of ten beagle dogs was performed under the AR-based NS, and the precision was analyzed by comparing the deviation between the preoperational plan and the surgical outcome. The AR-based NS was successfully applied to quickly and precisely drill holes in the mandibles. The mean positional deviation between the preoperative design and intraoperative navigation was 1.29 ± 0.70 mm for the entry points and 2.47 ± 0.66 mm for the end points, and the angular deviation was 1.32° ± 1.17°. The precision linearly decreased with the distance from the marker. In conclusion, the precision of this system could satisfy clinical requirements, and this system may serve as a helpful tool for improving the precision in craniofacial surgery.
Affiliation(s)
- Taoran Jiang
- Department of Plastic and Reconstructive Surgery, Shanghai 9th People's Hospital, Shanghai Jiao Tong University School of Medicine, Zhizaoju Road 639, Shanghai, 200011, People's Republic of China
- Ming Zhu
- Department of Plastic and Reconstructive Surgery, Zhongshan Hospital, Fudan University, No. 180 Feng Lin Road, Shanghai, 200032, People's Republic of China
- Gang Chai
- Department of Plastic and Reconstructive Surgery, Shanghai 9th People's Hospital, Shanghai Jiao Tong University School of Medicine, Zhizaoju Road 639, Shanghai, 200011, People's Republic of China
- Qingfeng Li
- Department of Plastic and Reconstructive Surgery, Shanghai 9th People's Hospital, Shanghai Jiao Tong University School of Medicine, Zhizaoju Road 639, Shanghai, 200011, People's Republic of China
31
Perspective pinhole model with planar source for augmented reality surgical navigation based on C-arm imaging. Int J Comput Assist Radiol Surg 2018; 13:1671-1682. [PMID: 30014167] [DOI: 10.1007/s11548-018-1823-6]
Abstract
PURPOSE For augmented reality surgical navigation based on C-arm imaging, the accuracy of the augmented reality overlay on the X-ray image is imperative. However, overlay displacement arises when a conventional pinhole model, which describes the geometric relationship of a normal camera, is adopted for C-arm calibration. Thus, a modified model for C-arm calibration is proposed to reduce this displacement, which is essential for accurate surgical navigation. METHOD Based on an analysis of the displacement pattern generated for three-dimensional objects, we assumed that the displacement originates from the X-ray source position moving with depth. In the proposed method, X-ray source movement was modeled as variable intrinsic parameters and represented in the pinhole model by replacing the point source with a planar source. RESULTS The improvement, i.e., a reduced displacement, was verified by comparing overlay accuracy for augmented reality surgical navigation between the conventional and proposed methods. The proposed method achieved a more accurate overlay on the X-ray image in spatial position as well as in depth of the object volume. CONCLUSION We validated that the intrinsic parameters describing the source position depend on depth for a three-dimensional object, and showed that the displacement can be reduced and made independent of depth by using the proposed planar source model.
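The idea of replacing a fixed point source with depth-dependent intrinsics can be sketched in a few lines. The following is an illustrative sketch only, not the paper's calibration: the linear drift model and all numeric values (intrinsics, drift slopes, the test point) are assumptions made up for demonstration.

```python
import numpy as np

K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 512.0],
              [0.0,    0.0,   1.0]])  # nominal intrinsics (illustrative values)

def project_pinhole(X):
    """Conventional pinhole model: one fixed source position for all depths."""
    x = K @ X
    return x[:2] / x[2]

def project_planar_source(X, drift=(0.02, 0.01)):
    """Planar-source variant (sketch): the effective source/principal point
    shifts linearly with the depth of the imaged point, so the intrinsics
    become a function of depth rather than constants."""
    Kd = K.copy()
    Kd[0, 2] += drift[0] * X[2]  # depth-dependent offset along u
    Kd[1, 2] += drift[1] * X[2]  # depth-dependent offset along v
    x = Kd @ X
    return x[:2] / x[2]

X = np.array([10.0, -5.0, 800.0])  # a 3D point in the C-arm frame (mm)
print(project_pinhole(X), project_planar_source(X))
```

With zero drift the two models coincide; a nonzero drift reproduces the depth-dependent overlay displacement the abstract describes.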
32
Fida B, Cutolo F, di Franco G, Ferrari M, Ferrari V. Augmented reality in open surgery. Updates Surg 2018; 70:389-400. [PMID: 30006832] [DOI: 10.1007/s13304-018-0567-8]
Abstract
Augmented reality (AR) has been successfully providing surgeons with extensive visual information about surgical anatomy to assist them throughout the procedure. AR allows surgeons to view the surgical field through a superimposed 3D virtual model of anatomical details. However, open surgery presents new challenges. This study provides a comprehensive overview of the available literature regarding the use of AR in open surgery, both in clinical and simulated settings. In this way, we aim to analyze the current trends and solutions to help developers and end users discuss and understand the benefits and shortcomings of these systems in open surgery. We performed a PubMed search of the available literature updated to January 2018 using the terms (1) "augmented reality" AND "open surgery", (2) "augmented reality" AND "surgery" NOT "laparoscopic" NOT "laparoscope" NOT "robotic", (3) "mixed reality" AND "open surgery", (4) "mixed reality" AND "surgery" NOT "laparoscopic" NOT "laparoscope" NOT "robotic". The aspects evaluated were the following: real data source, virtual data source, visualization processing modality, tracking modality, registration technique, and AR display type. The initial search yielded 502 studies. After removing duplicates and reading abstracts, a total of 13 relevant studies were chosen. In 1 of the 13 studies, in vitro experiments were performed, while the rest were carried out in a clinical setting including pancreatic, hepatobiliary, and urogenital surgeries. AR systems in open surgery appear to be versatile and reliable tools in the operating room. However, some technological limitations need to be addressed before implementing them in routine practice.
Affiliation(s)
- Benish Fida
- Department of Information Engineering, University of Pisa, Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Fabrizio Cutolo
- Department of Information Engineering, University of Pisa, Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Gregorio di Franco
- General Surgery Unit, Department of Surgery, Translational and New Technologies, University of Pisa, Pisa, Italy
- Mauro Ferrari
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy; Vascular Surgery Unit, Cisanello University Hospital AOUP, Pisa, Italy
- Vincenzo Ferrari
- Department of Information Engineering, University of Pisa, Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
33
Stefan P, Habert S, Winkler A, Lazarovici M, Fürmetz J, Eck U, Navab N. A radiation-free mixed-reality training environment and assessment concept for C-arm-based surgery. Int J Comput Assist Radiol Surg 2018; 13:1335-1344. [PMID: 29943226] [DOI: 10.1007/s11548-018-1807-6]
Abstract
PURPOSE The discrepancy between continuously decreasing opportunities for clinical training and assessment and the increasing complexity of interventions in surgery has led to the development of different training and assessment options such as anatomical models, computer-based simulators, or cadaver training. However, after training and assessment, trainees still face a steep learning curve when proceeding to patient treatment. METHODS To address this problem for C-arm-based surgery, we introduce a realistic radiation-free simulation system that combines patient-based 3D printed anatomy and simulated X-ray imaging using a physical C-arm. To explore the fidelity and usefulness of the proposed mixed-reality system for training and assessment, we conducted a user study with six surgical experts performing a facet joint injection on the simulator. RESULTS In a technical evaluation, we show that our system simulates X-ray images accurately with an RMSE of 1.85 mm compared to real X-ray imaging. The participants expressed agreement with the overall realism of the simulation and the usefulness of the system for assessment, and strong agreement with the usefulness of such a mixed-reality system for training novices and experts. In a quantitative analysis, we furthermore evaluated the suitability of the system for the assessment of surgical skills and gathered preliminary evidence for validity. CONCLUSION The proposed mixed-reality simulation system facilitates a transition to C-arm-based surgery and has the potential to complement or even replace large parts of cadaver training, to provide a safe assessment environment, and to reduce the risk of errors when proceeding to patient treatment. We propose an assessment concept and outline the steps necessary to expand the system into a test instrument that provides reliable and justified assessment scores indicative of surgical proficiency with sufficient evidence for validity.
Affiliation(s)
- Philipp Stefan
- Computer Aided Medical Procedures (CAMP), Technische Universität München, Munich, Germany
- Séverine Habert
- Computer Aided Medical Procedures (CAMP), Technische Universität München, Munich, Germany
- Alexander Winkler
- Computer Aided Medical Procedures (CAMP), Technische Universität München, Munich, Germany
- Marc Lazarovici
- Klinikum der Universität München, Ludwig-Maximilians-Universität München, Munich, Germany
- Julian Fürmetz
- Klinikum der Universität München, Ludwig-Maximilians-Universität München, Munich, Germany
- Ulrich Eck
- Computer Aided Medical Procedures (CAMP), Technische Universität München, Munich, Germany
- Nassir Navab
- Computer Aided Medical Procedures (CAMP), Technische Universität München, Munich, Germany; Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
34
Fotouhi J, Fuerst B, Unberath M, Reichenstein S, Lee SC, Johnson AA, Osgood GM, Armand M, Navab N. Automatic intraoperative stitching of nonoverlapping cone-beam CT acquisitions. Med Phys 2018; 45:2463-2475. [PMID: 29569728] [PMCID: PMC5997569] [DOI: 10.1002/mp.12877]
Abstract
PURPOSE Cone-beam computed tomography (CBCT) is one of the primary imaging modalities in radiation therapy, dentistry, and orthopedic interventions. While CBCT provides crucial intraoperative information, it is bounded by a limited imaging volume, resulting in reduced effectiveness. This paper introduces an approach allowing real-time intraoperative stitching of overlapping and nonoverlapping CBCT volumes to enable 3D measurements on large anatomical structures. METHODS A CBCT-capable mobile C-arm is augmented with a red-green-blue-depth (RGBD) camera. An offline cocalibration of the two imaging modalities results in coregistered video, infrared, and x-ray views of the surgical scene. Then, automatic stitching of multiple small, nonoverlapping CBCT volumes is possible by recovering the relative motion of the C-arm with respect to the patient based on the camera observations. We propose three methods to recover the relative pose: RGB-based tracking of visual markers that are placed near the surgical site, RGBD-based simultaneous localization and mapping (SLAM) of the surgical scene which incorporates both color and depth information for pose estimation, and surface tracking of the patient using only depth data provided by the RGBD sensor. RESULTS On an animal cadaver, we show stitching errors as low as 0.33, 0.91, and 1.72 mm when the visual marker, RGBD SLAM, and surface data are used for tracking, respectively. CONCLUSIONS The proposed method overcomes one of the major limitations of CBCT C-arm systems by integrating vision-based tracking and expanding the imaging volume without any intraoperative use of calibration grids or external tracking systems. We believe this solution to be most appropriate for 3D intraoperative verification of several orthopedic procedures.
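The core of such stitching is recovering the relative C-arm motion between acquisitions from the RGBD camera and mapping it into the CBCT frame through the offline cocalibration. A minimal sketch of that transform chain follows; the frame names, the simple conjugation, and the numeric example are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def se3(t):
    """Build a 4x4 rigid transform with identity rotation and translation t."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

def relative_volume_pose(T_cam1, T_cam2, T_xray_to_cam):
    """Map the tracked camera motion between two acquisitions into the
    CBCT (X-ray) frame using the offline camera/CBCT cocalibration.
    Returns the transform placing volume 2 in volume 1's frame."""
    T_motion = np.linalg.inv(T_cam1) @ T_cam2   # camera-frame motion between scans
    Tx = T_xray_to_cam
    return np.linalg.inv(Tx) @ T_motion @ Tx    # conjugate into the CBCT frame

# Example: the camera moved 100 mm along x between the two scans; with an
# identity cocalibration the volumes are offset by the same amount.
T_rel = relative_volume_pose(np.eye(4), se3([100.0, 0.0, 0.0]), np.eye(4))
print(T_rel[:3, 3])
```

In practice `T_cam1` and `T_cam2` would come from marker tracking, RGBD SLAM, or surface tracking, which is exactly where the three reported error levels differ.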
Affiliation(s)
- Javad Fotouhi
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Bernhard Fuerst
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Mathias Unberath
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Sing Chun Lee
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Alex A. Johnson
- Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Greg M. Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Mehran Armand
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Applied Physics Laboratory, Johns Hopkins University, Laurel, MD, USA
- Nassir Navab
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
35
Ma L, Zhao Z, Zhang B, Jiang W, Fu L, Zhang X, Liao H. Three-dimensional augmented reality surgical navigation with hybrid optical and electromagnetic tracking for distal intramedullary nail interlocking. Int J Med Robot 2018; 14:e1909. [PMID: 29575601] [DOI: 10.1002/rcs.1909]
Affiliation(s)
- Longfei Ma
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Zhe Zhao
- Department of Orthopedics Surgery, Beijing Tsinghua Changgung Hospital, Beijing, China
- Boyu Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Weipeng Jiang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Ligong Fu
- Department of Orthopedics Surgery, Beijing Tsinghua Changgung Hospital, Beijing, China
- Xinran Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
36
Andress S, Johnson A, Unberath M, Winkler AF, Yu K, Fotouhi J, Weidert S, Osgood G, Navab N. On-the-fly augmented reality for orthopedic surgery using a multimodal fiducial. J Med Imaging (Bellingham) 2018; 5:021209. [PMID: 29392161] [DOI: 10.1117/1.jmi.5.2.021209]
Abstract
Fluoroscopic x-ray guidance is a cornerstone of percutaneous orthopedic surgical procedures. However, two-dimensional (2-D) observations of the three-dimensional (3-D) anatomy suffer from the effects of projective simplification. Consequently, many x-ray images from various orientations need to be acquired for the surgeon to accurately assess the spatial relations between the patient's anatomy and the surgical tools. We present an on-the-fly surgical support system that provides guidance using augmented reality and can be used in quasi-unprepared operating rooms. The proposed system builds upon a multimodality marker and a simultaneous localization and mapping technique to cocalibrate an optical see-through head-mounted display to a C-arm fluoroscopy system. Then, annotations on the 2-D x-ray images can be rendered as virtual objects in 3-D, providing surgical guidance. We quantitatively evaluate the components of the proposed system and, finally, design a feasibility study on a semianthropomorphic phantom. The accuracy of our system was comparable to the traditional image-guided technique while substantially reducing the number of acquired x-ray images as well as procedure time. Our promising results encourage further research on the interaction between virtual and real objects, which we believe will directly benefit the proposed method. Further, we would like to explore the capabilities of our on-the-fly augmented reality support system in a larger study directed toward common orthopedic interventions.
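Rendering 2-D x-ray annotations as 3-D virtual objects requires lifting them into space; once two views are calibrated to a common frame, this reduces to linear triangulation. The sketch below is the generic DLT method, not the paper's exact pipeline, and the projection matrices and point are made-up example values.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one annotated point from two
    calibrated views with 3x4 projection matrices P1 and P2."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null-space vector = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

K = np.array([[1000.0, 0, 500], [0, 1000.0, 500], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                 # view 1 at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-200.0], [0], [0]])])   # view 2 shifted
X_true = np.array([50.0, 20.0, 900.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]  # annotation in view 1
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]  # annotation in view 2
print(triangulate(P1, P2, x1, x2))  # recovers X_true
```

With noisy annotations the same least-squares formulation still applies; the SVD simply returns the best-fit point rather than an exact intersection.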
Affiliation(s)
- Sebastian Andress
- Johns Hopkins University, Computer Aided Medical Procedures, Baltimore, Maryland, United States; Ludwig-Maximilians-Universität München, Klinik für Allgemeine, Unfall- und Wiederherstellungschirurgie, Munich, Germany
- Alex Johnson
- Johns Hopkins Hospital, Department of Orthopaedic Surgery, Baltimore, Maryland, United States
- Mathias Unberath
- Johns Hopkins University, Computer Aided Medical Procedures, Baltimore, Maryland, United States
- Alexander Felix Winkler
- Johns Hopkins University, Computer Aided Medical Procedures, Baltimore, Maryland, United States; Technische Universität München, Computer Aided Medical Procedures, Munich, Germany
- Kevin Yu
- Johns Hopkins University, Computer Aided Medical Procedures, Baltimore, Maryland, United States; Technische Universität München, Computer Aided Medical Procedures, Munich, Germany
- Javad Fotouhi
- Johns Hopkins University, Computer Aided Medical Procedures, Baltimore, Maryland, United States
- Simon Weidert
- Ludwig-Maximilians-Universität München, Klinik für Allgemeine, Unfall- und Wiederherstellungschirurgie, Munich, Germany
- Greg Osgood
- Johns Hopkins Hospital, Department of Orthopaedic Surgery, Baltimore, Maryland, United States
- Nassir Navab
- Johns Hopkins University, Computer Aided Medical Procedures, Baltimore, Maryland, United States; Technische Universität München, Computer Aided Medical Procedures, Munich, Germany
37
De Silva T, Punnoose J, Uneri A, Mahesh M, Goerres J, Jacobson M, Ketcha MD, Manbachi A, Vogt S, Kleinszig G, Khanna AJ, Wolinsky JP, Siewerdsen JH, Osgood G. Virtual fluoroscopy for intraoperative C-arm positioning and radiation dose reduction. J Med Imaging (Bellingham) 2018; 5:015005. [PMID: 29487882] [PMCID: PMC5812884] [DOI: 10.1117/1.jmi.5.1.015005]
Abstract
Positioning of an intraoperative C-arm to achieve clear visualization of a particular anatomical feature often involves repeated fluoroscopic views, which cost time and radiation exposure to both the patient and surgical staff. A system for virtual fluoroscopy (called FluoroSim) that could dramatically reduce the time and dose spent "fluoro-hunting" has been developed, leveraging preoperative computed tomography (CT), encoded readout of the C-arm gantry position, and automatic 3D-2D image registration. The method is consistent with existing surgical workflow and does not require additional tracking equipment. Real-time virtual fluoroscopy was achieved via mechanical encoding of the C-arm motion, C-arm geometric calibration, and patient registration using a single radiograph. The accuracy, time, and radiation dose associated with C-arm positioning were measured for FluoroSim in comparison with conventional methods. Five radiology technologists were tasked with acquiring six standard pelvic views pertinent to sacroiliac, anterior-inferior iliac spine, and superior-ramus screw placement in an anthropomorphic pelvis phantom using the conventional and FluoroSim approaches. The positioning accuracy, exposure time, number of exposures, and total time for each trial were recorded, and radiation dose was characterized in terms of entrance skin dose and in-room scatter. The geometric accuracy of FluoroSim was measured to be [Formula: see text]. There was no significant difference ([Formula: see text]) observed in the accuracy or total elapsed time for C-arm positioning. However, the total fluoroscopy time required to achieve the desired view decreased by 4.1 s ([Formula: see text] for conventional, compared with [Formula: see text] for FluoroSim, [Formula: see text]), and the total number of exposures was reduced by 4.0 ([Formula: see text] for conventional, compared with [Formula: see text] for FluoroSim, [Formula: see text]). These reductions amounted to a 50% to 78% decrease in patient entrance skin dose and a 55% to 70% reduction in in-room scatter. FluoroSim was found to reduce the radiation exposure required in C-arm positioning without compromising positioning time or accuracy, providing a potentially valuable tool to assist technologists and surgeons.
Affiliation(s)
- Tharindu De Silva
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Joshua Punnoose
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Mahadevappa Mahesh
- Johns Hopkins University, Russell H. Morgan Department of Radiology, Baltimore, Maryland, United States
- Joseph Goerres
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Matthew Jacobson
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Michael D. Ketcha
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Amir Manbachi
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Akhil Jay Khanna
- Johns Hopkins University, Orthopaedic Surgery, Baltimore, Maryland, United States
- Jean-Paul Wolinsky
- Johns Hopkins University, Department of Neurosurgery, Baltimore, Maryland, United States
- Jeffrey H. Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Russell H. Morgan Department of Radiology, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Neurosurgery, Baltimore, Maryland, United States
- Greg Osgood
- Johns Hopkins University, Orthopaedic Surgery, Baltimore, Maryland, United States
38
von der Heide AM, Fallavollita P, Wang L, Sandner P, Navab N, Weidert S, Euler E. Camera-augmented mobile C-arm (CamC): A feasibility study of augmented reality imaging in the operating room. Int J Med Robot 2017; 14. [PMID: 29266806 DOI: 10.1002/rcs.1885] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2016] [Revised: 09/24/2017] [Accepted: 11/16/2017] [Indexed: 11/09/2022]
Abstract
BACKGROUND In orthopaedic trauma surgery, image-guided procedures are mostly based on fluoroscopy, and reducing radiation exposure is an important goal. The purpose of this work was to investigate the impact of a camera-augmented mobile C-arm (CamC) on radiation exposure and the surgical workflow during a first clinical trial. METHODS Applying a workflow-oriented approach, 10 general workflow steps were defined to compare the CamC to traditional C-arms. The surgeries included were arbitrarily identified and assigned to the study. The evaluation criteria were radiation exposure and operation time for each workflow step and for the entire surgery. The evaluation protocol was designed and conducted as a single-centre study. RESULTS Radiation exposure was remarkably reduced, by 18 X-ray shots (46%), using the CamC while keeping surgery times similar. CONCLUSIONS The intuitiveness of the system, its easy integration into the surgical workflow, and its great potential to reduce radiation have been demonstrated.
Affiliation(s)
- Anna Maria von der Heide
- Klinik für Allgemeine, Unfall- und Wiederherstellungschirurgie, Klinikum der Universität München, Germany
- Pascal Fallavollita
- Chair for Computer Aided Medical Procedures & Augmented Reality, Technische Universität München, Germany; Interdisciplinary School of Health Sciences, University of Ottawa, Canada
- Lejing Wang
- Chair for Computer Aided Medical Procedures & Augmented Reality, Technische Universität München, Germany
- Philipp Sandner
- TUM School of Management, Technische Universität München, Germany
- Nassir Navab
- Chair for Computer Aided Medical Procedures & Augmented Reality, Technische Universität München, Germany; Johns Hopkins University, Baltimore, Maryland, USA
- Simon Weidert
- Klinik für Allgemeine, Unfall- und Wiederherstellungschirurgie, Klinikum der Universität München, Germany
- Ekkehard Euler
- Klinik für Allgemeine, Unfall- und Wiederherstellungschirurgie, Klinikum der Universität München, Germany
39
Dang H, Stayman JW, Xu J, Zbijewski W, Sisniega A, Mow M, Wang X, Foos DH, Aygun N, Koliatsos VE, Siewerdsen JH. Task-based statistical image reconstruction for high-quality cone-beam CT. Phys Med Biol 2017; 62:8693-8719. [PMID: 28976368 DOI: 10.1088/1361-6560/aa90fd] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms that encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR, viz. penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization in which regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index ([Formula: see text]). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: a conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in [Formula: see text], and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability by up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data.
The task-driven reconstruction method presents a promising regularization method in MBIR by explicitly incorporating task-based imaging performance as the objective. The results demonstrate improved ICH conspicuity and support the development of high-quality CBCT systems.
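The PWLS objective named above can be made concrete with a deliberately tiny sketch: an identity forward model (denoising) and a quadratic first-difference roughness penalty, solved in closed form. The real MBIR problem uses the CT system matrix instead of the identity, and the paper's task-based method additionally varies the penalty strength spatially; everything below is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def pwls_denoise(y, w, beta):
    """Closed-form PWLS with an identity forward model:
        x* = argmin_x (y - x)^T W (y - x) + beta * ||D x||^2
    where D is the first-difference operator and W = diag(w)."""
    n = len(y)
    W = np.diag(w)
    D = np.diff(np.eye(n), axis=0)   # first-difference operator, (n-1) x n
    return np.linalg.solve(W + beta * D.T @ D, W @ y)

y = np.array([0.0, 0.0, 10.0, 0.0, 0.0])           # a noisy "spike"
x_weak = pwls_denoise(y, np.ones(5), beta=0.01)    # nearly reproduces y
x_strong = pwls_denoise(y, np.ones(5), beta=10.0)  # heavily smoothed
```

Raising `beta` trades resolution for noise, exactly the knob that the task-driven method tunes locally to maximize detectability.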
Affiliation(s)
- Hao Dang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
40
Lee SC, Fuerst B, Tateno K, Johnson A, Fotouhi J, Osgood G, Tombari F, Navab N. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery. Healthc Technol Lett 2017; 4:168-173. [PMID: 29184659 PMCID: PMC5683202 DOI: 10.1049/htl.2017.0066] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2017] [Accepted: 08/02/2017] [Indexed: 12/12/2022] Open
Abstract
Orthopaedic surgeons still follow the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of surgical tools occluded by the hand. The proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion.
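The ICP calibration mentioned above alternates nearest-neighbour matching with a closed-form rigid fit of the matched pairs. That inner step, a least-squares rigid transform via SVD (the Kabsch solution), can be sketched as follows; the point sets here are synthetic, and this is the generic algorithm rather than the authors' code:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) mapping point rows of P onto Q:
    the closed-form (Kabsch/SVD) step inside ICP."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: recover a known rotation about z and a translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true

R, t = rigid_align(P, Q)
err = np.abs(Q - (P @ R.T + t)).max()   # ~0 for noise-free correspondences
```

Full ICP would re-estimate correspondences between the RGB-D surface and the CBCT surface after each such fit and iterate to convergence.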
Affiliation(s)
- Sing Chun Lee
- Computer Aided Medical Procedures, Laboratory for Computational Sensing & Robotics, Johns Hopkins University, Baltimore, MD, USA
- Keisuke Tateno
- Fakultät für Informatik, Lehrstuhl für Informatikanwendungen in der Medizin & Augmented Reality, Technische Universität München, Garching, Bayern, Germany; Canon Inc., Shimomaruko, Tokyo, Japan
- Alex Johnson
- Orthopaedic Trauma, Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Javad Fotouhi
- Computer Aided Medical Procedures, Laboratory for Computational Sensing & Robotics, Johns Hopkins University, Baltimore, MD, USA
- Greg Osgood
- Orthopaedic Trauma, Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Federico Tombari
- Fakultät für Informatik, Lehrstuhl für Informatikanwendungen in der Medizin & Augmented Reality, Technische Universität München, Garching, Bayern, Germany
- Nassir Navab
- Computer Aided Medical Procedures, Laboratory for Computational Sensing & Robotics, Johns Hopkins University, Baltimore, MD, USA; Fakultät für Informatik, Lehrstuhl für Informatikanwendungen in der Medizin & Augmented Reality, Technische Universität München, Garching, Bayern, Germany
41
Cutolo F, Meola A, Carbone M, Sinceri S, Cagnazzo F, Denaro E, Esposito N, Ferrari M, Ferrari V. A new head-mounted display-based augmented reality system in neurosurgical oncology: a study on phantom. Comput Assist Surg (Abingdon) 2017; 22:39-53. [PMID: 28754068 DOI: 10.1080/24699322.2017.1358400] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022] Open
Affiliation(s)
- Fabrizio Cutolo
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Department of Information Engineering, University of Pisa, Pisa, Italy
- Antonio Meola
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Marina Carbone
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Sara Sinceri
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Ennio Denaro
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Nicola Esposito
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Mauro Ferrari
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Department of Vascular Surgery, Pisa University Medical School, Pisa, Italy
- Vincenzo Ferrari
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Department of Information Engineering, University of Pisa, Pisa, Italy
42
Augmented Reality in Neurosurgery: A Review of Current Concepts and Emerging Applications. Can J Neurol Sci 2017; 44:235-245. [PMID: 28434425 DOI: 10.1017/cjn.2016.443] [Citation(s) in RCA: 63] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Augmented reality (AR) superimposes computer-generated virtual objects onto the user's view of the real world. Among medical disciplines, neurosurgery has long been at the forefront of image-guided surgery, and it continues to push the frontiers of AR technology in the operating room. In this systematic review, we explore the history of AR in neurosurgery and examine the literature on current neurosurgical applications of AR. Significant challenges to surgical AR exist, including compounded sources of registration error, impaired depth perception, visual and tactile temporal asynchrony, and operator inattentional blindness. Nevertheless, the ability to accurately display multiple three-dimensional datasets congruently over the area where they are most useful, coupled with future advances in imaging, registration, display technology, and robotic actuation, portend a promising role for AR in the neurosurgical operating room.
43
Dang H, Stayman JW, Sisniega A, Zbijewski W, Xu J, Wang X, Foos DH, Aygun N, Koliatsos VE, Siewerdsen JH. Multi-resolution statistical image reconstruction for mitigation of truncation effects: application to cone-beam CT of the head. Phys Med Biol 2016; 62:539-559. [PMID: 28033118 DOI: 10.1088/1361-6560/aa52b8] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
A prototype cone-beam CT (CBCT) head scanner featuring model-based iterative reconstruction (MBIR) has recently been developed and has demonstrated the potential for reliable detection of acute intracranial hemorrhage (ICH), which is vital to diagnosis of traumatic brain injury and hemorrhagic stroke. However, data truncation (e.g. due to the head holder) can result in artifacts that reduce image uniformity and challenge ICH detection. We propose a multi-resolution MBIR method with an extended reconstruction field of view (RFOV) to mitigate truncation effects in CBCT of the head. The image volume includes a fine voxel size in the (inner) non-truncated region and a coarse voxel size in the (outer) truncated region. This multi-resolution scheme allows extension of the RFOV to mitigate truncation effects while introducing minimal increase in computational complexity. The multi-resolution method was incorporated in a penalized weighted least-squares (PWLS) reconstruction framework previously developed for CBCT of the head. Experiments involving an anthropomorphic head phantom with truncation due to a carbon-fiber holder showed severe artifacts in conventional single-resolution PWLS, whereas extending the RFOV within the multi-resolution framework strongly reduced truncation artifacts. For the same extended RFOV, the multi-resolution approach reduced computation time compared to the single-resolution approach (viz. time reduced by 40.7%, 83.0%, and over 95% for image volumes of 600³, 800³, and 1000³ voxels). Algorithm parameters (e.g. regularization strength, the ratio of the fine and coarse voxel sizes, and RFOV size) were investigated to guide reliable parameter selection. The findings provide a promising method for truncation artifact reduction in CBCT and may be useful for other MBIR methods and applications for which truncation is a challenge.
Affiliation(s)
- Hao Dang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD 21205, USA
44
Wang X, Habert S, Zu Berge CS, Fallavollita P, Navab N. Inverse visualization concept for RGB-D augmented C-arms. Comput Biol Med 2016; 77:135-47. [PMID: 27544070 DOI: 10.1016/j.compbiomed.2016.08.008] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2016] [Revised: 08/03/2016] [Accepted: 08/10/2016] [Indexed: 11/19/2022]
Abstract
X-ray remains the essential imaging modality for many minimally invasive interventions. Overlaying X-ray images with an optical view of the surgical scene has been demonstrated to be an efficient way to reduce radiation exposure and surgery time. However, clinicians are recommended to place the X-ray source under the patient table, while the optical view of the real scene must be captured from the top in order to see the patient, surgical tools, and the surgical site. With the help of an RGB-D (red-green-blue-depth) camera, which can measure depth in addition to color, the 3D model of the real scene is registered to the X-ray image. However, fusing two opposing viewpoints and visualizing them in the context of medical applications has never been attempted. In this paper, we report first experiences with a novel inverse visualization technique for RGB-D augmented C-arms. A user study with 16 participants demonstrated that our method provides a meaningful visualization with potential to give clinicians multi-modal fused data in real-time during surgery.
Affiliation(s)
- Xiang Wang
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China; Computer Aided Medical Procedures, Technische Universität München, Germany
- Severine Habert
- Computer Aided Medical Procedures, Technische Universität München, Germany
- Nassir Navab
- Computer Aided Medical Procedures, Technische Universität München, Germany; Johns Hopkins University, Baltimore, MD, USA
45
Madan H, Pernuš F, Likar B, Špiclin Ž. A framework for automatic creation of gold-standard rigid 3D–2D registration datasets. Int J Comput Assist Radiol Surg 2016; 12:263-275. [DOI: 10.1007/s11548-016-1482-4] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2016] [Accepted: 08/31/2016] [Indexed: 10/21/2022]
46
Robust and Accurate Algorithm for Wearable Stereoscopic Augmented Reality with Three Indistinguishable Markers. Electronics 2016. [DOI: 10.3390/electronics5030059] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
47
Beijst C, Elschot M, van der Velden S, de Jong HWAM. Multimodality calibration for simultaneous fluoroscopic and nuclear imaging. EJNMMI Phys 2016; 3:20. [PMID: 27576333 PMCID: PMC5005238 DOI: 10.1186/s40658-016-0156-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2016] [Accepted: 08/18/2016] [Indexed: 02/08/2023] Open
Abstract
Background Simultaneous real-time fluoroscopic and nuclear imaging could benefit image-guided (oncological) procedures. To this end, a hybrid modality is currently being developed by our group, by combining a C-arm with a gamma camera and a four-pinhole collimator. Accurate determination of the system parameters that describe the position of the x-ray tube, x-ray detector, gamma camera, and collimators is crucial to optimize image quality. The purpose of this study was to develop a calibration method that estimates the system parameters used for reconstruction. A multimodality phantom consisting of five point sources was created. First, nuclear and fluoroscopic images of the phantom were acquired at several distances from the image intensifier. The system parameters were acquired using physical measurement, and multimodality images of the phantom were reconstructed. The resolution and co-registration error of the point sources were determined as a measure of image quality. Next, the system parameters were estimated using a calibration method, which adjusted the parameters in the reconstruction algorithm until the resolution and co-registration were optimized. For evaluation, multimodality images of a second set of phantom acquisitions were reconstructed using calibrated parameter sets. Subsequently, the resolution and co-registration error of the point sources were determined as a measure of image quality. This procedure was performed five times for different noise simulations. In addition, simultaneously acquired fluoroscopic and nuclear images of two moving syringes were obtained with parameter sets from before and after calibration. Results The mean FWHM was significantly lower after calibration than before calibration for 21 out of 25 point sources. The mean co-registration error was significantly lower after calibration than before calibration for all point sources.
The simultaneously acquired fluoroscopic and nuclear images showed improved co-registration after calibration as compared with before calibration. Conclusions A calibration method was presented that improves the resolution and co-registration of simultaneously acquired hybrid fluoroscopic and nuclear images by estimating the geometric parameter set as compared with a parameter set acquired by direct physical measurement. Electronic supplementary material The online version of this article (doi:10.1186/s40658-016-0156-1) contains supplementary material, which is available to authorized users.
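Resolution in this study is scored as the FWHM of each reconstructed point-source profile. A minimal sketch of a 1-D FWHM measurement with sub-sample linear interpolation (NumPy; the profile below is a synthetic Gaussian, not data from the hybrid system):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled 1-D point-source profile,
    interpolating linearly at the two half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]
    # left crossing: y rises through `half` between samples i-1 and i
    xl = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    # right crossing: y falls through `half` between samples j and j+1
    xr = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return xr - xl

sigma = 2.0
x = np.linspace(-10.0, 10.0, 401)
y = np.exp(-x**2 / (2.0 * sigma**2))
print(round(fwhm(x, y), 2))  # ~4.71, i.e. 2*sqrt(2*ln 2)*sigma
```

For a Gaussian profile the analytic relation FWHM = 2*sqrt(2 ln 2)*sigma ≈ 2.355*sigma serves as a sanity check on the numerical estimate.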
Affiliation(s)
- Casper Beijst
- Radiology and Nuclear Medicine, UMC Utrecht, P.O. Box 85500, 3508 GA, Utrecht, the Netherlands; Image Sciences Institute, UMC Utrecht, P.O. Box 85500, 3508 GA, Utrecht, the Netherlands
- Mattijs Elschot
- Radiology and Nuclear Medicine, UMC Utrecht, P.O. Box 85500, 3508 GA, Utrecht, the Netherlands; Department of Circulation and Medical Imaging, Faculty of Medicine, Norwegian University of Science and Technology, Trondheim, Norway
- Sandra van der Velden
- Radiology and Nuclear Medicine, UMC Utrecht, P.O. Box 85500, 3508 GA, Utrecht, the Netherlands; Image Sciences Institute, UMC Utrecht, P.O. Box 85500, 3508 GA, Utrecht, the Netherlands
- Hugo W A M de Jong
- Radiology and Nuclear Medicine, UMC Utrecht, P.O. Box 85500, 3508 GA, Utrecht, the Netherlands
48
Mitrović U, Pernuš F, Likar B, Špiclin Ž. Simultaneous 3D-2D image registration and C-arm calibration: Application to endovascular image-guided interventions. Med Phys 2016; 42:6433-47. [PMID: 26520733 DOI: 10.1118/1.4932626] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Three-dimensional to two-dimensional (3D-2D) image registration is a key to fusion and simultaneous visualization of valuable information contained in 3D pre-interventional and 2D intra-interventional images with the final goal of image guidance of a procedure. In this paper, the authors focus on 3D-2D image registration within the context of intracranial endovascular image-guided interventions (EIGIs), where the 3D and 2D images are generally acquired with the same C-arm system. The accuracy and robustness of any 3D-2D registration method, to be used in a clinical setting, is influenced by (1) the method itself, (2) uncertainty of initial pose of the 3D image from which registration starts, (3) uncertainty of C-arm's geometry and pose, and (4) the number of 2D intra-interventional images used for registration, which is generally one and at most two. The study of these influences requires rigorous and objective validation of any 3D-2D registration method against a highly accurate reference or "gold standard" registration, performed on clinical image datasets acquired in the context of the intervention. METHODS The registration process is split into two sequential, i.e., initial and final, registration stages. The initial stage is either machine-based or template matching. The latter aims to reduce possibly large in-plane translation errors by matching a projection of the 3D vessel model and 2D image. In the final registration stage, four state-of-the-art intrinsic image-based 3D-2D registration methods, which involve simultaneous refinement of rigid-body and C-arm parameters, are evaluated. For objective validation, the authors acquired an image database of 15 patients undergoing cerebral EIGI, for which accurate gold standard registrations were established by fiducial marker coregistration. 
RESULTS Based on target registration error, the obtained success rates of 3D to a single 2D image registration after initial machine-based and template matching and final registration involving C-arm calibration were 36%, 73%, and 93%, respectively, while registration accuracy of 0.59 mm was the best after final registration. By compensating in-plane translation errors by initial template matching, the success rates achieved after the final stage improved consistently for all methods, especially if C-arm calibration was performed simultaneously with the 3D-2D image registration. CONCLUSIONS Because the tested methods perform simultaneous C-arm calibration and 3D-2D registration based solely on anatomical information, they have a high potential for automation and thus for an immediate integration into current interventional workflow. One of the authors' main contributions is also comprehensive and representative validation performed under realistic conditions as encountered during cerebral EIGI.
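The success rates above are thresholded on target registration error (TRE) against a fiducial-based gold standard. The metric itself is simple: map a set of target points through the estimated and gold-standard transforms and average the point-wise distances. A minimal sketch with hypothetical targets and 4x4 homogeneous transforms (not the study's data):

```python
import numpy as np

def tre(targets, T_est, T_gold):
    """Mean target registration error between an estimated and a
    gold-standard rigid transform, both 4x4 homogeneous matrices."""
    h = np.hstack([targets, np.ones((len(targets), 1))])
    d = (h @ T_est.T - h @ T_gold.T)[:, :3]
    return float(np.linalg.norm(d, axis=1).mean())

targets = np.array([[0.0, 0.0, 0.0],
                    [10.0, 5.0, 2.0],
                    [-3.0, 4.0, 1.0]])
T_gold = np.eye(4)
T_est = np.eye(4)
T_est[:3, 3] = [3.0, 0.0, 4.0]       # pure 5 mm translation error
print(tre(targets, T_est, T_gold))   # 5.0
```

With a pure translation error every target is displaced by the same distance; rotational errors instead grow with the targets' distance from the rotation axis, which is why TRE is reported at clinically relevant target locations.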
Affiliation(s)
- Uroš Mitrović
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, Ljubljana 1000, Slovenia; Cosylab, Control System Laboratory, Teslova ulica 30, Ljubljana 1000, Slovenia
- Franjo Pernuš
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, Ljubljana 1000, Slovenia
- Boštjan Likar
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, Ljubljana 1000, Slovenia; Sensum, Computer Vision Systems, Tehnološki Park 21, Ljubljana 1000, Slovenia
- Žiga Špiclin
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, Ljubljana 1000, Slovenia; Sensum, Computer Vision Systems, Tehnološki Park 21, Ljubljana 1000, Slovenia
49
Albiol F, Corbi A, Albiol A. Geometrical Calibration of X-Ray Imaging With RGB Cameras for 3D Reconstruction. IEEE Trans Med Imaging 2016; 35:1952-1961. [PMID: 26978665 DOI: 10.1109/tmi.2016.2540929] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
We present a methodology to recover the geometrical calibration of conventional X-ray settings with the help of an ordinary video camera and visible fiducials that are present in the scene. After calibration, equivalent points of interest can be easily identified with the help of the epipolar geometry. The same procedure also allows the measurement of real anatomic lengths and angles and obtains accurate 3D locations from image points. Our approach completely eliminates the need for X-ray-opaque reference marks (and the necessary supporting frames), which can sometimes be invasive for the patient, occlude the radiographic picture, and end up projected outside the imaging sensor area in oblique protocols. Two possible frameworks are envisioned: a spatially shifting X-ray anode around the patient/object, and a patient that moves/rotates while the imaging system remains fixed. As a proof of concept, experiments with a device under test (DUT), an anthropomorphic phantom, and a real brachytherapy session were carried out. The results show that it is possible to identify common points with a proper level of accuracy and retrieve three-dimensional locations, lengths, and shapes with a millimetric level of precision. The presented approach is simple and compatible with both current and legacy widespread diagnostic X-ray imaging deployments, and it can represent a good and inexpensive alternative to other radiological modalities like CT.
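Once the two views are jointly calibrated, 3D locations follow from standard two-view triangulation. A minimal linear (DLT) sketch with synthetic projection matrices (NumPy; the toy cameras below are illustrative, not the paper's actual X-ray/RGB geometry):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views with
    known 3x4 projection matrices: build A X = 0 from the cross-product
    constraints and take the SVD null vector."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy pinhole cameras: identity intrinsics, second shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

For noise-free observations the recovered point matches the ground truth to numerical precision; with real detections one would typically refine the DLT estimate by minimizing reprojection error.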
50
Personalized, relevance-based Multimodal Robotic Imaging and augmented reality for Computer Assisted Interventions. Med Image Anal 2016; 33:64-71. [PMID: 27475417 DOI: 10.1016/j.media.2016.06.021] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2016] [Revised: 06/12/2016] [Accepted: 06/15/2016] [Indexed: 11/21/2022]
Abstract
In the last decade, many researchers in medical image computing and computer assisted interventions across the world have focused on the development of the Virtual Physiological Human (VPH), aiming at changing the practice of medicine from the classification and treatment of diseases to the modeling and treatment of patients. These projects resulted in major advancements in segmentation, registration, and morphological, physiological and biomechanical modeling based on state-of-the-art medical imaging as well as other sensory data. However, a major issue that has not yet come into focus is personalizing intra-operative imaging to allow for optimal treatment. In this paper, we discuss the personalization of the imaging and visualization process, with particular focus on satisfying the challenging requirements of computer assisted interventions. We discuss these requirements and review a series of scientific contributions made by our research team to tackle some of these major challenges.