1. Fotouhi J, Liu X, Armand M, Navab N, Unberath M. Reconstruction of Orthographic Mosaics From Perspective X-Ray Images. IEEE Trans Med Imaging 2021; 40:3165-3177. [PMID: 34181536] [DOI: 10.1109/tmi.2021.3093198]
Abstract
Image stitching is a prominent challenge in medical imaging, where the limited field of view captured by single images prohibits holistic analysis of patient anatomy. The barrier preventing straightforward mosaicking of 2D images is depth mismatch due to parallax. In this work, we leverage the Fourier slice theorem to aggregate information from multiple transmission images in parallax-free domains using fundamental principles of X-ray image formation. The details of the stitched image are subsequently restored using a novel deep learning strategy that exploits similarity measures designed around frequency as well as dense and sparse spatial image content. Our work provides evidence that reconstruction of orthographic mosaics is possible with realistic C-arm motions involving both translation and rotation. We also show that these orthographic mosaics enable metric measurements of clinically relevant quantities directly on the 2D image plane.
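A note on the metric-measurement claim: because an orthographic projection has uniform magnification, every pixel spans the same physical extent, so 2D distances follow directly from pixel coordinates. A minimal sketch (the pixel spacing and landmark coordinates are hypothetical, not from the paper):

```python
import numpy as np

def metric_distance(p_px, q_px, spacing_mm):
    """Euclidean distance (mm) between two pixel locations on an
    orthographic image, where every pixel covers `spacing_mm` of anatomy.
    On a perspective X-ray this would be invalid: magnification varies
    with the depth of the imaged structure."""
    p = np.asarray(p_px, dtype=float)
    q = np.asarray(q_px, dtype=float)
    return float(np.linalg.norm(p - q) * spacing_mm)

# Example: two landmarks 300 px apart horizontally and 400 px vertically,
# on a mosaic with a (hypothetical) 0.2 mm isotropic pixel spacing.
d = metric_distance((100, 100), (400, 500), spacing_mm=0.2)
# 500 px * 0.2 mm/px = 100 mm
```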
2. Fotouhi J, Fuerst B, Unberath M, Reichenstein S, Lee SC, Johnson AA, Osgood GM, Armand M, Navab N. Automatic intraoperative stitching of nonoverlapping cone-beam CT acquisitions. Med Phys 2018; 45:2463-2475. [PMID: 29569728] [PMCID: PMC5997569] [DOI: 10.1002/mp.12877]
Abstract
PURPOSE Cone-beam computed tomography (CBCT) is one of the primary imaging modalities in radiation therapy, dentistry, and orthopedic interventions. While CBCT provides crucial intraoperative information, it is bounded by a limited imaging volume, resulting in reduced effectiveness. This paper introduces an approach allowing real-time intraoperative stitching of overlapping and nonoverlapping CBCT volumes to enable 3D measurements on large anatomical structures. METHODS A CBCT-capable mobile C-arm is augmented with a red-green-blue-depth (RGBD) camera. An offline cocalibration of the two imaging modalities results in coregistered video, infrared, and x-ray views of the surgical scene. Then, automatic stitching of multiple small, nonoverlapping CBCT volumes is possible by recovering the relative motion of the C-arm with respect to the patient based on the camera observations. We propose three methods to recover the relative pose: RGB-based tracking of visual markers that are placed near the surgical site, RGBD-based simultaneous localization and mapping (SLAM) of the surgical scene which incorporates both color and depth information for pose estimation, and surface tracking of the patient using only depth data provided by the RGBD sensor. RESULTS On an animal cadaver, we show stitching errors as low as 0.33, 0.91, and 1.72 mm when the visual marker, RGBD SLAM, and surface data are used for tracking, respectively. CONCLUSIONS The proposed method overcomes one of the major limitations of CBCT C-arm systems by integrating vision-based tracking and expanding the imaging volume without any intraoperative use of calibration grids or external tracking systems. We believe this solution to be most appropriate for 3D intraoperative verification of several orthopedic procedures.
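Once the relative C-arm motion is recovered, the stitching step amounts to expressing each CBCT volume in a common frame by chaining rigid transforms, then resampling. A sketch of the transform chaining with 4x4 homogeneous matrices (the poses below are made-up illustrative values, not the paper's calibration):

```python
import numpy as np

def rigid(rot_z_deg=0.0, t=(0, 0, 0)):
    """Homogeneous 4x4 transform: rotation about z, then translation."""
    a = np.deg2rad(rot_z_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:3, 3] = t
    return T

# Hypothetical camera-estimated C-arm poses at the two acquisitions.
world_T_scan1 = rigid(0.0, (0, 0, 0))
world_T_scan2 = rigid(5.0, (120, 0, 0))   # C-arm moved ~12 cm between scans

# A physical point in volume-2 coordinates ...
p_vol2 = np.array([10.0, 20.0, 30.0, 1.0])
# ... mapped into volume-1's frame, where the stitched volume is assembled.
vol1_T_vol2 = np.linalg.inv(world_T_scan1) @ world_T_scan2
p_vol1 = vol1_T_vol2 @ p_vol2
```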
Affiliation(s)
- Javad Fotouhi: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Bernhard Fuerst: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Mathias Unberath: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Sing Chun Lee: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Alex A. Johnson: Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Greg M. Osgood: Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Mehran Armand: Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA; Applied Physics Laboratory, Johns Hopkins University, Laurel, MD, USA
- Nassir Navab: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA; Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
3. Augmented reality technology for preoperative planning and intraoperative navigation during hepatobiliary surgery: A review of current methods. Hepatobiliary Pancreat Dis Int 2018; 17:101-112. [PMID: 29567047] [DOI: 10.1016/j.hbpd.2018.02.002]
Abstract
BACKGROUND Augmented reality (AR) technology is used to reconstruct three-dimensional (3D) images of hepatic and biliary structures from computed tomography and magnetic resonance imaging data, and to superimpose the virtual images onto a view of the surgical field. In liver surgery, these superimposed virtual images help the surgeon to visualize intrahepatic structures and, therefore, to operate precisely and improve clinical outcomes. DATA SOURCES The keywords "augmented reality", "liver", "laparoscopic" and "hepatectomy" were used to search for publications in the PubMed database. The primary sources were peer-reviewed journals up to December 2016. Additional articles were identified by manually searching the references of the key articles. RESULTS AR technology mainly comprises 3D reconstruction, display, registration, and tracking techniques, and has gradually been adopted for liver surgeries, including laparoscopy and laparotomy, with video-based AR-assisted laparoscopic resection as the main technical application. By applying AR technology, blood vessels and tumor structures in the liver can be displayed during surgery, permitting precise navigation during complex surgical procedures. Liver transformation and registration errors during surgery were the main factors limiting the application of AR technology. CONCLUSIONS With recent advances, AR technologies have the potential to improve hepatobiliary surgical procedures. However, additional clinical studies will be required to evaluate AR as a tool for reducing postoperative morbidity and mortality and for improving long-term clinical outcomes. Future research is needed in the fusion of multiple imaging modalities, improving biomechanical liver modeling, and enhancing image data processing and tracking technologies to increase the accuracy of current AR methods.
4. Andress S, Johnson A, Unberath M, Winkler AF, Yu K, Fotouhi J, Weidert S, Osgood G, Navab N. On-the-fly augmented reality for orthopedic surgery using a multimodal fiducial. J Med Imaging (Bellingham) 2018; 5:021209. [PMID: 29392161] [DOI: 10.1117/1.jmi.5.2.021209]
Abstract
Fluoroscopic x-ray guidance is a cornerstone for percutaneous orthopedic surgical procedures. However, two-dimensional (2-D) observations of the three-dimensional (3-D) anatomy suffer from the effects of projective simplification. Consequently, many x-ray images from various orientations need to be acquired for the surgeon to accurately assess the spatial relations between the patient's anatomy and the surgical tools. We present an on-the-fly surgical support system that provides guidance using augmented reality and can be used in quasi-unprepared operating rooms. The proposed system builds upon a multimodality marker and a simultaneous localization and mapping technique to cocalibrate an optical see-through head-mounted display to a C-arm fluoroscopy system. Annotations on the 2-D x-ray images can then be rendered as virtual objects in 3-D, providing surgical guidance. We quantitatively evaluate the components of the proposed system and, finally, conduct a feasibility study on a semianthropomorphic phantom. The accuracy of our system was comparable to the traditional image-guided technique while substantially reducing the number of acquired x-ray images as well as procedure time. Our promising results encourage further research on the interaction between virtual and real objects, which we believe will directly benefit the proposed method. Further, we would like to explore the capabilities of our on-the-fly augmented reality support system in a larger study directed toward common orthopedic interventions.
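Rendering a 2-D annotation as a 3-D virtual object requires lifting it off the image plane: each annotated pixel back-projects to a ray through the X-ray source, and rays from two (or more) calibrated views localize a 3-D point. A least-squares sketch under an idealized ray model (the source positions and target are synthetic, not the system's calibration):

```python
import numpy as np

def closest_point_to_rays(origins, dirs):
    """Least-squares 3D point minimizing squared distance to a set of rays.
    Each ray: origin o + s*d with unit direction d (back-projected from an
    annotated pixel through the X-ray source)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projects onto the ray's normal space
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Synthetic check: two C-arm source positions (hypothetical) view the same
# anatomical target; the two annotation rays should meet at that target.
target = np.array([5.0, -2.0, 30.0])
src1, src2 = np.array([0.0, 0.0, 0.0]), np.array([100.0, 0.0, 0.0])
p = closest_point_to_rays([src1, src2], [target - src1, target - src2])
```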
Affiliation(s)
- Sebastian Andress: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, Maryland, United States; Klinik für Allgemeine, Unfall- und Wiederherstellungschirurgie, Ludwig-Maximilians-Universität München, Munich, Germany
- Alex Johnson: Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, Maryland, United States
- Mathias Unberath: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, Maryland, United States
- Alexander Felix Winkler: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, Maryland, United States; Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Kevin Yu: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, Maryland, United States; Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Javad Fotouhi: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, Maryland, United States
- Simon Weidert: Klinik für Allgemeine, Unfall- und Wiederherstellungschirurgie, Ludwig-Maximilians-Universität München, Munich, Germany
- Greg Osgood: Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, Maryland, United States
- Nassir Navab: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, Maryland, United States; Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
5. Range Imaging for Motion Compensation in C-Arm Cone-Beam CT of Knees under Weight-Bearing Conditions. J Imaging 2018. [DOI: 10.3390/jimaging4010013]
6. Fotouhi J, Alexander CP, Unberath M, Taylor G, Lee SC, Fuerst B, Johnson A, Osgood G, Taylor RH, Khanuja H, Armand M, Navab N. Plan in 2-D, execute in 3-D: an augmented reality solution for cup placement in total hip arthroplasty. J Med Imaging (Bellingham) 2018; 5:021205. [PMID: 29322072] [DOI: 10.1117/1.jmi.5.2.021205]
Abstract
Reproducibly achieving proper implant alignment is a critical step in total hip arthroplasty procedures that has been shown to substantially affect patient outcomes. In current practice, correct alignment of the acetabular cup is verified in C-arm x-ray images acquired in an anterior-posterior (AP) view. A favorable surgical outcome is, therefore, heavily dependent on the surgeon's experience in understanding the 3-D orientation of a hemispheric implant from 2-D AP projection images. This work proposes an easy-to-use intraoperative component planning system based on two C-arm x-ray images, combined with 3-D augmented reality (AR) visualization that simplifies impactor and cup placement according to the plan by providing a real-time RGBD data overlay. We evaluate the feasibility of our system in a user study comprising four orthopedic surgeons at the Johns Hopkins Hospital and report errors in translation, anteversion, and abduction as low as 1.98 mm, 1.10 deg, and 0.53 deg, respectively. The promising performance of this AR solution shows that deploying this system could eliminate the need for excessive radiation, simplify the intervention, and enable reproducibly accurate placement of acetabular implants.
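The anteversion and abduction errors compare angles recovered from the planned versus achieved cup axes. Under the standard radiographic definitions, both angles follow directly from the cup-axis unit vector once an anatomical frame is fixed; the sketch below assumes x = lateral, y = anterior, z = superior, which is an illustrative convention rather than the paper's implementation:

```python
import numpy as np

def cup_angles(axis):
    """Radiographic abduction (inclination) and anteversion, in degrees,
    of an acetabular-cup axis given in an anatomical frame with
    x = lateral, y = anterior, z = superior (assumed convention)."""
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)
    anteversion = np.degrees(np.arcsin(n[1]))       # tilt out of the coronal plane
    abduction = np.degrees(np.arctan2(n[0], n[2]))  # angle within the coronal plane
    return abduction, anteversion

# Build an axis with known 40 deg abduction / 15 deg anteversion, recover it.
ab, av = np.radians(40.0), np.radians(15.0)
axis = [np.cos(av) * np.sin(ab), np.sin(av), np.cos(av) * np.cos(ab)]
abduction, anteversion = cup_angles(axis)
```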
Affiliation(s)
- Javad Fotouhi: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, United States
- Clayton P. Alexander: Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, United States
- Mathias Unberath: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, United States
- Giacomo Taylor: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, United States
- Sing Chun Lee: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, United States
- Bernhard Fuerst: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, United States
- Alex Johnson: Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, United States
- Greg Osgood: Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, United States
- Russell H. Taylor: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, United States
- Harpal Khanuja: Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, United States
- Mehran Armand: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, United States; Applied Physics Laboratory, Johns Hopkins University, Laurel, Maryland, United States
- Nassir Navab: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, United States; Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
7. Lee SC, Fuerst B, Tateno K, Johnson A, Fotouhi J, Osgood G, Tombari F, Navab N. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery. Healthc Technol Lett 2017; 4:168-173. [PMID: 29184659] [PMCID: PMC5683202] [DOI: 10.1049/htl.2017.0066]
Abstract
Orthopaedic surgeons still follow the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures such as the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking to create a mixed reality environment supporting screw placement in orthopaedic surgery. A red-green-blue-depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows tracking of surgical tools occluded by the hand. The proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring the target registration error and also evaluate the tracking accuracy in the presence of partial occlusion.
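The RGBD-to-CBCT calibration rests on the iterative closest point algorithm, which alternates nearest-neighbour matching with a closed-form rigid alignment. The alignment step (the Kabsch/SVD solution) is sketched below with correspondences assumed known; a full ICP would re-match points on every iteration:

```python
import numpy as np

def kabsch(src, dst):
    """Closed-form rigid transform (R, t) minimizing ||R @ src + t - dst||
    over paired Nx3 point sets -- the inner step of each ICP iteration."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic surface points and their pose after a known rigid motion.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
ang = np.radians(30.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])
R, t = kabsch(pts, pts @ R_true.T + t_true)
```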
Affiliation(s)
- Sing Chun Lee: Computer Aided Medical Procedures, Laboratory for Computational Sensing & Robotics, Johns Hopkins University, Baltimore, MD, USA
- Keisuke Tateno: Fakultät für Informatik, Lehrstuhl für Informatikanwendungen in der Medizin & Augmented Reality, Technische Universität München, Garching, Bayern, Germany; Canon Inc., Shimomaruko, Tokyo, Japan
- Alex Johnson: Orthopaedic Trauma, Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Javad Fotouhi: Computer Aided Medical Procedures, Laboratory for Computational Sensing & Robotics, Johns Hopkins University, Baltimore, MD, USA
- Greg Osgood: Orthopaedic Trauma, Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Federico Tombari: Fakultät für Informatik, Lehrstuhl für Informatikanwendungen in der Medizin & Augmented Reality, Technische Universität München, Garching, Bayern, Germany
- Nassir Navab: Computer Aided Medical Procedures, Laboratory for Computational Sensing & Robotics, Johns Hopkins University, Baltimore, MD, USA; Fakultät für Informatik, Lehrstuhl für Informatikanwendungen in der Medizin & Augmented Reality, Technische Universität München, Garching, Bayern, Germany
8. Pose-aware C-arm for automatic re-initialization of interventional 2D/3D image registration. Int J Comput Assist Radiol Surg 2017; 12:1221-1230. [PMID: 28527025] [DOI: 10.1007/s11548-017-1611-8]
Abstract
PURPOSE In minimally invasive interventions assisted by C-arm imaging, there is a demand to fuse the intra-interventional 2D C-arm image with pre-interventional 3D patient data to enable surgical guidance. The commonly used intensity-based 2D/3D registration has a limited capture range and is sensitive to initialization. We propose to utilize an opto/X-ray C-arm system which allows the registration to be maintained during the intervention by automating the re-initialization of the 2D/3D image registration. Consequently, the surgical workflow is not disrupted and the interaction time for manual initialization is eliminated. METHODS We utilize two distinct vision-based tracking techniques to estimate the relative poses between different C-arm arrangements: (1) global tracking using fused depth information and (2) an RGBD SLAM system for surgical scene tracking. A highly accurate multi-view calibration between the RGBD and C-arm imaging devices is achieved using a custom-made multimodal calibration target. RESULTS Several in vitro studies are conducted on a pelvic-femur phantom that is encased in gelatin and covered with drapes to simulate a clinically realistic scenario. The mean target registration errors (mTRE) for re-initialization using depth-only and RGB + depth are 13.23 mm and 11.81 mm, respectively. 2D/3D registration yielded a 75% success rate using this automatic re-initialization, compared to a random initialization which yielded only a 23% success rate. CONCLUSION The pose-aware C-arm contributes to the 2D/3D registration process by globally re-initializing the relationship of the C-arm image and pre-interventional CT data. This system performs inside-out tracking, is self-contained, and does not require any external tracking devices.
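The mTRE values summarize how far an estimated pose displaces anatomical target points relative to the ground-truth pose. A sketch of the metric itself (the transforms and target points are synthetic, not the study's data):

```python
import numpy as np

def mtre(T_est, T_gt, targets):
    """Mean target registration error: average distance between target points
    mapped by the estimated and ground-truth 4x4 rigid transforms."""
    pts = np.c_[targets, np.ones(len(targets))]        # to homogeneous coords
    diff = (pts @ T_est.T - pts @ T_gt.T)[:, :3]
    return float(np.linalg.norm(diff, axis=1).mean())

# Ground truth vs. an estimate that is off by a pure 2 mm x-translation:
T_gt = np.eye(4)
T_est = np.eye(4)
T_est[0, 3] = 2.0
targets = np.array([[0.0, 0.0, 0.0], [10.0, 20.0, 30.0], [-5.0, 5.0, 100.0]])
err = mtre(T_est, T_gt, targets)   # every target shifts by exactly 2 mm
```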