1. Kiran U, Bhat SN, Anitha H, Naik RR. Feature-based multimodal registration framework for vertebral pose estimation. Eur Spine J 2023. PMID: 38104308; DOI: 10.1007/s00586-023-08054-z.
Abstract
PURPOSE Reliable estimation of vertebral body posture aids safe and effective spine surgery. The proposed work presents an MR to X-ray image registration to assess the 3D pose of the vertebral body during spine surgery. The 3D assessment of vertebral pose helps analyze the position and orientation of the vertebral body, providing information for clinical tasks such as curvature estimation and pedicle screw insertion surgery. METHODS The proposed feature-based registration framework extracts vertebral endplates to avoid the mismatch between the intensities of MR and X-ray images. Using the projection matrix, the segmented MRI is forward projected and then registered to the X-ray image using a binary image matching similarity measure and the CMA-ES optimizer. RESULTS The proposed method estimated the vertebral pose by registering the simulated X-ray onto the pre-operative MRI. To evaluate the efficacy of the proposed approach, a series of experiments was carried out on a simulated dataset. CONCLUSION The proposed method is a fast and accurate registration method that can provide 3D information about the vertebral body. This 3D information is useful for improving accuracy during various clinical diagnoses.
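The abstract above pairs a binary image matching similarity with the CMA-ES optimizer. As an illustration of the similarity side only, here is a minimal sketch of a binary image matching score, taken here as the fraction of pixels on which two binary feature images agree; the paper's exact formulation is not given in the abstract, so this definition and the toy masks are assumptions:

```python
import numpy as np

def binary_image_matching(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of pixels where two binary images agree.

    `a` and `b` are boolean masks of equal shape, e.g. a forward-projected
    MR endplate segmentation and the X-ray feature image.
    """
    a = a.astype(bool)
    b = b.astype(bool)
    return float(np.mean(a == b))

# Toy example: two 4x4 masks differing in exactly one pixel.
proj = np.zeros((4, 4), dtype=bool)
proj[1:3, 1:3] = True
xray = proj.copy()
xray[0, 0] = True
score = binary_image_matching(proj, xray)  # 15/16 pixels agree
```

In a registration loop of the kind described, CMA-ES would perturb the 6 pose parameters, regenerate `proj` for each candidate pose, and maximize this score.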
Affiliation(s)
- Usha Kiran
- Department of Electronics and Communication Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India
- Shyamasunder N Bhat
- Department of Orthopaedics, Kasturba Medical College, Manipal, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India
- H Anitha
- Department of Electronics and Communication Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India
- Roshan Ramakrishna Naik
- Department of Electronics and Communication Engineering, St. Joseph Engineering College, Vamanjoor, Mangalore, Karnataka, 575028, India
2. Mekki L, Sheth NM, Vijayan RC, Rohleder M, Sisniega A, Kleinszig G, Vogt S, Kunze H, Osgood GM, Siewerdsen JH, Uneri A. Surgical navigation for guidewire placement from intraoperative fluoroscopy in orthopaedic surgery. Phys Med Biol 2023; 68:215001. PMID: 37774711; DOI: 10.1088/1361-6560/acfec4.
Abstract
Objective. Surgical guidewires are commonly used in placing fixation implants to stabilize fractures. Accurate positioning of these instruments is challenged by difficulties in 3D reckoning from 2D fluoroscopy. This work aims to enhance accuracy and reduce exposure times by providing 3D navigation for guidewire placement from as few as two fluoroscopic images. Approach. Our approach combines machine learning-based segmentation with the geometric model of the imager to determine the 3D poses of guidewires. Instrument tips are encoded as individual keypoints, and the segmentation masks are processed to estimate the trajectory. Correspondence between detections in multiple views is established using the pre-calibrated system geometry, and the corresponding features are backprojected to obtain the 3D pose. Guidewire 3D directions were computed using both an analytical and an optimization-based method. The complete approach was evaluated in cadaveric specimens with respect to potential confounding effects from the imaging geometry and radiographic scene clutter due to other instruments. Main results. The detection network identified the guidewire tips within 2.2 mm and guidewire directions within 1.1°, in 2D detector coordinates. Feature correspondence rejected false detections, particularly in images with other instruments, to achieve 83% precision and 90% recall. Estimating the 3D direction via numerical optimization showed added robustness for guidewires aligned with the gantry rotation plane. Guidewire tips and directions were localized in 3D world coordinates with a median accuracy of 1.8 mm and 2.7°, respectively. Significance. The paper reports a new method for automatic 2D detection and 3D localization of guidewires from pairs of fluoroscopic images. Localized guidewires can be virtually overlaid on the patient's pre-operative 3D scan during the intervention. Accurate pose determination for multiple guidewires from two images promises to reduce radiation dose by minimizing the need for repeated imaging and provides quantitative feedback prior to implant placement.
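The backprojection step described in the Approach — establishing correspondence across two calibrated views and recovering a 3D tip position — can be illustrated with standard linear (DLT) triangulation. This is a generic sketch with made-up projection geometry, not the authors' implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 projection matrices of the pre-calibrated imaging geometry.
    x1, x2: corresponding 2D detections (u, v) in each image.
    Returns the 3D point in world coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy geometry: two ideal views, roughly 90 degrees apart.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # view along +z
R = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])   # view along +x
P2 = np.hstack([R, np.array([[0.], [0.], [5.]])])
X_true = np.array([1., 2., 3.])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noisy detections the same least-squares machinery applies; the abstract's optimization-based direction estimate would refine such an initialization.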
Affiliation(s)
- L Mekki
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
- N M Sheth
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
- R C Vijayan
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
- M Rohleder
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
- S Vogt
- Siemens Healthineers, Erlangen, Germany
- H Kunze
- Siemens Healthineers, Erlangen, Germany
- G M Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Medicine, Baltimore MD, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston TX, United States of America
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
3. Kiran U, Ramakrishna Naik R, Bhat SN, Anitha H. Evaluating similarity measure for multimodal 3D to 2D registration. Biomed Phys Eng Express 2023; 9:055015. PMID: 37487480; DOI: 10.1088/2057-1976/ace9e1.
Abstract
The 3D to 2D registration technique in spine surgery is vital to help surgeons avoid wrong-site surgery by estimating the vertebral pose. The vertebral poses are estimated by generating the spatial correspondence between the pre-operative MR and intra-operative X-ray images, which is then evaluated using a similarity measure. Different similarity measures are used in 3D to 2D registration techniques to assess the spatial correspondence between the pre-operative and intra-operative images. To evaluate their registration performance, the proposed framework employs three similarity measures that compare images based on pixel positions: Binary Image Matching (BIM), the Dice coefficient, and Normalized Cross-Correlation. The registration accuracy of the similarity measures is compared in terms of mean Target Registration Error, mean iteration time, and success rate. In the absence of clinical test images, the experiments are conducted on simulated AP and lateral test images. The experiments show that all three similarity measures work well for feature-based 3D to 2D registration, with BIM giving the best results. The experiments also indicate high registration accuracy for all three similarity measures when the initial displacements of the translational and rotational parameters are varied up to ±20 mm and ±10°, respectively.
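Two of the similarity measures compared above, the Dice coefficient and normalized cross-correlation, have standard definitions. A minimal sketch with made-up toy masks, not the paper's implementation:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def ncc(a, b):
    """Normalized cross-correlation between two same-shape images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Toy masks: a 2x2 block vs. a 2x3 block overlapping in 4 pixels.
m1 = np.zeros((4, 4)); m1[1:3, 1:3] = 1
m2 = np.zeros((4, 4)); m2[1:3, 1:4] = 1
d = dice(m1, m2)  # 2*4 / (4 + 6) = 0.8
```

A registration optimizer would maximize either score over the pose parameters of the forward projection.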
Affiliation(s)
- Usha Kiran
- Department of Electronics and Communication Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India
- Roshan Ramakrishna Naik
- Department of Electronics and Communication Engineering, St. Joseph Engineering College, Vamanjoor, Mangalore, Karnataka, 575028, India
- Shyamasunder N Bhat
- Department of Orthopaedics, Kasturba Medical College, Manipal, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India
- Anitha H
- Department of Electronics and Communication Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India
4. Willey MC, Kern AM, Goetz JE, Marsh JL, Anderson DD. Biomechanical guidance can improve accuracy of reduction for intra-articular tibia plafond fractures and reduce joint contact stress. J Orthop Res 2023; 41:546-554. PMID: 35672888; PMCID: PMC9726992; DOI: 10.1002/jor.25393.
Abstract
Articular fracture malreduction increases posttraumatic osteoarthritis (PTOA) risk by elevating joint contact stress. A new biomechanical guidance system (BGS) that provides intraoperative assessment of articular fracture reduction and joint contact stress based solely on a preoperative computed tomography (CT) and intraoperative fluoroscopy may facilitate better fracture reduction. The objective of this proof-of-concept cadaveric study was to test this premise while characterizing BGS performance. Articular tibia plafond fractures were created in five cadaveric ankles. CT scans were obtained to provide digital models. Indirect reduction was performed in a simulated operating room once with and once without BGS guidance. CT scans after fixation provided models of the reduced ankles for assessing reduction accuracy, joint contact stresses, and BGS accuracy. BGS was utilized 4.8 ± 1.3 (mean ± SD) times per procedure, increasing operative time by 10 min (39%), and the number of fluoroscopy images by 31 (17%). Errors in BGS reduction assessment compared to CT-derived models were 0.45 ± 0.57 mm in translation and 2.0 ± 2.5° in rotation. For the four ankles that were successfully reduced and fixed, associated absolute errors in computed mean and maximum contact stress were 0.40 ± 0.40 and 0.96 ± 1.12 MPa, respectively. BGS reduced mean and maximum contact stress by 1.1 and 2.6 MPa, respectively. BGS thus improved the accuracy of articular fracture reduction and significantly reduced contact stress. Statement of Clinical Significance: Malreduction of articular fractures is known to lead to PTOA. The BGS described in this work has potential to improve quality of articular fracture reduction and clinical outcomes for patients with a tibia plafond fracture.
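The contact stress endpoints reported above (mean and maximum contact stress over the loaded articular surface) can be summarized from a sampled pressure map. A toy sketch with invented values, not the BGS computation itself:

```python
import numpy as np

def contact_stress_summary(pressure_map):
    """Mean and maximum contact stress (MPa) over the loaded region
    of a sampled joint pressure map; zeros mark unloaded samples."""
    loaded = pressure_map[pressure_map > 0]
    return float(loaded.mean()), float(loaded.max())

# Hypothetical 2x3 grid of articular-surface pressure samples (MPa).
p = np.array([[0.0, 1.2, 2.4],
              [0.0, 3.0, 0.0]])
mean_s, max_s = contact_stress_summary(p)  # (2.2, 3.0)
```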
Affiliation(s)
- Michael C Willey
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, Iowa, USA
- Andrew M Kern
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, Iowa, USA
- Jessica E Goetz
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, Iowa, USA
- John Lawrence Marsh
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, Iowa, USA
- Donald D Anderson
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, Iowa, USA
- Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
- Department of Industrial and Systems Engineering, University of Iowa, Iowa City, Iowa, USA
5. Vijayan RC, Venkataraman K, Wei J, Sheth NM, Shafiq B, Siewerdsen JH, Zbijewski W, Li G, Cleary K, Uneri A. Multi-body 3D-2D registration for robot-assisted joint reduction: preclinical evaluation in the ankle syndesmosis. Proc SPIE Int Soc Opt Eng 2023; 12466:124661F. PMID: 37143861; PMCID: PMC10155864; DOI: 10.1117/12.2654481.
Abstract
Purpose. Existing methods to improve the accuracy of tibiofibular joint reduction present workflow challenges, high radiation exposure, and a lack of accuracy and precision, leading to poor surgical outcomes. To address these limitations, we propose a method to perform robot-assisted joint reduction using intraoperative imaging to align the dislocated fibula to a target pose relative to the tibia. Methods. The approach (1) localizes the robot via 3D-2D registration of a custom plate adapter attached to its end effector, (2) localizes the tibia and fibula using multi-body 3D-2D registration, and (3) drives the robot to reduce the dislocated fibula according to the target plan. The custom robot adapter was designed to interface directly with the fibular plate while presenting radiographic features to aid registration. Registration accuracy was evaluated on a cadaveric ankle specimen, and the feasibility of robotic guidance was assessed by manipulating a dislocated fibula in a cadaver ankle. Results. Using standard AP and mortise radiographic views, registration errors were measured to be less than 1 mm and 1° for the robot adapter and the ankle bones. Experiments in a cadaveric specimen revealed up to 4 mm deviations from the intended path, which were reduced to <2 mm using corrective actions guided by intraoperative imaging and 3D-2D registration. Conclusions. Preclinical studies suggest that significant robot flex and tibial motion occur during fibula manipulation, motivating the use of the proposed method to dynamically correct the robot trajectory. Accurate robot registration was achieved via fiducials embedded within the custom design. Future work will evaluate the approach on a custom radiolucent robot design currently under construction and verify the solution on additional cadaveric specimens.
Affiliation(s)
- R. C. Vijayan
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- K. Venkataraman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- J. Wei
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- N. M. Sheth
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- B. Shafiq
- Department of Orthopedic Surgery, Johns Hopkins Medicine, Baltimore MD
- J. H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- Department of Imaging Physics, The University of Texas M. D. Anderson Cancer Center, Houston TX
- W. Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- G. Li
- Children's National Hospital, Washington DC
- K. Cleary
- Children's National Hospital, Washington DC
- A. Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- Phone: +1-276-614-7743; website: carnegie.jhu.edu
6. Evaluation of a navigated 3D ultrasound integration for brain tumor surgery: first results of an ongoing prospective study. Curr Oncol 2022; 29:6594-6609. PMID: 36135087; PMCID: PMC9498154; DOI: 10.3390/curroncol29090518.
Abstract
The aim of the study was to assess the quality, accuracy and benefit of navigated 2D and 3D ultrasound for intra-axial tumor surgery in a prospective study. Patients intended for gross total resection were consecutively enrolled. Intraoperatively, a 2D and 3D iUS-based resection was performed. During surgery, the image quality, clinical benefit and navigation accuracy were recorded based on a standardized protocol using Likert scales. A total of 16 consecutive patients were included. Mean ratings of image quality in 2D iUS were significantly higher than in 3D iUS (p < 0.001). There was no relevant decrease in rating during the surgery in either 2D or 3D iUS (p > 0.46). The benefit was rated 2.2 in 2D iUS and 2.6 in 3D iUS (p = 0.08). The benefit remained stable in 2D, while there was a slight decrease in the benefit of 3D after complete tumor resection (p = 0.09). The accuracy was similar in both (mean 2.2, p = 0.88). Seven patients had a small tumor remnant on intraoperative MRI (mean 0.98 cm3) that was not appreciated with iUS. Crucially, 3D iUS allows for an accurate intraoperative update of imaging with slightly lower image quality than 2D iUS. Our preliminary data suggest that the benefit and accuracy of 2D and 3D iUS navigation do not undergo significant variations during tumor resection.
7. Assessing the accuracy of a new 3D2D registration algorithm based on a non-invasive skin marker model for navigated spine surgery. Int J Comput Assist Radiol Surg 2022; 17:1933-1945. PMID: 35986831; PMCID: PMC9468112; DOI: 10.1007/s11548-022-02733-w.
Abstract
Purpose. We assessed the accuracy of a new 3D2D registration algorithm to be used for navigated spine surgery and explored anatomical and radiologic parameters affecting the registration accuracy. Compared to existing 3D2D registration algorithms, the algorithm does not need bone-mounted or table-mounted instruments for registration. Neither does the intraoperative imaging device have to be tracked or calibrated. Methods. The rigid registration algorithm required imaging data (a pre-existing CT scan (3D) and two angulated fluoroscopic images (2D)) to register positions of vertebrae in 3D and is based on non-invasive skin markers. The algorithm registered five adjacent vertebrae and was tested in the thoracic and lumbar spine from three human cadaveric specimens. The registration accuracy was calculated for each registered vertebra and measured with the target registration error (TRE) in millimeters. We used multivariable analysis to identify parameters independently affecting the algorithm's accuracy, such as the angulation between the two fluoroscopic images (between 40° and 90°), the detector-skin distance, the number of skin markers applied, and waist circumference. Results. The algorithm registered 780 vertebrae with a median TRE of 0.51 mm [interquartile range 0.32–0.73 mm] and a maximum TRE of 2.06 mm. The TRE was most affected by the angulation between the two fluoroscopic images obtained (p < 0.001): larger angulations resulted in higher accuracy. The algorithm was more accurate in thoracic vertebrae (p = 0.004) and in the specimen with the smallest waist circumference (p = 0.003). The algorithm registered all five adjacent vertebrae with similar accuracy. Conclusion. We studied the accuracy of a new 3D2D registration algorithm based on non-invasive skin markers. The algorithm registered five adjacent vertebrae with similar accuracy in the thoracic and lumbar spine and showed a maximum target registration error of approximately 2 mm. To further evaluate its potential for navigated spine surgery, the algorithm may now be integrated into a complete navigation system. Supplementary Information: The online version contains supplementary material available at 10.1007/s11548-022-02733-w.
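The target registration error used above has a standard form: the distance between target points mapped by the estimated transform and by the ground truth. A minimal sketch with hypothetical transforms and targets:

```python
import numpy as np

def target_registration_error(T_est, T_true, targets):
    """Mean TRE in mm: distance between target points mapped by the
    estimated and ground-truth rigid transforms (4x4 homogeneous)."""
    pts = np.hstack([targets, np.ones((len(targets), 1))])
    d = (pts @ T_est.T)[:, :3] - (pts @ T_true.T)[:, :3]
    return float(np.linalg.norm(d, axis=1).mean())

# Toy case: the estimate is off by a 0.5 mm translation along x.
T_true = np.eye(4)
T_est = np.eye(4); T_est[0, 3] = 0.5
targets = np.array([[0., 0., 0.], [10., 5., 2.]])  # e.g. pedicle landmarks
tre = target_registration_error(T_est, T_true, targets)  # 0.5
```

With a rotational error, targets far from the rotation center would accumulate larger TRE than nearby ones, which is why TRE is reported per target rather than as a single transform-parameter error.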
8. Naik RR, Hoblidar A, Bhat SN, Ampar N, Kundangar R. A hybrid 3D-2D image registration framework for pedicle screw trajectory registration between intraoperative X-ray image and preoperative CT image. J Imaging 2022; 8:185. PMID: 35877629; PMCID: PMC9324544; DOI: 10.3390/jimaging8070185.
Abstract
Pedicle screw insertion is considered a complex surgery among orthopaedic surgeons. To prevent the postoperative complications associated with pedicle screw insertion, various image intensity registration-based navigation systems have been developed. These systems are computation-intensive, have a small capture range, and suffer from local maxima issues. On the other hand, deep learning-based techniques lack registration generalizability and have data dependency. To overcome these limitations, a patient-specific hybrid 3D-2D registration principled framework was designed to map a pedicle screw trajectory between the intraoperative X-ray image and the preoperative CT image. An anatomical landmark-based 3D-2D Iterative Control Point (ICP) registration was performed to register a pedicular marker pose between the X-ray images and axial preoperative CT images. The registration framework was clinically validated by generating projection images possessing an optimal match with intraoperative X-ray images at the corresponding control point registration. The effectiveness of the registered trajectory was evaluated in terms of displacement and directional errors after reprojecting its position on 2D radiographic planes. The mean Euclidean distances of the head and tail ends of the reprojected trajectory from the actual trajectory in the AP and lateral planes were 0.6–0.8 mm and 0.5–1.6 mm, respectively. Similarly, the corresponding mean directional errors were 4.9° and 2°. The mean trajectory length difference between the actual and registered trajectories was 2.67 mm. The approximate time required in the intraoperative environment to axially map the marker position for a single vertebra was 3 min. Utilizing markerless registration techniques, the designed framework functions like a screw navigation tool and assures the quality of the surgery being performed by limiting the need for postoperative CT.
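The displacement errors above come from reprojecting the registered trajectory's head and tail onto the AP and lateral radiographic planes and measuring their distance from the actual trajectory. A simplified sketch assuming orthographic AP/lateral views and invented coordinates; the paper's projection model may differ:

```python
import numpy as np

def reprojection_errors(traj_est, traj_true):
    """Displacement (mm) of a registered screw trajectory's head and tail
    after orthographic reprojection onto the AP and lateral planes.

    traj_* : (2, 3) arrays holding the head and tail 3D points.
    Assumes the AP view discards the anteroposterior (y) axis and the
    lateral view discards the mediolateral (x) axis.
    """
    ap = np.linalg.norm(traj_est[:, [0, 2]] - traj_true[:, [0, 2]], axis=1)
    lat = np.linalg.norm(traj_est[:, [1, 2]] - traj_true[:, [1, 2]], axis=1)
    return ap, lat  # per-endpoint errors in each plane

# Hypothetical 40 mm trajectory, with small head/tail registration offsets.
true = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 40.0]])
est = np.array([[0.6, 0.0, 0.0], [0.0, 0.5, 40.0]])
ap_err, lat_err = reprojection_errors(est, true)  # [0.6, 0.0], [0.0, 0.5]
```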
Affiliation(s)
- Roshan Ramakrishna Naik
- Manipal Institute of Technology, Manipal Academy of Higher Education Manipal, Manipal 576104, India
- Anitha Hoblidar
- Manipal Institute of Technology, Manipal Academy of Higher Education Manipal, Manipal 576104, India
- Correspondence: (A.H.); (S.N.B.)
- Shyamasunder N. Bhat
- Kasturba Medical College, Manipal Academy of Higher Education Manipal, Manipal 576104, India
- Correspondence: (A.H.); (S.N.B.)
- Nishanth Ampar
- Kasturba Medical College, Manipal Academy of Higher Education Manipal, Manipal 576104, India
- Raghuraj Kundangar
- Kasturba Medical College, Manipal Academy of Higher Education Manipal, Manipal 576104, India
9. Sheth N, Vagdargi P, Sisniega A, Uneri A, Osgood G, Siewerdsen JH. Preclinical evaluation of a prototype freehand drill video guidance system for orthopedic surgery. J Med Imaging (Bellingham) 2022; 9:045004. PMID: 36046335; PMCID: PMC9411797; DOI: 10.1117/1.jmi.9.4.045004.
Abstract
Purpose: Internal fixation of pelvic fractures is a challenging task requiring the placement of instrumentation within complex three-dimensional bone corridors, typically guided by fluoroscopy. We report a system for two- and three-dimensional guidance using a drill-mounted video camera and fiducial markers with evaluation in first preclinical studies. Approach: The system uses a camera affixed to a surgical drill and multimodality (optical and radio-opaque) markers for real-time trajectory visualization in fluoroscopy and/or CT. Improvements to a previously reported prototype include hardware components (mount, camera, and fiducials) and software (including a system for detecting marker perturbation) to address practical requirements necessary for translation to clinical studies. Phantom and cadaver experiments were performed to quantify the accuracy of video-fluoroscopy and video-CT registration, the ability to detect marker perturbation, and the conformance in placing guidewires along realistic pelvic trajectories. The performance was evaluated in terms of geometric accuracy and conformance within bone corridors. Results: The studies demonstrated successful guidewire delivery in a cadaver, with a median entry point error of 1.00 mm (1.56 mm IQR) and median angular error of 1.94 deg (1.23 deg IQR). Such accuracy was sufficient to guide K-wire placement through five of the six trajectories investigated with a strong level of conformance within bone corridors. The sixth case demonstrated a cortical breach due to extrema in the registration error. The system was able to detect marker perturbations and alert the user to potential registration issues. Feasible workflows were identified for orthopedic-trauma scenarios involving emergent cases (with no preoperative imaging) or cases with preoperative CT. Conclusions: A prototype system for guidewire placement was developed providing guidance that is potentially compatible with orthopedic-trauma workflow. 
First preclinical (cadaver) studies demonstrated accurate guidance of K-wire placement in pelvic bone corridors and the ability to automatically detect perturbations that degrade registration accuracy. The preclinical prototype demonstrated performance and utility supporting translation to clinical studies.
Affiliation(s)
- Niral Sheth
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Prasad Vagdargi
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
- Alejandro Sisniega
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Gregory Osgood
- Johns Hopkins Medicine, Department of Orthopedic Surgery, Baltimore, Maryland, United States
- Jeffrey H. Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
10. Kausch L, Thomas S, Kunze H, Norajitra T, Klein A, Ayala L, El Barbari J, Mandelka E, Privalov M, Vetter S, Mahnken A, Maier-Hein L, Maier-Hein K. C-arm positioning for standard projections during spinal implant placement. Med Image Anal 2022; 81:102557. DOI: 10.1016/j.media.2022.102557.
11. Uneri A, Wu P, Jones CK, Vagdargi P, Han R, Helm PA, Luciano MG, Anderson WS, Siewerdsen JH. Deformable 3D-2D registration for high-precision guidance and verification of neuroelectrode placement. Phys Med Biol 2021; 66. PMID: 34644684; DOI: 10.1088/1361-6560/ac2f89.
Abstract
Purpose. Accurate neuroelectrode placement is essential to effective monitoring or stimulation of neurosurgery targets. This work presents and evaluates a method that combines deep learning and model-based deformable 3D-2D registration to guide and verify neuroelectrode placement using intraoperative imaging. Methods. The registration method consists of three stages: (1) detection of neuroelectrodes in a pair of fluoroscopy images using a deep learning approach; (2) determination of correspondence and initial 3D localization among neuroelectrode detections in the two projection images; and (3) deformable 3D-2D registration of neuroelectrodes according to a physical device model. The method was evaluated in phantom, cadaver, and clinical studies in terms of (a) the accuracy of neuroelectrode registration and (b) the quality of metal artifact reduction (MAR) in cone-beam CT (CBCT) in which the deformably registered neuroelectrode models are taken as input to the MAR. Results. The combined deep learning and model-based deformable 3D-2D registration approach achieved 0.2 ± 0.1 mm accuracy in cadaver studies and 0.6 ± 0.3 mm accuracy in clinical studies. The detection network and 3D correspondence provided initialization of 3D-2D registration within 2 mm, which facilitated end-to-end registration runtime within 10 s. Metal artifacts, quantified as the standard deviation in voxel values in tissue adjacent to neuroelectrodes, were reduced by 72% in phantom studies and by 60% in first clinical studies. Conclusions. The method combines the speed and generalizability of deep learning (for initialization) with the precision and reliability of physical model-based registration to achieve accurate deformable 3D-2D registration and MAR in functional neurosurgery. Accurate 3D-2D guidance from fluoroscopy could overcome limitations associated with deformation in conventional navigation, and improved MAR could improve CBCT verification of neuroelectrode placement.
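The artifact metric quoted above, the standard deviation of voxel values in tissue adjacent to the neuroelectrodes, is straightforward to compute. A toy sketch with synthetic volumes standing in for pre- and post-MAR CBCT reconstructions:

```python
import numpy as np

def artifact_severity(volume, roi_mask):
    """Standard deviation of voxel values within a tissue ROI adjacent
    to a neuroelectrode -- the artifact metric described above."""
    return float(volume[roi_mask].std())

# Synthetic stand-ins: streak-corrupted voxels have higher variance
# than the same region after metal artifact reduction.
rng = np.random.default_rng(0)
vol_before = rng.normal(0.0, 50.0, size=(8, 8, 8))  # pre-MAR (noisy HU)
vol_after = rng.normal(0.0, 20.0, size=(8, 8, 8))   # post-MAR
roi = np.zeros((8, 8, 8), dtype=bool)
roi[2:6, 2:6, 2:6] = True                            # tissue next to electrode
reduction = 1.0 - artifact_severity(vol_after, roi) / artifact_severity(vol_before, roi)
```

The paper's 72% / 60% figures correspond to `reduction` computed over real phantom and clinical ROIs.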
Affiliation(s)
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- P Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- C K Jones
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD 21218, United States of America
- P Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, United States of America
- R Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- P A Helm
- Medtronic, Littleton, MA 01460, United States of America
- M G Luciano
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
- W S Anderson
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD 21218, United States of America
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, United States of America
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
12. Unberath M, Gao C, Hu Y, Judish M, Taylor RH, Armand M, Grupp R. The impact of machine learning on 2D/3D registration for image-guided interventions: a systematic review and perspective. Front Robot AI 2021; 8:716007. PMID: 34527706; PMCID: PMC8436154; DOI: 10.3389/frobt.2021.716007.
Abstract
Image-based navigation is widely considered the next frontier of minimally invasive surgery. It is believed that image-based navigation will increase access to reproducible, safe, and high-precision surgery, as it may then be performed at acceptable cost and effort. This is because image-based techniques avoid the need for specialized equipment and integrate seamlessly with contemporary workflows. Furthermore, it is expected that image-based navigation techniques will play a major role in enabling mixed reality environments, as well as autonomous and robot-assisted workflows. A critical component of image guidance is 2D/3D registration, a technique to estimate the spatial relationships between 3D structures, e.g., preoperative volumetric imagery or models of surgical instruments, and 2D images thereof, such as intraoperative X-ray fluoroscopy or endoscopy. While image-based 2D/3D registration is a mature technique, its transition from the bench to the bedside has been restrained by well-known challenges, including brittleness with respect to the optimization objective, hyperparameter selection, and initialization; difficulties in dealing with inconsistencies or multiple objects; and limited single-view performance. One reason these challenges persist today is that analytical solutions are likely inadequate considering the complexity, variability, and high dimensionality of generic 2D/3D registration problems. The recent advent of machine learning-based approaches to imaging problems that, rather than specifying the desired functional mapping, approximate it using highly expressive parametric models holds promise for solving some of the notorious challenges in 2D/3D registration. In this manuscript, we review the impact of machine learning on 2D/3D registration to systematically summarize the recent advances made by the introduction of this novel technology. Grounded in these insights, we then offer our perspective on the most pressing needs, significant open problems, and possible next steps.
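The intensity-based 2D/3D registration loop that this literature builds on can be sketched in a few lines. This is a toy NumPy illustration under strong simplifying assumptions (an orthographic sum-projection `drr`, a single translational degree of freedom, and exhaustive search in place of a real optimizer), not any published implementation:

```python
import numpy as np

def drr(volume, tx):
    """Toy 'DRR': integrate the volume along one axis after an
    integer in-plane shift tx (stand-in for a perspective projector)."""
    return np.roll(volume, tx, axis=1).sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation similarity between two images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def register(volume, fixed_xray, search=range(-5, 6)):
    """1-DOF exhaustive search for the pose maximizing image similarity."""
    return max(search, key=lambda tx: ncc(drr(volume, tx), fixed_xray))

# Synthetic block 'anatomy'; the 'intraoperative' X-ray is simulated
# at a known shift, which registration should recover.
vol = np.zeros((8, 32, 32))
vol[:, 10:16, 12:20] = 1.0
target = drr(vol, 3)
est = register(vol, target)
```

A practical system would replace the projector with a perspective DRR, the similarity with a robust metric (e.g., gradient correlation), and the search with a derivative-free optimizer over a 6-DOF pose.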
Affiliation(s)
- Mathias Unberath
- Advanced Robotics and Computationally Augmented Environments (ARCADE) Lab, Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
13
Schippers B, Hekman E, van Helden S, Boomsma M, van Osch J, Nijveldt R. Enhancing perioperative landmark detection during sacroiliac joint fusion in patients suffering from low back pain. Comput Assist Surg (Abingdon) 2021;26:41-48. [PMID: 33941011] [DOI: 10.1080/24699322.2021.1916600]
Abstract
Over the past decade, minimally invasive sacroiliac joint (SIJ) fusion has become an effective treatment for patients suffering from low back pain (LBP) originating from the SIJ. Perioperative C-arm fluoroscopy-assisted surgical navigation during SIJ fusion remains challenging due to the lack of 3D spatial information. This study developed and assessed a 3D CT/2D fluoroscopy integration approach based on digitally reconstructed radiographs (DRRs) obtained from pre-operative CT scans. Development of this approach proved feasible, and landmarks were successfully translated, in retrospect, to perioperatively acquired fluoroscopy images. Further expansion of and research into the proposed approach to improve perioperative navigation is indicated, and additional validation should be performed.
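The core of translating planned landmarks onto fluoroscopy is a pinhole projection of 3D points through a 3x4 projection matrix. A minimal sketch follows, with made-up intrinsics and extrinsics for an idealized C-arm (the names `K`, `Rt`, and all numeric values are illustrative assumptions, not calibration data from the study):

```python
import numpy as np

def project_landmark(P, X):
    """Project a 3D landmark X into the image via a 3x4 projection matrix P."""
    Xh = np.append(X, 1.0)      # homogeneous coordinates
    u = P @ Xh
    return u[:2] / u[2]         # perspective divide -> pixel coordinates

# Hypothetical idealized C-arm geometry:
f = 1000.0                      # focal length (pixels), assumed
K = np.array([[f, 0.0, 256.0],
              [0.0, f, 256.0],
              [0.0, 0.0, 1.0]])                       # intrinsics
Rt = np.hstack([np.eye(3), [[0.0], [0.0], [500.0]]])  # extrinsics
P = K @ Rt

pt2d = project_landmark(P, np.array([10.0, -5.0, 0.0]))
print(pt2d)  # [276. 246.]
```

In the actual workflow, `P` would come from registering the pre-operative CT to the intraoperative fluoroscopy rather than from a fixed calibration.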
Affiliation(s)
- Bas Schippers
- Department of Surgery, Isala Hospital, Zwolle, The Netherlands
- Edsko Hekman
- Department of Biomechanical Engineering, University of Twente, Enschede, The Netherlands
- Sven van Helden
- Department of Surgery, Isala Hospital, Zwolle, The Netherlands
- Martijn Boomsma
- Department of Radiology, Isala Hospital, Zwolle, The Netherlands
- Jochen van Osch
- Department of Physics, Isala Hospital, Zwolle, The Netherlands
- Robert Nijveldt
- Department of Surgery, Isala Hospital, Zwolle, The Netherlands
14
Vijayan RC, Han R, Wu P, Sheth NM, Ketcha MD, Vagdargi P, Vogt S, Kleinszig G, Osgood GM, Siewerdsen JH, Uneri A. Development of a fluoroscopically guided robotic assistant for instrument placement in pelvic trauma surgery. J Med Imaging (Bellingham) 2021;8:035001. [PMID: 34124283] [PMCID: PMC8189698] [DOI: 10.1117/1.jmi.8.3.035001]
Abstract
Purpose: A method for fluoroscopic guidance of a robotic assistant is presented for instrument placement in pelvic trauma surgery. The solution uses fluoroscopic images acquired in standard clinical workflow and helps avoid the repeat fluoroscopy commonly performed during implant guidance. Approach: Images acquired from a mobile C-arm are used to perform 3D-2D registration of both the patient (via patient CT) and the robot (via a CAD model of a surgical instrument attached to its end effector, e.g., a drill guide), guiding the robot to target trajectories defined in the patient CT. The proposed approach avoids C-arm gantry motion, instead manipulating the robot to acquire disparate views of the instrument. Phantom and cadaver studies were performed to determine operating parameters and assess the accuracy of the proposed approach in aligning a standard drill guide instrument. Results: The proposed approach achieved average drill guide tip placement accuracy of 1.57 ± 0.47 mm and angular alignment of 0.35 ± 0.32 deg in phantom studies. The errors remained within 2 mm and 1 deg in cadaver experiments, comparable to the margins of error provided by surgical trackers (but operating without the need for external tracking). Conclusions: By operating at a fixed fluoroscopic perspective and eliminating the need for encoded C-arm gantry movement, the proposed approach simplifies and expedites the registration of image-guided robotic assistants and can be used with simple, non-calibrated, non-encoded, and non-isocentric C-arm systems to accurately guide a robotic device in a manner that is compatible with the surgical workflow.
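The accuracy metrics reported above (tip placement error in mm, angular alignment in degrees) can be computed as follows. This is a generic sketch of the error definitions with hypothetical pose values, not code or data from the study:

```python
import numpy as np

def trajectory_error(tip_est, dir_est, tip_tgt, dir_tgt):
    """Tip placement error (mm) and angular alignment error (deg)
    between an estimated and a target instrument trajectory."""
    trans = float(np.linalg.norm(tip_est - tip_tgt))
    cosang = np.clip(
        np.dot(dir_est, dir_tgt)
        / (np.linalg.norm(dir_est) * np.linalg.norm(dir_tgt)),
        -1.0, 1.0)
    ang = float(np.degrees(np.arccos(cosang)))
    return trans, ang

# Hypothetical example: tip off by 1.5 mm, axis off by 0.5 degrees.
t, a = trajectory_error(
    np.array([0.0, 0.0, 1.5]), np.array([0.0, 0.0, 1.0]),
    np.zeros(3),
    np.array([0.0, np.sin(np.radians(0.5)), np.cos(np.radians(0.5))]))
print(round(t, 2), round(a, 2))  # 1.5 0.5
```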
Affiliation(s)
- Rohan C. Vijayan
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Runze Han
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Pengwei Wu
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Niral M. Sheth
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Michael D. Ketcha
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Prasad Vagdargi
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
- Greg M. Osgood
- Johns Hopkins Medicine, Department of Orthopaedic Surgery, Baltimore, Maryland, United States
- Jeffrey H. Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
- Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
15
Zhang X, Uneri A, Wu P, Ketcha MD, Jones CK, Huang Y, Lo SFL, Helm PA, Siewerdsen JH. Long-length tomosynthesis and 3D-2D registration for intraoperative assessment of spine instrumentation. Phys Med Biol 2021;66:055008. [PMID: 33477120] [DOI: 10.1088/1361-6560/abde96]
Abstract
PURPOSE A system for long-length intraoperative imaging is reported based on longitudinal motion of an O-arm gantry featuring a multi-slot collimator. We assess the utility of long-length tomosynthesis and the geometric accuracy of 3D image registration for surgical guidance and evaluation of long spinal constructs. METHODS A multi-slot collimator with tilted apertures was integrated into an O-arm system for long-length imaging. The multi-slot projective geometry leads to slight view disparity in both long-length projection images (referred to as 'line scans') and tomosynthesis 'slot reconstructions' produced using a weighted-backprojection method. The radiation dose for long-length imaging was measured, and the utility of long-length, intraoperative tomosynthesis was evaluated in phantom and cadaver studies. Leveraging the depth resolution provided by parallax views, an algorithm for 3D-2D registration of the patient and surgical devices was adapted for registration with line scans and slot reconstructions. Registration performance using single-plane or dual-plane long-length images was evaluated and compared to registration accuracy achieved using standard dual-plane radiographs. RESULTS Longitudinal coverage of ∼50-64 cm was achieved with a single long-length slot scan, providing a field-of-view (FOV) up to (40 × 64) cm2, depending on patient positioning. The dose-area product (reference point air kerma × x-ray field area) for a slot scan ranged from ∼702-1757 mGy·cm2, equivalent to ∼2.5 s of fluoroscopy and comparable to other long-length imaging systems. Long-length scanning produced high-resolution tomosynthesis reconstructions, covering ∼12-16 vertebral levels. 3D image registration using dual-plane slot reconstructions achieved median target registration error (TRE) of 1.2 mm and 0.6° in cadaver studies, outperforming registration to dual-plane line scans (TRE = 2.8 mm and 2.2°) and radiographs (TRE = 2.5 mm and 1.1°). 3D registration using single-plane slot reconstructions leveraged the ∼7-14° angular separation between slots to achieve median TRE ∼2 mm and <2° from a single scan. CONCLUSION The multi-slot configuration provided intraoperative visualization of long spine segments, facilitating target localization, assessment of global spinal alignment, and evaluation of long surgical constructs. 3D-2D registration to long-length tomosynthesis reconstructions yielded a promising means of guidance and verification with accuracy exceeding that of 3D-2D registration to conventional radiographs.
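Tomosynthesis depth selectivity can be illustrated with a simple shift-and-add sketch. Note that the study uses a weighted-backprojection method with a multi-slot geometry; this toy example (integer shifts, a single in-focus plane) only conveys the general principle that parallax views reinforce structures at one depth:

```python
import numpy as np

def shift_and_add(projections, shifts):
    """Toy shift-and-add tomosynthesis: average projections after
    integer shifts that bring one depth plane into focus."""
    acc = np.zeros_like(projections[0], dtype=float)
    for proj, s in zip(projections, shifts):
        acc += np.roll(proj, s, axis=1)
    return acc / len(projections)

# A point object seen from 5 parallax views; shifting each view back
# by its known disparity brings the point's depth plane into focus.
base = np.zeros((1, 32))
base[0, 10] = 1.0
projs = [np.roll(base, k, axis=1) for k in (-2, -1, 0, 1, 2)]
slice_in_focus = shift_and_add(projs, [2, 1, 0, -1, -2])
print(slice_in_focus[0, 10])  # 1.0
```

Structures at other depths would have mismatched disparities and blur out rather than reinforce.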
Affiliation(s)
- Xiaoxuan Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
16
Uneri A, Wu P, Jones CK, Ketcha MD, Vagdargi P, Han R, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Data-Driven Deformable 3D-2D Registration for Guiding Neuroelectrode Placement in Deep Brain Stimulation. Proc SPIE Int Soc Opt Eng 2021;11598:115981B. [PMID: 35982943] [PMCID: PMC9382676] [DOI: 10.1117/12.2582160]
Abstract
PURPOSE Deep brain stimulation is a neurosurgical procedure used in the treatment of a growing spectrum of movement disorders. Inaccuracies in electrode placement, however, can result in poor symptom control or adverse effects and confound variability in clinical outcomes. A deformable 3D-2D registration method is presented for high-precision 3D guidance of neuroelectrodes. METHODS The approach employs a model-based, deformable algorithm for 3D-2D image registration. Variations in lead design are captured in a parametric 3D model based on a B-spline curve. The registration is solved through iterative optimization of 16 degrees of freedom that maximize image similarity between the 2 acquired radiographs and simulated forward projections of the neuroelectrode model. The approach was evaluated in phantom models with respect to pertinent imaging parameters, including view selection and imaging dose. RESULTS The results demonstrate an accuracy of (0.2 ± 0.2) mm in 3D localization of individual electrodes. The solution was observed to be robust to changes in pertinent imaging parameters, demonstrating accurate localization with ≥20° view separation and at 1/10th the dose of a standard fluoroscopy frame. CONCLUSIONS The presented approach provides the means for guiding neuroelectrode placement from 2 low-dose radiographic images in a manner that accommodates potential deformations at the target anatomical site. Future work will focus on improving runtime through learning-based initialization, application in reducing metal artifacts for 3D verification of placement, and extensive evaluation in clinical data from an IRB study underway.
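A parametric lead model based on a B-spline curve, as described here, reduces at its core to evaluating basis-weighted control points. A minimal sketch of one uniform cubic B-spline segment follows (the control-point values are made up; the study's actual 16-DOF deformable model is richer than this single segment):

```python
import numpy as np

def cubic_bspline_point(ctrl, t):
    """Evaluate one uniform cubic B-spline segment at t in [0, 1].
    ctrl: (4, 3) array of four consecutive 3D control points."""
    B = np.array([[-1,  3, -3, 1],
                  [ 3, -6,  3, 0],
                  [-3,  0,  3, 0],
                  [ 1,  4,  1, 0]]) / 6.0   # uniform cubic basis matrix
    T = np.array([t**3, t**2, t, 1.0])
    return (T @ B) @ ctrl

# A gently curving toy 'electrode' centerline (hypothetical values, mm):
ctrl = np.array([[0.0, 0.0,  0.0],
                 [0.0, 1.0, 10.0],
                 [1.0, 2.0, 20.0],
                 [3.0, 3.0, 30.0]])
mid = cubic_bspline_point(ctrl, 0.5)
```

Because the basis weights sum to one, the curve stays inside the convex hull of its control points, which keeps the optimized lead shape well behaved.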
Affiliation(s)
- A. Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- P. Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- C. K. Jones
- Department of Computer Science, Johns Hopkins University, Baltimore, MD
- M. D. Ketcha
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- P. Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD
- R. Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- M. Luciano
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD
- W. S. Anderson
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD
- J. H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- Department of Computer Science, Johns Hopkins University, Baltimore, MD
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD
17
Han R, Uneri A, Vijayan RC, Wu P, Vagdargi P, Sheth N, Vogt S, Kleinszig G, Osgood GM, Siewerdsen JH. Fracture reduction planning and guidance in orthopaedic trauma surgery via multi-body image registration. Med Image Anal 2020;68:101917. [PMID: 33341493] [DOI: 10.1016/j.media.2020.101917]
Abstract
PURPOSE Surgical reduction of pelvic fracture is a challenging procedure, and accurate restoration of natural morphology is essential to obtaining a positive functional outcome. The procedure often requires extensive preoperative planning, long fluoroscopic exposure times, and trial-and-error to achieve accurate reduction. We report a multi-body registration framework for reduction planning using preoperative CT and intraoperative guidance using routine 2D fluoroscopy that could help address such challenges. METHODS The framework starts with semi-automatic segmentation of fractured bone fragments in preoperative CT using continuous max-flow. For reduction planning, a multi-to-one registration is performed to register bone fragments to an adaptive template that adjusts to patient-specific bone shapes and poses. The framework further registers bone fragments to intraoperative fluoroscopy to provide 2D fluoroscopy guidance and/or 3D navigation relative to the reduction plan. The framework was investigated in three studies: (1) a simulation study of 40 CT images simulating three fracture categories (unilateral two-body, unilateral three-body, and bilateral two-body); (2) a proof-of-concept cadaver study to mimic the clinical scenario; and (3) a retrospective clinical study investigating feasibility in three cases of increasing severity and accuracy requirement. RESULTS Segmentation of simulated pelvic fractures demonstrated a Dice coefficient of 0.92±0.06. Reduction planning using the adaptive template achieved 2-3 mm and 2-3° error for the three fracture categories, significantly better than planning based on mirroring of contralateral anatomy. 3D-2D registration yielded ~2 mm and 0.5° accuracy, providing accurate guidance with respect to the preoperative reduction plan. The cadaver study and retrospective clinical study demonstrated comparable accuracy: ~0.90 Dice coefficient in segmentation, ~3 mm accuracy in reduction planning, and ~2 mm accuracy in 3D-2D registration. CONCLUSION The registration framework demonstrated planning and guidance accuracy within clinical requirements in both simulation and clinical feasibility studies for a broad range of fracture-dislocation patterns. Using routinely acquired preoperative CT and intraoperative fluoroscopy, the framework could improve the accuracy of pelvic fracture reduction, reduce radiation dose, and integrate well with common clinical workflow without the need for additional navigation systems.
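The Dice coefficient used above to score segmentation overlap can be stated in a few lines. This is a generic sketch with toy binary masks, illustrating only the evaluation metric (not the continuous max-flow segmentation itself):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), in [0, 1]."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Two overlapping toy 'bone fragment' masks:
a = np.zeros((10, 10), bool); a[2:8, 2:8] = True   # 36 px
b = np.zeros((10, 10), bool); b[4:8, 2:8] = True   # 24 px
print(dice(a, b))   # 2*24/(36+24) = 0.8
```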
Affiliation(s)
- R Han
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
- A Uneri
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
- R C Vijayan
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
- P Wu
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
- P Vagdargi
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD, United States
- N Sheth
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
- S Vogt
- Siemens Healthineers, Erlangen, Germany
- G M Osgood
- Department of Orthopaedic Surgery, The Johns Hopkins Hospital, Baltimore, MD, United States
- J H Siewerdsen
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
18
Feddal A, Escalard S, Delvoye F, Fahed R, Desilles JP, Zuber K, Redjem H, Savatovsky JS, Ciccio G, Smajda S, Ben Maacha M, Mazighi M, Piotin M, Blanc R. Fusion Image Guidance for Supra-Aortic Vessel Catheterization in Neurointerventions: A Feasibility Study. AJNR Am J Neuroradiol 2020;41:1663-1669. [PMID: 32819903] [DOI: 10.3174/ajnr.a6707]
Abstract
BACKGROUND AND PURPOSE Endovascular navigation through tortuous vessels can be complex, and tools that can optimize this access phase need to be developed. Our aim was to evaluate the feasibility of supra-aortic vessel catheterization guidance by means of live fluoroscopy fusion with MR angiography or CT angiography. MATERIALS AND METHODS Twenty-five patients underwent preinterventional diagnostic MRA, and 8 patients underwent CTA. Fusion guidance was evaluated in 35 sessions of catheterization, targeting a total of 151 supra-aortic vessels. The time for MRA/CTA segmentation and fluoroscopy with MRA/CTA coregistration was recorded. The feasibility of fusion guidance was evaluated by recording the catheterizations executed by interventional neuroradiologists according to a standard technique under fluoroscopy and conventional road-mapping, independent of the fusion guidance. Precision of the fusion roadmap was evaluated by measuring (on a semiquantitative 3-point scale) the maximum offset between the position of the guidewires/catheters and the vasculature on the virtual CTA/MRA images. The targeted vessels were divided into 2 groups according to their position relative to the level of the aortic arch. RESULTS The average time needed for segmentation and image coregistration was 7 ± 2 minutes. The MRA/CTA virtual roadmap overlaid on live fluoroscopy was considered accurate in 84.8% (128/151) of the assessed landmarks, with higher accuracy for the group of vessels closer to the aortic arch (92.4%; OR, 4.88; 95% CI, 1.83-11.66; P = .003). CONCLUSIONS Fluoroscopy with MRA/CTA fusion guidance for supra-aortic vessel interventions is feasible. Further improvements of the technique are needed to increase accuracy at the cervical level, and further studies are needed to assess procedural time savings and reductions in x-ray radiation exposure.
Affiliation(s)
- A Feddal
- From the Interventional Neuroradiology Unit (A.F., S.E., F.D., R.F., J.P.D., K.Z., H.R., G.C., S.S., M.B.M., M.M., M.P., R.B.), Fondation Ophtalmologique Adolphe de Rothschild, Paris, France
- S Escalard
- From the Interventional Neuroradiology Unit (A.F., S.E., F.D., R.F., J.P.D., K.Z., H.R., G.C., S.S., M.B.M., M.M., M.P., R.B.), Fondation Ophtalmologique Adolphe de Rothschild, Paris, France
- F Delvoye
- From the Interventional Neuroradiology Unit (A.F., S.E., F.D., R.F., J.P.D., K.Z., H.R., G.C., S.S., M.B.M., M.M., M.P., R.B.), Fondation Ophtalmologique Adolphe de Rothschild, Paris, France
- R Fahed
- From the Interventional Neuroradiology Unit (A.F., S.E., F.D., R.F., J.P.D., K.Z., H.R., G.C., S.S., M.B.M., M.M., M.P., R.B.), Fondation Ophtalmologique Adolphe de Rothschild, Paris, France
- J P Desilles
- From the Interventional Neuroradiology Unit (A.F., S.E., F.D., R.F., J.P.D., K.Z., H.R., G.C., S.S., M.B.M., M.M., M.P., R.B.), Fondation Ophtalmologique Adolphe de Rothschild, Paris, France
- Université Paris Denis Diderot (J.P.D., M.M., M.P., R.B.), Sorbonne Paris Cite, Paris, France
- Laboratory of Vascular Translational Science (J.P.D., M.M., M.P., R.B.), U1148 Institut National de la Santé et de la Recherche Médicale, Paris, France
- K Zuber
- From the Interventional Neuroradiology Unit (A.F., S.E., F.D., R.F., J.P.D., K.Z., H.R., G.C., S.S., M.B.M., M.M., M.P., R.B.), Fondation Ophtalmologique Adolphe de Rothschild, Paris, France
- H Redjem
- From the Interventional Neuroradiology Unit (A.F., S.E., F.D., R.F., J.P.D., K.Z., H.R., G.C., S.S., M.B.M., M.M., M.P., R.B.), Fondation Ophtalmologique Adolphe de Rothschild, Paris, France
- J S Savatovsky
- Diagnostic Neuroradiology Unit (J.S.S.), Fondation Ophtalmologique Adolphe de Rothschild, Paris, France
- G Ciccio
- From the Interventional Neuroradiology Unit (A.F., S.E., F.D., R.F., J.P.D., K.Z., H.R., G.C., S.S., M.B.M., M.M., M.P., R.B.), Fondation Ophtalmologique Adolphe de Rothschild, Paris, France
- S Smajda
- From the Interventional Neuroradiology Unit (A.F., S.E., F.D., R.F., J.P.D., K.Z., H.R., G.C., S.S., M.B.M., M.M., M.P., R.B.), Fondation Ophtalmologique Adolphe de Rothschild, Paris, France
- M Ben Maacha
- From the Interventional Neuroradiology Unit (A.F., S.E., F.D., R.F., J.P.D., K.Z., H.R., G.C., S.S., M.B.M., M.M., M.P., R.B.), Fondation Ophtalmologique Adolphe de Rothschild, Paris, France
- M Mazighi
- From the Interventional Neuroradiology Unit (A.F., S.E., F.D., R.F., J.P.D., K.Z., H.R., G.C., S.S., M.B.M., M.M., M.P., R.B.), Fondation Ophtalmologique Adolphe de Rothschild, Paris, France
- Université Paris Denis Diderot (J.P.D., M.M., M.P., R.B.), Sorbonne Paris Cite, Paris, France
- Laboratory of Vascular Translational Science (J.P.D., M.M., M.P., R.B.), U1148 Institut National de la Santé et de la Recherche Médicale, Paris, France
- M Piotin
- From the Interventional Neuroradiology Unit (A.F., S.E., F.D., R.F., J.P.D., K.Z., H.R., G.C., S.S., M.B.M., M.M., M.P., R.B.), Fondation Ophtalmologique Adolphe de Rothschild, Paris, France
- Université Paris Denis Diderot (J.P.D., M.M., M.P., R.B.), Sorbonne Paris Cite, Paris, France
- Laboratory of Vascular Translational Science (J.P.D., M.M., M.P., R.B.), U1148 Institut National de la Santé et de la Recherche Médicale, Paris, France
- R Blanc
- From the Interventional Neuroradiology Unit (A.F., S.E., F.D., R.F., J.P.D., K.Z., H.R., G.C., S.S., M.B.M., M.M., M.P., R.B.), Fondation Ophtalmologique Adolphe de Rothschild, Paris, France
- Université Paris Denis Diderot (J.P.D., M.M., M.P., R.B.), Sorbonne Paris Cite, Paris, France
- Laboratory of Vascular Translational Science (J.P.D., M.M., M.P., R.B.), U1148 Institut National de la Santé et de la Recherche Médicale, Paris, France
19
Han R, Uneri A, Ketcha M, Vijayan R, Sheth N, Wu P, Vagdargi P, Vogt S, Kleinszig G, Osgood GM, Siewerdsen JH. Multi-body 3D-2D registration for image-guided reduction of pelvic dislocation in orthopaedic trauma surgery. Phys Med Biol 2020;65:135009. [PMID: 32217833] [PMCID: PMC8647002] [DOI: 10.1088/1361-6560/ab843c]
Abstract
Surgical reduction of pelvic dislocation is a challenging procedure with poor long-term prognosis if reduction does not accurately restore natural morphology. The procedure often requires long fluoroscopic exposure times and trial-and-error to achieve accurate reduction. We report a method to automatically compute the target pose of dislocated bones in preoperative CT and provide 3D guidance of reduction using routine 2D fluoroscopy. A pelvic statistical shape model (SSM) and a statistical pose model (SPM) were formed from an atlas of 40 pelvic CT images. Multi-body bone segmentation was achieved by mapping the SSM to a preoperative CT via an active shape model. The target reduction pose for the dislocated bone is estimated by fitting the poses of undislocated bones to the SPM. Intraoperatively, multiple bones are registered to fluoroscopy images via 3D-2D registration to obtain 3D pose estimates from 2D images. The method was examined in three studies: (1) a simulation study of 40 CT images simulating a range of dislocation patterns; (2) a pelvic phantom study with controlled dislocation of the left innominate bone; (3) a clinical case study investigating feasibility in images acquired during pelvic reduction surgery. Experiments investigated the accuracy of registration as a function of initialization error (capture range), image quality (radiation dose and image noise), and field of view (FOV) size. The simulation study achieved target pose estimation with translational error of median 2.3 mm (1.4 mm interquartile range, IQR) and rotational error of 2.1° (1.3° IQR). 3D-2D registration yielded 0.3 mm (0.2 mm IQR) in-plane and 0.3 mm (0.2 mm IQR) out-of-plane translational error, with in-plane capture range of ±50 mm and out-of-plane capture range of ±120 mm. The phantom study demonstrated 3D-2D target registration error of 2.5 mm (1.5 mm IQR), and the method was robust over a large dose range, down to 5 µGy/frame (an order of magnitude lower than the nominal fluoroscopic dose). The clinical feasibility study demonstrated accurate registration with both preoperative and intraoperative radiographs, yielding 3.1 mm (1.0 mm IQR) projection distance error with robust performance for FOV ranging from 340 × 340 mm2 to 170 × 170 mm2 (at the image plane). The method demonstrated accurate estimation of the target reduction pose in simulation, phantom, and a clinical feasibility study for a broad range of dislocation patterns, initialization errors, dose levels, and FOV sizes. The system provides a novel means of guidance and assessment of pelvic reduction from routinely acquired preoperative CT and intraoperative fluoroscopy. The method has the potential to reduce radiation dose by minimizing trial-and-error and to improve outcomes by guiding more accurate reduction of joint dislocations.
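A statistical shape model of the kind formed from the 40-image atlas is, at its core, PCA over corresponding landmark vectors. A minimal sketch with random stand-in data follows (the atlas, point count, and mode coefficients are all placeholders, not the study's pelvic data):

```python
import numpy as np

def build_ssm(shapes):
    """Build a linear statistical shape model from an atlas.
    shapes: (n_samples, n_points*dim) matrix of corresponding landmarks."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # Modes of variation = principal components of the centered atlas.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt                        # each row is one mode
    var = s**2 / (len(shapes) - 1)    # variance explained per mode
    return mean, modes, var

def synthesize(mean, modes, coeffs):
    """Instantiate a shape from the leading mode coefficients."""
    return mean + coeffs @ modes[:len(coeffs)]

rng = np.random.default_rng(0)
atlas = rng.normal(size=(40, 12))     # 40 toy shapes, 6 2D points each
mean, modes, var = build_ssm(atlas)
shape = synthesize(mean, modes, np.array([1.5, -0.5]))
```

Fitting such a model to a new patient (the active shape model step) then amounts to estimating the coefficients that best explain the observed anatomy.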
Affiliation(s)
- R Han
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- A Uneri
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- M Ketcha
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- R Vijayan
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- N Sheth
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- P Wu
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- P Vagdargi
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- S Vogt
- Siemens Healthineers, Erlangen, Germany
- G M Osgood
- Department of Orthopaedic Surgery, The Johns Hopkins Hospital, Baltimore, MD, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
20
Schaffert R, Wang J, Fischer P, Maier A, Borsdorf A. Robust Multi-View 2-D/3-D Registration Using Point-To-Plane Correspondence Model. IEEE Trans Med Imaging 2020;39:161-174. [PMID: 31199258] [DOI: 10.1109/tmi.2019.2922931]
Abstract
In minimally invasive procedures, the clinician relies on image guidance to observe and navigate the operation site. In order to show structures which are not visible in the live X-ray images, such as vessels or planning annotations, X-ray images can be augmented with pre-operatively acquired images. Accurate image alignment is needed and can be provided by 2-D/3-D registration. In this paper, a multi-view registration method based on the point-to-plane correspondence model is proposed. The correspondence model is extended to be independent of the camera coordinates used, and different multi-view registration schemes are introduced and compared. Evaluation is performed for a wide range of clinically relevant registration scenarios. We show for different applications that registration using correspondences from both views simultaneously provides accurate and robust registration, while the performance of the other schemes varies considerably. Our method also outperforms the state-of-the-art method for cerebral angiography registration, achieving a capture range of 18 mm and an accuracy of 0.22±0.07 mm. Furthermore, investigations on the minimum angle between the views are performed in order to provide accurate and robust registration while minimizing the obstruction to the clinical workflow. We show that small angles around 30° are sufficient to provide reliable registration results.
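A point-to-plane correspondence model drives registration by minimizing signed distances of transformed points to correspondence planes. A minimal sketch of the residual computation follows (the planes and pose here are toy values; a real solver would build the planes from contour points and image gradients and iterate over a 6-DOF pose):

```python
import numpy as np

def point_to_plane_residuals(pts, plane_pts, plane_normals, R, t):
    """Signed distances of rigidly transformed 3D points to their
    corresponding planes (each plane given by a point and unit normal)."""
    moved = pts @ R.T + t
    return np.einsum('ij,ij->i', moved - plane_pts, plane_normals)

# Toy example: identity pose, two points one unit above the z=0 plane.
pts = np.array([[0.0, 0.0, 1.0], [2.0, 3.0, 1.0]])
plane_pts = np.zeros((2, 3))
plane_normals = np.tile([0.0, 0.0, 1.0], (2, 1))
res = point_to_plane_residuals(pts, plane_pts, plane_normals,
                               np.eye(3), np.zeros(3))
print(res)  # [1. 1.]
```

Stacking these residuals over many correspondences yields a linear least-squares problem per iteration, which is what makes the model attractive for fast multi-view registration.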
21
Capostagno S, Stayman JW, Jacobson M, Ehtiati T, Weiss CR, Siewerdsen JH. Task-driven source-detector trajectories in cone-beam computed tomography: II. Application to neuroradiology. J Med Imaging (Bellingham) 2019;6:025004. [PMID: 31093518] [DOI: 10.1117/1.jmi.6.2.025004]
Abstract
We apply the methodology detailed in "Task-driven source-detector trajectories in cone-beam computed tomography: I. Theory and methods" by Stayman et al. for task-driven optimization of source-detector orbits in cone-beam computed tomography (CBCT) to scenarios emulating imaging tasks in interventional neuroradiology. The task-driven imaging framework is used to optimize the CBCT source-detector trajectory by maximizing the detectability index, d′. The approach was applied to simulated cases of endovascular embolization of an aneurysm and arteriovenous malformation and was translated to real data first using a CBCT test bench followed by implementation on an interventional robotic C-arm. Task-driven trajectories were found to generally favor higher fidelity (i.e., less noisy) views, with an average increase in d′ ranging from 7% to 28%. Visually, this resulted in improved conspicuity of particular stimuli by reducing the noise and altering the noise correlation to a form distinct from the spatial frequencies associated with the imaging task. The improvements in detectability and the demonstration of the task-driven workflow using a real interventional imaging system show the potential of the task-driven imaging framework to improve imaging performance on motorized, multiaxis C-arms in neuroradiology.
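The detectability index d′ that drives this trajectory optimization combines system MTF, noise-power spectrum (NPS), and a task function. Below is a sketch of a discrete non-prewhitening-observer version on a 1D frequency grid; all inputs are hypothetical stand-ins, whereas the papers use full predictors of local noise and resolution tied to the reconstruction algorithm:

```python
import numpy as np

def detectability_index(mtf, nps, w_task, df=1.0):
    """Non-prewhitening detectability index on a discrete frequency grid:
    d' = sum(W^2 MTF^2) df / sqrt(sum(W^2 MTF^2 NPS) df)."""
    s2 = (w_task * mtf) ** 2
    return s2.sum() * df / np.sqrt((s2 * nps).sum() * df)

# Hypothetical 1D example: ideal MTF, white noise, low-frequency task.
f = np.linspace(0.0, 1.0, 64)
mtf = np.ones_like(f)
nps = np.full_like(f, 0.5)
w = np.exp(-f)                    # made-up task function
dprime = detectability_index(mtf, nps, w, df=f[1] - f[0])
```

As expected for white noise, halving the NPS raises d′ by a factor of √2, which is the kind of view-dependent trade-off the orbit optimization exploits.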
Affiliation(s)
- Sarah Capostagno
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- J Webster Stayman
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Matthew Jacobson
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Tina Ehtiati
- Siemens Medical Solutions USA, Inc., Imaging and Therapy Systems, Hoffman Estates, Illinois, United States
- Clifford R Weiss
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States; Johns Hopkins University, Department of Radiology and Radiological Science, Baltimore, Maryland, United States
- Jeffrey H Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States; Johns Hopkins University, Department of Radiology and Radiological Science, Baltimore, Maryland, United States
22
Stayman JW, Capostagno S, Gang GJ, Siewerdsen JH. Task-driven source-detector trajectories in cone-beam computed tomography: I. Theory and methods. J Med Imaging (Bellingham) 2019; 6:025002. [PMID: 31065569 PMCID: PMC6497008 DOI: 10.1117/1.jmi.6.2.025002] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2018] [Accepted: 03/29/2019] [Indexed: 11/14/2022] Open
Abstract
We develop a mathematical framework for the design of orbital trajectories that are optimal to a particular imaging task (or tasks) in advanced cone-beam computed tomography systems that have the capability of general source-detector positioning. The framework allows various parameterizations of the orbit as well as constraints based on imaging system capabilities. To accommodate nonstandard system geometries, a model-based iterative reconstruction method is applied. Such algorithms generally complicate the assessment and prediction of reconstructed image properties; however, we leverage efficient implementations of analytical predictors of local noise and spatial resolution that incorporate dependencies of the reconstruction algorithm on patient anatomy, x-ray technique, and geometry. These image property predictors serve as inputs to a task-based performance metric defined by detectability index, which is optimized with respect to the orbital parameters of data acquisition. We investigate the framework of the task-driven trajectory design in several examples to examine the dependence of optimal source-detector trajectories on the imaging task (or tasks), including location and spatial-frequency dependence. A variety of multitask objectives are also investigated, and the advantages to imaging performance are quantified in simulation studies.
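The detectability index that drives this trajectory optimization is conventionally computed from the modulation transfer function (MTF), noise-power spectrum (NPS), and a frequency-domain task function. A sketch of the non-prewhitening form is shown below as an assumed illustration; the paper's exact observer model, predictors, and normalization may differ, and the function name is invented:

```python
import numpy as np

def detectability_npw(mtf, nps, w_task, df):
    """Non-prewhitening detectability index d' over a 2D spatial-frequency grid.

    mtf, nps, w_task: arrays sampled on the same frequency grid;
    df: area of one frequency bin, e.g. in (cycles/mm)^2.
    """
    signal = np.abs(w_task) ** 2 * mtf ** 2
    num = (np.sum(signal) * df) ** 2   # (integral of |W|^2 T^2)^2
    den = np.sum(signal * nps) * df    # integral of |W|^2 T^2 S
    return np.sqrt(num / den)
```

In a task-driven design loop, the orbit parameters would be varied and d′ re-evaluated from predicted MTF and NPS at each candidate geometry.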
Affiliation(s)
- J. Webster Stayman
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Sarah Capostagno
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Grace J. Gang
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Jeffrey H. Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Radiology and Radiological Science, Baltimore, Maryland, United States
23
Uneri A, Zhang X, Stayman JW, Helm PA, Osgood GM, Theodore N, Siewerdsen JH. 3D-2D Image Registration in Virtual Long-Film Imaging: Application to Spinal Deformity Correction. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2019; 10951:109511H. [PMID: 34290470 PMCID: PMC8292105 DOI: 10.1117/12.2513679] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
PURPOSE Intraoperative 2D virtual long-film (VLF) imaging is investigated for 3D guidance and confirmation of the surgical product in spinal deformity correction. Multi-slot-scan geometry (rather than a single-slot "topogram") is exploited to produce parallax views of the scene for accurate 3D colocalization from a single radiograph. METHODS The multi-slot approach uses additional angled collimator apertures to form fan-beams with disparate views (parallax) of anatomy and instrumentation and to extend field-of-view beyond the linear motion limits. Combined with a knowledge of surgical implants (pedicle screws and/or spinal rods modeled as "known components"), 3D-2D image registration is used to solve for pose estimates via optimization of image gradient correlation. Experiments were conducted in cadaver studies emulating the system geometry of the O-arm (Medtronic, Minneapolis MN). RESULTS Experiments demonstrated feasibility of multi-slot VLF and quantified the geometric accuracy of 3D-2D registration using VLF acquisitions. Registration of pedicle screws from a single VLF yielded mean target registration error of (2.0±0.7) mm, comparable to the accuracy of surgical trackers and registration using multiple radiographs (e.g., AP and LAT). CONCLUSIONS 3D-2D registration in a single VLF image offers a promising new solution for image guidance in spinal deformity correction. The ability to accurately resolve pose from a single view absolves workflow challenges of multiple-view registration and suggests application beyond spine surgery, such as reduction of long-bone fractures.
Affiliation(s)
- A. Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- X. Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- J. W. Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- G. M. Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Medicine, Baltimore MD
- N. Theodore
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD
- J. H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD
24
Yi T, Ramchandran V, Siewerdsen JH, Uneri A. Robotic drill guide positioning using known-component 3D-2D image registration. J Med Imaging (Bellingham) 2018; 5:021212. [PMID: 29430481 DOI: 10.1117/1.jmi.5.2.021212] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2017] [Accepted: 01/04/2018] [Indexed: 11/14/2022] Open
Abstract
A method for x-ray image-guided robotic instrument positioning is reported and evaluated in preclinical studies of spinal pedicle screw placement with the aim of improving delivery of transpedicle K-wires and screws. The known-component (KC) registration algorithm was used to register the three-dimensional patient CT and drill guide surface model to intraoperative two-dimensional radiographs. Resulting transformations, combined with offline hand-eye calibration, drive the robotically held drill guide to target trajectories defined in the preoperative CT. The method was assessed in comparison with a more conventional tracker-based approach, and robustness to clinically realistic errors was tested in phantom and cadaver. Deviations from planned trajectories were analyzed in terms of target registration error (TRE) at the tooltip (mm) and approach angle (deg). In phantom studies, the KC approach resulted in [Formula: see text] and [Formula: see text], comparable with accuracy in tracker-based approach. In cadaver studies with realistic anatomical deformation, the KC approach yielded [Formula: see text] and [Formula: see text], with statistically significant improvement versus tracker ([Formula: see text] and [Formula: see text]). Robustness to deformation is attributed to relatively local rigidity of anatomy in radiographic views. X-ray guidance offered accurate robotic positioning and could fit naturally within clinical workflow of fluoroscopically guided procedures.
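Target registration error (TRE), the accuracy metric reported throughout these studies, compares where target points land under the estimated versus the reference transform. A minimal sketch (the function name and the 4×4 homogeneous rigid-transform convention are assumptions for illustration):

```python
import numpy as np

def mean_tre(targets, T_est, T_true):
    """Mean target registration error between two 4x4 rigid transforms,
    evaluated at an N x 3 array of target points (e.g. a tool tip)."""
    pts = np.hstack([targets, np.ones((len(targets), 1))])  # homogeneous N x 4
    diff = (pts @ T_est.T - pts @ T_true.T)[:, :3]          # per-target displacement
    return np.linalg.norm(diff, axis=1).mean()              # mean Euclidean error (mm)
```

A pure 1 mm translation error in the estimated transform yields a TRE of exactly 1 mm at every target, which is a useful sanity check.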
Affiliation(s)
- Thomas Yi
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Vignesh Ramchandran
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Jeffrey H Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
25
Goerres J, Uneri A, Jacobson M, Ramsay B, De Silva T, Ketcha M, Han R, Manbachi A, Vogt S, Kleinszig G, Wolinsky JP, Osgood G, Siewerdsen JH. Planning, guidance, and quality assurance of pelvic screw placement using deformable image registration. Phys Med Biol 2017; 62:9018-9038. [PMID: 29058687 PMCID: PMC5868367 DOI: 10.1088/1361-6560/aa954f] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Percutaneous pelvic screw placement is challenging due to narrow bone corridors surrounded by vulnerable structures and difficult visual interpretation of complex anatomical shapes in 2D x-ray projection images. To address these challenges, a system for planning, guidance, and quality assurance (QA) is presented, providing functionality analogous to surgical navigation, but based on robust 3D-2D image registration techniques using fluoroscopy images already acquired in routine workflow. Two novel aspects of the system are investigated: automatic planning of pelvic screw trajectories and the ability to account for deformation of surgical devices (K-wire deflection). Atlas-based registration is used to calculate a patient-specific plan of screw trajectories in preoperative CT. 3D-2D registration aligns the patient to CT within the projective geometry of intraoperative fluoroscopy. Deformable known-component registration (dKC-Reg) localizes the surgical device, and the combination of plan and device location is used to provide guidance and QA. A leave-one-out analysis evaluated the accuracy of automatic planning, and a cadaver experiment compared the accuracy of dKC-Reg to rigid approaches (e.g. optical tracking). Surgical plans conformed within the bone cortex by 3-4 mm for the narrowest corridor (superior pubic ramus) and >5 mm for the widest corridor (tear drop). The dKC-Reg algorithm localized the K-wire tip within 1.1 mm and 1.4° and was consistently more accurate than rigid-body tracking (errors up to 9 mm). The system was shown to automatically compute reliable screw trajectories and accurately localize deformed surgical devices (K-wires). Such capability could improve guidance and QA in orthopaedic surgery, where workflow is impeded by manual planning, conventional tool trackers add complexity and cost, rigid tool assumptions are often inaccurate, and qualitative interpretation of complex anatomy from 2D projections is prone to trial-and-error with extended fluoroscopy time.
Affiliation(s)
- J Goerres
- Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
26
Ouadah S, Jacobson M, Stayman JW, Ehtiati T, Weiss C, Siewerdsen JH. Correction of patient motion in cone-beam CT using 3D-2D registration. Phys Med Biol 2017; 62:8813-8831. [PMID: 28994668 PMCID: PMC5894892 DOI: 10.1088/1361-6560/aa9254] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Cone-beam CT (CBCT) is increasingly common in guidance of interventional procedures, but can be subject to artifacts arising from patient motion during fairly long (~5-60 s) scan times. We present a fiducial-free method to mitigate motion artifacts using 3D-2D image registration that simultaneously corrects residual errors in the intrinsic and extrinsic parameters of geometric calibration. The 3D-2D registration process registers each projection to a prior 3D image by maximizing gradient orientation using the covariance matrix adaptation-evolution strategy optimizer. The resulting rigid transforms are applied to the system projection matrices, and a 3D image is reconstructed via model-based iterative reconstruction. Phantom experiments were conducted using a Zeego robotic C-arm to image a head phantom undergoing 5-15 cm translations and 5-15° rotations. To further test the algorithm, clinical images were acquired with a CBCT head scanner in which long scan times were susceptible to significant patient motion. CBCT images were reconstructed using a penalized likelihood objective function. For phantom studies the structural similarity (SSIM) between motion-free and motion-corrected images was >0.995, with significant improvement (p < 0.001) compared to the SSIM values of uncorrected images. Additionally, motion-corrected images exhibited a point-spread function with full-width at half maximum comparable to that of the motion-free reference image. Qualitative comparison of the motion-corrupted and motion-corrected clinical images demonstrated a significant improvement in image quality after motion correction. This indicates that the 3D-2D registration method could provide a useful approach to motion artifact correction under assumptions of local rigidity, as in the head, pelvis, and extremities. The method is highly parallelizable, and the automatic correction of residual geometric calibration errors provides added benefit that could be valuable in routine use.
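Several entries in this list pair the CMA-ES optimizer with a gradient-based similarity metric between a measured projection and a simulated forward projection. As a simplified stand-in (this paper maximizes gradient *orientation*, and the helper names here are invented), a plain gradient correlation can be sketched as the mean normalized cross-correlation of the two images' gradients:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return np.sum(a * b) / (np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)) + 1e-12)

def gradient_correlation(fixed, moving):
    """Mean NCC of the row- and column-wise image gradients; 1.0 = identical edges."""
    gr_f, gc_f = np.gradient(fixed)
    gr_m, gc_m = np.gradient(moving)
    return 0.5 * (ncc(gr_f, gr_m) + ncc(gc_f, gc_m))
```

An optimizer such as CMA-ES would perturb the pose (or projection-matrix) parameters, regenerate the simulated projection, and maximize this score.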
Affiliation(s)
- S Ouadah
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD 21205, United States of America
27
Pose-aware C-arm for automatic re-initialization of interventional 2D/3D image registration. Int J Comput Assist Radiol Surg 2017; 12:1221-1230. [PMID: 28527025 DOI: 10.1007/s11548-017-1611-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2017] [Accepted: 05/08/2017] [Indexed: 12/25/2022]
Abstract
PURPOSE In minimally invasive interventions assisted by C-arm imaging, there is a demand to fuse the intra-interventional 2D C-arm image with pre-interventional 3D patient data to enable surgical guidance. The commonly used intensity-based 2D/3D registration has a limited capture range and is sensitive to initialization. We propose to utilize an opto/X-ray C-arm system which allows to maintain the registration during intervention by automating the re-initialization for the 2D/3D image registration. Consequently, the surgical workflow is not disrupted and the interaction time for manual initialization is eliminated. METHODS We utilize two distinct vision-based tracking techniques to estimate the relative poses between different C-arm arrangements: (1) global tracking using fused depth information and (2) RGBD SLAM system for surgical scene tracking. A highly accurate multi-view calibration between RGBD and C-arm imaging devices is achieved using a custom-made multimodal calibration target. RESULTS Several in vitro studies are conducted on pelvic-femur phantom that is encased in gelatin and covered with drapes to simulate a clinically realistic scenario. The mean target registration errors (mTRE) for re-initialization using depth-only and RGB [Formula: see text] depth are 13.23 mm and 11.81 mm, respectively. 2D/3D registration yielded 75% success rate using this automatic re-initialization, compared to a random initialization which yielded only 23% successful registration. CONCLUSION The pose-aware C-arm contributes to the 2D/3D registration process by globally re-initializing the relationship of C-arm image and pre-interventional CT data. This system performs inside-out tracking, is self-contained, and does not require any external tracking devices.
28
Uneri A, De Silva T, Goerres J, Jacobson MW, Ketcha MD, Reaungamornrat S, Kleinszig G, Vogt S, Khanna AJ, Osgood GM, Wolinsky JP, Siewerdsen JH. Intraoperative evaluation of device placement in spine surgery using known-component 3D-2D image registration. Phys Med Biol 2017; 62:3330-3351. [PMID: 28233760 DOI: 10.1088/1361-6560/aa62c5] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Intraoperative x-ray radiography/fluoroscopy is commonly used to assess the placement of surgical devices in the operating room (e.g. spine pedicle screws), but qualitative interpretation can fail to reliably detect suboptimal delivery and/or breach of adjacent critical structures. We present a 3D-2D image registration method wherein intraoperative radiographs are leveraged in combination with prior knowledge of the patient and surgical components for quantitative assessment of device placement and more rigorous quality assurance (QA) of the surgical product. The algorithm is based on known-component registration (KC-Reg) in which patient-specific preoperative CT and parametric component models are used. The registration performs optimization of gradient similarity, removes the need for offline geometric calibration of the C-arm, and simultaneously solves for multiple component bodies, thereby allowing QA in a single step (e.g. spinal construct with 4-20 screws). Performance was tested in a spine phantom, and first clinical results are reported for QA of transpedicle screws delivered in a patient undergoing thoracolumbar spine surgery. Simultaneous registration of ten pedicle screws (five contralateral pairs) demonstrated mean target registration error (TRE) of 1.1 ± 0.1 mm at the screw tip and 0.7 ± 0.4° in angulation when a prior geometric calibration was used. The calibration-free formulation, with the aid of component collision constraints, achieved TRE of 1.4 ± 0.6 mm. In all cases, a statistically significant improvement (p < 0.05) was observed for the simultaneous solutions in comparison to previously reported sequential solution of individual components. Initial application in clinical data in spine surgery demonstrated TRE of 2.7 ± 2.6 mm and 1.5 ± 0.8°. The KC-Reg algorithm offers an independent check and quantitative QA of the surgical product using radiographic/fluoroscopic views acquired within standard OR workflow. Such intraoperative assessment could improve quality and safety, provide the opportunity to revise suboptimal constructs in the OR, and reduce the frequency of revision surgery.
Affiliation(s)
- A Uneri
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, United States of America. Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
29
Ouadah S, Stayman JW, Gang GJ, Ehtiati T, Siewerdsen JH. Self-calibration of cone-beam CT geometry using 3D-2D image registration. Phys Med Biol 2016; 61:2613-32. [PMID: 26961687 DOI: 10.1088/0031-9155/61/7/2613] [Citation(s) in RCA: 44] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a 'self-calibration' of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM: e.g., on the CBCT bench, FWHM = 0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p < 0.001). Similar improvements were measured in RPE: e.g., on the robotic C-arm, RPE = 0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p < 0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is applicable to situations where conventional calibration is not feasible, such as complex non-circular CBCT orbits and systems with irreproducible source-detector trajectory.
Affiliation(s)
- S Ouadah
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD 21205, USA
30
Otake Y, Wang AS, Uneri A, Kleinszig G, Vogt S, Aygun N, Lo SFL, Wolinsky JP, Gokaslan ZL, Siewerdsen JH. 3D-2D registration in mobile radiographs: algorithm development and preliminary clinical evaluation. Phys Med Biol 2015; 60:2075-90. [PMID: 25674851 DOI: 10.1088/0031-9155/60/5/2075] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
An image-based 3D-2D registration method is presented using radiographs acquired in the uncalibrated, unconstrained geometry of mobile radiography. The approach extends a previous method for six degree-of-freedom (DOF) registration in C-arm fluoroscopy (namely 'LevelCheck') to solve the 9-DOF estimate of geometry in which the position of the source and detector are unconstrained. The method was implemented using a gradient correlation similarity metric and stochastic derivative-free optimization on a GPU. Development and evaluation were conducted in three steps. First, simulation studies were performed that involved a CT scan of an anthropomorphic body phantom and 1000 randomly generated digitally reconstructed radiographs in posterior-anterior and lateral views. A median projection distance error (PDE) of 0.007 mm was achieved with 9-DOF registration compared to 0.767 mm for 6-DOF. Second, cadaver studies were conducted using mobile radiographs acquired in three anatomical regions (thorax, abdomen and pelvis) and three levels of source-detector distance (~800, ~1000 and ~1200 mm). The 9-DOF method achieved a median PDE of 0.49 mm (compared to 2.53 mm for the 6-DOF method) and demonstrated robustness in the unconstrained imaging geometry. Finally, a retrospective clinical study was conducted with intraoperative radiographs of the spine exhibiting real anatomical deformation and image content mismatch (e.g. interventional devices in the radiograph that were not in the CT), demonstrating a PDE = 1.1 mm for the 9-DOF approach. Average computation time was 48.5 s, involving 687,701 function evaluations on average, compared to 18.2 s for the 6-DOF method. Despite the greater computational load, the 9-DOF method may offer a valuable tool for target localization (e.g. decision support in level counting) as well as safety and quality assurance checks at the conclusion of a procedure (e.g. overlay of planning data on the radiograph for verification of the surgical product) in a manner consistent with natural surgical workflow.
Affiliation(s)
- Yoshito Otake
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
31
Uneri A, De Silva T, Stayman JW, Kleinszig G, Vogt S, Khanna AJ, Gokaslan ZL, Wolinsky JP, Siewerdsen JH. Known-component 3D-2D registration for quality assurance of spine surgery pedicle screw placement. Phys Med Biol 2015; 60:8007-24. [PMID: 26421941 DOI: 10.1088/0031-9155/60/20/8007] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
A 3D-2D image registration method is presented that exploits knowledge of interventional devices (e.g. K-wires or spine screws, referred to as 'known components') to extend the functionality of intraoperative radiography/fluoroscopy by providing quantitative measurement and quality assurance (QA) of the surgical product. The known-component registration (KC-Reg) algorithm uses robust 3D-2D registration combined with 3D component models of surgical devices known to be present in intraoperative 2D radiographs. Component models were investigated that vary in fidelity from simple parametric models (e.g. approximation of a screw as a simple cylinder, referred to as 'parametrically-known' component [pKC] registration) to precise models based on device-specific CAD drawings (referred to as 'exactly-known' component [eKC] registration). 3D-2D registration from three intraoperative radiographs was solved using the covariance matrix adaptation evolution strategy (CMA-ES) to maximize image-gradient similarity, relating device placement relative to 3D preoperative CT of the patient. Spine phantom and cadaver studies were conducted to evaluate registration accuracy and demonstrate QA of the surgical product by verification of the type of devices delivered and conformance within the 'acceptance window' of the spinal pedicle. Pedicle screws were successfully registered to radiographs acquired from a mobile C-arm, providing TRE of 1-4 mm and <5° using simple parametric (pKC) models, further improved to <1 mm and <1° using eKC registration. Using advanced pKC models, screws that did not match the device models specified in the surgical plan were detected with an accuracy of >99%. Visualization of registered devices relative to surgical planning and the pedicle acceptance window provided potentially valuable QA of the surgical product and reliable detection of pedicle screw breach. 3D-2D registration combined with 3D models of known surgical devices offers a novel method for intraoperative QA. The method provides a near-real-time independent check against pedicle breach, facilitating revision within the same procedure if necessary and providing more rigorous verification of the surgical product.
Affiliation(s)
- A Uneri
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
32
Tyler DW, Dank JA. Cramér-Rao lower bound calculations for image registration using simulated phenomenology. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2015; 32:1425-1436. [PMID: 26367285 DOI: 10.1364/josaa.32.001425] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
The Cramér-Rao lower bound (CRLB) is a valuable tool to quantify fundamental limits to estimation problems associated with imaging systems, and has been used previously to study image registration performance bounds. Most existing work, however, assumes constant-variance noise; for many applications, noise is signal-dependent. Further, linear filters applied after detection can potentially yield reduced registration error, but prior work has not treated the CRLB behavior caused by filter-imposed noise correlation. We have developed computational methods to efficiently generalize existing image registration CRLB calculations to account for the effect of both signal-dependent noise and linear filtering on the estimation of rigid-translation ("shift") parameters. Because effective use of the CRLB requires radiometrically realistic simulated imagery, we have also developed methods to exploit computer animation software and available optical properties databases to conveniently build and modify synthetic objects for radiometric image simulations using DIRSIG. In this paper, we present the generalized expressions for the rigid shift Fisher information matrix and discuss the properties of the associated CRLB. We discuss the methods used to synthesize object "sets" for use in DIRSIG, and then demonstrate the use of simulated imagery in the CRLB code to choose an error-minimizing filter and optimal integration time for an image-based tracker in the presence of random platform jitter.
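For the constant-variance baseline that this paper generalizes, the CRLB on a shift estimate follows directly from the image's gradient energy. A sketch of the 1D additive-white-Gaussian-noise case (an assumed textbook baseline; the signal-dependent noise and filter-induced correlation treated in the paper are not modeled, and the function name is invented):

```python
import numpy as np

def shift_crlb(image, sigma):
    """Lower bound on the variance of a 1D shift estimate for a signal
    corrupted by additive white Gaussian noise of standard deviation sigma."""
    g = np.gradient(image)                # derivative of the noiseless signal
    fisher = np.sum(g ** 2) / sigma ** 2  # Fisher information for the shift
    return 1.0 / fisher                   # CRLB: var(shift) >= 1 / Fisher
```

Intuitively, sharper edges (larger gradient energy) or lower noise tighten the bound on achievable registration precision.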
33
Uneri A, Stayman JW, De Silva T, Wang AS, Kleinszig G, Vogt S, Khanna AJ, Wolinsky JP, Gokaslan ZL, Siewerdsen JH. Known-Component 3D-2D Registration for Image Guidance and Quality Assurance in Spine Surgery Pedicle Screw Placement. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2015; 9415. [PMID: 26028805 DOI: 10.1117/12.2082210] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
PURPOSE To extend the functionality of radiographic/fluoroscopic imaging systems already within standard spine surgery workflow to: 1) provide guidance of surgical devices analogous to an external tracking system; and 2) provide intraoperative quality assurance (QA) of the surgical product. METHODS Using fast, robust 3D-2D registration in combination with 3D models of known components (surgical devices), the 3D pose determination was solved to relate known components to 2D projection images and 3D preoperative CT in near-real-time. Exact and parametric models of the components were used as input to the algorithm to evaluate the effects of model fidelity. The proposed algorithm employs the covariance matrix adaptation evolution strategy (CMA-ES) to maximize gradient correlation (GC) between measured projections and simulated forward projections of components. Geometric accuracy was evaluated in a spine phantom in terms of target registration error at the tool tip (TREx) and angular deviation (TREφ) from planned trajectory. RESULTS Transpedicle surgical devices (probe tool and spine screws) were successfully guided with TREx < 2 mm and TREφ < 0.5° given projection views separated by at least 30° (easily accommodated on a mobile C-arm). QA of the surgical product based on 3D-2D registration demonstrated the detection of pedicle screw breach with TREx < 1 mm, demonstrating a trend of improved accuracy correlated to the fidelity of the component model employed. CONCLUSIONS 3D-2D registration combined with 3D models of known surgical components provides a novel method for near-real-time guidance and quality assurance using a mobile C-arm without external trackers or fiducial markers. Ongoing work includes determination of optimal views based on component shape and trajectory, improved robustness to anatomical deformation, and expanded preclinical testing in spine and intracranial surgeries.
Affiliation(s)
- A Uneri
- Department of Computer Science, Johns Hopkins Univ., Baltimore, MD
- J W Stayman
- Department of Biomedical Engineering, Johns Hopkins Univ., Baltimore, MD
- T De Silva
- Department of Biomedical Engineering, Johns Hopkins Univ., Baltimore, MD
- A S Wang
- Department of Biomedical Engineering, Johns Hopkins Univ., Baltimore, MD
- G Kleinszig
- Siemens Healthcare XP Division, Erlangen, Germany
- S Vogt
- Siemens Healthcare XP Division, Erlangen, Germany
- A J Khanna
- Department of Orthopaedic Surgery, Johns Hopkins Medical Institute, Baltimore, MD
- J-P Wolinsky
- Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, MD
- Z L Gokaslan
- Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, MD
- J H Siewerdsen
- Department of Computer Science, Johns Hopkins Univ., Baltimore, MD; Department of Biomedical Engineering, Johns Hopkins Univ., Baltimore, MD
|
34
|
Liu WP, Otake Y, Azizian M, Wagner OJ, Sorger JM, Armand M, Taylor RH. 2D-3D radiograph to cone-beam computed tomography (CBCT) registration for C-arm image-guided robotic surgery. Int J Comput Assist Radiol Surg 2014;10:1239-52. [PMID: 25503592 DOI: 10.1007/s11548-014-1132-7] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Received: 04/20/2014] [Accepted: 11/13/2014] [Indexed: 11/25/2022]
Abstract
PURPOSE C-arm radiographs are commonly used for intraoperative image guidance in surgical interventions. Fluoroscopy is a cost-effective real-time modality, although image quality can vary greatly depending on the target anatomy. When cone-beam computed tomography (CBCT) scans are available, 2D-3D registration can link intraoperative radiographs to them for intra-procedural guidance. C-arm radiographs were registered to CBCT scans and used for 3D localization of peritumor fiducials during a minimally invasive thoracic intervention with a da Vinci Si robot. METHODS Intensity-based 2D-3D registration of intraoperative radiographs to CBCT was performed. The feasible range of X-ray projections achievable by a C-arm positioned around a da Vinci Si surgical robot, configured for robotic wedge resection, was determined using phantom models. Experiments were conducted on synthetic phantoms and animals imaged with an OEC 9600 and a Siemens Artis zeego, representing the spectrum of C-arm systems currently available for clinical use. RESULTS The image guidance workflow was feasible using either an optically tracked OEC 9600 or a Siemens Artis zeego C-arm, with an angular difference of Δθ ≈ 30° between the two registered views. The two C-arm systems provided TREmean ≤ 2.5 mm and TREmean ≤ 2.0 mm, respectively (i.e., comparable to standard clinical intraoperative navigation systems). CONCLUSIONS C-arm 3D localization from dual 2D-3D registered radiographs was feasible and applicable for intraoperative image guidance during da Vinci robotic thoracic interventions using the proposed workflow. Tissue deformation and in vivo experiments are required before clinical evaluation of this system.
Affiliation(s)
- Wen Pei Liu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
|
35
|
Uneri A, Wang AS, Otake Y, Kleinszig G, Vogt S, Khanna AJ, Gallia GL, Gokaslan ZL, Siewerdsen JH. Evaluation of low-dose limits in 3D-2D rigid registration for surgical guidance. Phys Med Biol 2014;59:5329-45. [PMID: 25146673 DOI: 10.1088/0031-9155/59/18/5329] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Indexed: 11/12/2022]
|