1
Huang Y, Zhang X, Hu Y, Johnston AR, Jones CK, Zbijewski WB, Siewerdsen JH, Helm PA, Witham TF, Uneri A. Deformable registration of preoperative MR and intraoperative long-length tomosynthesis images for guidance of spine surgery via image synthesis. Comput Med Imaging Graph 2024;114:102365. PMID: 38471330. DOI: 10.1016/j.compmedimag.2024.102365.
Abstract
PURPOSE: Improved integration and use of preoperative imaging during surgery hold significant potential for enhancing treatment planning and instrument guidance through surgical navigation. Despite its prevalent use in diagnostic settings, MR imaging is rarely used for navigation in spine surgery. This study aims to leverage MR imaging for intraoperative visualization of spine anatomy, particularly in cases where CT imaging is unavailable or when minimizing radiation exposure is essential, such as in pediatric surgery.
METHODS: This work presents a method for deformable 3D-2D registration of preoperative MR images with a novel intraoperative long-length tomosynthesis imaging modality (viz., Long-Film [LF]). A conditional generative adversarial network is used to translate MR images to an intermediate bone image suitable for registration, followed by a model-based 3D-2D registration algorithm to deformably map the synthesized images to LF images. The algorithm's performance was evaluated on cadaveric specimens with implanted markers and controlled deformation, and in clinical images of patients undergoing spine surgery as part of a large-scale clinical study on LF imaging.
RESULTS: The proposed method yielded a median 2D projection distance error of 2.0 mm (interquartile range [IQR]: 1.1-3.3 mm) and a 3D target registration error of 1.5 mm (IQR: 0.8-2.1 mm) in cadaver studies. Notably, the multi-scale approach exhibited significantly higher accuracy compared to rigid solutions and effectively managed the challenges posed by piecewise rigid spine deformation. The robustness and consistency of the method were evaluated on clinical images, yielding no outliers on vertebrae without surgical instrumentation and 3% outliers on vertebrae with instrumentation.
CONCLUSIONS: This work constitutes the first reported approach for deformable MR to LF registration based on deep image synthesis. The proposed framework provides access to the preoperative annotations and planning information during surgery and enables surgical navigation within the context of MR images and/or dual-plane LF images.
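As an illustrative aside, the 2D projection distance error used in evaluations like this one can be computed by forward-projecting a known 3D target (e.g., an implanted marker) through the imaging geometry and measuring its detector-plane distance to the corresponding measured point. A minimal sketch, with function and variable names that are assumptions rather than the paper's:

```python
import numpy as np

def projection_distance_error(P, x_world, u_measured, pixel_size_mm=1.0):
    """Project a 3D point through a 3x4 projection matrix P and return
    its 2D detector-plane distance (in mm) to a measured 2D point."""
    u_hom = P @ np.append(x_world, 1.0)   # homogeneous projection
    u = u_hom[:2] / u_hom[2]              # perspective divide
    return pixel_size_mm * float(np.linalg.norm(u - u_measured))
```

The median over many markers and views then summarizes registration accuracy, as reported above.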
Affiliation(s)
- Yixuan Huang, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Xiaoxuan Zhang, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Yicheng Hu, Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- Ashley R Johnston, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Craig K Jones, Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- Wojciech B Zbijewski, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Jeffrey H Siewerdsen, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Timothy F Witham, Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD, United States
- Ali Uneri, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
2
Hooshangnejad H, China D, Huang Y, Zbijewski W, Uneri A, McNutt T, Lee J, Ding K. XIOSIS: an X-ray-based intra-operative image-guided platform for oncology smart material delivery. IEEE Trans Med Imaging 2024;PP:1-1. PMID: 38602853. DOI: 10.1109/tmi.2024.3387830.
Abstract
Image-guided interventional oncology procedures can greatly enhance the outcome of cancer treatment. As an enhancing procedure, oncology smart material delivery can increase the quality, effectiveness, and safety of cancer therapy. However, its effectiveness highly depends on the accuracy of smart material placement; inaccurate placement can lead to adverse side effects and health hazards. Image guidance can considerably improve the safety and robustness of smart material delivery. In this study, we developed XIOSIS, a novel generative deep-learning platform that prioritizes clinical practicality and provides informative intra-operative feedback for image-guided smart material delivery. XIOSIS generates a patient-specific 3D volumetric computed tomography (CT) image from three intraoperative radiographs (X-ray images) acquired by a mobile C-arm during the operation. As the first of its kind, XIOSIS (i) synthesizes the CT from small field-of-view radiographs; (ii) reconstructs the intra-operative spacer distribution; (iii) is robust; and (iv) is equipped with a novel soft-contrast cost function. To demonstrate its effectiveness in providing intra-operative image guidance, we applied XIOSIS to the duodenal hydrogel spacer placement procedure. We evaluated its performance in an image-guided virtual spacer placement and in actual spacer placements in two cadaver specimens. XIOSIS showed clinically acceptable performance, reconstructing the 3D intra-operative hydrogel spacer distribution with an average structural similarity of 0.88, a Dice coefficient of 0.63, and less than 1 cm difference in spacer location relative to the spinal cord.
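For reference, the Dice coefficient cited above (0.63 for the reconstructed spacer distribution) measures volumetric overlap between two binary masks. A minimal NumPy sketch (function name and the empty-mask convention are illustrative choices, not from the paper):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity between two binary masks (e.g., reconstructed vs.
    ground-truth spacer distributions): 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(a).astype(bool)
    b = np.asarray(b).astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```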
3
Johnston A, Mahesh M, Uneri A, Rypinski TA, Boone JM, Siewerdsen JH. Objective image quality assurance in cone-beam CT: Test methods, analysis, and workflow in longitudinal studies. Med Phys 2024;51:2424-2443. PMID: 38354310. DOI: 10.1002/mp.16983.
Abstract
BACKGROUND: Standards for image quality evaluation in multi-detector CT (MDCT) and cone-beam CT (CBCT) are evolving to keep pace with technological advances. A clear need is emerging for methods that facilitate rigorous quality assurance (QA) with up-to-date metrology and streamlined workflow suitable to a range of MDCT and CBCT systems.
PURPOSE: To evaluate the feasibility and workflow associated with image quality (IQ) assessment in longitudinal studies for MDCT and CBCT with a single test phantom and semiautomated analysis of objective, quantitative IQ metrology.
METHODS: A test phantom (Corgi™ Phantom, The Phantom Lab, Greenwich, New York, USA) was used in monthly IQ testing over the course of 1 year for three MDCT scanners (one of which presented helical and volumetric scan modes) and four CBCT scanners. Semiautomated software analyzed image uniformity, linearity, contrast, noise, contrast-to-noise ratio (CNR), 3D noise-power spectrum (NPS), modulation transfer function (MTF) in axial and oblique directions, and cone-beam artifact magnitude. The workflow was evaluated using methods adapted from systems/industrial engineering, including value stream process modeling (VSPM), standard work layout (SWL), and standard work control charts (SWCT) to quantify and optimize test methodology in routine practice. The completeness and consistency of DICOM data from each system was also evaluated.
RESULTS: Quantitative IQ metrology provided valuable insight in longitudinal QA, with metrics such as NPS and MTF providing insight on root cause for various forms of system failure (for example, detector calibration and geometric calibration). Monthly constancy testing showed variations in IQ test metrics owing to system performance as well as phantom setup and provided initial estimates of upper and lower control limits appropriate to QA action levels. Rigorous evaluation of QA workflow identified methods to reduce total cycle time to ∼10 min for each system, viz., use of a single phantom configuration appropriate to all scanners and Head or Body scan protocols. Numerous gaps in the completeness and consistency of DICOM data were observed for CBCT systems.
CONCLUSION: An IQ phantom and test methodology were found to be suitable for QA of MDCT and CBCT systems, with streamlined workflow appropriate to busy clinical settings.
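Of the metrics listed, CNR is the simplest to reproduce from two regions of interest in a phantom image. A hedged sketch (the phantom analysis software's actual ROI handling may differ):

```python
import numpy as np

def contrast_to_noise_ratio(roi_insert, roi_background):
    """CNR from two ROIs: mean contrast between insert and background,
    divided by the background standard deviation (sample estimate)."""
    contrast = np.asarray(roi_insert).mean() - np.asarray(roi_background).mean()
    noise = np.asarray(roi_background).std(ddof=1)
    return abs(contrast) / noise
```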
Affiliation(s)
- Ashley Johnston, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Mahadevappa Mahesh, Department of Radiology, Johns Hopkins University, Baltimore, Maryland, USA
- Ali Uneri, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Tatiana A Rypinski, Department of Imaging Physics, The University of Texas M. D. Anderson Cancer Center, Houston, Texas, USA
- John M Boone, Department of Radiology, University of California - Davis, Davis, California, USA
- Jeffrey H Siewerdsen, Department of Biomedical Engineering and Department of Radiology, Johns Hopkins University, Baltimore, Maryland, USA; Department of Imaging Physics, The University of Texas M. D. Anderson Cancer Center, Houston, Texas, USA
4
China D, Feng Z, Hooshangnejad H, Sforza D, Vagdargi P, Bell MAL, Uneri A, Sisniega A, Ding K. FLEX: FLexible Transducer With External Tracking for Ultrasound Imaging With Patient-Specific Geometry Estimation. IEEE Trans Biomed Eng 2024;71:1298-1307. PMID: 38048239. PMCID: PMC10998498. DOI: 10.1109/tbme.2023.3333216.
Abstract
Flexible array transducers can adapt to patient-specific geometries during real-time ultrasound (US) image-guided therapy monitoring, making the system radiation-free and less user-dependent. Precise estimation of the flexible transducer's geometry is crucial for the delay-and-sum (DAS) beamforming algorithm to reconstruct B-mode US images. The primary innovation of this research is a system named FLexible transducer with EXternal tracking (FLEX) that estimates the position of each element of the flexible transducer and reconstructs precise US images. FLEX utilizes customized optical markers and a tracker to monitor the probe's geometry, employing a polygon fitting algorithm to estimate the position and azimuth angle of each transducer element. The traditional DAS algorithm then computes delays from the tracked element positions and reconstructs US images from radio-frequency (RF) channel data. The proposed method was evaluated on phantoms and cadaveric specimens, demonstrating its clinical feasibility. Deviations in tracked probe geometry compared to ground truth were minimal: 0.50 ± 0.29 mm for the CIRS phantom, 0.54 ± 0.35 mm for the deformable phantom, and 0.36 ± 0.24 mm for the cadaveric specimen. Reconstructing the US image using tracked probe geometry significantly outperformed the untracked geometry, as indicated by a Dice score of 95.1 ± 3.3% versus 62.3 ± 9.2% for the CIRS phantom. The proposed method achieved high accuracy (<0.5 mm error) in tracking element positions for the various random curvatures applicable to clinical deployment. The evaluation results show that the radiation-free proposed method can effectively reconstruct US images and assist in monitoring image-guided therapy with minimal user dependency.
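The role of the tracked element positions in DAS beamforming can be illustrated with the basic time-of-flight delay computation. A simplified, receive-only sketch (function names, nearest-sample interpolation, and the nominal 1540 m/s sound speed are assumptions for illustration):

```python
import numpy as np

def das_delays(element_xy, focus_xy, c=1540.0):
    """One-way receive delays (s) from an image point to each tracked
    transducer element, for delay-and-sum beamforming."""
    d = np.linalg.norm(np.asarray(element_xy) - np.asarray(focus_xy), axis=1)
    return d / c

def das_beamform_point(rf, fs, element_xy, focus_xy, c=1540.0):
    """Coherently sum RF channel data (channels x samples, sampled at fs)
    at the nearest-sample delay for each element."""
    idx = np.clip(np.round(das_delays(element_xy, focus_xy, c) * fs).astype(int),
                  0, rf.shape[1] - 1)
    return float(rf[np.arange(rf.shape[0]), idx].sum())
```

When the element positions come from the external tracker rather than an assumed rigid geometry, these delays remain correct for arbitrary probe curvature, which is the point of the FLEX system.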
5
Butz I, Fernandez M, Uneri A, Theodore N, Anderson WS, Siewerdsen JH. Performance assessment of surgical tracking systems based on statistical process control and longitudinal QA. Comput Assist Surg (Abingdon) 2023;28:2275522. PMID: 37942523. DOI: 10.1080/24699322.2023.2275522.
Abstract
A system for performance assessment and quality assurance (QA) of surgical trackers is reported, based on principles of geometric accuracy and statistical process control (SPC) for routine longitudinal testing. A simple QA test phantom was designed in which the number and distribution of registration fiducials were determined from analytical models for target registration error (TRE). A tracker testbed was configured with open-source software for measurement of a TRE-based accuracy metric (ε) and jitter (J). Six trackers were tested: 2 electromagnetic (EM: Aurora) and 4 infrared (IR: 1 Spectra, 1 Vega, and 2 Vicra), all from NDI (Waterloo, ON). Phase I SPC analysis of the Shewhart mean (x̄) and standard deviation (s) determined system control limits. Phase II involved weekly QA of each system for up to 32 weeks and identified Pass, Note, Alert, and Failure action rules. The process permitted QA in <1 min. Phase I control limits were established for all trackers: EM trackers exhibited higher upper control limits than IR trackers in ε (EM: x̄ ≈ 2.8-3.3 mm; IR: x̄ ≈ 1.6-2.0 mm) and jitter (EM: x̄ ≈ 0.30-0.33 mm; IR: x̄ ≈ 0.08-0.10 mm), and older trackers showed evidence of degradation, e.g. higher jitter for the older Vicra (p < .05). Phase II longitudinal tests yielded 676 outcomes, in which a total of 4 Failures were noted: 3 resolved by intervention (metal interference for EM trackers) and 1 owing to restrictive control limits for a new system (Vega). Weekly tests also yielded 40 Notes and 16 Alerts, each spontaneously resolved in subsequent monitoring.
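The Phase I control limits follow standard Shewhart practice: a center line at the baseline mean and limits at ±k standard deviations (typically k = 3). A minimal sketch, with the lower limit clamped at zero since the QA metrics are nonnegative (a convention assumed here, not stated in the abstract):

```python
import statistics

def shewhart_limits(baseline, k=3.0):
    """Center line and k-sigma control limits from Phase I baseline
    measurements of a QA metric (e.g., weekly tracker accuracy)."""
    xbar = statistics.fmean(baseline)
    s = statistics.stdev(baseline)
    return {"center": xbar, "ucl": xbar + k * s, "lcl": max(0.0, xbar - k * s)}
```

Phase II monitoring then compares each weekly measurement against these limits to trigger the Pass/Note/Alert/Failure rules.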
Affiliation(s)
- I Butz, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- M Fernandez, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- A Uneri, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- N Theodore, Department of Biomedical Engineering and Department of Neurology and Neurosurgery, Johns Hopkins University, Baltimore, MD, USA
- W S Anderson, Department of Biomedical Engineering and Department of Neurology and Neurosurgery, Johns Hopkins University, Baltimore, MD, USA
- J H Siewerdsen, Department of Biomedical Engineering and Department of Neurology and Neurosurgery, Johns Hopkins University, Baltimore, MD, USA; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
6
Mekki L, Sheth NM, Vijayan RC, Rohleder M, Sisniega A, Kleinszig G, Vogt S, Kunze H, Osgood GM, Siewerdsen JH, Uneri A. Surgical navigation for guidewire placement from intraoperative fluoroscopy in orthopaedic surgery. Phys Med Biol 2023;68:215001. PMID: 37774711. DOI: 10.1088/1361-6560/acfec4.
Abstract
Objective: Surgical guidewires are commonly used in placing fixation implants to stabilize fractures. Accurate positioning of these instruments is challenged by difficulties in 3D reckoning from 2D fluoroscopy. This work aims to enhance accuracy and reduce exposure times by providing 3D navigation for guidewire placement from as few as two fluoroscopic images.
Approach: Our approach combines machine learning-based segmentation with the geometric model of the imager to determine the 3D poses of guidewires. Instrument tips are encoded as individual keypoints, and the segmentation masks are processed to estimate the trajectory. Correspondence between detections in multiple views is established using the pre-calibrated system geometry, and the corresponding features are backprojected to obtain the 3D pose. Guidewire 3D directions were computed using both an analytical and an optimization-based method. The complete approach was evaluated in cadaveric specimens with respect to potential confounding effects from the imaging geometry and radiographic scene clutter due to other instruments.
Main results: The detection network identified the guidewire tips within 2.2 mm and guidewire directions within 1.1°, in 2D detector coordinates. Feature correspondence rejected false detections, particularly in images with other instruments, to achieve 83% precision and 90% recall. Estimating the 3D direction via numerical optimization showed added robustness to guidewires aligned with the gantry rotation plane. Guidewire tips and directions were localized in 3D world coordinates with a median accuracy of 1.8 mm and 2.7°, respectively.
Significance: The paper reports a new method for automatic 2D detection and 3D localization of guidewires from pairs of fluoroscopic images. Localized guidewires can be virtually overlaid on the patient's pre-operative 3D scan during the intervention. Accurate pose determination for multiple guidewires from two images offers to reduce radiation dose by minimizing the need for repeated imaging and provides quantitative feedback prior to implant placement.
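The backprojection step (recovering a 3D tip position from corresponding 2D detections in two calibrated views) amounts to finding the point nearest both backprojected rays. A geometric sketch using the midpoint of the common perpendicular between the two rays; the paper's exact solver may differ, and all names are illustrative:

```python
import numpy as np

def triangulate_midpoint(s1, d1, s2, d2):
    """Return the 3D point closest to two rays, each given by a source
    position s and a direction d toward a 2D detection on the detector."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = s1 - s2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # near zero for (near-)parallel rays
    t1 = (b * e - c * d) / denom   # closest-approach parameters
    t2 = (a * e - b * d) / denom
    return 0.5 * ((s1 + t1 * d1) + (s2 + t2 * d2))
```

For exactly intersecting rays the midpoint is the intersection; for noisy detections it splits the residual gap between the rays.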
Affiliation(s)
- L Mekki, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- N M Sheth, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- R C Vijayan, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- M Rohleder, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- A Sisniega, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- S Vogt, Siemens Healthineers, Erlangen, Germany
- H Kunze, Siemens Healthineers, Erlangen, Germany
- G M Osgood, Department of Orthopaedic Surgery, Johns Hopkins Medicine, Baltimore, MD, United States of America
- J H Siewerdsen, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States of America
- A Uneri, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
7
Zhang X, Sisniega A, Zbijewski WB, Lee J, Jones CK, Wu P, Han R, Uneri A, Vagdargi P, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Combining physics-based models with deep learning image synthesis and uncertainty in intraoperative cone-beam CT of the brain. Med Phys 2023;50:2607-2624. PMID: 36906915. PMCID: PMC10175241. DOI: 10.1002/mp.16351.
Abstract
BACKGROUND: Image-guided neurosurgery requires high localization and registration accuracy to enable effective treatment and avoid complications. However, accurate neuronavigation based on preoperative magnetic resonance (MR) or computed tomography (CT) images is challenged by brain deformation occurring during the surgical intervention.
PURPOSE: To facilitate intraoperative visualization of brain tissues and deformable registration with preoperative images, a 3D deep learning (DL) reconstruction framework (termed DL-Recon) was proposed for improved intraoperative cone-beam CT (CBCT) image quality.
METHODS: The DL-Recon framework combines physics-based models with deep learning CT synthesis and leverages uncertainty information to promote robustness to unseen features. A 3D generative adversarial network (GAN) with a conditional loss function modulated by aleatoric uncertainty was developed for CBCT-to-CT synthesis. Epistemic uncertainty of the synthesis model was estimated via Monte Carlo (MC) dropout. Using spatially varying weights derived from epistemic uncertainty, the DL-Recon image combines the synthetic CT with an artifact-corrected filtered back-projection (FBP) reconstruction. In regions of high epistemic uncertainty, DL-Recon includes greater contribution from the FBP image. Twenty paired real CT and simulated CBCT images of the head were used for network training and validation, and experiments evaluated the performance of DL-Recon on CBCT images containing simulated and real brain lesions not present in the training data. Performance among learning- and physics-based methods was quantified in terms of structural similarity (SSIM) of the resulting image to diagnostic CT and Dice similarity metric (DSC) in lesion segmentation compared to ground truth. A pilot study was conducted involving seven subjects with CBCT images acquired during neurosurgery to assess the feasibility of DL-Recon in clinical data.
RESULTS: CBCT images reconstructed via FBP with physics-based corrections exhibited the usual challenges to soft-tissue contrast resolution due to image non-uniformity, noise, and residual artifacts. GAN synthesis improved image uniformity and soft-tissue visibility but was subject to error in the shape and contrast of simulated lesions that were unseen in training. Incorporation of aleatoric uncertainty in the synthesis loss improved estimation of epistemic uncertainty, with variable brain structures and unseen lesions exhibiting higher epistemic uncertainty. The DL-Recon approach mitigated synthesis errors while maintaining improvement in image quality, yielding 15%-22% increase in SSIM (image appearance compared to diagnostic CT) and up to 25% increase in DSC in lesion segmentation compared to FBP. Clear gains in visual image quality were also observed in real brain lesions and in clinical CBCT images.
CONCLUSIONS: DL-Recon leveraged uncertainty estimation to combine the strengths of DL and physics-based reconstruction and demonstrated substantial improvements in the accuracy and quality of intraoperative CBCT. The improved soft-tissue contrast resolution could facilitate visualization of brain structures and support deformable registration with preoperative images, further extending the utility of intraoperative CBCT in image-guided neurosurgery.
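The uncertainty-weighted combination at the core of the approach can be sketched as a voxel-wise convex blend: where the synthesis network is confident, the output follows the synthetic CT; where epistemic uncertainty is high, it falls back to the physics-based FBP image. The weighting function below is an illustrative choice, not the paper's exact formulation:

```python
import numpy as np

def dl_recon_combine(synth_ct, fbp, epistemic_var, beta=1.0):
    """Blend a GAN-synthesized CT with an artifact-corrected FBP image
    using spatially varying weights from epistemic uncertainty.
    w -> 1 where the network is confident, w -> 0 where it is not."""
    w = 1.0 / (1.0 + beta * np.asarray(epistemic_var))
    return w * np.asarray(synth_ct) + (1.0 - w) * np.asarray(fbp)
```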
Affiliation(s)
- Xiaoxuan Zhang, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Alejandro Sisniega, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Wojciech B. Zbijewski, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Junghoon Lee, Department of Radiation Oncology, Johns Hopkins University, Baltimore, MD 21218, USA
- Craig K. Jones, Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Pengwei Wu, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Runze Han, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Ali Uneri, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Prasad Vagdargi, Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Mark Luciano, Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- William S. Anderson, Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- Jeffrey H. Siewerdsen, Department of Biomedical Engineering and Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030
8
Vijayan RC, Venkataraman K, Wei J, Sheth NM, Shafiq B, Siewerdsen JH, Zbijewski W, Li G, Cleary K, Uneri A. Multi-Body 3D-2D Registration for Robot-Assisted Joint Reduction: Preclinical Evaluation in the Ankle Syndesmosis. Proc SPIE Int Soc Opt Eng 2023;12466:124661F. PMID: 37143861. PMCID: PMC10155864. DOI: 10.1117/12.2654481.
Abstract
Purpose: Existing methods to improve the accuracy of tibiofibular joint reduction present workflow challenges, high radiation exposure, and a lack of accuracy and precision, leading to poor surgical outcomes. To address these limitations, we propose a method to perform robot-assisted joint reduction using intraoperative imaging to align the dislocated fibula to a target pose relative to the tibia.
Methods: The approach (1) localizes the robot via 3D-2D registration of a custom plate adapter attached to its end effector, (2) localizes the tibia and fibula using multi-body 3D-2D registration, and (3) drives the robot to reduce the dislocated fibula according to the target plan. The custom robot adapter was designed to interface directly with the fibular plate while presenting radiographic features to aid registration. Registration accuracy was evaluated on a cadaveric ankle specimen, and the feasibility of robotic guidance was assessed by manipulating a dislocated fibula in a cadaver ankle.
Results: Using standard AP and mortise radiographic views, registration errors were measured to be less than 1 mm and 1° for the robot adapter and the ankle bones. Experiments in a cadaveric specimen revealed up to 4 mm deviations from the intended path, which were reduced to <2 mm using corrective actions guided by intraoperative imaging and 3D-2D registration.
Conclusions: Preclinical studies suggest that significant robot flex and tibial motion occur during fibula manipulation, motivating the use of the proposed method to dynamically correct the robot trajectory. Accurate robot registration was achieved via fiducials embedded within the custom design. Future work will evaluate the approach on a custom radiolucent robot design currently under construction and verify the solution on additional cadaveric specimens.
Affiliation(s)
- R. C. Vijayan, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- K. Venkataraman, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- J. Wei, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- N. M. Sheth, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- B. Shafiq, Department of Orthopedic Surgery, Johns Hopkins Medicine, Baltimore, MD
- J. H. Siewerdsen, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD; Department of Imaging Physics, The University of Texas M. D. Anderson Cancer Center, Houston, TX
- W. Zbijewski, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- G. Li, Children's National Hospital, Washington, DC
- K. Cleary, Children's National Hospital, Washington, DC
- A. Uneri, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD (phone: +1-276-614-7743; website: carnegie.jhu.edu)
9
Vijayan R, Sheth N, Mekki L, Lu A, Uneri A, Sisniega A, Magaraggia J, Kleinszig G, Vogt S, Thiboutot J, Lee H, Yarmus L, Siewerdsen JH. 3D-2D image registration in the presence of soft-tissue deformation in image-guided transbronchial interventions. Phys Med Biol 2022;68. PMID: 36317269. DOI: 10.1088/1361-6560/ac9e3c.
Abstract
Purpose: Target localization in pulmonary interventions (e.g. transbronchial biopsy of a lung nodule) is challenged by deformable motion and may benefit from fluoroscopic overlay of the target to provide accurate guidance. We present and evaluate a 3D-2D image registration method for fluoroscopic overlay in the presence of tissue deformation using a multi-resolution/multi-scale (MRMS) framework with an objective function that drives registration primarily by soft-tissue image gradients.
Methods: The MRMS method registers 3D cone-beam CT to 2D fluoroscopy without gating of respiratory phase by coarse-to-fine resampling and global-to-local rescaling about target regions-of-interest. A variation of the gradient orientation (GO) similarity metric (denoted GO') was developed to downweight bone gradients and drive registration via soft-tissue gradients. Performance was evaluated in terms of projection distance error at isocenter (PDEiso). Phantom studies determined nominal algorithm parameters and capture range. Preclinical studies used a freshly deceased, ventilated porcine specimen to evaluate performance in the presence of real tissue deformation and a broad range of 3D-2D image mismatch.
Results: Nominal algorithm parameters were identified that provided robust performance over a broad range of motion (0-20 mm), including an adaptive parameter selection technique to accommodate unknown mismatch in respiratory phase. The GO' metric yielded median PDEiso = 1.2 mm, compared to 6.2 mm for conventional GO. Preclinical studies with real lung deformation demonstrated median PDEiso = 1.3 mm with MRMS + GO' registration, compared to 2.2 mm with a conventional transform. Runtime was 26 s and can be reduced to 2.5 s given a prior registration within ∼5 mm as initialization.
Conclusions: MRMS registration via soft-tissue gradients achieved accurate fluoroscopic overlay in the presence of deformable lung motion. By driving registration via soft-tissue image gradients, the method avoided false local minima presented by bones and was robust to a wide range of motion magnitude.
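The gradient orientation family of metrics scores alignment by the agreement of gradient directions rather than raw intensities. A simplified 2D sketch, using a heuristic downweighting of strong (bone-like) gradients as a stand-in for the GO' weighting described above (the paper's actual weighting scheme is not reproduced here):

```python
import numpy as np

def gradient_orientation_similarity(fixed, moving):
    """Weighted mean squared cosine of the angle between gradient
    vectors of two 2D images; strong gradients in the fixed image
    (bone-like edges) are heuristically downweighted."""
    gx_f, gy_f = np.gradient(np.asarray(fixed, dtype=float))
    gx_m, gy_m = np.gradient(np.asarray(moving, dtype=float))
    dot = gx_f * gx_m + gy_f * gy_m
    mag = np.hypot(gx_f, gy_f) * np.hypot(gx_m, gy_m)
    eps = 1e-12
    cos2 = (dot / (mag + eps)) ** 2          # 1 = parallel, 0 = orthogonal
    m = np.hypot(gx_f, gy_f)
    weights = 1.0 / (1.0 + (m / (m.mean() + eps)) ** 2)  # soft-tissue emphasis
    return float(np.average(cos2, weights=weights))
```

An optimizer would maximize this score over the 3D-2D transform parameters, with projections of the CT standing in for `moving`.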
Collapse
Affiliation(s)
- R Vijayan
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- N Sheth
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- L Mekki
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- A Lu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- S Vogt
- Siemens Healthineers, Erlangen, Germany
- J Thiboutot
- Division of Pulmonary and Critical Care Medicine, Johns Hopkins Medical Institution, Baltimore, MD, United States of America
- H Lee
- Division of Pulmonary and Critical Care Medicine, Johns Hopkins Medical Institution, Baltimore, MD, United States of America
- L Yarmus
- Division of Pulmonary and Critical Care Medicine, Johns Hopkins Medical Institution, Baltimore, MD, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States of America
10
Huang Y, Jones CK, Zhang X, Johnston A, Waktola S, Aygun N, Witham TF, Bydon A, Theodore N, Helm PA, Siewerdsen JH, Uneri A. Multi-perspective region-based CNNs for vertebrae labeling in intraoperative long-length images. Comput Methods Programs Biomed 2022; 227:107222. [PMID: 36370597 DOI: 10.1016/j.cmpb.2022.107222] [Received: 06/17/2022] [Revised: 10/31/2022] [Accepted: 11/02/2022] [Indexed: 06/16/2023]
Abstract
PURPOSE Effective aggregation of intraoperative x-ray images that capture the patient anatomy from multiple view-angles has the potential to enable and improve automated image analysis that can be readily performed during surgery. We present multi-perspective region-based neural networks that leverage knowledge of the imaging geometry for automatic vertebrae labeling in Long-Film images - a novel tomographic imaging modality with an extended field-of-view for spine imaging. METHODS A multi-perspective network architecture was designed to exploit small view-angle disparities produced by a multi-slot collimator and consolidate information from overlapping image regions. A second network incorporates large view-angle disparities to jointly perform labeling on images from multiple views (viz., AP and lateral). A recurrent module incorporates contextual information and enforces anatomical order for the detected vertebrae. The three modules are combined to form the multi-view multi-slot (MVMS) network for labeling vertebrae using images from all available perspectives. The network was trained on images synthesized from 297 CT images and tested on 50 AP and 50 lateral Long-Film images acquired from 13 cadaveric specimens. Labeling performance of the multi-perspective networks was evaluated with respect to the number of vertebrae appearances and presence of surgical instrumentation. RESULTS The MVMS network achieved an F1 score of >96% and an average vertebral localization error of 3.3 mm, with 88.3% labeling accuracy on both AP and lateral images (15.5% and 35.0% higher than conventional Faster R-CNN on AP and lateral views, respectively). Aggregation of multiple appearances of the same vertebra using the multi-slot network significantly improved the labeling accuracy (p < 0.05). Using the multi-view network, labeling accuracy on the more challenging lateral views was improved to the same level as that of the AP views.
The approach demonstrated robustness to the presence of surgical instrumentation, commonly encountered in intraoperative images, and achieved comparable performance in images with and without instrumentation (88.9% vs. 91.2% labeling accuracy). CONCLUSION The MVMS network demonstrated effective multi-perspective aggregation, providing means for accurate, automated vertebrae labeling during spine surgery. The algorithms may be generalized to other imaging tasks and modalities that involve multiple views with view-angle disparities (e.g., bi-plane radiography). Predicted labels can help avoid adverse events during surgery (e.g., wrong-level surgery), establish correspondence with labels in preoperative modalities to facilitate image registration, and enable automated measurement of spinal alignment metrics for intraoperative assessment of spinal curvature.
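The role of the ordering constraint (labels must increase monotonically down the image) can be illustrated with a small dynamic program over per-detection label scores. This is a hypothetical stand-in for the recurrent module described above, not the network's actual mechanism:

```python
import numpy as np

def assign_ordered_labels(scores):
    """Assign one label per detection so that labels strictly increase
    cranio-caudally, maximizing the total score (Viterbi-style DP).

    scores: (n_detections, n_labels) array, detections sorted top-to-bottom;
    requires n_detections <= n_labels. Illustrative only.
    """
    n, m = scores.shape
    neg = -np.inf
    dp = np.full((n, m), neg)    # dp[i, j]: best score ending with label j
    back = np.zeros((n, m), dtype=int)
    dp[0] = scores[0]
    for i in range(1, n):
        # best[j] = max over labels k < j of dp[i-1, k] (running prefix max)
        best = np.full(m, neg)
        arg = np.zeros(m, dtype=int)
        run_best, run_arg = neg, 0
        for j in range(m):
            best[j], arg[j] = run_best, run_arg
            if dp[i - 1, j] > run_best:
                run_best, run_arg = dp[i - 1, j], j
        dp[i] = best + scores[i]
        back[i] = arg
    labels = [int(np.argmax(dp[-1]))]
    for i in range(n - 1, 0, -1):       # backtrack the optimal sequence
        labels.append(int(back[i, labels[-1]]))
    return labels[::-1]
```

A per-detection argmax can produce anatomically impossible sequences (e.g., the same label twice); the DP trades a little per-detection score for a globally consistent labeling, which is the intuition behind enforcing anatomical order.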
Affiliation(s)
- Y Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States
- C K Jones
- Department of Computer Science, Johns Hopkins University, Baltimore MD, United States
- X Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States
- A Johnston
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States
- S Waktola
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States
- N Aygun
- Department of Radiology, Johns Hopkins Medicine, Baltimore MD, United States
- T F Witham
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD, United States
- A Bydon
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD, United States
- N Theodore
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD, United States
- P A Helm
- Medtronic, Littleton MA, United States
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States; Department of Computer Science, Johns Hopkins University, Baltimore MD, United States; Department of Radiology, Johns Hopkins Medicine, Baltimore MD, United States; Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD, United States; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston TX, United States
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States
11
Sheth N, Vagdargi P, Sisniega A, Uneri A, Osgood G, Siewerdsen JH. Preclinical evaluation of a prototype freehand drill video guidance system for orthopedic surgery. J Med Imaging (Bellingham) 2022; 9:045004. [PMID: 36046335 PMCID: PMC9411797 DOI: 10.1117/1.jmi.9.4.045004] [Received: 01/28/2022] [Accepted: 08/09/2022] [Indexed: 08/28/2023] Open
Abstract
Purpose: Internal fixation of pelvic fractures is a challenging task requiring the placement of instrumentation within complex three-dimensional bone corridors, typically guided by fluoroscopy. We report a system for two- and three-dimensional guidance using a drill-mounted video camera and fiducial markers with evaluation in first preclinical studies. Approach: The system uses a camera affixed to a surgical drill and multimodality (optical and radio-opaque) markers for real-time trajectory visualization in fluoroscopy and/or CT. Improvements to a previously reported prototype include hardware components (mount, camera, and fiducials) and software (including a system for detecting marker perturbation) to address practical requirements necessary for translation to clinical studies. Phantom and cadaver experiments were performed to quantify the accuracy of video-fluoroscopy and video-CT registration, the ability to detect marker perturbation, and the conformance in placing guidewires along realistic pelvic trajectories. The performance was evaluated in terms of geometric accuracy and conformance within bone corridors. Results: The studies demonstrated successful guidewire delivery in a cadaver, with a median entry point error of 1.00 mm (1.56 mm IQR) and median angular error of 1.94 deg (1.23 deg IQR). Such accuracy was sufficient to guide K-wire placement through five of the six trajectories investigated with a strong level of conformance within bone corridors. The sixth case demonstrated a cortical breach due to extrema in the registration error. The system was able to detect marker perturbations and alert the user to potential registration issues. Feasible workflows were identified for orthopedic-trauma scenarios involving emergent cases (with no preoperative imaging) or cases with preoperative CT. Conclusions: A prototype system for guidewire placement was developed providing guidance that is potentially compatible with orthopedic-trauma workflow. 
First preclinical (cadaver) studies demonstrated accurate guidance of K-wire placement in pelvic bone corridors and the ability to automatically detect perturbations that degrade registration accuracy. The preclinical prototype demonstrated performance and utility supporting translation to clinical studies.
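The reported entry-point and angular errors follow standard geometric definitions, which can be computed as below. This is a generic sketch of those definitions, not code from the study:

```python
import numpy as np

def trajectory_errors(plan_entry, plan_dir, actual_entry, actual_dir):
    """Entry-point error (same units as the points, e.g. mm) and angular
    error (degrees) between a planned and a delivered trajectory.

    Directions need not be normalized; the angle is taken between the
    undirected trajectory axes (hence the abs on the dot product).
    """
    entry_err = float(np.linalg.norm(
        np.asarray(actual_entry, float) - np.asarray(plan_entry, float)))
    u = np.asarray(plan_dir, float)
    u = u / np.linalg.norm(u)
    v = np.asarray(actual_dir, float)
    v = v / np.linalg.norm(v)
    ang_err = float(np.degrees(np.arccos(np.clip(abs(np.dot(u, v)), 0.0, 1.0))))
    return entry_err, ang_err
```

Medians and interquartile ranges over a set of trajectories (as reported above) then summarize the distribution of these two per-trajectory quantities.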
Affiliation(s)
- Niral Sheth
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Prasad Vagdargi
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
- Alejandro Sisniega
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Gregory Osgood
- Johns Hopkins Medicine, Department of Orthopedic Surgery, Baltimore, Maryland, United States
- Jeffrey H. Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
12
Zhang X, Uneri A, Huang Y, Jones CK, Witham TF, Helm PA, Siewerdsen JH. Deformable 3D-2D image registration and analysis of global spinal alignment in long-length intraoperative spine imaging. Med Phys 2022; 49:5715-5727. [PMID: 35762028 DOI: 10.1002/mp.15819] [Received: 02/15/2022] [Revised: 06/03/2022] [Accepted: 06/13/2022] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND Spinal deformation during surgical intervention (caused by patient positioning and/or correction of malalignment) confounds conventional navigation due to assumptions of rigid transformation. Moreover, the ability to accurately quantify spinal alignment in the operating room would provide assessment of the surgical product via metrics that correlate with clinical outcome. PURPOSE A method for deformable 3D-2D registration of preoperative CT to intraoperative long-length tomosynthesis images is reported for accurate 3D evaluation of device placement in the presence of spinal deformation and automated evaluation of global spinal alignment (GSA). METHODS Long-length tomosynthesis ("Long Film", LF) images were acquired using an O-arm™ imaging system (Medtronic, Minneapolis USA). A deformable 3D-2D patient registration was developed using multi-scale masking (proceeding from the full-length image to local subvolumes about each vertebra) to transform vertebral labels and planning information from preoperative CT to the LF images. Automatic measurement of GSA [Main Thoracic Kyphosis (MThK) and Lumbar Lordosis (LL)] was obtained using a spline fit to registered labels. The "Known-Component Registration" (KC-Reg) method for device registration was adapted to the multi-scale process for 3D device localization from orthogonal LF images. The multi-scale framework was evaluated using a deformable spine phantom in which pedicle screws were inserted, and deformations were induced over a range in LL of ∼25-80°. Further validation was carried out in a cadaver study with implanted pedicle screws and a similar range of spinal deformation. The accuracy of patient and device registration was evaluated in terms of 3D translational error and target registration error (TRE), respectively, and the accuracy of automatic GSA measurements was compared to manual annotation.
RESULTS Phantom studies demonstrated accurate registration via the multi-scale framework for all vertebral levels in both the neutral and deformed spine: median (interquartile range, IQR) patient registration error was 1.1 mm (0.7-1.9 mm IQR). Automatic measures of MThK and LL agreed with manual delineation within -1.1° ± 2.2° and 0.7° ± 2.0° (mean and standard deviation), respectively. Device registration error was 0.7 mm (0.4-1.0 mm IQR) at the screw tip and 0.9° (1.0°-1.5°) about the screw trajectory. Deformable 3D-2D registration significantly outperformed conventional rigid registration (p < 0.05), which exhibited device registration error of 2.1 mm (0.8-4.1 mm) and 4.1° (1.2°-9.5°). Cadaver studies verified performance under realistic conditions, demonstrating patient registration error of 1.6 mm (0.9-2.1 mm); MThK within -4.2° ± 6.8° and LL within 1.7° ± 3.5°; and device registration error of 0.8 mm (0.5-1.9 mm) and 0.7° (0.4°-1.2°) for the multi-scale deformable method, compared to 2.5 mm (1.0-7.9 mm) and 2.3° (1.6°-8.1°) for rigid registration (p < 0.05). CONCLUSION The deformable 3D-2D registration framework leverages long-length intraoperative imaging to achieve accurate patient and device registration over extended lengths of the spine (up to 64 cm) even with strong anatomical deformation. The method offers a new means for quantitative validation of spinal correction (intraoperative GSA measurement) and 3D verification of device placement in comparison to preoperative images and planning data.
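Automatic GSA measurement fits a curve to registered vertebral labels and reads alignment angles from its tangents. The sketch below is a simplified analogue: the study used a spline fit, while a polynomial fit stands in here, and the centroid format and level indices are illustrative assumptions.

```python
import numpy as np

def alignment_angle(centroids, level_a, level_b):
    """Angle (degrees) between curve tangents at two vertebral levels,
    from a polynomial fit to vertebral centroids in the sagittal plane.

    centroids: (n, 2) array of (z, y) positions, cranial to caudal.
    A simplified analogue of automatic LL/MThK measurement.
    """
    z, y = np.asarray(centroids, float).T
    coeffs = np.polyfit(z, y, deg=min(4, len(z) - 1))  # fit the sagittal curve
    slopes = np.polyval(np.polyder(coeffs), [z[level_a], z[level_b]])
    angles = np.degrees(np.arctan(slopes))             # tangent angles
    return float(abs(angles[0] - angles[1]))
```

For lordosis/kyphosis-style measures, `level_a` and `level_b` would be the end vertebrae of the segment of interest (e.g., L1 and S1 for LL).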
Affiliation(s)
- Xiaoxuan Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- Yixuan Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- Craig K Jones
- The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD
- Timothy F Witham
- Department of Neurosurgery, Johns Hopkins University, Baltimore, MD
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD; The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD; Department of Neurosurgery, Johns Hopkins University, Baltimore, MD
13
Han R, Jones CK, Lee J, Zhang X, Wu P, Vagdargi P, Uneri A, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Joint synthesis and registration network for deformable MR-CBCT image registration for neurosurgical guidance. Phys Med Biol 2022; 67:10.1088/1361-6560/ac72ef. [PMID: 35609586 PMCID: PMC9801422 DOI: 10.1088/1361-6560/ac72ef] [Received: 01/19/2022] [Accepted: 05/24/2022] [Indexed: 01/03/2023]
Abstract
Objective. The accuracy of navigation in minimally invasive neurosurgery is often challenged by deep brain deformations (up to 10 mm due to egress of cerebrospinal fluid during neuroendoscopic approach). We propose a deep learning-based deformable registration method to address such deformations between preoperative MR and intraoperative CBCT. Approach. The registration method uses a joint image synthesis and registration network (denoted JSR) to simultaneously synthesize MR and CBCT images to the CT domain and perform CT domain registration using a multi-resolution pyramid. JSR was first trained using a simulated dataset (simulated CBCT and simulated deformations) and then refined on real clinical images via transfer learning. The performance of the multi-resolution JSR was compared to a single-resolution architecture as well as a series of alternative registration methods (symmetric normalization (SyN), VoxelMorph, and image synthesis-based registration methods). Main results. JSR achieved median Dice coefficient (DSC) of 0.69 in deep brain structures and median target registration error (TRE) of 1.94 mm in the simulation dataset, with improvement over the single-resolution architecture (median DSC = 0.68 and median TRE = 2.14 mm). Additionally, JSR achieved superior registration compared to alternative methods, e.g., SyN (median DSC = 0.54, median TRE = 2.77 mm) and VoxelMorph (median DSC = 0.52, median TRE = 2.66 mm), and provided registration runtime of less than 3 s. Similarly, in the clinical dataset, JSR achieved median DSC = 0.72 and median TRE = 2.05 mm. Significance. The multi-resolution JSR network resolved deep brain deformations between MR and CBCT images with performance superior to other state-of-the-art methods. The accuracy and runtime support translation of the method to further clinical studies in high-precision neurosurgery.
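The two headline metrics above, Dice coefficient (DSC) and target registration error (TRE), have standard definitions that can be computed directly. This is a generic sketch of those definitions, not the study's evaluation code:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient (DSC) between two binary masks:
    2|A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks by convention."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def median_tre(targets_fixed, targets_registered):
    """Median target registration error: Euclidean distance between
    corresponding target points (e.g., in mm), summarized by the median."""
    d = np.linalg.norm(np.asarray(targets_fixed, float)
                       - np.asarray(targets_registered, float), axis=1)
    return float(np.median(d))
```

DSC measures volumetric overlap of segmented structures (here, deep brain structures), while TRE measures point-wise geometric accuracy at anatomical landmarks; reporting both, as above, separates overlap quality from landmark-level accuracy.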
Affiliation(s)
- R Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- C K Jones
- The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States of America
- J Lee
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD, United States of America
- X Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- P Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- P Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- P A Helm
- Medtronic Inc., Littleton, MA, United States of America
- M Luciano
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America
- W S Anderson
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America; The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States of America; Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America; Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America
14
Sheth NM, Uneri A, Helm PA, Zbijewski W, Siewerdsen JH. Technical assessment of 2D and 3D imaging performance of an IGZO-based flat-panel X-ray detector. Med Phys 2022; 49:3053-3066. [PMID: 35363391 PMCID: PMC10153656 DOI: 10.1002/mp.15605] [Received: 11/22/2021] [Revised: 03/09/2022] [Accepted: 03/09/2022] [Indexed: 11/08/2022] Open
Abstract
BACKGROUND Indirect detection flat-panel detectors (FPDs) consisting of hydrogenated amorphous silicon (a-Si:H) thin-film transistors (TFTs) are a prevalent technology for digital x-ray imaging. However, their performance is challenged in applications requiring low exposure levels, high spatial resolution, and high frame rate. Emerging FPD designs using metal oxide TFTs may offer potential performance improvements compared to FPDs based on a-Si:H TFTs. PURPOSE This work investigates the imaging performance of a new indium gallium zinc oxide (IGZO) TFT-based detector in 2D fluoroscopy and 3D cone-beam CT (CBCT). METHODS The new FPD consists of a sensor array combining IGZO TFTs with a-Si:H photodiodes and a 0.7-mm thick CsI:Tl scintillator. The FPD was implemented on an x-ray imaging bench with system geometry emulating intraoperative CBCT. A conventional FPD with a-Si:H TFTs and a 0.6-mm thick CsI:Tl scintillator was similarly implemented as a basis of comparison. 2D imaging performance was characterized in terms of electronic noise, sensitivity, linearity, lag, spatial resolution (modulation transfer function, MTF), image noise (noise-power spectrum, NPS), and detective quantum efficiency (DQE) with entrance air kerma (EAK) ranging from 0.3 to 1.2 μGy. 3D imaging performance was evaluated in terms of the 3D MTF and noise-equivalent quanta (NEQ), soft-tissue contrast-to-noise ratio (CNR), and image quality evident in anthropomorphic phantoms for a range of anatomical sites and dose, with weighted air kerma (K_w) ranging from 0.8 to 4.9 mGy. RESULTS The 2D imaging performance of the IGZO-based FPD exhibited up to ∼1.7× lower electronic noise than the a-Si:H FPD at matched pixel pitch. Furthermore, the IGZO FPD exhibited ∼27% increase in mid-frequency DQE (at 1 mm⁻¹) at matched pixel size and dose (EAK ≈ 1.0 μGy) and ∼11% increase after adjusting for differences in scintillator thickness. 2D spatial resolution was limited by the scintillator for each FPD.
The IGZO-based FPD demonstrated improved 3D NEQ at all spatial frequencies in both head (≥25% increase for all dose levels) and body (≥10% increase for K_w ≤ 2 mGy) imaging scenarios. These characteristics translated to improved low-contrast visualization in anthropomorphic phantoms, demonstrating ≥10% improvement in CNR and extension of the low-dose range for which the detector is input-quantum limited. CONCLUSION The IGZO-based FPD demonstrated improvements in electronic noise, image lag, and NEQ that translated to measurable improvements in 2D and 3D imaging performance compared to a conventional FPD based on a-Si:H TFTs. The improvements are most beneficial for 2D or 3D imaging scenarios involving low dose and/or high frame rate.
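The frequency-dependent DQE reported above is conventionally computed from the measured MTF and NPS as DQE(f) = MTF²(f) / (q̄ · NNPS(f)), where NNPS is the NPS normalized by the squared mean signal and q̄ is the incident photon fluence. A minimal sketch of that standard relation (not the study's measurement pipeline):

```python
import numpy as np

def dqe(mtf, nps, mean_signal, q_bar):
    """Frequency-dependent detective quantum efficiency:

        DQE(f) = MTF(f)^2 / (q_bar * NNPS(f)),   NNPS = NPS / mean_signal^2

    mtf, nps: sampled at the same spatial frequencies; q_bar: incident
    fluence (photons per unit area, matching the NPS units)."""
    nnps = np.asarray(nps, float) / float(mean_signal) ** 2
    return np.asarray(mtf, float) ** 2 / (q_bar * nnps)
```

An ideal photon-counting detector has NNPS = 1/q̄ and MTF = 1, giving DQE = 1; real detectors fall below this, and the mid-frequency comparison quoted above (at 1 mm⁻¹) is read directly off this curve.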
Affiliation(s)
- Niral Milan Sheth
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Wojciech Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
15
Vagdargi P, Uneri A, Jones CK, Wu P, Han R, Luciano MG, Anderson WS, Helm PA, Hager GD, Siewerdsen JH. Pre-Clinical Development of Robot-Assisted Ventriculoscopy for 3D Image Reconstruction and Guidance of Deep Brain Neurosurgery. IEEE Trans Med Robot Bionics 2022; 4:28-37. [PMID: 35368731 PMCID: PMC8967072 DOI: 10.1109/tmrb.2021.3125322] [Indexed: 02/03/2023]
Abstract
Conventional neuro-navigation can be challenged in targeting deep brain structures via transventricular neuroendoscopy due to unresolved geometric error following soft-tissue deformation. Current robot-assisted endoscopy techniques are fairly limited, primarily serving to follow planned trajectories and provide a stable scope holder. We report the implementation of a robot-assisted ventriculoscopy (RAV) system for 3D reconstruction, registration, and augmentation of the neuroendoscopic scene with intraoperative imaging, enabling guidance even in the presence of tissue deformation and providing visualization of structures beyond the endoscopic field-of-view. Phantom studies were performed to quantitatively evaluate image sampling requirements, registration accuracy, and computational runtime for two reconstruction methods and a variety of clinically relevant ventriculoscope trajectories. A median target registration error of 1.2 mm was achieved with an update rate of 2.34 frames per second, validating the RAV concept and motivating translation to future clinical studies.
Affiliation(s)
- Prasad Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Craig K. Jones
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD USA
- Pengwei Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Runze Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Mark G. Luciano
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD, USA
- Gregory D. Hager
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
- Jeffrey H. Siewerdsen
- Department of Biomedical Engineering and Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
16
Han R, Jones CK, Lee J, Wu P, Vagdargi P, Uneri A, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Deformable MR-CT image registration using an unsupervised, dual-channel network for neurosurgical guidance. Med Image Anal 2022; 75:102292. [PMID: 34784539 PMCID: PMC10229200 DOI: 10.1016/j.media.2021.102292] [Received: 05/20/2021] [Revised: 10/22/2021] [Accepted: 10/25/2021] [Indexed: 02/08/2023]
Abstract
PURPOSE The accuracy of minimally invasive, intracranial neurosurgery can be challenged by deformation of brain tissue - e.g., up to 10 mm due to egress of cerebrospinal fluid during neuroendoscopic approach. We report an unsupervised, deep learning-based registration framework to resolve such deformations between preoperative MR and intraoperative CT with fast runtime for neurosurgical guidance. METHOD The framework incorporates subnetworks for MR and CT image synthesis with a dual-channel registration subnetwork (with synthesis uncertainty providing spatially varying weights on the dual-channel loss) to estimate a diffeomorphic deformation field from both the MR and CT channels. An end-to-end training is proposed that jointly optimizes both the synthesis and registration subnetworks. The proposed framework was investigated using three datasets: (1) paired MR/CT with simulated deformations; (2) paired MR/CT with real deformations; and (3) a neurosurgery dataset with real deformation. Two state-of-the-art methods (Symmetric Normalization and VoxelMorph) were implemented as a basis of comparison, and variations in the proposed dual-channel network were investigated, including single-channel registration, fusion without uncertainty weighting, and conventional sequential training of the synthesis and registration subnetworks. RESULTS The proposed method achieved: (1) Dice coefficient = 0.82±0.07 and TRE = 1.2 ± 0.6 mm on paired MR/CT with simulated deformations; (2) Dice coefficient = 0.83 ± 0.07 and TRE = 1.4 ± 0.7 mm on paired MR/CT with real deformations; and (3) Dice = 0.79 ± 0.13 and TRE = 1.6 ± 1.0 mm on the neurosurgery dataset with real deformations. 
The dual-channel registration with uncertainty weighting demonstrated superior performance (e.g., TRE = 1.2 ± 0.6 mm) compared to single-channel registration (TRE = 1.6 ± 1.0 mm, p < 0.05 for CT channel and TRE = 1.3 ± 0.7 mm for MR channel) and dual-channel registration without uncertainty weighting (TRE = 1.4 ± 0.8 mm, p < 0.05). End-to-end training of the synthesis and registration subnetworks also improved performance compared to the conventional sequential training strategy (TRE = 1.3 ± 0.6 mm). Registration runtime with the proposed network was ∼3 s. CONCLUSION The deformable registration framework based on dual-channel MR/CT registration with spatially varying weights and end-to-end training achieved geometric accuracy and runtime that was superior to state-of-the-art baseline methods and various ablations of the proposed network. The accuracy and runtime of the method may be compatible with the requirements of high-precision neurosurgery.
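The uncertainty weighting on the dual-channel loss can be sketched as a per-voxel convex combination of the two channels' similarity maps, with weights that shrink where synthesis is less certain. The inverse-uncertainty weighting below is an illustrative assumption, not the network's actual formulation:

```python
import numpy as np

def dual_channel_loss(sim_ct, sim_mr, uncertainty_ct, uncertainty_mr):
    """Spatially weighted fusion of per-voxel similarity maps from the CT
    and MR channels. Voxels where one channel's synthesis is more uncertain
    contribute less through that channel (inverse-uncertainty weighting is
    an illustrative choice).
    """
    w_ct = 1.0 / (np.asarray(uncertainty_ct, float) + 1e-8)
    w_mr = 1.0 / (np.asarray(uncertainty_mr, float) + 1e-8)
    w = w_ct / (w_ct + w_mr)  # per-voxel weight on the CT channel, in [0, 1]
    return float(np.mean(w * np.asarray(sim_ct, float)
                         + (1.0 - w) * np.asarray(sim_mr, float)))
```

The design intent matches the description above: rather than trusting either synthesized modality uniformly, the loss leans on whichever channel is locally more reliable, which is what lets the dual-channel network outperform either single channel.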
Affiliation(s)
- R Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- C K Jones
- The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States
- J Lee
- Department of Radiation Oncology, Johns Hopkins University, Baltimore, MD, United States
- P Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- P Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- P A Helm
- Medtronic Inc., Littleton, MA, United States
- M Luciano
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States
- W S Anderson
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States; The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States; Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States; Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States
17
Uneri A, Wu P, Jones CK, Vagdargi P, Han R, Helm PA, Luciano MG, Anderson WS, Siewerdsen JH. Deformable 3D-2D registration for high-precision guidance and verification of neuroelectrode placement. Phys Med Biol 2021; 66. [PMID: 34644684 DOI: 10.1088/1361-6560/ac2f89] [Received: 04/21/2021] [Accepted: 10/13/2021] [Indexed: 11/11/2022]
Abstract
Purpose. Accurate neuroelectrode placement is essential to effective monitoring or stimulation of neurosurgery targets. This work presents and evaluates a method that combines deep learning and model-based deformable 3D-2D registration to guide and verify neuroelectrode placement using intraoperative imaging. Methods. The registration method consists of three stages: (1) detection of neuroelectrodes in a pair of fluoroscopy images using a deep learning approach; (2) determination of correspondence and initial 3D localization among neuroelectrode detections in the two projection images; and (3) deformable 3D-2D registration of neuroelectrodes according to a physical device model. The method was evaluated in phantom, cadaver, and clinical studies in terms of (a) the accuracy of neuroelectrode registration and (b) the quality of metal artifact reduction (MAR) in cone-beam CT (CBCT), in which the deformably registered neuroelectrode models are taken as input to the MAR. Results. The combined deep learning and model-based deformable 3D-2D registration approach achieved 0.2 ± 0.1 mm accuracy in cadaver studies and 0.6 ± 0.3 mm accuracy in clinical studies. The detection network and 3D correspondence provided initialization of 3D-2D registration within 2 mm, which facilitated end-to-end registration runtime within 10 s. Metal artifacts, quantified as the standard deviation in voxel values in tissue adjacent to neuroelectrodes, were reduced by 72% in phantom studies and by 60% in first clinical studies. Conclusions. The method combines the speed and generalizability of deep learning (for initialization) with the precision and reliability of physical model-based registration to achieve accurate deformable 3D-2D registration and MAR in functional neurosurgery. Accurate 3D-2D guidance from fluoroscopy could overcome limitations associated with deformation in conventional navigation, and improved MAR could improve CBCT verification of neuroelectrode placement.
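Stage (2), initial 3D localization from detections in two projection images, reduces to triangulating the point closest to two back-projection rays. A generic sketch under an idealized geometry (each ray from x-ray source position through a detected point), not the study's implementation:

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """3D point closest to two rays (origin p, direction d): the midpoint
    of the shortest segment connecting them. Fails (singular system) for
    parallel rays, which cannot localize a point."""
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    p1 = np.asarray(p1, float)
    p2 = np.asarray(p2, float)
    # Closest-approach conditions: the connecting segment is orthogonal
    # to both ray directions.
    a = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```

With the paired fluoroscopy views above, this per-detection localization provides the initialization (within 2 mm) that the subsequent model-based deformable registration then refines.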
Affiliation(s)
- A Uneri: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- P Wu: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- C K Jones: Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD 21218, United States of America
- P Vagdargi: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, United States of America
- R Han: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- P A Helm: Medtronic, Littleton, MA 01460, United States of America
- M G Luciano: Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
- W S Anderson: Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
- J H Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205; Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD 21218; Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218; Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
18
Zhang X, Zbijewski W, Huang Y, Uneri A, Jones CK, Lo SFL, Witham TF, Luciano M, Anderson WS, Helm PA, Siewerdsen JH. Intraoperative cone-beam and slot-beam CT: 3D image quality and dose with a slot collimator on the O-arm imaging system. Med Phys 2021; 48:6800-6809. [PMID: 34519364] [PMCID: PMC10174643] [DOI: 10.1002/mp.15221]
Abstract
PURPOSE To characterize the 3D imaging performance and radiation dose for a prototype slot-beam configuration on an intraoperative O-arm™ Surgical Imaging System (Medtronic Inc., Littleton, MA) and identify potential improvements in soft-tissue image quality for surgical interventions. METHODS A slot collimator was integrated with the O-arm™ system for slot-beam axial CT. The collimator can be automatically actuated to provide 1.2° slot-beam longitudinal collimation. Cone-beam and slot-beam configurations were investigated with and without an antiscatter grid (12:1 grid ratio, 60 lines/cm). Dose, scatter, image noise, and soft-tissue contrast resolution were evaluated in quantitative phantoms for head and body configurations over a range of exposure levels (beam energy and mAs), with reconstruction performed via filtered backprojection. Qualitative imaging performance across various anatomical sites and imaging tasks was assessed with anthropomorphic head, abdomen, and pelvis phantoms. RESULTS The dose for a slot-beam scan varied from 0.02-0.06 mGy/mAs for head protocols to 0.01-0.03 mGy/mAs for body protocols, yielding dose reduction by ∼1/5 to 1/3 compared to cone-beam, owing to beam collimation and reduced x-ray scatter. The slot-beam provided an ∼6-7× reduction in scatter-to-primary ratio (SPR) compared to the cone-beam, yielding SPR ∼20-80% for head and body without the grid and ∼7-30% with the grid. Compared to cone-beam scans at equivalent dose, slot-beam images exhibited an ∼2.5× increase in soft-tissue contrast-to-noise ratio (CNR) for both grid and gridless configurations. For slot-beam scans, a further ∼10-30% improvement in CNR was achieved when the grid was removed. Slot-beam imaging could benefit certain interventional scenarios in which improved visualization of soft tissues is required within a fairly narrow longitudinal region of interest (±7 mm in z): for example, checking the completeness of tumor resection, preservation of adjacent anatomy, or detection of complications (e.g., hemorrhage). While preserving existing capabilities for fluoroscopy and cone-beam CT, slot-beam scanning could enhance the utility of intraoperative imaging and provide a useful mode for safety and validation checks in image-guided surgery. CONCLUSIONS The 3D imaging performance and dose of a prototype slot-beam CT configuration on the O-arm™ system were investigated. Substantial improvements in soft-tissue image quality and reduction in radiation dose are evident with the slot-beam configuration due to reduced x-ray scatter.
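The soft-tissue image-quality claims above rest on contrast-to-noise ratio measured in paired phantom ROIs. A hedged sketch of how CNR is typically computed, using simulated ROI values (the 40 HU contrast and noise levels here are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio: |mean contrast| over pooled ROI noise."""
    noise = np.sqrt(0.5 * (roi_signal.var(ddof=1) + roi_background.var(ddof=1)))
    return abs(roi_signal.mean() - roi_background.mean()) / noise

# Simulated ROIs: same 40 HU contrast, but slot-beam has less scatter noise.
cone = cnr(rng.normal(40.0, 20.0, 500), rng.normal(0.0, 20.0, 500))
slot = cnr(rng.normal(40.0, 8.0, 500), rng.normal(0.0, 8.0, 500))
print(slot / cone)  # ratio tracks the noise ratio (20/8 = 2.5), up to sampling error
```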
Affiliation(s)
- Xiaoxuan Zhang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Wojciech Zbijewski: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Yixuan Huang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Ali Uneri: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Craig K Jones: The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, Maryland, USA
- Sheng-Fu L Lo: Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, Maryland, USA
- Timothy F Witham: Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, Maryland, USA
- Mark Luciano: Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, Maryland, USA
- Jeffrey H Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University; Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, Maryland, USA
19
Huang Y, Uneri A, Jones CK, Zhang X, Ketcha MD, Aygun N, Helm PA, Siewerdsen JH. 3D vertebrae labeling in spine CT: an accurate, memory-efficient (Ortho2D) framework. Phys Med Biol 2021; 66. [PMID: 34082413] [DOI: 10.1088/1361-6560/ac07c7]
Abstract
Purpose. Accurate localization and labeling of vertebrae in computed tomography (CT) is an important step toward more quantitative, automated diagnostic analysis and surgical planning. In this paper, we present a framework (called Ortho2D) for vertebral labeling in CT in a manner that is accurate and memory-efficient. Methods. Ortho2D uses two independent Faster R-CNN detection networks to detect and classify vertebrae in orthogonal (sagittal and coronal) CT slices. The 2D detections are clustered in 3D to localize vertebrae centroids in the volumetric CT and classify the region (cervical, thoracic, lumbar, or sacral) and vertebral level. A post-processing sorting method incorporates the confidence in network output to refine classifications and reduce outliers. Ortho2D was evaluated on a publicly available dataset containing 302 normal and pathological spine CT images with and without surgical instrumentation. Labeling accuracy and memory requirements were assessed in comparison to other recently reported methods. The memory efficiency of Ortho2D permitted extension to high-resolution CT to investigate the potential for further boosts to labeling performance. Results. Ortho2D achieved overall vertebrae detection accuracy of 97.1%, region identification accuracy of 94.3%, and individual vertebral level identification accuracy of 91.0%. The framework achieved 95.8% and 83.6% level identification accuracy in images without and with surgical instrumentation, respectively. Ortho2D met or exceeded the performance of previously reported 2D and 3D labeling methods and reduced memory consumption by a factor of ∼50 (at 1 mm voxel size) compared to a 3D U-Net, allowing extension to higher resolution datasets than normally afforded. The accuracy of level identification increased from 80.1% (for standard/low-resolution CT) to 95.1% (for high-resolution CT). Conclusions. The Ortho2D method achieved vertebrae labeling performance that is comparable to other recently reported methods with significant reduction in memory consumption, permitting further performance boosts via application to high-resolution CT.
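The central Ortho2D idea is that a sagittal detection fixes (y, z) and a coronal detection fixes (x, z), so detections that agree in z can be fused into a 3D centroid. A minimal sketch of that fusion with hypothetical coordinates (the paper additionally weights candidates by network confidence):

```python
import numpy as np

def cluster_3d(sagittal_dets, coronal_dets, tol=5.0):
    """Pair sagittal (y, z) and coronal (x, z) detections whose z agree,
    yielding 3D centroids (x, y, z): the core fusion step behind Ortho2D."""
    centroids = []
    for y, zs in sagittal_dets:
        # nearest coronal detection in z
        x, zc = min(coronal_dets, key=lambda d: abs(d[1] - zs))
        if abs(zc - zs) <= tol:
            centroids.append((x, y, 0.5 * (zs + zc)))
    return centroids

sag = [(10.0, 100.0), (12.0, 130.0)]   # (y, z) from sagittal slices
cor = [(50.0, 101.0), (48.0, 131.0)]   # (x, z) from coronal slices
print(cluster_3d(sag, cor))  # → [(50.0, 10.0, 100.5), (48.0, 12.0, 130.5)]
```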
Affiliation(s)
- Y Huang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- A Uneri: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- C K Jones: The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States of America
- X Zhang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- M D Ketcha: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- N Aygun: Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- P A Helm: Medtronic Inc., Littleton, MA, United States of America
- J H Siewerdsen: Department of Biomedical Engineering; The Malone Center for Engineering in Healthcare; Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
20
Vijayan RC, Han R, Wu P, Sheth NM, Ketcha MD, Vagdargi P, Vogt S, Kleinszig G, Osgood GM, Siewerdsen JH, Uneri A. Development of a fluoroscopically guided robotic assistant for instrument placement in pelvic trauma surgery. J Med Imaging (Bellingham) 2021; 8:035001. [PMID: 34124283] [PMCID: PMC8189698] [DOI: 10.1117/1.jmi.8.3.035001]
Abstract
Purpose: A method for fluoroscopic guidance of a robotic assistant is presented for instrument placement in pelvic trauma surgery. The solution uses fluoroscopic images acquired in standard clinical workflow and helps avoid repeat fluoroscopy commonly performed during implant guidance. Approach: Images acquired from a mobile C-arm are used to perform 3D-2D registration of both the patient (via patient CT) and the robot (via CAD model of a surgical instrument attached to its end effector, e.g., a drill guide), guiding the robot to target trajectories defined in the patient CT. The proposed approach avoids C-arm gantry motion, instead manipulating the robot to acquire disparate views of the instrument. Phantom and cadaver studies were performed to determine operating parameters and assess the accuracy of the proposed approach in aligning a standard drill guide instrument. Results: The proposed approach achieved average drill guide tip placement accuracy of 1.57 ± 0.47 mm and angular alignment of 0.35 ± 0.32 deg in phantom studies. The errors remained within 2 mm and 1 deg in cadaver experiments, comparable to the margins of error provided by surgical trackers (but operating without the need for external tracking). Conclusions: By operating at a fixed fluoroscopic perspective and eliminating the need for encoded C-arm gantry movement, the proposed approach simplifies and expedites the registration of image-guided robotic assistants and can be used with simple, non-calibrated, non-encoded, and non-isocentric C-arm systems to accurately guide a robotic device in a manner that is compatible with the surgical workflow.
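Placement accuracy above is reported as two quantities: tip distance and angular deviation between the achieved and planned trajectories. A small sketch of those two error metrics on illustrative numbers (not the study's data):

```python
import numpy as np

def trajectory_errors(tip_est, dir_est, tip_plan, dir_plan):
    """Tip distance (mm) and angular deviation (deg) between an aligned
    drill-guide axis and the planned trajectory."""
    tip_err = float(np.linalg.norm(tip_est - tip_plan))
    cosang = np.clip(np.dot(dir_est, dir_plan) /
                     (np.linalg.norm(dir_est) * np.linalg.norm(dir_plan)), -1, 1)
    return tip_err, float(np.degrees(np.arccos(cosang)))

# Hypothetical planned vs. achieved drill-guide pose.
plan_tip, plan_dir = np.array([0.0, 0, 0]), np.array([0.0, 0, 1])
est_tip = np.array([1.0, 1.0, 0.5])
est_dir = np.array([0.01, 0.0, 1.0])
print(trajectory_errors(est_tip, est_dir, plan_tip, plan_dir))  # (1.5 mm, ~0.57 deg)
```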
Affiliation(s)
- Rohan C. Vijayan: Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Runze Han: Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Pengwei Wu: Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Niral M. Sheth: Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Michael D. Ketcha: Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Prasad Vagdargi: Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
- Greg M. Osgood: Johns Hopkins Medicine, Department of Orthopaedic Surgery, Baltimore, Maryland, United States
- Jeffrey H. Siewerdsen: Johns Hopkins University, Department of Biomedical Engineering; Department of Computer Science, Baltimore, Maryland, United States
- Ali Uneri: Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
21
Ketcha MD, Marrama M, Souza A, Uneri A, Wu P, Zhang X, Helm PA, Siewerdsen JH. Sinogram + image domain neural network approach for metal artifact reduction in low-dose cone-beam computed tomography. J Med Imaging (Bellingham) 2021; 8:052103. [PMID: 33732755] [DOI: 10.1117/1.jmi.8.5.052103]
Abstract
Purpose: Cone-beam computed tomography (CBCT) is commonly used in the operating room to evaluate the placement of surgical implants in relation to critical anatomical structures. A particularly problematic setting, however, is the imaging of metallic implants, where strong artifacts can obscure visualization of both the implant and surrounding anatomy. Such artifacts are compounded when combined with low-dose imaging techniques such as sparse-view acquisition. Approach: This work presents a dual convolutional neural network approach, one operating in the sinogram domain and one in the reconstructed image domain, that is specifically designed for the physics and setting of intraoperative CBCT to address the sources of beam hardening and sparse view sampling that contribute to metal artifacts. The networks were trained with images from cadaver scans with simulated metal hardware. Results: The trained networks were tested on images of cadavers with surgically implanted metal hardware, and performance was compared with a method operating in the image domain alone. While both methods removed most image artifacts, superior performance was observed for the dual-convolutional neural network (CNN) approach in which beam-hardening and view sampling effects were addressed in both the sinogram and image domain. Conclusion: The work demonstrates an innovative approach for eliminating metal and sparsity artifacts in CBCT using a dual-CNN framework which does not require a metal segmentation.
Affiliation(s)
- Michael D Ketcha: Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Andre Souza: Medtronic, Littleton, Massachusetts, United States
- Ali Uneri: Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Pengwei Wu: Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Xiaoxuan Zhang: Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Jeffrey H Siewerdsen: Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
22
Zhang X, Uneri A, Wu P, Ketcha MD, Jones CK, Huang Y, Lo SFL, Helm PA, Siewerdsen JH. Long-length tomosynthesis and 3D-2D registration for intraoperative assessment of spine instrumentation. Phys Med Biol 2021; 66:055008. [PMID: 33477120] [DOI: 10.1088/1361-6560/abde96]
Abstract
PURPOSE A system for long-length intraoperative imaging is reported based on longitudinal motion of an O-arm gantry featuring a multi-slot collimator. We assess the utility of long-length tomosynthesis and the geometric accuracy of 3D image registration for surgical guidance and evaluation of long spinal constructs. METHODS A multi-slot collimator with tilted apertures was integrated into an O-arm system for long-length imaging. The multi-slot projective geometry leads to slight view disparity in both long-length projection images (referred to as 'line scans') and tomosynthesis 'slot reconstructions' produced using a weighted-backprojection method. The radiation dose for long-length imaging was measured, and the utility of long-length, intraoperative tomosynthesis was evaluated in phantom and cadaver studies. Leveraging the depth resolution provided by parallax views, an algorithm for 3D-2D registration of the patient and surgical devices was adapted for registration with line scans and slot reconstructions. Registration performance using single-plane or dual-plane long-length images was evaluated and compared to registration accuracy achieved using standard dual-plane radiographs. RESULTS Longitudinal coverage of ∼50-64 cm was achieved with a single long-length slot scan, providing a field-of-view (FOV) up to (40 × 64) cm², depending on patient positioning. The dose-area product (reference point air kerma × x-ray field area) for a slot scan ranged from ∼702-1757 mGy·cm², equivalent to ∼2.5 s of fluoroscopy and comparable to other long-length imaging systems. Long-length scanning produced high-resolution tomosynthesis reconstructions, covering ∼12-16 vertebral levels. 3D image registration using dual-plane slot reconstructions achieved median target registration error (TRE) of 1.2 mm and 0.6° in cadaver studies, outperforming registration to dual-plane line scans (TRE = 2.8 mm and 2.2°) and radiographs (TRE = 2.5 mm and 1.1°). 3D registration using single-plane slot reconstructions leveraged the ∼7-14° angular separation between slots to achieve median TRE of ∼2 mm and <2° from a single scan. CONCLUSION The multi-slot configuration provided intraoperative visualization of long spine segments, facilitating target localization, assessment of global spinal alignment, and evaluation of long surgical constructs. 3D-2D registration to long-length tomosynthesis reconstructions yielded a promising means of guidance and verification with accuracy exceeding that of 3D-2D registration to conventional radiographs.
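Registration accuracy throughout these studies is summarized as target registration error (TRE): the distance between target points and the same points mapped through the residual registration error. A minimal sketch with an illustrative residual rigid transform (here the true transform is the identity, so the small rotation and translation stand in for the registration error):

```python
import numpy as np

def target_registration_error(pts_true, transform):
    """TRE: per-point distance between true targets and targets mapped
    by the estimated registration (a rigid rotation R and translation t)."""
    R, t = transform
    pts_est = pts_true @ R.T + t
    return np.linalg.norm(pts_est - pts_true, axis=1)

# Residual error: a 0.5 degree rotation about z plus a small translation.
theta = np.deg2rad(0.5)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
t = np.array([1.0, 0.5, 0.0])
targets = np.array([[10.0, 0, 0], [0, 10.0, 0], [0, 0, 10.0]])
tre = target_registration_error(targets, (R, t))
print(round(float(np.median(tre)), 2))  # median TRE in mm
```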
Affiliation(s)
- Xiaoxuan Zhang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
23
Vagdargi P, Sheth N, Sisniega A, Uneri A, De Silva T, Osgood GM, Siewerdsen JH. Drill-mounted video guidance for orthopaedic trauma surgery. J Med Imaging (Bellingham) 2021; 8:015002. [PMID: 33604409] [DOI: 10.1117/1.jmi.8.1.015002]
Abstract
Purpose: Percutaneous fracture fixation is a challenging procedure that requires accurate interpretation of fluoroscopic images to insert guidewires through narrow bone corridors. We present a guidance system with a video camera mounted onboard the surgical drill to achieve real-time augmentation of the drill trajectory in fluoroscopy and/or CT. Approach: The camera was mounted on the drill and calibrated with respect to the drill axis. Markers identifiable in both video and fluoroscopy are placed about the surgical field and co-registered by feature correspondences. If available, a preoperative CT can also be co-registered by 3D-2D image registration. Real-time guidance is achieved by virtual overlay of the registered drill axis on fluoroscopy or in CT. Performance was evaluated in terms of target registration error (TRE), conformance within clinically relevant pelvic bone corridors, and runtime. Results: Registration of the drill axis to fluoroscopy demonstrated median TRE of 0.9 mm and 2.0 deg when solved with two views (e.g., anteroposterior and lateral) and five markers visible in both video and fluoroscopy, more than sufficient to provide Kirschner wire (K-wire) conformance within common pelvic bone corridors. Registration accuracy was reduced when solved with a single fluoroscopic view (TRE = 3.4 mm and 2.7 deg) but was also sufficient for K-wire conformance within pelvic bone corridors. Registration was robust with as few as four markers visible within the field of view. Runtime of the initial implementation allowed fluoroscopy overlay and/or 3D CT navigation with freehand manipulation of the drill up to 10 frames/s. Conclusions: A drill-mounted video guidance system was developed to assist with K-wire placement. Overall workflow is compatible with fluoroscopically guided orthopaedic trauma surgery and does not require markers to be placed in preoperative CT. The initial prototype demonstrates accuracy and runtime that could improve the accuracy of K-wire placement, motivating future work for translation to clinical studies.
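Marker-based co-registration of the video and fluoroscopy frames reduces to a least-squares rigid alignment of corresponding 3D marker positions. The standard Kabsch/SVD solution is sketched below with synthetic markers (the study's own solver and marker geometry may differ):

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst point sets,
    as used for marker-based co-registration across modalities."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # guard against reflections
    t = c_dst - R @ c_src
    return R, t

rng = np.random.default_rng(1)
markers = rng.uniform(-50, 50, (5, 3))          # five markers, as in the study
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([5.0, -2.0, 1.0])
R_est, t_est = kabsch(markers, markers @ R_true.T + t_true)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```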
Affiliation(s)
- Prasad Vagdargi: Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
- Niral Sheth: Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Alejandro Sisniega: Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Ali Uneri: Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Tharindu De Silva: Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Greg M Osgood: Johns Hopkins Medicine, Department of Orthopaedic Surgery, Baltimore, Maryland, United States
- Jeffrey H Siewerdsen: Johns Hopkins University, Department of Computer Science; Department of Biomedical Engineering, Baltimore, Maryland, United States
24
Uneri A, Wu P, Jones CK, Ketcha MD, Vagdargi P, Han R, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Data-Driven Deformable 3D-2D Registration for Guiding Neuroelectrode Placement in Deep Brain Stimulation. Proc SPIE Int Soc Opt Eng 2021; 11598:115981B. [PMID: 35982943] [PMCID: PMC9382676] [DOI: 10.1117/12.2582160]
Abstract
PURPOSE Deep brain stimulation is a neurosurgical procedure used in treatment of a growing spectrum of movement disorders. Inaccuracies in electrode placement, however, can result in poor symptom control or adverse effects and confound variability in clinical outcomes. A deformable 3D-2D registration method is presented for high-precision 3D guidance of neuroelectrodes. METHODS The approach employs a model-based, deformable algorithm for 3D-2D image registration. Variations in lead design are captured in a parametric 3D model based on a B-spline curve. The registration is solved through iterative optimization of 16 degrees-of-freedom that maximize image similarity between the 2 acquired radiographs and simulated forward projections of the neuroelectrode model. The approach was evaluated in phantom models with respect to pertinent imaging parameters, including view selection and imaging dose. RESULTS The results demonstrate an accuracy of (0.2 ± 0.2) mm in 3D localization of individual electrodes. The solution was observed to be robust to changes in pertinent imaging parameters, demonstrating accurate localization with ≥20° view separation and at 1/10th the dose of a standard fluoroscopy frame. CONCLUSIONS The presented approach provides the means for guiding neuroelectrode placement from 2 low-dose radiographic images in a manner that accommodates potential deformations at the target anatomical site. Future work will focus on improving runtime through learning-based initialization, application in reducing reconstruction metal artifacts for 3D verification of placement, and extensive evaluation in clinical data from an IRB study underway.
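The lead model above is a parametric 3D curve (a B-spline) whose shape parameters are among the registration's degrees of freedom. As a minimal, hypothetical stand-in, the sketch below evaluates a single-segment cubic (the Bézier special case of a clamped B-spline) through four illustrative control points:

```python
import numpy as np

def bezier3(ctrl, s):
    """Cubic Bezier curve (single-segment clamped cubic B-spline) through
    four 3D control points, evaluated at arc parameters s in [0, 1]."""
    s = np.atleast_1d(s)
    basis = np.stack([(1 - s) ** 3,
                      3 * s * (1 - s) ** 2,
                      3 * s ** 2 * (1 - s),
                      s ** 3], axis=1)
    return basis @ ctrl

# Hypothetical control points bending the lead tip; sample 8 contacts along it.
ctrl = np.array([[0, 0, 0], [0, 0, 10], [0, 2, 20], [0, 5, 30]], float)
contacts = bezier3(ctrl, np.linspace(0, 1, 8))
print(contacts[0], contacts[-1])  # endpoints interpolate first/last control points
```

A registration of this model would adjust `ctrl` (plus pose parameters) to maximize similarity between its forward projections and the radiographs.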
Affiliation(s)
- A. Uneri: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- P. Wu: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- C. K. Jones: Department of Computer Science, Johns Hopkins University, Baltimore, MD
- M. D. Ketcha: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- P. Vagdargi: Department of Computer Science, Johns Hopkins University, Baltimore, MD
- R. Han: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- M. Luciano: Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD
- W. S. Anderson: Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD
- J. H. Siewerdsen: Department of Biomedical Engineering; Department of Computer Science, Johns Hopkins University; Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD
25
Vijayan RC, Han R, Wu P, Sheth NM, Vagdargi P, Vogt S, Kleinszig G, Osgood GM, Siewerdsen JH, Uneri A. Fluoroscopic Guidance of a Surgical Robot: Pre-clinical Evaluation in Pelvic Guidewire Placement. Proc SPIE Int Soc Opt Eng 2021; 11598:115981G. [PMID: 36090307] [PMCID: PMC9455933] [DOI: 10.1117/12.2582188]
Abstract
PURPOSE A method and prototype for a fluoroscopically-guided surgical robot is reported for assisting pelvic fracture fixation. The approach extends the compatibility of existing guidance methods with C-arms that are in mainstream use (without prior geometric calibration) using an online calibration of the C-arm geometry automated via registration to patient anatomy. We report the first preclinical studies of this method in cadaver for evaluation of geometric accuracy. METHODS The robot is placed over the patient within the imaging field-of-view and radiographs are acquired as the robot rotates an attached instrument. The radiographs are then used to perform an online geometric calibration via 3D-2D image registration, which solves for the intrinsic and extrinsic parameters of the C-arm imaging system with respect to the patient. The solved projective geometry is then used to register the robot to the patient and drive the robot to planned trajectories. This method is applied to a robotic system consisting of a drill guide instrument for guidewire placement and evaluated in experiments using a cadaver specimen. RESULTS Robotic drill guide alignment to trajectories defined in the cadaver pelvis was accurate within 2 mm and 1° (on average) using the calibration-free approach. Conformance of trajectories within bone corridors was confirmed in cadaver by extrapolating the aligned drill guide trajectory into the cadaver pelvis. CONCLUSION This study demonstrates the accuracy of image-guided robotic positioning without prior calibration of the C-arm gantry, facilitating the use of surgical robots with simpler imaging devices that cannot establish or maintain an offline calibration. Future work includes testing of the system in a clinical setting with trained orthopaedic surgeons and residents.
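The online calibration solves for the C-arm's intrinsic and extrinsic parameters, which together define a 3×4 projection matrix mapping patient 3D coordinates to detector pixels. A minimal pinhole sketch of that mapping, with illustrative geometry rather than the system's calibrated values:

```python
import numpy as np

def projection_matrix(K, R, t):
    """3x4 projection matrix from intrinsics K and extrinsics (R, t);
    the online calibration amounts to solving for these per view."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, X):
    """Project world points (N, 3) to detector pixel coordinates (N, 2)."""
    Xh = np.hstack([X, np.ones((len(X), 1))])  # homogeneous coordinates
    x = Xh @ P.T
    return x[:, :2] / x[:, 2:3]

# Illustrative geometry: 1200 px focal length, 512 px detector, 600 mm depth.
K = np.array([[1200.0, 0, 256], [0, 1200.0, 256], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 600.0])
pts = np.array([[0.0, 0, 0], [10.0, 0, 0]])
print(project(projection_matrix(K, R, t), pts))  # [[256. 256.] [276. 256.]]
```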
Affiliation(s)
- R C Vijayan: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- R Han: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- P Wu: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- N M Sheth: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- P Vagdargi: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- S Vogt: Siemens Healthineers, Erlangen, Germany
- G M Osgood: Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, MD, USA
- J H Siewerdsen: Department of Biomedical Engineering; Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- A Uneri: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
26
Han R, Uneri A, Vijayan RC, Wu P, Vagdargi P, Sheth N, Vogt S, Kleinszig G, Osgood GM, Siewerdsen JH. Fracture reduction planning and guidance in orthopaedic trauma surgery via multi-body image registration. Med Image Anal 2020; 68:101917. [PMID: 33341493] [DOI: 10.1016/j.media.2020.101917]
Abstract
PURPOSES Surgical reduction of pelvic fracture is a challenging procedure, and accurate restoration of natural morphology is essential to obtaining positive functional outcome. The procedure often requires extensive preoperative planning, long fluoroscopic exposure time, and trial-and-error to achieve accurate reduction. We report a multi-body registration framework for reduction planning using preoperative CT and intraoperative guidance using routine 2D fluoroscopy that could help address such challenges. METHOD The framework starts with semi-automatic segmentation of fractured bone fragments in preoperative CT using continuous max-flow. For reduction planning, a multi-to-one registration is performed to register bone fragments to an adaptive template that adjusts to patient-specific bone shapes and poses. The framework further registers bone fragments to intraoperative fluoroscopy to provide 2D fluoroscopy guidance and/or 3D navigation relative to the reduction plan. The framework was investigated in three studies: (1) a simulation study of 40 CT images simulating three fracture categories (unilateral two-body, unilateral three-body, and bilateral two-body); (2) a proof-of-concept cadaver study to mimic clinical scenario; and (3) a retrospective clinical study investigating feasibility in three cases of increasing severity and accuracy requirement. RESULTS Segmentation of simulated pelvic fracture demonstrated Dice coefficient of 0.92±0.06. Reduction planning using the adaptive template achieved 2-3 mm and 2-3° error for the three fracture categories, significantly better than planning based on mirroring of contralateral anatomy. 3D-2D registration yielded ~2 mm and 0.5° accuracy, providing accurate guidance with respect to the preoperative reduction plan. The cadaver study and retrospective clinical study demonstrated comparable accuracy: ~0.90 Dice coefficient in segmentation, ~3 mm accuracy in reduction planning, and ~2 mm accuracy in 3D-2D registration. 
CONCLUSION The registration framework demonstrated planning and guidance accuracy within clinical requirements in both simulation and clinical feasibility studies for a broad range of fracture-dislocation patterns. Using routinely acquired preoperative CT and intraoperative fluoroscopy, the framework could improve the accuracy of pelvic fracture reduction, reduce radiation dose, and integrate well with common clinical workflow without the need for additional navigation systems.
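The Dice coefficient reported throughout these studies has a simple closed form; a minimal sketch in Python (function name and binary-mask representation are illustrative, not from the paper):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), from 0 (disjoint) to 1 (identical)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    # convention: two empty masks count as a perfect match
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A Dice of 0.92, as reported above, indicates that the segmented fragment volumes overlap the ground truth almost completely.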
Affiliation(s)
- R Han
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
- A Uneri
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
- R C Vijayan
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
- P Wu
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
- P Vagdargi
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD, United States
- N Sheth
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
- S Vogt
- Siemens Healthineers, Erlangen, Germany
- G M Osgood
- Department of Orthopaedic Surgery, The Johns Hopkins Hospital, Baltimore, MD, United States
- J H Siewerdsen
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
27
Wu P, Sheth N, Sisniega A, Uneri A, Han R, Vijayan R, Vagdargi P, Kreher B, Kunze H, Kleinszig G, Vogt S, Lo SF, Theodore N, Siewerdsen JH. C-arm orbits for metal artifact avoidance (MAA) in cone-beam CT. Phys Med Biol 2020; 65:165012. [PMID: 32428891] [PMCID: PMC8650760] [DOI: 10.1088/1361-6560/ab9454]
Abstract
Metal artifacts present a challenge to cone-beam CT (CBCT) image-guided surgery, obscuring visualization of metal instruments and adjacent anatomy-often in the very region of interest pertinent to the imaging/surgical tasks. We present a method to reduce the influence of metal artifacts by prospectively defining an image acquisition protocol-viz., the C-arm source-detector orbit-that mitigates metal-induced biases in the projection data. The metal artifact avoidance (MAA) method is compatible with simple mobile C-arms, does not require exact prior information on the patient or metal implants, and is consistent with 3D filtered backprojection (FBP), more advanced (e.g. polyenergetic) model-based image reconstruction (MBIR), and metal artifact reduction (MAR) post-processing methods. The MAA method consists of: (i) coarse localization of metal objects in the field-of-view (FOV) via two or more low-dose scout projection views and segmentation (e.g. a simple U-Net) in coarse backprojection; (ii) model-based prediction of metal-induced x-ray spectral shift for all source-detector vertices accessible by the imaging system (e.g. gantry rotation and tilt angles); and (iii) identification of a circular or non-circular orbit that reduces the variation in spectral shift. The method was developed, tested, and evaluated in a series of studies presenting increasing levels of complexity and realism, including digital simulations, phantom experiment, and cadaver experiment in the context of image-guided spine surgery (pedicle screw implants). The MAA method accurately predicted tilted circular and non-circular orbits that reduced the magnitude of metal artifacts in CBCT reconstructions. Realistic distributions of metal instrumentation were successfully localized (0.71 median Dice coefficient) from 2-6 low-dose scout views even in complex anatomical scenes. 
The MAA-predicted tilted circular orbits reduced root-mean-square error (RMSE) in 3D image reconstructions by 46%-70% and 'blooming' artifacts (apparent width of the screw shaft) by 20-45%. Non-circular orbits defined by MAA achieved a further ∼46% reduction in RMSE compared to the best (tilted) circular orbit. The MAA method presents a practical means to predict C-arm orbits that minimize spectral bias from metal instrumentation. Resulting orbits-either simple tilted circular orbits or more complex non-circular orbits that can be executed with a motorized multi-axis C-arm-exhibited substantial reduction of metal artifacts in raw CBCT reconstructions by virtue of higher fidelity projection data, which are in turn compatible with subsequent MAR post-processing and/or polyenergetic MBIR to further reduce artifacts.
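Step (iii) of the MAA method amounts to choosing the orbit whose predicted spectral shift varies least over the scan. A toy sketch of that selection step, with the paper's spectral-shift model stubbed out as an input array (array layout and function name are illustrative assumptions):

```python
import numpy as np

def select_tilt(shift_map):
    """shift_map[t, v] = predicted metal-induced spectral shift for candidate
    gantry tilt t at view angle v. A good orbit keeps the shift as uniform as
    possible across views, so pick the tilt minimizing its std over views."""
    return int(np.argmin(np.std(shift_map, axis=1)))
```

The same idea extends to non-circular orbits by searching per-view tilt sequences rather than a single tilt, at correspondingly higher cost.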
Affiliation(s)
- P Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
28
Han R, Uneri A, Ketcha M, Vijayan R, Sheth N, Wu P, Vagdargi P, Vogt S, Kleinszig G, Osgood GM, Siewerdsen JH. Multi-body 3D-2D registration for image-guided reduction of pelvic dislocation in orthopaedic trauma surgery. Phys Med Biol 2020; 65:135009. [PMID: 32217833] [PMCID: PMC8647002] [DOI: 10.1088/1361-6560/ab843c]
Abstract
Surgical reduction of pelvic dislocation is a challenging procedure with poor long-term prognosis if reduction does not accurately restore natural morphology. The procedure often requires long fluoroscopic exposure times and trial-and-error to achieve accurate reduction. We report a method to automatically compute the target pose of dislocated bones in preoperative CT and provide 3D guidance of reduction using routine 2D fluoroscopy. A pelvic statistical shape model (SSM) and a statistical pose model (SPM) were formed from an atlas of 40 pelvic CT images. Multi-body bone segmentation was achieved by mapping the SSM to a preoperative CT via an active shape model. The target reduction pose for the dislocated bone is estimated by fitting the poses of undislocated bones to the SPM. Intraoperatively, multiple bones are registered to fluoroscopy images via 3D-2D registration to obtain 3D pose estimates from 2D images. The method was examined in three studies: (1) a simulation study of 40 CT images simulating a range of dislocation patterns; (2) a pelvic phantom study with controlled dislocation of the left innominate bone; (3) a clinical case study investigating feasibility in images acquired during pelvic reduction surgery. Experiments investigated the accuracy of registration as a function of initialization error (capture range), image quality (radiation dose and image noise), and field of view (FOV) size. The simulation study achieved target pose estimation with translational error of median 2.3 mm (1.4 mm interquartile range, IQR) and rotational error of 2.1° (1.3° IQR). 3D-2D registration yielded 0.3 mm (0.2 mm IQR) in-plane and 0.3 mm (0.2 mm IQR) out-of-plane translational error, with in-plane capture range of ±50 mm and out-of-plane capture range of ±120 mm. 
The phantom study demonstrated 3D-2D target registration error of 2.5 mm (1.5 mm IQR), and the method was robust over a large dose range, down to 5 μGy/frame (an order of magnitude lower than the nominal fluoroscopic dose). The clinical feasibility study demonstrated accurate registration with both preoperative and intraoperative radiographs, yielding 3.1 mm (1.0 mm IQR) projection distance error with robust performance for FOV ranging from 340 × 340 mm² to 170 × 170 mm² (at the image plane). The method demonstrated accurate estimation of the target reduction pose in simulation, phantom, and a clinical feasibility study for a broad range of dislocation patterns, initialization error, dose levels, and FOV size. The system provides a novel means of guidance and assessment of pelvic reduction from routinely acquired preoperative CT and intraoperative fluoroscopy. The method has the potential to reduce radiation dose by minimizing trial-and-error and to improve outcomes by guiding more accurate reduction of joint dislocations.
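Estimating a dislocated bone's target pose by fitting undislocated bones to a statistical pose model can be sketched as a least-squares fit in a low-dimensional mode space. This is a simplified stand-in for the paper's SPM (variable names and the single-mode example are my own):

```python
import numpy as np

def predict_target_pose(mean, modes, observed, obs_idx, tgt_idx):
    """Fit mode weights w so that mean + modes @ w matches the observed
    (undislocated) pose parameters, then read off the model's prediction
    for the dislocated bone's natural pose at tgt_idx.
    mean: (d,) mean pose vector; modes: (d, k) principal pose modes."""
    A = modes[obs_idx]                                  # observed rows of modes
    w, *_ = np.linalg.lstsq(A, observed - mean[obs_idx], rcond=None)
    return mean[tgt_idx] + modes[tgt_idx] @ w
```

The fitted pose then serves as the reduction target against which intraoperative 3D-2D registration is compared.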
Affiliation(s)
- R Han
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- A Uneri
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- M Ketcha
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- R Vijayan
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- N Sheth
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- P Wu
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- P Vagdargi
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- S Vogt
- Siemens Healthineers, Erlangen, Germany
- G M Osgood
- Department of Orthopaedic Surgery, The Johns Hopkins Hospital, Baltimore, MD, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
29
Doerr SA, De Silva T, Vijayan R, Han R, Uneri A, Ketcha MD, Zhang X, Khanna N, Westbroek E, Jiang B, Zygourakis C, Aygun N, Theodore N, Siewerdsen JH. Automatic analysis of global spinal alignment from simple annotation of vertebral bodies. J Med Imaging (Bellingham) 2020; 7:035001. [PMID: 32411814] [DOI: 10.1117/1.jmi.7.3.035001]
Abstract
Purpose: Measurement of global spinal alignment (GSA) is an important aspect of diagnosis and treatment evaluation for spinal deformity but is subject to a high level of inter-reader variability. Approach: Two methods for automatic GSA measurement are proposed to mitigate such variability and reduce the burden of manual measurements. Both approaches use vertebral labels in spine computed tomography (CT) as input: the first (EndSeg) segments vertebral endplates using input labels as seed points; and the second (SpNorm) computes a two-dimensional curvilinear fit to the input labels. Studies were performed to characterize the performance of EndSeg and SpNorm in comparison to manual GSA measurement by five clinicians, including measurements of proximal thoracic kyphosis, main thoracic kyphosis, and lumbar lordosis. Results: For the automatic methods, 93.8% of endplate angle estimates were within the inter-reader 95% confidence interval (CI95). All GSA measurements for the automatic methods were within the inter-reader CI95, and there was no statistically significant difference between automatic and manual methods. The SpNorm method appears particularly robust as it operates without segmentation. Conclusions: Such methods could improve the reproducibility and reliability of GSA measurements and are potentially suitable to applications in large datasets-e.g., for outcome assessment in surgical data science.
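The SpNorm idea of a curvilinear fit through vertebral labels can be sketched with a simple polynomial fit; tangent-angle differences between levels then yield regional alignment angles. The polynomial degree and function name here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def tangent_angles(z, y, deg=4):
    """Fit a smooth curve y(z) through vertebral centroid labels and return
    the tangent angle (degrees) at each label; differences between levels
    give segmentation-free estimates of kyphosis/lordosis."""
    deg = min(deg, len(z) - 1)          # avoid an underdetermined fit
    c = np.polyfit(z, y, deg)
    return np.degrees(np.arctan(np.polyval(np.polyder(c), z)))
```

For example, a lumbar lordosis estimate would be the difference between tangent angles at the labels bounding the lumbar region.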
Affiliation(s)
- Sophia A Doerr
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, MD, United States
- Tharindu De Silva
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, MD, United States
- Rohan Vijayan
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, MD, United States
- Runze Han
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, MD, United States
- Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, MD, United States
- Michael D Ketcha
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, MD, United States
- Xiaoxuan Zhang
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, MD, United States
- Nishanth Khanna
- Johns Hopkins University, Department of Radiology and Radiological Science, Baltimore, MD, United States
- Erick Westbroek
- Johns Hopkins University, Department of Neurosurgery, Baltimore, MD, United States
- Bowen Jiang
- Johns Hopkins University, Department of Neurosurgery, Baltimore, MD, United States
- Corinna Zygourakis
- Johns Hopkins University, Department of Neurosurgery, Baltimore, MD, United States
- Nafi Aygun
- Johns Hopkins University, Department of Radiology and Radiological Science, Baltimore, MD, United States
- Nicholas Theodore
- Johns Hopkins University, Department of Neurosurgery, Baltimore, MD, United States
- Jeffrey H Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, MD, United States; Johns Hopkins University, Department of Radiology and Radiological Science, Baltimore, MD, United States; Johns Hopkins University, Department of Neurosurgery, Baltimore, MD, United States
30
Vagdargi P, Uneri A, Sheth N, Sisniega A, De Silva T, Osgood GM, Siewerdsen JH. Calibration and Registration of a Freehand Video-Guided Surgical Drill for Orthopaedic Trauma. Proc SPIE Int Soc Opt Eng 2020; 11315. [PMID: 32476703] [DOI: 10.1117/12.2550001]
Abstract
Pelvic trauma surgical procedures rely heavily on guidance with 2D fluoroscopy views for navigation in complex bone corridors. This "fluoro-hunting" paradigm results in extended radiation exposure and possible suboptimal guidewire placement from limited visualization of the fracture site with overlapped anatomy in 2D fluoroscopy. A novel computer vision-based navigation system for freehand guidewire insertion is proposed. The navigation framework is compatible with the rapid workflow in trauma surgery and bridges the gap between intraoperative fluoroscopy and preoperative CT images. The system uses a drill-mounted camera to detect and track poses of simple multimodality (optical/radiographic) markers for registration of the drill axis to fluoroscopy and, in turn, to CT. Surgical navigation is achieved with real-time display of the drill axis position on fluoroscopy views and, optionally, in 3D on the preoperative CT. The camera was corrected for lens distortion effects and calibrated for 3D pose estimation. Custom marker jigs were constructed to calibrate the drill axis and tooltip with respect to the camera frame. A testing platform for evaluation of the navigation system was developed, including a robotic arm for precise, repeatable placement of the drill. Experiments were conducted for hand-eye calibration between the drill-mounted camera and the robot using the Park and Martin solver. Experiments using checkerboard calibration demonstrated subpixel accuracy [-0.01 ± 0.23 px] for camera distortion correction. The drill axis was calibrated using a cylindrical model and demonstrated sub-mm accuracy [0.14 ± 0.70 mm] and sub-degree angular deviation.
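The lens-distortion correction referenced above conventionally assumes a radial distortion model. A minimal forward model (radial-only with two coefficients, a simplification of the full distortion model typically estimated from checkerboard views; names are illustrative):

```python
import numpy as np

def project(K, k1, k2, pts_3d):
    """Project 3D camera-frame points through a pinhole camera with radial
    distortion. K is the 3x3 intrinsic matrix; k1, k2 are radial
    distortion coefficients; pts_3d is (n, 3) with z > 0."""
    x = pts_3d[:, :2] / pts_3d[:, 2:3]          # normalized image coords
    r2 = (x ** 2).sum(axis=1, keepdims=True)    # squared radius
    xd = x * (1.0 + k1 * r2 + k2 * r2 ** 2)     # radial distortion
    return xd @ K[:2, :2].T + K[:2, 2]          # to pixel coordinates
```

Calibration inverts this model: given detected checkerboard corners, it solves for K, k1, k2 (and poses) that minimize reprojection error, which is what the subpixel residual quoted above measures.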
Affiliation(s)
- P Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- N Sheth
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- T De Silva
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- G M Osgood
- Department of Orthopedic Surgery, Johns Hopkins Medicine, Baltimore, MD 21218, USA
- J H Siewerdsen
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
31
De Silva T, Vedula SS, Perdomo-Pantoja A, Vijayan R, Doerr SA, Uneri A, Han R, Ketcha MD, Skolasky RL, Witham T, Theodore N, Siewerdsen JH. SpineCloud: image analytics for predictive modeling of spine surgery outcomes. J Med Imaging (Bellingham) 2020; 7:031502. [PMID: 32090136] [DOI: 10.1117/1.jmi.7.3.031502]
Abstract
Purpose: Data-intensive modeling could provide insight on the broad variability in outcomes in spine surgery. Previous studies were limited to analysis of demographic and clinical characteristics. We report an analytic framework called "SpineCloud" that incorporates quantitative features extracted from perioperative images to predict spine surgery outcome. Approach: A retrospective study was conducted in which patient demographics, imaging, and outcome data were collected. Image features were automatically computed from perioperative CT. Postoperative 3- and 12-month functional and pain outcomes were analyzed in terms of improvement relative to the preoperative state. A boosted decision tree classifier was trained to predict outcome using demographic and image features as predictor variables. Predictions were computed based on SpineCloud and conventional demographic models, and features associated with poor outcome were identified from weighting terms evident in the boosted tree. Results: Neither approach was predictive of 3- or 12-month outcomes based on preoperative data alone in the current, preliminary study. However, SpineCloud predictions incorporating image features obtained during and immediately following surgery (i.e., intraoperative and immediate postoperative images) exhibited significant improvement in area under the receiver operating characteristic (AUC): AUC = 0.72 (CI95 = 0.59 to 0.83) at 3 months and AUC = 0.69 (CI95 = 0.55 to 0.82) at 12 months. Conclusions: Predictive modeling of lumbar spine surgery outcomes was improved by incorporation of image-based features compared to analysis based on conventional demographic data. The SpineCloud framework could improve understanding of factors underlying outcome variability and warrants further investigation and validation in a larger patient cohort.
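The AUC figure of merit quoted above can be computed directly from classifier scores via the Mann-Whitney statistic; a small generic sketch (not the SpineCloud code):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case scores higher than a randomly chosen negative case
    (ties receive half credit)."""
    s, y = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = s[y], s[~y]
    diff = pos[:, None] - neg[None, :]          # all positive-negative pairs
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size
```

An AUC of 0.5 corresponds to chance performance, so the reported 0.72 reflects a modest but significant predictive signal.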
Affiliation(s)
- Tharindu De Silva
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- S Swaroop Vedula
- Johns Hopkins University, Malone Center for Engineering in Healthcare, Baltimore, Maryland, United States
- Alexander Perdomo-Pantoja
- Johns Hopkins University, School of Medicine, Department of Neurosurgery, Baltimore, Maryland, United States
- Rohan Vijayan
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Sophia A Doerr
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Runze Han
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Michael D Ketcha
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Richard L Skolasky
- Johns Hopkins University, School of Medicine, Department of Orthopedic Surgery, Baltimore, Maryland, United States
- Timothy Witham
- Johns Hopkins University, School of Medicine, Department of Neurosurgery, Baltimore, Maryland, United States
- Nicholas Theodore
- Johns Hopkins University, School of Medicine, Department of Neurosurgery, Baltimore, Maryland, United States
- Jeffrey H Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States; Johns Hopkins University, Malone Center for Engineering in Healthcare, Baltimore, Maryland, United States; Johns Hopkins University, School of Medicine, Department of Neurosurgery, Baltimore, Maryland, United States
32
Vijayan RC, Han R, Wu P, Sheth NM, Ketcha MD, Vagdargi P, Vogt S, Kleinszig G, Osgood GM, Siewerdsen JH, Uneri A. Image-Guided Robotic K-Wire Placement for Orthopaedic Trauma Surgery. Proc SPIE Int Soc Opt Eng 2020; 11315:113151A. [PMID: 36082206] [PMCID: PMC9450105] [DOI: 10.1117/12.2549713]
Abstract
PURPOSE We report the initial development of an image-based solution for robotic assistance of pelvic fracture fixation. The approach uses intraoperative radiographs, preoperative CT, and an end effector of known design to align the robot with target trajectories in CT. The method extends previous work to solve the robot-to-patient registration from a single radiographic view (without C-arm rotation) and addresses the workflow challenges associated with integrating robotic assistance in orthopaedic trauma surgery in a form that could be broadly applicable to isocentric or non-isocentric C-arms. METHODS The proposed method uses 3D-2D known-component registration to localize a robot end effector with respect to the patient by: (1) exploiting the extended size and complex features of pelvic anatomy to register the patient; and (2) capturing multiple end effector poses using precise robotic manipulation. These transformations, along with an offline hand-eye calibration of the end effector, are used to calculate target robot poses that align the end effector with planned trajectories in the patient CT. Geometric accuracy of the registrations was independently evaluated for the patient and the robot in phantom studies. RESULTS The resulting translational difference between the ground truth and patient registrations of a pelvis phantom using a single (AP) view was 1.3 mm, compared to 0.4 mm using dual (AP+Lat) views. Registration of the robot in air (i.e., no background anatomy) with five unique end effector poses achieved mean translational difference ~1.4 mm for K-wire placement in the pelvis, comparable to tracker-based margins of error (commonly ~2 mm). CONCLUSIONS The proposed approach is feasible based on the accuracy of the patient and robot registrations and is a preliminary step in developing an image-guided robotic guidance system that more naturally fits the workflow of fluoroscopically guided orthopaedic trauma surgery. 
Future work will involve end-to-end development of the proposed guidance system and assessment of the system with delivery of K-wires in cadaver studies.
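The core geometry of aligning the end effector with a planned trajectory reduces to composing the registration transforms. A sketch with illustrative frame names (not the authors' notation), where each T_a_b is a 4x4 homogeneous transform mapping frame b into frame a:

```python
import numpy as np

def target_flange_pose(T_fluoro_patient, T_patient_traj, T_flange_effector):
    """Solve for the robot flange pose T_fluoro_flange such that the end
    effector coincides with the planned trajectory frame:
        T_fluoro_flange @ T_flange_effector = T_fluoro_patient @ T_patient_traj
    T_fluoro_patient comes from 3D-2D patient registration, T_patient_traj is
    the planned trajectory in CT, T_flange_effector from hand-eye calibration."""
    return T_fluoro_patient @ T_patient_traj @ np.linalg.inv(T_flange_effector)
```

Errors in any one transform propagate directly into the delivered trajectory, which is why the patient and robot registrations were evaluated independently.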
Affiliation(s)
- R. C. Vijayan
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- R. Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- P. Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- N. M. Sheth
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- M. D. Ketcha
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- P. Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD
- S. Vogt
- Siemens Healthineers, Forchheim, Germany
- G. M. Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Medicine, Baltimore, MD
- J. H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- Department of Computer Science, Johns Hopkins University, Baltimore, MD
- A. Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
33
Doerr SA, Uneri A, Huang Y, Jones CK, Zhang X, Ketcha MD, Helm PA, Siewerdsen JH. Data-Driven Detection and Registration of Spine Surgery Instrumentation in Intraoperative Images. Proc SPIE Int Soc Opt Eng 2020; 11315:113152P. [PMID: 36082205] [PMCID: PMC9450103] [DOI: 10.1117/12.2550052]
Abstract
PURPOSE Conventional model-based 3D-2D registration algorithms can be challenged by limited capture range, model validity, and stringent intraoperative runtime requirements. In this work, a deep convolutional neural network was used to provide robust initialization of a registration algorithm (known-component registration, KC-Reg) for 3D localization of spine surgery implants, combining the speed and global support of data-driven approaches with the previously demonstrated accuracy of model-based registration. METHODS The approach uses a Faster R-CNN architecture to detect and localize a broad variety and orientation of spinal pedicle screws in clinical images. Training data were generated using projections from 17 clinical cone-beam CT scans and a library of screw models to simulate implants. Network output was processed to provide screw count and 2D poses. The network was tested on two test datasets of 2,000 images, each depicting real anatomy and realistic spine surgery instrumentation - one dataset involving the same patient data as in the training set (but with different screws, poses, image noise, and affine transformations) and one dataset with five patients unseen in the training data. Assessment of device detection was quantified in terms of accuracy and specificity, and localization accuracy was evaluated in terms of intersection-over-union (IOU) and distance between true and predicted bounding box coordinates. RESULTS The overall accuracy of pedicle screw detection was ~86.6% (85.3% for the same-patient dataset and 87.8% for the many-patient dataset), suggesting that the screw detection network performed reasonably well irrespective of disparate, complex anatomical backgrounds. The precision of screw detection was ~92.6% (95.0% and 90.2% for the respective same-patient and many-patient datasets). The accuracy of screw localization was within 1.5 mm (median difference of bounding box coordinates), and median IOU exceeded 0.85.
For purposes of initializing a 3D-2D registration algorithm, the accuracy was observed to be well within the typical capture range of KC-Reg. CONCLUSIONS Initial evaluation of network performance indicates sufficient accuracy to integrate with algorithms for implant registration, guidance, and verification in spine surgery. Such capability is of potential use in surgical navigation, robotic assistance, and data-intensive analysis of implant placement in large retrospective datasets. Future work includes correspondence of multiple views, 3D localization, screw classification, and expansion of the training dataset to a broader variety of anatomical sites, number of screws, and types of implants.
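The IOU localization metric reported above has a standard definition for axis-aligned bounding boxes (x0, y0, x1, y1); a minimal sketch:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap height
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0
```

A median IOU above 0.85 means predicted and true screw boxes overlap almost entirely, which is comfortably tight for seeding a registration with a multi-millimeter capture range.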
Affiliation(s)
- S. A. Doerr
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- A. Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- Y. Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- C. K. Jones
- Department of Computer Science, Johns Hopkins University, Baltimore, MD
- X. Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- M. D. Ketcha
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- J. H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- Department of Computer Science, Johns Hopkins University, Baltimore, MD
34
Sheth NM, De Silva T, Uneri A, Ketcha M, Han R, Vijayan R, Osgood GM, Siewerdsen JH. A mobile isocentric C-arm for intraoperative cone-beam CT: Technical assessment of dose and 3D imaging performance. Med Phys 2020; 47:958-974. [DOI: 10.1002/mp.13983]
Affiliation(s)
- N. M. Sheth
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- T. De Silva
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- A. Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- M. Ketcha
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- R. Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- R. Vijayan
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- G. M. Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Medical Institutions, Baltimore, MD, USA
- J. H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
35
Siewerdsen JH, Uneri A, Hernandez AM, Burkett GW, Boone JM. Cone-beam CT dose and imaging performance evaluation with a modular, multipurpose phantom. Med Phys 2019; 47:467-479. [DOI: 10.1002/mp.13952]
Affiliation(s)
- J. H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- A. Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- A. M. Hernandez
- Department of Radiology, University of California, Davis, Sacramento, CA 95817, USA
- G. W. Burkett
- Department of Radiology, University of California, Davis, Sacramento, CA 95817, USA
- J. M. Boone
- Department of Radiology, University of California, Davis, Sacramento, CA 95817, USA
36
Ketcha MD, De Silva T, Han R, Uneri A, Vogt S, Kleinszig G, Siewerdsen JH. Learning-based deformable image registration: effect of statistical mismatch between train and test images. J Med Imaging (Bellingham) 2019; 6:044008. [PMID: 31853461] [DOI: 10.1117/1.jmi.6.4.044008]
Abstract
Convolutional neural networks (CNNs) offer a promising means to achieve fast deformable image registration with accuracy comparable to conventional, physics-based methods. A persistent question with CNN methods, however, is whether they will be able to generalize to data outside of the training set. We investigated this question of mismatch between train and test data with respect to first- and second-order image statistics (e.g., spatial resolution, image noise, and power spectrum). A UNet-based architecture was built and trained on simulated CT images for various conditions of image noise (dose), spatial resolution, and deformation magnitude. Target registration error was measured as a function of the difference in statistical properties between the test and training data. Generally, registration error is minimized when the training data exactly match the statistics of the test data; however, networks trained with data exhibiting a diversity in statistical characteristics generalized well across the range of statistical conditions considered. Furthermore, networks trained on simulated image content with first- and second-order statistics selected to match that of real anatomical data were shown to provide reasonable registration performance on real anatomical content, offering potential new means for data augmentation. Characterizing the behavior of a CNN in the presence of statistical mismatch is an important step in understanding how these networks behave when deployed on new, unobserved data. Such characterization can inform decisions on whether retraining is necessary and can guide the data collection and/or augmentation process for training.
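One way to realize the paper's observation about matching second-order statistics is to synthesize training backgrounds with a prescribed power-law spectrum. A sketch of such an augmentation generator (the seeding, exponent convention, and unit-variance normalization are illustrative choices):

```python
import numpy as np

def power_law_noise(shape, beta, rng=None):
    """White Gaussian noise shaped in the Fourier domain to a 1/f^beta power
    spectrum (amplitude ~ f^(-beta/2)), returned with unit variance."""
    rng = np.random.default_rng(0) if rng is None else rng
    fx = np.fft.fftfreq(shape[0])[:, None]
    fy = np.fft.fftfreq(shape[1])[None, :]
    r = np.hypot(fx, fy)                       # radial spatial frequency
    amp = np.zeros_like(r)
    amp[r > 0] = r[r > 0] ** (-beta / 2.0)     # leave DC at zero
    field = np.real(np.fft.ifft2(amp * np.fft.fft2(rng.standard_normal(shape))))
    return field / field.std()
```

Fitting beta (and a resolution/noise model) to real anatomical images and training on such synthetic fields is one route to the data augmentation suggested above.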
Affiliation(s)
- Michael D Ketcha
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Tharindu De Silva
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Runze Han
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Jeffrey H Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
37
Ketcha MD, De Silva T, Han R, Uneri A, Vogt S, Kleinszig G, Siewerdsen JH. A Statistical Model for Rigid Image Registration Performance: The Influence of Soft-Tissue Deformation as a Confounding Noise Source. IEEE Trans Med Imaging 2019; 38:2016-2027. [PMID: 30932834 PMCID: PMC6755917 DOI: 10.1109/tmi.2019.2907868] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Soft-tissue deformation presents a confounding factor to rigid image registration by introducing image content inconsistent with the underlying motion model, presenting non-correspondent structure with potentially high power, and creating local minima that challenge iterative optimization. In this paper, we introduce a model for registration performance that includes deformable soft tissue as a power-law noise distribution within a statistical framework describing the Cramer-Rao lower bound (CRLB) and root-mean-squared error (RMSE) in registration performance. The model incorporates both cross-correlation and gradient-based similarity metrics, and the model was tested in application to 3D-2D (CT-to-radiograph) and 3D-3D (CT-to-CT) image registration. Predictions accurately reflect the trends in registration error as a function of dose (quantum noise), and the choice of similarity metrics for both registration scenarios. Incorporating soft-tissue deformation as a noise source yields important insight on the limits of registration performance with respect to algorithm design and the clinical application or anatomical context. For example, the model quantifies the advantage of gradient-based similarity metrics in 3D-2D registration, identifies the low-dose limits of registration performance, and reveals the conditions for which the registration performance is fundamentally limited by soft-tissue deformation.
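The model's treatment of deformable soft tissue as power-law noise can be illustrated by synthesizing a random field whose power spectrum falls as 1/f^β. This is a sketch of the noise term only; the exponent and normalization are illustrative, not the paper's fitted values:

```python
import numpy as np

def power_law_noise(shape, beta, rng):
    """Random field with power spectrum proportional to 1/f^beta
    (a stand-in for soft-tissue 'clutter' in the registration model)."""
    ny, nx = shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = np.inf                       # suppress the DC term
    amplitude = 1.0 / f ** (beta / 2.0)    # power = amplitude squared
    phase = rng.uniform(0, 2 * np.pi, shape)
    field = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
    return field / field.std()             # unit-variance clutter field

rng = np.random.default_rng(1)
clutter = power_law_noise((128, 128), beta=3.0, rng=rng)
```

Larger β concentrates power at low frequencies, mimicking broad anatomical structure rather than fine-grained quantum noise.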
38
Uneri A, Zhang X, Yi T, Stayman JW, Helm PA, Osgood GM, Theodore N, Siewerdsen JH. Known-component metal artifact reduction (KC-MAR) in cone-beam CT. Phys Med Biol 2019; 64:165021.
Abstract
Intraoperative cone-beam CT (CBCT) is increasingly used for surgical navigation and validation of device placement. In spinal deformity correction, CBCT provides visualization of pedicle screws and fixation rods in relation to adjacent anatomy. This work reports and evaluates a method that uses prior information regarding such surgical instrumentation for improved metal artifact reduction (MAR). The known-component MAR (KC-MAR) approach achieves precise localization of instrumentation in projection images using rigid or deformable 3D-2D registration of component models, thereby overcoming residual errors associated with segmentation-based methods. Projection data containing metal components are processed via 2D inpainting of the detector signal, followed by 3D filtered back-projection (FBP). Phantom studies were performed to identify nominal algorithm parameters and quantitatively investigate performance over a range of component material composition and size. A cadaver study emulating screw and rod placement in spinal deformity correction was conducted to evaluate performance under realistic clinical imaging conditions. KC-MAR demonstrated reduction in artifacts (standard deviation in voxel values) across a range of component types and dose levels, reducing the artifact to 5-10 HU. Accurate component delineation was demonstrated for rigid (screw) and deformable (rod) models with sub-mm registration errors, and a single-pixel dilation of the projected components was found to compensate for partial-volume effects. Artifacts associated with spine screws and rods were reduced by 40%-80% in cadaver studies, and the resulting images demonstrated markedly improved visualization of instrumentation (e.g. screw threads) within cortical margins. The KC-MAR algorithm combines knowledge of surgical instrumentation with 3D image reconstruction in a manner that overcomes potential pitfalls of segmentation. 
The approach is compatible with FBP-thereby maintaining simplicity in a manner that is consistent with surgical workflow-or more sophisticated model-based reconstruction methods that could further improve image quality and/or help reduce radiation dose.
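The 2D inpainting step described above can be sketched as row-wise linear interpolation across the masked metal shadow. This is a simplified stand-in for the method's detector-signal inpainting (the mask here is given, whereas KC-MAR obtains it from 3D-2D registration of component models), and the variable names are hypothetical:

```python
import numpy as np

def inpaint_rows(projection, metal_mask):
    """Replace detector pixels under projected metal by linear interpolation
    along each detector row (a simple stand-in for the 2D inpainting step)."""
    out = projection.copy()
    cols = np.arange(projection.shape[1])
    for r in range(projection.shape[0]):
        bad = metal_mask[r]
        if bad.any() and not bad.all():
            out[r, bad] = np.interp(cols[bad], cols[~bad], projection[r, ~bad])
    return out

proj = np.tile(np.linspace(0.0, 1.0, 10), (4, 1))   # smooth background signal
mask = np.zeros((4, 10), dtype=bool)
mask[:, 4:6] = True                                  # projected metal shadow
proj_metal = proj.copy(); proj_metal[mask] += 5.0    # strong metal signal
clean = inpaint_rows(proj_metal, mask)
```

After inpainting, the corrected projections would feed directly into FBP; the paper's single-pixel dilation of the projected components would correspond to dilating `mask` by one pixel before inpainting.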
Affiliation(s)
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- X Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- T Yi
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- J W Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- P A Helm
- Medtronic, Littleton, MA 01460, United States of America
- G M Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
- N Theodore
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
39
Vijayan R, De Silva T, Han R, Zhang X, Uneri A, Doerr S, Ketcha M, Perdomo-Pantoja A, Theodore N, Siewerdsen JH. Automatic pedicle screw planning using atlas-based registration of anatomy and reference trajectories. Phys Med Biol 2019; 64:165020. [PMID: 31247607 PMCID: PMC8650759 DOI: 10.1088/1361-6560/ab2d66] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
An algorithm for automatic spinal pedicle screw planning is reported and evaluated in simulation and first clinical studies. A statistical atlas of the lumbar spine (N = 40 members) was constructed for active shape model (ASM) registration of target vertebrae to an unsegmented patient CT. The atlas was augmented to include 'reference' trajectories through the pedicles as defined by a spinal neurosurgeon. Following ASM registration, the trajectories are transformed to the patient CT and accumulated to define a patient-specific screw trajectory, diameter, and length. The algorithm was evaluated in leave-one-out analysis (N = 40 members) and for the first time in a clinical study (N = 5 patients undergoing cone-beam CT (CBCT) guided spine surgery), and in simulated low-dose CBCT images. ASM registration achieved (2.0 ± 0.5) mm root-mean-square-error (RMSE) in surface registration in 96% of cases, with outliers owing to limitations in CT image quality (high noise/slice thickness). Trajectory centerlines were conformant to the pedicle in 95% of cases. For all non-breaching trajectories, automatically defined screw diameter and length were similarly conformant to the pedicle and vertebral body (98.7%, Grade A/B). The algorithm performed similarly in CBCT clinical studies (93% centerline and screw conformance) and was consistent at the lowest dose levels tested. Average runtime in planning five-level (lumbar) bilateral screws (ten trajectories) was (312.1 ± 104.0) s. The runtime per level for ASM registration was (41.2 ± 39.9) s, and the runtime per trajectory was (4.1 ± 0.8) s, suggesting a runtime of ~(45.3 ± 39.9) s with a more fully parallelized implementation. The algorithm demonstrated accurate, automatic definition of pedicle screw trajectories, diameter, and length in CT images of the spine without segmentation. 
The studies support translation to clinical studies in free-hand or robot-assisted spine surgery, quality assurance, and data analytics in which fast trajectory definition is a benefit to workflow.
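The trajectory-accumulation idea, mapping each atlas member's reference trajectory through its registration transform and averaging the results into a patient-specific plan, might look like the following. This is a hedged sketch of the general approach; the 4x4 homogeneous transforms and simple endpoint averaging are assumptions, not the paper's exact formulation:

```python
import numpy as np

def plan_trajectory(transforms, entry_atlas, target_atlas):
    """Map each atlas member's reference trajectory into the patient frame and
    accumulate (average) the endpoints into one patient-specific trajectory."""
    entries, targets = [], []
    for T in transforms:                      # 4x4 homogeneous transforms
        e = T @ np.append(entry_atlas, 1.0)   # entry point in patient frame
        t = T @ np.append(target_atlas, 1.0)  # target point in patient frame
        entries.append(e[:3]); targets.append(t[:3])
    entry = np.mean(entries, axis=0)
    target = np.mean(targets, axis=0)
    direction = target - entry
    return entry, direction / np.linalg.norm(direction)

# Two hypothetical member transforms: identity and a small translation.
T1 = np.eye(4)
T2 = np.eye(4); T2[:3, 3] = [1.0, 0.0, 0.0]
entry, direction = plan_trajectory(
    [T1, T2], np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 10.0]))
```

Screw diameter and length would then be derived from the accumulated trajectory and the local pedicle geometry, which this sketch does not model.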
Affiliation(s)
- R Vijayan
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
40
Zhang X, Uneri A, Webster Stayman J, Zygourakis CC, Lo SFL, Theodore N, Siewerdsen JH. Known-component 3D image reconstruction for improved intraoperative imaging in spine surgery: A clinical pilot study. Med Phys 2019; 46:3483-3495. [PMID: 31180586 DOI: 10.1002/mp.13652] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2019] [Revised: 05/21/2019] [Accepted: 05/31/2019] [Indexed: 11/11/2022] Open
Abstract
PURPOSE Intraoperative imaging plays an increased role in support of surgical guidance and quality assurance for interventional approaches. However, image quality sufficient to detect complications and provide quantitative assessment of the surgical product is often confounded by image noise and artifacts. In this work, we translated a three-dimensional model-based image reconstruction (referred to as "Known-Component Reconstruction," KC-Recon) for the first time to clinical studies with the aim of resolving both limitations. METHODS KC-Recon builds upon a penalized weighted least-squares (PWLS) method by incorporating models of surgical instrumentation ("known components") within a joint image registration-reconstruction process to improve image quality. Under IRB approval, a clinical pilot study was conducted with 17 spine surgery patients imaged under informed consent using the O-arm cone-beam CT system (Medtronic, Littleton MA) before and after spinal instrumentation. Volumetric images were generated for each patient using KC-Recon in comparison to conventional filtered backprojection (FBP). Imaging performance prior to instrumentation ("preinstrumentation") was evaluated in terms of soft-tissue contrast-to-noise ratio (CNR) and spatial resolution. The quality of images obtained after the instrumentation ("postinstrumentation") was assessed by quantifying the magnitude of metal artifacts (blooming and streaks) arising from pedicle screws. The potential low-dose advantages of the algorithm were tested by simulating low-dose data (down to one-tenth of the dose of standard protocols) from images acquired at normal dose. RESULTS Preinstrumentation images (at normal clinical dose and matched resolution) exhibited an average 24.0% increase in soft-tissue CNR with KC-Recon compared to FBP (N = 16, P = 0.02), improving visualization of paraspinal muscles, major vessels, and other soft-tissues about the spine and abdomen. 
For a total of 72 screws in postinstrumentation images, KC-Recon yielded a significant reduction in metal artifacts: 66.3% reduction in overestimation of screw shaft width due to blooming (P < 0.0001) and reduction in streaks at the screw tip (65.8% increase in attenuation accuracy, P < 0.0001), enabling clearer depiction of the screw within the pedicle and vertebral body for an assessment of breach. Depending on the imaging task, dose reduction up to an order of magnitude appeared feasible while maintaining soft-tissue visibility and metal artifact reduction. CONCLUSIONS KC-Recon offers a promising means to improve visualization in the presence of surgical instrumentation and reduce patient dose in image-guided procedures. The improved soft-tissue visibility could facilitate the use of cone-beam CT to soft-tissue surgeries, and the ability to precisely quantify and visualize instrument placement could provide a valuable check against complications in the operating room (cf., postoperative CT).
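KC-Recon builds on penalized weighted least-squares (PWLS) reconstruction. The core objective, a statistically weighted data-fidelity term plus a roughness penalty, can be illustrated on a toy 1D denoising problem (forward model A = identity), which is far simpler than the paper's joint registration-reconstruction but shows the same trade-off:

```python
import numpy as np

def pwls_denoise(y, weights, beta, n_iter=300, step=0.1):
    """Toy PWLS: minimize 0.5*||x - y||^2_W + 0.5*beta*sum((x[i+1]-x[i])^2)
    by gradient descent (quadratic penalty; KC-Recon's is more elaborate)."""
    x = y.copy()
    for _ in range(n_iter):
        data_grad = weights * (x - y)             # gradient of the data term
        xp = np.pad(x, 1, mode="edge")            # Neumann boundary handling
        rough_grad = 2 * x - xp[:-2] - xp[2:]     # gradient of the penalty
        x -= step * (data_grad + beta * rough_grad)
    return x

rng = np.random.default_rng(2)
truth = np.concatenate([np.zeros(20), np.ones(20)])
y = truth + rng.normal(0.0, 0.2, 40)
weights = np.ones(40)   # in CT these would scale with inverse noise variance
xhat = pwls_denoise(y, weights, beta=2.0)
```

Raising `beta` trades noise for resolution; the statistical `weights` are what lets PWLS downweight noisy (e.g. heavily attenuated) measurements, a key advantage over FBP.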
Affiliation(s)
- Xiaoxuan Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- J Webster Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- Corinna C Zygourakis
- Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, MD, 21287, USA
- Sheng-Fu L Lo
- Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, MD, 21287, USA
- Nicholas Theodore
- Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, MD, 21287, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, MD, 21287, USA
41
Han R, Uneri A, De Silva T, Ketcha M, Goerres J, Vogt S, Kleinszig G, Osgood G, Siewerdsen JH. Atlas-based automatic planning and 3D–2D fluoroscopic guidance in pelvic trauma surgery. Phys Med Biol 2019; 64:095022. [DOI: 10.1088/1361-6560/ab1456] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
42
Uneri A, Zhang X, Stayman JW, Helm PA, Osgood GM, Theodore N, Siewerdsen JH. 3D-2D Image Registration in Virtual Long-Film Imaging: Application to Spinal Deformity Correction. Proc SPIE Int Soc Opt Eng 2019; 10951:109511H. [PMID: 34290470 PMCID: PMC8292105 DOI: 10.1117/12.2513679] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
PURPOSE Intraoperative 2D virtual long-film (VLF) imaging is investigated for 3D guidance and confirmation of the surgical product in spinal deformity correction. Multi-slot-scan geometry (rather than a single-slot "topogram") is exploited to produce parallax views of the scene for accurate 3D colocalization from a single radiograph. METHODS The multi-slot approach uses additional angled collimator apertures to form fan-beams with disparate views (parallax) of anatomy and instrumentation and to extend field-of-view beyond the linear motion limits. Combined with a knowledge of surgical implants (pedicle screws and/or spinal rods modeled as "known components"), 3D-2D image registration is used to solve for pose estimates via optimization of image gradient correlation. Experiments were conducted in cadaver studies emulating the system geometry of the O-arm (Medtronic, Minneapolis MN). RESULTS Experiments demonstrated feasibility of multi-slot VLF and quantified the geometric accuracy of 3D-2D registration using VLF acquisitions. Registration of pedicle screws from a single VLF yielded mean target registration error of (2.0±0.7) mm, comparable to the accuracy of surgical trackers and registration using multiple radiographs (e.g., AP and LAT). CONCLUSIONS 3D-2D registration in a single VLF image offers a promising new solution for image guidance in spinal deformity correction. The ability to accurately resolve pose from a single view absolves workflow challenges of multiple-view registration and suggests application beyond spine surgery, such as reduction of long-bone fractures.
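The gradient correlation optimized during 3D-2D registration can be sketched as the mean normalized cross-correlation of the images' x- and y-gradients. This is the standard form of the metric; the paper's exact implementation (e.g. multi-slot handling, DRR generation) is not modeled here:

```python
import numpy as np

def gradient_correlation(a, b):
    """Gradient correlation: mean NCC of the row- and column-gradients
    of two images (here, measured vs. simulated projections)."""
    def ncc(u, v):
        u = u - u.mean(); v = v - v.mean()
        return float((u * v).sum() / (np.linalg.norm(u) * np.linalg.norm(v)))
    ga_r, ga_c = np.gradient(a)
    gb_r, gb_c = np.gradient(b)
    return 0.5 * (ncc(ga_r, gb_r) + ncc(ga_c, gb_c))

rng = np.random.default_rng(3)
img = rng.normal(size=(32, 32))
unrelated = rng.normal(size=(32, 32))
```

Operating on gradients makes the metric insensitive to low-frequency intensity mismatch (e.g. soft-tissue background), which is why it is favored for registering bone and metal components in radiographs.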
Affiliation(s)
- A. Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- X. Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- J. W. Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- G. M. Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Medicine, Baltimore MD
- N. Theodore
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD
- J. H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD
43
De Silva T, Uneri A, Zhang X, Ketcha M, Han R, Sheth N, Martin A, Vogt S, Kleinszig G, Belzberg A, Sciubba DM, Siewerdsen JH. Real-time, image-based slice-to-volume registration for ultrasound-guided spinal intervention. Phys Med Biol 2018; 63:215016. [PMID: 30372418 DOI: 10.1088/1361-6560/aae761] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Real-time fusion of magnetic resonance (MR) and ultrasound (US) images could facilitate safe and accurate needle placement in spinal interventions. We develop an entirely image-based registration method (independent of or complementary to surgical trackers) that includes an efficient US probe pose initialization algorithm. The registration enables the simultaneous display of 2D ultrasound image slices relative to 3D pre-procedure MR images for navigation. A dictionary-based 3D-2D pose initialization algorithm was developed in which likely probe positions are predefined in a dictionary with feature encoding by Haar wavelet filters. Feature vectors representing the 2D US image are computed by scaling and translating multiple Haar basis filters to capture scale, location, and relative intensity patterns of distinct anatomical features. Following pose initialization, fast 3D-2D registration was performed by optimizing normalized cross-correlation between intra- and pre-procedure images using Powell's method. Experiments were performed using a lumbar puncture phantom and a fresh cadaver specimen presenting realistic image quality in spinal US imaging. Accuracy was quantified by comparing registration transforms to ground truth motion imparted by a computer-controlled motion system and calculating target registration error (TRE) in anatomical landmarks. Initialization using a 315-length feature vector yielded median translation accuracy of 2.7 mm (3.4 mm interquartile range, IQR) in the phantom and 2.1 mm (2.5 mm IQR) in the cadaver. By comparison, storing the entire image set in the dictionary and optimizing correlation yielded a comparable median accuracy of 2.1 mm (2.8 mm IQR) in the phantom and 2.9 mm (3.5 mm IQR) in the cadaver. However, the dictionary-based method reduced memory requirements by 47× compared to storing the entire image set. 
The overall 3D error after registration, measured using 3D landmarks, was 3.2 mm (1.8 mm IQR) in the phantom and 3.0 mm (2.3 mm IQR) in the cadaver. The system was implemented in a 3D Slicer interface to facilitate translation to clinical studies. Haar-feature-based initialization provided accuracy and robustness at a level sufficient for real-time registration using an entirely image-based method for ultrasound navigation. Such an approach could improve the accuracy and safety of spinal interventions in broad utilization, since it is entirely software-based and can operate free from the cost and workflow requirements of surgical trackers.
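The dictionary-based initialization encodes each candidate probe pose by Haar-like rectangle features computed from an integral image, then finds the nearest dictionary entry. A minimal sketch (a single two-rectangle feature and a tiny hypothetical dictionary; the paper uses 315-length feature vectors):

```python
import numpy as np

def integral_image(img):
    """Cumulative sums so any rectangle sum is O(1) to evaluate."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from an integral image (exclusive upper bounds)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

def haar_feature(img, r, c, h, w):
    """Two-rectangle Haar filter: left-half minus right-half intensity."""
    ii = integral_image(img)
    half = w // 2
    return rect_sum(ii, r, c, r + h, c + half) - rect_sum(ii, r, c + half, r + h, c + w)

def nearest_pose(feature, dictionary):
    """Return the dictionary key whose feature vector is closest in L2."""
    return min(dictionary, key=lambda k: np.linalg.norm(dictionary[k] - feature))

img = np.zeros((8, 8)); img[:, :4] = 1.0     # toy US slice: bright left half
feat = np.array([haar_feature(img, 0, 0, 8, 8)])
dictionary = {"pose_A": np.array([32.0]), "pose_B": np.array([-32.0])}
best = nearest_pose(feat, dictionary)
```

Scaling and translating many such filters yields the feature vector described in the abstract; the memory saving comes from storing only these compact vectors rather than full images per pose.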
Affiliation(s)
- T De Silva
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
44
Han R, De Silva T, Ketcha M, Uneri A, Siewerdsen JH. A momentum-based diffeomorphic demons framework for deformable MR-CT image registration. Phys Med Biol 2018; 63:215006. [PMID: 30353886 DOI: 10.1088/1361-6560/aae66c] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Neuro-navigated procedures require a high degree of geometric accuracy but are subject to geometric error from complex deformation in the deep brain, e.g. regions about the ventricles due to egress of cerebrospinal fluid (CSF) upon neuroendoscopic approach or placement of a ventricular shunt. We report a multi-modality, diffeomorphic, deformable registration method using momentum-based acceleration of the Demons algorithm to solve the transformation relating preoperative MRI and intraoperative CT as a basis for high-precision guidance. The registration method (pMI-Demons) extends the mono-modality, diffeomorphic form of the Demons algorithm to multi-modality registration using pointwise mutual information (pMI) as a similarity metric. The method incorporates a preprocessing step to nonlinearly stretch CT image values and a momentum-based approach to accelerate convergence. Registration performance was evaluated in phantom and patient images: first, the sensitivity of performance to algorithm parameter selection (including update and displacement field smoothing, histogram stretch, and the momentum term) was analyzed in a phantom study over a range of simulated deformations; and second, the algorithm was applied to registration of MR and CT images for four patients undergoing minimally invasive neurosurgery. Performance was compared to two previously reported methods (free-form deformation using mutual information (MI-FFD) and symmetric normalization using mutual information (MI-SyN)) in terms of target registration error (TRE), Jacobian determinant (J), and runtime. The phantom study identified optimal or nominal settings of algorithm parameters for translation to clinical studies. In the phantom study, the pMI-Demons method achieved comparable registration accuracy to the reference methods and strongly reduced outliers in TRE (p < 0.001 in Kolmogorov-Smirnov test).
Similarly, in the clinical study: median TRE = 1.54 mm (0.83-1.66 mm interquartile range, IQR) for pMI-Demons compared to 1.40 mm (1.02-1.67 mm IQR) for MI-FFD and 1.64 mm (0.90-1.92 mm IQR) for MI-SyN. The pMI-Demons and MI-SyN methods yielded diffeomorphic transformations (J > 0) that preserved topology, whereas MI-FFD yielded unrealistic (J < 0) deformations subject to tissue folding and tearing. Momentum-based acceleration gave a ~35% speedup of the pMI-Demons method, providing registration runtime of 10.5 min (reduced to 2.2 min on GPU), compared to 15.5 min for MI-FFD and 34.7 min for MI-SyN. The pMI-Demons method achieved registration accuracy comparable to MI-FFD and MI-SyN, maintained diffeomorphic transformation similar to MI-SyN, and accelerated runtime in a manner that facilitates translation to image-guided neurosurgery.
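The momentum-accelerated Demons update can be sketched in 1D. This uses mono-modality SSD-style demons forces as a stand-in for the paper's pointwise mutual information metric, with fluid-like smoothing of the velocity field; the parameters are illustrative:

```python
import numpy as np

def demons_momentum(fixed, moving, n_iter=200, mu=0.5, sigma=2.0):
    """1D fluid-like demons with a momentum term on the velocity field.
    SSD-style demons forces stand in for the paper's pointwise-MI metric."""
    x = np.arange(fixed.size, dtype=float)
    disp = np.zeros_like(fixed)
    velocity = np.zeros_like(fixed)
    kernel = np.exp(-0.5 * (np.arange(-8, 9) / sigma) ** 2)
    kernel /= kernel.sum()
    for _ in range(n_iter):
        warped = np.interp(x + disp, x, moving)             # resample moving
        diff = warped - fixed
        grad = np.gradient(warped)
        force = -diff * grad / (grad**2 + diff**2 + 1e-12)  # demons force
        velocity = mu * velocity + force                    # momentum update
        disp += np.convolve(velocity, kernel, mode="same")  # smoothed step
    return disp

x = np.arange(100, dtype=float)
fixed = np.exp(-0.5 * ((x - 55.0) / 6.0) ** 2)   # feature at 55
moving = np.exp(-0.5 * ((x - 50.0) / 6.0) ** 2)  # same feature at 50
disp = demons_momentum(fixed, moving)
residual_before = np.abs(moving - fixed).sum()
residual_after = np.abs(np.interp(x + disp, x, moving) - fixed).sum()
```

The momentum term `mu` accumulates consistent update directions across iterations, which is the source of the roughly 35% speedup reported; setting `mu = 0` recovers plain demons.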
Affiliation(s)
- R Han
- Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
45
Uneri A, Zhang X, Yi T, Stayman JW, Helm PA, Theodore N, Siewerdsen JH. Image quality and dose characteristics for an O-arm intraoperative imaging system with model-based image reconstruction. Med Phys 2018; 45:4857-4868. [PMID: 30180274 DOI: 10.1002/mp.13167] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2018] [Revised: 08/13/2018] [Accepted: 08/16/2018] [Indexed: 12/14/2022] Open
Abstract
PURPOSE To assess the imaging performance and radiation dose characteristics of the O-arm CBCT imaging system (Medtronic Inc., Littleton MA) and demonstrate the potential for improved image quality and reduced dose via model-based image reconstruction (MBIR). METHODS Two main studies were performed to investigate previously unreported characteristics of the O-arm system. First is an investigation of dose and 3D image quality achieved with filtered back-projection (FBP) - including enhancements in geometric calibration, handling of lateral truncation and detector saturation, and incorporation of an isotropic apodization filter. Second is implementation of an MBIR algorithm based on Huber-penalized likelihood estimation (PLH) and investigation of image quality improvement at reduced dose. Each study involved measurements in quantitative phantoms as a basis for analysis of contrast-to-noise ratio and spatial resolution as well as imaging of a human cadaver to test the findings under realistic imaging conditions. RESULTS View-dependent calibration of system geometry improved the accuracy of reconstruction as quantified by the full-width at half maximum of the point-spread function - from 0.80 to 0.65 mm - and yielded subtle but perceptible improvement in high-contrast detail of bone (e.g., temporal bone). Standard technique protocols for the head and body imparted absorbed dose of 16 and 18 mGy, respectively. For low-to-medium contrast (<100 HU) imaging at fixed spatial resolution (1.3 mm edge-spread function) and fixed dose (6.7 mGy), PLH improved CNR over FBP by +48% in the head and +35% in the body. Evaluation at different dose levels demonstrated 30% increase in CNR at 62% of the dose in the head and 90% increase in CNR at 50% dose in the body. 
CONCLUSIONS A variety of improvements in FBP implementation (geometric calibration, truncation and saturation effects, and isotropic apodization) offer the potential for improved image quality and reduced radiation dose on the O-arm system. Further gains are possible with MBIR, including improved soft-tissue visualization, low-dose imaging protocols, and extension to methods that naturally incorporate prior information of patient anatomy and/or surgical instrumentation.
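The Huber penalty underlying PLH is quadratic for small neighbor differences (smoothing noise) and linear for large ones (preserving edges). Its standard form is sketched below; the threshold value is illustrative, not the study's parameter:

```python
import numpy as np

def huber(t, delta):
    """Huber penalty: quadratic for |t| <= delta, linear beyond (edge-preserving)."""
    t = np.asarray(t, dtype=float)
    quadratic = 0.5 * t**2
    linear = delta * (np.abs(t) - 0.5 * delta)   # matched value/slope at |t| = delta
    return np.where(np.abs(t) <= delta, quadratic, linear)

# Penalties for a few neighbor-voxel differences, with an illustrative threshold.
diffs = np.array([0.0, 0.05, 0.1, 1.0])
penalties = huber(diffs, delta=0.1)
```

Because the penalty grows only linearly across large differences, strong edges (e.g. bone/soft-tissue boundaries) are penalized far less than a quadratic penalty would, which is what lets PLH improve CNR without sacrificing spatial resolution.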
Affiliation(s)
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- X Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- T Yi
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- J W Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- P A Helm
- Medtronic Inc., Littleton, MA, 01460, USA
- N Theodore
- Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, MD, 21287, USA
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, MD, 21287, USA
46
Manbachi A, De Silva T, Uneri A, Jacobson M, Goerres J, Ketcha M, Han R, Aygun N, Thompson D, Ye X, Vogt S, Kleinszig G, Molina C, Iyer R, Garzon-Muvdi T, Raber MR, Groves M, Wolinsky JP, Siewerdsen JH. Clinical Translation of the LevelCheck Decision Support Algorithm for Target Localization in Spine Surgery. Ann Biomed Eng 2018; 46:1548-1557. [PMID: 30051244 DOI: 10.1007/s10439-018-2099-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2018] [Accepted: 07/17/2018] [Indexed: 10/28/2022]
Abstract
Recent work has yielded a method for automatic labeling of vertebrae in intraoperative radiographs as an assistant to manual level counting. The method, called LevelCheck, previously demonstrated promise in phantom studies and retrospective studies. This study aims to: (#1) Analyze the effect of LevelCheck on accuracy and confidence of localization in two modes: (a) Independent Check (labels displayed after the surgeon's decision) and (b) Active Assistant (labels presented before the surgeon's decision). (#2) Assess the feasibility and utility of LevelCheck in the operating room. Two studies were conducted: a laboratory study investigating these two workflow implementations in a simulated operating environment with 5 surgeons, reviewing 62 cases selected from a dataset of radiographs exhibiting a challenge to vertebral localization; and a clinical study involving 20 patients undergoing spine surgery. In Study #1, the median localization error without assistance was 30.4% (IQR = 5.2%) due to the challenging nature of the cases. LevelCheck reduced the median error to 2.4% for both the Independent Check and Active Assistant modes (p < 0.01). Surgeons found LevelCheck to increase confidence in 91% of cases. Study #2 demonstrated accuracy in all cases. The algorithm runtime varied from 17 to 72 s in its current implementation. The algorithm was shown to be feasible, accurate, and to improve confidence during surgery.
Affiliation(s)
- Amir Manbachi
- Department of Biomedical Engineering, Johns Hopkins University, 3400 N. Charles Street, Wyman Park Building, Suite 400 West, Baltimore, MD, 21218, USA
- Department of Neurosurgery, Johns Hopkins University School of Medicine, The Johns Hopkins Hospital, 1800 Orleans Street, Baltimore, MD, 21287, USA
- Tharindu De Silva
- Department of Biomedical Engineering, Johns Hopkins University, 3400 N. Charles Street, Wyman Park Building, Suite 400 West, Baltimore, MD, 21218, USA
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, 3400 N. Charles Street, Wyman Park Building, Suite 400 West, Baltimore, MD, 21218, USA
- Matthew Jacobson
- Department of Biomedical Engineering, Johns Hopkins University, 3400 N. Charles Street, Wyman Park Building, Suite 400 West, Baltimore, MD, 21218, USA
- Joseph Goerres
- Department of Biomedical Engineering, Johns Hopkins University, 3400 N. Charles Street, Wyman Park Building, Suite 400 West, Baltimore, MD, 21218, USA
- Michael Ketcha
- Department of Biomedical Engineering, Johns Hopkins University, 3400 N. Charles Street, Wyman Park Building, Suite 400 West, Baltimore, MD, 21218, USA
- Runze Han
- Department of Biomedical Engineering, Johns Hopkins University, 3400 N. Charles Street, Wyman Park Building, Suite 400 West, Baltimore, MD, 21218, USA
- Nafi Aygun
- Russell H. Morgan Department of Radiology, Johns Hopkins University, The Johns Hopkins Hospital, 1800 Orleans Street, Baltimore, MD, 21287, USA
- David Thompson
- Armstrong Institute for Patient Safety and Quality, Johns Hopkins University School of Medicine, 750 E Pratt St, 15th Floor, Baltimore, MD, 21202, USA
- Xiaobu Ye
- Department of Neurosurgery, Johns Hopkins University School of Medicine, The Johns Hopkins Hospital, 1800 Orleans Street, Baltimore, MD, 21287, USA
- Sebastian Vogt
- Siemens Healthineers, Henkestraße 127, 91052, Erlangen, Germany
- Camilo Molina
- Department of Neurosurgery, Johns Hopkins University School of Medicine, The Johns Hopkins Hospital, 1800 Orleans Street, Baltimore, MD, 21287, USA
- Rajiv Iyer
- Department of Neurosurgery, Johns Hopkins University School of Medicine, The Johns Hopkins Hospital, 1800 Orleans Street, Baltimore, MD, 21287, USA
- Tomas Garzon-Muvdi
- Department of Neurosurgery, Johns Hopkins University School of Medicine, The Johns Hopkins Hospital, 1800 Orleans Street, Baltimore, MD, 21287, USA
- Michael R Raber
- Department of Neurosurgery, Johns Hopkins University School of Medicine, The Johns Hopkins Hospital, 1800 Orleans Street, Baltimore, MD, 21287, USA
- Mari Groves
- Department of Neurosurgery, Johns Hopkins University School of Medicine, The Johns Hopkins Hospital, 1800 Orleans Street, Baltimore, MD, 21287, USA
- Jean-Paul Wolinsky
- Department of Neurosurgery, Johns Hopkins University School of Medicine, The Johns Hopkins Hospital, 1800 Orleans Street, Baltimore, MD, 21287, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, 3400 N. Charles Street, Wyman Park Building, Suite 400 West, Baltimore, MD, 21218, USA
- Department of Neurosurgery, Johns Hopkins University School of Medicine, The Johns Hopkins Hospital, 1800 Orleans Street, Baltimore, MD, 21287, USA
- Russell H. Morgan Department of Radiology, Johns Hopkins University, The Johns Hopkins Hospital, 1800 Orleans Street, Baltimore, MD, 21287, USA
- Armstrong Institute for Patient Safety and Quality, Johns Hopkins University School of Medicine, 750 E Pratt St, 15th Floor, Baltimore, MD, 21202, USA
- Department of Biomedical Engineering, Johns Hopkins University, Traylor Building, Rm 622, 720 Rutland Avenue, Baltimore, MD, 21205, USA
|
47
|
Brown A, Uneri A, Silva TD, Manbachi A, Siewerdsen JH. Design and validation of an open-source library of dynamic reference frames for research and education in optical tracking. J Med Imaging (Bellingham) 2018; 5:021215. [PMID: 29487887] [DOI: 10.1117/1.jmi.5.2.021215]
Abstract
Dynamic reference frames (DRFs) are a common component of modern surgical tracking systems; however, the limited number of commercially available DRFs poses a constraint in developing systems, especially for research and education. This work presents the design and validation of a large, open-source library of DRFs compatible with passive, single-face tracking systems, such as Polaris stereoscopic infrared trackers (NDI, Waterloo, Ontario). An algorithm was developed to create new DRF designs consistent with intra- and intertool design constraints and convert them to computer-aided design (CAD) files suitable for three-dimensional printing. A library of 10 such groups, each with 6 to 10 DRFs, was produced, and tracking performance was validated in comparison to a standard commercially available reference, including pivot calibration, fiducial registration error (FRE), and target registration error (TRE). Pivot tests showed calibration error [Formula: see text], indistinguishable from the reference. FRE was [Formula: see text], and TRE in a CT head phantom was [Formula: see text], both equivalent to the reference. The library of DRFs offers a useful resource for surgical navigation research and could be extended to other tracking systems and alternative design constraints.
Affiliation(s)
- Alisa Brown
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Tharindu De Silva
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Amir Manbachi
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Jeffrey H Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States

48
Yi T, Ramchandran V, Siewerdsen JH, Uneri A. Robotic drill guide positioning using known-component 3D-2D image registration. J Med Imaging (Bellingham) 2018; 5:021212. [PMID: 29430481] [DOI: 10.1117/1.jmi.5.2.021212]
Abstract
A method for x-ray image-guided robotic instrument positioning is reported and evaluated in preclinical studies of spinal pedicle screw placement with the aim of improving delivery of transpedicle K-wires and screws. The known-component (KC) registration algorithm was used to register the three-dimensional patient CT and drill guide surface model to intraoperative two-dimensional radiographs. The resulting transformations, combined with offline hand-eye calibration, drive the robotically held drill guide to target trajectories defined in the preoperative CT. The method was assessed in comparison with a more conventional tracker-based approach, and robustness to clinically realistic errors was tested in phantom and cadaver. Deviations from planned trajectories were analyzed in terms of target registration error (TRE) at the tooltip (mm) and approach angle (deg). In phantom studies, the KC approach resulted in [Formula: see text] and [Formula: see text], comparable with the accuracy of the tracker-based approach. In cadaver studies with realistic anatomical deformation, the KC approach yielded [Formula: see text] and [Formula: see text], with statistically significant improvement versus the tracker ([Formula: see text] and [Formula: see text]). Robustness to deformation is attributed to the relatively local rigidity of anatomy in radiographic views. X-ray guidance offered accurate robotic positioning and could fit naturally within the clinical workflow of fluoroscopically guided procedures.
Affiliation(s)
- Thomas Yi
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Vignesh Ramchandran
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Jeffrey H Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States

49
Uneri A, Zhang X, Stayman JW, Helm P, Osgood GM, Theodore N, Siewerdsen JH. Advanced Image Registration and Reconstruction using the O-Arm System: Dose Reduction, Image Quality, and Guidance using Known-Component Models. Proc SPIE Int Soc Opt Eng 2018; 10576. [PMID: 34290469] [DOI: 10.1117/12.2293874]
Abstract
Purpose Model-based image registration and reconstruction offer strong potential for improved safety and precision in image-guided interventions. Advantages include reduced radiation dose, improved soft-tissue visibility (detection of complications), and accurate guidance with/without a dedicated navigation system. This work reports the development and performance of such methods on an O-arm system for intraoperative imaging and translates them to first clinical studies. Methods Two novel methodologies underpin the work: (1) Known-Component Registration (KC-Reg) for 3D localization of the patient and interventional devices from 2D radiographs; and (2) Penalized-Likelihood reconstruction (PLH) for improved 3D image quality and dose reduction. A thorough assessment of geometric stability, dosimetry, and image quality was performed to define algorithm parameters for imaging and guidance protocols. Laboratory studies included evaluation of KC-Reg for localization of spine screws delivered in cadaver, and assessment of PLH performance in contrast, noise, and resolution in phantoms and cadaver compared to filtered backprojection (FBP). Results KC-Reg successfully registered screw implants within ~1 mm based on as few as 3 radiographs. PLH improved soft-tissue visibility (61% improvement in CNR) compared to FBP at matched resolution. Cadaver studies verified the selection of algorithm parameters, and the methods were successfully translated to clinical studies under an IRB protocol. Conclusions Model-based registration and reconstruction approaches were shown to reduce dose and provide improved visualization of anatomy and surgical instrumentation. Immediate future work will focus on further integration of KC-Reg and PLH for Known-Component Reconstruction (KC-Recon) to provide high-quality intraoperative imaging in the presence of dense instrumentation.
Affiliation(s)
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- X Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- J W Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- P Helm
- Medtronic Inc., Littleton, MA
- G M Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Medical Institute, Baltimore, MD
- N Theodore
- Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, MD
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD; Medtronic Inc., Littleton, MA

50
Yi T, Ramchandran V, Siewerdsen JH, Uneri A. Technical Note: Known-Component Registration for Robotic Drill Guide Positioning. Proc SPIE Int Soc Opt Eng 2018; 10576:105760L. [PMID: 36092693] [PMCID: PMC9461572] [DOI: 10.1117/12.2322408]
Abstract
A method for x-ray-guided robotic positioning of surgical instruments is reported and evaluated in preclinical studies of spine pedicle screw placement with the aim of improving delivery of transpedicle drills and screws. The known-component registration (KC-Reg) algorithm was used to register the 3D patient CT and the surface model of a drill guide to intraoperatively acquired 2D radiographs. The resulting transformations, combined with offline hand-eye calibration, drive a robotically held drill guide to target trajectories established in the preoperative patient CT. The proposed method was assessed against more conventional surgical tracker guidance, and robustness to clinically realistic errors was tested in phantom and cadaver studies. Target registration error (TRE) was computed as drill guide deviation from the planned trajectory. In phantom studies, the KC-Reg approach resulted in errors of 1.51 ± 0.51 mm at the tooltip and 1.01 ± 0.92° in approach angle, comparable to the tracker-guided approach. In cadaver studies with anatomical deformation, a tooltip TRE of 2.31 ± 1.05 mm and an approach-angle error of 0.66 ± 0.62° were observed, with statistically improved performance over the surgical tracker through registration of locally rigid bony anatomy. X-ray guidance offers an accurate means of driving robotic systems that is compatible with conventional fluoroscopic workflow: such procedures involve multi-planar fluoroscopic views that are qualitatively interpreted by the surgeon, and the KC-Reg approach uses the same multi-planar views to provide greater quantitative accuracy along with valuable guidance and quality assurance (QA). The method was robust against anatomical deformation owing to the locally rigid nature of the radiographic scene used in registration, a potentially major surgical benefit.
Affiliation(s)
- T. Yi
- Department of Biomedical Engineering, Johns Hopkins Univ., Baltimore, MD
- V. Ramchandran
- Department of Biomedical Engineering, Johns Hopkins Univ., Baltimore, MD
- J. H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins Univ., Baltimore, MD
- A. Uneri
- Department of Biomedical Engineering, Johns Hopkins Univ., Baltimore, MD