1. Butz I, Fernandez M, Uneri A, Theodore N, Anderson WS, Siewerdsen JH. Performance assessment of surgical tracking systems based on statistical process control and longitudinal QA. Comput Assist Surg (Abingdon) 2023;28:2275522. [PMID: 37942523] [DOI: 10.1080/24699322.2023.2275522]
Abstract
A system for performance assessment and quality assurance (QA) of surgical trackers is reported, based on principles of geometric accuracy and statistical process control (SPC) for routine longitudinal testing. A simple QA test phantom was designed, in which the number and distribution of registration fiducials were determined from analytical models for target registration error (TRE). A tracker testbed was configured with open-source software for measurement of a TRE-based accuracy metric (ε) and jitter (J). Six trackers were tested: 2 electromagnetic (EM; Aurora) and 4 infrared (IR; 1 Spectra, 1 Vega, and 2 Vicra), all from NDI (Waterloo, ON). Phase I SPC analysis of the Shewhart mean (x̄) and standard deviation (s) determined system control limits. Phase II involved weekly QA of each system for up to 32 weeks and applied Pass, Note, Alert, and Failure action rules. The process permitted QA in <1 min. Phase I control limits were established for all trackers: EM trackers exhibited higher upper control limits than IR trackers in ε (EM: x̄_ε ~ 2.8-3.3 mm; IR: x̄_ε ~ 1.6-2.0 mm) and jitter (EM: x̄_J ~ 0.30-0.33 mm; IR: x̄_J ~ 0.08-0.10 mm), and older trackers showed evidence of degradation, e.g. higher jitter for the older Vicra (p < .05). Phase II longitudinal tests yielded 676 outcomes with a total of 4 Failures: 3 resolved by intervention (metal interference for EM trackers) and 1 owing to restrictive control limits for a new system (Vega). Weekly tests also yielded 40 Notes and 16 Alerts, each spontaneously resolved in subsequent monitoring.
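The Shewhart x̄/s scheme described above lends itself to a compact implementation. A minimal sketch in Python (function names and the simplified 1σ/2σ/3σ zone rules are illustrative; the paper's exact action rules and limit estimators may differ):

```python
import numpy as np

def phase1_limits(samples, n_sigma=3.0):
    """Phase I: estimate Shewhart x-bar control limits from in-control QA runs.

    samples : (n_runs, n_repeats) array of the accuracy metric
    (e.g., TRE-based epsilon or jitter, in mm).
    """
    xbar = samples.mean(axis=1)              # per-run means
    center = float(xbar.mean())              # center line (grand mean)
    sigma = float(xbar.std(ddof=1))          # between-run standard deviation
    return center, center - n_sigma * sigma, center + n_sigma * sigma

def classify(value, center, lcl, ucl):
    """Map a weekly (Phase II) QA measurement to a simplified action rule."""
    if value > ucl or value < lcl:
        return "Failure"                     # beyond the 3-sigma limits
    sigma = (ucl - center) / 3.0             # 1-sigma zone width
    if abs(value - center) > 2.0 * sigma:
        return "Alert"                       # 2-3 sigma warning zone
    if abs(value - center) > 1.0 * sigma:
        return "Note"                        # 1-2 sigma zone
    return "Pass"
```

Phase I data from a new tracker would establish (center, lcl, ucl) once; each weekly measurement is then classified against those fixed limits.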
Affiliation(s)
- I Butz: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- M Fernandez: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- A Uneri: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- N Theodore: Department of Biomedical Engineering; Department of Neurology and Neurosurgery, Johns Hopkins University, Baltimore, MD, USA
- W S Anderson: Department of Biomedical Engineering; Department of Neurology and Neurosurgery, Johns Hopkins University, Baltimore, MD, USA
- J H Siewerdsen: Department of Biomedical Engineering; Department of Neurology and Neurosurgery, Johns Hopkins University, Baltimore, MD, USA; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
2. Mekki L, Sheth NM, Vijayan RC, Rohleder M, Sisniega A, Kleinszig G, Vogt S, Kunze H, Osgood GM, Siewerdsen JH, Uneri A. Surgical navigation for guidewire placement from intraoperative fluoroscopy in orthopaedic surgery. Phys Med Biol 2023;68:215001. [PMID: 37774711] [DOI: 10.1088/1361-6560/acfec4]
Abstract
Objective. Surgical guidewires are commonly used in placing fixation implants to stabilize fractures. Accurate positioning of these instruments is challenged by difficulties in 3D reckoning from 2D fluoroscopy. This work aims to enhance accuracy and reduce exposure times by providing 3D navigation for guidewire placement from as few as two fluoroscopic images. Approach. Our approach combines machine learning-based segmentation with the geometric model of the imager to determine the 3D poses of guidewires. Instrument tips are encoded as individual keypoints, and the segmentation masks are processed to estimate the trajectory. Correspondence between detections in multiple views is established using the pre-calibrated system geometry, and the corresponding features are backprojected to obtain the 3D pose. Guidewire 3D directions were computed using both an analytical and an optimization-based method. The complete approach was evaluated in cadaveric specimens with respect to potential confounding effects from the imaging geometry and radiographic scene clutter due to other instruments. Main results. The detection network identified guidewire tips within 2.2 mm and guidewire directions within 1.1° in 2D detector coordinates. Feature correspondence rejected false detections, particularly in images with other instruments, to achieve 83% precision and 90% recall. Estimating the 3D direction via numerical optimization showed added robustness for guidewires aligned with the gantry rotation plane. Guidewire tips and directions were localized in 3D world coordinates with median accuracies of 1.8 mm and 2.7°, respectively. Significance. The paper reports a new method for automatic 2D detection and 3D localization of guidewires from pairs of fluoroscopic images. Localized guidewires can be virtually overlaid on the patient's preoperative 3D scan during the intervention. Accurate pose determination for multiple guidewires from two images offers the potential to reduce radiation dose by minimizing the need for repeated imaging and provides quantitative feedback prior to implant placement.
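The backprojection of corresponding keypoints described above amounts to classical two-view triangulation. A minimal sketch assuming known 3×4 projection matrices for the two views (standard direct linear transform; a simplification of, not the authors' exact, implementation):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2   : 3x4 projection matrices of the two fluoroscopic views
               (from the pre-calibrated system geometry).
    uv1, uv2 : corresponding 2D detections, e.g. a guidewire-tip keypoint.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two homogeneous equations: u*(P[2].X) - P[0].X = 0, etc.
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize
```

With noiseless, geometrically consistent detections the recovered point is exact; in practice the false-correspondence rejection described in the abstract would gate which detection pairs are triangulated.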
Affiliation(s)
- L Mekki: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- N M Sheth: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- R C Vijayan: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- M Rohleder: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- A Sisniega: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- S Vogt: Siemens Healthineers, Erlangen, Germany
- H Kunze: Siemens Healthineers, Erlangen, Germany
- G M Osgood: Department of Orthopaedic Surgery, Johns Hopkins Medicine, Baltimore, MD, USA
- J H Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- A Uneri: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
3. Huang H, Siewerdsen JH, Lu A, Hu Y, Zbijewski W, Unberath M, Weiss CR, Sisniega A. Multi-Stage Adaptive Spline Autofocus (MASA) with a Learned Metric for Deformable Motion Compensation in Interventional Cone-Beam CT. Proc SPIE Int Soc Opt Eng 2023;12463:1246314. [PMID: 37937146] [PMCID: PMC10629227] [DOI: 10.1117/12.2654361]
Abstract
Purpose. Cone-beam CT (CBCT) is widespread in abdominal interventional imaging, but its long acquisition time makes it susceptible to patient motion. Image-based autofocus has shown success in CBCT deformable motion compensation via deep autofocus metrics and multi-region optimization, but it is challenged by the large parameter dimensionality required to capture intricate motion trajectories. This work leverages the differentiable nature of deep autofocus metrics to build a novel optimization strategy, Multi-Stage Adaptive Spline Autofocus (MASA), for compensation of complex deformable motion in abdominal CBCT. Methods. MASA poses the autofocus problem as a multi-stage adaptive sampling of the motion trajectory, represented in a Hermite spline basis with variable amplitude and knot temporal positioning. The adaptive method permits simultaneous optimization of the sampling phase, local temporal sampling density, and time-dependent amplitude of the motion trajectory. The optimization is performed in a multi-stage schedule with an increasing number of knots that progressively accommodates complex trajectories in late stages, preconditioned by coarser components from early stages, with minimal increase in dimensionality. MASA was evaluated in controlled simulation experiments with two types of motion trajectories: (i) combinations of slow drifts with sudden jerk (sigmoid) motion; and (ii) combinations of periodic motion sources of varying frequency into multi-frequency trajectories. Further validation was obtained in clinical data from liver CBCT featuring motion of contrast-enhanced vessels and soft-tissue structures. Results. The adaptive sampling strategy provided successful motion compensation for sigmoid trajectories compared to fixed sampling strategies (mean SSIM increase of 0.026 vs. 0.011). Inspection of the estimated motion showed the capability of MASA to automatically allocate greater sampling density to parts of the scan timeline featuring sudden motion, effectively accommodating complex motion without increasing the problem dimension. Experiments on multi-frequency trajectories with 3-stage MASA (5, 10, and 15 knots) yielded a twofold SSIM increase compared to single-stage autofocus with 15 knots (0.076 vs. 0.040). Application of MASA to clinical datasets resulted in simultaneous improvement in the delineation of both contrast-enhanced vessels and soft-tissue structures in the liver. Conclusion. A new autofocus framework, MASA, was developed, including a novel multi-stage technique for adaptive temporal sampling of the motion trajectory in combination with fully differentiable deep autofocus metrics. This adaptive sampling approach is a crucial step toward application of deformable motion compensation to complex temporal motion trajectories.
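The Hermite-spline trajectory parameterization at the core of MASA can be sketched as follows (an illustrative 1D version with free knot times, amplitudes, and tangents; the multi-stage optimizer itself and the deep autofocus metric are omitted):

```python
import numpy as np

def hermite_trajectory(knot_t, knot_a, knot_m, t):
    """Evaluate a 1D motion trajectory parameterized by cubic Hermite knots.

    knot_t : knot times, sorted (free parameters in adaptive sampling)
    knot_a : knot amplitudes (e.g., mm of displacement)
    knot_m : knot tangents (slopes)
    t      : query times within [knot_t[0], knot_t[-1]]
    """
    i = np.clip(np.searchsorted(knot_t, t, side="right") - 1, 0, len(knot_t) - 2)
    dt = knot_t[i + 1] - knot_t[i]
    s = (t - knot_t[i]) / dt                 # local coordinate in [0, 1]
    # Cubic Hermite basis functions
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return (h00 * knot_a[i] + h10 * dt * knot_m[i]
            + h01 * knot_a[i + 1] + h11 * dt * knot_m[i + 1])
```

In a multi-stage schedule, a coarse solution (few knots) would seed a denser knot set in the next stage, so late stages refine the trajectory rather than re-solve it; letting `knot_t` move allows the optimizer to concentrate knots where sudden motion occurs.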
Affiliation(s)
- H Huang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- J H Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, USA; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- A Lu: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Y Hu: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- W Zbijewski: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- M Unberath: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- C R Weiss: Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
- A Sisniega: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
4. Vijayan RC, Venkataraman K, Wei J, Sheth NM, Shafiq B, Siewerdsen JH, Zbijewski W, Li G, Cleary K, Uneri A. Multi-Body 3D-2D Registration for Robot-Assisted Joint Reduction: Preclinical Evaluation in the Ankle Syndesmosis. Proc SPIE Int Soc Opt Eng 2023;12466:124661F. [PMID: 37143861] [PMCID: PMC10155864] [DOI: 10.1117/12.2654481]
Abstract
Purpose. Existing methods to improve the accuracy of tibiofibular joint reduction present workflow challenges, high radiation exposure, and a lack of accuracy and precision, leading to poor surgical outcomes. To address these limitations, we propose a method to perform robot-assisted joint reduction using intraoperative imaging to align the dislocated fibula to a target pose relative to the tibia. Methods. The approach (1) localizes the robot via 3D-2D registration of a custom plate adapter attached to its end effector, (2) localizes the tibia and fibula using multi-body 3D-2D registration, and (3) drives the robot to reduce the dislocated fibula according to the target plan. The custom robot adapter was designed to interface directly with the fibular plate while presenting radiographic features to aid registration. Registration accuracy was evaluated on a cadaveric ankle specimen, and the feasibility of robotic guidance was assessed by manipulating a dislocated fibula in a cadaveric ankle. Results. Using standard AP and mortise radiographic views, registration errors were measured to be less than 1 mm and 1° for the robot adapter and the ankle bones. Experiments in a cadaveric specimen revealed deviations of up to 4 mm from the intended path, which were reduced to <2 mm using corrective actions guided by intraoperative imaging and 3D-2D registration. Conclusions. Preclinical studies suggest that significant robot flex and tibial motion occur during fibula manipulation, motivating the use of the proposed method to dynamically correct the robot trajectory. Accurate robot registration was achieved via fiducials embedded in the custom adapter design. Future work will evaluate the approach on a custom radiolucent robot currently under construction and verify the solution in additional cadaveric specimens.
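The corrective action in step (3) reduces to composing rigid transforms: given the fibula pose estimated by 3D-2D registration and the planned target pose, the commanded correction is their difference. A minimal sketch (hypothetical helper functions, not the authors' controller):

```python
import numpy as np

def corrective_motion(T_current, T_target):
    """Return T_corr such that T_corr @ T_current == T_target.

    T_current : 4x4 pose of the fibula estimated by 3D-2D registration.
    T_target  : 4x4 planned reduction pose.
    Both are homogeneous transforms in a tibia-fixed reference frame, so
    tibial motion between images is absorbed by re-registering.
    """
    return T_target @ np.linalg.inv(T_current)

def translation_error_mm(T_current, T_target):
    """Residual translational deviation from the plan (e.g., the up-to-4 mm
    deviations observed before correction)."""
    return float(np.linalg.norm(T_target[:3, 3] - T_current[:3, 3]))
```

Repeating imaging, registration, and correction closes the loop that compensates for robot flex during manipulation.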
Affiliation(s)
- R. C. Vijayan: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- K. Venkataraman: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- J. Wei: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- N. M. Sheth: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- B. Shafiq: Department of Orthopedic Surgery, Johns Hopkins Medicine, Baltimore, MD, USA
- J. H. Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- W. Zbijewski: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- G. Li: Children’s National Hospital, Washington, DC, USA
- K. Cleary: Children’s National Hospital, Washington, DC, USA
- A. Uneri: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA (phone: +1-276-614-7743; website: carnegie.jhu.edu)
5. Shi G, Quevedo Gonzalez FJ, Breighner RE, Carrino JA, Siewerdsen JH, Zbijewski W. Effects of non-stationary blur on texture biomarkers of bone using Ultra-High Resolution CT. Proc SPIE Int Soc Opt Eng 2023;12468:1246813. [PMID: 38226358] [PMCID: PMC10788132] [DOI: 10.1117/12.2654304]
Abstract
Purpose. To advance the development of radiomic models of bone quality using the recently introduced Ultra-High Resolution CT (UHR CT), we investigate the inter-scan reproducibility of trabecular bone texture features under the spatially varying azimuthal and radial blurs associated with focal spot elongation and gantry rotation. Methods. The UHR CT system features 250 × 250 μm detector pixels and an x-ray source with a 0.4 × 0.5 mm focal spot. Visualization of details down to ~150 μm has been reported for this device. A cadaveric femur was imaged on UHR CT at three radial locations within the field of view: 0 cm (isocenter), 9 cm, and 18 cm from the isocenter; the non-stationary blurs are expected to worsen with increasing radial displacement. Gray level co-occurrence matrix (GLCM) and gray level run length matrix (GLRLM) texture features were extracted from 237 trabecular regions of interest (ROIs, 5 cm diameter) placed at corresponding locations in the femoral head in scans obtained at the different shifts. We evaluated the concordance correlation coefficient (CCC) between texture features at 0 cm (reference) and at 9 cm and 18 cm. We also investigated whether the spatially varying blurs affect K-means clustering of trabecular bone ROIs based on their texture features. Results. The average CCCs (against the 0 cm reference) for GLCM and GLRLM features were ~0.7 at 9 cm. At 18 cm, the average CCCs were reduced to ~0.17 for GLCM and ~0.26 for GLRLM. The non-stationary blurs are thus incorporated into radiomic features of cancellous bone, leading to inconsistencies in clustering of trabecular ROIs between different radial locations: the intersection-over-union overlap of corresponding (most similar) clusters between the 0 cm and 9 cm shifts was >70%, but dropped to <60% for the majority of corresponding clusters between the 0 cm and 18 cm shifts. Conclusion. Non-stationary CT system blurs reduce the inter-scan reproducibility of trabecular bone texture features in UHR CT, especially at locations >15 cm from the isocenter. Radiomic models of bone quality derived from UHR CT measurements at isocenter might need to be revised before application in peripheral body sites such as the hips.
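Inter-scan reproducibility here is quantified with Lin's concordance correlation coefficient, which penalizes both poor correlation and systematic shifts between the two measurements (unlike Pearson correlation, which ignores shifts). A minimal sketch:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired feature values.

    x, y : one texture feature measured over the same ROIs at two radial
    positions (e.g., 0 cm reference vs. 9 cm shift).
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    # Perfect agreement (y == x) gives 1; any bias or scale mismatch lowers it.
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```

Averaging `ccc` over all features of a family (GLCM or GLRLM) gives the per-shift summary values reported above.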
Affiliation(s)
- G Shi: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- F J Quevedo Gonzalez: Department of Biomechanics, Hospital for Special Surgery, New York, NY 10021, USA
- R E Breighner: Department of Biomechanics, Hospital for Special Surgery, New York, NY 10021, USA
- J A Carrino: Radiology & Imaging, Hospital for Special Surgery, New York, NY 10021, USA
- W Zbijewski: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
6. Vijayan R, Sheth N, Mekki L, Lu A, Uneri A, Sisniega A, Magaraggia J, Kleinszig G, Vogt S, Thiboutot J, Lee H, Yarmus L, Siewerdsen JH. 3D-2D image registration in the presence of soft-tissue deformation in image-guided transbronchial interventions. Phys Med Biol 2022;68. [PMID: 36317269] [DOI: 10.1088/1361-6560/ac9e3c]
Abstract
Purpose. Target localization in pulmonary interventions (e.g. transbronchial biopsy of a lung nodule) is challenged by deformable motion and may benefit from fluoroscopic overlay of the target to provide accurate guidance. We present and evaluate a 3D-2D image registration method for fluoroscopic overlay in the presence of tissue deformation using a multi-resolution/multi-scale (MRMS) framework with an objective function that drives registration primarily by soft-tissue image gradients. Methods. The MRMS method registers 3D cone-beam CT to 2D fluoroscopy without gating of respiratory phase by coarse-to-fine resampling and global-to-local rescaling about target regions of interest. A variation of the gradient orientation (GO) similarity metric (denoted GO') was developed to downweight bone gradients and drive registration via soft-tissue gradients. Performance was evaluated in terms of projection distance error at isocenter (PDEiso). Phantom studies determined nominal algorithm parameters and capture range. Preclinical studies used a freshly deceased, ventilated porcine specimen to evaluate performance in the presence of real tissue deformation and a broad range of 3D-2D image mismatch. Results. Nominal algorithm parameters were identified that provided robust performance over a broad range of motion (0-20 mm), including an adaptive parameter selection technique to accommodate unknown mismatch in respiratory phase. The GO' metric yielded median PDEiso = 1.2 mm, compared to 6.2 mm for conventional GO. Preclinical studies with real lung deformation demonstrated median PDEiso = 1.3 mm with MRMS + GO' registration, compared to 2.2 mm with a conventional transform. Runtime was 26 s and can be reduced to 2.5 s given a prior registration within ~5 mm as initialization. Conclusions. MRMS registration via soft-tissue gradients achieved accurate fluoroscopic overlay in the presence of deformable lung motion. By driving registration via soft-tissue image gradients, the method avoided false local minima presented by bones and was robust to a wide range of motion magnitudes.
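The idea behind GO' of keeping strong (bone) gradients from dominating the similarity can be sketched by saturating the gradient-magnitude weight in a gradient-orientation metric (a simplified 2D illustration with an arbitrary saturation threshold; the published metric's exact weighting differs):

```python
import numpy as np

def go_prime(fixed, moving, w_max=50.0):
    """Simplified gradient-orientation similarity with saturating weights.

    Capping the gradient-magnitude weight (w_max) limits the influence of
    high-contrast bone edges, so soft-tissue gradients drive the score.
    fixed, moving : 2D images (e.g., fluoroscopy and a DRR of the CBCT).
    """
    g0f, g1f = np.gradient(fixed)            # gradients along rows / columns
    g0m, g1m = np.gradient(moving)
    mf = np.hypot(g0f, g1f)
    mm = np.hypot(g0m, g1m)
    w = np.minimum(np.minimum(mf, mm), w_max)        # saturating weight
    dtheta = np.arctan2(g0f, g1f) - np.arctan2(g0m, g1m)
    align = 0.5 * (1.0 + np.cos(2.0 * dtheta))       # 1 if (anti)parallel
    return float((w * align).sum() / (w.sum() + 1e-12))
```

In a 3D-2D registration loop, the moving image would be a DRR rendered at the current pose estimate, and the optimizer would maximize this score.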
Affiliation(s)
- R Vijayan: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- N Sheth: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- L Mekki: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- A Lu: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- A Uneri: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- A Sisniega: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- S Vogt: Siemens Healthineers, Erlangen, Germany
- J Thiboutot: Division of Pulmonary and Critical Care Medicine, Johns Hopkins Medical Institution, Baltimore, MD, USA
- H Lee: Division of Pulmonary and Critical Care Medicine, Johns Hopkins Medical Institution, Baltimore, MD, USA
- L Yarmus: Division of Pulmonary and Critical Care Medicine, Johns Hopkins Medical Institution, Baltimore, MD, USA
- J H Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
7. Huang Y, Jones CK, Zhang X, Johnston A, Waktola S, Aygun N, Witham TF, Bydon A, Theodore N, Helm PA, Siewerdsen JH, Uneri A. Multi-perspective region-based CNNs for vertebrae labeling in intraoperative long-length images. Comput Methods Programs Biomed 2022;227:107222. [PMID: 36370597] [DOI: 10.1016/j.cmpb.2022.107222]
Abstract
PURPOSE Effective aggregation of intraoperative x-ray images that capture the patient anatomy from multiple view angles has the potential to enable and improve automated image analysis that can be readily performed during surgery. We present multi-perspective region-based neural networks that leverage knowledge of the imaging geometry for automatic vertebrae labeling in Long-Film images, a novel tomographic imaging modality with an extended field of view for spine imaging. METHODS A multi-perspective network architecture was designed to exploit the small view-angle disparities produced by a multi-slot collimator and consolidate information from overlapping image regions. A second network incorporates large view-angle disparities to jointly perform labeling on images from multiple views (viz., AP and lateral). A recurrent module incorporates contextual information and enforces anatomical order for the detected vertebrae. The three modules are combined to form the multi-view multi-slot (MVMS) network for labeling vertebrae using images from all available perspectives. The network was trained on images synthesized from 297 CT images and tested on 50 AP and 50 lateral Long-Film images acquired from 13 cadaveric specimens. Labeling performance of the multi-perspective networks was evaluated with respect to the number of vertebra appearances and the presence of surgical instrumentation. RESULTS The MVMS network achieved an F1 score of >96% and an average vertebral localization error of 3.3 mm, with 88.3% labeling accuracy on both AP and lateral images (15.5% and 35.0% higher than conventional Faster R-CNN on AP and lateral views, respectively). Aggregation of multiple appearances of the same vertebra using the multi-slot network significantly improved labeling accuracy (p < 0.05). Using the multi-view network, labeling accuracy on the more challenging lateral views was improved to the level of the AP views. The approach demonstrated robustness to the surgical instrumentation commonly encountered in intraoperative images, achieving comparable performance in images with and without instrumentation (88.9% vs. 91.2% labeling accuracy). CONCLUSION The MVMS network demonstrated effective multi-perspective aggregation, providing a means for accurate, automated vertebrae labeling during spine surgery. The algorithms may be generalized to other imaging tasks and modalities that involve multiple views with view-angle disparities (e.g., bi-plane radiography). Predicted labels can help avoid adverse events during surgery (e.g., wrong-level surgery), establish correspondence with labels in preoperative modalities to facilitate image registration, and enable automated measurement of spinal alignment metrics for intraoperative assessment of spinal curvature.
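The ordering constraint imposed by the recurrent module (vertebra labels must increase monotonically down the spine) can be illustrated without any learning: given per-detection label scores, a simple dynamic program finds the best strictly increasing assignment. A sketch of that constraint only, not the paper's recurrent architecture:

```python
import numpy as np

def ordered_labels(scores):
    """Best strictly increasing label assignment by dynamic programming.

    scores : (n_detections, n_labels) array of classifier scores, with
    detections sorted superior-to-inferior and labels in anatomical order.
    A plain per-detection argmax can repeat or swap levels; the ordering
    constraint rules that out.
    """
    n, m = scores.shape
    dp = np.full((n, m), -np.inf)
    dp[0] = scores[0]
    back = np.zeros((n, m), dtype=int)
    for i in range(1, n):
        best = np.maximum.accumulate(dp[i - 1])   # best predecessor with label < j
        for j in range(1, m):
            dp[i, j] = best[j - 1] + scores[i, j]
            back[i, j] = int(np.argmax(dp[i - 1][:j]))
    labels = [int(np.argmax(dp[-1]))]
    for i in range(n - 1, 0, -1):                  # trace the optimal path back
        labels.append(int(back[i, labels[-1]]))
    return labels[::-1]
```

In the MVMS setting, the scores for each detection would come from the region-based network, aggregated across slots and views before the ordering step.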
Affiliation(s)
- Y Huang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- C K Jones: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- X Zhang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- A Johnston: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- S Waktola: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- N Aygun: Department of Radiology, Johns Hopkins Medicine, Baltimore, MD, USA
- T F Witham: Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD, USA
- A Bydon: Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD, USA
- N Theodore: Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD, USA
- P A Helm: Medtronic, Littleton, MA, USA
- J H Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA; Department of Radiology, Johns Hopkins Medicine, Baltimore, MD, USA; Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD, USA; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- A Uneri: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
8. Hatamikia S, Biguri A, Herl G, Kronreif G, Reynolds T, Kettenbach J, Russ T, Tersol A, Maier A, Figl M, Siewerdsen JH, Birkfellner W. Source-detector trajectory optimization in cone-beam computed tomography: a comprehensive review on today’s state-of-the-art. Phys Med Biol 2022;67. [DOI: 10.1088/1361-6560/ac8590]
Abstract
Cone-beam computed tomography (CBCT) imaging is becoming increasingly important for a wide range of applications such as image-guided surgery, image-guided radiation therapy, and diagnostic imaging such as breast and orthopaedic imaging. The potential benefits of non-circular source-detector trajectories were recognized in early work to improve the completeness of CBCT sampling and extend the field of view (FOV). Another important feature of interventional imaging is that prior knowledge of patient anatomy, such as a preoperative CBCT or prior CT, is commonly available. This provides the opportunity to integrate such prior information into the image acquisition process through customized CBCT source-detector trajectories. Such customized trajectories can be designed to optimize task-specific imaging performance, providing intervention- or patient-specific imaging settings. Recently developed robotic CBCT C-arms, as well as novel multi-source CBCT imaging systems with additional degrees of freedom, make it possible to greatly expand the scanning geometries beyond the conventional circular source-detector trajectory. This development has inspired the research community to improve image quality by modifying the acquisition geometry, as opposed to hardware or algorithms. The recently proposed techniques in this field facilitate image quality improvement, FOV extension, radiation dose reduction, metal artifact reduction, and 3D imaging under kinematic constraints. Because of the great practical value and the increasing importance of CBCT imaging in image-guided therapy for clinical and preclinical applications as well as in industry, this paper focuses on a review and discussion of the available literature in the field of CBCT trajectory optimization. To the best of our knowledge, this paper is the first to provide an exhaustive literature review of customized CBCT trajectories and aims to update the community with clarification and in-depth information on current progress and future trends.
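Many task-driven methods of the kind this review covers share a common pattern: greedily adding the source-detector pose that most improves a task-based objective computed on prior anatomy. A generic sketch of that pattern with a stand-in objective (illustrative only; the specific methods in the review differ substantially in their objectives and constraints):

```python
import numpy as np

def greedy_trajectory(candidate_angles, score_views, n_views):
    """Greedy trajectory design: repeatedly add the candidate view that
    most improves a task-based objective.

    score_views : callable scoring a *set* of views; a stand-in for, e.g.,
    a detectability index or sampling-completeness measure computed from a
    prior CT. Kinematic constraints could be enforced by pre-filtering
    candidate_angles.
    """
    selected = []
    remaining = list(candidate_angles)
    while len(selected) < n_views and remaining:
        gains = [score_views(selected + [a]) for a in remaining]
        selected.append(remaining.pop(int(np.argmax(gains))))
    return selected
```

The greedy loop is a heuristic; several reviewed approaches instead optimize all poses jointly or parameterize the trajectory continuously.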
9. Huang H, Siewerdsen JH, Zbijewski W, Weiss CR, Unberath M, Ehtiati T, Sisniega A. Reference-free learning-based similarity metric for motion compensation in cone-beam CT. Phys Med Biol 2022;67. [PMID: 35636391] [DOI: 10.1088/1361-6560/ac749a]
Abstract
Purpose. Patient motion artifacts present a prevalent challenge to image quality in interventional cone-beam CT (CBCT). We propose a novel reference-free similarity metric (DL-VIF) that leverages the capability of deep convolutional neural networks (CNNs) to learn features associated with motion artifacts within realistic anatomical content. DL-VIF aims to address shortcomings of conventional metrics of motion-induced image quality degradation, which favor characteristics associated with motion-free images, such as sharpness or piecewise constancy, but lack any awareness of the underlying anatomy and can therefore promote images depicting unrealistic content. DL-VIF was integrated into an autofocus motion compensation framework to test its performance for motion estimation in interventional CBCT. Methods. DL-VIF is a reference-free surrogate for the previously reported visual information fidelity (VIF) metric (normally computed against a motion-free reference), generated using a CNN trained on simulated motion-corrupted and motion-free CBCT data. Relatively shallow (2-ResBlock) and deep (3-ResBlock) CNN architectures were trained and tested to assess sensitivity to motion artifacts and generalizability to unseen anatomy and motion patterns. DL-VIF was integrated into an autofocus framework for rigid motion compensation in head/brain CBCT and assessed in simulation and cadaver studies in comparison to a conventional gradient entropy metric. Results. The 2-ResBlock architecture better reflected motion severity and extrapolated to unseen data, whereas the 3-ResBlock architecture was more susceptible to overfitting, limiting its generalizability to unseen scenarios. DL-VIF outperformed gradient entropy in simulation studies, yielding average multi-resolution structural similarity index (SSIM) improvements over the uncompensated image of 0.068 and 0.034, respectively, referenced to motion-free images. DL-VIF was also more robust in motion compensation, evidenced by reduced variance in SSIM across motion patterns (σ(DL-VIF) = 0.008 versus σ(gradient entropy) = 0.019). Similarly, in cadaver studies, DL-VIF demonstrated superior motion compensation compared to gradient entropy (an average SSIM improvement of 0.043 (5%) versus little improvement or even degradation in SSIM, respectively) and visibly improved image quality even in severely motion-corrupted images. Conclusion. The studies demonstrated the feasibility of building reference-free similarity metrics for quantification of motion-induced image quality degradation and distortion of anatomical structures in CBCT. DL-VIF provides a reliable surrogate for motion severity, penalizes unrealistic distortions, and presents a valuable new objective function for autofocus motion compensation in CBCT.
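As an illustrative aside, the SSIM figure of merit used in the evaluation above can be sketched in a few lines of Python. This is a simplified single-window (global) variant with the common default constants, not the multi-resolution implementation used in the study; `ssim_global` is a hypothetical helper name.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    # Single-window (global) SSIM: luminance, contrast, and structure terms
    # computed over the whole image rather than a sliding window.
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1.0, and any mean shift or decorrelation (e.g. motion blur versus a motion-free reference) pulls the score below 1.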
Affiliation(s)
- H Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America; Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America; Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- C R Weiss
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- M Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America
- T Ehtiati
- Siemens Medical Solutions USA, Inc., Imaging & Therapy Systems, Hoffman Estates, IL, United States of America
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
10
Han R, Jones CK, Lee J, Zhang X, Wu P, Vagdargi P, Uneri A, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Joint synthesis and registration network for deformable MR-CBCT image registration for neurosurgical guidance. Phys Med Biol 2022; 67:10.1088/1361-6560/ac72ef. [PMID: 35609586 PMCID: PMC9801422 DOI: 10.1088/1361-6560/ac72ef]
Abstract
Objective. The accuracy of navigation in minimally invasive neurosurgery is often challenged by deep brain deformations (up to 10 mm due to egress of cerebrospinal fluid during the neuroendoscopic approach). We propose a deep learning-based deformable registration method to address such deformations between preoperative MR and intraoperative CBCT. Approach. The registration method uses a joint image synthesis and registration network (denoted JSR) to simultaneously synthesize MR and CBCT images to the CT domain and perform CT-domain registration using a multi-resolution pyramid. JSR was first trained using a simulated dataset (simulated CBCT and simulated deformations) and then refined on real clinical images via transfer learning. The performance of the multi-resolution JSR was compared to a single-resolution architecture as well as a series of alternative registration methods (symmetric normalization (SyN), VoxelMorph, and image synthesis-based registration methods). Main results. JSR achieved a median Dice coefficient (DSC) of 0.69 in deep brain structures and median target registration error (TRE) of 1.94 mm in the simulation dataset, improving on the single-resolution architecture (median DSC = 0.68 and median TRE = 2.14 mm). Additionally, JSR achieved superior registration compared to alternative methods, e.g. SyN (median DSC = 0.54, median TRE = 2.77 mm) and VoxelMorph (median DSC = 0.52, median TRE = 2.66 mm), and provided registration runtime of less than 3 s. Similarly, in the clinical dataset, JSR achieved median DSC = 0.72 and median TRE = 2.05 mm. Significance. The multi-resolution JSR network resolved deep brain deformations between MR and CBCT images with performance superior to other state-of-the-art methods. The accuracy and runtime support translation of the method to further clinical studies in high-precision neurosurgery.
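The Dice coefficient (DSC) and target registration error (TRE) reported above are standard figures of merit for registration studies. A minimal sketch of both, with `dice` and `tre` as hypothetical helper names:

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary masks:
    # 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def tre(p_fixed, p_registered):
    # Target registration error: mean Euclidean distance (mm) between
    # corresponding landmark positions after registration.
    # Inputs are (N, 3) arrays of point coordinates.
    return float(np.linalg.norm(p_fixed - p_registered, axis=1).mean())
```

DSC is computed on segmented structures (e.g. deep brain structures above), while TRE uses anatomical landmark pairs.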
Affiliation(s)
- R Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- C K Jones
- The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States of America
- J Lee
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD, United States of America
- X Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- P Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- P Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- P A Helm
- Medtronic Inc., Littleton, MA, United States of America
- M Luciano
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America
- W S Anderson
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America; The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States of America; Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America; Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America
11
Hu Y, Huang H, Siewerdsen JH, Zbijewski W, Unberath M, Weiss CR, Sisniega A. Simulation of Random Deformable Motion in Soft-Tissue Cone-Beam CT with Learned Models. Proc SPIE Int Soc Opt Eng 2022; 12304:1230413. [PMID: 36381251 PMCID: PMC9654724 DOI: 10.1117/12.2646720]
Abstract
Cone-beam CT (CBCT) is widely used for guidance in interventional radiology, but it is susceptible to motion artifacts. Motion in interventional CBCT features a complex combination of diverse sources, including quasi-periodic, consistent motion patterns such as respiratory motion, and aperiodic, quasi-random motion such as peristalsis. Recent developments in image-based motion compensation include approaches that combine autofocus techniques with deep learning models for extraction of image features pertinent to CBCT motion. Training such deep autofocus models requires the generation of large amounts of realistic, motion-corrupted CBCT. Previous work on motion simulation focused mostly on quasi-periodic motion patterns, and reliable simulation of complex combined motion with quasi-random components remains an unaddressed challenge. This work presents a framework for synthesis of realistic motion trajectories for simulation of deformable motion in soft-tissue CBCT. The approach leverages the capability of conditional generative adversarial network (GAN) models to learn the complex underlying motion present in unlabeled, motion-corrupted CBCT volumes, and is designed for unsupervised training with unpaired clinical CBCT. This work presents a first feasibility study, in which the model was trained with simulated data featuring known motion, providing a controlled scenario for validation of the proposed approach prior to extension to clinical data. The proof-of-concept study illustrated the potential of the model to generate realistic, variable simulations of CBCT deformable motion fields, consistent with three trends underlying the designed training data: i) the synthetic motion induced only diffeomorphic deformations, with Jacobian determinant larger than zero; ii) the synthetic motion showed median displacement of 0.5 mm in regions predominantly static in the training data (e.g., the posterior aspect of a patient lying supine), compared to a median displacement of 3.8 mm in regions more prone to motion; and iii) the synthetic motion exhibited predominant directionality consistent with the training set, resulting in larger motion in the superior-inferior direction (median and maximum amplitude of 4.58 mm and 20 mm, more than 2× larger than in the two remaining directions). Together, these results show the feasibility of the proposed framework for realistic motion simulation and synthesis of variable CBCT data.
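The diffeomorphism check in trend i) can be made concrete: a deformation φ(x) = x + u(x) is locally invertible wherever the determinant of its Jacobian, I + ∂u/∂x, is positive. A minimal numpy sketch using finite-difference gradients in voxel units (`jacobian_determinant` is a hypothetical helper name, not the paper's code):

```python
import numpy as np

def jacobian_determinant(disp):
    # disp: displacement field u with shape (3, X, Y, Z) in voxel units.
    # The deformation is phi(x) = x + u(x), so its Jacobian is I + du/dx;
    # a per-voxel determinant > 0 everywhere indicates a (locally)
    # invertible, orientation-preserving deformation.
    grad = np.stack([np.stack(np.gradient(disp[i])) for i in range(3)])
    # grad[i, j] = du_i / dx_j; move the (i, j) axes last and add identity.
    jac = np.moveaxis(grad, (0, 1), (-2, -1)) + np.eye(3)
    return np.linalg.det(jac)
```

A zero displacement field gives determinant 1 everywhere; a smooth compression or expansion shifts it away from 1 while staying positive.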
Affiliation(s)
- Y Hu
- Dept. of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- H Huang
- Dept. of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- J H Siewerdsen
- Dept. of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- W Zbijewski
- Dept. of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- M Unberath
- Dept. of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- C R Weiss
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
- A Sisniega
- Dept. of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
12
Huang H, Siewerdsen JH, Zbijewski W, Weiss CR, Unberath M, Sisniega A. Context-Aware, Reference-Free Local Motion Metric for CBCT Deformable Motion Compensation. Proc SPIE Int Soc Opt Eng 2022; 12304:1230412. [PMID: 36381250 PMCID: PMC9665334 DOI: 10.1117/12.2646857]
Abstract
Deformable motion is one of the main challenges to image quality in interventional cone-beam CT (CBCT). Autofocus methods have been successfully applied to deformable motion compensation in CBCT, using multi-region joint optimization approaches that leverage the moderately smooth spatial variation of the deformable motion field within a local neighborhood. However, conventional autofocus metrics enforce sharp image appearance but do not guarantee the preservation of anatomical structures. Our previous work (DL-VIF) showed that deep convolutional neural networks (CNNs) can reproduce metrics of structural similarity (visual information fidelity, VIF), removing the need for a matched motion-free reference and providing quantification of motion degradation and structural integrity. Application of DL-VIF within local neighborhoods is challenged by the large variability of local image content across a CBCT volume and requires global context information for successful evaluation of motion effects. In this work, we propose a novel deep autofocus metric based on a context-aware, multi-resolution, deep CNN design. In addition to including contextual information, the resulting metric generates a voxel-wise distribution of reference-free VIF values. The new metric, denoted CADL-VIF, was trained on simulated CBCT abdomen scans with deformable motion at random locations and with amplitude up to 30 mm. CADL-VIF achieved good correlation with the ground-truth VIF map across all test cases, with R² = 0.843 and slope = 0.941. When integrated into a multi-ROI deformable motion compensation method, CADL-VIF consistently reduced motion artifacts, yielding an average increase in SSIM of 0.129 in regions with severe motion and 0.113 in regions with mild motion. This work demonstrated the capability of CADL-VIF to recognize anatomical structures and penalize unrealistic images, a key step toward reliable autofocus for complex deformable motion compensation in CBCT.
Affiliation(s)
- H Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- Department of Radiology, Johns Hopkins University, Baltimore, MD
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- C R Weiss
- Department of Radiology, Johns Hopkins University, Baltimore, MD
- M Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
13
Sisniega A, Lu A, Huang H, Zbijewski W, Unberath M, Siewerdsen JH, Weiss CR. Targeted Deformable Motion Compensation for Vascular Interventional Cone-Beam CT Imaging. Proc SPIE Int Soc Opt Eng 2022; 12031:120311H. [PMID: 36381563 PMCID: PMC9654751 DOI: 10.1117/12.2613232]
Abstract
Purpose. Cone-beam CT has become commonplace for 3D guidance in interventional radiology (IR), especially for vascular procedures in which identification of small vascular structures is crucial. However, its long image acquisition time poses a limit to image quality due to soft-tissue deformable motion that hampers visibility of small vessels. Autofocus motion compensation has shown promising potential for soft-tissue deformable motion compensation, but it lacks specificity to the imaging task. This work presents an approach for deformable motion compensation targeted at imaging of vascular structures. Methods. The proposed method consists of a two-stage framework for: i) identification of contrast-enhanced blood vessels in 2D projection data and delineation of an approximate region covering the vascular target in the volume space; and ii) a novel autofocus approach including a metric designed to promote the presence of vascular structures, acting solely in the region of interest. The vesselness of the image is quantified via evaluation of the properties of the 3D image Hessian, yielding a vesselness filter that gives larger values to voxels that are candidates to be part of a tubular structure. A cost metric is designed to promote large vesselness values and spatial sparsity, as expected in regions of fine vascularity. A targeted autofocus method was designed by combining the presented metric with a conventional autofocus term acting outside of the region of interest. The resulting method was evaluated on simulated data including synthetic vascularity merged with real anatomical features obtained from MDCT data. Further evaluation used two clinical datasets obtained during TACE procedures with a robotic C-arm (Artis Zeego, Siemens Healthineers). Results. The targeted vascular autofocus effectively restored the shape and contrast of the contrast-enhanced vascularity in the simulation cases, resulting in improved visibility and reduced artifacts. Segmentations performed with a single threshold value on the target vascular regions yielded a net increase of up to 42% in Dice coefficient computed against the static reference. Motion compensation in clinical datasets resulted in improved visibility of vascular structures, observed in maximum intensity projections of the contrast-enhanced liver vessel tree. Conclusion. Targeted motion compensation for vascular imaging showed promising performance for increased identification of small vascular structures in the presence of motion. The development of autofocus metrics and methods tailored to vascular imaging opens the way for reliable compensation of deformable motion while preserving the integrity of anatomical structures in the image.
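The Hessian-based vesselness idea can be illustrated with a simplified Frangi-like score; this is a generic sketch, not the paper's exact filter or cost metric. Bright tubular structures exhibit one near-zero Hessian eigenvalue (along the vessel axis) and two large negative eigenvalues (across it):

```python
import numpy as np

def hessian_eigenvalues(vol):
    # Per-voxel eigenvalues of the 3D image Hessian (ascending order),
    # built from repeated finite-difference gradients.
    g = np.gradient(vol)
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        gi = np.gradient(g[i])
        for j in range(3):
            H[..., i, j] = gi[j]
    return np.linalg.eigvalsh(H)

def tube_score(vol):
    # Illustrative Frangi-like vesselness: sort eigenvalues by magnitude;
    # a bright tube has |l1| ~ 0 along its axis and l2, l3 < 0 across it.
    ev = hessian_eigenvalues(vol)
    ev = np.take_along_axis(ev, np.argsort(np.abs(ev), axis=-1), axis=-1)
    l1, l2, l3 = ev[..., 0], ev[..., 1], ev[..., 2]
    return np.where((l2 < 0) & (l3 < 0),
                    np.abs(l2 * l3) / (1.0 + np.abs(l1)), 0.0)
```

On a volume containing a bright tube, the score peaks along the tube axis and is near zero in homogeneous background, which is the behavior the targeted autofocus metric rewards inside the region of interest.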
Affiliation(s)
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- A Lu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- H Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- M Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
- C R Weiss
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
14
Han R, Jones CK, Lee J, Wu P, Vagdargi P, Uneri A, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Deformable MR-CT image registration using an unsupervised, dual-channel network for neurosurgical guidance. Med Image Anal 2022; 75:102292. [PMID: 34784539 PMCID: PMC10229200 DOI: 10.1016/j.media.2021.102292]
Abstract
PURPOSE The accuracy of minimally invasive, intracranial neurosurgery can be challenged by deformation of brain tissue - e.g., up to 10 mm due to egress of cerebrospinal fluid during the neuroendoscopic approach. We report an unsupervised, deep learning-based registration framework to resolve such deformations between preoperative MR and intraoperative CT with fast runtime for neurosurgical guidance. METHODS The framework incorporates subnetworks for MR and CT image synthesis with a dual-channel registration subnetwork (with synthesis uncertainty providing spatially varying weights on the dual-channel loss) to estimate a diffeomorphic deformation field from both the MR and CT channels. End-to-end training is proposed that jointly optimizes both the synthesis and registration subnetworks. The proposed framework was investigated using three datasets: (1) paired MR/CT with simulated deformations; (2) paired MR/CT with real deformations; and (3) a neurosurgery dataset with real deformations. Two state-of-the-art methods (Symmetric Normalization and VoxelMorph) were implemented as a basis of comparison, and variations of the proposed dual-channel network were investigated, including single-channel registration, fusion without uncertainty weighting, and conventional sequential training of the synthesis and registration subnetworks. RESULTS The proposed method achieved: (1) Dice coefficient = 0.82 ± 0.07 and TRE = 1.2 ± 0.6 mm on paired MR/CT with simulated deformations; (2) Dice coefficient = 0.83 ± 0.07 and TRE = 1.4 ± 0.7 mm on paired MR/CT with real deformations; and (3) Dice coefficient = 0.79 ± 0.13 and TRE = 1.6 ± 1.0 mm on the neurosurgery dataset with real deformations. The dual-channel registration with uncertainty weighting demonstrated superior performance (e.g., TRE = 1.2 ± 0.6 mm) compared to single-channel registration (TRE = 1.6 ± 1.0 mm, p < 0.05, for the CT channel and TRE = 1.3 ± 0.7 mm for the MR channel) and dual-channel registration without uncertainty weighting (TRE = 1.4 ± 0.8 mm, p < 0.05). End-to-end training of the synthesis and registration subnetworks also improved performance compared to the conventional sequential training strategy (TRE = 1.3 ± 0.6 mm). Registration runtime with the proposed network was ∼3 s. CONCLUSION The deformable registration framework based on dual-channel MR/CT registration with spatially varying weights and end-to-end training achieved geometric accuracy and runtime superior to state-of-the-art baseline methods and various ablations of the proposed network. The accuracy and runtime of the method may be compatible with the requirements of high-precision neurosurgery.
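The uncertainty weighting described above can be sketched under an assumed inverse-variance form (the paper's exact weighting function is not specified in the abstract, so this is an illustration of the idea, with `dual_channel_loss` a hypothetical name): voxels where one synthesis channel is more certain contribute more of that channel's loss.

```python
import numpy as np

def dual_channel_loss(loss_ct, loss_mr, var_ct, var_mr, eps=1e-8):
    # Spatially varying fusion of per-voxel CT-channel and MR-channel
    # similarity losses. Lower synthesis variance (more certainty) in a
    # channel increases that channel's weight at that voxel.
    w_ct = var_mr / (var_ct + var_mr + eps)
    return float(np.mean(w_ct * loss_ct + (1.0 - w_ct) * loss_mr))
```

With this form, a confident CT synthesis (low var_ct) pulls the combined loss toward the CT channel, which is the qualitative behavior the abstract describes.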
Affiliation(s)
- R Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- C K Jones
- The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States
- J Lee
- Department of Radiation Oncology, Johns Hopkins University, Baltimore, MD, United States
- P Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- P Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- P A Helm
- Medtronic Inc., Littleton, MA, United States
- M Luciano
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States
- W S Anderson
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States; The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States; Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States; Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States
15
Uneri A, Wu P, Jones CK, Vagdargi P, Han R, Helm PA, Luciano MG, Anderson WS, Siewerdsen JH. Deformable 3D-2D registration for high-precision guidance and verification of neuroelectrode placement. Phys Med Biol 2021; 66. [PMID: 34644684 DOI: 10.1088/1361-6560/ac2f89]
Abstract
Purpose. Accurate neuroelectrode placement is essential to effective monitoring or stimulation of neurosurgery targets. This work presents and evaluates a method that combines deep learning and model-based deformable 3D-2D registration to guide and verify neuroelectrode placement using intraoperative imaging. Methods. The registration method consists of three stages: (1) detection of neuroelectrodes in a pair of fluoroscopy images using a deep learning approach; (2) determination of correspondence and initial 3D localization among neuroelectrode detections in the two projection images; and (3) deformable 3D-2D registration of neuroelectrodes according to a physical device model. The method was evaluated in phantom, cadaver, and clinical studies in terms of (a) the accuracy of neuroelectrode registration and (b) the quality of metal artifact reduction (MAR) in cone-beam CT (CBCT), in which the deformably registered neuroelectrode models are taken as input to the MAR. Results. The combined deep learning and model-based deformable 3D-2D registration approach achieved 0.2 ± 0.1 mm accuracy in cadaver studies and 0.6 ± 0.3 mm accuracy in clinical studies. The detection network and 3D correspondence provided initialization of 3D-2D registration within 2 mm, which facilitated end-to-end registration runtime within 10 s. Metal artifacts, quantified as the standard deviation in voxel values in tissue adjacent to neuroelectrodes, were reduced by 72% in phantom studies and by 60% in first clinical studies. Conclusions. The method combines the speed and generalizability of deep learning (for initialization) with the precision and reliability of physical model-based registration to achieve accurate deformable 3D-2D registration and MAR in functional neurosurgery. Accurate 3D-2D guidance from fluoroscopy could overcome limitations associated with deformation in conventional navigation, and improved MAR could improve CBCT verification of neuroelectrode placement.
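The metal-artifact figure of merit used above (standard deviation of voxel values in tissue adjacent to the electrodes) is simple to state in code; `artifact_severity` and `percent_reduction` are hypothetical helper names for illustration:

```python
import numpy as np

def artifact_severity(vol, tissue_mask):
    # Artifact metric per the abstract: standard deviation of voxel values
    # within a mask of tissue adjacent to the neuroelectrodes. Streaks and
    # blooming raise this value; effective MAR lowers it.
    return float(vol[tissue_mask].std())

def percent_reduction(before, after):
    # Percent reduction in artifact severity after MAR.
    return 100.0 * (before - after) / before
```

Applied before and after MAR, the pair reproduces the style of the 72% (phantom) and 60% (clinical) reductions reported above.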
Affiliation(s)
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- P Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- C K Jones
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD 21218, United States of America
- P Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, United States of America
- R Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- P A Helm
- Medtronic, Littleton, MA 01460, United States of America
- M G Luciano
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
- W S Anderson
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America; Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD 21218, United States of America; Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, United States of America; Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
16
Huang Y, Uneri A, Jones CK, Zhang X, Ketcha MD, Aygun N, Helm PA, Siewerdsen JH. 3D vertebrae labeling in spine CT: an accurate, memory-efficient (Ortho2D) framework. Phys Med Biol 2021; 66. [PMID: 34082413 DOI: 10.1088/1361-6560/ac07c7]
Abstract
Purpose. Accurate localization and labeling of vertebrae in computed tomography (CT) is an important step toward more quantitative, automated diagnostic analysis and surgical planning. In this paper, we present a framework (called Ortho2D) for vertebral labeling in CT that is accurate and memory-efficient. Methods. Ortho2D uses two independent Faster R-CNN (region-based convolutional neural network) detectors to detect and classify vertebrae in orthogonal (sagittal and coronal) CT slices. The 2D detections are clustered in 3D to localize vertebral centroids in the volumetric CT and classify the region (cervical, thoracic, lumbar, or sacral) and vertebral level. A post-process sorting method incorporates the confidence in network output to refine classifications and reduce outliers. Ortho2D was evaluated on a publicly available dataset containing 302 normal and pathological spine CT images with and without surgical instrumentation. Labeling accuracy and memory requirements were assessed in comparison to other recently reported methods. The memory efficiency of Ortho2D permitted extension to high-resolution CT to investigate the potential for further boosts to labeling performance. Results. Ortho2D achieved overall vertebrae detection accuracy of 97.1%, region identification accuracy of 94.3%, and individual vertebral level identification accuracy of 91.0%. The framework achieved 95.8% and 83.6% level identification accuracy in images without and with surgical instrumentation, respectively. Ortho2D met or exceeded the performance of previously reported 2D and 3D labeling methods and reduced memory consumption by a factor of ∼50 (at 1 mm voxel size) compared to a 3D U-Net, allowing extension to higher-resolution datasets than normally afforded. The accuracy of level identification increased from 80.1% (for standard/low-resolution CT) to 95.1% (for high-resolution CT). Conclusions. The Ortho2D method achieved vertebrae labeling performance comparable to other recently reported methods with significant reduction in memory consumption, permitting further performance boosts via application to high-resolution CT.
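The 2D-to-3D clustering step can be illustrated with a simplified pairing rule (the paper's exact clustering is not specified in the abstract, so this is an assumption for illustration): a sagittal detection contributes a candidate (x, y, z) from its slice index and in-plane box center, a coronal detection likewise, and near-coincident pairs along z are averaged into one vertebral centroid.

```python
import numpy as np

def fuse_detections(sag_xyz, cor_xyz, z_tol=5.0):
    # sag_xyz, cor_xyz: (N, 3) arrays of candidate 3D positions recovered
    # from sagittal and coronal 2D detections. Pair each sagittal candidate
    # with the nearest coronal candidate along z; if within tolerance,
    # average the pair into a single centroid (illustrative rule only).
    centroids = []
    for s in sag_xyz:
        dz = np.abs(cor_xyz[:, 2] - s[2])
        j = int(np.argmin(dz))
        if dz[j] <= z_tol:
            centroids.append((s + cor_xyz[j]) / 2.0)
    return np.array(centroids)
```

Unmatched detections in either view are discarded, which is one way a cross-view scheme can suppress spurious single-view detections.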
Affiliation(s)
- Y Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- C K Jones
- The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States of America
- X Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- M D Ketcha
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- N Aygun
- Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- P A Helm
- Medtronic Inc., Littleton, MA, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America; The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States of America; Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
17
Sisniega A, Stayman JW, Capostagno S, Weiss CR, Ehtiati T, Siewerdsen JH. Accelerated 3D image reconstruction with a morphological pyramid and noise-power convergence criterion. Phys Med Biol 2021; 66:055012. [PMID: 33477131 DOI: 10.1088/1361-6560/abde97]
Abstract
Model-based iterative reconstruction (MBIR) for cone-beam CT (CBCT) offers better noise-resolution tradeoff and image quality than analytical methods for acquisition protocols with low x-ray dose or limited data, but with increased computational burden that poses a drawback to routine application in clinical scenarios. This work develops a comprehensive framework for acceleration of MBIR in the form of penalized weighted least squares optimized with ordered subsets separable quadratic surrogates. The optimization was scheduled on a set of stages forming a morphological pyramid varying in voxel size. Transition between stages was controlled with a convergence criterion based on the deviation between the mid-band noise power spectrum (NPS) measured on a homogeneous region of the evolving reconstruction and that expected for the converged image, computed with an analytical model that used projection data and the reconstruction parameters. A stochastic backprojector was developed by introducing a random perturbation to the sampling position of each voxel for each ray in the reconstruction within a voxel-based backprojector, breaking the deterministic pattern of sampling artifacts when combined with an unmatched Siddon forward projector. This fast, forward and backprojector pair were included into a multi-resolution reconstruction strategy to provide support for objects partially outside of the field of view. Acceleration from ordered subsets was combined with momentum accumulation stabilized with an adaptive technique that automatically resets the accumulated momentum when it diverges noticeably from the current iteration update. The framework was evaluated with CBCT data of a realistic abdomen phantom acquired on an imaging x-ray bench and with clinical CBCT data from an angiography robotic C-arm (Artis Zeego, Siemens Healthineers, Forchheim, Germany) acquired during a liver embolization procedure. 
Image fidelity was assessed with the structural similarity index (SSIM) computed with a converged reconstruction. The accelerated framework provided accurate reconstructions in 60 s (SSIM = 0.97) and as little as 27 s (SSIM = 0.94) for soft-tissue evaluation. The use of simple forward and backprojectors resulted in 9.3× acceleration. Accumulation of momentum provided extra ∼3.5× acceleration with stable convergence for 6-30 subsets. The NPS-driven morphological pyramid resulted in initial faster convergence, achieving similar SSIM with 1.5× lower runtime than the single-stage optimization. Acceleration of MBIR to provide reconstruction time compatible with clinical applications is feasible via architectures that integrate algorithmic acceleration with approaches to provide stable convergence, and optimization schedules that maximize convergence speed.
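The adaptive momentum reset described in the abstract can be sketched in a few lines. The restart rule, the divergence threshold, and the Nesterov-style schedule below are illustrative assumptions (function and parameter names are hypothetical), not the authors' implementation:

```python
import numpy as np

def os_momentum_step(x, subset_update, v, t, divergence_tol=2.0):
    """One accelerated ordered-subsets step with an adaptive momentum reset:
    if the accumulated momentum grows far out of proportion to the current
    subset update (a stand-in for 'diverges noticeably'), the momentum is
    discarded and the schedule restarts."""
    if np.linalg.norm(v) > divergence_tol * np.linalg.norm(subset_update):
        v = np.zeros_like(v)                              # reset accumulated momentum
        t = 1.0                                           # restart the momentum schedule
    t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))     # Nesterov momentum sequence
    v_next = subset_update + ((t - 1.0) / t_next) * v     # accumulate momentum
    return x + v_next, v_next, t_next
```

On a simple quadratic problem the reset keeps the accelerated iteration stable while preserving most of the speedup.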
Affiliation(s)
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
18
Capostagno S, Sisniega A, Stayman JW, Ehtiati T, Weiss CR, Siewerdsen JH. Deformable motion compensation for interventional cone-beam CT. Phys Med Biol 2021; 66:055010. [PMID: 33594993 DOI: 10.1088/1361-6560/abb16e] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Image-guided therapies in the abdomen and pelvis are often hindered by motion artifacts in cone-beam CT (CBCT) arising from complex, non-periodic, deformable organ motion during long scan times (5-30 s). We propose a deformable image-based motion compensation method to address these challenges and improve CBCT guidance. Motion compensation is achieved by selecting a set of small regions of interest in the uncompensated image to minimize a cost function consisting of an autofocus objective and spatiotemporal regularization penalties. Motion trajectories are estimated using an iterative optimization algorithm (CMA-ES) and used to interpolate a 4D spatiotemporal motion vector field. The motion-compensated image is reconstructed using a modified filtered backprojection approach. Being image-based, the method does not require additional input besides the raw CBCT projection data and system geometry that are used for image reconstruction. Experimental studies investigated: (1) various autofocus objective functions, analyzed using a digital phantom with a range of sinusoidal motion magnitudes (4, 8, 12, 16, 20 mm); (2) spatiotemporal regularization, studied using a CT dataset from The Cancer Imaging Archive with deformable sinusoidal motion of variable magnitude (10, 15, 20, 25 mm); and (3) performance in complex anatomy, evaluated in cadavers undergoing simple and complex motion imaged on a CBCT-capable mobile C-arm system (Cios Spin 3D, Siemens Healthineers, Forchheim, Germany). Gradient entropy was found to be the best autofocus objective for soft-tissue CBCT, increasing structural similarity (SSIM) by 42%-92% over the range of motion magnitudes investigated. The optimal temporal regularization strength was found to vary widely (0.5-5 mm⁻²) over the range of motion magnitudes investigated, whereas the optimal spatial regularization strength was relatively constant (0.1). 
In cadaver studies, deformable motion compensation was shown to improve local SSIM by ∼17% for simple motion and ∼21% for complex motion and provided strong visual improvement of motion artifacts (reduction of blurring and streaks and improved visibility of soft-tissue edges). The studies demonstrate the robustness of deformable motion compensation to a range of motion magnitudes, frequencies, and other factors (e.g. truncation and scatter).
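The gradient-entropy autofocus objective identified above as best for soft-tissue CBCT can be sketched as follows (a minimal 2D illustration; the paper's exact formulation and any weighting are not specified here):

```python
import numpy as np

def gradient_entropy(img, eps=1e-12):
    """Gradient-entropy autofocus objective. Sharp images concentrate
    gradient energy in few voxels and therefore have LOWER gradient
    entropy; motion blur spreads the gradients and raises the value."""
    g0, g1 = np.gradient(img.astype(float))   # finite-difference gradients
    mag = np.sqrt(g0 ** 2 + g1 ** 2)
    p = mag / (mag.sum() + eps)               # normalize to a distribution
    p = p[p > eps]                            # ignore zero-gradient voxels
    return float(-(p * np.log(p)).sum())
```

An autofocus motion search would minimize this value over candidate motion trajectories; a blurred copy of an image scores higher than the sharp original.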
Affiliation(s)
- S Capostagno
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
19
Uneri A, Wu P, Jones CK, Ketcha MD, Vagdargi P, Han R, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Data-Driven Deformable 3D-2D Registration for Guiding Neuroelectrode Placement in Deep Brain Stimulation. Proc SPIE Int Soc Opt Eng 2021; 11598:115981B. [PMID: 35982943 PMCID: PMC9382676 DOI: 10.1117/12.2582160] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
PURPOSE Deep brain stimulation is a neurosurgical procedure used in the treatment of a growing spectrum of movement disorders. Inaccuracies in electrode placement, however, can result in poor symptom control or adverse effects and confound variability in clinical outcomes. A deformable 3D-2D registration method is presented for high-precision 3D guidance of neuroelectrodes. METHODS The approach employs a model-based, deformable algorithm for 3D-2D image registration. Variations in lead design are captured in a parametric 3D model based on a B-spline curve. The registration is solved through iterative optimization of 16 degrees of freedom that maximize image similarity between the 2 acquired radiographs and simulated forward projections of the neuroelectrode model. The approach was evaluated in phantom models with respect to pertinent imaging parameters, including view selection and imaging dose. RESULTS The results demonstrate an accuracy of (0.2 ± 0.2) mm in 3D localization of individual electrodes. The solution was observed to be robust to changes in pertinent imaging parameters, demonstrating accurate localization with ≥20° view separation and at 1/10th the dose of a standard fluoroscopy frame. CONCLUSIONS The presented approach provides the means for guiding neuroelectrode placement from 2 low-dose radiographic images in a manner that accommodates potential deformation at the target anatomical site. Future work will focus on improving runtime through learning-based initialization, application in reducing metal artifacts in 3D reconstruction for verification of placement, and extensive evaluation in clinical data from an IRB study underway.
Affiliation(s)
- A. Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- P. Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- C. K. Jones
- Department of Computer Science, Johns Hopkins University, Baltimore, MD
- M. D. Ketcha
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- P. Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD
- R. Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- M. Luciano
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD
- W. S. Anderson
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD
- J. H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- Department of Computer Science, Johns Hopkins University, Baltimore, MD
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD
20
Vijayan RC, Han R, Wu P, Sheth NM, Vagdargi P, Vogt S, Kleinszig G, Osgood GM, Siewerdsen JH, Uneri A. Fluoroscopic Guidance of a Surgical Robot: Pre-clinical Evaluation in Pelvic Guidewire Placement. Proc SPIE Int Soc Opt Eng 2021; 11598:115981G. [PMID: 36090307 PMCID: PMC9455933 DOI: 10.1117/12.2582188] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
PURPOSE A method and prototype for a fluoroscopically guided surgical robot is reported for assisting pelvic fracture fixation. The approach extends the compatibility of existing guidance methods to C-arms in mainstream use (without prior geometric calibration) using an online calibration of the C-arm geometry, automated via registration to patient anatomy. We report the first preclinical studies of this method in cadaver for evaluation of geometric accuracy. METHODS The robot is placed over the patient within the imaging field-of-view, and radiographs are acquired as the robot rotates an attached instrument. The radiographs are then used to perform an online geometric calibration via 3D-2D image registration, which solves for the intrinsic and extrinsic parameters of the C-arm imaging system with respect to the patient. The solved projective geometry is then used to register the robot to the patient and drive the robot to planned trajectories. This method is applied to a robotic system consisting of a drill guide instrument for guidewire placement and evaluated in experiments using a cadaver specimen. RESULTS Robotic drill guide alignment to trajectories defined in the cadaver pelvis was accurate within 2 mm and 1° (on average) using the calibration-free approach. Conformance of trajectories within bone corridors was confirmed in cadaver by extrapolating the aligned drill guide trajectory into the cadaver pelvis. CONCLUSION This study demonstrates the accuracy of image-guided robotic positioning without prior calibration of the C-arm gantry, facilitating the use of surgical robots with simpler imaging devices that cannot establish or maintain an offline calibration. Future work includes testing of the system in a clinical setting with trained orthopaedic surgeons and residents.
Affiliation(s)
- R C Vijayan
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- R Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- P Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- N M Sheth
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- P Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- S Vogt
- Siemens Healthineers, Erlangen, Germany
- G M Osgood
- Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, MD, USA
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
21
Han R, Uneri A, Vijayan RC, Wu P, Vagdargi P, Sheth N, Vogt S, Kleinszig G, Osgood GM, Siewerdsen JH. Fracture reduction planning and guidance in orthopaedic trauma surgery via multi-body image registration. Med Image Anal 2020; 68:101917. [PMID: 33341493 DOI: 10.1016/j.media.2020.101917] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Revised: 11/16/2020] [Accepted: 11/23/2020] [Indexed: 02/06/2023]
Abstract
PURPOSE Surgical reduction of pelvic fracture is a challenging procedure, and accurate restoration of natural morphology is essential to obtaining a positive functional outcome. The procedure often requires extensive preoperative planning, long fluoroscopic exposure time, and trial-and-error to achieve accurate reduction. We report a multi-body registration framework for reduction planning using preoperative CT and intraoperative guidance using routine 2D fluoroscopy that could help address such challenges. METHODS The framework starts with semi-automatic segmentation of fractured bone fragments in preoperative CT using continuous max-flow. For reduction planning, a multi-to-one registration is performed to register bone fragments to an adaptive template that adjusts to patient-specific bone shapes and poses. The framework further registers bone fragments to intraoperative fluoroscopy to provide 2D fluoroscopy guidance and/or 3D navigation relative to the reduction plan. The framework was investigated in three studies: (1) a simulation study of 40 CT images simulating three fracture categories (unilateral two-body, unilateral three-body, and bilateral two-body); (2) a proof-of-concept cadaver study to mimic the clinical scenario; and (3) a retrospective clinical study investigating feasibility in three cases of increasing severity and accuracy requirement. RESULTS Segmentation of simulated pelvic fracture demonstrated a Dice coefficient of 0.92±0.06. Reduction planning using the adaptive template achieved 2-3 mm and 2-3° error for the three fracture categories, significantly better than planning based on mirroring of contralateral anatomy. 3D-2D registration yielded ~2 mm and 0.5° accuracy, providing accurate guidance with respect to the preoperative reduction plan. The cadaver study and retrospective clinical study demonstrated comparable accuracy: ~0.90 Dice coefficient in segmentation, ~3 mm accuracy in reduction planning, and ~2 mm accuracy in 3D-2D registration. 
CONCLUSION The registration framework demonstrated planning and guidance accuracy within clinical requirements in both simulation and clinical feasibility studies for a broad range of fracture-dislocation patterns. Using routinely acquired preoperative CT and intraoperative fluoroscopy, the framework could improve the accuracy of pelvic fracture reduction, reduce radiation dose, and integrate well with common clinical workflow without the need for additional navigation systems.
Affiliation(s)
- R Han
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
- A Uneri
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
- R C Vijayan
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
- P Wu
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
- P Vagdargi
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD, United States
- N Sheth
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
- S Vogt
- Siemens Healthineers, Erlangen, Germany
- G M Osgood
- Department of Orthopaedic Surgery, The Johns Hopkins Hospital, Baltimore, MD, United States
- J H Siewerdsen
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States
22
Wu P, Sheth N, Sisniega A, Uneri A, Han R, Vijayan R, Vagdargi P, Kreher B, Kunze H, Kleinszig G, Vogt S, Lo SF, Theodore N, Siewerdsen JH. C-arm orbits for metal artifact avoidance (MAA) in cone-beam CT. Phys Med Biol 2020; 65:165012. [PMID: 32428891 PMCID: PMC8650760 DOI: 10.1088/1361-6560/ab9454] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Metal artifacts present a challenge to cone-beam CT (CBCT) image-guided surgery, obscuring visualization of metal instruments and adjacent anatomy-often in the very region of interest pertinent to the imaging/surgical tasks. We present a method to reduce the influence of metal artifacts by prospectively defining an image acquisition protocol-viz., the C-arm source-detector orbit-that mitigates metal-induced biases in the projection data. The metal artifact avoidance (MAA) method is compatible with simple mobile C-arms, does not require exact prior information on the patient or metal implants, and is consistent with 3D filtered backprojection (FBP), more advanced (e.g. polyenergetic) model-based image reconstruction (MBIR), and metal artifact reduction (MAR) post-processing methods. The MAA method consists of: (i) coarse localization of metal objects in the field-of-view (FOV) via two or more low-dose scout projection views and segmentation (e.g. a simple U-Net) in coarse backprojection; (ii) model-based prediction of metal-induced x-ray spectral shift for all source-detector vertices accessible by the imaging system (e.g. gantry rotation and tilt angles); and (iii) identification of a circular or non-circular orbit that reduces the variation in spectral shift. The method was developed, tested, and evaluated in a series of studies presenting increasing levels of complexity and realism, including digital simulations, phantom experiment, and cadaver experiment in the context of image-guided spine surgery (pedicle screw implants). The MAA method accurately predicted tilted circular and non-circular orbits that reduced the magnitude of metal artifacts in CBCT reconstructions. Realistic distributions of metal instrumentation were successfully localized (0.71 median Dice coefficient) from 2-6 low-dose scout views even in complex anatomical scenes. 
The MAA-predicted tilted circular orbits reduced root-mean-square error (RMSE) in 3D image reconstructions by 46%-70% and 'blooming' artifacts (apparent width of the screw shaft) by 20-45%. Non-circular orbits defined by MAA achieved a further ∼46% reduction in RMSE compared to the best (tilted) circular orbit. The MAA method presents a practical means to predict C-arm orbits that minimize spectral bias from metal instrumentation. Resulting orbits-either simple tilted circular orbits or more complex non-circular orbits that can be executed with a motorized multi-axis C-arm-exhibited substantial reduction of metal artifacts in raw CBCT reconstructions by virtue of higher fidelity projection data, which are in turn compatible with subsequent MAR post-processing and/or polyenergetic MBIR to further reduce artifacts.
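Step (iii) of the MAA method, selecting the orbit with the least variation in predicted spectral shift, reduces to a simple search once the shift map of step (ii) is available. The sketch below assumes a precomputed map over tilt and rotation angles (the map itself and its sampling are illustrative stand-ins for the model-based prediction):

```python
import numpy as np

def best_tilted_orbit(shift_map, tilts):
    """Select the tilted circular orbit with the least variation in
    metal-induced spectral shift. shift_map: rows = gantry tilt angles,
    columns = rotation angles along the orbit; tilts: tilt angle per row."""
    variation = shift_map.std(axis=1)   # spread of shift over each orbit
    i = int(np.argmin(variation))
    return tilts[i], float(variation[i])
```

A non-circular orbit would instead pick one vertex per rotation angle, e.g. minimizing the shift variation along the resulting source path.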
Affiliation(s)
- P Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
23
Han R, Uneri A, Ketcha M, Vijayan R, Sheth N, Wu P, Vagdargi P, Vogt S, Kleinszig G, Osgood GM, Siewerdsen JH. Multi-body 3D-2D registration for image-guided reduction of pelvic dislocation in orthopaedic trauma surgery. Phys Med Biol 2020; 65:135009. [PMID: 32217833 PMCID: PMC8647002 DOI: 10.1088/1361-6560/ab843c] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Surgical reduction of pelvic dislocation is a challenging procedure with poor long-term prognosis if reduction does not accurately restore natural morphology. The procedure often requires long fluoroscopic exposure times and trial-and-error to achieve accurate reduction. We report a method to automatically compute the target pose of dislocated bones in preoperative CT and provide 3D guidance of reduction using routine 2D fluoroscopy. A pelvic statistical shape model (SSM) and a statistical pose model (SPM) were formed from an atlas of 40 pelvic CT images. Multi-body bone segmentation was achieved by mapping the SSM to a preoperative CT via an active shape model. The target reduction pose for the dislocated bone is estimated by fitting the poses of undislocated bones to the SPM. Intraoperatively, multiple bones are registered to fluoroscopy images via 3D-2D registration to obtain 3D pose estimates from 2D images. The method was examined in three studies: (1) a simulation study of 40 CT images simulating a range of dislocation patterns; (2) a pelvic phantom study with controlled dislocation of the left innominate bone; (3) a clinical case study investigating feasibility in images acquired during pelvic reduction surgery. Experiments investigated the accuracy of registration as a function of initialization error (capture range), image quality (radiation dose and image noise), and field of view (FOV) size. The simulation study achieved target pose estimation with translational error of median 2.3 mm (1.4 mm interquartile range, IQR) and rotational error of 2.1° (1.3° IQR). 3D-2D registration yielded 0.3 mm (0.2 mm IQR) in-plane and 0.3 mm (0.2 mm IQR) out-of-plane translational error, with in-plane capture range of ±50 mm and out-of-plane capture range of ±120 mm. 
The phantom study demonstrated 3D-2D target registration error of 2.5 mm (1.5 mm IQR), and the method was robust over a large dose range, down to 5 µGy/frame (an order of magnitude lower than the nominal fluoroscopic dose). The clinical feasibility study demonstrated accurate registration with both preoperative and intraoperative radiographs, yielding 3.1 mm (1.0 mm IQR) projection distance error with robust performance for FOV ranging from 340 × 340 mm² to 170 × 170 mm² (at the image plane). The method demonstrated accurate estimation of the target reduction pose in simulation, phantom, and a clinical feasibility study for a broad range of dislocation patterns, initialization error, dose levels, and FOV size. The system provides a novel means of guidance and assessment of pelvic reduction from routinely acquired preoperative CT and intraoperative fluoroscopy. The method has the potential to reduce radiation dose by minimizing trial-and-error and to improve outcomes by guiding more accurate reduction of joint dislocations.
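The statistical shape model (SSM) at the heart of the segmentation step constrains an observed shape to the span of learned modes, x ≈ mean + Φb. A minimal sketch is shown below, assuming orthonormal modes scaled to unit variance and an illustrative ±3σ plausibility bound (the paper's exact constraints are not specified):

```python
import numpy as np

def fit_shape_model(x, mean, modes, n_std=3.0):
    """Project an observed (flattened) landmark shape onto a statistical
    shape model: x ~ mean + modes @ b, with mode weights b clipped to
    +/- n_std to keep the result statistically plausible."""
    b = modes.T @ (x - mean)          # least-squares weights (orthonormal modes)
    b = np.clip(b, -n_std, n_std)     # constrain to plausible shape space
    return mean + modes @ b, b
```

The statistical pose model (SPM) plays the analogous role for bone poses, predicting the target pose of the dislocated bone from the undislocated ones.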
Affiliation(s)
- R Han
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- A Uneri
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- M Ketcha
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- R Vijayan
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- N Sheth
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- P Wu
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- P Vagdargi
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
- S Vogt
- Siemens Healthineers, Erlangen, Germany
- G M Osgood
- Department of Orthopaedic Surgery, The Johns Hopkins Hospital, Baltimore, MD, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
24
Wu P, Sisniega A, Stayman JW, Zbijewski W, Foos D, Wang X, Khanna N, Aygun N, Stevens RD, Siewerdsen JH. Cone-beam CT for imaging of the head/brain: Development and assessment of scanner prototype and reconstruction algorithms. Med Phys 2020; 47:2392-2407. [PMID: 32145076 PMCID: PMC7343627 DOI: 10.1002/mp.14124] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Revised: 02/06/2020] [Accepted: 02/21/2020] [Indexed: 01/14/2023] Open
Abstract
PURPOSE Our aim was to develop a high-quality, mobile cone-beam computed tomography (CBCT) scanner for point-of-care detection and monitoring of low-contrast, soft-tissue abnormalities in the head/brain, such as acute intracranial hemorrhage (ICH). This work presents an integrated framework of hardware and algorithmic advances for improving soft-tissue contrast resolution and an evaluation of its technical performance with human subjects. METHODS Four configurations of a CBCT scanner prototype were designed and implemented to investigate key aspects of hardware (including system geometry, antiscatter grid, and bowtie filter) and technique protocols. An integrated software pipeline (i.e., a serial cascade of algorithms) was developed for artifact correction (image lag, glare, beam hardening, and x-ray scatter), motion compensation, and three-dimensional (3D) image reconstruction [penalized weighted least squares (PWLS), with a hardware-specific statistical noise model]. The PWLS method was extended in this work to accommodate multiple, independently moving regions with different resolution (to address both motion compensation and image truncation). Imaging performance was evaluated quantitatively and qualitatively with 41 human subjects in the neurosciences critical care unit (NCCU) at our institution. RESULTS The progression of four scanner configurations exhibited systematic improvement in the quality of raw data via variations in system geometry (source-detector distance), antiscatter grid, and bowtie filter. Quantitative assessment of CBCT images in 41 subjects demonstrated: ~70% reduction in image nonuniformity with artifact correction methods (lag, glare, beam hardening, and scatter); ~40% reduction in motion-induced streak artifacts via the multi-motion compensation method; and ~15% improvement in soft-tissue contrast-to-noise ratio (CNR) for PWLS compared to filtered backprojection (FBP) at matched resolution. 
Each of these components was important to improving contrast resolution for point-of-care cranial imaging. CONCLUSIONS This work presents the first application of a high-quality, point-of-care CBCT system for imaging of the head/brain in a neurological critical care setting. Hardware configuration iterations and an integrated software pipeline for artifact correction and PWLS reconstruction mitigated artifacts and noise to achieve image quality that could be valuable for point-of-care detection and monitoring of a variety of intracranial abnormalities, including ICH and hydrocephalus.
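The PWLS objective being minimized has the familiar form (Ax − y)ᵀW(Ax − y) + βR(x). The sketch below evaluates that objective with a small dense stand-in for the projector and a simple first-difference quadratic penalty in place of the paper's regularizer:

```python
import numpy as np

def pwls_objective(A, x, y, w, beta):
    """Value of a penalized weighted least-squares objective:
    (Ax - y)' W (Ax - y) + beta * R(x), with diagonal statistical
    weights w and a quadratic first-difference roughness penalty R."""
    r = A @ x - y
    data_term = float(r @ (w * r))                 # weighted data fidelity
    roughness = float(np.sum(np.diff(x) ** 2))     # quadratic penalty R(x)
    return data_term + beta * roughness
```

In the paper this objective is minimized iteratively (with the hardware-specific noise model supplying w); here it is only evaluated, to make the tradeoff between fidelity and smoothness concrete.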
Affiliation(s)
- P Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- J W Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- D Foos
- Carestream Health, Rochester, NY, 14608, USA
- X Wang
- Carestream Health, Rochester, NY, 14608, USA
- N Khanna
- Department of Radiology, Johns Hopkins University, Baltimore, MD, 21205, USA
- N Aygun
- Department of Radiology, Johns Hopkins University, Baltimore, MD, 21205, USA
- R D Stevens
- Department of Radiology, Johns Hopkins University, Baltimore, MD, 21205, USA
- Department of Anesthesiology and Critical Care Medicine, Johns Hopkins University, Baltimore, MD, 21205, USA
- Department of Neurology, Johns Hopkins University, Baltimore, MD, 21205, USA
- Department of Neurosurgery, Johns Hopkins University, Baltimore, MD, 21205, USA
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- Department of Radiology, Johns Hopkins University, Baltimore, MD, 21205, USA
- Department of Neurosurgery, Johns Hopkins University, Baltimore, MD, 21205, USA
25
Vagdargi P, Uneri A, Sheth N, Sisniega A, De Silva T, Osgood GM, Siewerdsen JH. Calibration and Registration of a Freehand Video-Guided Surgical Drill for Orthopaedic Trauma. Proc SPIE Int Soc Opt Eng 2020; 11315. [PMID: 32476703 DOI: 10.1117/12.2550001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Pelvic trauma surgical procedures rely heavily on guidance with 2D fluoroscopy views for navigation in complex bone corridors. This "fluoro-hunting" paradigm results in extended radiation exposure and possibly suboptimal guidewire placement from limited visualization of the fracture site with overlapping anatomy in 2D fluoroscopy. A novel computer vision-based navigation system for freehand guidewire insertion is proposed. The navigation framework is compatible with the rapid workflow in trauma surgery and bridges the gap between intraoperative fluoroscopy and preoperative CT images. The system uses a drill-mounted camera to detect and track the poses of simple multimodality (optical/radiographic) markers for registration of the drill axis to fluoroscopy and, in turn, to CT. Surgical navigation is achieved with real-time display of the drill axis position on fluoroscopy views and, optionally, in 3D on the preoperative CT. The camera was corrected for lens distortion effects and calibrated for 3D pose estimation. Custom marker jigs were constructed to calibrate the drill axis and tooltip with respect to the camera frame. A testing platform for evaluation of the navigation system was developed, including a robotic arm for precise, repeatable placement of the drill. Experiments were conducted for hand-eye calibration between the drill-mounted camera and the robot using the Park and Martin solver. Experiments using checkerboard calibration demonstrated subpixel accuracy [-0.01 ± 0.23 px] for camera distortion correction. The drill axis was calibrated using a cylindrical model and demonstrated sub-mm accuracy [0.14 ± 0.70 mm] and sub-degree angular deviation.
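The lens-distortion correction mentioned above inverts a radial polynomial model of the kind fit from checkerboard images. The forward model is sketched below with two illustrative coefficients (the actual calibration procedure and coefficient values are not given in the abstract):

```python
import numpy as np

def apply_radial_distortion(pts, k1, k2, center):
    """Forward two-coefficient radial distortion model:
    distorted = center + (p - center) * (1 + k1*r^2 + k2*r^4),
    with r the distance from the distortion center in pixels.
    Distortion *correction* numerically inverts this mapping."""
    p = np.asarray(pts, dtype=float) - center
    r2 = np.sum(p ** 2, axis=1, keepdims=True)      # squared radius per point
    return center + p * (1.0 + k1 * r2 + k2 * r2 ** 2)
```

With k1 = k2 = 0 the mapping is the identity; positive k1 pushes points radially outward from the center.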
Affiliation(s)
- P Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA 21218
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA 21218
- N Sheth
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA 21218
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA 21218
- T De Silva
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA 21218
- G M Osgood
- Department of Orthopedic Surgery, Johns Hopkins Medicine, Baltimore, MD, USA 21218
- J H Siewerdsen
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA 21218
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA 21218
26
Liu SZ, Cao Q, Osgood GM, Siewerdsen JH, Stayman JW, Zbijewski W. Quantitative Assessment of Weight-Bearing Fracture Biomechanics Using Extremity Cone-Beam CT. Proc SPIE Int Soc Opt Eng 2020; 11317:113170I. [PMID: 33612913 PMCID: PMC7891844 DOI: 10.1117/12.2549768] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
PURPOSE We investigate an application of multisource extremity cone-beam CT (CBCT), with the capability of weight-bearing tomographic imaging, to obtain quantitative measurements of load-induced deformation of metal internal fixation hardware (e.g. a tibial plate). Such measurements are desirable to improve the detection of delayed fusion or non-union of fractures, potentially facilitating earlier return to weight-bearing activities. METHODS To measure the deformation, we perform a deformable 3D-2D registration of a prior model of the implant to its CBCT projections under load-bearing. This Known-Component Registration (KC-Reg) framework avoids potential errors that emerge when the deformation is estimated directly from 3D reconstructions with metal artifacts. The 3D-2D registration involves a free-form deformable (FFD) point cloud model of the implant and a 3D cubic B-spline representation of the deformation. Gradient correlation is used as the optimization metric for the registration. The proposed approach was tested in experimental studies on the extremity CBCT system. A custom jig was designed to apply controlled axial loads to a fracture model, emulating weight-bearing imaging scenarios. Performance evaluation involved a Sawbones tibia phantom with an ~4 mm fracture gap. The model was fixed with a locking plate and imaged under five loading conditions. To investigate performance in the presence of confounding background gradients, additional experiments were performed with a pre-deformed femoral plate placed in a water bath with Ca bone mineral density inserts. Errors were measured using eight reference BBs for the tibial plate, and surface point distances for the femoral plate, where a prior model of the deformed implant was available for comparison. 
RESULTS Both in the loaded tibial plate case and for the femoral plate with confounding background gradients, the proposed KC-Reg framework estimated implant deformations with errors of <0.2 mm for the majority of the investigated deformation magnitudes (error range 0.14 - 0.25 mm). The accuracy was comparable between 3D-2D registrations performed from 12 x-ray views and registrations obtained from as few as 3 views. This was likely enabled by the unique three-source x-ray unit on the extremity CBCT scanner, which implements two off-central-plane focal spots that provided oblique views of the field-of-view to aid implant pose estimation. CONCLUSION Accurate measurements of fracture hardware deformations under physiological weight-bearing are feasible using an extremity CBCT scanner and FFD 3D-2D registration. The resulting deformed implant models can be incorporated into tomographic reconstructions to reduce metal artifacts and improve quantification of the mineral content of fracture callus in CBCT volumes.
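The gradient correlation objective named in the METHODS can be written as the mean normalized cross-correlation of orthogonal image gradients. A minimal NumPy sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two arrays (zero-mean)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def gradient_correlation(fixed, moving):
    """Mean NCC of the row- and column-gradients of two 2D images."""
    gx_f, gy_f = np.gradient(fixed)
    gx_m, gy_m = np.gradient(moving)
    return 0.5 * (ncc(gx_f, gx_m) + ncc(gy_f, gy_m))
```

Values near 1 indicate well-aligned edge structure between simulated and measured projections, which makes the metric robust to low-frequency intensity mismatch.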
Affiliation(s)
- S. Z. Liu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205
- Q. Cao
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205
- G. M. Osgood
- Department of Orthopedic Surgery, Johns Hopkins Hospital, Baltimore, MD 21205
- J. H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205
- Russell H. Morgan Department of Radiology, Johns Hopkins Hospital, Baltimore, MD 21205
- J. W. Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205
- W. Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205

27
Vijayan RC, Han R, Wu P, Sheth NM, Ketcha MD, Vagdargi P, Vogt S, Kleinszig G, Osgood GM, Siewerdsen JH, Uneri A. Image-Guided Robotic K-Wire Placement for Orthopaedic Trauma Surgery. Proc SPIE Int Soc Opt Eng 2020; 11315:113151A. [PMID: 36082206] [PMCID: PMC9450105] [DOI: 10.1117/12.2549713]
Abstract
PURPOSE We report the initial development of an image-based solution for robotic assistance of pelvic fracture fixation. The approach uses intraoperative radiographs, preoperative CT, and an end effector of known design to align the robot with target trajectories in CT. The method extends previous work to solve the robot-to-patient registration from a single radiographic view (without C-arm rotation) and addresses the workflow challenges associated with integrating robotic assistance in orthopaedic trauma surgery in a form that could be broadly applicable to isocentric or non-isocentric C-arms. METHODS The proposed method uses 3D-2D known-component registration to localize a robot end effector with respect to the patient by: (1) exploiting the extended size and complex features of pelvic anatomy to register the patient; and (2) capturing multiple end effector poses using precise robotic manipulation. These transformations, along with an offline hand-eye calibration of the end effector, are used to calculate target robot poses that align the end effector with planned trajectories in the patient CT. Geometric accuracy of the registrations was independently evaluated for the patient and the robot in phantom studies. RESULTS The resulting translational difference between the ground truth and patient registrations of a pelvis phantom using a single (AP) view was 1.3 mm, compared to 0.4 mm using dual (AP+Lat) views. Registration of the robot in air (i.e., no background anatomy) with five unique end effector poses achieved a mean translational difference of ~1.4 mm for K-wire placement in the pelvis, comparable to tracker-based margins of error (commonly ~2 mm). CONCLUSIONS The proposed approach is feasible based on the accuracy of the patient and robot registrations and is a preliminary step in developing an image-guided robotic guidance system that more naturally fits the workflow of fluoroscopically guided orthopaedic trauma surgery.
Future work will involve end-to-end development of the proposed guidance system and assessment of the system with delivery of K-wires in cadaver studies.
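The pose chaining described in the METHODS (patient registration, robot registration, planned trajectory) reduces to composition of homogeneous transforms. A hypothetical sketch with 4x4 matrices; the frame names and function signatures are illustrative assumptions, not the authors' API:

```python
import numpy as np

def rt(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def target_robot_pose(T_patient_to_carm, T_robot_to_carm, T_plan_in_ct):
    """Express a planned tool pose (defined in the patient CT frame) in the
    robot base frame by chaining the two 3D-2D registrations."""
    return np.linalg.inv(T_robot_to_carm) @ T_patient_to_carm @ T_plan_in_ct
```

In practice an additional hand-eye calibration transform would be composed on the right to map the commanded flange pose to the physical end effector tip.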
Affiliation(s)
- R. C. Vijayan
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- R. Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- P. Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- N. M. Sheth
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- M. D. Ketcha
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- P. Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore MD
- S. Vogt
- Siemens Healthineers, Forchheim, Germany
- G. M. Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Medicine, Baltimore MD
- J. H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- Department of Computer Science, Johns Hopkins University, Baltimore MD
- A. Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD

28
Doerr SA, Uneri A, Huang Y, Jones CK, Zhang X, Ketcha MD, Helm PA, Siewerdsen JH. Data-Driven Detection and Registration of Spine Surgery Instrumentation in Intraoperative Images. Proc SPIE Int Soc Opt Eng 2020; 11315:113152P. [PMID: 36082205] [PMCID: PMC9450103] [DOI: 10.1117/12.2550052]
Abstract
PURPOSE Conventional model-based 3D-2D registration algorithms can be challenged by limited capture range, model validity, and stringent intraoperative runtime requirements. In this work, a deep convolutional neural network was used to provide robust initialization of a registration algorithm (known-component registration, KC-Reg) for 3D localization of spine surgery implants, combining the speed and global support of data-driven approaches with the previously demonstrated accuracy of model-based registration. METHODS The approach uses a Faster R-CNN architecture to detect and localize a broad variety and orientation of spinal pedicle screws in clinical images. Training data were generated using projections from 17 clinical cone-beam CT scans and a library of screw models to simulate implants. Network output was processed to provide screw count and 2D poses. The network was tested on two test datasets of 2,000 images, each depicting real anatomy and realistic spine surgery instrumentation - one dataset involving the same patient data as in the training set (but with different screws, poses, image noise, and affine transformations) and one dataset with five patients unseen in the training data. Assessment of device detection was quantified in terms of accuracy and specificity, and localization accuracy was evaluated in terms of intersection-over-union (IOU) and distance between true and predicted bounding box coordinates. RESULTS The overall accuracy of pedicle screw detection was ~86.6% (85.3% for the same-patient dataset and 87.8% for the many-patient dataset), suggesting that the screw detection network performed reasonably well irrespective of disparate, complex anatomical backgrounds. The precision of screw detection was ~92.6% (95.0% and 90.2% for the respective same-patient and many-patient datasets). The accuracy of screw localization was within 1.5 mm (median difference of bounding box coordinates), and median IOU exceeded 0.85.
For purposes of initializing a 3D-2D registration algorithm, the accuracy was observed to be well within the typical capture range of KC-Reg. CONCLUSIONS Initial evaluation of network performance indicates sufficient accuracy to integrate with algorithms for implant registration, guidance, and verification in spine surgery. Such capability is of potential use in surgical navigation, robotic assistance, and data-intensive analysis of implant placement in large retrospective datasets. Future work includes correspondence of multiple views, 3D localization, screw classification, and expansion of the training dataset to a broader variety of anatomical sites, number of screws, and types of implants.
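The intersection-over-union metric used above for localization assessment is standard; a minimal sketch for axis-aligned boxes in (x0, y0, x1, y1) form:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x0, y0, x1, y1)."""
    x0 = max(box_a[0], box_b[0])
    y0 = max(box_a[1], box_b[1])
    x1 = min(box_a[2], box_b[2])
    y1 = min(box_a[3], box_b[3])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)   # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A median IOU above 0.85 means predicted and true boxes overlap on more than 85% of their combined footprint for at least half the detections.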
Affiliation(s)
- S. A. Doerr
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- A. Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- Y. Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- C. K. Jones
- Department of Computer Science, Johns Hopkins University, Baltimore MD
- X. Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- M. D. Ketcha
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- J. H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- Department of Computer Science, Johns Hopkins University, Baltimore MD

29
Shi G, Subramanian S, Cao Q, Demehri S, Siewerdsen JH, Zbijewski W. Application of a Novel Ultra-High Resolution Multi-Detector CT in Quantitative Imaging of Trabecular Microstructure. Proc SPIE Int Soc Opt Eng 2020; 11317:113171E. [PMID: 33597792] [PMCID: PMC7885907] [DOI: 10.1117/12.2552385]
Abstract
PURPOSE To evaluate the performance of a novel ultra-high resolution multi-detector CT scanner (Canon Aquilion Precision UHR CT), capable of visualizing ~150 μm details, in quantitative assessment of bone microarchitecture. Compared to conventional CT, the spatial resolution of UHR CT begins to approach the size of the trabeculae. This might enable measurements of microstructural correlates of osteoporosis, osteoarthritis, and other bone disease. METHODS The UHR CT system features a 160-row x-ray detector with 250×250 μm pixels (measured at isocenter) and a custom-designed x-ray source with a 0.4×0.5 mm focal spot. Visualization of high contrast details down to ~150 μm has been achieved on this device, which is now commercially available for clinical use. To evaluate the performance of UHR CT in quantification of bone microstructure, we imaged a variety of human bone samples (including ulna, radius, and vertebrae) embedded in a ~16 cm diameter plastic cylinder and in an anthropomorphic thorax phantom (QRM-Thorax, QRM GmbH). Helical UHR CT scans (120 kVp tube voltage) were acquired at exposures from 375 mAs down to 5 mAs. For comparison, the samples were also imaged using a Normal Resolution (NR) mode available on the scanner, involving 500 μm slice thickness, exposure of 50 mAs, and a focal spot of 0.6×1.3 mm. We obtained micro-CT (μCT) of the bone samples at ~28 μm voxel size as a gold-standard reference. Geometric measurements of bone microstructure were performed in 17 regions of interest (ROIs) distributed throughout the bones of the phantoms; image registration was used to place the ROIs at corresponding locations in the UHR CT and NR CT. Trabecular thickness Tb.Th, spacing Tb.Sp, and Bone Volume fraction BvTv were obtained. The UHR and NR imaging protocols were compared in terms of correlations to μCT and error of trabecular measurements. The effect of dose on trabecular morphometry was also studied for the UHR CT.
Furthermore, we evaluated the sensitivity of texture features of trabecular bone (recently proposed as an alternative to geometric indices of microstructure) to imaging protocol. Image texture evaluation was performed using ~150 regions of interest (ROIs) across all bone samples. Three-dimensional Gray Level Co-occurrence Matrix (GLCM) and Gray Level Run Length Matrix (GLRM) features were extracted for each ROI. We analyzed correlation and concordance correlation coefficient (CCC) of the mean ROI values of texture features obtained using the UHR and NR modes. RESULTS UHR CT reconstructions of bone samples clearly demonstrated improved visualization of the trabeculae compared to NR CT. UHR CT achieved substantially better correlations for all three metrics of bone microstructure, in particular for BvTv (correlation coefficient of 0.91 for UHR CT compared to 0.84 for NR CT) and TbSp (correlation of 0.74 for UHR CT and 0.047 for NR CT). The error obtained with UHR CT was generally smaller than that of NR CT. For TbSp, the mean deviation from μCT (averaged across all bone samples) was only ~0.07 for UHR CT, compared to 0.25 for NR CT. Analysis of reproducibility of texture features of trabecular bone between UHR CT and NR CT revealed fair correlations (>0.7) for the majority of GLCM features, but relatively poor CCC (e.g. 0.02 for Energy and 0.04 for Entropy). The magnitude of texture metrics is particularly affected by the enhanced spatial resolution of UHR CT. CONCLUSION The recently introduced UHR CT achieves improved correlation and reduced error in measurements of trabecular bone microstructure compared to conventional resolution CT. Future development of diagnostic strategies based on textural biomarkers derived from UHR CT will need to account for potential sensitivity of texture features to image resolution.
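The concordance correlation coefficient (CCC) reported alongside correlation can be sketched as follows (Lin's CCC, using population moments). Unlike Pearson correlation, it penalizes systematic offsets and scale differences, which is how texture features can show fair correlation (>0.7) yet poor CCC between the two resolution modes:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement sets."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```

For example, adding a constant bias to perfectly correlated data leaves Pearson correlation at 1 but drops the CCC below 1.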
Affiliation(s)
- G Shi
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA 21205
- S Subramanian
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA 21205
- Q Cao
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA 21205
- S Demehri
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD USA 21287
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA 21205
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD USA 21287
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA 21205

30
Sheth NM, De Silva T, Uneri A, Ketcha M, Han R, Vijayan R, Osgood GM, Siewerdsen JH. A mobile isocentric C-arm for intraoperative cone-beam CT: Technical assessment of dose and 3D imaging performance. Med Phys 2020; 47:958-974. [DOI: 10.1002/mp.13983]
Affiliation(s)
- N. M. Sheth
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- T. De Silva
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- A. Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- M. Ketcha
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- R. Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- R. Vijayan
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- G. M. Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Medical Institutions, Baltimore, MD, USA
- J. H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA

31
Siewerdsen JH, Uneri A, Hernandez AM, Burkett GW, Boone JM. Cone-beam CT dose and imaging performance evaluation with a modular, multipurpose phantom. Med Phys 2019; 47:467-479. [DOI: 10.1002/mp.13952]
Affiliation(s)
- J. H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- A. Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- A. M. Hernandez
- Department of Radiology, University of California – Davis, Sacramento, CA 95817, USA
- G. W. Burkett
- Department of Radiology, University of California – Davis, Sacramento, CA 95817, USA
- J. M. Boone
- Department of Radiology, University of California – Davis, Sacramento, CA 95817, USA

32
Subramanian S, Brehler M, Cao Q, Quevedo Gonzalez FJ, Breighner RE, Carrino JA, Wright T, Yorkston J, Siewerdsen JH, Zbijewski W. Quantitative Evaluation of Bone Microstructure using High-Resolution Extremity Cone-Beam CT with a CMOS Detector. Proc SPIE Int Soc Opt Eng 2019; 10953. [PMID: 31814656] [DOI: 10.1117/12.2515504]
Abstract
Purpose A high-resolution cone-beam CT (CBCT) system for extremity imaging has been developed using a custom complementary metal-oxide-semiconductor (CMOS) x-ray detector. The system has spatial resolution capability beyond that of recently introduced clinical orthopedic CBCT. We evaluate performance of this new scanner in quantifying trabecular microstructure in subchondral bone of the knee. Methods The high-resolution scanner uses the same mechanical platform as the commercially available Carestream OnSight 3D extremity CBCT, but replaces the conventional amorphous silicon flat-panel detector (a-Si:H FPD with 0.137 mm pixels and a ~0.7 mm thick scintillator) with a Dalsa Xineos3030 CMOS detector (0.1 mm pixels and a custom 0.4 mm scintillator). The CMOS system demonstrates ~40% improved spatial resolution (FWHM of a ~0.1 mm tungsten wire) and ~4× faster scan time than FPD-based extremity CBCT (FPD-CBCT). To investigate potential benefits of this enhanced spatial resolution in quantitative assessment of bone microstructure, 26 trabecular core samples were obtained from four cadaveric tibias and imaged using FPD-CBCT (75 μm voxels), CMOS-CBCT (75 μm voxels), and reference micro-CT (μCT, 15 μm voxels). CBCT bone segmentations were obtained using local Bernsen's thresholding combined with global histogram-based pre-thresholding; μCT segmentation involved Otsu's method. Measurements of trabecular thickness (Tb.Th), spacing (Tb.Sp), number (Tb.N) and bone volume (BV/TV) were performed in registered regions of interest in the segmented CBCT and μCT reconstructions. Results CMOS-CBCT achieved noticeably improved delineation of trabecular detail compared to FPD-CBCT. Correlations with reference μCT for metrics of bone microstructure were better for CMOS-CBCT than FPD-CBCT, in particular for Tb.Th (increase in Pearson correlation from 0.84 with FPD-CBCT to 0.96 with CMOS-CBCT) and Tb.Sp (increase from 0.80 to 0.85). 
This improved quantitative performance of CMOS-CBCT is accompanied by a reduction in scan time, from ~60 sec for a clinical high resolution protocol on FPD-CBCT to ~17 sec for CMOS-CBCT. Conclusion The CMOS-based extremity CBCT prototype achieves improved performance in quantification of bone microstructure, while retaining other diagnostic capabilities of its FPD-based precursor, including weight-bearing imaging. The new system offers a promising platform for quantitative imaging of skeletal health in osteoporosis and osteoarthritis.
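Otsu's method, used above for the μCT segmentation, selects the histogram threshold that maximizes between-class variance. A compact sketch (illustrative, not the study code):

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: return the bin center maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()          # normalized histogram
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                            # class-0 (below threshold) weight
    mu = np.cumsum(p * centers)                  # cumulative mean
    mu_t = mu[-1]                                # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros(nbins)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]
```

Local methods such as Bernsen's (used for the CBCT segmentations) instead compute a threshold per neighborhood, which is better suited to the lower, spatially varying contrast of CBCT.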
Affiliation(s)
- S Subramanian
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- M Brehler
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- Q Cao
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- R E Breighner
- Department of Radiology and Imaging, Hospital for Special Surgery, New York, NY USA
- J A Carrino
- Department of Radiology and Imaging, Hospital for Special Surgery, New York, NY USA
- T Wright
- Biomechanics Laboratory, Hospital for Special Surgery, New York, NY USA
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD USA
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA

33
Abstract
Intraoperative cone-beam CT (CBCT) is increasingly used for surgical navigation and validation of device placement. In spinal deformity correction, CBCT provides visualization of pedicle screws and fixation rods in relation to adjacent anatomy. This work reports and evaluates a method that uses prior information regarding such surgical instrumentation for improved metal artifact reduction (MAR). The known-component MAR (KC-MAR) approach achieves precise localization of instrumentation in projection images using rigid or deformable 3D-2D registration of component models, thereby overcoming residual errors associated with segmentation-based methods. Projection data containing metal components are processed via 2D inpainting of the detector signal, followed by 3D filtered back-projection (FBP). Phantom studies were performed to identify nominal algorithm parameters and quantitatively investigate performance over a range of component material composition and size. A cadaver study emulating screw and rod placement in spinal deformity correction was conducted to evaluate performance under realistic clinical imaging conditions. KC-MAR demonstrated reduction in artifacts (standard deviation in voxel values) across a range of component types and dose levels, reducing the artifact to 5-10 HU. Accurate component delineation was demonstrated for rigid (screw) and deformable (rod) models with sub-mm registration errors, and a single-pixel dilation of the projected components was found to compensate for partial-volume effects. Artifacts associated with spine screws and rods were reduced by 40%-80% in cadaver studies, and the resulting images demonstrated markedly improved visualization of instrumentation (e.g. screw threads) within cortical margins. The KC-MAR algorithm combines knowledge of surgical instrumentation with 3D image reconstruction in a manner that overcomes potential pitfalls of segmentation. 
The approach is compatible with FBP (thereby maintaining simplicity in a manner consistent with surgical workflow) or with more sophisticated model-based reconstruction methods that could further improve image quality and/or help reduce radiation dose.
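The 2D inpainting of the detector signal under the projected component mask can be illustrated with simple row-wise linear interpolation. The authors' interpolation scheme is not specified here, so this is only a sketch of the general idea:

```python
import numpy as np

def inpaint_rows(proj, mask):
    """Replace masked detector pixels by 1D linear interpolation along each row.

    proj: 2D projection image; mask: boolean array, True where the (dilated)
    projected metal component covers the detector.
    """
    out = proj.astype(float).copy()
    cols = np.arange(proj.shape[1])
    for r in range(proj.shape[0]):
        bad = mask[r]
        good = ~bad
        if bad.any() and good.any():
            # Interpolate masked columns from the surrounding unmasked signal.
            out[r, bad] = np.interp(cols[bad], cols[good], out[r, good])
    return out
```

The single-pixel dilation noted in the abstract would be applied to `mask` before inpainting, to cover partial-volume pixels at the component boundary.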
Affiliation(s)
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- X Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- T Yi
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- J W Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- P A Helm
- Medtronic, Littleton, MA 01460, United States of America
- G M Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
- N Theodore
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America

34
Vijayan R, De Silva T, Han R, Zhang X, Uneri A, Doerr S, Ketcha M, Perdomo-Pantoja A, Theodore N, Siewerdsen JH. Automatic pedicle screw planning using atlas-based registration of anatomy and reference trajectories. Phys Med Biol 2019; 64:165020. [PMID: 31247607] [PMCID: PMC8650759] [DOI: 10.1088/1361-6560/ab2d66]
Abstract
An algorithm for automatic spinal pedicle screw planning is reported and evaluated in simulation and first clinical studies. A statistical atlas of the lumbar spine (N = 40 members) was constructed for active shape model (ASM) registration of target vertebrae to an unsegmented patient CT. The atlas was augmented to include 'reference' trajectories through the pedicles as defined by a spinal neurosurgeon. Following ASM registration, the trajectories are transformed to the patient CT and accumulated to define a patient-specific screw trajectory, diameter, and length. The algorithm was evaluated in leave-one-out analysis (N = 40 members) and for the first time in a clinical study (N = 5 patients undergoing cone-beam CT (CBCT) guided spine surgery), and in simulated low-dose CBCT images. ASM registration achieved (2.0 ± 0.5) mm root-mean-square-error (RMSE) in surface registration in 96% of cases, with outliers owing to limitations in CT image quality (high noise/slice thickness). Trajectory centerlines were conformant to the pedicle in 95% of cases. For all non-breaching trajectories, automatically defined screw diameter and length were similarly conformant to the pedicle and vertebral body (98.7%, Grade A/B). The algorithm performed similarly in CBCT clinical studies (93% centerline and screw conformance) and was consistent at the lowest dose levels tested. Average runtime in planning five-level (lumbar) bilateral screws (ten trajectories) was (312.1 ± 104.0) s. The runtime per level for ASM registration was (41.2 ± 39.9) s, and the runtime per trajectory was (4.1 ± 0.8) s, suggesting a runtime of ~(45.3 ± 39.9) s with a more fully parallelized implementation. The algorithm demonstrated accurate, automatic definition of pedicle screw trajectories, diameter, and length in CT images of the spine without segmentation. 
The studies support translation to clinical studies in free-hand or robot-assisted spine surgery, quality assurance, and data analytics in which fast trajectory definition is a benefit to workflow.
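At its core, the active shape model (ASM) underlying the atlas represents each shape as the mean plus a constrained combination of principal modes of variation. A minimal sketch (illustrative; a full ASM also alternates this projection with image-driven landmark updates):

```python
import numpy as np

def build_shape_model(shapes, n_modes=2):
    """PCA shape model from training shapes (each a flattened landmark vector)."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    # Principal modes via SVD of the centered training matrix.
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = (s ** 2) / (len(X) - 1)                 # variance explained per mode
    return mean, Vt[:n_modes], var[:n_modes]

def fit_shape(model, target, limit=3.0):
    """Project a target shape onto the model, clamping each mode coefficient
    to +/- limit standard deviations (the usual ASM plausibility constraint)."""
    mean, modes, var = model
    b = modes @ (np.asarray(target, dtype=float) - mean)
    b = np.clip(b, -limit * np.sqrt(var), limit * np.sqrt(var))
    return mean + modes.T @ b
```

The +/-3 sigma clamp is what keeps the fitted vertebra within the space of plausible anatomy even in noisy or low-dose images.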
Affiliation(s)
- R Vijayan
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America

35
Brehler M, Islam A, Vogelsang L, Yang D, Sehnert W, Shakoor D, Demehri S, Siewerdsen JH, Zbijewski W. Coupled Active Shape Models for Automated Segmentation and Landmark Localization in High-Resolution CT of the Foot and Ankle. Proc SPIE Int Soc Opt Eng 2019; 10953. [PMID: 31337927] [DOI: 10.1117/12.2515022]
Abstract
Purpose We develop an Active Shape Model (ASM) framework for automated bone segmentation and anatomical landmark localization in weight-bearing Cone-Beam CT (CBCT). To achieve a robust shape model fit in narrow joint spaces of the foot (0.5 - 1 mm), a new approach for incorporating proximity constraints in ASM (coupled ASM, cASM) is proposed. Methods In cASM, shape models of multiple adjacent foot bones are jointly fit to the CBCT volume. This coupling enables checking for proximity between the evolving shapes to avoid situations where a conventional single-bone ASM might erroneously fit to articular surfaces of neighbouring bones. We used 21 extremity CBCT scans of the weight-bearing foot to compare segmentation and landmark localization accuracy of ASM and cASM in leave-one-out validation. Each scan was used as a test image once; shape models of calcaneus, talus, navicular, and cuboid were built from manual surface segmentations of the remaining 20 scans. The models were augmented with seven anatomical landmarks used for common measurements of foot alignment. The landmarks were identified in the original CBCT volumes and mapped onto mean bone shape surfaces. ASM and cASM were run for 100 iterations, and the number of principal shape components was increased every 10 iterations. Automated landmark localization was achieved by applying known point correspondences between landmark vertices on the mean shape and vertices of the final active shape segmentation of the test image. Results Root Mean Squared (RMS) error of bone surface segmentation improved from 3.6 mm with conventional ASM to 2.7 mm with cASM. Furthermore, cASM achieved convergence (no change in RMS error with iteration) after ~40 iterations of shape fitting, compared to ~60 iterations for ASM. Distance error in landmark localization was 25% to 55% lower (depending on the landmark) with cASM than with ASM. 
The importance of using a coupled model is underscored by the finding that cASM detected and corrected collisions between evolving shapes in 50% to 80% (depending on the bone) of shape model fits. Conclusion The proposed cASM framework improves accuracy of shape model fits, especially in complexes of tightly interlocking, articulated joints. The approach enables automated anatomical analysis in volumetric imaging of the foot and ankle, where narrow joint spaces challenge conventional shape models.
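The proximity constraint in cASM amounts to checking whether the evolving surfaces approach closer than the expected joint space. A brute-force sketch on surface point clouds (illustrative; the default `gap` value is an assumption based on the 0.5 - 1 mm joint spaces quoted above, and a real implementation would use a spatial index rather than all-pairs distances):

```python
import numpy as np

def min_surface_distance(pts_a, pts_b):
    """Minimum point-to-point distance between two surface point clouds (N x 3)."""
    diff = pts_a[:, None, :] - pts_b[None, :, :]
    return float(np.sqrt((diff ** 2).sum(axis=2)).min())

def collides(pts_a, pts_b, gap=0.5):
    """Flag a collision when surfaces come closer than the allowed joint space (mm)."""
    return min_surface_distance(pts_a, pts_b) < gap
```

When a collision is flagged, the coupled fit can reject or damp the offending shape update before the next iteration.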
Affiliation(s)
- M Brehler
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- A Islam
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- D Yang
- Carestream Health, Rochester, NY USA
- W Sehnert
- Carestream Health, Rochester, NY USA
- D Shakoor
- Department of Radiology, Johns Hopkins University, Baltimore, MD USA
- S Demehri
- Department of Radiology, Johns Hopkins University, Baltimore, MD USA
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- Department of Radiology, Johns Hopkins University, Baltimore, MD USA
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA

36
Sisniega A, Stayman JW, Capostagno S, Weiss CR, Ehtiati T, Siewerdsen JH. Convergence criterion for MBIR based on the local noise-power spectrum: Theory and implementation in a framework for accelerated 3D image reconstruction with a morphological pyramid. Proc SPIE Int Soc Opt Eng 2019; 11072. [PMID: 34267413] [DOI: 10.1117/12.2534896]
Abstract
Model-based iterative reconstruction (MBIR) offers improved noise-resolution tradeoffs and artifact reduction in cone-beam CT compared to analytical reconstruction, but carries increased computational burden. An important consideration in minimizing computation time is reliable selection of the stopping criterion to perform the minimum number of iterations required to obtain the desired image quality. Most MBIR methods rely on a fixed number of iterations or relative metrics on image or cost-function evolution, and it would be desirable to use metrics that are more representative of the underlying image properties. A second front for reduction of computation time is the use of acceleration techniques (e.g. subsets or momentum). However, most of these techniques do not strictly guarantee convergence of the resulting MBIR method. A data-dependent analytical model of noise-power spectrum (NPS) for penalized weighted least squares (PWLS) reconstruction is proposed as an absolute metric of image properties for the fully converged volume. Distance to convergence is estimated as the root mean squared error (RMSE) between the estimated NPS and an NPS measured on a uniform region of interest (ROI) in the evolving volume. Iterations are stopped when the RMSE falls below a threshold directly related with the properties of the target image. Further acceleration was achieved by combining the spectral stopping criterion with a morphological pyramid (mPyr) in which the minimization of the PWLS cost-function is divided in a cascade of stages. The algorithm parameters (voxel size in this work) change between stages to achieve faster evolution in early stages, and a final stage with the target parameters to guarantee convergence. Transition between stages is governed by the spectral stopping criterion. The approach was evaluated on simulated CBCT data of a realistic digital abdomen phantom. 
Accuracy of the NPS model, and the evolution over time of its distance from the measured NPS, were assessed in two ROIs. Performance of the spectrally-driven mPyr architecture was compared to a conventional single-stage PWLS and to two mPyr designs running a fixed number of iterations. The spectrally-driven mPyr achieved faster convergence, with 40% lower RMSE than the single-stage PWLS, and between 10% and 20% RMSE reduction compared to the other mPyr designs. The proposed spectral stopping criterion proved to be a suitable choice for a stopping rule and, in particular, to govern mPyr stage transition.
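The spectral stopping rule described above can be made concrete in a few lines of numpy. The sketch below uses our own illustrative helper names (not the authors' code): the measured NPS is the ensemble-averaged squared DFT magnitude of a zero-mean uniform ROI, and iterations halt once the RMSE distance to the model NPS falls below a threshold.

```python
import numpy as np

def measured_nps(roi_stack, voxel_size=1.0):
    """Estimate a 2D noise-power spectrum (NPS) from repeated realizations
    of a uniform ROI: ensemble-averaged |DFT|^2 of the zero-mean ROI,
    scaled by voxel area over the number of samples."""
    spectra = []
    for roi in roi_stack:
        zero_mean = roi - roi.mean()
        dft = np.fft.fftshift(np.fft.fft2(zero_mean))
        spectra.append(np.abs(dft) ** 2)
    ny, nx = roi_stack[0].shape
    return (voxel_size ** 2 / (nx * ny)) * np.mean(spectra, axis=0)

def nps_rmse(nps_model, nps_measured):
    """Spectral distance to convergence: RMSE between model and measured NPS."""
    diff = np.asarray(nps_model) - np.asarray(nps_measured)
    return float(np.sqrt(np.mean(diff ** 2)))

def keep_iterating(nps_model, nps_measured, tol):
    """Stopping rule: continue while the spectral distance is above tol."""
    return nps_rmse(nps_model, nps_measured) >= tol
```

A convenient sanity check: for white noise on unit voxels, the mean of the measured NPS equals the ROI variance (Parseval's theorem).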
Affiliation(s)
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- J W Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- S Capostagno
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- C R Weiss
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD USA
- T Ehtiati
- Siemens Healthineers, Hoffman Estates, IL USA
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD USA
|
37
|
Liu SZ, Tilley S, Cao Q, Siewerdsen JH, Stayman JW, Zbijewski W. Known-Component Model-Based Material Decomposition for Dual Energy Imaging of Bone Compositions in the Presence of Metal Implant. Proc SPIE Int Soc Opt Eng 2019; 11072. [PMID: 31359904 DOI: 10.1117/12.2534725] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
Dual energy computed tomography (DE CT) is a promising technology for the assessment of bone compositions. One potential application is the evaluation of fracture healing using longitudinal measurements of callus mineralization. However, imaging of fractures is often challenged by the presence of metal fixation hardware. In this work, we report a new simultaneous DE reconstruction-decomposition algorithm that integrates the previously introduced Model-Based Material Decomposition (MBMD) with a Known-Component (KC) framework to mitigate metal artifacts. The algorithm was applied to DE data obtained on a dedicated extremity cone-beam CT (CBCT) system with capability for weight-bearing imaging. To acquire DE projections in a single gantry rotation, we exploited the unique multisource design of the system, in which three x-ray sources were mounted parallel to the axis of rotation. The central source provided high energy (HE) data at 120 kVp, while the two remaining sources were operated at a low energy (LE) of 60 kVp. This novel acquisition trajectory further motivates the use of MBMD to accommodate the complex DE sampling pattern. The algorithm was validated in a simulation study using a digital extremity phantom. The phantom consisted of a water background with an insert containing varying concentrations of calcium (50-175 mg/mL). Two configurations of titanium implants were considered: a fixation plate and an intramedullary nail. The accuracy of calcium-water decompositions obtained with the proposed KC-MBMD algorithm was compared to MBMD without a metal component model. Metal artifacts were almost completely removed by KC-MBMD. Relative absolute errors of calcium concentration in the vicinity of metal were 6%-31% for KC-MBMD (depending on the calcium insert and implant configuration), which compared favorably to 48%-273% for MBMD.
Moreover, accuracy of concentration estimates for KC-MBMD in the presence of metal implant approached that of MBMD in a configuration without implant (6%-23%). The proposed algorithm achieved accurate DE material decomposition in the presence of metal implants using a non-conventional, axial multisource DE acquisition pattern.
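The accuracy figures quoted above follow from a simple relative absolute error of the estimated concentration against the known nominal value. A minimal sketch (hypothetical helper, not from the paper):

```python
def relative_absolute_error(estimated_mg_ml, nominal_mg_ml):
    """Relative absolute error (%) of an estimated calcium concentration
    versus the known nominal concentration of the insert."""
    return 100.0 * abs(estimated_mg_ml - nominal_mg_ml) / nominal_mg_ml
```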
Affiliation(s)
- S Z Liu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205
- S Tilley
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205
- Q Cao
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205
- J W Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205
|
38
|
Han R, Uneri A, De Silva T, Ketcha M, Goerres J, Vogt S, Kleinszig G, Osgood G, Siewerdsen JH. Atlas-based automatic planning and 3D–2D fluoroscopic guidance in pelvic trauma surgery. Phys Med Biol 2019; 64:095022. [DOI: 10.1088/1361-6560/ab1456] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
39
|
Cao Q, Sisniega A, Stayman JW, Yorkston J, Siewerdsen JH, Zbijewski W. Quantitative Cone-Beam CT of Bone Mineral Density Using Model-Based Reconstruction. Proc SPIE Int Soc Opt Eng 2019; 10948:109480Y. [PMID: 31384094 PMCID: PMC6681810 DOI: 10.1117/12.2513216] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
PURPOSE We develop and validate a model-based framework for artifact correction and image reconstruction to enable application of Cone-Beam CT (CBCT) in quantitative assessment of bone mineral density (BMD). Compared to conventional quantitative CT, this approach does not require a BMD calibration phantom in the field-of-view during an object scan. METHODS The quantitative CBCT (qCBCT) imaging framework combined fast Monte Carlo (MC) scatter estimation, accurate models of detector response, and polyenergetic Poisson likelihood (PolyPL, Elbakri et al 2003). The underlying object model assumed that the tissues were ideal mixtures of water and calcium carbonate (CaCO3). Accuracy and reproducibility of qCBCT was evaluated in benchtop test-retest studies emulating a compact extremity CBCT system (axis-detector distance=56 cm, 90 kVp x-ray beam, ~16 mGy central dose). Various arrangements of Ca inserts (50-500 mg/mL) were placed in water cylinders of ~11 cm to ~15 cm diameter and scanned at multiple positions inside the field-of-view for a total of 20 configurations. In addition, a cadaveric ankle was imaged in five configurations (with and without Ca inserts and water bath). Coefficient of variation (CV) of BMD values across different experimental configurations was used to assess reproducibility under varying imaging conditions. The performance of the model-based qCBCT framework (MC + PolyPL) was compared to FDK with water beam hardening correction and MC scatter correction. RESULTS The PolyPL framework achieved accuracy of 20 mg/mL or better across all insert densities and experimental configurations. By comparison, the accuracy of the FDK-based BMD estimates deteriorated with higher mineralization, resulting in ~120 mg/mL error for a 500 mg/mL Ca insert. Additionally, the model-based approach mitigated residual streaks that were present in FDK reconstructions. 
The CV of both methods was ~15% at 50 mg/mL Ca and less than ~8% for higher density inserts, with the PolyPL framework achieving 20-25% lower CV than the FDK-based approach. CONCLUSION Accurate and reproducible BMD measurements can be achieved in extremity CBCT, supporting clinical applications in quantitative monitoring of fracture risk, osteoporosis treatment, and early osteoarthritis.
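The reproducibility metric above is the coefficient of variation of BMD estimates for one insert across repeated scan configurations; a minimal sketch (helper name is ours):

```python
import numpy as np

def reproducibility_cv(bmd_estimates):
    """Coefficient of variation (%) of BMD estimates across scan
    configurations (insert positions, phantom sizes, etc.): the sample
    standard deviation expressed as a percentage of the mean."""
    vals = np.asarray(bmd_estimates, dtype=float)
    return 100.0 * vals.std(ddof=1) / vals.mean()
```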
Affiliation(s)
- Q Cao
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA 21205
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA 21205
- J W Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA 21205
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA 21205
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD USA 21287
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA 21205
|
40
|
Abstract
Volume-of-interest (VOI) imaging is a promising strategy for dose reduction in computed tomography (CT) while retaining image quality. However, implementation of VOI-CT has been challenged by the lack of adequate hardware and the interior tomography reconstruction problem. Multiple aperture devices (MADs) are a novel filtration scheme that can achieve x-ray fluence field modulation in a compact design with small translations. In this work, we propose a general approach for VOI imaging using MADs. MAD trajectories are designed to dynamically tailor the fluence to a prescribed VOI. A penalized-likelihood reconstruction algorithm is proposed for fully truncated projections extended with scout views. Physical experiments were conducted to verify feasibility for non-centered elliptic VOIs. Image quality and dose were estimated and compared with standard full-field protocols. The ability of MAD-based VOI imaging to retain high image quality while significantly decreasing the total dose is demonstrated, suggesting the potential for dose reduction in clinical CT applications.
Affiliation(s)
- W Wang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, USA 21205
- G J Gang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, USA 21205
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, USA 21205
- J W Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, USA 21205
|
41
|
Uneri A, Zhang X, Stayman JW, Helm PA, Osgood GM, Theodore N, Siewerdsen JH. 3D-2D Image Registration in Virtual Long-Film Imaging: Application to Spinal Deformity Correction. Proc SPIE Int Soc Opt Eng 2019; 10951:109511H. [PMID: 34290470 PMCID: PMC8292105 DOI: 10.1117/12.2513679] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
PURPOSE Intraoperative 2D virtual long-film (VLF) imaging is investigated for 3D guidance and confirmation of the surgical product in spinal deformity correction. Multi-slot-scan geometry (rather than a single-slot "topogram") is exploited to produce parallax views of the scene for accurate 3D colocalization from a single radiograph. METHODS The multi-slot approach uses additional angled collimator apertures to form fan-beams with disparate views (parallax) of anatomy and instrumentation and to extend the field of view beyond the linear motion limits. Combined with knowledge of surgical implants (pedicle screws and/or spinal rods modeled as "known components"), 3D-2D image registration is used to solve for pose estimates via optimization of image gradient correlation. Experiments were conducted in cadaver studies emulating the system geometry of the O-arm (Medtronic, Minneapolis MN). RESULTS Experiments demonstrated feasibility of multi-slot VLF and quantified the geometric accuracy of 3D-2D registration using VLF acquisitions. Registration of pedicle screws from a single VLF yielded mean target registration error of (2.0±0.7) mm, comparable to the accuracy of surgical trackers and registration using multiple radiographs (e.g., AP and LAT). CONCLUSIONS 3D-2D registration in a single VLF image offers a promising new solution for image guidance in spinal deformity correction. The ability to accurately resolve pose from a single view obviates the workflow challenges of multiple-view registration and suggests application beyond spine surgery, such as reduction of long-bone fractures.
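Target registration error (TRE), the accuracy metric quoted above, is the Euclidean distance between corresponding target positions after registration, summarized as mean ± standard deviation. A generic sketch (helper name is ours, not the study's code):

```python
import numpy as np

def target_registration_error(targets_true, targets_estimated):
    """TRE over a set of 3D targets: per-target Euclidean distance between
    true and registered positions, returned as (mean, std)."""
    diffs = np.asarray(targets_true, float) - np.asarray(targets_estimated, float)
    d = np.linalg.norm(diffs, axis=1)
    return float(d.mean()), float(d.std())
```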
Affiliation(s)
- A. Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- X. Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- J. W. Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- G. M. Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Medicine, Baltimore MD
- N. Theodore
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD
- J. H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD
|
42
|
Wu P, Stayman JW, Sisniega A, Zbijewski W, Foos D, Wang X, Aygun N, Stevens R, Siewerdsen JH. Statistical weights for model-based reconstruction in cone-beam CT with electronic noise and dual-gain detector readout. Phys Med Biol 2018; 63:245018. [DOI: 10.1088/1361-6560/aaf0b4] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
|
43
|
De Silva T, Uneri A, Zhang X, Ketcha M, Han R, Sheth N, Martin A, Vogt S, Kleinszig G, Belzberg A, Sciubba DM, Siewerdsen JH. Real-time, image-based slice-to-volume registration for ultrasound-guided spinal intervention. Phys Med Biol 2018; 63:215016. [PMID: 30372418 DOI: 10.1088/1361-6560/aae761] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Real-time fusion of magnetic resonance (MR) and ultrasound (US) images could facilitate safe and accurate needle placement in spinal interventions. We develop an entirely image-based registration method (independent of or complementary to surgical trackers) that includes an efficient US probe pose initialization algorithm. The registration enables the simultaneous display of 2D ultrasound image slices relative to 3D pre-procedure MR images for navigation. A dictionary-based 3D-2D pose initialization algorithm was developed in which likely probe positions are predefined in a dictionary with feature encoding by Haar wavelet filters. Feature vectors representing the 2D US image are computed by scaling and translating multiple Haar basis filters to capture scale, location, and relative intensity patterns of distinct anatomical features. Following pose initialization, fast 3D-2D registration was performed by optimizing normalized cross-correlation between intra- and pre-procedure images using Powell's method. Experiments were performed using a lumbar puncture phantom and a fresh cadaver specimen presenting realistic image quality in spinal US imaging. Accuracy was quantified by comparing registration transforms to ground truth motion imparted by a computer-controlled motion system and calculating target registration error (TRE) in anatomical landmarks. Initialization using a 315-length feature vector yielded median translation accuracy of 2.7 mm (3.4 mm interquartile range, IQR) in the phantom and 2.1 mm (2.5 mm IQR) in the cadaver. By comparison, storing the entire image set in the dictionary and optimizing correlation yielded a comparable median accuracy of 2.1 mm (2.8 mm IQR) in the phantom and 2.9 mm (3.5 mm IQR) in the cadaver. However, the dictionary-based method reduced memory requirements by 47× compared to storing the entire image set. 
The overall 3D error after registration, measured using 3D landmarks, was 3.2 mm (1.8 mm IQR) in the phantom and 3.0 mm (2.3 mm IQR) in the cadaver. The system was implemented in a 3D Slicer interface to facilitate translation to clinical studies. Haar-feature-based initialization provided accuracy and robustness at a level that was sufficient for real-time registration using an entirely image-based method for ultrasound navigation. Such an approach could improve the accuracy and safety of spinal interventions in broad utilization, since it is entirely software-based and can operate free from the cost and workflow requirements of surgical trackers.
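The dictionary encoding above rests on Haar-like box features, which can be evaluated in constant time per rectangle using an integral image (summed-area table). The sketch below is a generic illustration of that standard technique, with our own helper names, not the authors' feature set:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column:
    ii[r, c] = sum of img[:r, :c]."""
    return np.pad(np.cumsum(np.cumsum(np.asarray(img, float), axis=0), axis=1),
                  ((1, 0), (1, 0)))

def rect_sum(ii, r0, c0, h, w):
    """Sum of img[r0:r0+h, c0:c0+w] in O(1) via four integral-image lookups."""
    return ii[r0 + h, c0 + w] - ii[r0, c0 + w] - ii[r0 + h, c0] + ii[r0, c0]

def haar_two_rect(img, r0, c0, h, w):
    """Two-rectangle (left minus right) Haar-like response at one location
    and scale; a feature vector concatenates such responses over many
    scales and translations of the basis filters."""
    ii = integral_image(img)
    half = w // 2
    return rect_sum(ii, r0, c0, h, half) - rect_sum(ii, r0, c0 + half, h, half)
```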
Affiliation(s)
- T De Silva
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
|
44
|
Han R, De Silva T, Ketcha M, Uneri A, Siewerdsen JH. A momentum-based diffeomorphic demons framework for deformable MR-CT image registration. Phys Med Biol 2018; 63:215006. [PMID: 30353886 DOI: 10.1088/1361-6560/aae66c] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Neuro-navigated procedures require a high degree of geometric accuracy but are subject to geometric error from complex deformation in the deep brain, e.g. regions about the ventricles due to egress of cerebrospinal fluid (CSF) upon neuroendoscopic approach or placement of a ventricular shunt. We report a multi-modality, diffeomorphic, deformable registration method using momentum-based acceleration of the Demons algorithm to solve the transformation relating preoperative MRI and intraoperative CT as a basis for high-precision guidance. The registration method (pMI-Demons) extends the mono-modality, diffeomorphic form of the Demons algorithm to multi-modality registration using pointwise mutual information (pMI) as a similarity metric. The method incorporates a preprocessing step to nonlinearly stretch CT image values and incorporates a momentum-based approach to accelerate convergence. Registration performance was evaluated in phantom and patient images: first, the sensitivity of performance to algorithm parameter selection (including update and displacement field smoothing, histogram stretch, and the momentum term) was analyzed in a phantom study over a range of simulated deformations; and second, the algorithm was applied to registration of MR and CT images for four patients undergoing minimally invasive neurosurgery. Performance was compared to two previously reported methods (free-form deformation using mutual information (MI-FFD) and symmetric normalization using mutual information (MI-SyN)) in terms of target registration error (TRE), Jacobian determinant (J), and runtime. The phantom study identified optimal or nominal settings of algorithm parameters for translation to clinical studies. In the phantom study, the pMI-Demons method achieved comparable registration accuracy to the reference methods and strongly reduced outliers in TRE (p < 0.001 in Kolmogorov-Smirnov test).
Similarly, in the clinical study: median TRE = 1.54 mm (0.83-1.66 mm interquartile range, IQR) for pMI-Demons compared to 1.40 mm (1.02-1.67 mm IQR) for MI-FFD and 1.64 mm (0.90-1.92 mm IQR) for MI-SyN. The pMI-Demons and MI-SyN methods yielded diffeomorphic transformations (J > 0) that preserved topology, whereas MI-FFD yielded unrealistic (J < 0) deformations subject to tissue folding and tearing. Momentum-based acceleration gave a ~35% speedup of the pMI-Demons method, providing registration runtime of 10.5 min (reduced to 2.2 min on GPU), compared to 15.5 min for MI-FFD and 34.7 min for MI-SyN. The pMI-Demons method achieved registration accuracy comparable to MI-FFD and MI-SyN, maintained diffeomorphic transformation similar to MI-SyN, and accelerated runtime in a manner that facilitates translation to image-guided neurosurgery.
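The topology-preservation criterion above (J > 0) can be checked directly by computing the pointwise Jacobian determinant of the deformation. A 2D finite-difference sketch under our own naming (the study works in 3D, but the construction is identical):

```python
import numpy as np

def jacobian_determinant_2d(ux, uy, spacing=1.0):
    """Pointwise determinant of the Jacobian of phi(x) = x + u(x) on a 2D
    grid, using finite differences (axis 0 = rows/y, axis 1 = cols/x)."""
    dux_dy, dux_dx = np.gradient(np.asarray(ux, float), spacing)
    duy_dy, duy_dx = np.gradient(np.asarray(uy, float), spacing)
    return (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx

def preserves_topology(ux, uy):
    """A transformation is free of folding/tearing (locally invertible,
    as for a diffeomorphism) when the Jacobian determinant is positive
    everywhere."""
    return bool(np.all(jacobian_determinant_2d(ux, uy) > 0.0))
```

The identity map (zero displacement) has J = 1 everywhere, while a displacement that reverses orientation yields J < 0, the signature of unrealistic tissue folding noted for MI-FFD.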
Affiliation(s)
- R Han
- Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
|
45
|
Uneri A, Zhang X, Yi T, Stayman JW, Helm PA, Theodore N, Siewerdsen JH. Image quality and dose characteristics for an O-arm intraoperative imaging system with model-based image reconstruction. Med Phys 2018; 45:4857-4868. [PMID: 30180274 DOI: 10.1002/mp.13167] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2018] [Revised: 08/13/2018] [Accepted: 08/16/2018] [Indexed: 12/14/2022] Open
Abstract
PURPOSE To assess the imaging performance and radiation dose characteristics of the O-arm CBCT imaging system (Medtronic Inc., Littleton MA) and demonstrate the potential for improved image quality and reduced dose via model-based image reconstruction (MBIR). METHODS Two main studies were performed to investigate previously unreported characteristics of the O-arm system. First is an investigation of dose and 3D image quality achieved with filtered back-projection (FBP) - including enhancements in geometric calibration, handling of lateral truncation and detector saturation, and incorporation of an isotropic apodization filter. Second is implementation of an MBIR algorithm based on Huber-penalized likelihood estimation (PLH) and investigation of image quality improvement at reduced dose. Each study involved measurements in quantitative phantoms as a basis for analysis of contrast-to-noise ratio and spatial resolution as well as imaging of a human cadaver to test the findings under realistic imaging conditions. RESULTS View-dependent calibration of system geometry improved the accuracy of reconstruction as quantified by the full-width at half maximum of the point-spread function - from 0.80 to 0.65 mm - and yielded subtle but perceptible improvement in high-contrast detail of bone (e.g., temporal bone). Standard technique protocols for the head and body imparted absorbed dose of 16 and 18 mGy, respectively. For low-to-medium contrast (<100 HU) imaging at fixed spatial resolution (1.3 mm edge-spread function) and fixed dose (6.7 mGy), PLH improved CNR over FBP by +48% in the head and +35% in the body. Evaluation at different dose levels demonstrated 30% increase in CNR at 62% of the dose in the head and 90% increase in CNR at 50% dose in the body. 
CONCLUSIONS A variety of improvements in FBP implementation (geometric calibration, truncation and saturation effects, and isotropic apodization) offer the potential for improved image quality and reduced radiation dose on the O-arm system. Further gains are possible with MBIR, including improved soft-tissue visualization, low-dose imaging protocols, and extension to methods that naturally incorporate prior information of patient anatomy and/or surgical instrumentation.
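Contrast-to-noise ratio (CNR), the figure of merit used above, relates the mean difference between a contrast insert and its background to the noise. One common convention is sketched below (the paper may pool noise over both ROIs; helper name is ours):

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio: absolute mean difference between a signal
    ROI and a background ROI, divided by the background standard deviation."""
    s = np.asarray(roi_signal, dtype=float)
    b = np.asarray(roi_background, dtype=float)
    return float(abs(s.mean() - b.mean()) / b.std(ddof=1))
```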
Affiliation(s)
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- X Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- T Yi
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- J W Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- P A Helm
- Medtronic Inc., Littleton, MA, 01460, USA
- N Theodore
- Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, MD, 21287, USA
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, MD, 21287, USA
|
46
|
Wu P, Stayman JW, Mow M, Zbijewski W, Sisniega A, Aygun N, Stevens R, Foos D, Wang X, Siewerdsen JH. Reconstruction-of-difference (RoD) imaging for cone-beam CT neuro-angiography. Phys Med Biol 2018; 63:115004. [PMID: 29722296 DOI: 10.1088/1361-6560/aac225] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Timely evaluation of neurovasculature via CT angiography (CTA) is critical to the detection of pathology such as ischemic stroke. Cone-beam CTA (CBCT-A) systems offer potential advantages for timely use at the point of care, although the relatively slow gantry rotation speed introduces tradeoffs among image quality, data consistency, and data sparsity. This work describes and evaluates a new reconstruction-of-difference (RoD) approach that is robust to such challenges. A fast digital simulation framework was developed to test the performance of the RoD against standard reference reconstruction methods such as filtered back-projection (FBP) and penalized likelihood (PL) over a broad range of imaging conditions, grouped into three scenarios to test the trade-off between data consistency, data sparsity and peak contrast. Two experiments were also conducted using a CBCT prototype and an anthropomorphic neurovascular phantom to test the simulation findings in real data. Performance was evaluated primarily in terms of normalized root mean square error (NRMSE) in comparison to truth, with reconstruction parameters chosen to optimize performance in each case to ensure fair comparison. The RoD approach reduced NRMSE in reconstructed images by up to 50%-53% compared to FBP and up to 29%-31% compared to PL for each scenario. Scan protocols well suited to the RoD approach were identified that balance tradeoffs among data consistency, sparsity and peak contrast - for example, a CBCT-A scan with 128 projections acquired in 8.5 s over a 180° + fan angle half-scan for a time attenuation curve with ~8.5 s time-to-peak and 600 HU peak contrast. With imaging conditions such as the simulation scenarios of fixed data sparsity (i.e. varying levels of data consistency and peak contrast), the experiments confirmed the reduction of NRMSE by 34% and 17% compared to FBP and PL, respectively.
The RoD approach demonstrated superior performance in 3D angiography compared to FBP and PL in all simulation and physical experiments, suggesting the possibility of CBCT-A on low-cost, mobile imaging platforms suitable to the point-of-care. The algorithm demonstrated accurate reconstruction with a high degree of robustness against data sparsity and inconsistency.
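NRMSE against a known truth image, the primary metric above, is the root-mean-square error normalized by some property of the truth. The normalization convention varies (range, mean, or max); the sketch below assumes normalization by the dynamic range of the truth, which may differ from the paper's exact definition:

```python
import numpy as np

def nrmse(recon, truth):
    """Root-mean-square error of a reconstruction versus truth, normalized
    here by the dynamic range of the truth (one common convention)."""
    recon = np.asarray(recon, dtype=float)
    truth = np.asarray(truth, dtype=float)
    rmse = np.sqrt(np.mean((recon - truth) ** 2))
    return float(rmse / (truth.max() - truth.min()))
```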
Affiliation(s)
- P Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, United States of America
|
47
|
Wang W, Gang GJ, Siewerdsen JH, Stayman JW. Spatial Resolution and Noise Prediction in Flat-Panel Cone-Beam CT Penalized-likelihood Reconstruction. Proc SPIE Int Soc Opt Eng 2018; 10573. [PMID: 29622857 DOI: 10.1117/12.2294546] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
Abstract
Purpose Model-based iterative reconstruction (MBIR) algorithms such as penalized-likelihood (PL) methods have data-dependent and shift-variant image properties. Predictors of local reconstructed noise and resolution have found application in a number of methods that seek to understand, control, and optimize CT data acquisition and reconstruction parameters in a prospective fashion (as opposed to studies based on exhaustive evaluation). However, previous MBIR prediction methods have relied on idealized system models. In this work, we develop and validate new predictors using accurate physical models specific to flat-panel CT systems. Methods Novel predictors for estimation of local spatial resolution and noise properties are developed for PL reconstruction that include a physical model for blur and correlated noise in flat-panel cone-beam CT (CBCT) acquisitions. Prospective predictions (i.e., made without performing a reconstruction) of the local point spread function and local noise-power spectrum (NPS) are applied, compared, and validated using a flat-panel CBCT test bench. Results Comparisons between prediction and physical measurements show excellent agreement for both spatial resolution and noise properties. In comparison, traditional prediction methods (that ignore blur/correlation found in flat-panel data) fail to capture important data characteristics and show significant mismatch. Conclusion Novel image property predictors permit prospective assessment of flat-panel CBCT using MBIR. Such predictors enable standard and task-based performance assessments, and are well-suited to evaluation, control, and optimization of the CT imaging chain (e.g., x-ray technique, reconstruction parameters, novel data acquisition methods, etc.) for improved imaging performance and/or dose utilization.
Affiliation(s)
- W Wang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, USA 21205
- G J Gang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, USA 21205
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, USA 21205
- J W Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, USA 21205
|
48
|
Cao Q, Brehler M, Sisniega A, Tilley S, Shiraz Bhruwani MM, Stayman JW, Yorkston J, Siewerdsen JH, Zbijewski W. High-Resolution Extremity Cone-Beam CT with a CMOS Detector: Evaluation of a Clinical Prototype in Quantitative Assessment of Bone Microarchitecture. Proc SPIE Int Soc Opt Eng 2018; 10573:105730R. [PMID: 31346302 PMCID: PMC6657686 DOI: 10.1117/12.2293810] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
PURPOSE A prototype high-resolution extremity cone-beam CT (CBCT) system based on a CMOS detector was developed to support quantitative in vivo assessment of bone microarchitecture. We compare the performance of CMOS CBCT to an amorphous silicon (a-Si:H) FPD extremity CBCT in imaging of trabecular bone. METHODS The prototype CMOS-based CBCT involves a DALSA Xineos3030 detector (99 μm pixels) with 400 μm-thick CsI scintillator and a compact 0.3 FS rotating anode x-ray source. We compare the performance of CMOS CBCT to an a-Si:H FPD scanner built on a similar gantry, but using a Varian PaxScan2530 detector with 0.137 mm pixels and a 0.5 FS stationary anode x-ray source. Experimental studies include measurements of the Modulation Transfer Function (MTF) of the detectors and of 3D image reconstructions. Image quality in clinical scenarios is evaluated in scans of a cadaver ankle. Metrics of trabecular microarchitecture (bone volume/total volume, BV/TV; trabecular spacing, Tb.Sp; and trabecular thickness, Tb.Th) are obtained in a human ulna using CMOS CBCT and a-Si:H FPD CBCT and compared to gold standard μCT. RESULTS The CMOS detector achieves ~40% increase in the f20 value (the frequency at which the MTF falls to 0.20) compared to the a-Si:H FPD. In the reconstruction domain, the FWHM of a 127 μm tungsten wire is also improved by ~40%. Reconstructions of a cadaveric ankle reveal enhanced modulation of trabecular structures with the CMOS detector and soft-tissue visibility that is similar to that of the a-Si:H FPD system. Correlations of the metrics of bone microarchitecture with gold-standard μCT are improved with CMOS CBCT: from 0.93 to 0.98 for BV/TV, from 0.49 to 0.74 for Tb.Th, and from 0.90 to 0.96 for Tb.Sp. CONCLUSION Adoption of a CMOS detector in extremity CBCT improved spatial resolution and enhanced performance in metrics of bone microarchitecture compared to a conventional a-Si:H FPD.
The results support development of clinical applications of CMOS CBCT in quantitative imaging of bone health.
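The f20 metric used above (the frequency at which the MTF falls to 0.20) is simple to extract from a sampled MTF curve. A minimal sketch in Python; the Gaussian-shaped MTF here is synthetic illustration, not data from the study:

```python
import numpy as np

def f20(freq, mtf):
    """Return the spatial frequency at which a decreasing MTF first
    falls to 0.20, with linear interpolation between samples."""
    freq = np.asarray(freq, dtype=float)
    mtf = np.asarray(mtf, dtype=float)
    below = np.nonzero(mtf <= 0.20)[0]
    if below.size == 0:
        raise ValueError("MTF never falls to 0.20 over the sampled range")
    i = below[0]
    if i == 0:
        return freq[0]
    # interpolate between the bracketing samples
    f0, f1 = freq[i - 1], freq[i]
    m0, m1 = mtf[i - 1], mtf[i]
    return f0 + (0.20 - m0) * (f1 - f0) / (m1 - m0)

# Example: a Gaussian-shaped MTF, MTF(f) = exp(-(f/4)^2) in cycles/mm
freq = np.linspace(0, 10, 1001)
mtf = np.exp(-(freq / 4.0) ** 2)
print(f20(freq, mtf))  # ≈ 4·√(ln 5) ≈ 5.07 cycles/mm
```

The same function applies to a measured edge- or wire-derived MTF, given its frequency axis.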
Affiliation(s)
- Q Cao: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- M Brehler: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- A Sisniega: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- S Tilley: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- M M Shiraz Bhruwani: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- J W Stayman: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- J H Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA; Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD 21287, USA
- W Zbijewski: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
49
Brehler M, Cao Q, Moseley KF, Osgood G, Morris C, Demehri S, Yorkston J, Siewerdsen JH, Zbijewski W. Robust Quantitative Assessment of Trabecular Microarchitecture in Extremity Cone-Beam CT Using Optimized Segmentation Algorithms. Proc SPIE Int Soc Opt Eng 2018; 10578. [PMID: 31337926] [DOI: 10.1117/12.2293346]
Abstract
Purpose: In vivo evaluation of bone microarchitecture remains challenging because of the limited resolution of conventional orthopaedic imaging modalities. We investigate the performance of flat-panel detector extremity cone-beam CT (CBCT) in quantitative analysis of trabecular bone. To enable accurate morphometry of fine trabecular architecture, advanced CBCT pre-processing and segmentation algorithms are developed.

Methods: The study involved 35 transiliac bone biopsy samples imaged on extremity CBCT (voxel size 75 μm, imaging dose ~13 mGy) and gold-standard μCT (voxel size 7.67 μm). CBCT image segmentation was performed using (i) global Otsu thresholding, (ii) Bernsen local thresholding, (iii) Bernsen local thresholding with additional histogram-based global pre-thresholding, and (iv) the same as (iii) but combined with contrast enhancement using a Laplacian pyramid. Correlations between extremity CBCT with the different segmentation algorithms and gold-standard μCT were investigated for measurements of bone volume over total volume (BV/TV), trabecular thickness (Tb.Th), trabecular spacing (Tb.Sp), and trabecular number (Tb.N).

Results: The combination of local thresholding with global pre-thresholding and Laplacian contrast enhancement outperformed the other CBCT segmentation methods. Using this optimal segmentation scheme, strong correlation between extremity CBCT and μCT was achieved, with Pearson coefficients of 0.93 for BV/TV, 0.89 for Tb.Th, 0.91 for Tb.Sp, and 0.88 for Tb.N (all results statistically significant). Compared to a simple global CBCT segmentation using Otsu's algorithm, the advanced segmentation method achieved an ~20% improvement in the correlation coefficient for Tb.Th and an ~50% improvement for Tb.Sp.

Conclusions: Extremity CBCT combined with advanced image pre-processing and segmentation achieves high correlation with gold-standard μCT in measurements of trabecular microstructure. This motivates ongoing development of clinical applications of extremity CBCT in in vivo evaluation of bone health, e.g. in early osteoarthritis and osteoporosis.
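Bernsen local thresholding with a global pre-threshold (scheme (iii) above) can be sketched as follows. This is an illustration of the general technique, not the authors' implementation; the window radius, contrast limit, and toy volume are arbitrary assumptions:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def bernsen_segment(vol, radius=3, contrast_min=0.1, global_thresh=None):
    """Bernsen local thresholding with an optional global pre-threshold.
    Where local contrast (max - min) is high, a voxel is bone if it
    exceeds the local mid-gray; in flat regions we fall back to a
    global comparison. Voxels below `global_thresh` are forced to
    background (the pre-thresholding step)."""
    vol = np.asarray(vol, dtype=float)
    size = 2 * radius + 1
    lo = minimum_filter(vol, size=size)
    hi = maximum_filter(vol, size=size)
    mid = 0.5 * (lo + hi)
    contrast = hi - lo
    fallback = vol.mean() if global_thresh is None else global_thresh
    bone = np.where(contrast >= contrast_min, vol > mid, vol > fallback)
    if global_thresh is not None:
        bone &= vol >= global_thresh  # global pre-thresholding
    return bone

def bv_tv(bone_mask):
    """Bone Volume / Total Volume: fraction of voxels segmented as bone."""
    return bone_mask.mean()

# Toy volume: a bright 10x10x10 cube ("bone") in a dark background
vol = np.zeros((20, 20, 20))
vol[5:15, 5:15, 5:15] = 1.0
mask = bernsen_segment(vol, radius=2, contrast_min=0.2, global_thresh=0.5)
print(bv_tv(mask))  # 1000 / 8000 = 0.125
```

Tb.Th, Tb.Sp, and Tb.N require distance-transform or plate/rod model analysis on the resulting mask and are omitted here.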
Affiliation(s)
- M Brehler: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Q Cao: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- K F Moseley: Division of Endocrinology, Diabetes and Metabolism, Johns Hopkins University, Baltimore, MD, USA
- G Osgood: Department of Orthopedics, Johns Hopkins University, Baltimore, MD, USA
- C Morris: Department of Orthopedics, Johns Hopkins University, Baltimore, MD, USA
- S Demehri: Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
- J H Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
- W Zbijewski: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
50
Uneri A, Zhang X, Stayman JW, Helm P, Osgood GM, Theodore N, Siewerdsen JH. Advanced Image Registration and Reconstruction using the O-Arm System: Dose Reduction, Image Quality, and Guidance using Known-Component Models. Proc SPIE Int Soc Opt Eng 2018; 10576. [PMID: 34290469] [DOI: 10.1117/12.2293874]
Abstract
Purpose: Model-based image registration and reconstruction offer strong potential for improved safety and precision in image-guided interventions. Advantages include reduced radiation dose, improved soft-tissue visibility (detection of complications), and accurate guidance with or without a dedicated navigation system. This work reports the development and performance of such methods on an O-arm system for intraoperative imaging and translates them to first clinical studies.

Methods: The work is predicated on two novel methodologies: (1) Known-Component Registration (KC-Reg) for 3D localization of the patient and interventional devices from 2D radiographs; and (2) penalized-likelihood reconstruction (PLH) for improved 3D image quality and dose reduction. A thorough assessment of geometric stability, dosimetry, and image quality was performed to define algorithm parameters for imaging and guidance protocols. Laboratory studies included evaluation of KC-Reg in localization of spine screws delivered in a cadaver, and of PLH performance in contrast, noise, and resolution in phantoms and cadaver compared to filtered backprojection (FBP).

Results: KC-Reg successfully registered screw implants to within ~1 mm based on as few as 3 radiographs. PLH improved soft-tissue visibility (61% improvement in CNR) compared to FBP at matched resolution. Cadaver studies verified the selection of algorithm parameters, and the methods were successfully translated to clinical studies under an IRB protocol.

Conclusions: Model-based registration and reconstruction approaches were shown to reduce dose and provide improved visualization of anatomy and surgical instrumentation. Immediate future work will focus on further integration of KC-Reg and PLH for Known-Component Reconstruction (KC-Recon) to provide high-quality intraoperative imaging in the presence of dense instrumentation.
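The CNR figure quoted above (61% improvement for PLH over FBP at matched resolution) is a standard ROI-based metric. A sketch of how such a comparison is typically computed; the synthetic images, ROI placement, and noise levels are illustrative assumptions, not the study's data or reconstruction methods:

```python
import numpy as np

def cnr(img, roi_signal, roi_background):
    """Contrast-to-noise ratio between a signal ROI and a background ROI:
    CNR = |mean_signal - mean_background| / std_background."""
    s = img[roi_signal]
    b = img[roi_background]
    return abs(s.mean() - b.mean()) / b.std()

# Toy comparison of two reconstructions of the same object:
rng = np.random.default_rng(0)
obj = np.zeros((64, 64))
obj[20:40, 20:40] = 10.0                     # uniform "soft-tissue" insert
fbp = obj + rng.normal(0, 4.0, obj.shape)    # noisier reconstruction
plh = obj + rng.normal(0, 2.0, obj.shape)    # lower-noise reconstruction
sig = np.s_[24:36, 24:36]                    # ROI inside the insert
bg = np.s_[2:14, 2:14]                       # ROI in uniform background
improvement = cnr(plh, sig, bg) / cnr(fbp, sig, bg) - 1
print(f"CNR improvement: {improvement:.0%}")
```

In practice the ROIs are drawn in matched locations of the two reconstructions, and resolution is matched first (e.g. via the wire or edge MTF) so the CNR comparison is fair.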
Affiliation(s)
- A Uneri: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- X Zhang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- J W Stayman: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- P Helm: Medtronic Inc., Littleton, MA, USA
- G M Osgood: Department of Orthopaedic Surgery, Johns Hopkins Medical Institute, Baltimore, MD, USA
- N Theodore: Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, MD, USA
- J H Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Medtronic Inc., Littleton, MA, USA