1
Huang H, Liu Y, Siewerdsen JH, Lu A, Hu Y, Zbijewski W, Unberath M, Weiss CR, Sisniega A. Deformable motion compensation in interventional cone-beam CT with a context-aware learned autofocus metric. Med Phys 2024; 51:4158-4180. [PMID: 38733602 DOI: 10.1002/mp.17125]
Abstract
PURPOSE Interventional cone-beam CT (CBCT) offers 3D visualization of soft-tissue and vascular anatomy, enabling 3D guidance of abdominal interventions. However, its long acquisition time makes CBCT susceptible to patient motion. Image-based autofocus offers a suitable platform for compensation of deformable motion in CBCT, but it relies on handcrafted motion metrics that are based on first-order image properties and lack awareness of the underlying anatomy. This work proposes a data-driven approach to motion quantification via a learned, context-aware, deformable metric, ${\bm{VIF}}_{DL}$, that quantifies the amount of motion degradation as well as the realism of the structural anatomical content in the image. METHODS The proposed ${\bm{VIF}}_{DL}$ was modeled as a deep convolutional neural network (CNN) trained to recreate a reference-based structural similarity metric: visual information fidelity (VIF). The deep CNN acted on motion-corrupted images, providing an estimate of the spatial VIF map that would be obtained against a motion-free reference, capturing both motion distortion and anatomical plausibility. The deep CNN featured a multi-branch architecture with a high-resolution branch for estimation of voxel-wise VIF on a small volume of interest. A second contextual, low-resolution branch provided features associated with anatomical context for disentanglement of motion effects and anatomical appearance. The deep CNN was trained on paired motion-free and motion-corrupted data obtained with a high-fidelity forward projection model for a protocol involving 120 kV and 9.90 mGy. The performance of ${\bm{VIF}}_{DL}$ was evaluated via metrics of correlation with the ground truth ${\bm{VIF}}$ and with the underlying deformable motion field in simulated data, with deformable motion fields with amplitude ranging from 5 to 20 mm and frequency from 2.4 up to 4 cycles/scan.
Robustness to variation in tissue contrast and noise levels was assessed in simulation studies with varying beam energy (90-120 kV) and dose (1.19-39.59 mGy). Further validation was obtained in experimental studies with a deformable phantom. Final validation was obtained via integration of ${\bm{VIF}}_{DL}$ into an autofocus compensation framework, applied to motion compensation on experimental datasets and evaluated via metrics of spatial resolution on soft-tissue boundaries and sharpness of contrast-enhanced vascularity. RESULTS The magnitude and spatial map of ${\bm{VIF}}_{DL}$ showed consistent and high correlation with the ground truth in both simulated and real data, yielding average normalized cross-correlation (NCC) values of 0.95 and 0.88, respectively. Similarly, ${\bm{VIF}}_{DL}$ achieved good correlation with the underlying motion field, with an average NCC of 0.90. In experimental phantom studies, ${\bm{VIF}}_{DL}$ properly reflected the change in motion amplitudes and frequencies: voxel-wise averaging of the local ${\bm{VIF}}_{DL}$ across the full reconstructed volume yielded an average value of 0.69 for the case with mild motion (2 mm, 12 cycles/scan) and 0.29 for the case with severe motion (12 mm, 6 cycles/scan). Autofocus motion compensation using ${\bm{VIF}}_{DL}$ resulted in noticeable mitigation of motion artifacts and improved spatial resolution of soft-tissue and high-contrast structures, with reductions in edge spread function width of 8.78% and 9.20%, respectively. Motion compensation also increased the conspicuity of contrast-enhanced vascularity, reflected in a 9.64% increase in vessel sharpness.
CONCLUSION The proposed ${\bm{VIF}}_{DL}$, featuring a novel context-aware architecture, demonstrated its capacity as a reference-free surrogate of structural similarity to quantify motion-induced degradation of image quality and the anatomical plausibility of image content. The validation studies showed robust performance across motion patterns, x-ray techniques, and anatomical instances. The proposed anatomy- and context-aware metric offers a powerful alternative to conventional motion estimation metrics and a step forward for the application of deep autofocus motion compensation for guidance in clinical interventional procedures.
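The agreement figures above are reported as normalized cross-correlation (NCC). As a minimal sketch of how such a number is computed (a hypothetical helper over flattened value maps, not the authors' implementation), NCC is the dot product of the two zero-mean signals divided by the product of their norms:

```python
def ncc(a, b):
    """Normalized cross-correlation between two equal-length value lists
    (e.g., flattened metric maps); returns a value in [-1, 1].
    Assumes neither input is constant (nonzero variance)."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    da = [x - mean_a for x in a]          # zero-mean signals
    db = [y - mean_b for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) ** 0.5) * (sum(y * y for y in db) ** 0.5)
    return num / den
```

Identical maps give an NCC of 1.0; perfectly anti-correlated maps give -1.0.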
Affiliation(s)
- Heyuan Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Yixuan Liu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Alexander Lu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Yicheng Hu
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Wojciech Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Clifford R Weiss
- Department of Radiology, Johns Hopkins University, Baltimore, Maryland, USA
- Alejandro Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
2
Al-Mnayyis A, Obeidat S, Badr A, Jouryyeh B, Azzam S, Al Bibi H, Al-Gwairy Y, Al Sharie S, Varrassi G. Radiological Insights into Sacroiliitis: A Narrative Review. Clin Pract 2024; 14:106-121. [PMID: 38248433 PMCID: PMC10801489 DOI: 10.3390/clinpract14010009]
Abstract
Sacroiliitis is inflammation of the sacroiliac joint, the largest axial joint in the human body, and contributes to about 25% of lower back pain cases. It can be detected using various imaging techniques such as radiography, MRI, and CT. Recent advancements in artificial intelligence offer precise detection of this condition through imaging. Treatment options range from conservative measures such as physical therapy and medications to invasive methods like joint injections and surgery. Future management looks promising with advanced imaging, regenerative medicine, and biologic therapies, especially for conditions like ankylosing spondylitis. We conducted a review of sacroiliitis using imaging data from sources such as PubMed and Scopus. Only English-language studies focusing on the radiological aspects of sacroiliitis were included. The findings were organized and presented narratively.
Affiliation(s)
- Asma’a Al-Mnayyis
- Department of Clinical Sciences, Faculty of Medicine, Yarmouk University, Irbid 21163, Jordan
- Shrouq Obeidat
- Faculty of Medicine, Yarmouk University, Irbid 21163, Jordan
- Ammar Badr
- Faculty of Medicine, Yarmouk University, Irbid 21163, Jordan
- Basil Jouryyeh
- Faculty of Medicine, Yarmouk University, Irbid 21163, Jordan
- Saif Azzam
- Faculty of Medicine, Yarmouk University, Irbid 21163, Jordan
- Hayat Al Bibi
- Faculty of Medicine, Yarmouk University, Irbid 21163, Jordan
- Yara Al-Gwairy
- Faculty of Medicine, Yarmouk University, Irbid 21163, Jordan
- Sarah Al Sharie
- Faculty of Medicine, Yarmouk University, Irbid 21163, Jordan
3
Cancelliere NM, van Nijnatten F, Hummel E, Withagen P, van de Haar P, Nishi H, Agid R, Nicholson P, Hallacoglu B, van Vlimmeren M, Pereira VM. Motion artifact correction for cone beam CT stroke imaging: a prospective series. J Neurointerv Surg 2023; 15:e223-e228. [PMID: 36564201 DOI: 10.1136/jnis-2021-018201]
Abstract
BACKGROUND Imaging assessment of acute ischemic stroke (AIS) patients in the angiosuite using cone beam CT (CBCT) has attracted increasing interest since endovascular treatment became the first-line therapy for proximal vessel occlusions. One of the main challenges of CBCT imaging in AIS patients is image quality degraded by motion artifacts. This study aims to evaluate the prevalence of motion artifacts in CBCT stroke imaging and the effectiveness of a novel motion artifact correction algorithm for improving image quality. METHODS Patients presenting with acute stroke symptoms and considered for endovascular treatment were included in the study. CBCT scans were performed using the angiosuite X-ray system. All CBCT scans were post-processed using a motion artifact correction algorithm. Motion artifacts were scored before and after processing using a 4-point scale. RESULTS We prospectively included 310 CBCT scans from acute stroke patients. 51% (n=159/310) of scans had motion artifacts, with 24% being moderate to severe. The post-processing algorithm improved motion artifacts in 91% of scans with motion (n=144/159), restoring clinical diagnostic capability in 34%. Overall, 76% of the scans were sufficient for clinical decision-making before correction, which improved to 93% (n=289/310) after post-processing with our algorithm. CONCLUSIONS Our results demonstrate that CBCT motion artifacts are significantly reduced by a novel post-processing algorithm, which improved brain CBCT image quality and diagnostic assessment for stroke. This is an important step towards a direct-to-angio approach for endovascular thrombectomy (EVT) treatment.
Affiliation(s)
- Nicole M Cancelliere
- Department of Neurosurgery, St Michael's Hospital, Toronto, Ontario, Canada
- RADIS lab, Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, Ontario, Canada
- Department of Medical Imaging, St Michael's Hospital, Toronto, Ontario, Canada
- Fred van Nijnatten
- Image Guided Therapy, Philips Healthcare, Best, Noord-Brabant, The Netherlands
- Eric Hummel
- Image Guided Therapy, Philips Healthcare, Best, Noord-Brabant, The Netherlands
- Paul Withagen
- Image Guided Therapy, Philips Healthcare, Best, Noord-Brabant, The Netherlands
- Peter van de Haar
- Image Guided Therapy, Philips Healthcare, Best, Noord-Brabant, The Netherlands
- Hidehisa Nishi
- RADIS lab, Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, Ontario, Canada
- Ronit Agid
- Medical Imaging, Toronto Western Hospital, Toronto, Ontario, Canada
- Bertan Hallacoglu
- Image Guided Therapy, Philips Healthcare, Best, Noord-Brabant, The Netherlands
- Vitor M Pereira
- Department of Neurosurgery, St Michael's Hospital, Toronto, Ontario, Canada
- RADIS lab, Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, Ontario, Canada
- Department of Medical Imaging, St Michael's Hospital, Toronto, Ontario, Canada
4
Thies M, Wagner F, Maul N, Folle L, Meier M, Rohleder M, Schneider LS, Pfaff L, Gu M, Utz J, Denzinger F, Manhart M, Maier A. Gradient-based geometry learning for fan-beam CT reconstruction. Phys Med Biol 2023; 68:205004. [PMID: 37779386 DOI: 10.1088/1361-6560/acf90e]
Abstract
Objective. Incorporating computed tomography (CT) reconstruction operators into differentiable pipelines has proven beneficial in many applications. Such approaches usually focus on the projection data and keep the acquisition geometry fixed. However, precise knowledge of the acquisition geometry is essential for high-quality reconstruction results. In this paper, the differentiable formulation of fan-beam CT reconstruction is extended to the acquisition geometry. Approach. The fan-beam CT reconstruction is analytically differentiated with respect to the acquisition geometry. This allows gradient information to be propagated from a loss function on the reconstructed image into the geometry parameters. As a proof-of-concept experiment, this idea is applied to rigid motion compensation. The cost function is parameterized by a trained neural network which regresses an image quality metric from the motion-affected reconstruction alone. Main results. The algorithm improves the structural similarity index measure (SSIM) from 0.848 for the initial motion-affected reconstruction to 0.946 after compensation. It also generalizes to real fan-beam sinograms rebinned from a helical trajectory, where the SSIM increases from 0.639 to 0.742. Significance. Using the proposed method, we are the first to optimize an autofocus-inspired algorithm based on analytical gradients. Beyond motion compensation, we see further use cases of our differentiable method in scanner calibration and hybrid techniques employing deep models.
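The core idea, propagating a gradient from an image-domain loss back into the geometry parameters, can be illustrated with a toy example. The sketch below is hypothetical and not the paper's analytical derivation: it substitutes central finite differences for the analytical gradient and a simple quadratic for the learned image-quality loss, recovering a single misestimated geometry parameter (a detector offset) by gradient descent:

```python
def grad_descent(loss, theta0, lr=0.1, steps=200, h=1e-5):
    """Minimize a scalar loss over one geometry parameter using central
    finite differences (a stand-in for analytical gradients)."""
    theta = theta0
    for _ in range(steps):
        g = (loss(theta + h) - loss(theta - h)) / (2.0 * h)  # d(loss)/d(theta)
        theta -= lr * g
    return theta

# Toy 'image-quality' loss: a quadratic whose minimum sits at the true (but
# unknown to the optimizer) detector offset of 1.5 -- hypothetical values.
true_offset = 1.5
image_loss = lambda t: (t - true_offset) ** 2
estimated_offset = grad_descent(image_loss, theta0=0.0)
```

In the paper the loss is a trained network scoring the reconstruction and the gradient is analytical rather than numerical; the optimization loop has the same shape.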
Affiliation(s)
- Mareike Thies
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Fabian Wagner
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Noah Maul
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Siemens Healthcare GmbH, Erlangen, Germany
- Lukas Folle
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Manuela Meier
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Siemens Healthcare GmbH, Erlangen, Germany
- Maximilian Rohleder
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Siemens Healthcare GmbH, Erlangen, Germany
- Laura Pfaff
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Siemens Healthcare GmbH, Erlangen, Germany
- Mingxuan Gu
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Jonas Utz
- Department AIBE, FAU Erlangen-Nürnberg, Germany
- Felix Denzinger
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Siemens Healthcare GmbH, Erlangen, Germany
- Andreas Maier
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
5
Li G, Chen X, You C, Huang X, Deng Z, Luo S. A nonconvex model-based combined geometric calibration scheme for micro cone-beam CT with irregular trajectories. Med Phys 2023; 50:2759-2774. [PMID: 36718546 DOI: 10.1002/mp.16257]
Abstract
BACKGROUND Many dedicated cone-beam CT (CBCT) systems have irregular scanning trajectories. Compared with standard CBCT calibration, accurate calibration of CBCT systems with irregular trajectories is a more complex task, since the geometric parameters vary for each scanning view. Most existing calibration methods assume that the intrinsic geometric relationship of the fiducials in the phantom is precisely known, and rarely delve into whether the phantom accuracy is adequate for the calibration model. PURPOSE A high-precision phantom and a highly robust calibration model are interdependent and mutually supportive, and both are important for calibration accuracy, especially for high-resolution CBCT. We therefore propose a calibration scheme that considers both accurate phantom measurement and robust geometric calibration. METHODS The proposed scheme consists of two parts: (1) a measurement model to acquire the accurate intrinsic geometric relationship of the fiducials in the phantom; (2) a highly noise-robust, nonconvex model-based calibration method. The measurement model in the first part extends our previous high-precision geometric calibration model for CBCT with circular trajectories. In the second part, a novel iterative method with optimization constraints based on a back-projection model is developed to solve for the geometric parameters of each view. RESULTS Simulations and real experiments show that the measurement errors of the fiducial ball bearings (BBs) are at the subpixel level. With the geometric relationship of the BBs obtained by our measurement method, the classic calibration method can achieve good calibration with far fewer BBs.
All metrics obtained in simulated experiments, as well as in real experiments on micro-CT systems with resolutions of 9 and 4.5 μm, show that the proposed calibration method has higher calibration accuracy than the competing classic method. Notably, although our measurement model proves to be very accurate, the classic calibration method based on it achieves good results only when the resolution of the measurement system is close to that of the system to be calibrated; our scheme, in contrast, enables high-accuracy calibration even when the resolution of the system to be calibrated is twice that of the measurement system. CONCLUSIONS The proposed combined geometric calibration scheme does not rely on a phantom with an intricate pattern of fiducials, so it is applicable to high-resolution micro-CT. The two parts of the scheme, the measurement model and the calibration model, both prove to be highly accurate, and their combination effectively improves calibration accuracy, especially in some extreme cases.
Affiliation(s)
- Guang Li
- Jiangsu Key Laboratory for Biomaterials and Devices, Department of Biomedical Engineering, Southeast University, Nanjing, China
- Xue Chen
- Jiangsu Key Laboratory for Biomaterials and Devices, Department of Biomedical Engineering, Southeast University, Nanjing, China
- Chenyu You
- Image Processing and Analysis Group (IPAG), Yale University, New Haven, Connecticut, USA
- Xinhai Huang
- Jiangsu Key Laboratory for Biomaterials and Devices, Department of Biomedical Engineering, Southeast University, Nanjing, China
- Zhenhao Deng
- Jiangsu Key Laboratory for Biomaterials and Devices, Department of Biomedical Engineering, Southeast University, Nanjing, China
- Shouhua Luo
- Jiangsu Key Laboratory for Biomaterials and Devices, Department of Biomedical Engineering, Southeast University, Nanjing, China
6
Ibad HA, de Cesar Netto C, Shakoor D, Sisniega A, Liu S, Siewerdsen JH, Carrino JA, Zbijewski W, Demehri S. Computed Tomography: State-of-the-Art Advancements in Musculoskeletal Imaging. Invest Radiol 2023; 58:99-110. [PMID: 35976763 PMCID: PMC9742155 DOI: 10.1097/rli.0000000000000908]
Abstract
Although musculoskeletal magnetic resonance imaging (MRI) plays a dominant role in characterizing abnormalities, novel computed tomography (CT) techniques have found an emerging niche in several scenarios, such as trauma, gout, and the characterization of pathologic biomechanical states during motion and weight-bearing. Recent developments and advancements in the field of musculoskeletal CT include 4-dimensional, cone-beam (CB), and dual-energy (DE) CT. Four-dimensional CT has the potential to quantify biomechanical derangements of peripheral joints in different joint positions to diagnose and characterize patellofemoral instability, scapholunate ligamentous injuries, and syndesmotic injuries. Cone-beam CT provides an opportunity to image peripheral joints during weight-bearing, augmenting the diagnosis and characterization of disease processes. Emerging CBCT technologies have improved spatial resolution for osseous microstructures in the quantitative analysis of osteoarthritis-related subchondral bone changes, trauma, and fracture healing. Dual-energy CT-based material decomposition visualizes and quantifies monosodium urate crystals in gout, bone marrow edema in traumatic and nontraumatic fractures, and neoplastic disease. Recently, DE techniques have been applied to CBCT, contributing to increased image quality in contrast-enhanced arthrography, bone densitometry, and bone marrow imaging. This review describes 4-dimensional CT, CBCT, and DECT advances, current logistical limitations, and prospects for each technique.
Affiliation(s)
- Hamza Ahmed Ibad
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Cesar de Cesar Netto
- Department of Orthopaedics and Rehabilitation, Carver College of Medicine, University of Iowa, Iowa City, IA, USA
- Delaram Shakoor
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Alejandro Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Stephen Liu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- John A. Carrino
- Department of Radiology and Imaging, Hospital for Special Surgery, New York, NY, USA
- Wojciech Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Shadpour Demehri
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
7
Weight-bearing cone-beam CT: the need for standardised acquisition protocols and measurements to fulfill high expectations-a review of the literature. Skeletal Radiol 2022; 52:1073-1088. [PMID: 36350387 DOI: 10.1007/s00256-022-04223-1]
Abstract
Weight-bearing CT (WBCT) of the lower extremity is gaining momentum in the evaluation of the foot/ankle and knee. A growing number of international studies use WBCT, which is promising for improving our understanding of anatomy and biomechanics during natural loading of the lower extremity. However, we believe there is a risk that excessive enthusiasm for WBCT will lead to premature application of the technique, before sufficiently robust protocols are in place, e.g. standardised limb positioning and imaging planes, and choice of anatomical landmarks and image slices used for individual measurements. Lack of standardisation could limit the benefits of introducing WBCT in research and clinical practice, because useful imaging information could become obscured. Measurements of bones and joints on WBCT are influenced by joint positioning and magnitude of loading, factors that need to be considered within a 3-D coordinate system. A proportion of WBCT studies examine inter- and intraobserver reproducibility for different radiological measurements in the knee or foot, with reproducibility generally reported to be high. However, investigations of test-retest reproducibility are still lacking. Thus, the current ability to evaluate, e.g., the effects of surgery or structural disease progression is questionable. This paper presents an overview of the relevant literature on WBCT of the lower extremity, with an emphasis on factors that may affect measurement reproducibility in the foot/ankle and knee. We discuss the caveats of performing WBCT without consensus on imaging procedures and measurements.
8
Huang H, Siewerdsen JH, Zbijewski W, Weiss CR, Unberath M, Ehtiati T, Sisniega A. Reference-free learning-based similarity metric for motion compensation in cone-beam CT. Phys Med Biol 2022; 67. [PMID: 35636391 DOI: 10.1088/1361-6560/ac749a]
Abstract
Purpose. Patient motion artifacts present a prevalent challenge to image quality in interventional cone-beam CT (CBCT). We propose a novel reference-free similarity metric (DL-VIF) that leverages the capability of deep convolutional neural networks (CNN) to learn features associated with motion artifacts within realistic anatomical features. DL-VIF aims to address shortcomings of conventional metrics of motion-induced image quality degradation that favor characteristics associated with motion-free images, such as sharpness or piecewise constancy, but lack any awareness of the underlying anatomy, potentially promoting images depicting unrealistic content. DL-VIF was integrated into an autofocus motion compensation framework to test its performance for motion estimation in interventional CBCT. Methods. DL-VIF is a reference-free surrogate for the previously reported visual information fidelity (VIF) metric, computed against a motion-free reference, generated using a CNN trained on simulated motion-corrupted and motion-free CBCT data. Relatively shallow (2-ResBlock) and deep (3-ResBlock) CNN architectures were trained and tested to assess sensitivity to motion artifacts and generalizability to unseen anatomy and motion patterns. DL-VIF was integrated into an autofocus framework for rigid motion compensation in head/brain CBCT and assessed in simulation and cadaver studies in comparison to a conventional gradient entropy metric. Results. The 2-ResBlock architecture better reflected motion severity and extrapolated to unseen data, whereas the 3-ResBlock architecture was more susceptible to overfitting, limiting its generalizability to unseen scenarios. DL-VIF outperformed gradient entropy in simulation studies, yielding average multi-resolution structural similarity index (SSIM) improvements over the uncompensated image of 0.068 and 0.034, respectively, referenced to motion-free images.
DL-VIF was also more robust in motion compensation, evidenced by reduced variance in SSIM across motion patterns (σ_DL-VIF = 0.008 versus σ_gradient entropy = 0.019). Similarly, in cadaver studies, DL-VIF demonstrated superior motion compensation compared to gradient entropy (an average SSIM improvement of 0.043 (5%), versus little improvement and even degradation in SSIM, respectively) and visually improved image quality even in severely motion-corrupted images. Conclusion. The studies demonstrated the feasibility of building reference-free similarity metrics for quantification of motion-induced image quality degradation and distortion of anatomical structures in CBCT. DL-VIF provides a reliable surrogate for motion severity, penalizes unrealistic distortions, and presents a valuable new objective function for autofocus motion compensation in CBCT.
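Autofocus motion compensation, as used here, searches over candidate motion estimates and keeps the one whose reconstruction scores best under the reference-free metric. A minimal sketch of that search loop, with hypothetical toy stand-ins for both the reconstruction operator and the metric (the real framework optimizes a continuous motion trajectory against DL-VIF):

```python
def autofocus(candidates, reconstruct, metric):
    """Score each candidate motion estimate with a reference-free
    image-quality metric and return the best-scoring candidate."""
    scored = [(metric(reconstruct(m)), m) for m in candidates]
    return max(scored)[1]  # candidate with the highest metric value

# Hypothetical toy stand-ins: the 'reconstruction' is a single scalar whose
# quality peaks when the motion estimate m matches the true amplitude (3).
reconstruct = lambda m: -(m - 3) ** 2
metric = lambda recon: recon  # identity: higher is better
best_estimate = autofocus([0, 1, 2, 3, 4], reconstruct, metric)
```

In practice the candidate set is generated by an optimizer rather than enumerated, but the objective-evaluate-select structure is the same.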
Affiliation(s)
- H Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- C R Weiss
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- M Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America
- T Ehtiati
- Siemens Medical Solutions USA, Inc., Imaging & Therapy Systems, Hoffman Estates, IL, United States of America
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
9
Maier J, Nitschke M, Choi JH, Gold G, Fahrig R, Eskofier BM, Maier A. Rigid and Non-Rigid Motion Compensation in Weight-Bearing CBCT of the Knee Using Simulated Inertial Measurements. IEEE Trans Biomed Eng 2022; 69:1608-1619. [PMID: 34714730 PMCID: PMC9134858 DOI: 10.1109/tbme.2021.3123673]
Abstract
OBJECTIVE Involuntary subject motion is the main source of artifacts in weight-bearing cone-beam CT of the knee. To achieve image quality sufficient for clinical diagnosis, the motion needs to be compensated. We propose to use inertial measurement units (IMUs) attached to the leg for motion estimation. METHODS We perform a simulation study using real motion recorded with an optical tracking system. Three IMU-based correction approaches are evaluated, namely rigid motion correction, non-rigid 2D projection deformation, and non-rigid 3D dynamic reconstruction. We present an initialization process based on the system geometry. With an IMU noise simulation, we investigate the applicability of the proposed methods in real applications. RESULTS All proposed IMU-based approaches correct motion at least as well as a state-of-the-art marker-based approach. The structural similarity index and the root mean squared error between motion-free and motion-corrected volumes are improved by 24-35% and 78-85%, respectively, compared with the uncorrected case. The noise analysis shows that the noise levels of commercially available IMUs need to be improved by a factor of 10⁵, which is currently achieved only by specialized hardware not robust enough for the application. CONCLUSION Our simulation study confirms the feasibility of this novel approach and defines the improvements necessary for a real application. SIGNIFICANCE The presented work lays the foundation for IMU-based motion compensation in cone-beam CT of the knee and creates valuable insights for future developments.
10
Hall ME, Black MS, Gold GE, Levenston ME. Validation of watershed-based segmentation of the cartilage surface from sequential CT arthrography scans. Quant Imaging Med Surg 2022; 12:1-14. [PMID: 34993056 PMCID: PMC8666781 DOI: 10.21037/qims-20-1062]
Abstract
BACKGROUND This study investigated the utility of a 2-dimensional watershed algorithm for identifying the cartilage surface in computed tomography (CT) arthrograms of the knee up to 33 minutes after an intra-articular iohexol injection as boundary blurring increased. METHODS A 2D watershed algorithm was applied to CT arthrograms of 3 bovine stifle joints taken 3, 8, 18, and 33 minutes after iohexol injection and used to segment tibial cartilage. Thickness measurements were compared to a reference standard thickness measurement and the 3-minute time point scan. RESULTS 77.2% of cartilage thickness measurements were within 0.2 mm (1 voxel) of the thickness calculated in the reference scan at the 3-minute time point. 42% fewer voxels could be segmented from the 33-minute scan than from the 3-minute scan due to diffusion of the contrast agent out of the joint space and into the cartilage, leading to blurring of the cartilage boundary. The traced watershed lines were closer to the location of the cartilage surface in areas where tissues were in direct contact with each other (cartilage-cartilage or cartilage-meniscus contact). CONCLUSIONS The use of watershed dam lines to guide cartilage segmentation shows promise for identifying cartilage boundaries from CT arthrograms in areas where soft tissues are in direct contact with each other.
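The per-voxel comparison above (thickness within one 0.2 mm voxel of the reference) can be expressed as an agreement fraction over voxels segmentable in both scans. A hypothetical sketch, not the paper's code:

```python
def fraction_within_tolerance(thickness_mm, reference_mm, tol_mm=0.2):
    """Fraction of voxel-wise thickness measurements within tol_mm of a reference.

    None marks voxels that could not be segmented in one of the scans
    (e.g., due to contrast diffusion blurring the cartilage boundary).
    """
    pairs = [(t, r) for t, r in zip(thickness_mm, reference_mm)
             if t is not None and r is not None]
    if not pairs:
        return 0.0
    within = sum(1 for t, r in pairs if abs(t - r) <= tol_mm)
    return within / len(pairs)
```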
Affiliation(s)
- Mary E. Hall
- Department of Mechanical Engineering, Stanford University, Stanford, CA, USA
- Marianne S. Black
- Department of Mechanical Engineering, Stanford University, Stanford, CA, USA
- Department of Radiology, Stanford University, Stanford, CA, USA
- Garry E. Gold
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Marc E. Levenston
- Department of Mechanical Engineering, Stanford University, Stanford, CA, USA
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
11
Unberath M, Gao C, Hu Y, Judish M, Taylor RH, Armand M, Grupp R. The Impact of Machine Learning on 2D/3D Registration for Image-Guided Interventions: A Systematic Review and Perspective. Front Robot AI 2021; 8:716007. [PMID: 34527706 PMCID: PMC8436154 DOI: 10.3389/frobt.2021.716007] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Accepted: 07/30/2021] [Indexed: 11/13/2022] Open
Abstract
Image-based navigation is widely considered the next frontier of minimally invasive surgery. It is believed that image-based navigation will increase access to reproducible, safe, and high-precision surgery, as it may then be performed at acceptable cost and effort. This is because image-based techniques avoid the need for specialized equipment and integrate seamlessly with contemporary workflows. Furthermore, image-based navigation techniques are expected to play a major role in enabling mixed reality environments, as well as autonomous and robot-assisted workflows. A critical component of image guidance is 2D/3D registration, a technique to estimate the spatial relationships between 3D structures, e.g., preoperative volumetric imagery or models of surgical instruments, and 2D images thereof, such as intraoperative X-ray fluoroscopy or endoscopy. While image-based 2D/3D registration is a mature technique, its transition from the bench to the bedside has been restrained by well-known challenges, including brittleness with respect to the optimization objective, hyperparameter selection, and initialization; difficulties in dealing with inconsistencies or multiple objects; and limited single-view performance. One reason these challenges persist today is that analytical solutions are likely inadequate considering the complexity, variability, and high dimensionality of generic 2D/3D registration problems. The recent advent of machine learning-based approaches to imaging problems that, rather than specifying the desired functional mapping, approximate it using highly expressive parametric models holds promise for solving some of the notorious challenges in 2D/3D registration. In this manuscript, we review the impact of machine learning on 2D/3D registration to systematically summarize the recent advances made by the introduction of this novel technology. Grounded in these insights, we then offer our perspective on the most pressing needs, significant open problems, and possible next steps.
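The "optimization objective" that intensity-based 2D/3D registration iterates on is an image similarity metric between a digitally reconstructed radiograph and the target X-ray. Normalized cross-correlation is a canonical choice; a minimal sketch (names are assumptions, not from the review):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-size images (flattened).

    Maximized (value 1) when the two images agree up to an affine
    intensity mapping, which is what makes it a popular registration
    objective for DRR-to-X-ray comparison.
    """
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    da = [x - mean_a for x in a]
    db = [y - mean_b for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0
```

The brittleness noted in the abstract stems from metrics like this being highly non-convex in pose, which is why learned components are attractive for capturing long-range pose relations.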
Affiliation(s)
- Mathias Unberath
- Advanced Robotics and Computationally Augmented Environments (ARCADE) Lab, Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
12
Kyme AZ, Fulton RR. Motion estimation and correction in SPECT, PET and CT. Phys Med Biol 2021; 66. [PMID: 34102630 DOI: 10.1088/1361-6560/ac093b] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Accepted: 06/08/2021] [Indexed: 11/11/2022]
Abstract
Patient motion impacts single photon emission computed tomography (SPECT), positron emission tomography (PET) and X-ray computed tomography (CT) by giving rise to projection data inconsistencies that can manifest as reconstruction artifacts, thereby degrading image quality and compromising accurate image interpretation and quantification. Methods to estimate and correct for patient motion in SPECT, PET and CT have attracted considerable research effort over several decades. The aims of this effort have been two-fold: to estimate relevant motion fields characterizing the various forms of voluntary and involuntary motion; and to apply these motion fields within a modified reconstruction framework to obtain motion-corrected images. The aims of this review are to outline the motion problem in medical imaging and to critically review published methods for estimating and correcting for the relevant motion fields in clinical and preclinical SPECT, PET and CT. Despite many similarities in how motion is handled between these modalities, utility and applications vary based on differences in temporal and spatial resolution. Technical feasibility has been demonstrated in each modality for both rigid and non-rigid motion, but clinical feasibility remains an important target. There is considerable scope for further developments in motion estimation and correction, and particularly in data-driven methods that will aid clinical utility. State-of-the-art machine learning methods may have a unique role to play in this context.
Affiliation(s)
- Andre Z Kyme
- School of Biomedical Engineering, The University of Sydney, Sydney, New South Wales, Australia
- Roger R Fulton
- Sydney School of Health Sciences, The University of Sydney, Sydney, New South Wales, Australia
13
Maier J, Maier A, Eskofier B, Fahrig R, Choi JH. 3D Non-Rigid Alignment of Low-Dose Scans Allows to Correct for Saturation in Lower Extremity Cone-Beam CT. IEEE Access 2021; 9:71821-71831. [PMID: 34141516 PMCID: PMC8208599 DOI: 10.1109/access.2021.3079368] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Detector saturation in cone-beam computed tomography occurs when an object of highly varying shape and material composition is imaged using an automatic exposure control (AEC) system. When imaging a subject's knees, high beam energy ensures the visibility of internal structures but leads to overexposure in less dense border regions. In this work, we propose to use an additional low-dose scan to correct the saturation artifacts of AEC scans. Overexposed pixels are identified in the projection images of the AEC scan using histogram-based thresholding. The saturation-free pixels from the AEC scan are combined with the skin border pixels of the low-dose scan prior to volumetric reconstruction. To compensate for patient motion between the two scans, a 3D non-rigid alignment of the projection images in a backward-forward-projection process based on fiducial marker positions is proposed. On numerical simulations, the projection combination improved the structural similarity index measure from 0.883 to 0.999. Further evaluations were performed on two in vivo subject knee acquisitions, one without and one with motion between the AEC and low-dose scans. Saturation-free reference images were acquired using a beam attenuator. The proposed method could qualitatively restore the information of peripheral tissue structures. Applying the 3D non-rigid alignment made it possible to use the projection images with inter-scan subject motion for projection image combination. The increase in radiation exposure due to the additional low-dose scan was found to be negligibly low. The presented methods allow simple but effective correction of saturation artifacts.
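The projection combination step can be pictured as a per-pixel substitution: pixels flagged as overexposed in the AEC projection are replaced by the corresponding low-dose pixels before reconstruction. A toy sketch (the saturation level and names are assumptions; the paper identifies overexposed pixels via histogram-based thresholding and additionally compensates inter-scan motion):

```python
def combine_projections(aec, low_dose, saturation_level):
    """Replace saturated AEC detector pixels with low-dose-scan pixels.

    aec, low_dose: flattened projection images of equal size.
    saturation_level: detector value at or above which a pixel is overexposed.
    """
    return [p_low if p_aec >= saturation_level else p_aec
            for p_aec, p_low in zip(aec, low_dose)]
```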
Affiliation(s)
- Jennifer Maier
- Pattern Recognition Laboratory, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Machine Learning and Data Analytics Laboratory, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany
- Andreas Maier
- Pattern Recognition Laboratory, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Bjoern Eskofier
- Machine Learning and Data Analytics Laboratory, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany
- Jang-Hwan Choi
- Division of Mechanical and Biomedical Engineering, Graduate Program in System Health Science and Engineering, Ewha Womans University, Seoul 03760, South Korea
14
Capostagno S, Sisniega A, Stayman JW, Ehtiati T, Weiss CR, Siewerdsen JH. Deformable motion compensation for interventional cone-beam CT. Phys Med Biol 2021; 66:055010. [PMID: 33594993 DOI: 10.1088/1361-6560/abb16e] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Image-guided therapies in the abdomen and pelvis are often hindered by motion artifacts in cone-beam CT (CBCT) arising from complex, non-periodic, deformable organ motion during long scan times (5-30 s). We propose a deformable image-based motion compensation method to address these challenges and improve CBCT guidance. Motion compensation is achieved by selecting a set of small regions of interest in the uncompensated image to minimize a cost function consisting of an autofocus objective and spatiotemporal regularization penalties. Motion trajectories are estimated using an iterative optimization algorithm (CMA-ES) and used to interpolate a 4D spatiotemporal motion vector field. The motion-compensated image is reconstructed using a modified filtered backprojection approach. Being image-based, the method does not require additional input besides the raw CBCT projection data and system geometry that are used for image reconstruction. Experimental studies investigated: (1) various autofocus objective functions, analyzed using a digital phantom with a range of sinusoidal motion magnitudes (4, 8, 12, 16, 20 mm); (2) spatiotemporal regularization, studied using a CT dataset from The Cancer Imaging Archive with deformable sinusoidal motion of variable magnitude (10, 15, 20, 25 mm); and (3) performance in complex anatomy, evaluated in cadavers undergoing simple and complex motion imaged on a CBCT-capable mobile C-arm system (Cios Spin 3D, Siemens Healthineers, Forchheim, Germany). Gradient entropy was found to be the best autofocus objective for soft-tissue CBCT, increasing structural similarity (SSIM) by 42%-92% over the range of motion magnitudes investigated. The optimal temporal regularization strength was found to vary widely (0.5-5 mm⁻²) over the range of motion magnitudes investigated, whereas the optimal spatial regularization strength was relatively constant (0.1). In cadaver studies, deformable motion compensation was shown to improve local SSIM by ∼17% for simple motion and ∼21% for complex motion and provided strong visual improvement of motion artifacts (reduction of blurring and streaks and improved visibility of soft-tissue edges). The studies demonstrate the robustness of deformable motion compensation to a range of motion magnitudes, frequencies, and other factors (e.g. truncation and scatter).
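Gradient entropy, the autofocus objective found to work best here, treats the normalized gradient magnitudes of the image as a probability distribution; a sharp image concentrates that distribution on a few strong edges and thus yields lower entropy. A 1D sketch (illustrative; the actual objective operates on 3D reconstructed volumes):

```python
import math

def gradient_entropy(profile):
    """Entropy of normalized absolute gradients of a 1D profile; lower is sharper."""
    grads = [abs(b - a) for a, b in zip(profile, profile[1:])]
    total = sum(grads)
    if total == 0:
        return 0.0  # perfectly flat profile
    probs = [g / total for g in grads if g > 0]
    return -sum(p * math.log(p) for p in probs)

sharp = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]      # crisp edge: one strong gradient
blurred = [0.0, 0.25, 0.5, 0.75, 1.0, 1.0]  # same edge smeared over four samples
```

An autofocus search therefore adjusts the motion trajectory to minimize this metric (plus the regularization penalties) over the selected regions of interest.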
Affiliation(s)
- S Capostagno
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
15
Preuhs A, Manhart M, Roser P, Hoppe E, Huang Y, Psychogios M, Kowarschik M, Maier A. Appearance Learning for Image-Based Motion Estimation in Tomography. IEEE Trans Med Imaging 2020; 39:3667-3678. [PMID: 32746114 DOI: 10.1109/tmi.2020.3002695] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
In tomographic imaging, anatomical structures are reconstructed by applying a pseudo-inverse forward model to acquired signals. Geometric information within this process usually depends only on the system setting, i.e., the scanner position or readout direction. Patient motion therefore corrupts the geometry alignment in the reconstruction process, resulting in motion artifacts. We propose an appearance learning approach that recognizes the structures of rigid motion independently of the scanned object. To this end, we train a Siamese triplet network in a multi-task learning approach to predict, from the reconstructed volume, the reprojection error (RPE) for the complete acquisition as well as an approximate distribution of the RPE along the single views. The RPE measures the motion-induced geometric deviations independently of the object, based on virtual marker positions that are available during training. We train our network using 27 patients with a 21-4-2 split for training, validation, and testing. On average, we achieve a residual mean RPE of 0.013 mm with an inter-patient standard deviation of 0.022 mm. This is twice the accuracy of previously published results. In a motion estimation benchmark, the proposed approach achieves superior results in comparison with two state-of-the-art measures in nine out of twelve experiments. The clinical applicability of the proposed method is demonstrated on a motion-affected clinical dataset.
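The RPE target the network regresses can be written down directly: given virtual 3D marker positions, it is the mean 2D distance between their projections under the true and the motion-corrupted geometry. A minimal sketch with an ideal pinhole model and hypothetical marker coordinates (the paper's geometry is a full C-arm projection model):

```python
import math

def project(p, focal=1000.0):
    """Ideal pinhole projection of a 3D point onto the detector plane."""
    return (focal * p[0] / p[2], focal * p[1] / p[2])

def reprojection_error(markers, project_true, project_est):
    """Mean 2D distance between marker projections under two geometries."""
    dists = [math.dist(project_true(m), project_est(m)) for m in markers]
    return sum(dists) / len(dists)

markers = [(0.0, 0.0, 500.0), (30.0, -20.0, 600.0), (-10.0, 40.0, 550.0)]
# A detector shift of (3, 4) pixels models a small motion-induced geometry error.
shifted = lambda p: tuple(u + d for u, d in zip(project(p), (3.0, 4.0)))
rpe = reprojection_error(markers, project, shifted)  # 5.0 for a uniform (3, 4) shift
```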
16
Ko Y, Moon S, Baek J, Shim H. Rigid and non-rigid motion artifact reduction in X-ray CT using attention module. Med Image Anal 2020; 67:101883. [PMID: 33166775 DOI: 10.1016/j.media.2020.101883] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Revised: 10/13/2020] [Accepted: 10/14/2020] [Indexed: 12/16/2022]
Abstract
Motion artifacts are a major factor that can degrade the diagnostic performance of computed tomography (CT) images. In particular, the motion artifacts become considerably more severe when an imaging system requires a long scan time, such as in dental CT or cone-beam CT (CBCT) applications, where patients generate rigid and non-rigid motions. To address this problem, we proposed a new real-time technique for motion artifact reduction that utilizes a deep residual network with an attention module. Our attention module was designed to increase the model capacity by amplifying or attenuating the residual features according to their importance. We trained and evaluated the network by creating four benchmark datasets with rigid motions or with both rigid and non-rigid motions under a step-and-shoot fan-beam CT (FBCT) or a CBCT. Each dataset provided a set of motion-corrupted CT images and their ground-truth CT image pairs. The strong modeling power of the proposed network model allowed us to successfully handle motion artifacts from the two CT systems under various motion scenarios in real-time. As a result, the proposed model demonstrated clear performance benefits. In addition, we compared our model with Wasserstein generative adversarial network (WGAN)-based models and a deep residual network (DRN)-based model, which are among the most powerful techniques for CT denoising and natural RGB image deblurring, respectively. Based on the extensive analysis and comparisons using four benchmark datasets, we confirmed that our model outperformed the aforementioned competitors. Our benchmark datasets and implementation code are available at https://github.com/youngjun-ko/ct_mar_attention.
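The attention module's role, amplifying or attenuating residual features by importance, can be sketched as a sigmoid gate applied channel-wise to the residual branch (in the spirit of squeeze-and-excitation gating; the real module is a learned CNN component, and these names and numbers are assumptions):

```python
import math

def sigmoid(x):
    """Logistic function mapping a score to a gate in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def gated_residual(features, residual, scores):
    """Add a residual whose channels are scaled by learned importance scores.

    A large positive score passes the residual channel through almost
    unchanged; a large negative score suppresses it.
    """
    gates = [sigmoid(s) for s in scores]
    return [f + g * r for f, g, r in zip(features, gates, residual)]
```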
Affiliation(s)
- Youngjun Ko
- School of the Integrated Technology, Yonsei University, Songdogwahak-ro 85, Yeonsu-gu, Incheon, South Korea
- Seunghyuk Moon
- School of the Integrated Technology, Yonsei University, Songdogwahak-ro 85, Yeonsu-gu, Incheon, South Korea
- Jongduk Baek
- School of the Integrated Technology, Yonsei University, Songdogwahak-ro 85, Yeonsu-gu, Incheon, South Korea.
- Hyunjung Shim
- School of the Integrated Technology, Yonsei University, Songdogwahak-ro 85, Yeonsu-gu, Incheon, South Korea.
17
Schaffert R, Wang J, Fischer P, Borsdorf A, Maier A. Learning an Attention Model for Robust 2-D/3-D Registration Using Point-To-Plane Correspondences. IEEE Trans Med Imaging 2020; 39:3159-3174. [PMID: 32305908 DOI: 10.1109/tmi.2020.2988410] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Minimally invasive procedures rely on image guidance for navigation at the operation site to avoid large surgical incisions. X-ray images are often used for guidance, but important structures may not be clearly visible. These structures can be overlaid from pre-operative 3-D images, and accurate alignment can be established using 2-D/3-D registration. Registration based on the point-to-plane correspondence model was recently proposed and shown to achieve state-of-the-art performance. However, registration may still fail in challenging cases due to a large portion of outliers. In this paper, we describe a learning-based correspondence weighting scheme to improve the registration performance. By learning an attention model, inlier correspondences receive higher attention in the motion estimation while outlier correspondences are suppressed. Instead of using per-correspondence labels, our objective function allows the model to be trained directly by minimizing the registration error. We demonstrate highly increased robustness, e.g. increasing the success rate from 84.9% to 97.0% for spine registration. In contrast to previously proposed learning-based methods, we also achieve a high accuracy of around 0.5 mm mean re-projection distance. In addition, our method requires a relatively small amount of training data, is able to learn from simulated data, and generalizes to images with additional structures which are not present during training. Furthermore, a single model can be trained for both different views and different anatomical structures.
18
Gu W, Gao C, Grupp R, Fotouhi J, Unberath M. Extended Capture Range of Rigid 2D/3D Registration by Estimating Riemannian Pose Gradients. Machine Learning in Medical Imaging (MLMI Workshop) 2020; 12436:281-291. [PMID: 33145587 PMCID: PMC7605345 DOI: 10.1007/978-3-030-59861-7_29] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
Traditional intensity-based 2D/3D registration requires near-perfect initialization in order for image similarity metrics to yield meaningful updates of X-ray pose and reduce the likelihood of getting trapped in a local minimum. The conventional approaches strongly depend on image appearance rather than content, and therefore, fail in revealing large pose offsets that substantially alter the appearance of the same structure. We complement traditional similarity metrics with a convolutional neural network-based (CNN-based) registration solution that captures large-range pose relations by extracting both local and contextual information, yielding meaningful X-ray pose updates without the need for accurate initialization. To register a 2D X-ray image and a 3D CT scan, our CNN accepts a target X-ray image and a digitally reconstructed radiograph at the current pose estimate as input and iteratively outputs pose updates in the direction of the pose gradient on the Riemannian Manifold. Our approach integrates seamlessly with conventional image-based registration frameworks, where long-range relations are captured primarily by our CNN-based method while short-range offsets are recovered accurately with an image similarity-based method. On both synthetic and real X-ray images of the human pelvis, we demonstrate that the proposed method can successfully recover large rotational and translational offsets, irrespective of initialization.
Affiliation(s)
- Wenhao Gu
- Johns Hopkins University, Baltimore MD 21218, USA
| | - Cong Gao
- Johns Hopkins University, Baltimore MD 21218, USA
| | - Robert Grupp
- Johns Hopkins University, Baltimore MD 21218, USA
19
Sisniega A, Thawait GK, Shakoor D, Siewerdsen JH, Demehri S, Zbijewski W. Motion compensation in extremity cone-beam computed tomography. Skeletal Radiol 2019; 48:1999-2007. [PMID: 31172206 PMCID: PMC6814492 DOI: 10.1007/s00256-019-03241-w] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/18/2019] [Revised: 04/29/2019] [Accepted: 05/12/2019] [Indexed: 02/02/2023]
Abstract
OBJECTIVES To evaluate the improvement in extremity cone-beam computed tomography (CBCT) image quality in datasets with motion artifact using a motion compensation method based on maximizing image sharpness. METHODS Following IRB approval, retrospective analysis of 308 CBCT scans of lower extremities was performed by a fellowship-trained musculoskeletal radiologist to identify images with moderate to severe motion artifact. Twenty-four scans of 22 patients (18 male, 4 female; mean age 32 years; range 21-74 years) were chosen for inclusion. Sharp (bone) and smooth (soft tissue) reconstructions were processed using the motion compensation algorithm. Two experts rated visualization of trabecular bone, cortical bone, joint spaces, and tendon on a nine-level Likert scale with and without motion compensation (a total of 96 datasets). Visual grading characteristics (VGC) was used to quantitatively determine the difference in image quality following motion compensation. Intra-class correlation coefficient (ICC) was obtained to assess inter-observer agreement. RESULTS Motion-compensated images exhibited appreciable reduction in artifacts. The observer study demonstrated the associated improvement in diagnostic quality. The fraction of cases receiving scores better than "Fair" increased from less than 10% without compensation to 40-70% following compensation, depending on the task. The area under the VGC curve was 0.75 (tendon) to 0.85 (cortical bone), confirming preference for motion compensated images. ICC values showed excellent agreement between readers before (ICC range, 0.8-0.91) and after motion compensation (ICC range, 0.92-0.97). CONCLUSIONS The motion compensation algorithm significantly improved the visualization of bone and soft tissue structures in extremity CBCT for cases exhibiting patient motion.
Affiliation(s)
- Alejandro Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- Gaurav K Thawait
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, 21278, USA
- Delaram Shakoor
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, 21278, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, 21278, USA
- Shadpour Demehri
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, 21278, USA
- Wojciech Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA

20
Fotouhi J, Unberath M, Song T, Hajek J, Lee SC, Bier B, Maier A, Osgood G, Armand M, Navab N. Co-localized augmented human and X-ray observers in collaborative surgical ecosystem. Int J Comput Assist Radiol Surg 2019; 14:1553-1563. [PMID: 31350704 DOI: 10.1007/s11548-019-02035-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2019] [Accepted: 07/18/2019] [Indexed: 10/26/2022]
Abstract
PURPOSE Image-guided percutaneous interventions are safer alternatives to conventional orthopedic and trauma surgeries. To advance surgical tools through complex bony structures with confidence during these procedures, a large number of images is acquired. While image guidance is the de facto standard to guarantee an acceptable outcome, when these images are presented on monitors far from the surgical site, the information content cannot easily be associated with the 3D patient anatomy. METHODS In this article, we propose a collaborative augmented reality (AR) surgical ecosystem to jointly co-localize the C-arm X-ray and surgeon viewer. The technical contributions of this work include (1) joint calibration of a visual tracker on a C-arm scanner and its X-ray source via a hand-eye calibration strategy, and (2) inside-out co-localization of human and X-ray observers in shared tracking and augmentation environments using vision-based simultaneous localization and mapping. RESULTS We present a thorough evaluation of the hand-eye calibration procedure. Results suggest convergence when using 50 pose pairs or more. The mean translation and rotation errors at convergence are 5.7 mm and [Formula: see text], respectively. Further, user-in-the-loop studies were conducted to estimate the end-to-end target augmentation error. The mean distance between landmarks in the real and virtual environments was 10.8 mm. CONCLUSIONS The proposed AR solution provides a shared augmented experience between the human and X-ray viewer. The collaborative surgical AR system has the potential to simplify hand-eye coordination for surgeons or intuitively inform C-arm technologists for prospective X-ray view-point planning.
Affiliation(s)
- Javad Fotouhi
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, USA
- Mathias Unberath
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, USA
- Tianyu Song
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
- Jonas Hajek
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Sing Chun Lee
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, USA
- Bastian Bier
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Greg Osgood
- Department of Orthopedic Surgery, Johns Hopkins Hospital, Baltimore, USA
- Mehran Armand
- Applied Physics Laboratory, Johns Hopkins University, Baltimore, USA
- Department of Orthopedic Surgery, Johns Hopkins Hospital, Baltimore, USA
- Nassir Navab
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, USA
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany

21
Preuhs A, Maier A, Manhart M, Kowarschik M, Hoppe E, Fotouhi J, Navab N, Unberath M. Symmetry prior for epipolar consistency. Int J Comput Assist Radiol Surg 2019; 14:1541-1551. [DOI: 10.1007/s11548-019-02027-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2019] [Accepted: 07/03/2019] [Indexed: 10/26/2022]
22
Zhang Y, Zhang L, Sun Y. Rigid motion artifact reduction in CT using extended difference function. J Xray Sci Technol 2019; 27:273-285. [PMID: 30856149 DOI: 10.3233/xst-180442] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
BACKGROUND In computed tomography (CT), patient motion results in degraded spatial resolution and image artifacts. OBJECTIVE To eliminate motion artifacts, this study presents a method to estimate the motion parameters from sinograms based on an extended difference function. METHODS Based on our previous work, we first divide the projection data into two parts according to view angles and take the Radon transform. Then, we calculate the extended difference functions and search for their minimum points. The relative displacements can be determined from the minimum points, and the motion can be estimated from the relationships between the relative displacements and the motion parameters. Finally, we introduce the estimated parameters into the reconstruction process to compensate for the motion effects. RESULTS The simulation results show that running times are reduced by about 30% compared with our previous work. In phantom experiments, the relative mean rotation excursion (RMRE) and relative mean translation excursion (RMTE) of the new method are lower than those of the conventional Helgason-Ludwig consistency condition (HLCC) based method and comparable to our previous work. Compared with the HLCC method, the root mean square error (RMSE) of the new method is also reduced, while the Pearson correlation coefficient (CC) and mean structural similarity index (MSSIM) increase. CONCLUSIONS The proposed method yields improved motion-estimation accuracy with higher computational efficiency, and thus it can produce high-quality images.
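The core idea of a difference-function estimator is that the displacement between two related signals sits at the minimum of their shifted difference. A 1D toy version with integer shifts (illustrative only; the paper works with extended difference functions of Radon-transformed projection data, which this does not reproduce):

```python
def difference_function(a, b, max_shift):
    """Mean absolute difference between a and b shifted by s, for each shift s."""
    n = len(a)
    scores = {}
    for s in range(-max_shift, max_shift + 1):
        pairs = [(a[i], b[i + s]) for i in range(n) if 0 <= i + s < n]
        scores[s] = sum(abs(x - y) for x, y in pairs) / len(pairs)
    return scores

def estimate_shift(a, b, max_shift):
    """Displacement estimate: the shift minimizing the difference function."""
    scores = difference_function(a, b, max_shift)
    return min(scores, key=scores.get)

a = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0]
b = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0]  # a delayed by two samples
```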
Affiliation(s)
- Yuan Zhang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Liyi Zhang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- School of Information Engineering, Tianjin University of Commerce, Tianjin, China
- Yunshan Sun
- School of Information Engineering, Tianjin University of Commerce, Tianjin, China
| |
23
Shieh CC, Barber J, Counter W, Sykes J, Bennett P, Heng SM, White P, Corde S, Jackson M, Ahern V, Feain I, O'Brien R, Keall PJ. Cone-beam CT reconstruction with gravity-induced motion. Phys Med Biol 2018; 63:205007. [DOI: 10.1088/1361-6560/aae1bb]
24
Klugmann A, Bier B, Müller K, Maier A, Unberath M. Deformable respiratory motion correction for hepatic rotational angiography. Comput Med Imaging Graph 2018; 66:82-89. [DOI: 10.1016/j.compmedimag.2018.03.003]
25
Jacobson MW, Ketcha MD, Capostagno S, Martin A, Uneri A, Goerres J, De Silva T, Reaungamornrat S, Han R, Manbachi A, Stayman JW, Vogt S, Kleinszig G, Siewerdsen JH. A line fiducial method for geometric calibration of cone-beam CT systems with diverse scan trajectories. Phys Med Biol 2018; 63:025030. [PMID: 29116058 PMCID: PMC5868366 DOI: 10.1088/1361-6560/aa9910]
Abstract
Modern cone-beam CT systems, especially C-arms, are capable of diverse source-detector orbits. However, geometric calibration of these systems using conventional configurations of spherical fiducials (BBs) may be challenged by novel source-detector orbits and system geometries. In part, this is because BB configurations are designed with careful forethought about the intended orbit, so that BB marker projections do not overlap in projection views. Examples include helical arrangements of BBs (Rougee et al 1993 Proc. SPIE 1897 161-9), in which markers do not overlap in projections acquired from a circular orbit, and circular arrangements of BBs (Cho et al 2005 Med. Phys. 32 968-83). As a more general alternative, this work proposes a calibration method based on an array of line-shaped, radio-opaque wire segments. With this method, geometric parameter estimation is accomplished by relating the 3D line equations representing the wires to the 2D line equations of their projections. The use of line fiducials avoids many of the challenges of fiducial recognition and extraction in an orbit-independent manner. For example, their projections can overlap only mildly, for any gantry pose, as long as the wires are mutually non-coplanar in 3D. The method was tested in application to circular and non-circular trajectories in simulation and in real orbits executed using a mobile C-arm prototype for cone-beam CT. Results indicated high calibration accuracy, as measured by forward projection and backprojection/triangulation error metrics. Triangulation errors on the order of microns and backprojected ray deviations uniformly less than 0.2 mm were observed in both real and simulated orbits. Mean forward projection errors less than 0.1 mm were observed in a comprehensive sweep of different C-arm gantry angulations. Finally, successful integration of the method into a CT imaging chain was demonstrated in head phantom scans.
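The core geometric relation in this line-fiducial approach (a 3D wire maps to a 2D line on the detector under the 3x4 projection matrix) can be sketched in a few lines of NumPy. This is an illustrative reconstruction using standard projective geometry, not the authors' implementation; all function names are hypothetical:

```python
import numpy as np

def project(P, X):
    """Project a homogeneous 3D point X (4,) with a 3x4 matrix P to a
    normalized homogeneous 2D point (3,)."""
    x = P @ X
    return x / x[2]

def wire_projection_line(P, A, B):
    """Homogeneous 2D line through the projections of wire endpoints A, B (3,).
    The cross product of two homogeneous image points gives the line joining
    them; normalizing the first two coefficients makes l @ x a point-line
    distance in detector units."""
    a = project(P, np.append(A, 1.0))
    b = project(P, np.append(B, 1.0))
    l = np.cross(a, b)
    return l / np.linalg.norm(l[:2])

def line_residual(P, A, B, detected_line):
    """Calibration residual: distance of the projected wire endpoints from the
    line detected on the detector (also in normalized homogeneous form)."""
    return np.array([abs(detected_line @ project(P, np.append(X, 1.0)))
                     for X in (A, B)])
```

Fitting the geometric parameters then amounts to minimizing such residuals over all wires and views, which is the sense in which 3D line equations are "related" to their 2D projections.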
Affiliation(s)
- M W Jacobson, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
26
Range Imaging for Motion Compensation in C-Arm Cone-Beam CT of Knees under Weight-Bearing Conditions. J Imaging 2018. [DOI: 10.3390/jimaging4010013]
27
Jang S, Kim S, Kim M, Ra JB. Head motion correction based on filtered backprojection for x-ray CT imaging. Med Phys 2017; 45:589-604. [DOI: 10.1002/mp.12705]
Affiliation(s)
- Seokhwan Jang, School of Electrical Engineering, KAIST, Daejeon, Republic of Korea
- Seungeon Kim, School of Electrical Engineering, KAIST, Daejeon, Republic of Korea
- Mina Kim, School of Electrical Engineering, KAIST, Daejeon, Republic of Korea
- Jong Beom Ra, School of Electrical Engineering, KAIST, Daejeon, Republic of Korea
28
Ouadah S, Jacobson M, Stayman JW, Ehtiati T, Weiss C, Siewerdsen JH. Correction of patient motion in cone-beam CT using 3D-2D registration. Phys Med Biol 2017; 62:8813-8831. [PMID: 28994668 PMCID: PMC5894892 DOI: 10.1088/1361-6560/aa9254]
Abstract
Cone-beam CT (CBCT) is increasingly common in guidance of interventional procedures, but can be subject to artifacts arising from patient motion during fairly long (~5-60 s) scan times. We present a fiducial-free method to mitigate motion artifacts using 3D-2D image registration that simultaneously corrects residual errors in the intrinsic and extrinsic parameters of geometric calibration. The 3D-2D registration process registers each projection to a prior 3D image by maximizing gradient orientation using the covariance matrix adaptation-evolution strategy optimizer. The resulting rigid transforms are applied to the system projection matrices, and a 3D image is reconstructed via model-based iterative reconstruction. Phantom experiments were conducted using a Zeego robotic C-arm to image a head phantom undergoing 5-15 cm translations and 5-15° rotations. To further test the algorithm, clinical images were acquired with a CBCT head scanner in which long scan times were susceptible to significant patient motion. CBCT images were reconstructed using a penalized likelihood objective function. For phantom studies the structural similarity (SSIM) between motion-free and motion-corrected images was >0.995, with significant improvement (p < 0.001) compared to the SSIM values of uncorrected images. Additionally, motion-corrected images exhibited a point-spread function with full-width at half maximum comparable to that of the motion-free reference image. Qualitative comparison of the motion-corrupted and motion-corrected clinical images demonstrated a significant improvement in image quality after motion correction. This indicates that the 3D-2D registration method could provide a useful approach to motion artifact correction under assumptions of local rigidity, as in the head, pelvis, and extremities. The method is highly parallelizable, and the automatic correction of residual geometric calibration errors provides added benefit that could be valuable in routine use.
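The bookkeeping step described in this abstract (folding each per-view rigid transform into the system projection matrices before reconstruction) can be sketched as follows. This is a minimal illustration of the underlying projective identity, not the authors' code; the Euler-angle parameterization and function names are assumptions:

```python
import numpy as np

def rigid_matrix(rx, ry, rz, t):
    """4x4 homogeneous rigid transform from rotations (radians, applied as
    Rz @ Ry @ Rx) and a translation vector t. The angle convention here is
    illustrative, not necessarily the paper's."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = t
    return T

def correct_projection_matrices(Ps, transforms):
    """Fold per-view rigid motion estimates into the 3x4 projection matrices:
    a point moved by T in world space is imaged exactly as if the original
    point were projected by P @ T."""
    return [P @ T for P, T in zip(Ps, transforms)]
```

The corrected matrices can then be handed unchanged to a model-based iterative reconstruction, which is what makes the approach fiducial-free and purely image-based.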
Affiliation(s)
- S Ouadah, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
29
Wang J, Schaffert R, Borsdorf A, Heigl B, Huang X, Hornegger J, Maier A. Dynamic 2-D/3-D Rigid Registration Framework Using Point-To-Plane Correspondence Model. IEEE Trans Med Imaging 2017; 36:1939-1954. [PMID: 28489534 DOI: 10.1109/tmi.2017.2702100]
Abstract
In image-guided interventional procedures, live 2-D X-ray images can be augmented with preoperative 3-D computed tomography or MRI images to provide planning landmarks and enhanced spatial perception. An accurate alignment between the 3-D and 2-D images is a prerequisite for fusion applications. This paper presents a dynamic rigid 2-D/3-D registration framework, which measures the local 3-D-to-2-D misalignment and efficiently constrains the update of both planar and non-planar 3-D rigid transformations using a novel point-to-plane correspondence model. In the simulation evaluation, the proposed method achieved a mean 3-D accuracy of 0.07 mm for the head phantom and 0.05 mm for the thorax phantom using single-view X-ray images. In the evaluation of dynamic motion compensation, our method significantly increases accuracy compared with the baseline method. The proposed method was also evaluated on a publicly available clinical angiogram data set with "gold-standard" registrations, achieving a mean 3-D accuracy below 0.8 mm and a mean 2-D accuracy below 0.3 mm using single-view X-ray images. It outperformed the state-of-the-art methods in both accuracy and robustness in single-view registration. The proposed method is intuitive, generic, and suitable for both initial and dynamic registration scenarios.
30
Berger M, Xia Y, Aichinger W, Mentl K, Unberath M, Aichert A, Riess C, Hornegger J, Fahrig R, Maier A. Motion compensation for cone-beam CT using Fourier consistency conditions. Phys Med Biol 2017; 62:7181-7215. [PMID: 28741597 DOI: 10.1088/1361-6560/aa8129]
Abstract
In cone-beam CT, involuntary patient motion and inaccurate or irreproducible scanner motion substantially degrade image quality. To avoid artifacts, this motion needs to be estimated and compensated during image reconstruction. In previous work we showed that Fourier consistency conditions (FCC) can be used in fan-beam CT to estimate motion in the sinogram domain. This work extends the FCC to [Formula: see text] cone-beam CT. We derive an efficient cost function to compensate for [Formula: see text] motion using [Formula: see text] detector translations. The extended FCC method has been tested with five translational motion patterns, using a challenging numerical phantom. We evaluated the root-mean-square error and the structural similarity index between motion-corrected and motion-free reconstructions. Additionally, we computed the mean absolute difference (MAD) between the estimated and the ground-truth motion. The practical applicability of the method is demonstrated by application to respiratory motion estimation in rotational angiography and to motion correction for weight-bearing imaging of knees, where the latter uses a specifically modified FCC version that is robust to axial truncation. The results show a substantial reduction of motion artifacts. Accurate estimation was achieved, with maximum MAD values of 708 μm and 1184 μm for motion along the vertical and horizontal detector directions, respectively. The image quality of reconstructions obtained with the proposed method is close to that of motion-corrected reconstructions based on the ground-truth motion. Simulations using noise-free and noisy data demonstrate that FCC are robust to noise. Even high-frequency motion was accurately estimated, leading to a considerable reduction of streaking artifacts. The method is purely image-based and therefore independent of any auxiliary data.
Affiliation(s)
- M Berger, Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany; Graduate School 1773, Heterogeneous Image Systems, 91058 Erlangen, Germany
31
Nardi C, Buzzi R, Molteni R, Cossi C, Lorini C, Calistri L, Colagrande S. The role of cone beam CT in the study of symptomatic total knee arthroplasty (TKA): a 20 cases report. Br J Radiol 2017; 90:20160925. [PMID: 28467105 DOI: 10.1259/bjr.20160925]
Abstract
OBJECTIVE The aims of this study were to evaluate the efficacy of cone beam CT (CBCT) in the study of the patellar tilt angle and the rotational alignment of the femoral/tibial component after total knee arthroplasty, and to estimate how much metallic artefacts impaired detection of periprosthetic bone structures and bordering tendon-muscle structures. METHODS 20 symptomatic total knee arthroplasties were examined using CBCT by three independent observers. The patellar tilt angle and the rotational alignment of the femoral and tibial components were measured in relation to the femoral flange, the transepicondylar axis and the tibial tuberosity, respectively. A four-score scale, ranging from "many metallic artefacts" (the structure cannot be identified) to "no metallic artefacts" (the structure can be perfectly identified), was used to judge every structure. RESULTS The patellar tilt angle and the rotational alignment of the prosthetic components showed very high intra- and interobserver agreement (intraclass correlation coefficient values 0.895-0.975 and 0.891-0.948, respectively). Bone and tendon-muscle structures could not be identified in the distal part of the femoral component, whereas they could be well identified in the proximal part of the femoral component and in the proximal/middle third of the tibial stem. CONCLUSION CBCT was an effective tool, providing reproducible measurements of the patellar tilt angle and the rotational alignment of the femoral/tibial component. Furthermore, it allowed analysis of bone and tendon-muscle structures with little impediment from metal artefacts. Advances in knowledge: CBCT allows easy and accurate measurements in the rotational axial plane, unburdened by image quality impairment due to metal artefacts.
Affiliation(s)
- Cosimo Nardi, Department of Experimental and Clinical Biomedical Sciences, Radiodiagnostic Unit n. 2, University of Florence-Azienda Ospedaliero Universitaria Careggi, Florence, Italy
- Roberto Buzzi, Department of Surgery and Translation Medicine, University of Florence-Azienda Ospedaliero Universitaria Careggi, Florence, Italy
- Chiara Lorini, Department of Health Science, University of Florence, Florence, Italy
- Linda Calistri, Department of Experimental and Clinical Biomedical Sciences, Radiodiagnostic Unit n. 2, University of Florence-Azienda Ospedaliero Universitaria Careggi, Florence, Italy
- Stefano Colagrande, Department of Experimental and Clinical Biomedical Sciences, Radiodiagnostic Unit n. 2, University of Florence-Azienda Ospedaliero Universitaria Careggi, Florence, Italy
32
Sisniega A, Stayman JW, Yorkston J, Siewerdsen JH, Zbijewski W. Motion compensation in extremity cone-beam CT using a penalized image sharpness criterion. Phys Med Biol 2017; 62:3712-3734. [PMID: 28327471 PMCID: PMC5478238 DOI: 10.1088/1361-6560/aa6869]
Abstract
Cone-beam CT (CBCT) for musculoskeletal imaging would benefit from a method to reduce the effects of involuntary patient motion. In particular, the continuing improvement in spatial resolution of CBCT may enable tasks such as quantitative assessment of bone microarchitecture (0.1 mm-0.2 mm detail size), where even subtle, sub-mm motion blur might be detrimental. We propose a purely image-based motion compensation method that requires no fiducials, tracking hardware or prior images. A statistical optimization algorithm (CMA-ES) is used to estimate a motion trajectory that optimizes an objective function consisting of an image sharpness criterion augmented by a regularization term that encourages smooth motion trajectories. The objective function is evaluated using a volume of interest (VOI, e.g. a single bone and surrounding area) where the motion can be assumed to be rigid. More complex motions can be addressed by using multiple VOIs. Gradient variance was found to be a suitable sharpness metric for this application. The performance of the compensation algorithm was evaluated in simulated and experimental CBCT data, and in a clinical dataset. Motion-induced artifacts and blurring were significantly reduced across a broad range of motion amplitudes, from 0.5 mm to 10 mm. The structural similarity index (SSIM) against a static volume was used in the simulation studies to quantify the performance of the motion compensation. In studies with translational motion, the SSIM improved from 0.86 before compensation to 0.97 after compensation for 0.5 mm motion, from 0.8 to 0.94 for 2 mm motion, and from 0.52 to 0.87 for 10 mm motion (~70% increase). A similar reduction of artifacts was observed in a benchtop experiment with controlled translational motion of an anthropomorphic hand phantom, where SSIM (against a reconstruction of a static phantom) improved from 0.3 to 0.8 for 10 mm motion. Application to a clinical dataset of a lower extremity showed dramatic reduction of streaks and improvement in delineation of tissue boundaries and trabecular structures throughout the whole volume. The proposed method will support new applications of extremity CBCT in areas where patient motion may not be sufficiently managed by immobilization, such as imaging under load and quantitative assessment of subchondral bone architecture.
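The two ingredients of the objective described in this abstract (a gradient-variance sharpness term and a smoothness regularizer on the trajectory) can be sketched as follows. This is a minimal illustration in that spirit, not the paper's implementation: the second-difference roughness penalty, the weight `beta`, and the function names are assumptions.

```python
import numpy as np

def gradient_variance(vol):
    """Sharpness metric: variance of the gradient magnitude over a volume of
    interest. Sharper (motion-free) reconstructions concentrate strong edges,
    which raises this variance relative to a motion-blurred reconstruction."""
    gz, gy, gx = np.gradient(vol.astype(np.float64))
    gmag = np.sqrt(gx**2 + gy**2 + gz**2)
    return gmag.var()

def autofocus_objective(vol, trajectory, beta=0.1):
    """Illustrative penalized autofocus objective: image sharpness minus a
    roughness penalty on the motion trajectory (sum of squared second
    differences over an (N, 3) array of per-view displacements). A CMA-ES
    style optimizer would maximize this over candidate trajectories."""
    roughness = np.sum(np.diff(trajectory, n=2, axis=0) ** 2)
    return gradient_variance(vol) - beta * roughness
```

In the actual method the volume is re-reconstructed under each candidate trajectory before the metric is evaluated; the sketch only shows how the metric itself rewards sharp images and smooth motion.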
Affiliation(s)
- A. Sisniega, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- J. W. Stayman, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- J. H. Siewerdsen, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA; Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD 21205, USA
- W. Zbijewski, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
33
Sisniega A, Stayman JW, Cao Q, Yorkston J, Siewerdsen JH, Zbijewski W. Image-Based Motion Compensation for High-Resolution Extremities Cone-Beam CT. Proc SPIE Int Soc Opt Eng 2016; 9783. [PMID: 27346909 DOI: 10.1117/12.2217243]
Abstract
PURPOSE Cone-beam CT (CBCT) of the extremities provides high spatial resolution, but its quantitative accuracy may be challenged by involuntary sub-mm patient motion that cannot be eliminated with simple means of external immobilization. We investigate a two-step iterative motion compensation based on a multi-component metric of image sharpness. METHODS Motion is modeled as locally rigid within a particular region of interest, and the method supports application to multiple locally rigid regions. Motion is estimated by maximizing a cost function with three components: a gradient metric encouraging image sharpness, an entropy term that favors high contrast and penalizes streaks, and a penalty term encouraging smooth motion. Motion compensation involved initial coarse estimation of gross motion, followed by estimation of fine-scale displacements using high-resolution reconstructions. The method was evaluated in simulations with synthetic motion (1-4 mm) applied to a wrist volume obtained on a CMOS-based CBCT test bench. The structural similarity index (SSIM) quantified the agreement between motion-compensated and static data. The algorithm was also tested on a motion-contaminated patient scan from dedicated extremities CBCT. RESULTS Excellent correction was achieved for the investigated range of displacements, indicated by good visual agreement with the static data. A 10-15% improvement in SSIM was attained for 2-4 mm motions. The compensation was robust against increasing motion (4% decrease in SSIM across the investigated range, compared to 14% with no compensation). Consistent performance was achieved across a range of noise levels. Significant mitigation of artifacts was shown in patient data. CONCLUSION The results indicate the feasibility of image-based motion correction in extremities CBCT without the need for a priori motion models, external trackers, or fiducials.
Affiliation(s)
- A Sisniega, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- J W Stayman, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Q Cao, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- J H Siewerdsen, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
- W Zbijewski, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA