1. Lee J, Kim S, Ahn J, Wang AS, Baek J. X-ray CT metal artifact reduction using neural attenuation field prior. Med Phys 2025. [PMID: 40305006] [DOI: 10.1002/mp.17859]
Abstract
BACKGROUND The presence of metal objects in computed tomography (CT) imaging introduces severe artifacts that degrade image quality and hinder accurate diagnosis. While several deep learning-based metal artifact reduction (MAR) methods have been proposed, they often exhibit poor performance on unseen data and require large datasets to train neural networks. PURPOSE In this work, we propose a sinogram inpainting method for metal artifact reduction that leverages a neural attenuation field (NAF) as a prior. This new method, dubbed NAFMAR, operates in a self-supervised manner by optimizing a model-based neural field, thus eliminating the need for large training datasets. METHODS NAF is optimized to generate prior images, which are then used to inpaint metal traces in the original sinogram. To address the corruption of x-ray projections caused by metal objects, a 3D forward projection of the original corrupted image is performed to identify metal traces. Accordingly, NAF is optimized using a metal trace-masked ray sampling strategy that selectively utilizes uncorrupted rays to supervise the network. Moreover, a metal-aware loss function is proposed to prioritize metal-associated regions during optimization, thereby encouraging the network to learn more informative representations of anatomical features. After optimization, the NAF images are rendered to generate NAF prior images, which serve as priors to correct original projections through interpolation. Experiments are conducted to compare NAFMAR with other prior-based inpainting MAR methods. RESULTS The proposed method provides an accurate prior without requiring extensive datasets. Images corrected using NAFMAR showed sharp features and preserved anatomical structures.
Our comprehensive evaluation, involving simulated dental CT and clinical pelvic CT images, demonstrated the effectiveness of NAF prior compared to other prior information, including the linear interpolation and data-driven convolutional neural networks (CNNs). NAFMAR outperformed all compared baselines in terms of structural similarity index measure (SSIM) values, and its peak signal-to-noise ratio (PSNR) value was comparable to that of the dual-domain CNN method. CONCLUSIONS NAFMAR presents an effective, high-fidelity solution for metal artifact reduction in 3D tomographic imaging without the need for large datasets.
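For context, the linear-interpolation (LI) baseline that prior-based inpainting methods such as NAFMAR are compared against can be sketched in a few lines of NumPy. This is an illustrative simplification, not the paper's code; the function name and the 2-D views-by-detector-bins sinogram layout are assumptions:

```python
import numpy as np

def li_inpaint_sinogram(sinogram, metal_trace):
    """Baseline LI inpainting: for every projection view, replace detector
    bins flagged as metal trace with values linearly interpolated from the
    nearest uncorrupted bins in that view."""
    out = sinogram.astype(float).copy()
    bins = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):  # loop over projection views
        mask = metal_trace[v].astype(bool)
        if mask.any() and not mask.all():
            out[v, mask] = np.interp(bins[mask], bins[~mask], sinogram[v, ~mask])
    return out
```

Prior-based methods differ only in where the fill-in values come from: instead of interpolating the corrupted sinogram itself, they forward-project a prior image and read the metal-trace values from it.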
Affiliation(s)
- Jooho Lee: Department of Artificial Intelligence, Yonsei University, Seoul, Republic of Korea
- Seongjun Kim: School of Integrated Technology, Yonsei University, Seoul, Republic of Korea
- Junhyun Ahn: School of Integrated Technology, Yonsei University, Seoul, Republic of Korea
- Adam S Wang: Department of Radiology, Stanford University, California, USA
- Jongduk Baek: Department of Artificial Intelligence, Yonsei University, Seoul, Republic of Korea
2. Ghaznavi H, Maraghechi B, Zhang H, Zhu T, Laugeman E, Zhang T, Zhao T, Mazur TR, Darafsheh A. Quantitative use of cone-beam computed tomography in proton therapy: challenges and opportunities. Phys Med Biol 2025; 70:09TR01. [PMID: 40269645] [DOI: 10.1088/1361-6560/adc86c]
Abstract
The fundamental goal in radiation therapy (RT) is to simultaneously maximize tumor cell killing and healthy tissue sparing. Reducing uncertainty margins improves normal tissue sparing, but generally requires advanced techniques. Adaptive RT (ART) is a compelling technique that leverages daily imaging and anatomical information to support reduced margins and to optimize plan quality for each treatment fraction. An especially exciting avenue for ART is proton therapy (PT), which aims to combine daily plan re-optimization with the unique advantages provided by protons, including reduced integral dose and near-zero dose deposition distal to the target along the beam direction. A core component for ART is onboard image guidance, and currently two options are available on proton systems: cone-beam computed tomography (CBCT) and CT-on-rail (CToR) imaging. While CBCT suffers from poorer image quality compared to CToR imaging, CBCT platforms can be more easily integrated with PT systems and thus may support more streamlined adaptive proton therapy (APT). In this review, we present the current status of CBCT applications in proton therapy dose evaluation and plan adaptation, including progress, challenges, and future directions.
Affiliation(s)
- Hamid Ghaznavi: Department of Radiation Oncology, WashU Medicine, St. Louis, MO 63110, USA
- Borna Maraghechi: Department of Radiation Oncology, WashU Medicine, St. Louis, MO 63110, USA; Department of Radiation Oncology, City of Hope Cancer Center, Irvine, CA 92618, USA
- Hailei Zhang: Department of Radiation Oncology, WashU Medicine, St. Louis, MO 63110, USA
- Tong Zhu: Department of Radiation Oncology, WashU Medicine, St. Louis, MO 63110, USA
- Eric Laugeman: Department of Radiation Oncology, WashU Medicine, St. Louis, MO 63110, USA
- Tiezhi Zhang: Department of Radiation Oncology, WashU Medicine, St. Louis, MO 63110, USA
- Tianyu Zhao: Department of Radiation Oncology, WashU Medicine, St. Louis, MO 63110, USA
- Thomas R Mazur: Department of Radiation Oncology, WashU Medicine, St. Louis, MO 63110, USA
- Arash Darafsheh: Department of Radiation Oncology, WashU Medicine, St. Louis, MO 63110, USA
3. Li Y, Ma C, Li Z, Wang Z, Han J, Shan H, Liu J. Semi-supervised spatial-frequency transformer for metal artifact reduction in maxillofacial CT and evaluation with intraoral scan. Eur J Radiol 2025; 187:112087. [PMID: 40273758] [DOI: 10.1016/j.ejrad.2025.112087]
Abstract
PURPOSE To develop a semi-supervised domain adaptation technique for metal artifact reduction with a spatial-frequency transformer (SFTrans) model (Semi-SFTrans), and to quantitatively compare its performance with supervised models (Sup-SFTrans and ResUNet) and the traditional linear interpolation MAR method (LI) in oral and maxillofacial CT. METHODS Supervised models, including Sup-SFTrans and a state-of-the-art model termed ResUNet, were trained with paired simulated CT images, while the semi-supervised model, Semi-SFTrans, was trained with both paired simulated and unpaired clinical CT images. For evaluation on the simulated data, we calculated Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) on the images corrected by four methods: LI, ResUNet, Sup-SFTrans, and Semi-SFTrans. For evaluation on the clinical data, we collected twenty-two clinical cases with real metal artifacts, and the corresponding intraoral scan data. Three radiologists visually assessed the severity of artifacts using Likert scales on the original, Sup-SFTrans-corrected, and Semi-SFTrans-corrected images. Quantitative MAR evaluation was conducted by measuring Mean Hounsfield Unit (HU) values, standard deviations, and Signal-to-Noise Ratios (SNRs) across Regions of Interest (ROIs) such as the tongue, bilateral buccal, lips, and bilateral masseter muscles, using paired t-tests and Wilcoxon signed-rank tests. Further, teeth integrity in the corrected images was assessed by comparing teeth segmentation results from the corrected images against the ground-truth segmentation derived from registered intraoral scan data, using Dice Score and Hausdorff Distance. RESULTS Sup-SFTrans outperformed LI, ResUNet and Semi-SFTrans on the simulated dataset.
Visual assessments from the radiologists showed that average scores were (2.02 ± 0.91) for original CT, (4.46 ± 0.51) for Semi-SFTrans CT, and (3.64 ± 0.90) for Sup-SFTrans CT, with intraclass correlation coefficients (ICCs) > 0.8 for all groups and p < 0.001 between groups. On soft tissue, both Semi-SFTrans and Sup-SFTrans significantly reduced metal artifacts in the tongue (p < 0.001), lips, bilateral buccal regions, and masseter muscle areas (p < 0.05). Semi-SFTrans achieved greater metal artifact reduction than Sup-SFTrans in all ROIs (p < 0.001). SNR results indicated significant differences between Semi-SFTrans and Sup-SFTrans in the tongue (p = 0.0391), bilateral buccal (p = 0.0067), lips (p = 0.0208), and bilateral masseter muscle areas (p = 0.0031). Notably, Semi-SFTrans demonstrated better teeth integrity preservation than Sup-SFTrans (Dice Score: p < 0.001; Hausdorff Distance: p = 0.0022). CONCLUSION The semi-supervised MAR model, Semi-SFTrans, demonstrated superior metal artifact reduction performance over supervised counterparts in real dental CT images.
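The teeth-integrity metrics used in this evaluation, Dice score and Hausdorff distance, have standard definitions that can be sketched briefly. This is a minimal NumPy illustration, not the authors' implementation; the brute-force Hausdorff computation is only practical for small point sets:

```python
import numpy as np

def dice_score(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets of shape
    (N, 2) and (M, 2): the worst-case nearest-neighbor distance."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

In practice the point sets would be the surface voxels of the segmented teeth and of the registered intraoral-scan ground truth.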
Affiliation(s)
- Yuanlin Li: Department of Oral Maxillofacial Head and Neck Oncology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, College of Stomatology, Shanghai Jiao Tong University, National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai 200011, China
- Chenglong Ma: Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China
- Zilong Li: Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China
- Zhen Wang: Department of Oral Maxillofacial Head and Neck Oncology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, College of Stomatology, Shanghai Jiao Tong University, National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai 200011, China
- Jing Han: Department of Oral Maxillofacial Head and Neck Oncology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, College of Stomatology, Shanghai Jiao Tong University, National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai 200011, China
- Hongming Shan: Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China
- Jiannan Liu: Department of Oral Maxillofacial Head and Neck Oncology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, College of Stomatology, Shanghai Jiao Tong University, National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai 200011, China
4. Zhang H, Ma Z, Kang D, Yang M. A Beam Hardening Artifact Correction Method for CT Images Based on VGG Feature Extraction Networks. Sensors (Basel) 2025; 25:2088. [PMID: 40218600] [PMCID: PMC11991146] [DOI: 10.3390/s25072088]
Abstract
In X-ray industrial computed tomography (ICT) imaging, beam hardening artifacts significantly degrade the quality of reconstructed images, leading to cupping effects, ring artifacts, and reduced contrast resolution. These issues are particularly severe in high-density and irregularly shaped aerospace components, where accurate defect detection is critical. To mitigate beam hardening artifacts, this paper proposes a correction method based on the VGG16 feature extraction network. Successive convolutional layers automatically extract relevant features of beam hardening artifacts, establish a nonlinear mapping between artifact-affected and artifact-free images, and progressively enhance the model's ability to understand and represent complex image features through stacked layers. Then, a dataset of ICT images with beam hardening artifacts is constructed, and VGG16 is employed to extract deep features from both artifact-affected and reference images. By incorporating perceptual loss into a convolutional neural network and optimizing through iterative training, the proposed method effectively suppresses cupping artifacts and reduces edge blurring. Experimental results demonstrated that the method significantly enhanced image contrast, reduced image noise, and restored structural details, thereby improving the reliability of ICT imaging for aerospace applications.
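A perceptual loss of this kind compares feature maps rather than raw pixels. The sketch below is a toy NumPy analogue in which fixed convolution kernels stand in for a frozen VGG16 layer; it illustrates only the structure of the loss, and all names and kernels are hypothetical, not the paper's network:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive valid-mode 2-D correlation, standing in for one conv layer
    of a frozen feature extractor."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def perceptual_loss(pred, target, kernels):
    """MSE between feature maps of a fixed feature extractor, mimicking
    a VGG16 perceptual loss between corrected and reference CT slices."""
    loss = 0.0
    for k in kernels:
        fa, fb = conv2d_valid(pred, k), conv2d_valid(target, k)
        loss += np.mean((fa - fb) ** 2)
    return loss / len(kernels)
```

In the actual method, the kernels are the pretrained VGG16 weights and the loss is backpropagated through the correction network, not through the frozen extractor.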
Affiliation(s)
- Hong Zhang: School of Mechanical Engineering and Automation, Beihang University, Beijing 100191, China; Beijing Power Machinery Research Institute, Beijing 100074, China
- Zhaoguang Ma: Beijing Power Machinery Research Institute, Beijing 100074, China
- Da Kang: Beijing Power Machinery Research Institute, Beijing 100074, China
- Min Yang: School of Mechanical Engineering and Automation, Beihang University, Beijing 100191, China
5. Ma X, Zou M, Fang X, Luo G, Wang W, Dong S, Li X, Wang K, Dong Q, Tian Y, Li S. Convergent-Diffusion Denoising Model for multi-scenario CT Image Reconstruction. Comput Med Imaging Graph 2025; 120:102491. [PMID: 39787736] [DOI: 10.1016/j.compmedimag.2024.102491]
Abstract
A generic and versatile CT Image Reconstruction (CTIR) scheme can efficiently mitigate imaging noise resulting from inherent physical limitations, substantially bolstering the dependability of CT imaging diagnostics across a wider spectrum of patient cases. Current CTIR techniques often concentrate on distinct areas such as Low-Dose CT denoising (LDCTD), Sparse-View CT reconstruction (SVCTR), and Metal Artifact Reduction (MAR). Nevertheless, due to the intricate nature of multi-scenario CTIR, these techniques frequently narrow their focus to specific tasks, resulting in limited generalization capabilities for diverse scenarios. We propose a novel Convergent-Diffusion Denoising Model (CDDM) for multi-scenario CTIR, which utilizes a stepwise denoising process to converge toward an imaging-noise-free image with strong generalization. CDDM uses a diffusion-based process based on a priori decay distribution to steadily correct imaging noise, thus avoiding overfitting to individual samples. Within CDDM, a domain-correlated sampling network (DS-Net) provides an innovative sinogram-guided noise prediction scheme to leverage both image and sinogram (i.e., dual-domain) information. DS-Net analyzes the correlation of the dual-domain representations for sampling the noise distribution, introducing sinogram semantics to avoid secondary artifacts. Experimental results validate the practical applicability of our scheme across various CTIR scenarios, including LDCTD, MAR, and SVCTR, with the support of sinogram knowledge.
Affiliation(s)
- Xinghua Ma: The Faculty of Computing, Harbin Institute of Technology, Harbin, Heilongjiang, China; The Computational Bioscience Research Center, King Abdullah University of Science and Technology, Thuwal, Makkah, Saudi Arabia
- Mingye Zou: The Faculty of Computing, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Xinyan Fang: The Faculty of Computing, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Gongning Luo: The Faculty of Computing, Harbin Institute of Technology, Harbin, Heilongjiang, China; The Computational Bioscience Research Center, King Abdullah University of Science and Technology, Thuwal, Makkah, Saudi Arabia
- Wei Wang: The Faculty of Computing, Harbin Institute of Technology, Shenzhen, Guangdong, China
- Suyu Dong: The College of Computer and Control Engineering, Northeast Forestry University, Harbin, Heilongjiang, China
- Xiangyu Li: The Faculty of Computing, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Kuanquan Wang: The Faculty of Computing, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Qing Dong: The Department of Thoracic Surgery at No. 4 Affiliated Hospital, Harbin Medical University, Harbin, Heilongjiang, China
- Ye Tian: The Department of Cardiology at No. 1 Affiliated Hospital, Harbin Medical University, Harbin, Heilongjiang, China
- Shuo Li: The Department of Computer and Data Science, Case Western Reserve University, Cleveland, OH, USA; The Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
6. Amadita K, Gray F, Gee E, Ekpo E, Jimenez Y. CT metal artefact reduction for hip and shoulder implants using novel algorithms and machine learning: A systematic review with pairwise and network meta-analyses. Radiography (Lond) 2025; 31:36-52. [PMID: 39509906] [DOI: 10.1016/j.radi.2024.10.009]
Abstract
INTRODUCTION Many tools have been developed to reduce metal artefacts in computed tomography (CT) images resulting from metallic prostheses; however, their relative effectiveness in preserving image quality is poorly understood. This paper reviews the literature on novel metal artefact reduction (MAR) methods targeting large metal artefacts in fan-beam CT to examine their effectiveness in reducing metal artefacts and their effect on image quality. METHODS The PRISMA checklist was used to search for articles in five electronic databases (MEDLINE, Scopus, Web of Science, IEEE, EMBASE). Studies that assessed the effectiveness of recently developed MAR methods on fan-beam CT images of hip and shoulder implants were reviewed. Study quality was assessed using the National Institutes of Health (NIH) tool. Meta-analyses were conducted in R, and results that could not be meta-analysed were synthesised narratively. RESULTS Thirty-six studies were reviewed. Of these, 20 studies proposed statistical algorithms and 16 used machine learning (ML), and there were 19 novel comparators. Network meta-analysis of 19 studies showed that Recurrent Neural Network MAR (RNN-MAR) is more effective in reducing noise (LogOR 20.7; 95% CI 12.6 to 28.9) without compromising image quality (LogOR 4.4; 95% CI -13.8 to 22.5). The network meta-analysis and narrative synthesis showed novel MAR methods reduce noise more effectively than baseline algorithms, with five out of 23 ML methods significantly more effective than Filtered Back Projection (FBP) (p < 0.05). Computation time varied, but ML methods were faster than statistical algorithms. CONCLUSION ML tools are more effective in reducing metal artefacts without compromising image quality and are computationally faster than statistical algorithms. Overall, novel MAR methods were also more effective in reducing noise than the baseline reconstructions.
IMPLICATIONS FOR PRACTICE Implementation research is needed to establish the clinical suitability of ML MAR in practice.
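The pooled effect sizes reported above are log odds ratios with 95% confidence intervals. For a single 2x2 table the standard computation is as follows; a minimal sketch of the textbook formula, not the review's R analysis:

```python
import math

def log_odds_ratio_ci(a, b, c, d, z=1.96):
    """Log odds ratio and Wald 95% CI from a 2x2 table, where a/b are
    events/non-events in group 1 and c/d in group 2.
    LogOR = ln(ad/bc); SE = sqrt(1/a + 1/b + 1/c + 1/d)."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or, (log_or - z * se, log_or + z * se)
```

A network meta-analysis then combines such pairwise estimates across studies so that methods never compared head-to-head can still be ranked through common comparators.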
Affiliation(s)
- K Amadita: Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, NSW 2006, Australia
- F Gray: Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, NSW 2006, Australia
- E Gee: Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, NSW 2006, Australia
- E Ekpo: Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, NSW 2006, Australia
- Y Jimenez: Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, NSW 2006, Australia
7. Ramachandran P, Anderson D, Colbert Z, Arrington D, Huo M, Pinkham MB, Foote M, Fielding A. Enhancing Gamma Knife Cone-beam Computed Tomography Image Quality Using Pix2pix Generative Adversarial Networks: A Deep Learning Approach. J Med Phys 2025; 50:30-37. [PMID: 40256180] [PMCID: PMC12005652] [DOI: 10.4103/jmp.jmp_140_24]
Abstract
Aims The study aims to develop a modified Pix2Pix convolutional neural network framework to enhance the quality of cone-beam computed tomography (CBCT) images. It also seeks to reduce the Hounsfield unit (HU) variations, making CBCT images closely resemble the internal anatomy as depicted in computed tomography (CT) images. Materials and Methods We used datasets from 50 patients who underwent Gamma Knife treatment to develop a deep learning model that translates CBCT images into high-quality synthetic CT (sCT) images. Paired CBCT and ground truth CT images from 40 patients were used for training and 10 for testing on 7484 slices of 512 × 512 pixels with the Pix2Pix model. The sCT images were evaluated against ground truth CT scans using image quality assessment metrics, including the structural similarity index (SSIM), mean absolute error (MAE), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), normalized cross-correlation, and dice similarity coefficient. Results The results demonstrate significant improvements in image quality when comparing sCT images to CBCT, with SSIM increasing from 0.85 ± 0.05 to 0.95 ± 0.03 and MAE dropping from 77.37 ± 20.05 to 18.81 ± 7.22 (p < 0.0001 for both). PSNR and RMSE also improved, from 26.50 ± 1.72 to 30.76 ± 2.23 and 228.52 ± 53.76 to 82.30 ± 23.81, respectively (p < 0.0001). Conclusion The sCT images show reduced noise and artifacts, closely matching CT in HU values, and demonstrate a high degree of similarity to CT images, highlighting the potential of deep learning to significantly improve CBCT image quality for radiosurgery applications.
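The scalar image-quality metrics reported above have standard definitions that can be computed directly. A minimal NumPy sketch of MAE, RMSE, and PSNR (SSIM requires windowed local statistics and is omitted; `data_range` is the assumed image dynamic range, e.g. the HU span):

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images."""
    return np.mean(np.abs(a - b))

def rmse(a, b):
    """Root mean square error between two images."""
    return np.sqrt(np.mean((a - b) ** 2))

def psnr(a, b, data_range):
    """Peak signal-to-noise ratio in dB: 10 log10(R^2 / MSE)."""
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Here `a` would be the synthetic CT slice and `b` the registered ground-truth CT slice.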
Affiliation(s)
- Prabhakar Ramachandran: Department of Radiation Oncology, Cancer Services, Princess Alexandra Hospital, Queensland, Australia; School of Chemistry and Physics, Queensland University of Technology, Brisbane, Queensland, Australia
- Darcie Anderson: School of Chemistry and Physics, Queensland University of Technology, Brisbane, Queensland, Australia
- Zachery Colbert: Department of Radiation Oncology, Cancer Services, Princess Alexandra Hospital, Queensland, Australia
- Daniel Arrington: Department of Radiation Oncology, Cancer Services, Princess Alexandra Hospital, Queensland, Australia
- Michael Huo: Department of Radiation Oncology, Cancer Services, Princess Alexandra Hospital, Queensland, Australia
- Mark B Pinkham: Department of Radiation Oncology, Cancer Services, Princess Alexandra Hospital, Queensland, Australia
- Matthew Foote: Department of Radiation Oncology, Cancer Services, Princess Alexandra Hospital, Queensland, Australia
- Andrew Fielding: School of Chemistry and Physics, Queensland University of Technology, Brisbane, Queensland, Australia
8. Zhu S, Zhang B, Tian Q, Li A, Liu Z, Hou W, Zhao W, Huang X, Xiao Y, Wang Y, Wang R, Li Y, Yang J, Jin C. Reduced-dose deep learning iterative reconstruction for abdominal computed tomography with low tube voltage and tube current. BMC Med Inform Decis Mak 2024; 24:389. [PMID: 39696218] [DOI: 10.1186/s12911-024-02811-w]
Abstract
BACKGROUND The low tube-voltage technique (e.g., 80 kV) can efficiently reduce the radiation dose and increase the contrast enhancement of vascular and parenchymal structures in abdominal CT. However, a high tube current is always required in this setting and limits the dose reduction potential. This study investigated the feasibility of a deep learning iterative reconstruction algorithm (Deep IR) in reducing the radiation dose while improving the image quality for abdominal computed tomography (CT) with low tube voltage and current. METHODS Sixty patients (male/female, 36/24; age, 57.72 ± 10.19 years) undergoing the abdominal portal venous phase CT were randomly divided into groups A (100 kV, automatic exposure control [AEC] with reference tube-current of 213 mAs) and B (80 kV, AEC with reference tube-current of 130 mAs). Images were reconstructed via hybrid iterative reconstruction (HIR) and Deep IR (levels 1-5). The mean CT and standard deviation (SD) values of four regions of interest (ROIs), i.e., liver, spleen, main portal vein, and erector spinae at the porta hepatis level in each image series were measured, and the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. The image quality was subjectively scored by two radiologists using a 5-point criterion. RESULTS A significant reduction in the radiation dose of 69.94% (5.09 ± 0.91 mSv vs. 1.53 ± 0.37 mSv) was detected in Group B compared with Group A. After application of the Deep IR, there was no significant change in the CT value, but the SD gradually increased. Group B had higher CT values than Group A, and the portal vein CT values significantly differed between the groups (P < 0.003). The SNR and CNR in Group B with Deep IR at levels 1-5 were greater than those in Group A and significantly differed between HIR and Deep IR at levels 1-3 (P < 0.003).
The subjective scores (distortion, clarity of the portal vein, visibility of small structures and overall image quality) with Deep IR at levels 4-5 in Group B were significantly higher than those in group A with HIR (P < 0.003). CONCLUSION Deep IR algorithm can meet the clinical requirements and reduce the radiation dose by 69.94% in portal venous phase abdominal CT with a low tube voltage of 80 kV and a low tube current. Deep IR at levels 4-5 can significantly improve the image quality of the abdominal parenchymal organs and the clarity of the portal vein.
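The ROI-based SNR and CNR used in this evaluation follow standard definitions; a minimal sketch, assuming SNR is the ROI mean divided by its SD and CNR contrasts an ROI against a background ROI (here the erector spinae would serve as background):

```python
import numpy as np

def roi_snr(roi):
    """Signal-to-noise ratio of an ROI: mean HU divided by its SD."""
    return np.mean(roi) / np.std(roi)

def roi_cnr(roi, background):
    """Contrast-to-noise ratio: absolute mean-HU difference between an
    ROI and a background ROI, divided by the background SD."""
    return abs(np.mean(roi) - np.mean(background)) / np.std(background)
```

Exact SNR/CNR conventions vary between studies (e.g., which region supplies the noise estimate), so these formulas are one common choice rather than the paper's stated definition.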
Affiliation(s)
- Shumeng Zhu: Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, P. R. China; Shaanxi Engineering Research Center of Computational Imaging and Medical Intelligence, Xi'an, 710061, P. R. China
- Baoping Zhang: Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, P. R. China; Shaanxi Engineering Research Center of Computational Imaging and Medical Intelligence, Xi'an, 710061, P. R. China
- Qian Tian: Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, P. R. China; Shaanxi Engineering Research Center of Computational Imaging and Medical Intelligence, Xi'an, 710061, P. R. China
- Ao Li: Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, P. R. China; Shaanxi Engineering Research Center of Computational Imaging and Medical Intelligence, Xi'an, 710061, P. R. China
- Zhe Liu: Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, P. R. China; Shaanxi Engineering Research Center of Computational Imaging and Medical Intelligence, Xi'an, 710061, P. R. China
- Wei Hou: Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, P. R. China; Shaanxi Engineering Research Center of Computational Imaging and Medical Intelligence, Xi'an, 710061, P. R. China
- Wenzhe Zhao: Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, P. R. China; Shaanxi Engineering Research Center of Computational Imaging and Medical Intelligence, Xi'an, 710061, P. R. China
- Xin Huang: Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, P. R. China; Shaanxi Engineering Research Center of Computational Imaging and Medical Intelligence, Xi'an, 710061, P. R. China
- Yao Xiao: Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, P. R. China; Shaanxi Engineering Research Center of Computational Imaging and Medical Intelligence, Xi'an, 710061, P. R. China
- Yiming Wang: Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, P. R. China; Shaanxi Engineering Research Center of Computational Imaging and Medical Intelligence, Xi'an, 710061, P. R. China
- Rui Wang: Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, P. R. China; Shaanxi Engineering Research Center of Computational Imaging and Medical Intelligence, Xi'an, 710061, P. R. China
- Yuhang Li: Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, P. R. China; Shaanxi Engineering Research Center of Computational Imaging and Medical Intelligence, Xi'an, 710061, P. R. China
- Jian Yang: Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, P. R. China; Shaanxi Engineering Research Center of Computational Imaging and Medical Intelligence, Xi'an, 710061, P. R. China
- Chao Jin: Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, P. R. China; Shaanxi Engineering Research Center of Computational Imaging and Medical Intelligence, Xi'an, 710061, P. R. China
9. Cai T, Li X, Zhong C, Tang W, Guo J. DiffMAR: A Generalized Diffusion Model for Metal Artifact Reduction in CT Images. IEEE J Biomed Health Inform 2024; 28:6712-6724. [PMID: 39110557] [DOI: 10.1109/jbhi.2024.3439729]
Abstract
X-ray imaging frequently introduces varying degrees of metal artifacts to computed tomography (CT) images when metal implants are present. For the metal artifact reduction (MAR) task, existing end-to-end methods often exhibit limited generalization capabilities, while methods based on multiple iterations often suffer from accumulative error, resulting in lower-quality restoration outcomes. In this work, we present a generalized diffusion model for Metal Artifact Reduction (DiffMAR). The proposed method utilizes a linear degradation process to simulate the physical phenomenon of metal artifact formation in CT images and directly learns an iterative restoration process from paired CT images in the reverse process. During the reverse process of DiffMAR, a Time-Latent Adjustment (TLA) module is designed to adjust time embedding at the latent level, thereby minimizing the accumulative error during iterative restoration. We also designed a structure information extraction (SIE) module to utilize linear interpolation data in the image domain, guiding the generation of anatomical structures during iterative restoration. This leads to more accurate and robust shadow-free image generation. Comprehensive analysis, including both synthesized data and clinical evidence, confirms that our proposed method surpasses the current state-of-the-art (SOTA) MAR methods in terms of both image generation quality and generalization.
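A forward process of this kind degrades a clean image step by step toward its artifact-corrupted counterpart, and the network learns to reverse it. The sketch below is a simplified analogue of such a linear degradation schedule, not the paper's exact formulation; the mixing weights and noise scale are assumptions for illustration:

```python
import numpy as np

def forward_degrade(x0, artifact_img, t, T):
    """Simplified linear degradation: at step t, the image is a convex
    mix of the clean image x0 and the fully artifact-corrupted image,
    plus a small amount of Gaussian noise."""
    alpha = 1.0 - t / T  # fraction of clean signal remaining at step t
    noise = 0.01 * np.random.default_rng(0).standard_normal(x0.shape)
    return alpha * x0 + (1.0 - alpha) * artifact_img + noise
```

At t = 0 the sample is (up to noise) the clean image; at t = T it is the corrupted one. Training pairs each intermediate state with its restoration target, so inference can start from a real artifact-affected image and iterate back toward t = 0.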
10
Piol A, Sanderson D, del Cerro CF, Lorente-Mur A, Desco M, Abella M. Hybrid Reconstruction Approach for Polychromatic Computed Tomography in Highly Limited-Data Scenarios. Sensors (Basel) 2024; 24:6782. [PMID: 39517679 PMCID: PMC11548251 DOI: 10.3390/s24216782] [Received: 07/30/2024] [Revised: 10/10/2024] [Accepted: 10/15/2024] [Indexed: 11/16/2024]
Abstract
Conventional strategies for mitigating beam-hardening artifacts in computed tomography (CT) fall into two main approaches: (1) postprocessing after conventional reconstruction and (2) iterative reconstruction incorporating a beam-hardening model. The former fails in low-dose and/or limited-data cases, while the latter substantially increases computational cost. Although deep learning-based methods have been proposed for several limited-data CT settings, few works in the literature have dealt with beam-hardening artifacts, and none have addressed the problems caused by randomly selected projections and a highly limited angular span. We propose the deep learning-based prior image constrained (PICDL) framework, a hybrid method that yields CT images free from beam-hardening artifacts in different limited-data scenarios. It combines a modified version of the Prior Image Constrained Compressed Sensing (PICCS) algorithm that uses the L2 norm (L2-PICCS) with a prior image generated by applying a deep learning (DL) model to a preliminary FDK reconstruction. The DL model is a modified U-Net architecture with ResNet-34 replacing the original encoder. Evaluation on rodent-head studies from a small-animal CT scanner showed that the proposed method corrected beam-hardening artifacts, recovered patient contours, and compensated for streak and deformation artifacts in scenarios with a limited span and a limited number of randomly selected projections. Hallucinations introduced into the prior image by the deep learning model were eliminated, while the target information was effectively recovered by the L2-PICCS algorithm.
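The abstract does not spell out the L2-PICCS objective. Under the standard PICCS formulation, with the sparsifying penalty taken in the L2 norm and the DL output used as the prior image, it would plausibly read as follows (our reconstruction, under stated assumptions, not the authors' exact formula):

```latex
\min_{x}\;\alpha\,\bigl\lVert \Psi\,(x - x_{p}) \bigr\rVert_{2}^{2}
        + (1-\alpha)\,\bigl\lVert \Psi\, x \bigr\rVert_{2}^{2}
\quad\text{s.t.}\quad \lVert A x - y \rVert_{2}^{2} \le \varepsilon^{2}
```

where $A$ is the forward projector, $y$ the measured projections, $\Psi$ a sparsifying transform, $x_{p}$ the DL-generated prior image, and $\alpha \in [0,1]$ weights fidelity to the prior against plain regularization.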
Affiliation(s)
- Alessandro Piol
  - Bioengineering Department, Universidad Carlos III de Madrid, 28911 Leganes, Spain
  - Department of Information Engineering, University of Brescia, Via Branze 38, 25123 Brescia, Italy
- Daniel Sanderson
  - Bioengineering Department, Universidad Carlos III de Madrid, 28911 Leganes, Spain
  - Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
- Carlos F. del Cerro
  - Bioengineering Department, Universidad Carlos III de Madrid, 28911 Leganes, Spain
  - Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
- Antonio Lorente-Mur
  - Bioengineering Department, Universidad Carlos III de Madrid, 28911 Leganes, Spain
  - Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
- Manuel Desco
  - Bioengineering Department, Universidad Carlos III de Madrid, 28911 Leganes, Spain
  - Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
  - Centro Nacional de Investigaciones Cardiovasculares Carlos III (CNIC), 28029 Madrid, Spain
  - Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), 28029 Madrid, Spain
- Mónica Abella
  - Bioengineering Department, Universidad Carlos III de Madrid, 28911 Leganes, Spain
  - Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
  - Centro Nacional de Investigaciones Cardiovasculares Carlos III (CNIC), 28029 Madrid, Spain
11
Liu X, Xie Y, Diao S, Tan S, Liang X. Unsupervised CT Metal Artifact Reduction by Plugging Diffusion Priors in Dual Domains. IEEE Trans Med Imaging 2024; 43:3533-3545. [PMID: 38194400 DOI: 10.1109/tmi.2024.3351201] [Indexed: 01/11/2024]
Abstract
During computed tomography (CT), metallic implants often cause disruptive artifacts in the reconstructed images, impeding accurate diagnosis. Many supervised deep learning-based approaches have been proposed for metal artifact reduction (MAR). However, these methods rely heavily on training with paired simulated data, which are challenging to acquire, and this limitation can degrade their performance in clinical practice. Existing unsupervised MAR methods, whether learning-based or not, typically work within a single domain, either the image domain or the sinogram domain. In this paper, we propose an unsupervised MAR method based on the diffusion model, a generative model with a high capacity to represent data distributions. Specifically, we first train a diffusion model on CT images without metal artifacts. We then iteratively apply the diffusion priors in both the sinogram domain and the image domain to restore the portions degraded by metal artifacts, and we design temporally dynamic weight masks for the image-domain fusion. The dual-domain processing enables our approach to outperform existing unsupervised MAR methods, including another diffusion-model-based MAR method. The effectiveness has been validated qualitatively and quantitatively on synthetic datasets. Moreover, our method demonstrates superior visual results over both supervised and unsupervised methods on clinical datasets. Code is available at github.com/DeepXuan/DuDoDp-MAR.
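The image-domain fusion with temporally dynamic weight masks can be illustrated with a minimal sketch. The abstract does not give the actual mask schedule, so the linear time dependence and all names below are our assumptions, not the paper's implementation:

```python
import numpy as np

def fuse_image_domain(x_diff, x_input, metal_mask, t, t_max):
    """Blend a diffusion-restored image with the artifact-affected input.

    A time-dependent weight trusts the diffusion prior only inside the
    metal-affected region, and increasingly so at late reverse steps
    (small t). The linear schedule here is purely illustrative.
    """
    w = metal_mask * (1.0 - t / t_max)   # weight grows as t -> 0
    return w * x_diff + (1.0 - w) * x_input
```

Outside the metal mask the measured image passes through unchanged, which is what keeps uncorrupted anatomy intact while the prior fills in the degraded region.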
12
Karageorgos GM, Zhang J, Peters N, Xia W, Niu C, Paganetti H, Wang G, De Man B. A Denoising Diffusion Probabilistic Model for Metal Artifact Reduction in CT. IEEE Trans Med Imaging 2024; 43:3521-3532. [PMID: 38963746 PMCID: PMC11657996 DOI: 10.1109/tmi.2024.3416398] [Indexed: 07/06/2024]
Abstract
The presence of metal objects leads to corrupted CT projection measurements, resulting in metal artifacts in the reconstructed CT images. AI promises improved solutions for estimating missing sinogram data for metal artifact reduction (MAR), as previously shown with convolutional neural networks (CNNs) and generative adversarial networks (GANs). Recently, denoising diffusion probabilistic models (DDPMs) have shown great promise in image-generation tasks, potentially outperforming GANs. In this study, a DDPM-based approach is proposed for inpainting missing sinogram data for improved MAR. The proposed model is trained unconditionally, free from information on metal objects, which can potentially enhance its generalization across different types of metal implants compared with conditionally trained approaches. The proposed technique was evaluated against the state-of-the-art normalized MAR (NMAR) approach as well as CNN-based and GAN-based MAR approaches, and provided significantly higher SSIM and PSNR than all three (p < 0.05 for each comparison). The DDPM-MAR technique was further evaluated on clinically relevant image-quality metrics using clinical CT images with virtually introduced metal objects and metal artifacts, demonstrating superior quality relative to the other three models. In general, the AI-based techniques showed improved MAR performance compared with the non-AI-based NMAR approach. The proposed methodology shows promise in enhancing the effectiveness of MAR and thereby improving the diagnostic accuracy of CT.
13
McKeown T, Gach HM, Hao Y, An H, Robinson CG, Cuculich PS, Yang D. Small metal artifact detection and inpainting in cardiac CT images. arXiv 2024; arXiv:2409.17342v1. [PMID: 39398205 PMCID: PMC11469418] [Indexed: 10/15/2024]
Abstract
Background Quantification of cardiac motion on pre-treatment CT imaging for stereotactic arrhythmia radiotherapy patients is difficult due to the presence of image artifacts caused by metal leads of implantable cardioverter-defibrillators (ICDs). The CT scanners' onboard metal artifact reduction tool does not sufficiently reduce these artifacts. More advanced artifact reduction techniques require the raw CT projection data and thus are not applicable to already reconstructed CT images. New methods are needed to accurately reduce the metal artifacts in already reconstructed CTs to recover the otherwise lost anatomical information. Purpose To develop a methodology to automatically detect metal artifacts in cardiac CT scans and inpaint the affected volume with anatomically consistent structures and values. Methods Breath-hold ECG-gated 4DCT scans of 12 patients who underwent cardiac radiation therapy for treating ventricular tachycardia were collected. The metal artifacts in the images caused by the ICD leads were manually contoured. A 2D U-Net deep learning (DL) model was developed to segment the metal artifacts automatically using eight patients for training, two for validation, and two for testing. A dataset of 592 synthetic CTs was prepared by adding segmented metal artifacts from the patient 4DCT images to artifact-free cardiac CTs of 148 patients. A 3D image inpainting DL model was trained to refill the metal artifact portion in the synthetic images with realistic image contents that approached the ground truth artifact-free images. The trained inpainting model was evaluated by analyzing the automated segmentation results of the four heart chambers with and without artifacts on the synthetic dataset. Additionally, the raw cardiac patient images with metal artifacts were processed using the inpainting model and the results of metal artifact reduction were qualitatively inspected. 
Results The artifact detection model performed well, producing a Dice score of 0.958 ± 0.008. For the synthesized cases, the inpainting model recreated images nearly identical to the ground truth, with a structural similarity index of 0.988 ± 0.012. With the chamber segmentations on the artifact-free images as the reference, the average surface Dice score improved from 0.684 ± 0.247 to 0.964 ± 0.067 and the Hausdorff distance decreased from 3.4 ± 3.9 mm to 0.7 ± 0.7 mm. The inpainting model was also applied to the cardiac patient CTs; on visual inspection, the artifact-inpainted images were plausible. Conclusion We successfully developed two deep models to detect and inpaint metal artifacts in cardiac CT images. These models are useful for improving heart chamber segmentation and cardiac motion analysis in CT images corrupted by metal artifacts. The trained models and example data are publicly available through GitHub.
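For reference, the Dice score used to evaluate the detection model above is simply twice the overlap divided by the summed sizes of the two masks; a minimal NumPy version:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a).astype(bool), np.asarray(b).astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

The surface Dice variant reported in the paper applies the same formula to boundary voxels within a tolerance rather than to whole volumes.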
Affiliation(s)
- H. Michael Gach
  - Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis
  - Department of Radiology, School of Medicine, Washington University in Saint Louis
  - Department of Biomedical Engineering, Washington University in Saint Louis
- Yao Hao
  - Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis
- Hongyu An
  - Department of Radiology, School of Medicine, Washington University in Saint Louis
  - Department of Biomedical Engineering, Washington University in Saint Louis
- Clifford G. Robinson
  - Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis
- Phillip S. Cuculich
  - Department of Cardiology, School of Medicine, Washington University in Saint Louis
- Deshan Yang
  - Department of Radiation Oncology, Duke University
14
Kleber CEJ, Karius R, Naessens LE, Van Toledo CO, van Osch JAC, Boomsma MF, Heemskerk JWT, van der Molen AJ. Advancements in supervised deep learning for metal artifact reduction in computed tomography: A systematic review. Eur J Radiol 2024; 181:111732. [PMID: 39265203 DOI: 10.1016/j.ejrad.2024.111732] [Received: 07/01/2024] [Revised: 08/23/2024] [Accepted: 09/05/2024] [Indexed: 09/14/2024]
Abstract
BACKGROUND Metal artefacts caused by metal implants are a common problem in computed tomography (CT) imaging, degrading image quality and diagnostic accuracy. With advancements in artificial intelligence, novel deep learning (DL)-based metal artefact reduction (MAR) algorithms are entering clinical practice. OBJECTIVE This systematic review provides an overview of the performance of current supervised DL-based MAR algorithms for CT, focusing on three domains: sinogram, image, and dual domain. METHODS A literature search was conducted in PubMed, EMBASE, Web of Science, and Scopus. Outcomes were assessed using peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), or any other objective measure comparing MAR performance to uncorrected images. RESULTS After screening, fourteen studies were selected that compared DL-based MAR algorithms with uncorrected images. The MAR algorithms were categorised into the three domains. Thirteen MAR algorithms showed higher PSNR and SSIM values than both the uncorrected images and non-DL MAR algorithms. One study showed statistically significantly better MAR performance on clinical data, compared with the uncorrected images and non-DL MAR algorithms, based on Hounsfield unit calculations. CONCLUSION DL MAR algorithms show promising results in reducing metal artefacts, but standardised methodologies are needed to evaluate DL-based MAR algorithms on clinical data and improve comparability between algorithms. CLINICAL RELEVANCE STATEMENT Recent studies highlight the effectiveness of supervised deep learning-based MAR algorithms in improving CT image quality by reducing metal artefacts in the sinogram, image, and dual domains. A systematic review is needed to provide an overview of the newly developed algorithms.
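PSNR, the headline metric in these comparisons, is a log-scaled mean-squared error; a minimal NumPy implementation is below. Note that the choice of `data_range` must match between studies for values to be comparable, which is one concrete reason the review calls for standardised methodologies:

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    if data_range is None:
        # Default to the reference image's dynamic range
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```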
Affiliation(s)
- Cecile E J Kleber
  - Department of Clinical Technology, Faculty of Mechanical Engineering, Delft University of Technology, Delft, the Netherlands
- Ramez Karius
  - Department of Clinical Technology, Faculty of Mechanical Engineering, Delft University of Technology, Delft, the Netherlands
- Lucas E Naessens
  - Department of Clinical Technology, Faculty of Mechanical Engineering, Delft University of Technology, Delft, the Netherlands
- Coen O Van Toledo
  - Department of Clinical Technology, Faculty of Mechanical Engineering, Delft University of Technology, Delft, the Netherlands
- Jan W T Heemskerk
  - Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Aart J van der Molen
  - Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
15
Zhang J, Mao H, Chang D, Yu H, Wu W, Shen D. Adaptive and Iterative Learning With Multi-Perspective Regularizations for Metal Artifact Reduction. IEEE Trans Med Imaging 2024; 43:3354-3365. [PMID: 38687653 DOI: 10.1109/tmi.2024.3395348] [Indexed: 05/02/2024]
Abstract
Metal artifact reduction (MAR) is important for clinical diagnosis with CT images. Existing state-of-the-art deep learning methods usually suppress metal artifacts in the sinogram domain, the image domain, or both. However, their performance is limited by the inherent characteristics of the two domains: errors introduced by local manipulations in the sinogram domain propagate throughout the whole image during backprojection and lead to serious secondary artifacts, while in the image domain it is difficult to distinguish artifacts from actual image features. To alleviate these limitations, this study analyzes the desirable properties of the wavelet transform in depth and proposes performing MAR in the wavelet domain. First, the wavelet transform yields components that retain spatial correspondence with the image, preventing local errors from spreading and thereby avoiding secondary artifacts. Second, the wavelet transform facilitates separating artifacts from the image, since metal artifacts are mainly high-frequency signals. Exploiting these advantages, this paper decomposes an image into multiple wavelet components and introduces multi-perspective regularizations into the proposed MAR model. To improve the transparency and validity of the model, all modules in the proposed MAR model are designed to reflect their mathematical meanings. In addition, an adaptive wavelet module is used to enhance the flexibility of the model. An iterative algorithm is developed to optimize the model. Evaluation on both synthetic and real clinical datasets consistently confirms the superior performance of the proposed method over the competing methods.
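The two properties the authors exploit, spatial correspondence of sub-bands and concentration of artifacts in high frequencies, are already visible in a single-level Haar decomposition. A minimal NumPy sketch (orthonormal Haar, even-sized input assumed; this illustrates the general transform, not the paper's adaptive wavelet module):

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar decomposition into LL, LH, HL, HH sub-bands.

    Each sub-band pixel corresponds to one 2x2 block of the input, so the
    sub-bands keep spatial correspondence with the image; streak-like
    metal artifacts concentrate in the detail bands (LH, HL, HH).
    """
    img = np.asarray(img, dtype=np.float64)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low-frequency approximation
    lh = (a - b + c - d) / 2.0   # column-difference detail
    hl = (a + b - c - d) / 2.0   # row-difference detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh
```

With this normalization the transform is orthonormal, so the total signal energy is preserved across the four sub-bands.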
16
Xie K, Gao L, Zhang Y, Zhang H, Sun J, Lin T, Sui J, Ni X. Metal implant segmentation in CT images based on diffusion model. BMC Med Imaging 2024; 24:204. [PMID: 39107679 PMCID: PMC11301972 DOI: 10.1186/s12880-024-01379-1] [Received: 06/17/2024] [Accepted: 07/25/2024] [Indexed: 08/10/2024]
Abstract
BACKGROUND Computed tomography (CT) is widely used in clinics and is affected by metal implants. Metal segmentation is crucial for metal artifact correction, and the common threshold method often fails to segment metals accurately. PURPOSE This study aims to segment metal implants in CT images using a diffusion model and to validate the approach on clinical artifact images and phantom images of known size. METHODS A retrospective study was conducted on 100 patients who received radiation therapy and had no metal artifacts; simulated artifact data were generated using publicly available mask data. The study used 11,280 slices for training and validation and 2,820 slices for testing. Metal mask segmentation was performed using DiffSeg, a diffusion model incorporating conditional dynamic coding and a global frequency parser (GFParser). Conditional dynamic coding fuses the current segmentation mask and prior images at multiple scales, while GFParser helps eliminate high-frequency noise in the mask. Clinical artifact images and phantom images were also used for model validation. RESULTS Compared with the ground truth, DiffSeg achieved an accuracy of 97.89% and a DSC of 95.45% for metal segmentation on simulated data. The mask shapes obtained by threshold segmentation covered the ground truth, with DSCs of 82.92% and 84.19% for thresholds of 2500 HU and 3000 HU, respectively. Evaluation metrics and visualization results show that DiffSeg performs better than other classical deep learning networks, especially on clinical CT artifact data and phantom data. CONCLUSION DiffSeg efficiently and robustly segments metal masks in artifact data using conditional dynamic coding and GFParser. Future work will embed the metal segmentation model in a metal artifact reduction pipeline to improve the correction effect.
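The threshold baseline that DiffSeg is compared against (the 2500 HU and 3000 HU cutoffs) amounts to a one-line mask. A sketch of that baseline, to make the comparison concrete; the function name is ours:

```python
import numpy as np

def threshold_metal_mask(ct_hu, threshold_hu=2500.0):
    """Baseline metal segmentation: voxels at or above a fixed HU cutoff.

    Works when metal is far brighter than bone, but blooming and streak
    artifacts shift HU values near implants, so any single cutoff tends
    to over- or under-segment, which is the gap learned segmenters such
    as DiffSeg aim to close.
    """
    return np.asarray(ct_hu) >= threshold_hu
```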
Affiliation(s)
- Kai Xie
  - Radiotherapy Department, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou 213000, China
  - Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Liugang Gao
  - Radiotherapy Department, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou 213000, China
  - Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Yutao Zhang
  - Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China
  - Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
- Heng Zhang
  - Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China
  - Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
- Jiawei Sun
  - Radiotherapy Department, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou 213000, China
  - Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Tao Lin
  - Radiotherapy Department, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou 213000, China
  - Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Jianfeng Sui
  - Radiotherapy Department, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou 213000, China
  - Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Xinye Ni
  - Radiotherapy Department, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou 213000, China
  - Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
  - Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China
  - Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
17
Cam RM, Villa U, Anastasio MA. Learning a stable approximation of an existing but unknown inverse mapping: application to the half-time circular Radon transform. Inverse Problems 2024; 40:085002. [PMID: 38933410 PMCID: PMC11197394 DOI: 10.1088/1361-6420/ad4f0a] [Received: 09/06/2023] [Revised: 04/05/2024] [Accepted: 05/22/2024] [Indexed: 06/28/2024]
Abstract
Supervised deep learning-based methods have inspired a new wave of image reconstruction methods that implicitly learn effective regularization strategies from a set of training data. While they hold potential for improving image quality, they have also raised concerns regarding their robustness. Instabilities can manifest when learned methods are applied to find approximate solutions to ill-posed image reconstruction problems for which a unique and stable inverse mapping does not exist, which is a typical use case. In this study, we investigate the performance of supervised deep learning-based image reconstruction in an alternate use case in which a stable inverse mapping is known to exist but is not yet analytically available in closed form. For such problems, a deep learning-based method can learn a stable approximation of the unknown inverse mapping that generalizes well to data that differ significantly from the training set. The learned approximation of the inverse mapping eliminates the need to employ an implicit (optimization-based) reconstruction method and can potentially yield insights into the unknown analytic inverse formula. The specific problem addressed is image reconstruction from a particular case of radially truncated circular Radon transform (CRT) data, referred to as 'half-time' measurement data. For the half-time image reconstruction problem, we develop and investigate a learned filtered backprojection method that employs a convolutional neural network to approximate the unknown filtering operation. We demonstrate that this method behaves stably and readily generalizes to data that differ significantly from training data. The developed method may find application to wave-based imaging modalities that include photoacoustic computed tomography.
Affiliation(s)
- Refik Mert Cam
  - Department of Electrical and Computer Engineering, University of Illinois Urbana–Champaign, Urbana, IL 61801, United States of America
- Umberto Villa
  - Oden Institute for Computational Engineering & Sciences, The University of Texas at Austin, Austin, TX 78712, United States of America
- Mark A Anastasio
  - Department of Electrical and Computer Engineering, University of Illinois Urbana–Champaign, Urbana, IL 61801, United States of America
  - Department of Bioengineering, University of Illinois Urbana–Champaign, Urbana, IL 61801, United States of America
18
Zhang Y, Liu L, Yu H, Wang T, Zhang Y, Liu Y. ReMAR: a preoperative CT angiography guided metal artifact reduction framework designed for follow-up CTA of endovascular coiling. Phys Med Biol 2024; 69:145015. [PMID: 38959913 DOI: 10.1088/1361-6560/ad5ef4] [Received: 12/18/2023] [Accepted: 07/03/2024] [Indexed: 07/05/2024]
Abstract
Objective. Follow-up computed tomography angiography (CTA) is necessary for assessing the occlusion effect of endovascular coiling. However, the implanted metal coil introduces artifacts that negatively affect radiologic assessment. Method. A framework named ReMAR is proposed in this paper for metal artifact reduction (MAR) in follow-up CTA of patients with coiled aneurysms. It employs preoperative CTA to provide prior knowledge of the aneurysm and the expected position of the coil as guidance, thus balancing metal artifact removal performance and clinical feasibility. ReMAR is composed of three modules: segmentation, registration, and MAR. The segmentation and registration modules obtain knowledge of the metal coil by delineating aneurysms on the preoperative CTA and aligning the follow-up CTA. The MAR module, consisting of hybrid convolutional neural network and transformer architectures, restores the sinogram and removes the artifact from the reconstructed image. Both image quality and vessel rendering after metal artifact removal are assessed in response to clinical concerns. Main results. A total of 137 patients who underwent endovascular coiling were enrolled in the study: 13 of them had complete diagnosis/follow-up records and were used for end-to-end validation, while the rest, lacking follow-up records, were used for model training. Quantitative metrics show that ReMAR significantly reduced the metal artifact burden in follow-up CTA. Qualitative rankings show that ReMAR preserved the morphology of blood vessels during artifact removal, as desired by doctors. Significance. ReMAR can significantly remove the artifacts caused by the implanted metal coil in follow-up CTA. It can be used to enhance overall image quality and help establish CTA as an alternative to invasive follow-up of treated intracranial aneurysms.
Affiliation(s)
- Yaoyu Zhang
  - College of Electrical Engineering, Sichuan University, Chengdu 610065, People's Republic of China
- Lunxin Liu
  - Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu 610044, People's Republic of China
- Hui Yu
  - College of Computer Science, Sichuan University, Chengdu 610065, People's Republic of China
- Tao Wang
  - College of Computer Science, Sichuan University, Chengdu 610065, People's Republic of China
- Yi Zhang
  - College of Computer Science, Sichuan University, Chengdu 610065, People's Republic of China
- Yan Liu
  - College of Electrical Engineering, Sichuan University, Chengdu 610065, People's Republic of China
19
Song Y, Yao T, Peng S, Zhu M, Meng M, Ma J, Zeng D, Huang J, Bian Z, Wang Y. b-MAR: bidirectional artifact representations learning framework for metal artifact reduction in dental CBCT. Phys Med Biol 2024; 69:145010. [PMID: 38588680 DOI: 10.1088/1361-6560/ad3c0a] [Received: 12/04/2023] [Accepted: 04/08/2024] [Indexed: 04/10/2024]
Abstract
Objective. Metal artifacts in computed tomography (CT) images hinder diagnosis and treatment significantly. Dental cone-beam computed tomography (CBCT) images in particular are seriously contaminated by metal artifacts, due to the widespread use of low tube voltages and the presence of various high-attenuation materials in dental structures. Existing supervised metal artifact reduction (MAR) methods mainly learn the mapping from artifact-affected images to clean images while ignoring the modeling of the metal artifact generation process. We therefore propose a bidirectional artifact representations learning framework (b-MAR) to adaptively encode metal artifacts caused by various dental implants and to model both the generation and the elimination of metal artifacts, thereby improving MAR performance. Approach. Specifically, we introduce an efficient artifact encoder to extract multi-scale representations of metal artifacts from artifact-affected images. These representations are then bidirectionally embedded into both a metal artifact generator and a metal artifact eliminator, which simultaneously improves artifact removal and artifact generation. The artifact eliminator learns artifact removal in a supervised manner, while the artifact generator learns artifact generation in an adversarial manner. To further improve the performance of the bidirectional task networks, we propose an artifact consistency loss to align the images produced by the eliminator and the generator with and without embedded artifact representations. Main results. To validate the effectiveness of our algorithm, experiments were conducted on simulated and clinical datasets containing various dental metal morphologies. On the simulation tests, b-MAR improves PSNR by more than 1.4131 dB, reduces RMSE by more than 0.3473 HU, and raises the structural similarity index measure by more than 0.0025 over current state-of-the-art MAR methods. All results indicate that the proposed b-MAR method can remove artifacts caused by various metal morphologies and effectively restore the structural integrity of dental tissues. Significance. The proposed b-MAR method strengthens the joint learning of the artifact removal and artifact generation processes by bidirectionally embedding artifact representations, thereby improving the model's artifact removal performance. Compared with other methods, b-MAR can robustly and effectively correct metal artifacts in dental CBCT images caused by different dental metals.
Affiliation(s)
- Yuyan Song
  - School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China
  - Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Tianyi Yao
  - School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China
  - Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Shengwang Peng
  - School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China
  - Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Manman Zhu
  - School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China
  - Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Mingqiang Meng
  - School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China
  - Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Jianhua Ma
  - School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China
  - Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Dong Zeng
  - School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China
  - Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Jing Huang
  - School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China
- Zhaoying Bian
  - School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China
  - Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Yongbo Wang
  - School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China
  - Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
| |
20.
Chang JY, Makary MS. Evolving and Novel Applications of Artificial Intelligence in Thoracic Imaging. Diagnostics (Basel) 2024; 14:1456. [PMID: 39001346] [PMCID: PMC11240935] [DOI: 10.3390/diagnostics14131456]
Abstract
The advent of artificial intelligence (AI) is revolutionizing medicine, particularly radiology. With the development of newer models, AI applications are demonstrating improved performance and versatile utility in the clinical setting. Thoracic imaging is an area of profound interest, given the prevalence of chest imaging and the significant health implications of thoracic diseases. This review aims to highlight the promising applications of AI within thoracic imaging. It examines the role of AI, including its contributions to improving diagnostic evaluation and interpretation, enhancing workflow, and aiding in invasive procedures. It then highlights the current challenges and limitations faced by AI, such as the necessity of 'big data', ethical and legal considerations, and bias in representation. Lastly, it explores potential directions for the application of AI in thoracic radiology.
Affiliation(s)
- Jin Y Chang
- Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
- Mina S Makary
- Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
- Division of Vascular and Interventional Radiology, Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
21.
Tyndall DA. A primer and overview of the role of artificial intelligence in oral and maxillofacial radiology. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 138:112-117. [PMID: 38538401] [DOI: 10.1016/j.oooo.2024.02.009]
Affiliation(s)
- Donald A Tyndall
- Department of Diagnostic Sciences, The University of North Carolina at Chapel Hill Adams School of Dentistry, Chapel Hill, NC.
22.
Anhaus JA, Heider M, Killermann P, Hofmann C, Mahnken AH. A New Iterative Metal Artifact Reduction Algorithm for Both Energy-Integrating and Photon-Counting CT Systems. Invest Radiol 2024; 59:526-537. [PMID: 38193772] [DOI: 10.1097/rli.0000000000001055]
Abstract
OBJECTIVES The aim of this study was to introduce and evaluate a new metal artifact reduction framework (iMARv2) that addresses the drawbacks (residual artifacts after correction and user preferences for image quality) associated with the current clinically applied iMAR. MATERIALS AND METHODS A new iMARv2 has been introduced, combining the current iMAR with new modular components to remove residual metal artifacts after image correction. The postcorrection image impression is adjustable with user-selectable strength settings. Phantom scans from an energy-integrating and a photon-counting detector CT were used to assess image quality, including a Gammex phantom and anthropomorphic phantoms. In addition, 36 clinical cases (with metallic implants such as dental fillings, hip replacements, and spinal screws) were reconstructed and evaluated in a blinded and randomized reader study. RESULTS The Gammex phantom showed lower HU errors compared with the uncorrected image at almost all iMAR and iMARv2 settings evaluated, with only minor differences between iMAR and the different iMARv2 settings. In addition, the anthropomorphic phantoms showed a trend toward lower errors with higher iMARv2 strength settings. On average, the iMARv2 strength 3 performed best of all the clinical reconstructions evaluated, with a significant increase in diagnostic confidence and decrease in artifacts. All hip and dental cases showed a significant increase in diagnostic confidence and decrease in artifact strength, and the improvements from iMARv2 in the dental cases were significant compared with iMAR. There were no significant improvements in the spine. CONCLUSIONS This work has introduced and evaluated a new method for metal artifact reduction and demonstrated its utility in routine clinical datasets. The greatest improvements were seen in dental fillings, where iMARv2 significantly improved image quality compared with conventional iMAR.
Affiliation(s)
- Julian A Anhaus
- From the Siemens Healthineers, CT Physics, Forchheim, Germany (J.A.A., M.H., C.H.); Clinic of Diagnostic and Interventional Radiology, Philipps-University Marburg, Marburg, Germany (J.A.A., A.H.M.); and Infoteam Software AG, Bubenreuth, Germany (P.K.)
23.
Wajer R, Wajer A, Kazimierczak N, Wilamowska J, Serafin Z. The Impact of AI on Metal Artifacts in CBCT Oral Cavity Imaging. Diagnostics (Basel) 2024; 14:1280. [PMID: 38928694] [PMCID: PMC11203150] [DOI: 10.3390/diagnostics14121280]
Abstract
OBJECTIVE This study aimed to assess the impact of artificial intelligence (AI)-driven noise reduction algorithms on metal artifacts and image quality parameters in cone-beam computed tomography (CBCT) images of the oral cavity. MATERIALS AND METHODS This retrospective study included 70 patients, 61 of whom were analyzed after excluding those with severe motion artifacts. CBCT scans, performed using a Hyperion X9 PRO 13 × 10 CBCT machine, included images with dental implants, amalgam fillings, orthodontic appliances, root canal fillings, and crowns. Images were processed with the ClariCT.AI deep learning model (DLM) for noise reduction. Objective image quality was assessed using metrics such as the differentiation between voxel values (ΔVVs), the artifact index (AIx), and the contrast-to-noise ratio (CNR). Subjective assessments were performed by two experienced readers, who rated overall image quality and artifact intensity on predefined scales. RESULTS Compared with native images, DLM reconstructions significantly reduced the AIx and increased the CNR (p < 0.001), indicating improved image clarity and artifact reduction. Subjective assessments also favored DLM images, with higher ratings for overall image quality and lower artifact intensity (p < 0.001). However, the ΔVV values were similar between the native and DLM images, indicating that while the DLM reduced noise, it maintained the overall density distribution. Orthodontic appliances produced the most pronounced artifacts, while implants generated the least. CONCLUSIONS AI-based noise reduction using ClariCT.AI significantly enhances CBCT image quality by reducing noise and metal artifacts, thereby improving diagnostic accuracy and treatment planning. Further research with larger, multicenter cohorts is recommended to validate these findings.
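The objective metrics named in this abstract can be computed from simple ROI statistics. The definitions below are common ones (CNR with pooled ROI noise; artifact index as the quadrature difference of ROI standard deviations) and may differ in detail from those used in the study; names are illustrative:

```python
import numpy as np

def cnr(roi_a, roi_b):
    # Contrast-to-noise ratio between two regions of interest,
    # using the pooled standard deviation as the noise estimate
    # (one common definition; the paper may use a variant).
    return abs(roi_a.mean() - roi_b.mean()) / np.sqrt(
        (roi_a.var() + roi_b.var()) / 2)

def artifact_index(sd_artifact_roi, sd_reference_roi):
    # AIx = sqrt(SD_artifact^2 - SD_reference^2), clipped at zero so a
    # quieter-than-reference ROI does not produce a complex result.
    return np.sqrt(max(sd_artifact_roi ** 2 - sd_reference_roi ** 2, 0.0))
```

A "significantly reduced AIx" then corresponds to a smaller quadrature excess of noise in the artifact-affected ROI relative to an artifact-free reference region.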
Affiliation(s)
- Róża Wajer
- Department of Radiology and Diagnostic Imaging, University Hospital No. 1 in Bydgoszcz, Marii Skłodowskiej-Curie 9, 85-094 Bydgoszcz, Poland
- Natalia Kazimierczak
- Kazimierczak Private Medical Practice, Dworcowa 13/u6a, 85-009 Bydgoszcz, Poland
- Justyna Wilamowska
- Department of Radiology and Diagnostic Imaging, University Hospital No. 1 in Bydgoszcz, Marii Skłodowskiej-Curie 9, 85-094 Bydgoszcz, Poland
- Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
- Zbigniew Serafin
- Department of Radiology and Diagnostic Imaging, University Hospital No. 1 in Bydgoszcz, Marii Skłodowskiej-Curie 9, 85-094 Bydgoszcz, Poland
- Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
24.
Cao W, Parvinian A, Adamo D, Welch B, Callstrom M, Ren L, Missert A, Favazza CP. Deep convolutional-neural-network-based metal artifact reduction for CT-guided interventional oncology procedures (MARIO). Med Phys 2024; 51:4231-4242. [PMID: 38353644] [DOI: 10.1002/mp.16980]
Abstract
BACKGROUND Computed tomography (CT) is routinely used to guide cryoablation procedures. Notably, CT-guidance provides 3D localization of cryoprobes and can be used to delineate frozen tissue during ablation. However, metal-induced artifacts from ablation probes can make accurate probe placement challenging and degrade the ice ball conspicuity, which in combination could lead to undertreatment of potentially curable lesions. PURPOSE In this work, we propose an image-based convolutional neural network (CNN) model for metal artifact reduction for CT-guided interventional procedures. METHODS An image domain metal artifact simulation framework was developed and validated for deep-learning-based metal artifact reduction for interventional oncology (MARIO). CT scans were acquired for 19 different cryoablation probe configurations. The probe configurations varied in the number of probes and the relative orientations. A combination of intensity thresholding and masking based on maximum intensity projections (MIPs) was used to segment both the probes only and probes + artifact in each phantom image. Each of the probe and probe + artifact images were then inserted into 19 unique patient exams, in the image domain, to simulate metal artifact appearance for CT-guided interventional oncology procedures. The resulting 361 pairs of simulated image volumes were partitioned into disjoint training and test datasets of 304 and 57 volumes, respectively. From the training partition, 116 600 image patches with a shape of 128 × 128 × 5 pixels were randomly extracted to be used for training data. The input images consisted of a superposition of the patient and probe + artifact images. The target images consisted of a superposition of the patient and probe only images. This dataset was used to optimize a U-Net type model. The trained model was then applied to 50 independent, previously unseen CT images obtained during renal cryoablations.
Three board-certified radiologists with experience in CT-guided ablations performed a blinded review of the MARIO images. A total of 100 images (50 original, 50 MARIO processed) were assessed across different aspects of image quality on a 4-point Likert-type scale. Statistical analyses were performed using the Wilcoxon signed-rank test for paired samples. RESULTS Reader scores were significantly higher for MARIO-processed images compared to the original images across all metrics (all p < 0.001). The average scores for overall image quality, iceball conspicuity, overall metal artifact, needle tip visualization, target region confidence, and worst metal artifact improved by 34.91%, 36.29%, 39.94%, 34.17%, 35.13%, and 45.70%, respectively. CONCLUSIONS The proposed method of image-based metal artifact simulation can be used to train a MARIO algorithm to effectively reduce probe-related metal artifacts in CT-guided cryoablation procedures.
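The pair-construction and patch-sampling procedure described in this abstract amounts to simple image-domain superposition followed by random cropping. A hedged NumPy sketch (function names and the random-number handling are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_pair(patient, probe_only, probe_artifact):
    # Input: patient anatomy with probe + artifact superimposed.
    # Target: same anatomy with the probe only (artifact-free).
    return patient + probe_artifact, patient + probe_only

def random_patch(x, y, shape=(5, 128, 128)):
    # Draw one randomly located training patch (depth, height, width),
    # matching the 128 x 128 x 5 patch shape described above.
    d, h, w = (rng.integers(0, x.shape[i] - shape[i] + 1) for i in range(3))
    sl = (slice(d, d + shape[0]), slice(h, h + shape[1]), slice(w, w + shape[2]))
    return x[sl], y[sl]
```

Drawing many such patches from the 304 training volumes yields the kind of input/target dataset the abstract uses to optimize a U-Net.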
Affiliation(s)
- Wenchao Cao
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Ahmad Parvinian
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Daniel Adamo
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Brian Welch
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Liqiang Ren
- Department of Radiology, UT Southwestern Medical Center, Dallas, Texas, USA
- Andrew Missert
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
25.
Xia J, Zhou Y, Deng W, Kang J, Wu W, Qi M, Zhou L, Ma J, Xu Y. PND-Net: Physics-Inspired Non-Local Dual-Domain Network for Metal Artifact Reduction. IEEE Trans Med Imaging 2024; 43:2125-2136. [PMID: 38236665] [DOI: 10.1109/tmi.2024.3354925]
Abstract
Metal artifacts caused by the presence of metallic implants tremendously degrade the quality of reconstructed computed tomography (CT) images and therefore affect the clinical diagnosis or reduce the accuracy of organ delineation and dose calculation in radiotherapy. Although various deep learning methods have been proposed for metal artifact reduction (MAR), most of them aim to restore the corrupted sinogram within the metal trace, which removes beam hardening artifacts but ignores other components of metal artifacts. In this paper, based on the physical property of metal artifacts which is verified via Monte Carlo (MC) simulation, we propose a novel physics-inspired non-local dual-domain network (PND-Net) for MAR in CT imaging. Specifically, we design a novel non-local sinogram decomposition network (NSD-Net) to acquire the weighted artifact component and develop an image restoration network (IR-Net) to reduce the residual and secondary artifacts in the image domain. To facilitate the generalization and robustness of our method on clinical CT images, we employ a trainable fusion network (F-Net) in the artifact synthesis path to achieve unpaired learning. Furthermore, we design an internal consistency loss to ensure the data fidelity of anatomical structures in the image domain and introduce the linear interpolation sinogram as prior knowledge to guide sinogram decomposition. NSD-Net, IR-Net, and F-Net are jointly trained so that they can benefit from one another. Extensive experiments on simulation and clinical data demonstrate that our method outperforms state-of-the-art MAR methods.
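The "linear interpolation sinogram" used as prior knowledge in this abstract is the classical LI-MAR step: each projection view is linearly interpolated across the detector bins covered by the metal trace. A minimal NumPy sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def li_inpaint(sinogram, metal_trace):
    # Linearly interpolate each projection view across the metal trace.
    # `sinogram` and boolean `metal_trace` are (views x detector bins).
    out = sinogram.astype(float).copy()
    bins = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):
        bad = metal_trace[v]
        if bad.any() and not bad.all():
            # Replace corrupted bins with values interpolated from the
            # nearest uncorrupted bins on either side.
            out[v, bad] = np.interp(bins[bad], bins[~bad], sinogram[v, ~bad])
    return out
```

Reconstructing this inpainted sinogram removes most in-trace corruption but, as the abstract notes, leaves residual and secondary artifacts that the image-domain network then addresses.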
26.
Huang Y, Zhang X, Hu Y, Johnston AR, Jones CK, Zbijewski WB, Siewerdsen JH, Helm PA, Witham TF, Uneri A. Deformable registration of preoperative MR and intraoperative long-length tomosynthesis images for guidance of spine surgery via image synthesis. Comput Med Imaging Graph 2024; 114:102365. [PMID: 38471330] [DOI: 10.1016/j.compmedimag.2024.102365]
Abstract
PURPOSE Improved integration and use of preoperative imaging during surgery hold significant potential for enhancing treatment planning and instrument guidance through surgical navigation. Despite its prevalent use in diagnostic settings, MR imaging is rarely used for navigation in spine surgery. This study aims to leverage MR imaging for intraoperative visualization of spine anatomy, particularly in cases where CT imaging is unavailable or when minimizing radiation exposure is essential, such as in pediatric surgery. METHODS This work presents a method for deformable 3D-2D registration of preoperative MR images with a novel intraoperative long-length tomosynthesis imaging modality (viz., Long-Film [LF]). A conditional generative adversarial network is used to translate MR images to an intermediate bone image suitable for registration, followed by a model-based 3D-2D registration algorithm to deformably map the synthesized images to LF images. The algorithm's performance was evaluated on cadaveric specimens with implanted markers and controlled deformation, and in clinical images of patients undergoing spine surgery as part of a large-scale clinical study on LF imaging. RESULTS The proposed method yielded a median 2D projection distance error of 2.0 mm (interquartile range [IQR]: 1.1-3.3 mm) and a 3D target registration error of 1.5 mm (IQR: 0.8-2.1 mm) in cadaver studies. Notably, the multi-scale approach exhibited significantly higher accuracy compared to rigid solutions and effectively managed the challenges posed by piecewise rigid spine deformation. The robustness and consistency of the method were evaluated on clinical images, yielding no outliers on vertebrae without surgical instrumentation and 3% outliers on vertebrae with instrumentation. CONCLUSIONS This work constitutes the first reported approach for deformable MR to LF registration based on deep image synthesis. 
The proposed framework provides access to the preoperative annotations and planning information during surgery and enables surgical navigation within the context of MR images and/or dual-plane LF images.
Affiliation(s)
- Yixuan Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Xiaoxuan Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Yicheng Hu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- Ashley R Johnston
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Craig K Jones
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- Wojciech B Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Timothy F Witham
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD, United States
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
27.
Liu Y, Xie R, Wang L, Liu H, Liu C, Zhao Y, Bai S, Liu W. Fully automatic AI segmentation of oral surgery-related tissues based on cone beam computed tomography images. Int J Oral Sci 2024; 16:34. [PMID: 38719817] [PMCID: PMC11079075] [DOI: 10.1038/s41368-024-00294-z]
Abstract
Accurate segmentation of oral surgery-related tissues from cone beam computed tomography (CBCT) images can significantly accelerate treatment planning and improve surgical accuracy. In this paper, we propose a fully automated tissue segmentation system for dental implant surgery. Specifically, we propose an image preprocessing method based on data distribution histograms, which can adaptively process CBCT images acquired with different parameters. Based on this, we use a bone segmentation network to obtain segmentation results for the alveolar bone, teeth, and maxillary sinus. We use the tooth and mandibular regions as the ROIs for tooth segmentation and mandibular nerve tube segmentation, respectively. The tooth segmentation results also provide the ordering of the dentition. Experimental results show that our method achieves higher segmentation accuracy and efficiency than existing methods. Its average Dice scores on the tooth, alveolar bone, maxillary sinus, and mandibular canal segmentation tasks were 96.5%, 95.4%, 93.6%, and 94.8%, respectively. These results demonstrate that it can accelerate the development of digital dentistry.
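The Dice scores reported in this abstract follow the standard overlap definition, Dice = 2|A∩B| / (|A|+|B|). For reference, a minimal NumPy implementation (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary masks.
    # Returns 1.0 for two empty masks by convention.
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A Dice of 96.5% thus means the predicted tooth mask and the ground-truth mask overlap in 96.5% of their combined (averaged) voxel count.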
Affiliation(s)
- Yu Liu
- Beijing Yakebot Technology Co., Ltd., Beijing, China
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Rui Xie
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Key Laboratory of Stomatology, Digital Center, School of Stomatology, The Fourth Military Medical University, Xi'an, China
- Lifeng Wang
- Beijing Yakebot Technology Co., Ltd., Beijing, China
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Hongpeng Liu
- Beijing Yakebot Technology Co., Ltd., Beijing, China
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Chen Liu
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Key Laboratory of Stomatology, Digital Center, School of Stomatology, The Fourth Military Medical University, Xi'an, China
- Yimin Zhao
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Key Laboratory of Stomatology, Digital Center, School of Stomatology, The Fourth Military Medical University, Xi'an, China
- Shizhu Bai
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Key Laboratory of Stomatology, Digital Center, School of Stomatology, The Fourth Military Medical University, Xi'an, China
- Wenyong Liu
- Key Laboratory of Biomechanics and Mechanobiology of the Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
28.
Li Z, Gao Q, Wu Y, Niu C, Zhang J, Wang M, Wang G, Shan H. Quad-Net: Quad-Domain Network for CT Metal Artifact Reduction. IEEE Trans Med Imaging 2024; 43:1866-1879. [PMID: 38194399] [DOI: 10.1109/tmi.2024.3351722]
Abstract
Metal implants and other high-density objects in patients introduce severe streaking artifacts in CT images, compromising image quality and diagnostic performance. Although various methods were developed for CT metal artifact reduction over the past decades, including the latest dual-domain deep networks, remaining metal artifacts are still clinically challenging in many cases. Here we extend the state-of-the-art dual-domain deep network approach into a quad-domain counterpart so that all the features in the sinogram, image, and their corresponding Fourier domains are synergized to eliminate metal artifacts optimally without compromising structural subtleties. Our proposed quad-domain network for MAR, referred to as Quad-Net, takes little additional computational cost since the Fourier transform is highly efficient, and works across the four receptive fields to learn both global and local features as well as their relations. Specifically, we first design a Sinogram-Fourier Restoration Network (SFR-Net) in the sinogram domain and its Fourier space to faithfully inpaint metal-corrupted traces. Then, we couple SFR-Net with an Image-Fourier Refinement Network (IFR-Net) which takes both an image and its Fourier spectrum to improve a CT image reconstructed from the SFR-Net output using cross-domain contextual information. Quad-Net is trained on clinical datasets to minimize a composite loss function. Quad-Net does not require precise metal masks, which is of great importance in clinical practice. Our experimental results demonstrate the superiority of Quad-Net over the state-of-the-art MAR methods quantitatively, visually, and statistically. The Quad-Net code is publicly available at https://github.com/longzilicart/Quad-Net.
29.
Selles M, Wellenberg RHH, Slotman DJ, Nijholt IM, van Osch JAC, van Dijke KF, Maas M, Boomsma MF. Image quality and metal artifact reduction in total hip arthroplasty CT: deep learning-based algorithm versus virtual monoenergetic imaging and orthopedic metal artifact reduction. Eur Radiol Exp 2024; 8:31. [PMID: 38480603] [PMCID: PMC10937891] [DOI: 10.1186/s41747-024-00427-3]
Abstract
BACKGROUND To compare image quality, metal artifacts, and diagnostic confidence of conventional computed tomography (CT) images of unilateral total hip arthroplasty patients (THA) with deep learning-based metal artifact reduction (DL-MAR) to conventional CT and 130-keV monoenergetic images with and without orthopedic metal artifact reduction (O-MAR). METHODS Conventional CT and 130-keV monoenergetic images with and without O-MAR and DL-MAR images of 28 unilateral THA patients were reconstructed. Image quality, metal artifacts, and diagnostic confidence in bone, pelvic organs, and soft tissue adjacent to the prosthesis were jointly scored by two experienced musculoskeletal radiologists. Contrast-to-noise ratios (CNR) between bladder and fat and muscle and fat were measured. Wilcoxon signed-rank tests with Holm-Bonferroni correction were used. RESULTS Significantly higher image quality, higher diagnostic confidence, and less severe metal artifacts were observed on DL-MAR and images with O-MAR compared to images without O-MAR (p < 0.001 for all comparisons). Higher image quality, higher diagnostic confidence for bone and soft tissue adjacent to the prosthesis, and less severe metal artifacts were observed on DL-MAR when compared to conventional images and 130-keV monoenergetic images with O-MAR (p ≤ 0.014). CNRs were higher for DL-MAR and images with O-MAR compared to images without O-MAR (p < 0.001). Higher CNRs were observed on DL-MAR images compared to conventional images and 130-keV monoenergetic images with O-MAR (p ≤ 0.010). CONCLUSIONS DL-MAR showed higher image quality, diagnostic confidence, and superior metal artifact reduction compared to conventional CT images and 130-keV monoenergetic images with and without O-MAR in unilateral THA patients. 
RELEVANCE STATEMENT DL-MAR resulted in improved image quality, stronger reduction of metal artifacts, and improved diagnostic confidence compared to conventional and virtual monoenergetic images with and without metal artifact reduction, bringing DL-based metal artifact reduction closer to clinical application. KEY POINTS • Metal artifacts introduced by total hip arthroplasty hamper radiologic assessment on CT. • A deep-learning algorithm (DL-MAR) was compared to dual-layer CT images with O-MAR. • DL-MAR showed the best image quality and diagnostic confidence. • The highest contrast-to-noise ratios were observed on the DL-MAR images.
Affiliation(s)
- Mark Selles
- Department of Radiology, Isala, 8025 AB, Zwolle, the Netherlands.
- Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ, Amsterdam, the Netherlands.
- Amsterdam Movement Sciences, 1081 BT, Amsterdam, the Netherlands.
- Ruud H H Wellenberg
- Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ, Amsterdam, the Netherlands
- Amsterdam Movement Sciences, 1081 BT, Amsterdam, the Netherlands
- Derk J Slotman
- Department of Radiology, Isala, 8025 AB, Zwolle, the Netherlands
- Ingrid M Nijholt
- Department of Radiology, Isala, 8025 AB, Zwolle, the Netherlands
- Kees F van Dijke
- Department of Radiology & Nuclear Medicine, Noordwest Ziekenhuisgroep, 1815 JD, Alkmaar, the Netherlands
- Mario Maas
- Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ, Amsterdam, the Netherlands
- Amsterdam Movement Sciences, 1081 BT, Amsterdam, the Netherlands
30.
Schoonhoven R, Skorikov A, Palenstijn WJ, Pelt DM, Hendriksen AA, Batenburg KJ. How auto-differentiation can improve CT workflows: classical algorithms in a modern framework. Opt Express 2024; 32:9019-9041. [DOI: 10.1364/oe.502920]
Abstract
Many of the recent successes of deep learning-based approaches have been enabled by a framework of flexible, composable computational blocks with their parameters adjusted through an automatic differentiation mechanism to implement various data processing tasks. In this work, we explore how the same philosophy can be applied to existing "classical" (i.e., non-learning) algorithms, focusing on computed tomography (CT) as application field. We apply four key design principles of this approach for CT workflow design: end-to-end optimization, explicit quality criteria, declarative algorithm construction by building the forward model, and use of existing classical algorithms as computational blocks. Through four case studies, we demonstrate that auto-differentiation is remarkably effective beyond the boundaries of neural-network training, extending to CT workflows containing varied combinations of classical and machine learning algorithms.
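To make the idea of auto-differentiating classical algorithms concrete, here is a self-contained toy: forward-mode autodiff via dual numbers drives gradient descent on an explicit quality criterion for a one-parameter forward model. This is a didactic sketch in plain Python, far simpler than the GPU-accelerated frameworks such workflows typically build on; all names are illustrative:

```python
class Dual:
    """Minimal forward-mode autodiff value: x + eps * dx."""
    def __init__(self, x, dx=0.0):
        self.x, self.dx = x, dx
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.x + o.x, self.dx + o.dx)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.x - o.x, self.dx - o.dx)
    def __mul__(self, o):
        # Product rule propagates the derivative automatically.
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.x * o.x, self.dx * o.x + self.x * o.dx)
    __rmul__ = __mul__

def loss(theta, data):
    # Explicit quality criterion: squared error of the forward model
    # y = theta * t against measurements (t, y).
    return sum((theta * t - y) * (theta * t - y) for t, y in data)

def fit(data, theta=0.0, lr=0.01, steps=200):
    # End-to-end optimization: the derivative of the whole pipeline
    # w.r.t. theta falls out of the dual-number arithmetic.
    for _ in range(steps):
        g = loss(Dual(theta, 1.0), data).dx
        theta -= lr * g
    return theta
```

The same principle (write the forward model, get gradients for free, optimize an explicit criterion) scales to reconstruction parameters of full CT pipelines.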
31.
Morioka Y, Ichikawa K, Kawashima H. Quality improvement of images with metal artifact reduction using a noise recovery technique in computed tomography. Phys Eng Sci Med 2024; 47:169-180. [PMID: 37938518] [DOI: 10.1007/s13246-023-01353-1]
Abstract
In metal artifact reduction (MAR) for computed tomography (CT) based on projection data inpainting, X-ray photon noise has not been considered in the inpainting process. This study assesses the effectiveness of a MAR technique incorporating noise recovery in such projection data regions, compared with existing MAR techniques based on projection data normalization (NMAR), including one with frequency splitting (FSNMAR). Phantoms simulating hip prostheses and dental fillings were scanned using a 64-row multislice CT scanner. The projection data were processed by NMAR and by NMAR with noise recovery (NRNMAR); the processed data were sent back to the CT system for reconstruction. For the phantoms and for clinical cases with hip prostheses and dental fillings, images were reconstructed without MAR and with NMAR, NRNMAR, and FSNMAR (incorporated in the CT system). To validate the efficacy of noise recovery, noise power spectra (NPSs) were measured from the images of the hip prosthesis phantom with and without metals. The artifact index (AI) was compared between NRNMAR and FSNMAR. The resultant NPSs of NRNMAR were very similar to those of phantom images without metals, confirming the efficacy of noise recovery. The NMAR images had unnatural noise textures, and FSNMAR caused additional streaks. NRNMAR showed significant improvements in these respects: it reduced the AI by as much as 66.2-88.6% compared to FSNMAR, except in the case of a unilateral prosthesis. In conclusion, NRNMAR, which simply adds white noise to the projection data, should be effective in improving the quality of CT images with metal artifact reduction.
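The noise-recovery step described above (adding white noise back into the smoothed, inpainted region of the projection data) can be sketched as follows. The noise strength `sigma` would in practice be estimated from the surrounding uncorrupted projection data, which is out of scope here; names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def recover_noise(inpainted, metal_trace, sigma):
    # Add white Gaussian noise of strength sigma inside the inpainted
    # metal trace, approximating the photon noise that inpainting
    # removed; data outside the trace is left untouched.
    out = inpainted.astype(float).copy()
    out[metal_trace] += rng.normal(0.0, sigma, size=int(metal_trace.sum()))
    return out
```

Restoring a noise texture consistent with the rest of the sinogram is what brings the reconstructed NPS back in line with the metal-free phantom images.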
Affiliation(s)
- Yusuke Morioka
  - Department of Medical Technology, Toyama Prefectural Central Hospital, 2-2-78 Nishinagae, Toyama, 930-8550, Japan
  - Division of Health Sciences, Graduate School of Medical Science, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, Ishikawa, 920-0942, Japan
- Katsuhiro Ichikawa
  - Faculty of Health Sciences, Institute of Medical, Pharmaceutical and Health Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, Ishikawa, 920-0942, Japan
- Hiroki Kawashima
  - Faculty of Health Sciences, Institute of Medical, Pharmaceutical and Health Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, Ishikawa, 920-0942, Japan
32
Zhang J, Gong W, Ye L, Wang F, Shangguan Z, Cheng Y. A Review of deep learning methods for denoising of medical low-dose CT images. Comput Biol Med 2024; 171:108112. [PMID: 38387380 DOI: 10.1016/j.compbiomed.2024.108112]
Abstract
To prevent patients from being exposed to excess radiation in CT imaging, the most common solution is to decrease the radiation dose by reducing the X-ray exposure; the quality of the resulting low-dose CT (LDCT) images is thus degraded, as evidenced by more noise and streaking artifacts. Therefore, it is important to maintain high CT image quality while effectively reducing the radiation dose. In recent years, with the rapid development of deep learning technology, deep learning-based LDCT denoising methods have become quite popular because of their data-driven, high-performance ability to achieve excellent denoising results. However, to our knowledge, no article has so far comprehensively introduced and reviewed advanced deep learning denoising methods, such as Transformer structures, in LDCT denoising tasks. Therefore, based on the literature on LDCT image denoising published from 2016 to 2023, and in particular from 2020 to 2023, this study presents a systematic survey of the current situation, challenges, and future research directions in the LDCT image denoising field. Four types of denoising networks are classified according to network structure: CNN-based, encoder-decoder-based, GAN-based, and Transformer-based denoising networks; each type is described and summarized from the perspectives of structural features and denoising performance. Representative deep learning denoising methods for LDCT are experimentally compared and analyzed. The results show that CNN-based denoising methods capture image details efficiently through multi-level convolution operations, demonstrating superior denoising effects and adaptivity. Encoder-decoder networks with MSE loss achieve outstanding results in objective metrics. GAN-based methods, employing innovative generators and discriminators, obtain denoised images that are perceptually close to normal-dose CT (NDCT) images. Transformer-based methods have potential for improving denoising performance due to their powerful capability in capturing global information. Challenges and opportunities for deep learning-based LDCT denoising are analyzed, and future directions are also presented.
Affiliation(s)
- Ju Zhang
  - College of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
- Weiwei Gong
  - College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China
- Lieli Ye
  - College of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
- Fanghong Wang
  - Zhijiang College, Zhejiang University of Technology, Shaoxing, China
- Zhibo Shangguan
  - College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China
- Yun Cheng
  - Department of Medical Imaging, Zhejiang Hospital, Hangzhou, China
33
Zhou H, Zhang H, Zhao X, Zhang P, Zhu Y. A model-based direct inversion network (MDIN) for dual spectral computed tomography. Phys Med Biol 2024; 69:055005. [PMID: 38271738 DOI: 10.1088/1361-6560/ad229f]
Abstract
Objective. Dual spectral computed tomography (DSCT) is a very challenging problem in the field of imaging. Due to the nonlinearity of its mathematical model, images reconstructed by conventional CT usually suffer from beam-hardening artifacts. Additionally, several existing DSCT methods rely heavily on information about the spectra, which is often not readily available in applications. To address this problem, this study aims to develop a novel approach to improve DSCT reconstruction performance. Approach. A model-based direct inversion network (MDIN) is proposed for DSCT, which can directly predict the basis material images from the collected polychromatic projections. All operations are performed within the network, requiring neither conventional algorithms nor information about the spectra; the network can be viewed as an approximation to the inverse of the DSCT imaging model. The MDIN is composed of a projection pre-decomposition module (PD-module), a domain transformation layer (DT-layer), and an image post-decomposition module (ID-module). The PD-module, which consists of a series of stacked one-dimensional convolution layers, first performs pre-decomposition on the polychromatic projections. The DT-layer, which is sparsely connected and has learnable parameters, is designed to obtain preliminary decomposed results. The ID-module then uses a deep neural network to further decompose the reconstructed results of the DT-layer so as to achieve higher-quality basis material images. Main results. Numerical experiments demonstrate that the proposed MDIN has significant advantages in material decomposition, artifact reduction, and noise suppression compared to other DSCT reconstruction methods. Significance. The proposed method has flexible applicability and can be extended to other CT problems, such as multi-spectral CT and low-dose CT.
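For context, the conventional inversion that MDIN sidesteps can be illustrated in its simplest idealized form: with known basis-material attenuation coefficients and beam hardening ignored, two-material decomposition reduces to inverting a 2x2 linear model per pixel. The coefficient values below are made up for illustration, not measured data:

```python
import numpy as np

# Idealized two-material decomposition with KNOWN basis attenuation
# coefficients.  MDIN's point is to avoid exactly this spectral knowledge;
# this sketch shows the conventional per-pixel inversion it replaces, in
# the linearized (monoenergetic, no beam hardening) case.
MU = np.array([[0.20, 0.50],   # [water, bone] attenuation at the low-kVp setting
               [0.15, 0.30]])  # [water, bone] attenuation at the high-kVp setting

def decompose(mu_low, mu_high):
    """Basis-material coefficient images from a dual-energy image pair."""
    stacked = np.stack([mu_low.ravel(), mu_high.ravel()])  # (2, n_pixels)
    coeffs = np.linalg.solve(MU, stacked)                  # invert the 2x2 model
    return coeffs[0].reshape(mu_low.shape), coeffs[1].reshape(mu_low.shape)
```

Real DSCT is nonlinear because the measured attenuation integrates over a polychromatic spectrum, which is why direct inversion networks such as MDIN are of interest.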
Affiliation(s)
- Haichuan Zhou
  - School of Mathematical Sciences, Capital Normal University, Beijing, 100048, People's Republic of China
  - School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang, 471000, People's Republic of China
- Huitao Zhang
  - School of Mathematical Sciences, Capital Normal University, Beijing, 100048, People's Republic of China
  - Shenzhen National Applied Mathematics Center, Southern University of Science and Technology, Shenzhen, 518055, People's Republic of China
- Xing Zhao
  - School of Mathematical Sciences, Capital Normal University, Beijing, 100048, People's Republic of China
  - Shenzhen National Applied Mathematics Center, Southern University of Science and Technology, Shenzhen, 518055, People's Republic of China
- Peng Zhang
  - School of Mathematical Sciences, Capital Normal University, Beijing, 100048, People's Republic of China
- Yining Zhu
  - School of Mathematical Sciences, Capital Normal University, Beijing, 100048, People's Republic of China
  - Shenzhen National Applied Mathematics Center, Southern University of Science and Technology, Shenzhen, 518055, People's Republic of China
34
O'Connell J, Weil MD, Bazalova-Carter M. Non-coplanar lung SABR treatments delivered with a gantry-mounted x-ray tube. Phys Med Biol 2024; 69:025002. [PMID: 38035372 DOI: 10.1088/1361-6560/ad111a]
Abstract
Objective. To create two non-coplanar, stereotactic ablative radiotherapy (SABR) lung patient treatment plans compliant with the Radiation Therapy Oncology Group (RTOG) 0813 dosimetric criteria using a simple, isocentric therapy with kilovoltage arcs (SITKA) system designed to provide low-cost external radiotherapy treatments for low- and middle-income countries (LMICs). Approach. A treatment machine design is proposed featuring a 320 kVp x-ray tube mounted on a gantry. A deep learning cone-beam CT (CBCT) to synthetic CT (sCT) method was employed to remove the additional cost of planning CTs. A novel inverse treatment planning approach using GPU backprojection was used to create a highly non-coplanar treatment plan with circular beam shapes generated by an iris collimator. Treatments were planned and simulated using the TOPAS Monte Carlo (MC) code for two lung patients. Dose distributions were compared to 6 MV volumetric modulated arc therapy (VMAT) planned in Eclipse on the same cases for a Truebeam linac, with both obeying the RTOG 0813 protocols for lung SABR treatments with a prescribed dose of 50 Gy. Main results. The low-cost SITKA treatments were compliant with all RTOG 0813 dosimetric criteria. SITKA treatments showed, on average, a 6.7 and 4.9 Gy reduction of the maximum dose in soft-tissue organs at risk (OARs) compared to VMAT for the two patients, respectively. This was accompanied by a small increase in the mean dose of 0.17 and 0.30 Gy in soft-tissue OARs. Significance. The proposed SITKA system offers a low-cost, effective alternative to conventional radiotherapy systems for lung cancer patients, particularly in low-income countries. The system's non-coplanar, isocentric approach, coupled with deep learning CBCT-to-sCT conversion and GPU backprojection-based inverse treatment planning, offers lower maximum doses in OARs and comparable conformity to VMAT plans at a fraction of the cost of conventional radiotherapy.
Affiliation(s)
- Michael D Weil
  - Sirius Medicine LLC, Half Moon Bay, CA, United States of America
35
Higaki T. [[CT] 5. Various CT Image Reconstruction Methods Applying Deep Learning]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2024; 80:112-117. [PMID: 38246633 DOI: 10.6009/jjrt.2024-2309]
Affiliation(s)
- Toru Higaki
  - Graduate School of Advanced Science and Engineering, Hiroshima University
36
Selles M, van Osch JAC, Maas M, Boomsma MF, Wellenberg RHH. Advances in metal artifact reduction in CT images: A review of traditional and novel metal artifact reduction techniques. Eur J Radiol 2024; 170:111276. [PMID: 38142571 DOI: 10.1016/j.ejrad.2023.111276]
Abstract
Metal artifacts degrade CT image quality, hampering clinical assessment. Numerous metal artifact reduction methods are available to improve the image quality of CT images with metal implants. In this review, an overview of traditional methods is provided, including the modification of acquisition and reconstruction parameters, projection-based metal artifact reduction (MAR) techniques, dual-energy CT (DECT), and combinations of these techniques. Furthermore, the additional value and challenges of novel metal artifact reduction techniques introduced over the past years are discussed, such as photon-counting CT (PCCT) and deep learning-based metal artifact reduction techniques.
Affiliation(s)
- Mark Selles
  - Department of Radiology, Isala, 8025 AB Zwolle, the Netherlands; Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands
- Mario Maas
  - Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands
- Ruud H H Wellenberg
  - Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands
37
Wang H, Xie Q, Zeng D, Ma J, Meng D, Zheng Y. OSCNet: Orientation-Shared Convolutional Network for CT Metal Artifact Learning. IEEE Trans Med Imaging 2024; 43:489-502. [PMID: 37656650 DOI: 10.1109/tmi.2023.3310987]
Abstract
X-ray computed tomography (CT) has been broadly adopted in clinical applications for disease diagnosis and image-guided interventions. However, metals within patients always cause unfavorable artifacts in the recovered CT images. Despite attaining promising reconstruction results for this metal artifact reduction (MAR) task, most existing deep-learning-based approaches have not fully exploited the important prior knowledge underlying this specific task. Therefore, in this paper, we carefully investigate the inherent characteristics of metal artifacts, which present rotationally symmetric streaking patterns. We then propose an orientation-shared convolution representation mechanism to adapt to such physical prior structures, and utilize Fourier-series-expansion-based filter parametrization for modelling artifacts, which can finely separate metal artifacts from body tissues. By adopting the classical proximal gradient algorithm to solve the model and then utilizing the deep unfolding technique, we easily build the corresponding orientation-shared convolutional network, termed OSCNet. Furthermore, considering that different sizes and types of metals lead to different artifact patterns (e.g., different artifact intensities), we design a simple yet effective sub-network for the dynamic convolution representation of artifacts, to better improve the flexibility of artifact learning and fully exploit the reconstructed results at iterative stages for information propagation. By integrating this sub-network into the proposed OSCNet framework, we further construct a more flexible network structure, called OSCNet+, which improves generalization performance. Through extensive experiments conducted on synthetic and clinical datasets, we comprehensively substantiate the effectiveness of our proposed methods. Code will be released at https://github.com/hongwang01/OSCNet.
38
Rahman H, Khan AR, Sadiq T, Farooqi AH, Khan IU, Lim WH. A Systematic Literature Review of 3D Deep Learning Techniques in Computed Tomography Reconstruction. Tomography 2023; 9:2158-2189. [PMID: 38133073 PMCID: PMC10748093 DOI: 10.3390/tomography9060169]
Abstract
Computed tomography (CT) is used in a wide range of medical imaging diagnoses. However, the reconstruction of CT images from raw projection data is inherently complex and is subject to artifacts and noise, which compromises image quality and accuracy. In order to address these challenges, deep learning developments have the potential to improve the reconstruction of computed tomography images. In this regard, our research aim is to determine the techniques that are used for 3D deep learning in CT reconstruction and to identify the training and validation datasets that are accessible. This research was performed on five databases. After a careful assessment of each record based on the objective and scope of the study, we selected 60 research articles for this review. This systematic literature review revealed that convolutional neural networks (CNNs), 3D convolutional neural networks (3D CNNs), and deep learning reconstruction (DLR) were the most suitable deep learning algorithms for CT reconstruction. Additionally, two major datasets appropriate for training and developing deep learning systems were identified: 2016 NIH-AAPM-Mayo and MSCT. These datasets are important resources for the creation and assessment of CT reconstruction models. According to the results, 3D deep learning may increase the effectiveness of CT image reconstruction, boost image quality, and lower radiation exposure. By using these deep learning approaches, CT image reconstruction may be made more precise and effective, improving patient outcomes, diagnostic accuracy, and healthcare system productivity.
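As background for the reconstruction pipelines the reviewed papers build on, the classical baseline that deep learning methods augment or replace is filtered backprojection. A minimal parallel-beam version in plain NumPy might look like this (an illustrative sketch, not code from any reviewed work):

```python
import numpy as np

def fbp(sinogram, angles):
    """Minimal parallel-beam filtered backprojection.

    sinogram: (n_angles, n_detectors) array; angles in radians.
    Returns an (n_detectors, n_detectors) reconstruction.
    """
    n_ang, n_det = sinogram.shape
    # Ram-Lak (ramp) filtering of each view in the frequency domain
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1).real
    # Backproject each filtered view with linear interpolation
    s = np.arange(n_det) - n_det / 2.0
    X, Y = np.meshgrid(s, s)
    recon = np.zeros((n_det, n_det))
    for a, view in zip(angles, filtered):
        t = X * np.cos(a) + Y * np.sin(a) + n_det / 2.0  # detector coordinate
        valid = (t >= 0) & (t <= n_det - 1)
        tt = np.where(valid, t, 0.0)
        lo = np.floor(tt).astype(int)
        hi = np.minimum(lo + 1, n_det - 1)
        w = tt - lo
        recon += np.where(valid, (1 - w) * view[lo] + w * view[hi], 0.0)
    return recon * np.pi / n_ang
```

The noise and streaking artifacts discussed in the review arise precisely because this inversion amplifies high-frequency noise (the ramp filter) and needs many well-sampled views, which is what learned reconstruction and post-processing networks try to compensate for.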
Affiliation(s)
- Hameedur Rahman
  - Department of Computer Games Development, Faculty of Computing & AI, Air University, E9, Islamabad 44000, Pakistan
- Abdur Rehman Khan
  - Department of Creative Technologies, Faculty of Computing & AI, Air University, E9, Islamabad 44000, Pakistan
- Touseef Sadiq
  - Centre for Artificial Intelligence Research, Department of Information and Communication Technology, University of Agder, Jon Lilletuns vei 9, 4879 Grimstad, Norway
- Ashfaq Hussain Farooqi
  - Department of Computer Science, Faculty of Computing AI, Air University, Islamabad 44000, Pakistan
- Inam Ullah Khan
  - Department of Electronic Engineering, School of Engineering & Applied Sciences (SEAS), Isra University, Islamabad Campus, Islamabad 44000, Pakistan
- Wei Hong Lim
  - Faculty of Engineering, Technology and Built Environment, UCSI University, Kuala Lumpur 56000, Malaysia
39
Ghods K, Azizi A, Jafari A, Ghods K. Application of Artificial Intelligence in Clinical Dentistry, a Comprehensive Review of Literature. J Dent (Shiraz, Iran) 2023; 24:356-371. [PMID: 38149231 PMCID: PMC10749440 DOI: 10.30476/dentjods.2023.96835.1969]
Abstract
Statement of the Problem In recent years, the use of artificial intelligence (AI) has become increasingly popular in dentistry because it facilitates the process of diagnosis and clinical decision-making. However, AI has several prominent drawbacks, which restrict its wide application today. It is necessary for dentists to be aware of AI's pros and cons before its implementation. Purpose The present study was therefore conducted to comprehensively review the various applications of AI in all dental branches, along with their advantages and disadvantages. Materials and Method For this review article, a complete query was carried out on the PubMed and Google Scholar databases, and studies published during 2010-2022 were collected using the keywords "Artificial Intelligence", "Dentistry", "Machine learning", "Deep learning", and "Diagnostic System". Ultimately, 116 relevant articles focused on artificial intelligence in dentistry were selected and evaluated. Results Recent research has reported AI applications in detecting dental abnormalities and oral malignancies based on radiographic views and histopathological features, designing dental implants and crowns, determining the tooth preparation finishing line, analyzing growth patterns, estimating biological age, predicting the viability of dental pulp stem cells, analyzing the gene expression of periapical lesions, forensic dentistry, and predicting the success rate of treatments. Despite AI's benefits in clinical dentistry, three controversial challenges exist and need to be managed: ease of use, financial return on investment, and evidence of performance. Conclusion As evidenced by the obtained results, the most crucial progress of AI is in diagnostic systems for oral malignancies. However, AI's newest advancements in various branches of dentistry require further scientific work before being applied to clinical practice. Moreover, the widespread use of AI in clinical dentistry is only achievable when its challenges are appropriately managed.
Affiliation(s)
- Kimia Ghods
  - Student of Dentistry, Membership of Dental Material Research Center, Tehran Medical Sciences, Islamic Azad University, Tehran, Iran
- Arash Azizi
  - Dept. of Oral Medicine, Faculty of Dentistry, Tehran Medical Sciences, Islamic Azad University, Tehran, Iran
- Aryan Jafari
  - Student of Dentistry, Membership of Dental Material Research Center, Tehran
- Kian Ghods
  - Dept. of Mathematics and Industrial Engineering, Polytechnique Montreal, Montreal, Canada
40
Choi Y, Jang H, Baek J. Chest tomosynthesis deblurring using CNN with deconvolution layer for vertebrae segmentation. Med Phys 2023; 50:7714-7730. [PMID: 37401539 DOI: 10.1002/mp.16576]
Abstract
BACKGROUND Limited scan angles cause severe distortions and artifacts in reconstructed tomosynthesis images when the Feldkamp-Davis-Kress (FDK) algorithm is used, which degrades clinical diagnostic performance. These blurring artifacts are especially problematic in chest tomosynthesis images because precise vertebrae segmentation is crucial for various diagnostic analyses, such as early diagnosis, surgical planning, and injury detection. Moreover, because most spinal pathologies are related to vertebral conditions, the development of methods for accurate and objective vertebrae segmentation in medical images is an important and challenging research area. PURPOSE Existing point-spread-function (PSF)-based deblurring methods use the same PSF in all sub-volumes without considering the spatially varying property of tomosynthesis images. This increases the PSF estimation error, further degrading deblurring performance. The proposed method estimates the PSF more accurately by using sub-CNNs that contain a deconvolution layer for each sub-system, which improves the deblurring performance. METHODS To minimize the effect of the spatially varying property, the proposed deblurring network architecture comprises four modules: (1) a block division module, (2) a partial PSF module, (3) a deblurring block module, and (4) an assembling block module. We compared the proposed DL-based method with the FDK algorithm, total-variation iterative reconstruction with GP-BB (TV-IR), 3D U-Net, FBPConvNet, and a two-phase deblurring method. To investigate the deblurring performance of the proposed method, we evaluated its vertebrae segmentation performance by comparing the pixel accuracy (PA), intersection-over-union (IoU), and F-score values of reference images to those of the deblurred images. Pixel-based evaluations of the reference and deblurred images were performed by comparing their root mean squared error (RMSE) and visual information fidelity (VIF) values. In addition, 2D analysis of the deblurred images was performed using the artifact spread function (ASF) and the full width at half maximum (FWHM) of the ASF curve. RESULTS The proposed method was able to recover the original structure significantly, thereby further improving the image quality. It yielded the best deblurring performance in terms of vertebrae segmentation and similarity. The IoU, F-score, and VIF values of the chest tomosynthesis images reconstructed using the proposed SV method were 53.5%, 28.7%, and 63.2% higher, respectively, than those of the images reconstructed using the FDK method, and the RMSE value was 80.3% lower. These quantitative results indicate that the proposed method can effectively restore both the vertebrae and the surrounding soft tissue. CONCLUSIONS We proposed a chest tomosynthesis deblurring technique for vertebrae segmentation that considers the spatially varying property of tomosynthesis systems. Quantitative evaluations indicated that the vertebrae segmentation performance of the proposed method was better than that of existing deblurring methods.
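The ASF-based 2D analysis mentioned above reduces to measuring the FWHM of a sampled curve. A minimal helper (an illustrative sketch, assuming a unimodal ASF; not the authors' code) could be:

```python
import numpy as np

def fwhm(z, asf):
    """Full width at half maximum of a sampled, unimodal ASF curve.

    z:   sample positions (e.g., slice locations), increasing
    asf: artifact-spread-function values at those positions
    Uses linear interpolation on each side of the peak.
    """
    z = np.asarray(z, dtype=float)
    asf = np.asarray(asf, dtype=float)
    k = int(np.argmax(asf))
    half = asf[k] / 2.0
    # np.interp needs increasing x-values: the rising edge already is,
    # and the falling edge is made increasing by reversing it.
    left = np.interp(half, asf[: k + 1], z[: k + 1])
    right = np.interp(half, asf[k:][::-1], z[k:][::-1])
    return right - left
```

For a Gaussian-shaped ASF with standard deviation sigma, this returns approximately 2.355 * sigma; a narrower FWHM indicates less out-of-plane blur.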
Affiliation(s)
- Yunsu Choi
  - School of Integrated Technology, Yonsei University, Incheon, South Korea
- Hanjoo Jang
  - School of Integrated Technology, Yonsei University, Incheon, South Korea
- Jongduk Baek
  - Department of Artificial Intelligence, College of Computing, Yonsei University, Incheon, South Korea
41
Puvanasunthararajah S, Camps SM, Wille ML, Fontanarosa D. Deep learning-based ultrasound transducer induced CT metal artifact reduction using generative adversarial networks for ultrasound-guided cardiac radioablation. Phys Eng Sci Med 2023; 46:1399-1410. [PMID: 37548887 DOI: 10.1007/s13246-023-01307-7]
Abstract
In US-guided cardiac radioablation, a possible workflow includes simultaneous US and planning CT acquisitions, which can result in US transducer-induced metal artifacts on the planning CT scans. To reduce the impact of these artifacts, a metal artifact reduction (MAR) algorithm was developed based on a deep learning generative adversarial network called Cycle-MAR, and compared with iMAR (Siemens), O-MAR (Philips), MDT (ReVision Radiology), and CCS-MAR (Combined Clustered Scan-based MAR). Cycle-MAR was trained with a supervised learning scheme using sets of paired clinical CT scans with and without simulated artifacts. It was then evaluated on CT scans with real artifacts of an anthropomorphic phantom, and on sets of clinical CT scans with simulated artifacts that were not used for Cycle-MAR training. Image quality metrics and HU value-based analysis were used to evaluate the performance of Cycle-MAR against the other algorithms. The proposed Cycle-MAR network effectively reduces the negative impact of the metal artifacts. For example, the calculated HU value improvement percentage for the cardiac structures in the clinical CT scans was 59.58%, 62.22%, and 72.84% after MDT, CCS-MAR, and Cycle-MAR application, respectively. The application of MAR algorithms reduces the impact of US transducer-induced metal artifacts on CT scans, and in comparison to iMAR, O-MAR, MDT, and CCS-MAR, the developed Cycle-MAR network performs better in reducing these artifacts.
Affiliation(s)
- Sathyathas Puvanasunthararajah
  - School of Clinical Sciences, Queensland University of Technology, Brisbane, QLD, Australia
  - Centre for Biomedical Technologies, Queensland University of Technology, Brisbane, QLD, Australia
- Marie-Luise Wille
  - Centre for Biomedical Technologies, Queensland University of Technology, Brisbane, QLD, Australia
  - School of Mechanical, Medical & Process Engineering, Faculty of Engineering, Queensland University of Technology, Brisbane, QLD, Australia
  - ARC ITTC for Multiscale 3D Imaging, Modelling, and Manufacturing, Queensland University of Technology, Brisbane, QLD, Australia
- Davide Fontanarosa
  - School of Clinical Sciences, Queensland University of Technology, Brisbane, QLD, Australia
  - Centre for Biomedical Technologies, Queensland University of Technology, Brisbane, QLD, Australia
42
Shrestha P, LaManna JM, Fahy KF, Kim P, Lee C, Lee JK, Baltic E, Jacobson DL, Hussey DS, Bazylak A. Simultaneous multimaterial operando tomography of electrochemical devices. Sci Adv 2023; 9:eadg8634. [PMID: 37939178 PMCID: PMC10631724 DOI: 10.1126/sciadv.adg8634]
Abstract
The performance of electrochemical energy devices, such as fuel cells and batteries, is dictated by intricate physiochemical processes within. To better understand and rationally engineer these processes, we need robust operando characterization tools that detect and distinguish multiple interacting components/interfaces in high contrast. Here, we uniquely combine dual-modality tomography (simultaneous neutron and x-ray tomography) and advanced image processing (iterative reconstruction and metal artifact reduction) for high-contrast multimaterial imaging, with signal and contrast enhancements of up to 10 and 48 times, respectively, compared to conventional single-modality imaging. Targeted development and application of these methods to electrochemical devices allow us to resolve operando distributions of six interacting fuel cell components (including void space) with the highest reported pairwise contrast for simultaneous yet decoupled spatiotemporal characterization of component morphology and hydration. Such high-contrast tomography ushers in key gold standards for operando electrochemical characterization, with broader applicability to numerous multimaterial systems.
Affiliation(s)
- Pranay Shrestha
  - Bazylak Group, Department of Mechanical & Industrial Engineering, Faculty of Applied Science and Engineering, University of Toronto, Toronto, Ontario, Canada
- Jacob M. LaManna
  - Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD, USA
- Kieran F. Fahy
  - Bazylak Group, Department of Mechanical & Industrial Engineering, Faculty of Applied Science and Engineering, University of Toronto, Toronto, Ontario, Canada
- Pascal Kim
  - Bazylak Group, Department of Mechanical & Industrial Engineering, Faculty of Applied Science and Engineering, University of Toronto, Toronto, Ontario, Canada
- ChungHyuk Lee
  - Bazylak Group, Department of Mechanical & Industrial Engineering, Faculty of Applied Science and Engineering, University of Toronto, Toronto, Ontario, Canada
  - Department of Chemical Engineering, Toronto Metropolitan University, Toronto, Ontario, Canada
- Jason K. Lee
  - Bazylak Group, Department of Mechanical & Industrial Engineering, Faculty of Applied Science and Engineering, University of Toronto, Toronto, Ontario, Canada
- Elias Baltic
  - Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD, USA
- David L. Jacobson
  - Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD, USA
- Daniel S. Hussey
  - Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD, USA
- Aimy Bazylak
  - Bazylak Group, Department of Mechanical & Industrial Engineering, Faculty of Applied Science and Engineering, University of Toronto, Toronto, Ontario, Canada
43
Wang T, Yu H, Wang Z, Chen H, Liu Y, Lu J, Zhang Y. SemiMAR: Semi-Supervised Learning for CT Metal Artifact Reduction. IEEE J Biomed Health Inform 2023; 27:5369-5380. [PMID: 37669208 DOI: 10.1109/jbhi.2023.3312292]
Abstract
Metal artifacts degrade CT image quality. With the success of deep learning (DL) in medical imaging, a number of DL-based supervised methods have been developed for metal artifact reduction (MAR). Nonetheless, fully supervised MAR methods trained on simulated data perform poorly on clinical data because of the domain gap. Although unsupervised approaches avoid this problem to a certain degree, they cannot suppress severe artifacts well in clinical practice. Recently, semi-supervised MAR methods have gained wide attention for their ability to narrow the domain gap and improve MAR performance on clinical data. However, these methods typically require large models, posing challenges for optimization. To address this issue, we propose a novel semi-supervised MAR framework. In our framework, only the artifact-free parts are learned, and the artifacts are inferred by subtracting these clean parts from the metal-corrupted CT images. Our approach leverages a single generator to execute all complex transformations, thereby reducing the model's scale and preventing overlap between the clean parts and the artifacts. To recover more tissue details, we distill knowledge from an advanced dual-domain MAR network into our model in both the image domain and the latent feature space; the latent-space constraint is imposed via contrastive learning. We also evaluate the impact of different generator architectures by investigating several mainstream deep learning-based MAR backbones. Our experiments demonstrate that the proposed method competes favorably with several state-of-the-art semi-supervised MAR techniques both qualitatively and quantitatively.
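The residual formulation described here, learning only the clean content and recovering the artifact by subtraction, can be pictured with a minimal sketch (illustrative numpy code; the toy generator below is a placeholder stand-in, not the paper's network):

```python
import numpy as np

def remove_artifact(corrupted, generator):
    """Residual formulation used by such semi-supervised MAR schemes:
    the network predicts only the artifact-free content, and the
    artifact map is obtained by subtraction, never predicted directly."""
    clean = generator(corrupted)      # G(x): predicted clean image
    artifact = corrupted - clean      # residual = metal artifact estimate
    return clean, artifact

# Toy stand-in generator: pretend clipping to [0, 1] recovers the clean
# part (purely illustrative, NOT the paper's model).
def toy_generator(x):
    return np.clip(x, 0.0, 1.0)

img = np.array([0.2, 0.5, 1.7, 0.4])   # 1.7 mimics a bright metal streak
clean, art = remove_artifact(img, toy_generator)
```

By construction, clean + artifact always reproduces the corrupted input exactly, which is the property that keeps the clean part and the artifact from overlapping.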
44
Lyu T, Wu Z, Ma G, Jiang C, Zhong X, Xi Y, Chen Y, Zhu W. PDS-MAR: a fine-grained projection-domain segmentation-based metal artifact reduction method for intraoperative CBCT images with guidewires. Phys Med Biol 2023; 68:215007. [PMID: 37802062 DOI: 10.1088/1361-6560/ad00fc]
Abstract
Objective. Since the invention of modern computed tomography (CT) systems, metal artifacts have been a persistent problem. Due to increased scattering, amplified noise, and limited-angle projection data collection, metal artifacts are harder to suppress in cone-beam CT, limiting its use in human- and robot-assisted spine surgeries where metallic guidewires and screws are commonly used. Approach. To solve this problem, we present a fine-grained projection-domain segmentation-based metal artifact reduction (MAR) method termed PDS-MAR, in which metal traces are augmented and segmented in the projection domain before being inpainted using triangular interpolation. In addition, a metal reconstruction phase is proposed to restore metal areas in the image domain. Main results. The proposed method is tested on both digital phantom data and real scanned cone-beam computed tomography (CBCT) data. It achieves much-improved quantitative results in both metal segmentation and artifact reduction in our phantom study. The results on real scanned data also show the superiority of this method. Significance. The concept of projection-domain metal segmentation would advance MAR techniques in CBCT and has the potential to push forward the use of intraoperative CBCT in human- and robot-assisted minimally invasive spine surgeries.
Affiliation(s)
- Tianling Lyu: Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou, People's Republic of China
- Zhan Wu: Laboratory of Imaging Science and Technology, Southeast University, Nanjing, People's Republic of China
- Gege Ma: Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou, People's Republic of China
- Chen Jiang: Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou, People's Republic of China
- Xinyun Zhong: Laboratory of Imaging Science and Technology, Southeast University, Nanjing, People's Republic of China
- Yan Xi: First-Imaging Tech., Shanghai, People's Republic of China
- Yang Chen: Laboratory of Imaging Science and Technology, Southeast University, Nanjing, People's Republic of China; Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, Southeast University, Nanjing, People's Republic of China
- Wentao Zhu: Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou, People's Republic of China
45
Zhang J, Sun K, Yang J, Hu Y, Gu Y, Cui Z, Zong X, Gao F, Shen D. A generalized dual-domain generative framework with hierarchical consistency for medical image reconstruction and synthesis. Communications Engineering 2023; 2:72. [PMCID: PMC10956005 DOI: 10.1038/s44172-023-00121-z]
Abstract
Medical image reconstruction and synthesis are critical for imaging quality, disease diagnosis and treatment. Most existing generative models ignore the fact that medical imaging usually occurs in the acquisition domain, which is different from, but associated with, the image domain. Such methods exploit either single-domain or dual-domain information and suffer from inefficient information coupling across domains. Moreover, these models are usually designed for specific tasks and are not general enough. Here we present a generalized dual-domain generative framework that facilitates the connections within and across domains through elaborately designed hierarchical consistency constraints. A multi-stage learning strategy is proposed to construct the hierarchical constraints effectively and stably. We conducted experiments on representative generative tasks including low-dose PET/CT reconstruction, CT metal artifact reduction, fast MRI reconstruction, and PET/CT synthesis. All these tasks share the same framework and achieve better performance, which validates the effectiveness of our framework. This technology is expected to be applied in clinical imaging to increase diagnostic efficiency and accuracy. A framework applicable across imaging modalities could improve the efficiency of medical image reconstruction, but such generality is hindered by inefficient information exchange between the data-acquisition and image domains. Here, Jiadong Zhang and coworkers report a dual-domain generative framework that explores the underlying patterns across domains and apply their method to routine imaging modalities (computed tomography, positron emission tomography, magnetic resonance imaging) under one framework.
Affiliation(s)
- Jiadong Zhang: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Kaicong Sun: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Junwei Yang: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China; Department of Computer Science and Technology, University of Cambridge, Cambridge, CB2 1TN, UK
- Yan Hu: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China; School of Computer Science and Engineering, The University of New South Wales, Sydney, NSW 2052, Australia
- Yuning Gu: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Zhiming Cui: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Xiaopeng Zong: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Fei Gao: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Dinggang Shen: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., 200230 Shanghai, China; Shanghai Clinical Research and Trial Center, 200052 Shanghai, China
46
Li G, Ji L, You C, Gao S, Zhou L, Bai K, Luo S, Gu N. MARGANVAC: metal artifact reduction method based on generative adversarial network with variable constraints. Phys Med Biol 2023; 68:205005. [PMID: 37696272 DOI: 10.1088/1361-6560/acf8ac]
Abstract
Objective. Metal artifact reduction (MAR) has been a key issue in CT imaging. Recently, MAR methods based on deep learning have achieved promising results. However, two prominent challenges arise when deploying deep learning-based MAR in real-world clinical scenarios. One is the lack of paired training data in real applications, which limits the practicality of supervised methods. The other is that image-domain methods, though applicable to a wider range of scenarios, fall short in performance, while better-performing end-to-end approaches are restricted to fan-beam CT by their large memory consumption. Approach. We propose a novel image-domain MAR method based on a generative adversarial network with variable constraints (MARGANVAC) to improve MAR performance. The proposed variable constraint is a time-varying cost function that relaxes the fidelity constraint at the beginning of training and gradually strengthens it as training progresses. To better deploy our image-domain supervised method in practical scenarios, we develop a transfer method that mimics real metal artifacts by first extracting real metal traces and then adding them to artifact-free images to generate paired training data. Main results. The effectiveness of the proposed method is validated in simulated fan-beam experiments and real cone-beam experiments. All quantitative and qualitative results demonstrate that the proposed method achieves superior performance compared with the competing methods. Significance. The MARGANVAC model is an image-domain model that can be conveniently applied to various scenarios such as fan-beam and cone-beam CT, while its performance is on par with cutting-edge dual-domain MAR approaches. In addition, the proposed metal artifact transfer method can easily generate paired data with real artifact features, which can be better used for model training in real scenarios.
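The variable constraint, a fidelity term that starts weak and is gradually strengthened during training, can be pictured as a ramping loss weight. A minimal sketch (the linear schedule and the weight bounds are assumptions for illustration, not the paper's exact cost function):

```python
def fidelity_weight(epoch, total_epochs, w_min=0.1, w_max=1.0):
    """Illustrative time-varying constraint: the fidelity weight starts
    small (letting the adversarial term dominate early) and grows as
    training progresses. Linear ramp and bounds are assumed values."""
    t = epoch / max(total_epochs - 1, 1)       # training progress in [0, 1]
    return w_min + (w_max - w_min) * t

def total_loss(adv_loss, fid_loss, epoch, total_epochs):
    # L = L_adv + lambda(t) * L_fidelity, with lambda(t) increasing in t
    return adv_loss + fidelity_weight(epoch, total_epochs) * fid_loss
```

With this schedule, early epochs are only weakly tied to the reference image, and the constraint reaches full strength in the final epoch.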
Affiliation(s)
- Guang Li: Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Longyin Ji: Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Chenyu You: Image Processing and Analysis Group (IPAG), Yale University, New Haven 06510, United States of America
- Shuai Gao: Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Langrui Zhou: Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Keshu Bai: Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Shouhua Luo: Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Ning Gu: Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
47
Amirian M, Montoya-Zegarra JA, Herzig I, Eggenberger Hotz P, Lichtensteiger L, Morf M, Züst A, Paysan P, Peterlik I, Scheib S, Füchslin RM, Stadelmann T, Schilling FP. Mitigation of motion-induced artifacts in cone beam computed tomography using deep convolutional neural networks. Med Phys 2023; 50:6228-6242. [PMID: 36995003 DOI: 10.1002/mp.16405]
Abstract
BACKGROUND Cone beam computed tomography (CBCT) is often employed on radiation therapy treatment devices (linear accelerators) used in image-guided radiation therapy (IGRT). For each treatment session, it is necessary to obtain the image of the day in order to accurately position the patient and to enable adaptive treatment capabilities including auto-segmentation and dose calculation. Reconstructed CBCT images often suffer from artifacts, in particular those induced by patient motion. Deep-learning based approaches promise ways to mitigate such artifacts. PURPOSE We propose a novel deep-learning based approach with the goal to reduce motion induced artifacts in CBCT images and improve image quality. It is based on supervised learning and includes neural network architectures employed as pre- and/or post-processing steps during CBCT reconstruction. METHODS Our approach is based on deep convolutional neural networks which complement the standard CBCT reconstruction, which is performed either with the analytical Feldkamp-Davis-Kress (FDK) method, or with an iterative algebraic reconstruction technique (SART-TV). The neural networks, which are based on refined U-net architectures, are trained end-to-end in a supervised learning setup. Labeled training data are obtained by means of a motion simulation, which uses the two extreme phases of 4D CT scans, their deformation vector fields, as well as time-dependent amplitude signals as input. The trained networks are validated against ground truth using quantitative metrics, as well as by using real patient CBCT scans for a qualitative evaluation by clinical experts. 
RESULTS The presented novel approach is able to generalize to unseen data and yields significant reductions in motion induced artifacts as well as improvements in image quality compared with existing state-of-the-art CBCT reconstruction algorithms (up to +6.3 dB and +0.19 improvements in peak signal-to-noise ratio, PSNR, and structural similarity index measure, SSIM, respectively), as evidenced by validation with an unseen test dataset, and confirmed by a clinical evaluation on real patient scans (up to 74% preference for motion artifact reduction over standard reconstruction). CONCLUSIONS For the first time, it is demonstrated, also by means of clinical evaluation, that inserting deep neural networks as pre- and post-processing plugins in the existing 3D CBCT reconstruction and trained end-to-end yield significant improvements in image quality and reduction of motion artifacts.
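The labeled-data generation described above, scaling the deformation between two extreme 4D-CT phases by a time-dependent amplitude signal, can be pictured with a minimal 2D sketch, assuming a backward-warping convention and nearest-neighbor sampling (the authors' actual simulation pipeline is more elaborate):

```python
import numpy as np

def warp_by_amplitude(volume, dvf, amplitude):
    """Produce an intermediate motion phase by scaling the deformation
    vector field (DVF) between two extreme phases with an amplitude in
    [0, 1], then backward-warping with nearest-neighbor sampling.
    2D toy version for illustration only."""
    h, w = volume.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Backward warp: each output pixel samples the source image at its
    # position minus the amplitude-scaled displacement.
    ys = np.clip(np.rint(yy - amplitude * dvf[0]).astype(int), 0, h - 1)
    xs = np.clip(np.rint(xx - amplitude * dvf[1]).astype(int), 0, w - 1)
    return volume[ys, xs]

phase0 = np.arange(9.0).reshape(3, 3)                 # one extreme phase
dvf = np.stack([np.ones((3, 3)), np.zeros((3, 3))])   # uniform 1-px shift in y
moved = warp_by_amplitude(phase0, dvf, amplitude=1.0)
```

Sweeping the amplitude over a breathing trace yields a sequence of simulated phases from which motion-corrupted projections, and hence labeled training pairs, can be generated.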
Affiliation(s)
- Mohammadreza Amirian: Centre for Artificial Intelligence CAI, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland; Institute of Neural Information Processing, Ulm University, Ulm, Germany
- Javier A Montoya-Zegarra: Centre for Artificial Intelligence CAI, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Ivo Herzig: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Peter Eggenberger Hotz: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Lukas Lichtensteiger: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Marco Morf: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Alexander Züst: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Pascal Paysan: Varian Medical Systems Imaging Laboratory GmbH, Baden, Switzerland
- Igor Peterlik: Varian Medical Systems Imaging Laboratory GmbH, Baden, Switzerland
- Stefan Scheib: Varian Medical Systems Imaging Laboratory GmbH, Baden, Switzerland
- Rudolf Marcel Füchslin: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland; European Centre for Living Technology, Venice, Italy
- Thilo Stadelmann: Centre for Artificial Intelligence CAI, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland; European Centre for Living Technology, Venice, Italy
- Frank-Peter Schilling: Centre for Artificial Intelligence CAI, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
48
Tang H, Jiang S, Lin Y, Li Y, Bao X. An improved dual-domain network for metal artifact reduction in CT images using aggregated contextual transformations. Phys Med Biol 2023; 68:175021. [PMID: 37541223 DOI: 10.1088/1361-6560/aced78]
Abstract
Objective. Metal artifact reduction (MAR) remains a challenging task due to the difficulty of removing artifacts while preserving anatomical details of the tissue. Although current dual-domain networks have shown promising performance in MAR, they rely heavily on the image domain, whose output can be over-smoothed and lose important information in the metal-affected area. To address this problem, we propose an improved dual-domain network framework. Approach. We enhance sinogram completion performance by utilizing an aggregated contextual transformations network in the sinogram domain. Furthermore, we utilize a prior-projection-based linearized correction method to obtain images with beam-hardening artifacts removed, which are incorporated into the input of the image post-processing network to assist in training the image-domain network. Finally, we train the sinogram-domain network and the image-domain network separately to their respective convergences. Main results. In experiments conducted on a simulated dataset, our method achieves the best average RMSE of 25.1, SSIM of 0.973, and PSNR of 42.1. Significance. The proposed method is capable of preserving tissue structures near metallic objects while eliminating metal artifacts from the reconstructed images. Related code will be released at https://github.com/Corinna-China/AOTDudoNet.
Affiliation(s)
- Hui Tang: School of Computer Science and Engineering, Laboratory of Image Science and Technology, Southeast University, Nanjing, 210000, Jiangsu, People's Republic of China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, People's Republic of China
- Sudong Jiang: School of Software Engineering, Southeast University, Nanjing, 210000, Jiangsu, People's Republic of China
- Yubing Lin: School of Computer Science and Engineering, Laboratory of Image Science and Technology, Southeast University, Nanjing, 210000, Jiangsu, People's Republic of China
- Yu Li: School of Computer Science and Engineering, Laboratory of Image Science and Technology, Southeast University, Nanjing, 210000, Jiangsu, People's Republic of China
- Xudong Bao: School of Computer Science and Engineering, Laboratory of Image Science and Technology, Southeast University, Nanjing, 210000, Jiangsu, People's Republic of China
49
Tang H, Lin YB, Jiang SD, Li Y, Li T, Bao XD. A new dental CBCT metal artifact reduction method based on a dual-domain processing framework. Phys Med Biol 2023; 68:175016. [PMID: 37524084 DOI: 10.1088/1361-6560/acec29]
Abstract
Objective. Cone beam computed tomography (CBCT) has been widely used in the clinical treatment of dental diseases. However, patients often have metallic implants in the mouth, which lead to severe metal artifacts in the reconstructed images. To reduce metal artifacts in dental CBCT images, which have a larger amount of data and a limited field of view compared to computed tomography images, a new dental CBCT metal artifact reduction method based on a projection correction and a convolutional neural network (CNN) based image post-processing model is proposed in this paper. Approach. The proposed method consists of four stages: (1) volume reconstruction and metal segmentation in the image domain, using the forward projection to get the metal masks in the projection domain; (2) linear interpolation in the projection domain and reconstruction to build a linear interpolation (LI) corrected volume; (3) prior-based beam-hardening correction in the projection domain, taking the LI-corrected volume as the prior; and (4) combination of the projection-corrected volume and the LI volume slice-by-slice in the image domain by two concatenated U-Net based models (CNN1 and CNN2). Simulated and clinical dental CBCT cases are used to evaluate the proposed method. The normalized root mean square difference (NRMSD) and the structural similarity index (SSIM) are used for quantitative evaluation. Main results. The proposed method outperforms the frequency-domain fusion method (FS-MAR) and a state-of-the-art CNN-based method on the simulated dataset, yielding the best NRMSD of 4.0196 and SSIM of 0.9924. Visual results on both simulated and clinical images also illustrate that the proposed method can effectively reduce metal artifacts. Significance. This study demonstrated that the proposed dual-domain processing framework is suitable for metal artifact reduction in dental CBCT images.
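The linear interpolation step of stage (2) can be sketched for a 2D sinogram as follows: within each projection row, samples inside the metal trace are replaced by linearly interpolating the nearest uncorrupted detector samples on either side. This is a simplified illustration of LI correction under assumed array shapes, not the authors' implementation:

```python
import numpy as np

def inpaint_metal_trace(sinogram, metal_mask):
    """Replace metal-trace samples in each projection row by linear
    interpolation from the neighboring uncorrupted samples.
    sinogram: (rows, detectors); metal_mask: boolean, same shape."""
    corrected = sinogram.astype(float).copy()
    cols = np.arange(sinogram.shape[1])
    for r in range(sinogram.shape[0]):
        bad = metal_mask[r]
        if bad.any() and not bad.all():   # skip rows with no usable neighbors
            corrected[r, bad] = np.interp(cols[bad], cols[~bad],
                                          sinogram[r, ~bad])
    return corrected

sino = np.array([[1.0, 2.0, 9.0, 4.0, 5.0]])   # 9.0: metal-corrupted sample
mask = np.array([[False, False, True, False, False]])
fixed = inpaint_metal_trace(sino, mask)        # 9.0 -> midpoint of 2.0 and 4.0
```

Reconstructing from the inpainted sinogram then yields the LI-corrected volume used as the prior in the later stages.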
Affiliation(s)
- Hui Tang: Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, Nanjing, People's Republic of China
- Yu Bing Lin: Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Su Dong Jiang: School of Software Engineering, Southeast University, Nanjing, People's Republic of China
- Yu Li: Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Tian Li: Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Xu Dong Bao: Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
50
Debs P, Fayad LM. The promise and limitations of artificial intelligence in musculoskeletal imaging. Frontiers in Radiology 2023; 3:1242902. [PMID: 37609456 PMCID: PMC10440743 DOI: 10.3389/fradi.2023.1242902]
Abstract
With the recent developments in deep learning and the rapid growth of convolutional neural networks, artificial intelligence has shown promise as a tool that can transform several aspects of the musculoskeletal imaging cycle. Its applications can involve both interpretive and non-interpretive tasks such as the ordering of imaging, scheduling, protocoling, image acquisition, report generation, and communication of findings. However, artificial intelligence tools still face a number of challenges that can hinder effective implementation into clinical practice. The purpose of this review is to explore both the successes and limitations of artificial intelligence applications throughout the musculoskeletal imaging cycle and to highlight how these applications can help enhance the service radiologists deliver to their patients, resulting in increased efficiency as well as improved patient and provider satisfaction.
Affiliation(s)
- Patrick Debs: The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, Baltimore, MD, United States
- Laura M. Fayad: The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, Baltimore, MD, United States; Department of Orthopaedic Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, United States; Department of Oncology, Johns Hopkins University School of Medicine, Baltimore, MD, United States