1
Li Y, Ma C, Li Z, Wang Z, Han J, Shan H, Liu J. Semi-supervised spatial-frequency transformer for metal artifact reduction in maxillofacial CT and evaluation with intraoral scan. Eur J Radiol 2025; 187:112087. [PMID: 40273758] [DOI: 10.1016/j.ejrad.2025.112087]
Abstract
PURPOSE To develop a semi-supervised domain adaptation technique for metal artifact reduction (MAR) with a spatial-frequency transformer (SFTrans) model (Semi-SFTrans), and to quantitatively compare its performance with supervised models (Sup-SFTrans and ResUNet) and the traditional linear interpolation (LI) MAR method in oral and maxillofacial CT. METHODS Supervised models, including Sup-SFTrans and a state-of-the-art model termed ResUNet, were trained with paired simulated CT images, while the semi-supervised model, Semi-SFTrans, was trained with both paired simulated and unpaired clinical CT images. For evaluation on the simulated data, we calculated Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) on the images corrected by four methods: LI, ResUNet, Sup-SFTrans, and Semi-SFTrans. For evaluation on the clinical data, we collected twenty-two clinical cases with real metal artifacts and the corresponding intraoral scan data. Three radiologists visually assessed the severity of artifacts using Likert scales on the original, Sup-SFTrans-corrected, and Semi-SFTrans-corrected images. Quantitative MAR evaluation was conducted by measuring mean Hounsfield Unit (HU) values, standard deviations, and Signal-to-Noise Ratios (SNRs) across Regions of Interest (ROIs) including the tongue, bilateral buccal regions, lips, and bilateral masseter muscles, using paired t-tests and Wilcoxon signed-rank tests. Further, teeth integrity in the corrected images was assessed by comparing teeth segmentation results from the corrected images against the ground-truth segmentation derived from registered intraoral scan data, using Dice Score and Hausdorff Distance. RESULTS Sup-SFTrans outperformed LI, ResUNet, and Semi-SFTrans on the simulated dataset.
Visual assessments from the radiologists yielded average scores of 2.02 ± 0.91 for original CT, 4.46 ± 0.51 for Semi-SFTrans CT, and 3.64 ± 0.90 for Sup-SFTrans CT, with intraclass correlation coefficients (ICCs) > 0.8 for all groups and p < 0.001 between groups. On soft tissue, both Semi-SFTrans and Sup-SFTrans significantly reduced metal artifacts in the tongue (p < 0.001), lips, bilateral buccal regions, and masseter muscle areas (p < 0.05). Semi-SFTrans achieved greater metal artifact reduction than Sup-SFTrans in all ROIs (p < 0.001). SNR results indicated significant differences between Semi-SFTrans and Sup-SFTrans in the tongue (p = 0.0391), bilateral buccal regions (p = 0.0067), lips (p = 0.0208), and bilateral masseter muscle areas (p = 0.0031). Notably, Semi-SFTrans preserved teeth integrity better than Sup-SFTrans (Dice Score: p < 0.001; Hausdorff Distance: p = 0.0022). CONCLUSION The semi-supervised MAR model, Semi-SFTrans, demonstrated superior metal artifact reduction over its supervised counterparts in real dental CT images.
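The evaluation above rests on standard image-quality and segmentation-overlap metrics (PSNR, SSIM, Dice Score, Hausdorff Distance). A minimal sketch of how two of them, PSNR and Dice Score, are computed on toy flat lists (generic metric code, not the authors' implementation):

```python
import math

def psnr(ref, test, data_range=255.0):
    """Peak Signal-to-Noise Ratio between two equal-size images (flat lists)."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(data_range ** 2 / mse)

def dice(mask_a, mask_b):
    """Dice Score (overlap) between two binary masks given as flat 0/1 lists."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    return 2.0 * inter / (sum(mask_a) + sum(mask_b))

ref  = [10, 20, 30, 40]
test = [12, 18, 30, 44]
print(round(psnr(ref, test), 2))          # higher = closer to the reference
print(dice([1, 1, 0, 0], [1, 0, 0, 0]))  # 2*1 / (2+1), about 0.667
```

On real CT, the same formulas are applied to whole slices or volumes, and SSIM and Hausdorff Distance follow analogously from library implementations.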
Affiliation(s)
- Yuanlin Li
- Department of Oral Maxillofacial Head and Neck Oncology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, College of Stomatology, Shanghai Jiao Tong University, National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai 200011, China
- Chenglong Ma
- Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China
- Zilong Li
- Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China
- Zhen Wang
- Department of Oral Maxillofacial Head and Neck Oncology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, College of Stomatology, Shanghai Jiao Tong University, National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai 200011, China
- Jing Han
- Department of Oral Maxillofacial Head and Neck Oncology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, College of Stomatology, Shanghai Jiao Tong University, National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai 200011, China
- Hongming Shan
- Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China
- Jiannan Liu
- Department of Oral Maxillofacial Head and Neck Oncology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, College of Stomatology, Shanghai Jiao Tong University, National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai 200011, China
2
Zou W, Shi G, Lei S, Li G, Zhou G, Jing Y, He J, Tang Z, An Y, Tian J. U-N2C: A Dual Memory-Guided Disentanglement Framework for Unsupervised System Matrix Denoising in Magnetic Particle Imaging. IEEE Trans Image Process 2025; 34:2867-2882. [PMID: 40315090] [DOI: 10.1109/tip.2025.3564845]
Abstract
Magnetic Particle Imaging (MPI), an emerging functional imaging modality, exhibits outstanding spatial-temporal resolution and sensitivity. The general MPI reconstruction pipeline involves calibrating a System Matrix and then solving an ill-posed inverse problem using the measured particle signals. However, noise is inevitably introduced during System Matrix calibration, degrading detailed information in the reconstructed images. Frequency selection methods based on signal-to-noise ratio are therefore commonly adopted, but they discard usable high-frequency components and thus damage spatial resolution. To address this problem, we propose U-N2C, an unsupervised memory-guided denoising framework trained on unpaired noisy-clean System Matrix components. Specifically, we design a Pattern Memory Block to memorize System Matrix patterns, directed by a position-aware frequency index embedding, and a Noise Memory Block to implicitly approximate noise distributions. Guided by these dual memory blocks, we disentangle the noise and content of the System Matrix in the latent space. Furthermore, because our method can model complex noise, it can generate pseudo but high-quality noisy-clean pairs that further enhance denoising capability. Experiments on both synthetic and real noise demonstrate that U-N2C achieves cutting-edge performance compared to other methods, and extensive qualitative and quantitative ablation studies verify its effectiveness. Our code is available at U-N2C.
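The SNR-based frequency selection that U-N2C aims to improve upon can be sketched as a simple threshold filter over per-frequency SNR estimates; the values below are hypothetical, purely for illustration:

```python
def select_frequencies(snr_per_freq, threshold=2.0):
    """Keep only System Matrix frequency components whose SNR exceeds a threshold.

    snr_per_freq: dict mapping frequency index -> estimated SNR.
    Returns the sorted list of retained frequency indices.
    """
    return sorted(k for k, snr in snr_per_freq.items() if snr > threshold)

# Hypothetical per-frequency SNR estimates; low-SNR (often high-frequency)
# components are discarded, which is what costs spatial resolution.
snr = {0: 9.1, 1: 5.4, 2: 1.2, 3: 0.4, 4: 2.5}
print(select_frequencies(snr))  # [0, 1, 4]
```

Denoising the System Matrix instead of thresholding lets more of these components survive into the reconstruction.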
3
Lee J, Kim S, Ahn J, Wang AS, Baek J. X-ray CT metal artifact reduction using neural attenuation field prior. Med Phys 2025. [PMID: 40305006] [DOI: 10.1002/mp.17859]
Abstract
BACKGROUND The presence of metal objects in computed tomography (CT) imaging introduces severe artifacts that degrade image quality and hinder accurate diagnosis. While several deep learning-based metal artifact reduction (MAR) methods have been proposed, they often exhibit poor performance on unseen data and require large datasets to train neural networks. PURPOSE In this work, we propose a sinogram inpainting method for metal artifact reduction that leverages a neural attenuation field (NAF) as a prior. This new method, dubbed NAFMAR, operates in a self-supervised manner by optimizing a model-based neural field, thus eliminating the need for large training datasets. METHODS NAF is optimized to generate prior images, which are then used to inpaint metal traces in the original sinogram. To address the corruption of x-ray projections caused by metal objects, a 3D forward projection of the original corrupted image is performed to identify metal traces. NAF is then optimized using a metal-trace-masked ray sampling strategy that selectively uses uncorrupted rays to supervise the network. Moreover, a metal-aware loss function is proposed to prioritize metal-associated regions during optimization, encouraging the network to learn more informed representations of anatomical features. After optimization, the NAF images are rendered to generate NAF prior images, which serve as priors to correct the original projections through interpolation. Experiments were conducted to compare NAFMAR with other prior-based inpainting MAR methods. RESULTS The proposed method provides an accurate prior without requiring extensive datasets. Images corrected using NAFMAR showed sharp features and preserved anatomical structures. Our comprehensive evaluation, involving simulated dental CT and clinical pelvic CT images, demonstrated the effectiveness of the NAF prior compared to other priors, including linear interpolation and data-driven convolutional neural networks (CNNs). NAFMAR outperformed all compared baselines in terms of structural similarity index measure (SSIM), and its peak signal-to-noise ratio (PSNR) was comparable to that of the dual-domain CNN method. CONCLUSIONS NAFMAR presents an effective, high-fidelity solution for metal artifact reduction in 3D tomographic imaging without the need for large datasets.
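The linear interpolation prior that NAFMAR is compared against fills each metal trace in a projection row by interpolating between the nearest uncorrupted detector readings. A minimal one-row sketch of that classic LI-MAR idea (an illustration, not the paper's implementation):

```python
def li_inpaint_row(row, metal_mask):
    """Linearly interpolate across metal traces in one sinogram row.

    row: list of detector readings; metal_mask: list of bools, True where the
    ray passed through metal. Assumes traces do not touch the row boundaries.
    """
    out = list(row)
    i, n = 0, len(row)
    while i < n:
        if metal_mask[i]:
            start = i
            while i < n and metal_mask[i]:
                i += 1
            left, right = out[start - 1], out[i]  # nearest clean neighbours
            gap = i - start + 1
            for k in range(start, i):
                w = (k - start + 1) / gap
                out[k] = (1 - w) * left + w * right  # linear blend
        else:
            i += 1
    return out

row  = [1.0, 2.0, 9.9, 9.9, 5.0]   # two corrupted samples in the middle
mask = [False, False, True, True, False]
print([round(v, 6) for v in li_inpaint_row(row, mask)])
```

NAFMAR replaces this purely geometric interpolation with projections rendered from the optimized neural attenuation field, which supplies anatomy-aware values for the masked region.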
Affiliation(s)
- Jooho Lee
- Department of Artificial Intelligence, Yonsei University, Seoul, Republic of Korea
- Seongjun Kim
- School of Integrated Technology, Yonsei University, Seoul, Republic of Korea
- Junhyun Ahn
- School of Integrated Technology, Yonsei University, Seoul, Republic of Korea
- Adam S Wang
- Department of Radiology, Stanford University, California, USA
- Jongduk Baek
- Department of Artificial Intelligence, Yonsei University, Seoul, Republic of Korea
4
Zhong L, Xiao R, Shu H, Zheng K, Li X, Wu Y, Ma J, Feng Q, Yang W. NCCT-to-CECT synthesis with contrast-enhanced knowledge and anatomical perception for multi-organ segmentation in non-contrast CT images. Med Image Anal 2025; 100:103397. [PMID: 39612807] [DOI: 10.1016/j.media.2024.103397]
Abstract
Contrast-enhanced computed tomography (CECT) is commonly used for delineating organs-at-risk (OARs) in radiation therapy planning, and the delineated OARs must then be transferred from CECT to non-contrast CT (NCCT) images for dose calculation. However, the iodinated contrast agents (CA) used in CECT pose risks of adverse side effects, and the spatial misalignment between NCCT and CECT images causes dose calculation errors. A promising solution is synthesizing CECT images from NCCT scans, which can improve the visibility of organs and abnormalities for more effective multi-organ segmentation in NCCT images. However, existing methods neglect the difference between tissues induced by CA and lack the ability to synthesize the details of organ edges and blood vessels. To address these issues, we propose a contrast-enhanced knowledge and anatomical perception network (CKAP-Net) for NCCT-to-CECT synthesis. CKAP-Net leverages a contrast-enhanced knowledge learning network to capture both similarities and dissimilarities in domain characteristics attributable to CA. Specifically, a CA-based perceptual loss function is introduced to enhance the synthesis of CA details. Furthermore, we design a multi-scale anatomical perception transformer that utilizes multi-scale anatomical information from NCCT images, enabling the precise synthesis of tissue details. CKAP-Net is evaluated on a multi-center abdominal NCCT-CECT dataset, a head and neck NCCT-CECT dataset, and an NCMRI-CEMRI dataset. It achieves an MAE of 25.96 ± 2.64, an SSIM of 0.855 ± 0.017, and a PSNR of 32.60 ± 0.02 for CECT synthesis, and a DSC of 81.21 ± 4.44 for segmentation on the internal dataset. Extensive experiments demonstrate that CKAP-Net outperforms state-of-the-art CA synthesis methods and generalizes better across datasets.
Affiliation(s)
- Liming Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Guangzhou, 510515, China
- Ruolin Xiao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Guangzhou, 510515, China
- Hai Shu
- Department of Biostatistics, School of Global Public Health, New York University, NY, USA
- Kaiyi Zheng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Guangzhou, 510515, China
- Xinming Li
- Department of Radiology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Yuankui Wu
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Jianhua Ma
- School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Guangzhou, 510515, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Guangzhou, 510515, China
5
Amadita K, Gray F, Gee E, Ekpo E, Jimenez Y. CT metal artefact reduction for hip and shoulder implants using novel algorithms and machine learning: A systematic review with pairwise and network meta-analyses. Radiography (Lond) 2025; 31:36-52. [PMID: 39509906] [DOI: 10.1016/j.radi.2024.10.009]
Abstract
INTRODUCTION Many tools have been developed to reduce metal artefacts in computed tomography (CT) images resulting from metallic prostheses; however, their relative effectiveness in preserving image quality is poorly understood. This paper reviews the literature on novel metal artefact reduction (MAR) methods targeting large metal artefacts in fan-beam CT to examine their effectiveness in reducing metal artefacts and their effect on image quality. METHODS The PRISMA checklist was used to search for articles in five electronic databases (MEDLINE, Scopus, Web of Science, IEEE, EMBASE). Studies that assessed the effectiveness of recently developed MAR methods on fan-beam CT images of hip and shoulder implants were reviewed. Study quality was assessed using the National Institutes of Health (NIH) tool. Meta-analyses were conducted in R, and results that could not be meta-analysed were synthesised narratively. RESULTS Thirty-six studies were reviewed. Of these, 20 proposed statistical algorithms and 16 used machine learning (ML), with 19 novel comparators. Network meta-analysis of 19 studies showed that Recurrent Neural Network MAR (RNN-MAR) is more effective in reducing noise (LogOR 20.7; 95 % CI 12.6 to 28.9) without compromising image quality (LogOR 4.4; 95 % CI -13.8 to 22.5). The network meta-analysis and narrative synthesis showed that novel MAR methods reduce noise more effectively than baseline algorithms, with five of 23 ML methods significantly more effective than Filtered Back Projection (FBP) (p < 0.05). Computation time varied, but ML methods were faster than statistical algorithms. CONCLUSION ML tools are more effective in reducing metal artefacts without compromising image quality and are computationally faster than statistical algorithms. Overall, novel MAR methods were also more effective in reducing noise than the baseline reconstructions.
IMPLICATIONS FOR PRACTICE Implementation research is needed to establish the clinical suitability of ML MAR in practice.
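The pooled effects above are reported as log odds ratios (LogOR) with 95 % confidence intervals. For a single 2x2 table, the standard Woolf computation looks like the following sketch (the counts are made up for illustration, not data from this review):

```python
import math

def log_odds_ratio_ci(a, b, c, d, z=1.96):
    """Log odds ratio and z-based CI for a 2x2 table (Woolf's method).

    a, b: events / non-events in the treatment arm;
    c, d: events / non-events in the control arm.
    """
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log OR
    return log_or, (log_or - z * se, log_or + z * se)

# Hypothetical counts: 30/10 in one arm vs 15/25 in the other
log_or, (lo, hi) = log_odds_ratio_ci(30, 10, 15, 25)
print(round(log_or, 3), round(lo, 3), round(hi, 3))
```

A network meta-analysis then combines such study-level effects across the whole graph of pairwise comparisons rather than one table at a time.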
Affiliation(s)
- K Amadita
- Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, NSW 2006, Australia.
- F Gray
- Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, NSW 2006, Australia.
- E Gee
- Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, NSW 2006, Australia.
- E Ekpo
- Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, NSW 2006, Australia.
- Y Jimenez
- Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, NSW 2006, Australia.
6
Deng S, Chen Y, Huang W, Zhang R, Xiong Z. Unsupervised Domain Adaptation for EM Image Denoising With Invertible Networks. IEEE Trans Med Imaging 2025; 44:92-105. [PMID: 39028599] [DOI: 10.1109/tmi.2024.3431192]
Abstract
Electron microscopy (EM) image denoising is critical for visualization and subsequent analysis. Despite the remarkable achievements of deep learning-based non-blind denoising methods, their performance drops significantly when domain shifts exist between the training and testing data. To address this issue, unpaired blind denoising methods have been proposed. However, these methods heavily rely on image-to-image translation and neglect the inherent characteristics of EM images, limiting their overall denoising performance. In this paper, we propose the first unsupervised domain adaptive EM image denoising method, which is grounded in the observation that EM images from similar samples share common content characteristics. Specifically, we first disentangle the content representations and the noise components from noisy images and establish a shared domain-agnostic content space via domain alignment to bridge the synthetic images (source domain) and the real images (target domain). To ensure precise domain alignment, we further incorporate domain regularization by enforcing that: the pseudo-noisy images, reconstructed using both content representations and noise components, accurately capture the characteristics of the noisy images from which the noise components originate, all while maintaining semantic consistency with the noisy images from which the content representations originate. To guarantee lossless representation decomposition and image reconstruction, we introduce disentanglement-reconstruction invertible networks. Finally, the reconstructed pseudo-noisy images, paired with their corresponding clean counterparts, serve as valuable training data for the denoising network. Extensive experiments on synthetic and real EM datasets demonstrate the superiority of our method in terms of image restoration quality and downstream neuron segmentation accuracy. Our code is publicly available at https://github.com/sydeng99/DADn.
7
He C, Li K, Xu G, Yan J, Tang L, Zhang Y, Wang Y, Li X. HQG-Net: Unpaired Medical Image Enhancement With High-Quality Guidance. IEEE Trans Neural Netw Learn Syst 2024; 35:18404-18418. [PMID: 37796672] [DOI: 10.1109/tnnls.2023.3315307]
Abstract
Unpaired medical image enhancement (UMIE) aims to transform a low-quality (LQ) medical image into a high-quality (HQ) one without relying on paired images for training. While most existing approaches are based on Pix2Pix/CycleGAN and are effective to some extent, they fail to explicitly use HQ information to guide the enhancement process, which can lead to undesired artifacts and structural distortions. In this article, we propose a novel UMIE approach that avoids this limitation by directly encoding HQ cues into the LQ enhancement process in a variational fashion, thus modeling the UMIE task under the joint distribution between the LQ and HQ domains. Specifically, we extract features from an HQ image and explicitly insert these features, which are expected to encode HQ cues, into the enhancement network to guide LQ enhancement via a variational normalization module. We train the enhancement network adversarially with a discriminator to ensure the generated HQ image falls into the HQ domain. We further propose a content-aware loss to guide the enhancement process with wavelet-based pixel-level and multiencoder-based feature-level constraints. Additionally, since a key motivation for image enhancement is to make the enhanced images serve downstream tasks better, we propose a bi-level learning scheme to optimize the UMIE task and downstream tasks cooperatively, helping generate HQ images that are both visually appealing and favorable for downstream tasks. Experiments on three medical datasets verify that our method outperforms existing techniques in terms of both enhancement quality and downstream task performance. The code and the newly collected datasets are publicly available at https://github.com/ChunmingHe/HQG-Net.
8
Tang Y, Lyu T, Jin H, Du Q, Wang J, Li Y, Li M, Chen Y, Zheng J. Domain adaptive noise reduction with iterative knowledge transfer and style generalization learning. Med Image Anal 2024; 98:103327. [PMID: 39191093] [DOI: 10.1016/j.media.2024.103327]
Abstract
Low-dose computed tomography (LDCT) denoising tasks face significant challenges in practical imaging scenarios. Supervised methods encounter difficulties in real-world scenarios because no paired data are available for training, and when applied to datasets with varying noise patterns, their performance may degrade owing to the domain gap. Conversely, unsupervised methods do not require paired data and can be trained directly on real-world data, but they often perform worse than supervised methods. Addressing this issue requires leveraging the strengths of both supervised and unsupervised methods. In this paper, we propose a novel domain adaptive noise reduction framework (DANRF), which integrates both knowledge transfer and style generalization learning to effectively tackle the domain gap problem. Specifically, an iterative knowledge transfer method with knowledge distillation is used to train the target model using unlabeled target data and a pre-trained source model trained with paired simulation data. Meanwhile, we introduce the mean teacher mechanism to update the source model, enabling it to adapt to the target domain. Furthermore, an iterative style generalization learning process is designed to enrich the style diversity of the training dataset. We evaluate the performance of our approach through experiments conducted on multi-source datasets. The results demonstrate the feasibility and effectiveness of the proposed DANRF model in multi-source LDCT image processing tasks. Given its hybrid nature, which combines the advantages of supervised and unsupervised learning, and its ability to bridge domain gaps, our approach is well-suited for improving practical low-dose CT imaging in clinical settings. Code for our proposed approach is publicly available at https://github.com/tyfeiii/DANRF.
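The mean teacher mechanism mentioned above updates the teacher model as an exponential moving average (EMA) of the student's weights. A minimal sketch on plain lists (a generic EMA update, not the authors' code):

```python
def mean_teacher_update(teacher, student, alpha=0.99):
    """EMA update: teacher <- alpha * teacher + (1 - alpha) * student.

    teacher, student: flat lists standing in for model parameters.
    alpha close to 1 makes the teacher a slowly moving, smoothed copy
    of the student, which stabilizes the pseudo-targets it produces.
    """
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]

teacher = [0.0, 1.0]
student = [1.0, 0.0]
for _ in range(3):  # the teacher drifts slowly toward the student
    teacher = mean_teacher_update(teacher, student)
print([round(w, 6) for w in teacher])  # [0.029701, 0.970299]
```

In practice the same update is applied per-tensor to the network's parameters after each optimization step of the student.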
Affiliation(s)
- Yufei Tang
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Tianling Lyu
- Research Center of Augmented Intelligence, Zhejiang Lab, Hangzhou, 310000, China
- Haoyang Jin
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Qiang Du
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Jiping Wang
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Yunxiang Li
- Nanovision Technology Co., Ltd., Beiqing Road, Haidian District, Beijing, 100094, China
- Ming Li
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Yang Chen
- Laboratory of Image Science and Technology, the School of Computer Science and Engineering, Southeast University, Nanjing, 210096, China
- Jian Zheng
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China; Shandong Laboratory of Advanced Biomaterials and Medical Devices in Weihai, Weihai, 264200, China
9
Cai T, Li X, Zhong C, Tang W, Guo J. DiffMAR: A Generalized Diffusion Model for Metal Artifact Reduction in CT Images. IEEE J Biomed Health Inform 2024; 28:6712-6724. [PMID: 39110557] [DOI: 10.1109/jbhi.2024.3439729]
Abstract
X-ray imaging frequently introduces varying degrees of metal artifacts into computed tomography (CT) images when metal implants are present. For the metal artifact reduction (MAR) task, existing end-to-end methods often exhibit limited generalization capability, while methods based on multiple iterations often suffer from accumulated error, resulting in lower-quality restorations. In this work, we present DiffMAR, a generalized diffusion model for metal artifact reduction. The proposed method uses a linear degradation process to simulate the physical formation of metal artifacts in CT images and directly learns an iterative restoration process from paired CT images in the reverse process. During the reverse process of DiffMAR, a Time-Latent Adjustment (TLA) module adjusts the time embedding at the latent level, thereby minimizing accumulated error during iterative restoration. We also design a structure information extraction (SIE) module that uses linear interpolation data in the image domain to guide the generation of anatomical structures during iterative restoration, leading to more accurate and robust shadow-free image generation. Comprehensive analysis of both synthesized data and clinical evidence confirms that our proposed method surpasses current state-of-the-art (SOTA) MAR methods in terms of both image generation quality and generalization.
10
Liu X, Xie Y, Diao S, Tan S, Liang X. Unsupervised CT Metal Artifact Reduction by Plugging Diffusion Priors in Dual Domains. IEEE Trans Med Imaging 2024; 43:3533-3545. [PMID: 38194400] [DOI: 10.1109/tmi.2024.3351201]
Abstract
During the process of computed tomography (CT), metallic implants often cause disruptive artifacts in the reconstructed images, impeding accurate diagnosis. Many supervised deep learning-based approaches have been proposed for metal artifact reduction (MAR). However, these methods heavily rely on training with paired simulated data, which are challenging to acquire, and this limitation can degrade their performance in clinical practice. Existing unsupervised MAR methods, whether learning-based or not, typically operate in a single domain, either the image domain or the sinogram domain. In this paper, we propose an unsupervised MAR method based on the diffusion model, a generative model with a high capacity to represent data distributions. Specifically, we first train a diffusion model on CT images without metal artifacts. We then iteratively introduce the diffusion priors in both the sinogram and image domains to restore the portions degraded by metal artifacts, and design temporally dynamic weight masks for image-domain fusion. This dual-domain processing enables our approach to outperform existing unsupervised MAR methods, including another diffusion-based MAR method, as validated qualitatively and quantitatively on synthetic datasets. Moreover, our method demonstrates superior visual results among both supervised and unsupervised methods on clinical datasets. Code is available at github.com/DeepXuan/DuDoDp-MAR.
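The temporally dynamic weight masks can be pictured as blending the diffusion-restored estimate into artifact regions with a weight that varies over the reverse process. The schedule and function below are a hypothetical illustration of that fusion idea, not the paper's actual design:

```python
def fuse(observed, restored, metal_mask, t, t_max):
    """Blend diffusion-restored values into artifact regions.

    observed, restored: flat lists of pixel values; metal_mask: True where
    the pixel is degraded by metal. The weight w is a made-up schedule that
    trusts the diffusion prior more as the reverse process advances (t -> 0);
    clean pixels are always kept as observed.
    """
    w = 1.0 - t / t_max
    return [r * w + o * (1 - w) if m else o
            for o, r, m in zip(observed, restored, metal_mask)]

obs  = [1.0, 5.0, 1.0]            # middle pixel corrupted by an artifact
rest = [1.0, 2.0, 1.0]            # diffusion model's restored estimate
mask = [False, True, False]
print(fuse(obs, rest, mask, t=250, t_max=1000))
```

In the actual method the analogous fusion happens in both the sinogram and image domains at every reverse diffusion step.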
11
Zhong W, Li T, Hou S, Zhang H, Li Z, Wang G, Liu Q, Song X. Unsupervised disentanglement strategy for mitigating artifact in photoacoustic tomography under extremely sparse view. Photoacoustics 2024; 38:100613. [PMID: 38764521] [PMCID: PMC11101706] [DOI: 10.1016/j.pacs.2024.100613] [Received: 02/12/2024] [Revised: 04/15/2024] [Accepted: 04/30/2024] [Indexed: 05/21/2024]
Abstract
Traditional sparse-view reconstruction methods for photoacoustic tomography (PAT) often produce significant artifacts. Here, a novel image-to-image transformation method based on an unsupervised artifact disentanglement network (ADN), named PAT-ADN, was proposed to address this issue. The network is equipped with specialized encoders and decoders that encode and decode the artifact and content components of unpaired images, respectively. The performance of PAT-ADN was evaluated using circular phantom data and in vivo animal experimental data. The results demonstrate that PAT-ADN effectively removes artifacts. In particular, under an extremely sparse view (e.g., 16 projections), the structural similarity index and peak signal-to-noise ratio on in vivo experimental data improved by ∼188% and ∼85%, respectively, compared with traditional reconstruction methods. PAT-ADN improves the imaging performance of PAT, opening up possibilities for its application in multiple domains.
Affiliation(s)
- Wenhua Zhong: Nanchang University, School of Information Engineering, Nanchang, China
- Tianle Li: Nanchang University, Jiluan Academy, Nanchang, China
- Shangkun Hou: Nanchang University, School of Information Engineering, Nanchang, China
- Hongyu Zhang: Nanchang University, School of Information Engineering, Nanchang, China
- Zilong Li: Nanchang University, School of Information Engineering, Nanchang, China
- Guijun Wang: Nanchang University, School of Information Engineering, Nanchang, China
- Qiegen Liu: Nanchang University, School of Information Engineering, Nanchang, China
- Xianlin Song: Nanchang University, School of Information Engineering, Nanchang, China
12
Zhang Y, Liu L, Yu H, Wang T, Zhang Y, Liu Y. ReMAR: a preoperative CT angiography guided metal artifact reduction framework designed for follow-up CTA of endovascular coiling. Phys Med Biol 2024; 69:145015. [PMID: 38959913] [DOI: 10.1088/1361-6560/ad5ef4] [Received: 12/18/2023] [Accepted: 07/03/2024] [Indexed: 07/05/2024]
Abstract
Objective. Follow-up computed tomography angiography (CTA) is necessary for ensuring the occlusion effect of endovascular coiling. However, the implanted metal coil introduces artifacts that negatively affect radiologic assessment. Method. A framework named ReMAR is proposed in this paper for metal artifact reduction (MAR) in follow-up CTA of patients with coiled aneurysms. It employs preoperative CTA to provide prior knowledge of the aneurysm and the expected position of the coil as guidance, thereby balancing metal artifact removal performance and clinical feasibility. ReMAR is composed of three modules: segmentation, registration, and MAR. The segmentation and registration modules obtain the metal-coil knowledge by delineating aneurysms on preoperative CTA and aligning it with the follow-up CTA. The MAR module, consisting of hybrid convolutional neural network and transformer architectures, restores the sinogram and removes the artifact from the reconstructed image. Both image quality and vessel rendering after metal artifact removal are assessed to address clinical concerns. Main results. A total of 137 patients who underwent endovascular coiling were enrolled in the study: 13 of them had complete diagnosis/follow-up records and were used for end-to-end validation, while the remainder, lacking follow-up records, were used for model training. Quantitative metrics show that ReMAR significantly reduced the metal-artifact burden in follow-up CTA. Qualitative rankings show that ReMAR preserves the morphology of blood vessels during artifact removal, as desired by doctors. Significance. ReMAR can significantly remove the artifacts caused by an implanted metal coil in follow-up CTA. It can be used to enhance overall image quality and establish CTA as an alternative to invasive follow-up of treated intracranial aneurysms.
Affiliation(s)
- Yaoyu Zhang: College of Electrical Engineering, Sichuan University, Chengdu 610065, People's Republic of China
- Lunxin Liu: Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu 610044, People's Republic of China
- Hui Yu: College of Computer Science, Sichuan University, Chengdu 610065, People's Republic of China
- Tao Wang: College of Computer Science, Sichuan University, Chengdu 610065, People's Republic of China
- Yi Zhang: College of Computer Science, Sichuan University, Chengdu 610065, People's Republic of China
- Yan Liu: College of Electrical Engineering, Sichuan University, Chengdu 610065, People's Republic of China
13
Song Y, Yao T, Peng S, Zhu M, Meng M, Ma J, Zeng D, Huang J, Bian Z, Wang Y. b-MAR: bidirectional artifact representations learning framework for metal artifact reduction in dental CBCT. Phys Med Biol 2024; 69:145010. [PMID: 38588680] [DOI: 10.1088/1361-6560/ad3c0a] [Received: 12/04/2023] [Accepted: 04/08/2024] [Indexed: 04/10/2024]
Abstract
Objective. Metal artifacts in computed tomography (CT) images significantly hinder diagnosis and treatment. In particular, dental cone-beam computed tomography (dental CBCT) images are seriously contaminated by metal artifacts due to the widespread use of low tube voltages and the presence of various high-attenuation materials in dental structures. Existing supervised metal artifact reduction (MAR) methods mainly learn the mapping from artifact-affected images to clean images while ignoring the modeling of the metal artifact generation process. Therefore, we propose a bidirectional artifact representations learning framework to adaptively encode metal artifacts caused by various dental implants and to model both the generation and elimination of metal artifacts, thereby improving MAR performance. Approach. Specifically, we introduce an efficient artifact encoder to extract multi-scale representations of metal artifacts from artifact-affected images. These representations are then bidirectionally embedded into both a metal artifact generator and a metal artifact eliminator, which simultaneously improves the performance of artifact removal and artifact generation. The eliminator learns artifact removal in a supervised manner, while the generator learns artifact generation in an adversarial manner. To further improve the performance of the bidirectional task networks, we propose an artifact consistency loss to align the images generated by the eliminator and the generator with and without embedded artifact representations. Main results. To validate the effectiveness of our algorithm, experiments were conducted on simulated and clinical datasets containing various dental metal morphologies.
Quantitative metrics on the simulation tests demonstrate that b-MAR improves PSNR by >1.4131 dB, decreases RMSE by >0.3473 HU, and improves the structural similarity index measure by >0.0025 over the current state-of-the-art MAR methods. All results indicate that the proposed b-MAR method can remove artifacts caused by various metal morphologies and effectively restore the structural integrity of dental tissues. Significance. The proposed b-MAR method strengthens the joint learning of the artifact removal and artifact generation processes by bidirectionally embedding artifact representations, thereby improving the model's artifact removal performance. Compared with other methods, b-MAR can robustly and effectively correct metal artifacts in dental CBCT images caused by different dental metals.
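For context, the PSNR and RMSE figures quoted above follow standard definitions; a minimal sketch is given below. The single-window SSIM here is a simplification — library implementations use local Gaussian windows — so treat it as illustrative only.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two images."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def psnr(a, b, data_range=1.0):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((a - b) ** 2)
    return float(10 * np.log10(data_range ** 2 / mse))

def ssim_global(a, b, data_range=1.0):
    """Global (single-window) SSIM; real implementations average
    SSIM over local windows instead of using whole-image statistics."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))
```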
Affiliation(s)
- Yuyan Song: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Tianyi Yao: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Shengwang Peng: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Manman Zhu: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Mingqiang Meng: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Jianhua Ma: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Dong Zeng: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Jing Huang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China
- Zhaoying Bian: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Yongbo Wang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
14
Yao L, Wang J, Wu Z, Du Q, Yang X, Li M, Zheng J. Parallel processing model for low-dose computed tomography image denoising. Vis Comput Ind Biomed Art 2024; 7:14. [PMID: 38865022] [PMCID: PMC11169366] [DOI: 10.1186/s42492-024-00165-8] [Received: 01/19/2024] [Accepted: 05/20/2024] [Indexed: 06/13/2024] Open Access
Abstract
Low-dose computed tomography (LDCT) has gained increasing attention owing to its crucial role in reducing radiation exposure in patients. However, LDCT-reconstructed images often suffer from significant noise and artifacts, negatively impacting radiologists' ability to diagnose accurately. To address this issue, many studies have focused on denoising LDCT images using deep learning (DL) methods. However, these DL-based denoising methods are hindered by the highly variable feature distribution of LDCT data from different imaging sources, which adversely affects the performance of current denoising models. In this study, we propose a parallel processing model, the multi-encoder deep feature transformation network (MDFTN), designed to enhance LDCT imaging performance for multisource data. Unlike traditional network structures, which rely on continual learning to process multitask data, our approach can simultaneously handle LDCT images from various imaging sources within a unified framework. The proposed MDFTN consists of multiple encoders and decoders along with a deep feature transformation module (DFTM). During forward propagation in network training, each encoder extracts diverse features from its respective data source in parallel, and the DFTM compresses these features into a shared feature space. Subsequently, each decoder performs an inverse operation for multisource loss estimation. Through collaborative training, MDFTN leverages the complementary advantages of multisource data distributions to enhance its adaptability and generalization. Numerous experiments on two public datasets and one local dataset demonstrated that the proposed network can simultaneously process multisource data while effectively suppressing noise and preserving fine structures. The source code is available at https://github.com/123456789ey/MDFTN.
Affiliation(s)
- Libing Yao: School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Jiping Wang: School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Zhongyi Wu: Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Qiang Du: Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Xiaodong Yang: Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Ming Li: School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Jian Zheng: School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
15
Xia J, Zhou Y, Deng W, Kang J, Wu W, Qi M, Zhou L, Ma J, Xu Y. PND-Net: Physics-Inspired Non-Local Dual-Domain Network for Metal Artifact Reduction. IEEE Trans Med Imaging 2024; 43:2125-2136. [PMID: 38236665] [DOI: 10.1109/tmi.2024.3354925] [Indexed: 06/04/2024]
Abstract
Metal artifacts caused by the presence of metallic implants tremendously degrade the quality of reconstructed computed tomography (CT) images and therefore affect the clinical diagnosis or reduce the accuracy of organ delineation and dose calculation in radiotherapy. Although various deep learning methods have been proposed for metal artifact reduction (MAR), most of them aim to restore the corrupted sinogram within the metal trace, which removes beam hardening artifacts but ignores other components of metal artifacts. In this paper, based on the physical property of metal artifacts which is verified via Monte Carlo (MC) simulation, we propose a novel physics-inspired non-local dual-domain network (PND-Net) for MAR in CT imaging. Specifically, we design a novel non-local sinogram decomposition network (NSD-Net) to acquire the weighted artifact component and develop an image restoration network (IR-Net) to reduce the residual and secondary artifacts in the image domain. To facilitate the generalization and robustness of our method on clinical CT images, we employ a trainable fusion network (F-Net) in the artifact synthesis path to achieve unpaired learning. Furthermore, we design an internal consistency loss to ensure the data fidelity of anatomical structures in the image domain and introduce the linear interpolation sinogram as prior knowledge to guide sinogram decomposition. NSD-Net, IR-Net, and F-Net are jointly trained so that they can benefit from one another. Extensive experiments on simulation and clinical data demonstrate that our method outperforms state-of-the-art MAR methods.
16
Onnis C, van Assen M, Muscogiuri E, Muscogiuri G, Gershon G, Saba L, De Cecco CN. The Role of Artificial Intelligence in Cardiac Imaging. Radiol Clin North Am 2024; 62:473-488. [PMID: 38553181] [DOI: 10.1016/j.rcl.2024.01.002] [Indexed: 04/02/2024]
Abstract
Artificial intelligence (AI) is having a significant impact in medical imaging, advancing almost every aspect of the field, from image acquisition and postprocessing to automated image analysis with outreach toward supporting decision making. Noninvasive cardiac imaging is one of the main and most exciting fields for AI development. The aim of this review is to describe the main applications of AI in cardiac imaging, including CT and MR imaging, and provide an overview of recent advancements and available clinical applications that can improve clinical workflow, disease detection, and prognostication in cardiac disease.
Affiliation(s)
- Carlotta Onnis: Translational Laboratory for Cardiothoracic Imaging and Artificial Intelligence, Department of Radiology and Imaging Sciences, Emory University, 100 Woodruff Circle, Atlanta, GA 30322, USA; Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.) di Cagliari-Polo di Monserrato, SS 554 km 4,500 Monserrato, Cagliari 09042, Italy. https://twitter.com/CarlottaOnnis
- Marly van Assen: Translational Laboratory for Cardiothoracic Imaging and Artificial Intelligence, Department of Radiology and Imaging Sciences, Emory University, 100 Woodruff Circle, Atlanta, GA 30322, USA. https://twitter.com/marly_van_assen
- Emanuele Muscogiuri: Translational Laboratory for Cardiothoracic Imaging and Artificial Intelligence, Department of Radiology and Imaging Sciences, Emory University, 100 Woodruff Circle, Atlanta, GA 30322, USA; Division of Thoracic Imaging, Department of Radiology, University Hospitals Leuven, Herestraat 49, Leuven 3000, Belgium
- Giuseppe Muscogiuri: Department of Diagnostic and Interventional Radiology, Papa Giovanni XXIII Hospital, Piazza OMS, 1, Bergamo BG 24127, Italy. https://twitter.com/GiuseppeMuscog
- Gabrielle Gershon: Translational Laboratory for Cardiothoracic Imaging and Artificial Intelligence, Department of Radiology and Imaging Sciences, Emory University, 100 Woodruff Circle, Atlanta, GA 30322, USA. https://twitter.com/gabbygershon
- Luca Saba: Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.) di Cagliari-Polo di Monserrato, SS 554 km 4,500 Monserrato, Cagliari 09042, Italy. https://twitter.com/lucasabaITA
- Carlo N De Cecco: Translational Laboratory for Cardiothoracic Imaging and Artificial Intelligence, Department of Radiology and Imaging Sciences, Emory University, 100 Woodruff Circle, Atlanta, GA 30322, USA; Division of Cardiothoracic Imaging, Department of Radiology and Imaging Sciences, Emory University, Emory University Hospital, 1365 Clifton Road Northeast, Suite AT503, Atlanta, GA 30322, USA
17
Chen C, Chen Y, Li X, Ning H, Xiao R. Linear semantic transformation for semi-supervised medical image segmentation. Comput Biol Med 2024; 173:108331. [PMID: 38522252] [DOI: 10.1016/j.compbiomed.2024.108331] [Received: 01/29/2024] [Revised: 02/29/2024] [Accepted: 03/17/2024] [Indexed: 03/26/2024]
Abstract
Medical image segmentation is a research focus and a foundation for developing intelligent medical systems. Recently, deep learning for medical image segmentation has become a standard process and has succeeded significantly, promoting disease diagnosis, reconstruction, and surgical planning. However, semantic learning is often inefficient owing to the lack of supervision of feature maps, with the result that high-quality segmentation models always rely on numerous, accurate data annotations; learning robust semantic representations in latent spaces remains a challenge. In this paper, we propose a novel semi-supervised learning framework to learn vital attributes in medical images, which constructs generalized representations from diverse semantics to realize medical image segmentation. We first build a self-supervised learning part that achieves context recovery by reconstructing the space and intensity of medical images, providing semantic representations for the feature maps. Subsequently, we combine the semantic-rich feature maps and apply a simple linear semantic transformation to convert them into an image segmentation. The proposed framework was tested on five medical segmentation datasets. Quantitative assessments indicate the highest scores of our method on the IXI (73.78%), ScaF (47.50%), COVID-19-Seg (50.72%), PC-Seg (65.06%), and Brain-MR (72.63%) datasets. Finally, we compared our method with the latest semi-supervised learning methods and obtained DSC values of 77.15% and 75.22%, respectively, ranking first on two representative datasets. The experimental results not only prove that the proposed linear semantic transformation is effective for medical image segmentation but also demonstrate its simplicity and ease of use in pursuing robust segmentation with semi-supervised learning. Our code is now open at: https://github.com/QingYunA/Linear-Semantic-Transformation-for-Semi-Supervised-Medical-Image-Segmentation.
Affiliation(s)
- Cheng Chen: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Yunqing Chen: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Xiaoheng Li: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Huansheng Ning: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Ruoxiu Xiao: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China; Shunde Innovation School, University of Science and Technology Beijing, Foshan, 100024, China
18
Li Z, Gao Q, Wu Y, Niu C, Zhang J, Wang M, Wang G, Shan H. Quad-Net: Quad-Domain Network for CT Metal Artifact Reduction. IEEE Trans Med Imaging 2024; 43:1866-1879. [PMID: 38194399] [DOI: 10.1109/tmi.2024.3351722] [Indexed: 01/11/2024]
Abstract
Metal implants and other high-density objects in patients introduce severe streaking artifacts in CT images, compromising image quality and diagnostic performance. Although various methods have been developed for CT metal artifact reduction over the past decades, including the latest dual-domain deep networks, remaining metal artifacts are still clinically challenging in many cases. Here we extend the state-of-the-art dual-domain deep network approach into a quad-domain counterpart, so that all the features in the sinogram, image, and their corresponding Fourier domains are synergized to eliminate metal artifacts optimally without compromising structural subtleties. Our proposed quad-domain network for MAR, referred to as Quad-Net, incurs little additional computational cost, since the Fourier transform is highly efficient, and works across the four receptive fields to learn both global and local features as well as their relations. Specifically, we first design a Sinogram-Fourier Restoration Network (SFR-Net) in the sinogram domain and its Fourier space to faithfully inpaint metal-corrupted traces. Then, we couple SFR-Net with an Image-Fourier Refinement Network (IFR-Net), which takes both an image and its Fourier spectrum to improve the CT image reconstructed from the SFR-Net output using cross-domain contextual information. Quad-Net is trained on clinical datasets to minimize a composite loss function and does not require precise metal masks, which is of great importance in clinical practice. Our experimental results demonstrate the superiority of Quad-Net over the state-of-the-art MAR methods quantitatively, visually, and statistically. The Quad-Net code is publicly available at https://github.com/longzilicart/Quad-Net.
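The Fourier-domain processing that quad-domain networks exploit can be illustrated with a toy global frequency operation; the fixed low-pass mask below is purely illustrative and is not Quad-Net's learned filtering.

```python
import numpy as np

def fourier_lowpass(img, keep=0.25):
    """Illustrative global Fourier-domain operation: keep only the
    lowest `keep` fraction of frequencies along each axis, zeroing
    the rest, then transform back to the image domain."""
    F = np.fft.fftshift(np.fft.fft2(img))  # center the DC component
    h, w = img.shape
    mask = np.zeros_like(F, dtype=bool)
    ch, cw = h // 2, w // 2
    rh = max(1, int(h * keep / 2))
    rw = max(1, int(w * keep / 2))
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = True
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

A learned network would replace the fixed boolean mask with trainable per-frequency weights, letting the model act on global structure at negligible cost.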
19
Yang G, Li C, Yao Y, Wang G, Teng Y. Quasi-supervised learning for super-resolution PET. Comput Med Imaging Graph 2024; 113:102351. [PMID: 38335784] [DOI: 10.1016/j.compmedimag.2024.102351] [Received: 05/29/2023] [Revised: 01/15/2024] [Accepted: 02/02/2024] [Indexed: 02/12/2024]
Abstract
The low resolution of positron emission tomography (PET) limits its diagnostic performance. Deep learning has been successfully applied to achieve super-resolution PET. However, commonly used supervised learning methods in this context require many pairs of low- and high-resolution (LR and HR) PET images, and although unsupervised learning utilizes unpaired images, its results are not as good as those obtained with supervised deep learning. In this paper, we propose a quasi-supervised learning method, a new type of weakly supervised learning, to recover HR PET images from LR counterparts by leveraging the similarity between unpaired LR and HR image patches. Specifically, LR image patches are taken from a patient as inputs, while the most similar HR patches from other patients are found as labels. The similarity between the matched HR and LR patches serves as a prior for network construction. Our proposed method can be implemented by designing a new network or modifying an existing one. As an example in this study, we have modified the cycle-consistent generative adversarial network (CycleGAN) for super-resolution PET. Our numerical and experimental results qualitatively and quantitatively show the merits of our method relative to the state-of-the-art methods. The code is publicly available at https://github.com/PigYang-ops/CycleGAN-QSDL.
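The patch-matching step described above — pairing each LR patch with its most similar HR patch from other patients — amounts to a nearest-neighbor search. The brute-force L2 search below is an illustrative assumption, not the authors' code.

```python
import numpy as np

def match_patches(lr_patches, hr_patches):
    """For each LR patch, return the index of the most similar HR
    patch (smallest L2 distance), yielding pseudo-labels for
    quasi-supervised training."""
    lr = lr_patches.reshape(len(lr_patches), -1)
    hr = hr_patches.reshape(len(hr_patches), -1)
    # Pairwise squared distances via broadcasting:
    # d2[i, j] = ||lr[i] - hr[j]||^2
    d2 = ((lr[:, None, :] - hr[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)
```

In practice a KD-tree or approximate nearest-neighbor index would replace the brute-force search for large patch collections.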
Affiliation(s)
- Guangtong Yang: College of Medicine and Biomedical Information Engineering, Northeastern University, 110004 Shenyang, China
- Chen Li: College of Medicine and Biomedical Information Engineering, Northeastern University, 110004 Shenyang, China
- Yudong Yao: Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, USA
- Ge Wang: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Yueyang Teng: College of Medicine and Biomedical Information Engineering, Northeastern University, 110004 Shenyang, China
20
Selles M, Wellenberg RHH, Slotman DJ, Nijholt IM, van Osch JAC, van Dijke KF, Maas M, Boomsma MF. Image quality and metal artifact reduction in total hip arthroplasty CT: deep learning-based algorithm versus virtual monoenergetic imaging and orthopedic metal artifact reduction. Eur Radiol Exp 2024; 8:31. [PMID: 38480603] [PMCID: PMC10937891] [DOI: 10.1186/s41747-024-00427-3] [Received: 10/12/2023] [Accepted: 01/02/2024] [Indexed: 03/17/2024] Open Access
Abstract
BACKGROUND To compare image quality, metal artifacts, and diagnostic confidence of conventional computed tomography (CT) images of unilateral total hip arthroplasty (THA) patients with deep learning-based metal artifact reduction (DL-MAR) against conventional CT and 130-keV monoenergetic images with and without orthopedic metal artifact reduction (O-MAR). METHODS Conventional CT and 130-keV monoenergetic images with and without O-MAR, as well as DL-MAR images, of 28 unilateral THA patients were reconstructed. Image quality, metal artifacts, and diagnostic confidence in bone, pelvic organs, and soft tissue adjacent to the prosthesis were jointly scored by two experienced musculoskeletal radiologists. Contrast-to-noise ratios (CNR) between bladder and fat and between muscle and fat were measured. Wilcoxon signed-rank tests with Holm-Bonferroni correction were used. RESULTS Significantly higher image quality, higher diagnostic confidence, and less severe metal artifacts were observed on DL-MAR and O-MAR images compared with images without O-MAR (p < 0.001 for all comparisons). Higher image quality, higher diagnostic confidence for bone and soft tissue adjacent to the prosthesis, and less severe metal artifacts were observed on DL-MAR compared with conventional images and 130-keV monoenergetic images with O-MAR (p ≤ 0.014). CNRs were higher for DL-MAR and O-MAR images compared with images without O-MAR (p < 0.001), and higher on DL-MAR images compared with conventional images and 130-keV monoenergetic images with O-MAR (p ≤ 0.010). CONCLUSIONS DL-MAR showed higher image quality, diagnostic confidence, and superior metal artifact reduction compared to conventional CT images and 130-keV monoenergetic images with and without O-MAR in unilateral THA patients.
RELEVANCE STATEMENT DL-MAR resulted in improved image quality, stronger reduction of metal artifacts, and improved diagnostic confidence compared with conventional and virtual monoenergetic images with and without metal artifact reduction, bringing DL-based metal artifact reduction closer to clinical application. KEY POINTS • Metal artifacts introduced by total hip arthroplasty hamper radiologic assessment on CT. • A deep-learning algorithm (DL-MAR) was compared to dual-layer CT images with O-MAR. • DL-MAR showed the best image quality and diagnostic confidence. • The highest contrast-to-noise ratios were observed on the DL-MAR images.
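For reference, a contrast-to-noise ratio of the kind reported above can be computed from two regions of interest. Exact CNR definitions vary between studies; the choice below (noise taken as the standard deviation of the second, background ROI) is an assumption for illustration.

```python
import numpy as np

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio between two regions of interest:
    absolute difference of ROI means divided by the standard
    deviation of the background ROI (one common convention)."""
    return float(abs(roi_a.mean() - roi_b.mean()) / roi_b.std())
```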
Affiliation(s)
- Mark Selles: Department of Radiology, Isala, 8025 AB, Zwolle, the Netherlands; Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ, Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT, Amsterdam, the Netherlands
- Ruud H H Wellenberg: Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ, Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT, Amsterdam, the Netherlands
- Derk J Slotman: Department of Radiology, Isala, 8025 AB, Zwolle, the Netherlands
- Ingrid M Nijholt: Department of Radiology, Isala, 8025 AB, Zwolle, the Netherlands
- Kees F van Dijke: Department of Radiology & Nuclear Medicine, Noordwest Ziekenhuisgroep, 1815 JD, Alkmaar, the Netherlands
- Mario Maas: Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ, Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT, Amsterdam, the Netherlands
21
Li Y, Shao HC, Liang X, Chen L, Li R, Jiang S, Wang J, Zhang Y. Zero-Shot Medical Image Translation via Frequency-Guided Diffusion Models. IEEE Trans Med Imaging 2024; 43:980-993. [PMID: 37851552] [PMCID: PMC11000254] [DOI: 10.1109/tmi.2023.3325703]
Abstract
Recently, the diffusion model has emerged as a superior generative model that can produce high-quality, realistic images. However, for medical image translation, existing diffusion models are deficient in accurately retaining structural information, since the structural details of source-domain images are lost during the forward diffusion process and cannot be fully recovered through learned reverse diffusion; yet the integrity of anatomical structures is extremely important in medical images. For instance, errors in image translation may distort, shift, or even remove structures and tumors, leading to incorrect diagnosis and inadequate treatments. Training and conditioning diffusion models using paired source and target images with matching anatomy can help. However, such paired data are very difficult and costly to obtain, and may also reduce the robustness of the developed model to out-of-distribution testing data. We propose a frequency-guided diffusion model (FGDM) that employs frequency-domain filters to guide the diffusion model for structure-preserving image translation. Based on its design, FGDM allows zero-shot learning, as it can be trained solely on data from the target domain and used directly for source-to-target domain translation without any exposure to the source-domain data during training. We evaluated it on three cone-beam CT (CBCT)-to-CT translation tasks for different anatomical sites, and on a cross-institutional MR imaging translation task. FGDM outperformed the state-of-the-art methods (GAN-based, VAE-based, and diffusion-based) in terms of Fréchet Inception Distance (FID), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM), showing its significant advantages in zero-shot medical image translation.
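The core idea of frequency-domain guidance is separating an image into a low-frequency component (overall intensity and contrast, which may change between domains) and a high-frequency component (anatomical structure, which must be preserved). A minimal sketch with a circular FFT mask; the cutoff value is hypothetical and FGDM's actual filters are more elaborate:

```python
import numpy as np

def frequency_split(img: np.ndarray, cutoff: float = 0.1):
    """Split an image into low- and high-frequency parts with a circular FFT mask.

    `cutoff` is the mask radius as a fraction of the smaller image side.
    This only shows the principle of separating low-frequency content
    from high-frequency structure.
    """
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot(yy - h / 2.0, xx - w / 2.0)
    mask = radius <= cutoff * min(h, w)
    low = np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
    high = img - low  # exact complement, so low + high reconstructs img
    return low, high

# A slowly varying vertical sine pattern falls entirely inside the mask.
img = np.outer(np.sin(2.0 * np.pi * 4.0 * np.arange(64) / 64.0), np.ones(64))
low, high = frequency_split(img)
```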
22
Xie K, Gao L, Zhang H, Zhang S, Xi Q, Zhang F, Sun J, Lin T, Sui J, Ni X. GAN-based metal artifacts region inpainting in brain MRI imaging with reflective registration. Med Phys 2024; 51:2066-2080. [PMID: 37665773] [DOI: 10.1002/mp.16724]
Abstract
BACKGROUND AND OBJECTIVE Metallic implants can introduce magnetic field distortions in magnetic resonance imaging (MRI), resulting in image distortion such as bulk shifts and signal-loss artifacts. The Metal Artifacts Region Inpainting Network (MARINet), which exploits the symmetry of brain MRI images, has been developed to generate normal MRI images in the image domain and improve image quality. METHODS T1-weighted MRI images containing or located near the teeth of 100 patients were collected. A total of 9000 slices were obtained after data augmentation. MARINet, based on a U-Net with a dual-path encoder, was then employed to inpaint the artifacts in MRI images. The input of MARINet contains the original image and the flipped registered image, with partial convolution used concurrently. Subsequently, we compared MARINet with PConv (partial convolution), GConv (gated convolution), and SDEdit (a diffusion-based method) for inpainting the artifact region of MRI images. The mean absolute error (MAE) and peak signal-to-noise ratio (PSNR) over the mask were used to compare the results of these methods. In addition, the artifact masks of clinical MRI images were inpainted by physicians. RESULTS MARINet could directly and effectively inpaint the incomplete MRI images generated by masks in the image domain. For the test results of PConv, GConv, SDEdit, and MARINet, the masked MAEs were 0.1938, 0.1904, 0.1876, and 0.1834, respectively, and the masked PSNRs were 17.39, 17.40, 17.49, and 17.60 dB, respectively. The visualization results also suggest that the network can recover the tissue texture, alveolar shape, and tooth contour. Additionally, for clinical artifact MRI images, MARINet completed the artifact region inpainting task more effectively than the other models.
CONCLUSIONS By leveraging the quasi-symmetry of brain MRI images, MARINet can directly and effectively inpaint the metal artifacts in MRI images in the image domain, restoring the tooth contour and detail, thereby enhancing the image quality.
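The masked MAE and masked PSNR reported above restrict the error computation to the artifact mask, so untouched background does not dilute the score. A sketch of both metrics, assuming images normalized to [0, 1] (the paper's normalization and data range may differ):

```python
import numpy as np

def masked_mae(pred, target, mask):
    """Mean absolute error computed over the masked pixels only."""
    return float(np.abs(pred - target)[mask].mean())

def masked_psnr(pred, target, mask, data_range=1.0):
    """PSNR from the mean squared error inside the mask (images in [0, data_range])."""
    mse = float(((pred - target) ** 2)[mask].mean())
    return float(10.0 * np.log10(data_range ** 2 / mse))

# Hypothetical normalized images and a rectangular artifact mask.
rng = np.random.default_rng(1)
target = rng.random((32, 32))
pred = np.clip(target + rng.normal(0.0, 0.05, (32, 32)), 0.0, 1.0)
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True
mae = masked_mae(pred, target, mask)
psnr = masked_psnr(pred, target, mask)
```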
Affiliation(s)
- Kai Xie
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Liugang Gao
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Heng Zhang
- Center for Medical Physics, Nanjing Medical University, Changzhou, China
- Changzhou Key Laboratory of Medical Physics, Changzhou, China
- Sai Zhang
- Center for Medical Physics, Nanjing Medical University, Changzhou, China
- Changzhou Key Laboratory of Medical Physics, Changzhou, China
- Qianyi Xi
- Center for Medical Physics, Nanjing Medical University, Changzhou, China
- Changzhou Key Laboratory of Medical Physics, Changzhou, China
- Fan Zhang
- Center for Medical Physics, Nanjing Medical University, Changzhou, China
- Changzhou Key Laboratory of Medical Physics, Changzhou, China
- Jiawei Sun
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Tao Lin
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Jianfeng Sui
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Xinye Ni
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Center for Medical Physics, Nanjing Medical University, Changzhou, China
- Changzhou Key Laboratory of Medical Physics, Changzhou, China
23
Curcuru AN, Yang D, An H, Cuculich PS, Robinson CG, Gach HM. Technical note: Minimizing CIED artifacts on a 0.35 T MRI-Linac using deep learning. J Appl Clin Med Phys 2024; 25:e14304. [PMID: 38368615] [DOI: 10.1002/acm2.14304]
Abstract
BACKGROUND Artifacts from implantable cardioverter defibrillators (ICDs) are a challenge to magnetic resonance imaging (MRI)-guided radiotherapy (MRgRT). PURPOSE This study tested an unsupervised generative adversarial network to mitigate ICD artifacts in balanced steady-state free precession (bSSFP) cine MRIs and improve image quality and tracking performance for MRgRT. METHODS Fourteen healthy volunteers (Group A) were scanned on a 0.35 T MRI-Linac with and without an MR conditional ICD taped to their left pectoral to simulate an implanted ICD. bSSFP MRI data from 12 of the volunteers were used to train a CycleGAN model to reduce ICD artifacts. The data from the remaining two volunteers were used for testing. In addition, the dataset was reorganized three times using a Leave-One-Out scheme. Tracking metrics [Dice similarity coefficient (DSC), target registration error (TRE), and 95 percentile Hausdorff distance (95% HD)] were evaluated for whole-heart contours. Image quality metrics [normalized root mean square error (nRMSE), peak signal-to-noise ratio (PSNR), and multiscale structural similarity (MS-SSIM) scores] were evaluated. The technique was also tested qualitatively on three additional ICD datasets (Group B) including a patient with an implanted ICD. RESULTS For the whole-heart contour with CycleGAN reconstruction: 1) Mean DSC rose from 0.910 to 0.935; 2) Mean TRE dropped from 4.488 to 2.877 mm; and 3) Mean 95% HD dropped from 10.236 to 7.700 mm. For the whole-body slice with CycleGAN reconstruction: 1) Mean nRMSE dropped from 0.644 to 0.420; 2) Mean MS-SSIM rose from 0.779 to 0.819; and 3) Mean PSNR rose from 18.744 to 22.368. The three Group B datasets evaluated qualitatively displayed a reduction in ICD artifacts in the heart. CONCLUSION CycleGAN-generated reconstructions significantly improved both tracking and image quality metrics when used to mitigate artifacts from ICDs.
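The tracking metrics above (DSC and 95% HD) are standard overlap and boundary-distance measures. A minimal sketch; the HD95 here works on explicit point sets, whereas contour evaluations usually extract surface points first:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient of two boolean segmentation masks."""
    intersection = np.logical_and(a, b).sum()
    return float(2.0 * intersection / (a.sum() + b.sum()))

def hd95(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two (n, d) point sets."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return float(max(np.percentile(d.min(axis=1), 95),
                     np.percentile(d.min(axis=0), 95)))

# Two overlapping square masks: intersection 30 px, 36 px each.
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:9, 2:8] = True
```

The 95th percentile is used instead of the maximum so a few outlier boundary points do not dominate the distance.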
Affiliation(s)
- Austen N Curcuru
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, Missouri, USA
- Deshan Yang
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Hongyu An
- Departments of Radiology, Biomedical Engineering and Neurology, Washington University in St. Louis, St. Louis, Missouri, USA
- Phillip S Cuculich
- Departments of Cardiovascular Medicine and Radiation Oncology, Washington University in St. Louis, St. Louis, Missouri, USA
- Clifford G Robinson
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, Missouri, USA
- H Michael Gach
- Departments of Radiation Oncology, Radiology and Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
24
Li Q, Li R, Li S, Wang T, Cheng Y, Zhang S, Wu W, Zhao J, Qiang Y, Wang L. Unpaired low-dose computed tomography image denoising using a progressive cyclical convolutional neural network. Med Phys 2024; 51:1289-1312. [PMID: 36841936] [DOI: 10.1002/mp.16331]
Abstract
BACKGROUND Reducing the radiation dose from computed tomography (CT) can significantly reduce the radiation risk to patients. However, low-dose CT (LDCT) suffers from severe and complex noise interference that affects subsequent diagnosis and analysis. Recently, deep learning-based methods have shown superior performance in LDCT image-denoising tasks. However, most methods require many normal-dose and low-dose CT image pairs, which are difficult to obtain in clinical applications. Unsupervised methods, on the other hand, are more general. PURPOSE Deep learning methods based on GAN networks have been widely used for unsupervised LDCT denoising, but the additional memory requirements of the models also hinder their further clinical application. To this end, we propose a simpler multi-stage denoising framework trained using unpaired data, the progressive cyclical convolutional neural network (PCCNN), which can remove the noise from CT images in latent space. METHODS Our proposed PCCNN introduces a noise transfer model that transfers noise from LDCT to normal-dose CT (NDCT), generating denoised and noisy CT images from unpaired CT images. The denoising framework also contains a progressive module that effectively removes noise through multi-stage wavelet transforms without sacrificing high-frequency components such as edges and details. RESULTS We performed a quantitative and qualitative evaluation against seven LDCT denoising algorithms and conducted ablation experiments on each network module and loss function. On the AAPM dataset, compared with the other unsupervised methods, our denoising framework showed excellent performance, increasing the peak signal-to-noise ratio (PSNR) from 29.622 to 30.671 and the structural similarity index (SSIM) from 0.8544 to 0.9199. The PCCNN denoising results were relatively optimal and statistically significant.
In the qualitative comparison, PCCNN produced images with higher resolution and complete detail preservation without introducing additional blurring or artifacts, and the overall structural texture of the images was closer to NDCT. In visual assessments, PCCNN achieved a relatively balanced result in noise suppression, contrast retention, and lesion discrimination. CONCLUSIONS Extensive experimental validation shows that our scheme achieves reconstruction results comparable to supervised learning methods and performs well in image quality and medical diagnostic acceptability.
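The multi-stage wavelet idea rests on the fact that a wavelet transform separates an image into a low-frequency approximation and high-frequency detail bands, and is perfectly invertible, so denoising can act per band without losing edges. A single-level Haar decomposition as illustration (the abstract does not name the wavelet; Haar is chosen here for simplicity):

```python
import numpy as np

def haar2d(img: np.ndarray):
    """One level of the 2-D Haar wavelet transform.

    Returns the approximation band and the three detail bands
    (horizontal, vertical, diagonal). Image sides must be even.
    """
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0  # low-frequency approximation
    lh = (a + b - c - d) / 4.0  # horizontal detail
    hl = (a - b + c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of `haar2d`; the transform has perfect reconstruction."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

img = np.random.default_rng(3).random((8, 8))
rec = ihaar2d(*haar2d(img))
```

Repeating `haar2d` on the `ll` band gives the multi-stage decomposition the progressive module operates on.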
Affiliation(s)
- Qing Li
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Runrui Li
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Saize Li
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Tao Wang
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Yubin Cheng
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Shuming Zhang
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Wei Wu
- Department of Clinical Laboratory, Affiliated People's Hospital of Shanxi Medical University, Shanxi Provincial People's Hospital, Taiyuan, China
- Juanjuan Zhao
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- School of Information Engineering, Jinzhong College of Information, Jinzhong, China
- Yan Qiang
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Long Wang
- School of Information Engineering, Jinzhong College of Information, Jinzhong, China
25
Li G, Huang X, Huang X, Zong Y, Luo S. PIDNET: Polar Transformation Based Implicit Disentanglement Network for Truncation Artifacts. Entropy (Basel) 2024; 26:101. [PMID: 38392356] [PMCID: PMC10887623] [DOI: 10.3390/e26020101]
Abstract
The interior problem, a persistent ill-posed challenge in CT imaging, gives rise to truncation artifacts capable of distorting CT values, thereby significantly impacting clinical diagnoses. Traditional methods have long struggled to effectively solve this issue until the advent of supervised models built on deep neural networks. However, supervised models are constrained by the need for paired data, limiting their practical application. Therefore, we propose a simple and efficient unsupervised method based on the Cycle-GAN framework. Introducing an implicit disentanglement strategy, we aim to separate truncation artifacts from content information. The separated artifact features serve as complementary constraints and the source of generating simulated paired data to enhance the training of the sub-network dedicated to removing truncation artifacts. Additionally, we incorporate polar transformation and an innovative constraint tailored specifically for truncation artifact features, further contributing to the effectiveness of our approach. Experiments conducted on multiple datasets demonstrate that our unsupervised network outperforms the traditional Cycle-GAN model significantly. When compared to state-of-the-art supervised models trained on paired datasets, our model achieves comparable visual results and closely aligns with quantitative evaluation metrics.
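The polar transformation mentioned above is useful because truncation artifacts sit on a circular field-of-view boundary; resampling onto an (r, theta) grid turns that circle into a roughly straight horizontal band that a network can treat uniformly. A nearest-neighbour sketch (PIDNet's actual resampling and grid sizes are not specified here):

```python
import numpy as np

def to_polar(img: np.ndarray, n_r: int = 64, n_theta: int = 128) -> np.ndarray:
    """Nearest-neighbour resampling of an image onto an (r, theta) grid.

    Circular structures centred in the image become horizontal bands in
    the polar image; production code would interpolate instead of
    rounding to the nearest pixel.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.linspace(0.0, min(cy, cx), n_r)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]

# A centred disc of radius 15 maps to rows that are constant along theta.
yy, xx = np.mgrid[:65, :65]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2 <= 15 ** 2).astype(float)
polar = to_polar(disc)
```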
Affiliation(s)
- Guang Li
- School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Xinhai Huang
- School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Xinyu Huang
- School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Yuan Zong
- School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Shouhua Luo
- School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
26
Selles M, van Osch JAC, Maas M, Boomsma MF, Wellenberg RHH. Advances in metal artifact reduction in CT images: A review of traditional and novel metal artifact reduction techniques. Eur J Radiol 2024; 170:111276. [PMID: 38142571] [DOI: 10.1016/j.ejrad.2023.111276]
Abstract
Metal artifacts degrade CT image quality, hampering clinical assessment. Numerous metal artifact reduction methods are available to improve the image quality of CT images with metal implants. In this review, an overview of traditional methods is provided, including the modification of acquisition and reconstruction parameters, projection-based metal artifact reduction (MAR) techniques, dual-energy CT (DECT), and the combination of these techniques. Furthermore, the additional value and challenges of novel metal artifact reduction techniques introduced over the past years are discussed, such as photon-counting CT (PCCT) and deep learning-based metal artifact reduction techniques.
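Among the projection-based techniques this review covers, the classic baseline is linear interpolation MAR: the metal trace is treated as missing data in the sinogram and bridged from its neighbours before reconstruction. A minimal per-angle sketch on a toy sinogram (shapes and values hypothetical):

```python
import numpy as np

def li_mar(sinogram: np.ndarray, metal_trace: np.ndarray) -> np.ndarray:
    """Classic linear-interpolation MAR on a (angles, detector bins) sinogram.

    For every projection angle, the detector bins flagged in the boolean
    `metal_trace` are replaced by values linearly interpolated from the
    nearest unaffected bins; reconstruction of the corrected sinogram
    is omitted here.
    """
    corrected = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_trace[i]
        if bad.any() and not bad.all():
            corrected[i, bad] = np.interp(bins[bad], bins[~bad], sinogram[i, ~bad])
    return corrected

# Toy sinogram: linear ramps, with a corrupted segment in one row.
clean = np.tile(np.arange(10.0), (3, 1))
corrupted = clean.copy()
corrupted[1, 4:7] = 999.0
trace = np.zeros((3, 10), dtype=bool)
trace[1, 4:7] = True
restored = li_mar(corrupted, trace)
```

Because the interpolation ignores the true attenuation behind the metal, LI can introduce secondary artifacts, which is what the learned methods in this review try to avoid.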
Affiliation(s)
- Mark Selles
- Department of Radiology, Isala, 8025 AB Zwolle, the Netherlands; Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands.
- Mario Maas
- Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands
- Ruud H H Wellenberg
- Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands
27
Wang H, Xie Q, Zeng D, Ma J, Meng D, Zheng Y. OSCNet: Orientation-Shared Convolutional Network for CT Metal Artifact Learning. IEEE Trans Med Imaging 2024; 43:489-502. [PMID: 37656650] [DOI: 10.1109/tmi.2023.3310987]
Abstract
X-ray computed tomography (CT) has been broadly adopted in clinical applications for disease diagnosis and image-guided interventions. However, metals within patients always cause unfavorable artifacts in the recovered CT images. Although they attain promising reconstruction results for this metal artifact reduction (MAR) task, most existing deep-learning-based approaches have some limitations. The critical issue is that most of these methods have not fully exploited the important prior knowledge underlying this specific MAR task. Therefore, in this paper, we carefully investigate the inherent characteristics of metal artifacts, which present rotationally symmetrical streaking patterns. We then propose an orientation-shared convolution representation mechanism to adapt such physical prior structures and utilize Fourier-series-expansion-based filter parametrization for modelling artifacts, which can finely separate metal artifacts from body tissues. By adopting the classical proximal gradient algorithm to solve the model and then utilizing the deep unfolding technique, we easily build the corresponding orientation-shared convolutional network, termed OSCNet. Furthermore, considering that different sizes and types of metals lead to different artifact patterns (e.g., intensity of the artifacts), to better improve the flexibility of artifact learning and fully exploit the reconstructed results at iterative stages for information propagation, we design a simple-yet-effective sub-network for the dynamic convolution representation of artifacts. By easily integrating the sub-network into the proposed OSCNet framework, we further construct a more flexible network structure, called OSCNet+, which improves the generalization performance. Through extensive experiments conducted on synthetic and clinical datasets, we comprehensively substantiate the effectiveness of our proposed methods. Code will be released at https://github.com/hongwang01/OSCNet.
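The "proximal gradient plus deep unfolding" recipe mentioned above starts from a fixed iterative scheme and turns each iteration into a learned network stage. The classic instance is ISTA for an L1-regularized least-squares model; a self-contained sketch with hand-set parameters (OSCNet's actual model and operators are more elaborate):

```python
import numpy as np

def soft_threshold(x: np.ndarray, t: float) -> np.ndarray:
    """Proximal operator of the L1 norm (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A: np.ndarray, y: np.ndarray, lam: float = 0.1, n_iter: int = 200) -> np.ndarray:
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    Deep unfolding replaces each fixed iteration below with a learned
    network stage; here the step size and threshold stay hand-set.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L from the largest singular value
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

y = np.array([1.0, 0.05, -2.0, 0.0, 0.5])
x = ista(np.eye(5), y, lam=0.1)  # with A = I this reduces to soft thresholding
```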
28
Puvanasunthararajah S, Camps SM, Wille ML, Fontanarosa D. Deep learning-based ultrasound transducer induced CT metal artifact reduction using generative adversarial networks for ultrasound-guided cardiac radioablation. Phys Eng Sci Med 2023; 46:1399-1410. [PMID: 37548887] [DOI: 10.1007/s13246-023-01307-7]
Abstract
In US-guided cardiac radioablation, a possible workflow includes simultaneous US and planning CT acquisitions, which can result in US transducer-induced metal artifacts on the planning CT scans. To reduce the impact of these artifacts, a metal artifact reduction (MAR) algorithm based on a deep learning generative adversarial network, called Cycle-MAR, has been developed and compared with iMAR (Siemens), O-MAR (Philips), MDT (ReVision Radiology), and CCS-MAR (Combined Clustered Scan-based MAR). Cycle-MAR was trained with a supervised learning scheme using sets of paired clinical CT scans with and without simulated artifacts. It was then evaluated on CT scans with real artifacts of an anthropomorphic phantom, and on sets of clinical CT scans with simulated artifacts that were not used for Cycle-MAR training. Image quality metrics and HU value-based analysis were used to evaluate the performance of Cycle-MAR compared to the other algorithms. The proposed Cycle-MAR network effectively reduces the negative impact of the metal artifacts. For example, the calculated HU value improvement percentage for the cardiac structures in the clinical CT scans was 59.58%, 62.22%, and 72.84% after MDT, CCS-MAR, and Cycle-MAR application, respectively. The application of MAR algorithms reduces the impact of US transducer-induced metal artifacts on CT scans. Compared with iMAR, O-MAR, MDT, and CCS-MAR, the developed Cycle-MAR network performs better in reducing these metal artifacts on CT scans.
Affiliation(s)
- Sathyathas Puvanasunthararajah
- School of Clinical Sciences, Queensland University of Technology, Brisbane, QLD, Australia.
- Centre for Biomedical Technologies, Queensland University of Technology, Brisbane, QLD, Australia.
- Marie-Luise Wille
- Centre for Biomedical Technologies, Queensland University of Technology, Brisbane, QLD, Australia
- School of Mechanical, Medical & Process Engineering, Faculty of Engineering, Queensland University of Technology, Brisbane, QLD, Australia
- ARC ITTC for Multiscale 3D Imaging, Modelling, and Manufacturing, Queensland University of Technology, Brisbane, QLD, Australia
- Davide Fontanarosa
- School of Clinical Sciences, Queensland University of Technology, Brisbane, QLD, Australia
- Centre for Biomedical Technologies, Queensland University of Technology, Brisbane, QLD, Australia
29
Wu Q, Ji X, Gu Y, Xiang J, Quan G, Li B, Zhu J, Coatrieux G, Coatrieux JL, Chen Y. Unsharp Structure Guided Filtering for Self-Supervised Low-Dose CT Imaging. IEEE Trans Med Imaging 2023; 42:3283-3294. [PMID: 37235462] [DOI: 10.1109/tmi.2023.3280217]
Abstract
Low-dose computed tomography (LDCT) imaging faces great challenges. Although supervised learning has revealed great potential, it requires sufficient and high-quality references for network training. Therefore, existing deep learning methods have been sparingly applied in clinical practice. To this end, this paper presents a novel Unsharp Structure Guided Filtering (USGF) method, which can reconstruct high-quality CT images directly from low-dose projections without clean references. Specifically, we first employ low-pass filters to estimate the structure priors from the input LDCT images. Then, inspired by classical structure transfer techniques, deep convolutional networks are adopted to implement our imaging method which combines guided filtering and structure transfer. Finally, the structure priors serve as the guidance images to alleviate over-smoothing, as they can transfer specific structural characteristics to the generated images. Furthermore, we incorporate traditional FBP algorithms into self-supervised training to enable the transformation of projection domain data to the image domain. Extensive comparisons and analyses on three datasets demonstrate that the proposed USGF has achieved superior performance in terms of noise suppression and edge preservation, and could have a significant impact on LDCT imaging in the future.
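The guided filtering that USGF builds on is the classic edge-preserving filter of He et al.: within each local window the output is a linear function of the guidance image, so structure in the guide transfers to the result. A compact numpy sketch with hand-set radius and regularization (USGF replaces these fixed pieces with learned networks and structure priors):

```python
import numpy as np

def box(img: np.ndarray, r: int) -> np.ndarray:
    """Mean filter over a (2r+1) x (2r+1) window via cumulative sums."""
    pad = np.pad(img, r, mode="edge")
    c = np.pad(pad.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    k = 2 * r + 1
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(guide: np.ndarray, src: np.ndarray,
                  r: int = 2, eps: float = 1e-3) -> np.ndarray:
    """Classic guided filter: output is locally a linear function of `guide`.

    `eps` trades edge preservation against smoothing.
    """
    mean_g, mean_s = box(guide, r), box(src, r)
    cov = box(guide * src, r) - mean_g * mean_s
    var = box(guide * guide, r) - mean_g * mean_g
    a = cov / (var + eps)
    b = mean_s - a * mean_g
    return box(a, r) * guide + box(b, r)

g = np.random.default_rng(2).random((16, 16))
self_filtered = guided_filter(g, g)  # nearly identity when variance >> eps
```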
30
Liang D, Zhang S, Zhao Z, Wang G, Sun J, Zhao J, Li W, Xu LX. Two-stage generative adversarial networks for metal artifact reduction and visualization in ablation therapy of liver tumors. Int J Comput Assist Radiol Surg 2023; 18:1991-2000. [PMID: 37391537] [DOI: 10.1007/s11548-023-02986-z]
Abstract
PURPOSE The strong metal artifacts produced by the electrode needle cause poor image quality, preventing physicians from observing the surgical situation during the puncture process. To address this issue, we propose a metal artifact reduction and visualization framework for CT-guided ablation therapy of liver tumors. METHODS Our framework contains a metal artifact reduction model and an ablation therapy visualization model. A two-stage generative adversarial network is proposed to reduce the metal artifacts of intraoperative CT images and avoid image blurring. To visualize the puncture process, the axis and tip of the needle are localized, and the needle is then rebuilt in 3D space intraoperatively. RESULTS Experiments show that our proposed metal artifact reduction method achieves higher SSIM (0.891) and PSNR (26.920) values than the state-of-the-art methods. The accuracy of ablation needle reconstruction averages 2.76 mm for needle tip localization and 1.64° for needle axis localization. CONCLUSION We propose a novel metal artifact reduction and ablation therapy visualization framework for CT-guided ablation therapy of liver cancer. The experiment results indicate that our approach can reduce metal artifacts and improve image quality. Furthermore, our proposed method demonstrates the potential for displaying the relative position of the tumor and the needle intraoperatively.
Affiliation(s)
- Duan Liang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Shunan Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Ziqi Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Guangzhi Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Jianqi Sun
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Wentao Li
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200240, China
- Lisa X Xu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
31
Wang T, Yu H, Wang Z, Chen H, Liu Y, Lu J, Zhang Y. SemiMAR: Semi-Supervised Learning for CT Metal Artifact Reduction. IEEE J Biomed Health Inform 2023; 27:5369-5380. [PMID: 37669208] [DOI: 10.1109/jbhi.2023.3312292]
Abstract
Metal artifacts lead to CT imaging quality degradation. With the success of deep learning (DL) in medical imaging, a number of DL-based supervised methods have been developed for metal artifact reduction (MAR). Nonetheless, fully-supervised MAR methods based on simulated data do not perform well on clinical data due to the domain gap. Although this problem can be avoided in an unsupervised way to a certain degree, severe artifacts cannot be well suppressed in clinical practice. Recently, semi-supervised MAR methods have gained wide attention due to their ability to narrow the domain gap and improve MAR performance on clinical data. However, these methods typically require large model sizes, posing challenges for optimization. To address this issue, we propose a novel semi-supervised MAR framework. In our framework, only the artifact-free parts are learned, and the artifacts are inferred by subtracting these clean parts from the metal-corrupted CT images. Our approach leverages a single generator to execute all complex transformations, thereby reducing the model's scale and preventing overlap between the clean parts and artifacts. To recover more tissue details, we distill the knowledge from an advanced dual-domain MAR network into our model in both the image domain and the latent feature space. The latent space constraint is achieved via contrastive learning. We also evaluate the impact of different generator architectures by investigating several mainstream deep learning-based MAR backbones. Our experiments demonstrate that the proposed method competes favorably with several state-of-the-art semi-supervised MAR techniques in both qualitative and quantitative aspects.
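The contrastive latent-space constraint mentioned above is commonly implemented with an InfoNCE-style loss: the anchor feature is pulled toward a positive feature and pushed away from negatives. A minimal single-anchor sketch with hand-set 2-D vectors (SemiMAR's exact formulation and temperature are not specified here):

```python
import numpy as np

def info_nce(anchor: np.ndarray, positive: np.ndarray,
             negatives, tau: float = 0.1) -> float:
    """InfoNCE loss for one anchor: -log softmax of the positive similarity.

    Cosine similarities scaled by a temperature `tau`; the feature
    vectors below are purely illustrative.
    """
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    sims = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives])
    logits = sims / tau
    logits -= logits.max()  # numerical stability
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))

anchor = np.array([1.0, 0.0])
loss_aligned = info_nce(anchor, np.array([1.0, 0.0]),
                        [np.array([0.0, 1.0]), np.array([0.0, -1.0])])
loss_misaligned = info_nce(anchor, np.array([0.0, 1.0]), [np.array([1.0, 0.0])])
```

The loss is near zero when the anchor matches its positive and grows when a negative is more similar than the positive, which is what drives matched latent features together.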
32
Zhang J, Sun K, Yang J, Hu Y, Gu Y, Cui Z, Zong X, Gao F, Shen D. A generalized dual-domain generative framework with hierarchical consistency for medical image reconstruction and synthesis. Commun Eng 2023; 2:72. [PMCID: PMC10956005] [DOI: 10.1038/s44172-023-00121-z]
Abstract
Medical image reconstruction and synthesis are critical for imaging quality, disease diagnosis and treatment. Most of the existing generative models ignore the fact that medical imaging usually occurs in the acquisition domain, which is different from, but associated with, the image domain. Such methods exploit either single-domain or dual-domain information and suffer from inefficient information coupling across domains. Moreover, these models are usually designed specifically and are not general enough for different tasks. Here we present a generalized dual-domain generative framework to facilitate the connections within and across domains through elaborately designed hierarchical consistency constraints. A multi-stage learning strategy is proposed to construct hierarchical constraints effectively and stably. We conducted experiments for representative generative tasks including low-dose PET/CT reconstruction, CT metal artifact reduction, fast MRI reconstruction, and PET/CT synthesis. All these tasks share the same framework and achieve better performance, which validates the effectiveness of our framework. This technology is expected to be applied in clinical imaging to increase diagnosis efficiency and accuracy. A framework applicable across imaging modalities can improve medical imaging reconstruction efficiency, but this is hindered by inefficient information communication between the data acquisition and imaging domains. Here, Jiadong Zhang and coworkers report a dual-domain generative framework that explores the underlying patterns across domains and apply their method to routine imaging modalities (computed tomography, positron emission tomography, magnetic resonance imaging) under one framework.
Affiliation(s)
- Jiadong Zhang: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Kaicong Sun: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Junwei Yang: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China; Department of Computer Science and Technology, University of Cambridge, Cambridge, CB2 1TN, UK
- Yan Hu: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China; School of Computer Science and Engineering, The University of New South Wales, Sydney, NSW 2052, Australia
- Yuning Gu: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Zhiming Cui: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Xiaopeng Zong: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Fei Gao: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Dinggang Shen: School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., 200230 Shanghai, China; Shanghai Clinical Research and Trial Center, 200052 Shanghai, China
33
Li G, Ji L, You C, Gao S, Zhou L, Bai K, Luo S, Gu N. MARGANVAC: metal artifact reduction method based on generative adversarial network with variable constraints. Phys Med Biol 2023; 68:205005. [PMID: 37696272 DOI: 10.1088/1361-6560/acf8ac] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Accepted: 09/11/2023] [Indexed: 09/13/2023]
Abstract
Objective. Metal artifact reduction (MAR) has been a key issue in CT imaging. Recently, MAR methods based on deep learning have achieved promising results. However, deploying deep learning-based MAR in real-world clinical scenarios raises two prominent challenges. One is the lack of paired training data in real applications, which limits the practicality of supervised methods. The other is that image-domain methods, though applicable to more scenarios, fall short in performance, while better-performing end-to-end approaches are restricted to fan-beam CT by their large memory consumption. Approach. We propose a novel image-domain MAR method based on a generative adversarial network with variable constraints (MARGANVAC) to improve MAR performance. The proposed variable constraint is a time-varying cost function that relaxes the fidelity constraint at the beginning of training and gradually strengthens it as training progresses. To better deploy our image-domain supervised method in practical scenarios, we develop a transfer method that mimics real metal artifacts by first extracting the real metal traces and then adding them to artifact-free images to generate paired training data. Main results. The effectiveness of the proposed method is validated in simulated fan-beam experiments and real cone-beam experiments. All quantitative and qualitative results demonstrate that the proposed method achieves superior performance compared with the competing methods. Significance. The MARGANVAC model proposed in this paper is an image-domain model that can be conveniently applied to various scenarios such as fan-beam and cone-beam CT, while its performance is on par with cutting-edge dual-domain MAR approaches. In addition, the proposed metal artifact transfer method can easily generate paired data with real artifact features, which can be better used for model training in real scenarios.
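The "variable constraint" described above is a time-varying fidelity weight. A minimal sketch, assuming a linear ramp and illustrative weight bounds (the paper's exact schedule may differ):

```python
import numpy as np

def fidelity_weight(epoch, total_epochs, w_min=0.1, w_max=10.0):
    """Time-varying fidelity weight: relaxed early in training,
    gradually strengthened later. The linear ramp and the bounds
    w_min/w_max are illustrative assumptions."""
    t = epoch / max(total_epochs - 1, 1)  # training progress in [0, 1]
    return w_min + (w_max - w_min) * t

def margan_style_loss(adv_loss, fidelity_loss, epoch, total_epochs):
    # Generator objective: adversarial term plus the scheduled fidelity term.
    return adv_loss + fidelity_weight(epoch, total_epochs) * fidelity_loss
```

Early on the generator is free to explore artifact-free appearances; as the weight grows, outputs are pulled increasingly toward the reference images.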
Affiliation(s)
- Guang Li: Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Longyin Ji: Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Chenyu You: Image Processing and Analysis Group (IPAG), Yale University, New Haven 06510, United States of America
- Shuai Gao: Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Langrui Zhou: Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Keshu Bai: Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Shouhua Luo: Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Ning Gu: Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
34
Amirian M, Montoya-Zegarra JA, Herzig I, Eggenberger Hotz P, Lichtensteiger L, Morf M, Züst A, Paysan P, Peterlik I, Scheib S, Füchslin RM, Stadelmann T, Schilling FP. Mitigation of motion-induced artifacts in cone beam computed tomography using deep convolutional neural networks. Med Phys 2023; 50:6228-6242. [PMID: 36995003 DOI: 10.1002/mp.16405] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2022] [Revised: 02/25/2023] [Accepted: 03/19/2023] [Indexed: 03/31/2023] Open
Abstract
BACKGROUND Cone beam computed tomography (CBCT) is often employed on radiation therapy treatment devices (linear accelerators) used in image-guided radiation therapy (IGRT). For each treatment session, it is necessary to obtain the image of the day in order to accurately position the patient and to enable adaptive treatment capabilities including auto-segmentation and dose calculation. Reconstructed CBCT images often suffer from artifacts, in particular those induced by patient motion. Deep-learning based approaches promise ways to mitigate such artifacts. PURPOSE We propose a novel deep-learning based approach with the goal to reduce motion induced artifacts in CBCT images and improve image quality. It is based on supervised learning and includes neural network architectures employed as pre- and/or post-processing steps during CBCT reconstruction. METHODS Our approach is based on deep convolutional neural networks which complement the standard CBCT reconstruction, which is performed either with the analytical Feldkamp-Davis-Kress (FDK) method, or with an iterative algebraic reconstruction technique (SART-TV). The neural networks, which are based on refined U-net architectures, are trained end-to-end in a supervised learning setup. Labeled training data are obtained by means of a motion simulation, which uses the two extreme phases of 4D CT scans, their deformation vector fields, as well as time-dependent amplitude signals as input. The trained networks are validated against ground truth using quantitative metrics, as well as by using real patient CBCT scans for a qualitative evaluation by clinical experts. 
RESULTS The presented approach generalizes to unseen data and yields significant reductions in motion-induced artifacts as well as improvements in image quality compared with existing state-of-the-art CBCT reconstruction algorithms (up to +6.3 dB in peak signal-to-noise ratio, PSNR, and +0.19 in structural similarity index measure, SSIM), as evidenced by validation on an unseen test dataset and confirmed by a clinical evaluation on real patient scans (up to 74% preference for motion artifact reduction over standard reconstruction). CONCLUSIONS For the first time, it is demonstrated, including by clinical evaluation, that inserting deep neural networks as pre- and post-processing plugins into an existing 3D CBCT reconstruction pipeline and training them end-to-end yields significant improvements in image quality and reductions in motion artifacts.
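PSNR, one of the two metrics reported above, reduces to a log-scaled mean squared error. A minimal sketch; defaulting `data_range` to the reference image's dynamic range is an assumption that may differ from the authors' convention:

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference image and a
    test image. Returns infinity for identical images."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    if data_range is None:
        # Assumption: use the reference's dynamic range as the peak value.
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10((data_range ** 2) / mse)
```

So a "+6.3 dB" gain corresponds to roughly a fourfold reduction in mean squared error relative to the baseline reconstruction.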
Affiliation(s)
- Mohammadreza Amirian: Centre for Artificial Intelligence CAI, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland; Institute of Neural Information Processing, Ulm University, Ulm, Germany
- Javier A Montoya-Zegarra: Centre for Artificial Intelligence CAI, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Ivo Herzig: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Peter Eggenberger Hotz: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Lukas Lichtensteiger: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Marco Morf: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Alexander Züst: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Pascal Paysan: Varian Medical Systems Imaging Laboratory GmbH, Baden, Switzerland
- Igor Peterlik: Varian Medical Systems Imaging Laboratory GmbH, Baden, Switzerland
- Stefan Scheib: Varian Medical Systems Imaging Laboratory GmbH, Baden, Switzerland
- Rudolf Marcel Füchslin: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland; European Centre for Living Technology, Venice, Italy
- Thilo Stadelmann: Centre for Artificial Intelligence CAI, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland; European Centre for Living Technology, Venice, Italy
- Frank-Peter Schilling: Centre for Artificial Intelligence CAI, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
35
Wu B, Li C, Zhang J, Lai H, Feng Q, Huang M. Unsupervised dual-domain disentangled network for removal of rigid motion artifacts in MRI. Comput Biol Med 2023; 165:107373. [PMID: 37611424 DOI: 10.1016/j.compbiomed.2023.107373] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Revised: 07/28/2023] [Accepted: 08/12/2023] [Indexed: 08/25/2023]
Abstract
Motion artifacts in magnetic resonance imaging (MRI) have always been a serious issue because they can affect subsequent diagnosis and treatment. Supervised deep learning methods have been investigated for the removal of motion artifacts; however, they require paired data that are difficult to obtain in clinical settings. Unsupervised methods have been widely proposed to make full use of unpaired clinical data, but they generally focus on anatomical structures in the spatial domain while ignoring the phase error (deviations or inaccuracies in phase information, possibly caused by rigid motion during image acquisition) provided by the frequency domain. In this study, a 2D unsupervised deep learning method named unsupervised disentangled dual-domain network (UDDN) was proposed to effectively disentangle and remove unwanted rigid motion artifacts from images. In UDDN, a dual-domain encoding module was presented to capture complementary information from the spatial and frequency domains. Moreover, a cross-domain attention fusion module was proposed to effectively fuse information from different domains, reduce information redundancy, and improve motion artifact removal. UDDN was validated on a publicly available dataset and a clinical dataset. Qualitative and quantitative experimental results showed that the method can effectively remove motion artifacts and reconstruct image details. Moreover, the performance of UDDN surpasses that of several state-of-the-art unsupervised methods and is comparable with that of the supervised method. Therefore, the method has great potential for clinical application in MRI, such as real-time removal of rigid motion artifacts.
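The dual-domain idea above — spatial content plus spectral magnitude and phase, with rigid motion showing up mainly as phase error — can be sketched with a plain FFT. Raw arrays stand in for the learned encodings; this is an illustration of the information split, not the UDDN architecture:

```python
import numpy as np

def dual_domain_views(image):
    """Return the spatial view of an image alongside its frequency-domain
    magnitude and phase. Rigid motion during acquisition mainly perturbs
    the phase component."""
    spectrum = np.fft.fft2(image)
    return image, np.abs(spectrum), np.angle(spectrum)

def phase_error(clean, corrupted):
    """Mean absolute (wrapped) phase deviation between two images' spectra."""
    _, _, p_clean = dual_domain_views(clean)
    _, _, p_corr = dual_domain_views(corrupted)
    # Wrap the difference into (-pi, pi] before averaging.
    return np.mean(np.abs(np.angle(np.exp(1j * (p_clean - p_corr)))))
```

A circular shift of the image (a crude proxy for rigid motion) leaves the spectral magnitude unchanged but produces a nonzero phase error, which is exactly the signal the frequency-domain branch is meant to expose.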
Affiliation(s)
- Boya Wu: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Caixia Li: Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Jiawei Zhang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Haoran Lai: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Qianjin Feng: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Meiyan Huang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
36
Li M, Wang J, Chen Y, Tang Y, Wu Z, Qi Y, Jiang H, Zheng J, Tsui BMW. Low-Dose CT Image Synthesis for Domain Adaptation Imaging Using a Generative Adversarial Network With Noise Encoding Transfer Learning. IEEE Trans Med Imaging 2023; 42:2616-2630. [PMID: 37030685 DOI: 10.1109/tmi.2023.3261822] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Deep learning (DL) based image processing methods have been successfully applied to low-dose x-ray images under the assumption that the feature distribution of the training data is consistent with that of the test data. However, low-dose computed tomography (LDCT) images from different commercial scanners may contain different amounts and types of image noise, violating this assumption. Moreover, when DL-based image processing methods are applied to LDCT, the feature distributions of simulated and clinical LDCT images can differ substantially. Therefore, network models trained with simulated image data or LDCT images from one specific scanner may not work well for another CT scanner and image processing task. To solve this domain adaptation problem, in this study, a novel generative adversarial network (GAN) with noise encoding transfer learning (NETL), or GAN-NETL, is proposed to generate a paired dataset with a different noise style. Specifically, we propose a noise encoding operator and incorporate it into the generator to extract a noise style. Meanwhile, using a transfer learning (TL) approach, the image noise encoding operator transforms the noise type of the source domain into that of the target domain for realistic noise generation. One public and two private datasets are used to evaluate the proposed method. Experimental results demonstrate the feasibility and effectiveness of the proposed GAN-NETL model in LDCT image synthesis. In addition, we conducted an additional image denoising study using the synthesized clinical LDCT data, which verified the merit of the proposed synthesis in improving the performance of DL-based LDCT processing methods.
37
Du M, Liang K, Zhang L, Gao H, Liu Y, Xing Y. Deep-Learning-Based Metal Artefact Reduction With Unsupervised Domain Adaptation Regularization for Practical CT Images. IEEE Trans Med Imaging 2023; 42:2133-2145. [PMID: 37022909 DOI: 10.1109/tmi.2023.3244252] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
CT metal artefact reduction (MAR) methods based on supervised deep learning are often troubled by the domain gap between simulated training datasets and real-application datasets, i.e., methods trained on simulation cannot generalize well to practical data. Unsupervised MAR methods can be trained directly on practical data, but they learn MAR with indirect metrics and often perform unsatisfactorily. To tackle the domain gap problem, we propose a novel MAR method called UDAMAR based on unsupervised domain adaptation (UDA). Specifically, we introduce a UDA regularization loss into a typical image-domain supervised MAR method, which mitigates the domain discrepancy between simulated and practical artefacts by feature-space alignment. Our adversarial-based UDA focuses on a low-level feature space where the domain difference of metal artefacts mainly lies. UDAMAR can simultaneously learn MAR from simulated data with known labels and extract critical information from unlabeled practical data. Experiments on both clinical dental and torso datasets show the superiority of UDAMAR, which outperforms its supervised backbone and two state-of-the-art unsupervised methods. We carefully analyze UDAMAR through experiments on simulated metal artefacts and various ablation studies. On simulation, its performance close to the supervised methods and its advantages over the unsupervised methods justify its efficacy. Ablation studies on the influence of the UDA regularization loss weight, the UDA feature layers, and the amount of practical data used for training further demonstrate the robustness of UDAMAR. UDAMAR provides a simple and clean design and is easy to implement. These advantages make it a very feasible solution for practical CT MAR.
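UDAMAR attaches a UDA regularizer to a supervised image-domain MAR loss. The sketch below swaps the paper's adversarial discriminator for a much simpler moment-matching (mean-discrepancy) penalty, purely to show where such a term attaches in the objective; the real method aligns low-level features adversarially:

```python
import numpy as np

def uda_regularizer(feat_sim, feat_real):
    """Domain-discrepancy penalty on feature batches (rows = samples).
    Stand-in for the adversarial alignment loss: squared distance
    between the mean features of the simulated and practical domains."""
    return np.sum((feat_sim.mean(axis=0) - feat_real.mean(axis=0)) ** 2)

def udamar_style_loss(sup_loss, feat_sim, feat_real, lam=0.1):
    # Supervised MAR loss on labeled simulated pairs, plus the domain
    # alignment term computed on features from both domains. The weight
    # lam is an illustrative assumption (the paper ablates this weight).
    return sup_loss + lam * uda_regularizer(feat_sim, feat_real)
```

The key point is that only the simulated branch needs labels; the unlabeled practical images contribute solely through the alignment term.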
38
Wang J, Tang Y, Wu Z, Du Q, Yao L, Yang X, Li M, Zheng J. A self-supervised guided knowledge distillation framework for unpaired low-dose CT image denoising. Comput Med Imaging Graph 2023; 107:102237. [PMID: 37116340 DOI: 10.1016/j.compmedimag.2023.102237] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2022] [Revised: 03/21/2023] [Accepted: 04/13/2023] [Indexed: 04/30/2023]
Abstract
Low-dose computed tomography (LDCT) can significantly reduce X-ray damage to the human body, but reducing the CT dose produces images with severe noise and artifacts, which affect clinical diagnosis. Recently, deep learning has attracted increasing attention from researchers. However, most denoising networks applied to deep learning-based LDCT imaging are supervised methods, which require paired data for network training. In a realistic imaging scenario, obtaining well-aligned image pairs is challenging due to errors in table re-positioning and the patient's physiological movement during data acquisition. In contrast, unpaired learning methods can overcome these drawbacks of supervised learning, making it more feasible to collect unpaired training data in most real-world imaging applications. In this study, we develop a novel unpaired learning framework, Self-Supervised Guided Knowledge Distillation (SGKD), which enables the guidance of supervised learning using the results generated by self-supervised learning. The proposed SGKD scheme contains two stages of network training. First, LDCT image quality is improved by the designed self-supervised cycle network, which also produces two complementary training datasets from the unpaired LDCT and NDCT images. Second, a knowledge distillation strategy with the above two datasets is exploited to further improve LDCT image denoising performance. To evaluate the effectiveness and feasibility of the proposed method, extensive experiments were performed on the simulated AAPM challenge dataset and real-world clinical LDCT datasets. The qualitative and quantitative results show that the proposed SGKD achieves better performance in terms of noise suppression and detail preservation than several state-of-the-art network models.
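The second-stage guidance can be pictured as a distillation objective in which the stage-one self-supervised outputs act as the teacher. The L2 form and the equal mixing weight below are assumptions for illustration, not the paper's exact losses:

```python
import numpy as np

def distillation_loss(student_out, teacher_out, target=None, alpha=0.5):
    """Stage-two sketch: the student network is pulled toward the
    stage-one (self-supervised) teacher's denoised output, optionally
    mixed with a pseudo-target from the complementary dataset."""
    kd = np.mean((student_out - teacher_out) ** 2)  # teacher guidance
    if target is None:
        return kd
    sup = np.mean((student_out - target) ** 2)      # pseudo-supervised term
    return alpha * kd + (1 - alpha) * sup           # alpha is an assumption
```

The distillation term is what lets unpaired data supervise the final network: the teacher, trained without pairs, supplies the targets the student learns from.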
Affiliation(s)
- Jiping Wang: Institute of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Yufei Tang: Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Zhongyi Wu: Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Qiang Du: Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Libing Yao: Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Xiaodong Yang: Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Ming Li: Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Jian Zheng: Institute of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
39
Selles M, Slotman DJ, van Osch JAC, Nijholt IM, Wellenberg RHH, Maas M, Boomsma MF. Is AI the way forward for reducing metal artifacts in CT? development of a generic deep learning-based method and initial evaluation in patients with sacroiliac joint implants. Eur J Radiol 2023; 163:110844. [PMID: 37119708 DOI: 10.1016/j.ejrad.2023.110844] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Revised: 04/13/2023] [Accepted: 04/17/2023] [Indexed: 05/01/2023]
Abstract
PURPOSE To develop a deep learning-based metal artifact reduction technique (dl-MAR) and quantitatively compare metal artifacts on dl-MAR-corrected CT-images, orthopedic metal artifact reduction (O-MAR)-corrected CT-images and uncorrected CT-images after sacroiliac (SI) joint fusion. METHODS dl-MAR was trained on CT-images with simulated metal artifacts. Pre-surgery CT-images and uncorrected, O-MAR-corrected and dl-MAR-corrected post-surgery CT-images of twenty-five patients undergoing SI joint fusion were retrospectively obtained. Image registration was applied to align pre-surgery with post-surgery CT-images within each patient, allowing placement of regions of interest (ROIs) on the same anatomical locations. Six ROIs were placed on the metal implant and the contralateral side in bone lateral of the SI joint, the gluteus medius muscle and the iliacus muscle. Metal artifacts were quantified as the difference in Hounsfield units (HU) between pre- and post-surgery CT-values within the ROIs on the uncorrected, O-MAR-corrected and dl-MAR-corrected images. Noise was quantified as standard deviation in HU within the ROIs. Metal artifacts and noise in the post-surgery CT-images were compared using linear multilevel regression models. RESULTS Metal artifacts were significantly reduced by O-MAR and dl-MAR in bone (p < 0.001), contralateral bone (O-MAR: p = 0.009; dl-MAR: p < 0.001), gluteus medius (p < 0.001), contralateral gluteus medius (p < 0.001), iliacus (p < 0.001) and contralateral iliacus (O-MAR: p = 0.024; dl-MAR: p < 0.001) compared to uncorrected images. Images corrected with dl-MAR resulted in stronger artifact reduction than images corrected with O-MAR in contralateral bone (p < 0.001), gluteus medius (p = 0.006), contralateral gluteus medius (p < 0.001), iliacus (p = 0.017), and contralateral iliacus (p < 0.001). 
Noise was reduced by O-MAR in bone (p = 0.009) and gluteus medius (p < 0.001) while noise was reduced by dl-MAR in all ROIs (p < 0.001) in comparison to uncorrected images. CONCLUSION dl-MAR showed superior metal artifact reduction compared to O-MAR in CT-images with SI joint fusion implants.
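The artifact and noise measurements described above reduce to simple ROI statistics on registered pre- and post-surgery images: the artifact score is the shift in mean HU within an ROI, and noise is the HU standard deviation. A minimal sketch of that quantification:

```python
import numpy as np

def artifact_severity(pre_ct, post_ct, roi_mask):
    """Quantify metal artifact and noise within one ROI, as in the study:
    artifact = difference of mean HU between registered post- and
    pre-surgery images; noise = post-surgery HU standard deviation.
    Assumes the two volumes are already registered voxel-to-voxel."""
    roi_pre = pre_ct[roi_mask]
    roi_post = post_ct[roi_mask]
    artifact = roi_post.mean() - roi_pre.mean()  # mean HU shift from implant
    noise = roi_post.std()                       # HU standard deviation
    return artifact, noise
```

Repeating this per ROI (bone, gluteus medius, iliacus, and their contralateral counterparts) and per correction method reproduces the kind of table the regression models above were fitted on.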
Affiliation(s)
- Mark Selles: Department of Radiology, Isala, 8025 AB Zwolle, the Netherlands; Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands
- Derk J Slotman: Department of Radiology, Isala, 8025 AB Zwolle, the Netherlands
- Ruud H H Wellenberg: Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands
- Mario Maas: Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands
40
Wang H, Li Y, Zhang H, Meng D, Zheng Y. InDuDoNet+: A deep unfolding dual domain network for metal artifact reduction in CT images. Med Image Anal 2023; 85:102729. [PMID: 36623381 DOI: 10.1016/j.media.2022.102729] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2021] [Revised: 11/27/2022] [Accepted: 12/09/2022] [Indexed: 12/25/2022]
Abstract
During the computed tomography (CT) imaging process, metallic implants within patients often cause harmful artifacts, which degrade the visual quality of reconstructed CT images and negatively affect subsequent clinical diagnosis. For the metal artifact reduction (MAR) task, current deep learning-based methods have achieved promising performance. However, most of them share two main limitations: (1) the CT physical imaging geometry constraint is not comprehensively incorporated into the deep network structure; (2) the entire framework has weak interpretability for the specific MAR task, so the role of each network module is difficult to evaluate. To alleviate these issues, in this paper we construct a novel deep unfolding dual-domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded. Concretely, we derive a joint spatial and Radon domain reconstruction model and propose an optimization algorithm with only simple operators for solving it. By unfolding the iterative steps of the proposed algorithm into corresponding network modules, we easily build InDuDoNet+ with clear interpretability. Furthermore, we analyze the CT values among different tissues and merge these prior observations into a prior network for InDuDoNet+, which significantly improves its generalization performance. Comprehensive experiments on synthesized and clinical data substantiate the superiority of the proposed method, as well as its generalization performance beyond current state-of-the-art (SOTA) MAR methods. Code is available at https://github.com/hongwang01/InDuDoNet_plus.
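Deep unfolding, the construction behind InDuDoNet+, maps each iteration of a model-based optimization scheme to one network stage. A generic sketch without the paper's prior network, with per-stage step sizes standing in for the learned parameters (the specific gradient scheme and step values are illustrative assumptions):

```python
import numpy as np

def unfolded_reconstruction(y, A, n_stages=5, steps=None):
    """Generic deep-unfolding sketch: each 'stage' mirrors one iteration
        x_{k+1} = x_k - eta_k * A^T (A x_k - y)
    of gradient descent on the data-fidelity term ||A x - y||^2, with the
    per-stage step sizes eta_k playing the role of learned parameters."""
    if steps is None:
        steps = [0.01] * n_stages          # stand-ins for learned eta_k
    x = np.zeros(A.shape[1])               # initial reconstruction
    for eta in steps[:n_stages]:
        residual = A @ x - y               # data-fidelity residual
        x = x - eta * (A.T @ residual)     # one unfolded stage
    return x
```

In the real network each stage additionally passes through learned modules (the prior network), which is what gives the unfolded architecture its interpretability: every block corresponds to a named step of the derived algorithm.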
Affiliation(s)
- Haimiao Zhang: Beijing Information Science and Technology University, Beijing, China
- Deyu Meng: Xi'an Jiaotong University, Xi'an, China; Peng Cheng Laboratory, Shenzhen, China; Macau University of Science and Technology, Taipa, Macao
41
Zhu M, Zhu Q, Song Y, Guo Y, Zeng D, Bian Z, Wang Y, Ma J. Physics-informed sinogram completion for metal artifact reduction in CT imaging. Phys Med Biol 2023; 68. [PMID: 36808913 DOI: 10.1088/1361-6560/acbddf] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 02/21/2023] [Indexed: 02/23/2023]
Abstract
Objective. Metal artifacts in computed tomography (CT) imaging are unavoidable and adversely affect clinical diagnosis and treatment outcomes. Most metal artifact reduction (MAR) methods tend to over-smooth and lose structural detail near the metal implants, especially for implants with irregular elongated shapes. To address this problem, we present the physics-informed sinogram completion (PISC) method for MAR in CT imaging, which reduces metal artifacts and recovers more structural texture. Approach. Specifically, the original uncorrected sinogram is first completed by a normalized linear interpolation algorithm to reduce metal artifacts. Simultaneously, the uncorrected sinogram is also corrected with a beam-hardening correction physical model, recovering the latent structural information in the metal trajectory region by leveraging the attenuation characteristics of different materials. Both corrected sinograms are fused with pixel-wise adaptive weights, which are manually designed according to the shape and material of the metal implants. To further reduce artifacts and improve CT image quality, a post-processing frequency-split algorithm is adopted to yield the final corrected CT image after reconstructing the fused sinogram. Main results. We qualitatively and quantitatively evaluated the presented PISC method on two simulated datasets and three real datasets. All results demonstrate that PISC can effectively correct metal implants with various shapes and materials, in terms of both artifact suppression and structure preservation. Significance. We propose a sinogram-domain MAR method that compensates for the over-smoothing problem of most MAR methods by exploiting physical prior knowledge, and that has the potential to improve the performance of deep learning-based MAR approaches.
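The first step above can be sketched as plain per-view linear interpolation across the metal trace in the sinogram; the normalization step and the physics-based fusion of PISC are omitted here, so this is the baseline LI-style completion, not the full method:

```python
import numpy as np

def li_sinogram_completion(sinogram, metal_trace):
    """Linear-interpolation sinogram completion: within each projection
    view (row), detector bins flagged by the boolean metal trace are
    replaced by values interpolated linearly from the nearest
    unaffected bins on either side."""
    completed = sinogram.astype(np.float64).copy()
    bins = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):          # loop over projection views
        bad = metal_trace[v]
        if bad.any() and not bad.all():
            completed[v, bad] = np.interp(bins[bad], bins[~bad],
                                          completed[v, ~bad])
    return completed
```

Because the interpolation bridges the metal trace smoothly, anatomical detail inside the trace is lost, which is precisely the over-smoothing that the beam-hardening-corrected sinogram is fused in to recover.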
Affiliation(s)
- Manman Zhu, Yuyan Song, Yi Guo, Dong Zeng, Zhaoying Bian, Yongbo Wang, Jianhua Ma: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Qisen Zhu: Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
42
Zhu Y, Zhao H, Wang T, Deng L, Yang Y, Jiang Y, Li N, Chan Y, Dai J, Zhang C, Li Y, Xie Y, Liang X. Sinogram domain metal artifact correction of CT via deep learning. Comput Biol Med 2023; 155:106710. [PMID: 36842222 DOI: 10.1016/j.compbiomed.2023.106710] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Revised: 02/12/2023] [Accepted: 02/19/2023] [Indexed: 02/22/2023]
Abstract
PURPOSE Metal artifacts can significantly degrade the quality of computed tomography (CT) images. They arise when X-rays penetrate implanted metals and undergo severe attenuation, and the resulting degradation can hinder subsequent clinical diagnosis and treatment planning. Beam-hardening artifacts often manifest as severe streak artifacts in the image domain, affecting the overall quality of the reconstructed CT image. In the sinogram domain, metal is typically confined to specific regions, and restricting processing to these regions preserves image information elsewhere, making the model more robust. To address this issue, we propose a region-based correction of beam-hardening artifacts in the sinogram domain using deep learning. METHODS We present a model composed of three modules: (a) a Sinogram Metal Segmentation Network (Seg-Net), (b) a Sinogram Enhancement Network (Sino-Net), and (c) a Fusion Module. The model first uses an Attention U-Net to segment the metal regions in the sinogram. The segmented metal regions are then interpolated to obtain a metal-free sinogram. Sino-Net is then applied to compensate for the loss of tissue and artifact information in the metal regions. The corrected metal sinogram and the interpolated metal-free sinogram are used to reconstruct the metal CT and metal-free CT images, respectively. Finally, the Fusion Module combines the two CT images to produce the result. RESULTS Our proposed method shows strong performance in both qualitative and quantitative evaluations. The peak signal-to-noise ratio (PSNR) of the CT image before and after correction was 18.22 and 30.32, respectively. The structural similarity index measure (SSIM) improved from 0.75 to 0.99, and the weighted peak signal-to-noise ratio (WPSNR) increased from 21.69 to 35.68.
CONCLUSIONS Our proposed method demonstrates reliable, high-accuracy correction of beam-hardening artifacts.
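PSNR figures such as the 18.22 to 30.32 improvement above follow the standard definition; a minimal NumPy version is below. The `data_range` default of 1.0 assumes normalized images, which is an assumption here, not something the paper states.

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB over a given dynamic range."""
    mse = np.mean((np.asarray(reference, dtype=np.float64) - test) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

For CT stored in Hounsfield units, `data_range` is usually set to the HU window width rather than 1.0, which shifts the absolute numbers but not the relative comparison between methods.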
Affiliation(s)
- Yulin Zhu: The First Dongguan Affiliated Hospital, Guangdong Medical University, Dongguan, 523808, China
- Hanqing Zhao, Tangsheng Wang, Lei Deng, Yupeng Yang, Yinping Chan, Jingjing Dai, Chulong Zhang, Yunhui Li, Yaoqin Xie, Xiaokun Liang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yuming Jiang: Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA
- Na Li: Department of Biomedical Engineering, Guangdong Medical University, Dongguan, 523808, China
43
Koetzier LR, Mastrodicasa D, Szczykutowicz TP, van der Werf NR, Wang AS, Sandfort V, van der Molen AJ, Fleischmann D, Willemink MJ. Deep Learning Image Reconstruction for CT: Technical Principles and Clinical Prospects. Radiology 2023; 306:e221257. [PMID: 36719287 PMCID: PMC9968777 DOI: 10.1148/radiol.221257] [Citation(s) in RCA: 115] [Impact Index Per Article: 57.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2022] [Revised: 09/26/2022] [Accepted: 10/13/2022] [Indexed: 02/01/2023]
Abstract
Filtered back projection (FBP) has been the standard CT image reconstruction method for 4 decades. A simple, fast, and reliable technique, FBP has delivered high-quality images in several clinical applications. However, with faster and more advanced CT scanners, FBP has become increasingly obsolete. Higher image noise and more artifacts are especially noticeable in lower-dose CT imaging using FBP. This performance gap was partly addressed by model-based iterative reconstruction (MBIR). Yet, its "plastic" image appearance and long reconstruction times have limited widespread application. Hybrid iterative reconstruction partially addressed these limitations by blending FBP with MBIR and is currently the state-of-the-art reconstruction technique. In the past 5 years, deep learning reconstruction (DLR) techniques have become increasingly popular. DLR uses artificial intelligence to reconstruct high-quality images from lower-dose CT faster than MBIR. However, the performance of DLR algorithms relies on the quality of data used for model training. Higher-quality training data will become available with photon-counting CT scanners. At the same time, spectral data would greatly benefit from the computational abilities of DLR. This review presents an overview of the principles, technical approaches, and clinical applications of DLR, including metal artifact reduction algorithms. In addition, emerging applications and prospects are discussed.
Affiliation(s)
- Timothy P. Szczykutowicz, Niels R. van der Werf, Adam S. Wang, Veit Sandfort, Aart J. van der Molen, Dominik Fleischmann, Martin J. Willemink
- From the Department of Radiology (L.R.K., D.M., A.S.W., V.S., D.F., M.J.W.) and Stanford Cardiovascular Institute (D.M., D.F., M.J.W.), Stanford University School of Medicine, 300 Pasteur Dr, Stanford, CA 94305-5105; Department of Radiology, University of Wisconsin–Madison, School of Medicine and Public Health, Madison, Wis (T.P.S.); Department of Radiology, Erasmus Medical Center, Rotterdam, the Netherlands (N.R.v.d.W.); Clinical Science Western Europe, Philips Healthcare, Best, the Netherlands (N.R.v.d.W.); and Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands (A.J.v.d.M.)
44
Wu P, Qiao Y, Chu M, Zhang S, Bai J, Gutierrez-Chico JL, Tu S. Reciprocal assistance of intravascular imaging in three-dimensional stent reconstruction: Using cross-modal translation based on disentanglement representation. Comput Med Imaging Graph 2023; 104:102166. [PMID: 36586195 DOI: 10.1016/j.compmedimag.2022.102166] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Revised: 12/21/2022] [Accepted: 12/21/2022] [Indexed: 12/27/2022]
Abstract
BACKGROUND Accurate and efficient three-dimensional (3D) reconstruction of coronary stents from intravascular imaging, optical coherence tomography (OCT) or intravascular ultrasound (IVUS), is important for the optimization of complex percutaneous coronary interventions (PCI). Deep learning has been used to address this technical challenge. However, manual annotation of stents is strenuous, especially in IVUS images. To this end, we aim to explore whether OCT and IVUS images can assist each other in stent 3D reconstruction when one modality lacks a labeled dataset. METHODS We first performed cross-modal translation between OCT and IVUS images, employing disentangled representation to generate synthetic images with good stent consistency. The reciprocal assistance of OCT and IVUS in stent 3D reconstruction was then evaluated by applying unsupervised and semi-supervised learning with the aid of the synthetic images. Stent consistency in synthetic images and reciprocal effectiveness in stent 3D reconstruction were quantitatively assessed by F1-Score (FS) on two datasets: OCT-High Definition IVUS (HD IVUS) and OCT-Conventional IVUS (IVUS). RESULTS The employment of disentangled representation achieved higher stent consistency in synthetic images (OCT to HD IVUS: FS=0.789 vs 0.684; HD IVUS to OCT: FS=0.766 vs 0.682; OCT to IVUS: FS=0.806 vs 0.664; IVUS to OCT: FS=0.724 vs 0.673). For stent 3D reconstruction, the assistance from synthetic images significantly promoted unsupervised adaptation across modalities (OCT to HD IVUS: FS=0.776 vs 0.109; HD IVUS to OCT: FS=0.826 vs 0.125; OCT to IVUS: FS=0.782 vs 0.068; IVUS to OCT: FS=0.815 vs 0.123), and improved performance in semi-supervised learning, especially when only limited labeled data were available.
CONCLUSION OCT and IVUS images can assist each other in stent 3D reconstruction through cross-modal translation, with stent consistency in the synthetic images maintained by disentangled representation.
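The F1-Score used throughout the results above is, for binary stent masks, the standard pixel-wise F1 (equivalent to the Dice coefficient); a minimal sketch, with the empty-mask convention chosen here as an assumption:

```python
import numpy as np

def f1_score(pred_mask, gt_mask):
    """Pixel-wise F1 (Dice) between binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```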
Affiliation(s)
- Peng Wu, Miao Chu, Su Zhang, Jingfeng Bai, Shengxian Tu: Biomedical Instrument Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yuchuan Qiao: Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
45
Chen J, Chen S, Wee L, Dekker A, Bermejo I. Deep learning based unpaired image-to-image translation applications for medical physics: a systematic review. Phys Med Biol 2023; 68. [PMID: 36753766 DOI: 10.1088/1361-6560/acba74] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 02/08/2023] [Indexed: 02/10/2023]
Abstract
Purpose. There is a growing number of publications on the application of unpaired image-to-image (I2I) translation in medical imaging. However, a systematic review covering the current state of this topic for medical physicists is lacking. The aim of this article is to provide a comprehensive review of current challenges and opportunities for medical physicists and engineers to apply I2I translation in practice. Methods and materials. The PubMed electronic database was searched using terms referring to unpaired (unsupervised) I2I translation and medical imaging. This review is reported in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. From each full-text article, we extracted information regarding technical and clinical applications of the methods, Transparent Reporting for Individual Prognosis Or Diagnosis (TRIPOD) study type, algorithm performance, and accessibility of source code and pre-trained models. Results. Among 461 unique records, 55 full-text articles were included in the review. The major technical applications described in the selected literature are segmentation (26 studies), unpaired domain adaptation (18 studies), and denoising (8 studies). In terms of clinical applications, unpaired I2I translation has been used for automatic contouring of regions of interest in MRI, CT, x-ray, and ultrasound images; fast MRI or low-dose CT imaging; and CT- or MRI-only radiotherapy planning, among others. Only 5 studies validated their models using an independent test set, and none were externally validated by independent researchers. Finally, 12 articles published their source code, and only one study published its pre-trained models. Conclusion. I2I translation of medical images offers a range of valuable applications for medical physicists.
However, the scarcity of external validation studies of I2I models and the shortage of publicly available pre-trained models limit the immediate applicability of the proposed methods in practice.
Affiliation(s)
- Junhua Chen, Shenlun Chen, Leonard Wee, Andre Dekker, Inigo Bermejo: Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
46
Zhang Z, Yang M, Li H, Chen S, Wang J, Xu L. An Innovative Low-dose CT Inpainting Algorithm based on Limited-angle Imaging Inpainting Model. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2023; 31:131-152. [PMID: 36373341 DOI: 10.3233/xst-221260] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
BACKGROUND With the popularity of the computed tomography (CT) technique, an increasing number of patients are receiving CT scans, and public attention to CT radiation dose is also increasing. How to obtain CT images suitable for clinical diagnosis while reducing the radiation dose has become a focus for researchers. OBJECTIVE To demonstrate that limited-angle CT imaging can be used to acquire lower-dose CT images, we propose a generative adversarial network-based image inpainting model, the Low-dose imaging and Limited-angle imaging inpainting Model (LDLAIM). This method can effectively restore low-dose CT images acquired with limited-angle imaging, which verifies that limited-angle CT imaging can be used to acquire low-dose CT images. METHODS In this work, we used three datasets: a chest and abdomen dataset, a head dataset, and a phantom dataset. They are used to synthesize low-dose and limited-angle CT images for network training. During the training stage, we divide each dataset into training, validation, and testing sets in an 8:1:1 ratio, validate after each epoch, and test after all training is complete. The proposed method is based on generative adversarial networks (GANs) and consists of a generator and a discriminator. The generator consists of residual blocks and an encoder-decoder with skip connections. RESULTS We use SSIM, PSNR, and RMSE to evaluate the performance of the proposed method. In the chest and abdomen dataset, the mean SSIM, PSNR, and RMSE on the testing set are 0.984, 35.385, and 0.017, respectively. In the head dataset, they are 0.981, 38.664, and 0.011, respectively. In the phantom dataset, they are 0.977, 33.468, and 0.022, respectively.
Comparison with other algorithms on these three datasets shows that the proposed method is superior on these metrics; it also achieved the highest subjective quality score. CONCLUSIONS Experimental results show that the proposed method can effectively restore CT images when low-dose and limited-angle CT imaging techniques are used simultaneously. This work proves that limited-angle CT imaging can be used to reduce CT radiation dose, and it also provides a new direction for research on low-dose CT imaging.
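The data-synthesis step that such studies rely on, truncating the angular range and injecting quantum noise, can be sketched as follows. The photon budget `dose_scale` and the keep-the-first-angles truncation scheme are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def synthesize_input(clean_sino, kept_angles, dose_scale, rng):
    """Truncate a clean sinogram to a limited angular range, then add
    Poisson (quantum) noise to emulate a low-dose acquisition."""
    sino = np.zeros_like(clean_sino, dtype=np.float64)
    sino[:kept_angles] = clean_sino[:kept_angles]   # limited-angle truncation
    counts = dose_scale * np.exp(-sino)             # ideal transmitted counts
    noisy = rng.poisson(counts).clip(min=1)         # quantum noise, avoid log(0)
    return -np.log(noisy / dose_scale)              # back to line integrals
```

Lowering `dose_scale` raises the relative Poisson noise, which is how a single parameter controls the simulated dose level.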
Affiliation(s)
- Ziheng Zhang, Huijuan Li: Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, Anhui, China; University of Science and Technology of China, Hefei, Anhui, China
- Minghan Yang, Shuai Chen, Jianye Wang: Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, Anhui, China
- Lei Xu: The First Affiliated Hospital of University of Science and Technology of China, Hefei, Anhui, China
47
Richtsmeier D, O'Connell J, Rodesch PA, Iniewski K, Bazalova-Carter M. Metal artifact correction in photon-counting detector computed tomography: metal trace replacement using high-energy data. Med Phys 2023; 50:380-396. [PMID: 36227611 DOI: 10.1002/mp.16049] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Revised: 09/23/2022] [Accepted: 09/28/2022] [Indexed: 01/25/2023] Open
Abstract
BACKGROUND Metal artifacts have been an outstanding issue in computed tomography (CT) since its first clinical use, and they continue to interfere with diagnosis. Metal artifact reduction (MAR) methods continue to be proposed, and photon-counting detectors (PCDs) have recently become a subject of research for this purpose. PCDs offer the ability to distinguish the energy of incident x-rays and sort them into a set number of energy bins. High-energy data captured using PCDs have been shown to reduce metal artifacts in reconstructions due to reduced beam hardening. PURPOSE High-energy reconstructions using PCD-CT have drawbacks, such as reduced image contrast and increased noise. Here, we demonstrate a MAR algorithm, trace replacement MAR (TRMAR), in which the data corrupted by metal artifacts in full-energy-spectrum projections are corrected using the high-energy data captured during the same scan. The resulting reconstructions offer MAR similar to that seen in high-energy reconstructions, but with improved image quality. METHODS Experimental data were collected using a bench-top PCD-CT system with a cadmium zinc telluride PCD. Simulations were performed to determine the optimal high-energy threshold and to test TRMAR on the XCAT phantom and a biological sample. For the experiments, a 100-mm diameter cylindrical phantom containing vials of water, two screws, various densities of Ca(ClO4)2, and a spatial resolution phantom was imaged with and without the screws. The screws were segmented in the initial reconstruction and forward projected to identify them in sinogram space in order to perform TRMAR. The resulting reconstructions were compared to the control and to reconstructions corrected using normalized metal artifact reduction (NMAR). Additionally, a beef short rib was imaged with and without metal to provide a more realistic phantom.
RESULTS XCAT simulations showed a reduction in the streak artifact from -978 HU in uncorrected images to -10 HU with TRMAR. The magnitude of the metal artifact in uncorrected images of the 100-mm phantom was -442 HU, compared to the desired -81 HU with no metal. TRMAR reduced the magnitude of the artifact to -142 HU, and NMAR reduced it to -96 HU. Relative image noise was reduced from 176% in the high-energy image to 56% using TRMAR. Density quantification was better with NMAR, with the Ca(ClO4)2 vial most affected by metal artifacts showing 0.8% error compared to 2.1% with TRMAR. Small features were preserved to a greater extent with TRMAR, with the limiting spatial frequency at 20% of the MTF fully maintained at 1.31 lp/mm, whereas with NMAR it was reduced to 1.22 lp/mm. Images of the beef short rib showed better delineation of the shape of the metal using TRMAR. CONCLUSIONS NMAR offers slightly better performance than TRMAR in streak reduction and image quality metrics. However, TRMAR is less susceptible to metal segmentation errors and can closely approximate NMAR's reduction of the streak metal artifact at one third the computation time. With the recent introduction of PCD-CT into the clinic, TRMAR offers notable potential for fast, effective MAR.
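The core trace-replacement idea can be sketched as pasting scaled high-energy-bin data into the metal trace of the full-spectrum sinogram. The single global scale factor below is a simplifying assumption for illustration, not the paper's calibration procedure.

```python
import numpy as np

def trace_replace(full_sino, high_sino, trace_mask):
    """Replace metal-trace samples of the full-spectrum sinogram with
    high-energy-bin data, scaled so the two agree outside the trace."""
    outside = ~trace_mask
    scale = full_sino[outside].mean() / high_sino[outside].mean()
    out = full_sino.astype(np.float64).copy()
    out[trace_mask] = scale * high_sino[trace_mask]
    return out
```

Because only the samples inside the trace are replaced, the lower contrast and higher noise of the high-energy bin are confined to the corrupted region, which is why the corrected reconstruction keeps near-full-spectrum image quality elsewhere.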
Affiliation(s)
- Devon Richtsmeier, Jericho O'Connell, Pierre-Antoine Rodesch: Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia, Canada
- Kris Iniewski: Redlen Technologies, Saanichton, British Columbia, Canada
48
Xie C, Hu Y, Han L, Fu J, Vardhanabhuti V, Yang H. Prediction of Individual Lymph Node Metastatic Status in Esophageal Squamous Cell Carcinoma Using Routine Computed Tomography Imaging: Comparison of Size-Based Measurements and Radiomics-Based Models. Ann Surg Oncol 2022; 29:8117-8126. [PMID: 36018524 DOI: 10.1245/s10434-022-12207-7] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Accepted: 06/08/2022] [Indexed: 12/29/2022]
Abstract
BACKGROUND Lymph node status is vital for prognosis and treatment decisions for esophageal squamous cell carcinoma (ESCC). This study aimed to construct and evaluate an optimal radiomics-based method for a more accurate evaluation of individual regional lymph node status in ESCC and to compare it with traditional size-based measurements. METHODS The study consecutively collected 3225 regional lymph nodes from 530 ESCC patients receiving upfront surgery from January 2011 to October 2015. Computed tomography (CT) scans for individual lymph nodes were analyzed. The study evaluated the predictive performance of machine-learning models trained on features extracted from two-dimensional (2D) and three-dimensional (3D) radiomics by different contouring methods. Robust and important radiomics features were selected, and classification models were further established and validated. RESULTS The lymph node metastasis rate was 13.2% (427/3225). The average short-axis diameter was 6.4 mm for benign lymph nodes and 7.9 mm for metastatic lymph nodes. The division of lymph node stations into five regions according to anatomic lymph node drainage (cervical, upper mediastinal, middle mediastinal, lower mediastinal, and abdominal regions) improved the predictive performance. The 2D radiomics method showed optimal diagnostic results, with more efficient segmentation of nodal lesions. In the test set, this optimal model achieved an area under the receiver operating characteristic curve of 0.841-0.891, an accuracy of 84.2-94.7%, a sensitivity of 65.7-83.3%, and a specificity of 84.4-96.7%. CONCLUSIONS The 2D radiomics-based models noninvasively predicted the metastatic status of an individual lymph node in ESCC and outperformed the conventional size-based measurement. The 2D radiomics-based model could be incorporated into the current clinical workflow to enable better decision-making for treatment strategies.
Affiliation(s)
- Chenyi Xie: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China; Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Pok Fu Lam, Hong Kong SAR, China
- Yihuai Hu: Department of Thoracic Surgery, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Esophageal Cancer Institute, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Thoracic Surgery, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Lujun Han: Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, China
- Jianhua Fu, Hong Yang: Department of Thoracic Surgery, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Esophageal Cancer Institute, Sun Yat-sen University Cancer Center, Guangzhou, China
- Varut Vardhanabhuti: Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Pok Fu Lam, Hong Kong SAR, China
49
Cao Z, Gao X, Chang Y, Liu G, Pei Y. A novel approach for eliminating metal artifacts based on MVCBCT and CycleGAN. Front Oncol 2022; 12:1024160. [PMID: 36439465 PMCID: PMC9686009 DOI: 10.3389/fonc.2022.1024160] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2022] [Accepted: 10/27/2022] [Indexed: 08/15/2023] Open
Abstract
Purpose To develop a metal artifact reduction (MAR) algorithm and eliminate the adverse effects of metal artifacts on imaging diagnosis and radiotherapy dose calculations. Methods Cycle-consistent adversarial network (CycleGAN) was used to generate synthetic CT (sCT) images from megavoltage cone beam CT (MVCBCT) images. In this study, there were 140 head cases with paired CT and MVCBCT images, from which 97 metal-free cases were used for training. Based on the trained model, metal-free sCT (sCT_MF) images and metal-containing sCT (sCT_M) images were generated from the MVCBCT images of 29 metal-free cases and 14 metal cases, respectively. Then, the sCT_MF and sCT_M images were quantitatively evaluated for imaging and dosimetry accuracy. Results The structural similarity (SSIM) index of the sCT_MF and metal-free CT (CT_MF) images were 0.9484, and the peak signal-to-noise ratio (PSNR) was 31.4 dB. Compared with the CT images, the sCT_MF images had similar relative electron density (RED) and dose distribution, and their gamma pass rate (1 mm/1%) reached 97.99% ± 1.14%. The sCT_M images had high tissue resolution with no metal artifacts, and the RED distribution accuracy in the range of 1.003 to 1.056 was improved significantly. The RED and dose corrections were most significant for the planning target volume (PTV), mandible and oral cavity. The maximum correction of Dmean and D50 for the oral cavity reached 90 cGy. Conclusions Accurate sCT_M images were generated from MVCBCT images based on CycleGAN, which eliminated the metal artifacts in clinical images completely and corrected the RED and dose distributions accurately for clinical application.
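The gamma pass rate reported above (97.99% at 1 mm/1%) comes from gamma analysis, which jointly scores dose difference and distance-to-agreement. A brute-force 2D sketch of the global criterion is below; it is a simplified illustration under stated assumptions (square pixels, search limited to the DTA radius), not a clinical tool or the study's exact implementation.

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, pix_mm=1.0, dta_mm=1.0, dd_frac=0.01):
    """Percentage of pixels with gamma <= 1 under a global
    dose-difference (dd_frac of max dose) / DTA (dta_mm) criterion."""
    r = int(np.ceil(dta_mm / pix_mm))          # search radius in pixels
    ny, nx = ref_dose.shape
    dmax = float(ref_dose.max())
    passed = 0
    for y in range(ny):
        for x in range(nx):
            best = np.inf
            for dy in range(-r, r + 1):        # brute-force DTA search
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < ny and 0 <= xx < nx:
                        dist2 = (dy * dy + dx * dx) * pix_mm ** 2 / dta_mm ** 2
                        diff2 = ((eval_dose[yy, xx] - ref_dose[y, x])
                                 / (dd_frac * dmax)) ** 2
                        best = min(best, dist2 + diff2)
            passed += best <= 1.0
    return 100.0 * passed / (ny * nx)
```

Production implementations interpolate the evaluated dose within the search radius rather than sampling it on the pixel grid, which makes the test stricter to shifts smaller than one pixel.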
Affiliation(s)
- Zheng Cao
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, China
- Hematology and Oncology Department, Hefei First People’s Hospital, Hefei, China
- Xiang Gao
- Hematology and Oncology Department, Hefei First People’s Hospital, Hefei, China
- Yankui Chang
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Gongfa Liu
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, China
- Yuanji Pei
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, China
|
50
|
Chen C, Xing Y, Gao H, Zhang L, Chen Z. Sam's Net: A Self-Augmented Multistage Deep-Learning Network for End-to-End Reconstruction of Limited Angle CT. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2912-2924. [PMID: 35576423 DOI: 10.1109/tmi.2022.3175529] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Limited-angle reconstruction is a typical ill-posed problem in computed tomography (CT). Given incomplete projection data, images reconstructed by conventional analytical algorithms and iterative methods suffer from severe structural distortions and artifacts. In this paper, we propose a self-augmented multi-stage deep-learning network (Sam's Net) for end-to-end reconstruction of limited-angle CT. Leveraging the alternating minimization technique, Sam's Net integrates multi-stage self-constraints into cross-domain optimization to provide additional constraints on the manifold of neural networks. In practice, a sinogram completion network (SCNet) and an artifact suppression network (ASNet), together with domain transformation layers, constitute the backbone for cross-domain optimization. An online self-augmentation module, designed following the scheme defined by alternating minimization, enables a self-augmented learning procedure and multi-stage inference. In addition, a substitution operation is applied as a hard constraint on the solution space to enforce data fidelity, and a learnable weighting layer is constructed for data-consistency refinement. Sam's Net forms a new framework for ill-posed reconstruction problems. In the training phase, the self-augmented procedure guides the optimization into a tightened solution space with an enriched, diverse data distribution and enhanced data consistency. In the inference phase, multi-stage prediction improves performance progressively. Extensive experiments with both simulated and practical projections under 90-degree and 120-degree fan-beam configurations validate that Sam's Net significantly improves reconstruction quality with high stability and robustness.
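The substitution-based multi-stage inference described above can be sketched in a few lines. In this toy version every operator (scnet, asnet, fbp, fp) is an identity-like placeholder standing in for the paper's trained networks and projection/backprojection layers; only the substitution step, which hard-enforces the measured projections at every stage, is shown faithfully:

```python
import numpy as np

rng = np.random.default_rng(0)
full_sino = rng.random((180, 64))            # toy full-angle sinogram
mask = np.zeros_like(full_sino, dtype=bool)
mask[:90] = True                             # only a 90-degree range is measured
measured = np.where(mask, full_sino, 0.0)

def substitute(pred, measured, mask):
    """Hard data-fidelity constraint: overwrite predictions with measured data."""
    return np.where(mask, measured, pred)

# Identity-like placeholders for the trained networks and domain transforms.
scnet = lambda s: s + 0.01                   # pretend sinogram completion
asnet = lambda im: im                        # pretend artifact suppression
fbp = fp = lambda a: a                       # pretend back/forward projection

sino = measured.copy()
for _ in range(3):                           # multi-stage inference
    sino = substitute(scnet(sino), measured, mask)
    img = asnet(fbp(sino))
    sino = substitute(fp(img), measured, mask)
```

Because the substitution runs after every network pass, the measured projections are preserved exactly across all stages while only the unmeasured region is progressively filled in.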
|