1
Lee J, Kim S, Ahn J, Wang AS, Baek J. X-ray CT metal artifact reduction using neural attenuation field prior. Med Phys 2025. [PMID: 40305006] [DOI: 10.1002/mp.17859]
Abstract
BACKGROUND The presence of metal objects in computed tomography (CT) imaging introduces severe artifacts that degrade image quality and hinder accurate diagnosis. While several deep learning-based metal artifact reduction (MAR) methods have been proposed, they often exhibit poor performance on unseen data and require large datasets to train neural networks. PURPOSE In this work, we propose a sinogram inpainting method for metal artifact reduction that leverages a neural attenuation field (NAF) as a prior. This new method, dubbed NAFMAR, operates in a self-supervised manner by optimizing a model-based neural field, thus eliminating the need for large training datasets. METHODS NAF is optimized to generate prior images, which are then used to inpaint metal traces in the original sinogram. To address the corruption of x-ray projections caused by metal objects, a 3D forward projection of the original corrupted image is performed to identify metal traces. NAF is then optimized using a metal trace-masked ray sampling strategy that selectively uses uncorrupted rays to supervise the network. Moreover, a metal-aware loss function is proposed to prioritize metal-associated regions during optimization, thereby enabling the network to learn more informed representations of anatomical features. After optimization, NAF prior images are rendered and used to correct the original projections through interpolation. Experiments are conducted to compare NAFMAR with other prior-based inpainting MAR methods. RESULTS The proposed method provides an accurate prior without requiring extensive datasets. Images corrected using NAFMAR showed sharp features and preserved anatomical structures. Our comprehensive evaluation, involving simulated dental CT and clinical pelvic CT images, demonstrated the effectiveness of the NAF prior compared with other priors, including linear interpolation and data-driven convolutional neural networks (CNNs). NAFMAR outperformed all compared baselines in terms of structural similarity index measure (SSIM) values, and its peak signal-to-noise ratio (PSNR) was comparable to that of the dual-domain CNN method. CONCLUSIONS NAFMAR presents an effective, high-fidelity solution for metal artifact reduction in 3D tomographic imaging without the need for large datasets.
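As a rough illustration of the prior-based inpainting this abstract describes, the sketch below (with hypothetical array names; not the authors' code) shows how metal-trace bins of a corrupted sinogram could be replaced by the forward projection of a prior image, and how a metal-trace mask could restrict a self-supervised loss to uncorrupted rays.

```python
# Illustrative sketch of prior-based sinogram inpainting and metal-trace-masked
# supervision; `sino`, `sino_prior`, and `metal_trace` are hypothetical arrays.
import numpy as np

def inpaint_with_prior(sino, sino_prior, metal_trace):
    """Replace metal-trace bins with prior projections, keeping measured data elsewhere."""
    corrected = sino.copy()
    corrected[metal_trace] = sino_prior[metal_trace]
    return corrected

def masked_loss(pred_proj, meas_proj, metal_trace):
    """Self-supervised loss over uncorrupted rays only (metal trace excluded)."""
    valid = ~metal_trace
    return np.mean((pred_proj[valid] - meas_proj[valid]) ** 2)
```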
Affiliation(s)
- Jooho Lee, Department of Artificial Intelligence, Yonsei University, Seoul, Republic of Korea
- Seongjun Kim, School of Integrated Technology, Yonsei University, Seoul, Republic of Korea
- Junhyun Ahn, School of Integrated Technology, Yonsei University, Seoul, Republic of Korea
- Adam S Wang, Department of Radiology, Stanford University, California, USA
- Jongduk Baek, Department of Artificial Intelligence, Yonsei University, Seoul, Republic of Korea
2
Tian X, Anantrasirichai N, Nicholson L, Achim A. The quest for early detection of retinal disease: 3D CycleGAN-based translation of optical coherence tomography into confocal microscopy. Biological Imaging 2024; 4:e15. [PMID: 39776613] [PMCID: PMC11704141] [DOI: 10.1017/s2633903x24000163]
Abstract
Optical coherence tomography (OCT) and confocal microscopy are pivotal in retinal imaging, offering distinct advantages and limitations. In vivo OCT offers rapid, noninvasive imaging but can suffer from clarity issues and motion artifacts, while ex vivo confocal microscopy, providing high-resolution, cellular-detailed color images, is invasive and raises ethical concerns. To bridge the benefits of both modalities, we propose a novel framework based on unsupervised 3D CycleGAN for translating unpaired in vivo OCT to ex vivo confocal microscopy images. This marks the first attempt to exploit the inherent 3D information of OCT and translate it into the rich, detailed color domain of confocal microscopy. We also introduce a unique dataset, OCT2Confocal, comprising mouse OCT and confocal retinal images, facilitating the development of and establishing a benchmark for cross-modal image translation research. Our model has been evaluated both quantitatively and qualitatively, achieving Fréchet inception distance (FID) scores of 0.766, kernel inception distance (KID) scores as low as 0.153, and leading subjective mean opinion scores (MOS). Despite limited training data, our model demonstrated superior image fidelity and quality compared with existing methods. Our approach effectively synthesizes color information from 3D confocal images, closely approximating target outcomes and suggesting enhanced potential for diagnostic and monitoring applications in ophthalmology.
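For reference, the Fréchet inception distance (FID) reported above is the Fréchet distance between Gaussian fits to real and generated Inception-v3 features; a minimal sketch follows, assuming the (N, D) feature arrays have already been extracted (not the authors' code).

```python
# Minimal FID sketch from precomputed Inception feature arrays of shape (N, D).
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(c1 @ c2).real  # matrix square root; drop numerical imaginary part
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2.0 * covmean))
```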
Affiliation(s)
- Xin Tian, Visual Information Laboratory, University of Bristol, Bristol, UK
- Lindsay Nicholson, Autoimmune Inflammation Research, University of Bristol, Bristol, UK
- Alin Achim, Visual Information Laboratory, University of Bristol, Bristol, UK
3
Liu X, Xie Y, Diao S, Tan S, Liang X. Unsupervised CT Metal Artifact Reduction by Plugging Diffusion Priors in Dual Domains. IEEE Transactions on Medical Imaging 2024; 43:3533-3545. [PMID: 38194400] [DOI: 10.1109/tmi.2024.3351201]
Abstract
During the process of computed tomography (CT), metallic implants often cause disruptive artifacts in the reconstructed images, impeding accurate diagnosis. Many supervised deep learning-based approaches have been proposed for metal artifact reduction (MAR). However, these methods heavily rely on training with paired simulated data, which are challenging to acquire. This limitation can lead to decreased performance when applying these methods in clinical practice. Existing unsupervised MAR methods, whether based on learning or not, typically work within a single domain, either the image domain or the sinogram domain. In this paper, we propose an unsupervised MAR method based on the diffusion model, a generative model with a high capacity to represent data distributions. Specifically, we first train a diffusion model using CT images without metal artifacts. Subsequently, we iteratively introduce the diffusion priors in both the sinogram and image domains to restore the degraded portions caused by metal artifacts. In addition, we design temporally dynamic weight masks for the image-domain fusion. The dual-domain processing empowers our approach to outperform existing unsupervised MAR methods, including another diffusion-model-based MAR method. Its effectiveness has been qualitatively and quantitatively validated on synthetic datasets. Moreover, our method demonstrates superior visual results among both supervised and unsupervised methods on clinical datasets. Code is available at github.com/DeepXuan/DuDoDp-MAR.
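A conceptual sketch of one dual-domain fusion step of the kind described above is given below; the projection operators, the weight schedule w_t, and all names are assumptions for illustration, not the DuDoDp implementation.

```python
# Conceptual sketch of a dual-domain data-consistency step around a diffusion
# denoiser; `fp`/`fbp` are assumed forward-projection and filtered-backprojection
# operators, and `w_t` is a time-dependent fusion weight in [0, 1].
import numpy as np

def dual_domain_step(x_denoised, sino_meas, metal_trace, fp, fbp, w_t):
    sino_est = fp(x_denoised)
    # Sinogram domain: trust the measured data outside the metal trace.
    sino_fused = np.where(metal_trace, sino_est, sino_meas)
    x_dc = fbp(sino_fused)
    # Image domain: fuse with a temporally dynamic weight.
    return w_t * x_dc + (1.0 - w_t) * x_denoised
```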
4
Fukuda M, Kotaki S, Nozawa M, Kuwada C, Kise Y, Ariji E, Ariji Y. A cycle generative adversarial network for generating synthetic contrast-enhanced computed tomographic images from non-contrast images in the internal jugular lymph node-bearing area. Odontology 2024; 112:1343-1352. [PMID: 38607582] [DOI: 10.1007/s10266-024-00933-1]
Abstract
The objective of this study was to create a mutual conversion system between contrast-enhanced computed tomography (CECT) and non-CECT images using a cycle generative adversarial network (cycleGAN) for the internal jugular region. Image patches were cropped from CT images of 25 patients who underwent both CECT and non-CECT imaging. Using a cycleGAN, synthetic CECT and non-CECT images were generated from original non-CECT and CECT images, respectively. The peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) were calculated. Visual Turing tests were used to determine whether oral and maxillofacial radiologists could distinguish synthetic from original images, and receiver operating characteristic (ROC) analyses were used to assess the radiologists' performance in discriminating lymph nodes from blood vessels. The PSNR of non-CECT images was higher than that of CECT images, while the SSIM was higher in CECT images. The Visual Turing test showed a higher perceptual quality in CECT images. The area under the ROC curve showed almost perfect performance for synthetic as well as original CECT images. In conclusion, synthetic CECT images created by cycleGAN appear to have the potential to provide effective information in patients who cannot receive contrast enhancement.
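The PSNR and SSIM comparison described above can be reproduced with standard tooling; a minimal sketch using scikit-image follows (placeholder arrays, not the authors' pipeline).

```python
# PSNR/SSIM between an original and a cycleGAN-synthesized image patch.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

original = np.random.rand(256, 256)   # stand-in for an original CT patch
synthetic = np.random.rand(256, 256)  # stand-in for a synthesized patch

psnr = peak_signal_noise_ratio(original, synthetic, data_range=1.0)
ssim = structural_similarity(original, synthetic, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```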
Affiliation(s)
- Motoki Fukuda, Department of Oral Radiology, School of Dentistry, Osaka Dental University, 1-5-17 Otemae, Chuo-Ku, Osaka, Japan
- Shinya Kotaki, Department of Oral Radiology, School of Dentistry, Osaka Dental University, 1-5-17 Otemae, Chuo-Ku, Osaka, Japan
- Michihito Nozawa, Department of Oral Radiology, School of Dentistry, Osaka Dental University, 1-5-17 Otemae, Chuo-Ku, Osaka, Japan
- Chiaki Kuwada, Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Yoshitaka Kise, Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Eiichiro Ariji, Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Yoshiko Ariji, Department of Oral Radiology, School of Dentistry, Osaka Dental University, 1-5-17 Otemae, Chuo-Ku, Osaka, Japan
5
McKeown T, Gach HM, Hao Y, An H, Robinson CG, Cuculich PS, Yang D. Small metal artifact detection and inpainting in cardiac CT images. arXiv 2024; arXiv:2409.17342v1 [preprint]. [PMID: 39398205] [PMCID: PMC11469418]
Abstract
Background Quantification of cardiac motion on pre-treatment CT imaging for stereotactic arrhythmia radiotherapy patients is difficult due to the presence of image artifacts caused by the metal leads of implantable cardioverter-defibrillators (ICDs). The CT scanners' onboard metal artifact reduction tool does not sufficiently reduce these artifacts. More advanced artifact reduction techniques require the raw CT projection data and thus are not applicable to already reconstructed CT images. New methods are needed to accurately reduce the metal artifacts in already reconstructed CTs to recover the otherwise lost anatomical information. Purpose To develop a methodology to automatically detect metal artifacts in cardiac CT scans and inpaint the affected volume with anatomically consistent structures and values. Methods Breath-hold ECG-gated 4DCT scans of 12 patients who underwent cardiac radiation therapy for treating ventricular tachycardia were collected. The metal artifacts in the images caused by the ICD leads were manually contoured. A 2D U-Net deep learning (DL) model was developed to segment the metal artifacts automatically, using eight patients for training, two for validation, and two for testing. A dataset of 592 synthetic CTs was prepared by adding segmented metal artifacts from the patient 4DCT images to artifact-free cardiac CTs of 148 patients. A 3D image inpainting DL model was trained to refill the metal artifact portion in the synthetic images with realistic image contents that approached the ground truth artifact-free images. The trained inpainting model was evaluated by analyzing the automated segmentation results of the four heart chambers with and without artifacts on the synthetic dataset. Additionally, the raw cardiac patient images with metal artifacts were processed using the inpainting model, and the metal artifact reduction results were qualitatively inspected. Results The artifact detection model performed well, producing a Dice score of 0.958 ± 0.008. The inpainting model for synthesized cases was able to recreate images nearly identical to the ground truth, with a structural similarity index of 0.988 ± 0.012. With the chamber segmentations on the artifact-free images as the reference, the average surface Dice scores improved from 0.684 ± 0.247 to 0.964 ± 0.067 and the Hausdorff distance reduced from 3.4 ± 3.9 mm to 0.7 ± 0.7 mm. The inpainting model's output on cardiac patient CTs was visually inspected, and the artifact-inpainted images were visually plausible. Conclusion We successfully developed two deep models to detect and inpaint metal artifacts in cardiac CT images. These models are useful for improving heart chamber segmentation and cardiac motion analysis in CT images corrupted by metal artifacts. The trained models and example data are available to the public through GitHub.
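For reference, the Dice score used to evaluate the artifact-segmentation model is a simple overlap measure on binary masks; a minimal sketch follows (not the authors' code).

```python
# Dice overlap between two binary segmentation masks.
import numpy as np

def dice(mask_a, mask_b, eps=1e-8):
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)
```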
Affiliation(s)
- H. Michael Gach, Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis; Department of Radiology, School of Medicine, Washington University in Saint Louis; Department of Biomedical Engineering, Washington University in Saint Louis
- Yao Hao, Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis
- Hongyu An, Department of Radiology, School of Medicine, Washington University in Saint Louis; Department of Biomedical Engineering, Washington University in Saint Louis
- Clifford G. Robinson, Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis
- Phillip S. Cuculich, Department of Cardiology, School of Medicine, Washington University in Saint Louis
- Deshan Yang, Department of Radiation Oncology, Duke University
6
Zhang J, Mao H, Chang D, Yu H, Wu W, Shen D. Adaptive and Iterative Learning With Multi-Perspective Regularizations for Metal Artifact Reduction. IEEE Transactions on Medical Imaging 2024; 43:3354-3365. [PMID: 38687653] [DOI: 10.1109/tmi.2024.3395348]
Abstract
Metal artifact reduction (MAR) is important for clinical diagnosis with CT images. Existing state-of-the-art deep learning methods usually suppress metal artifacts in the sinogram or image domains or both. However, their performance is limited by the inherent characteristics of the two domains: errors introduced by local manipulations in the sinogram domain propagate throughout the whole image during backprojection and lead to serious secondary artifacts, while in the image domain it is difficult to distinguish artifacts from actual image features. To alleviate these limitations, this study analyzes the desirable properties of the wavelet transform in depth and proposes to perform MAR in the wavelet domain. First, the wavelet transform yields components that possess spatial correspondence with the image, thereby preventing the spread of local errors and avoiding secondary artifacts. Second, the wavelet transform facilitates identifying artifacts in the image, since metal artifacts are mainly high-frequency signals. Exploiting these advantages, this paper decomposes an image into multiple wavelet components and introduces multi-perspective regularizations into the proposed MAR model. To improve the transparency and validity of the model, all modules in the proposed MAR model are designed to reflect their mathematical meanings. In addition, an adaptive wavelet module is utilized to enhance the flexibility of the model. To optimize the model, an iterative algorithm is developed. Evaluation on both synthetic and real clinical datasets consistently confirms the superior performance of the proposed method over competing methods.
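A minimal sketch of the wavelet-domain view described above follows, using PyWavelets: a CT slice is decomposed into subbands that keep spatial correspondence with the image, and the high-frequency subbands, where streak artifacts concentrate, can be regularized separately (illustrative only; the threshold value is an arbitrary assumption, and this is not the proposed model).

```python
# Wavelet decomposition of a CT slice and soft-thresholding of the
# high-frequency subbands, where metal streaks mainly live.
import numpy as np
import pywt

ct_slice = np.random.rand(512, 512)  # stand-in for a reconstructed CT slice
coeffs = pywt.wavedec2(ct_slice, wavelet="haar", level=2)
approx, details = coeffs[0], coeffs[1:]  # low-frequency band + per-level (cH, cV, cD)

shrunk = [tuple(pywt.threshold(d, value=0.05, mode="soft") for d in lvl)
          for lvl in details]
recon = pywt.waverec2([approx] + shrunk, wavelet="haar")
```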
7
Cao W, Parvinian A, Adamo D, Welch B, Callstrom M, Ren L, Missert A, Favazza CP. Deep convolutional-neural-network-based metal artifact reduction for CT-guided interventional oncology procedures (MARIO). Med Phys 2024; 51:4231-4242. [PMID: 38353644] [DOI: 10.1002/mp.16980]
Abstract
BACKGROUND Computed tomography (CT) is routinely used to guide cryoablation procedures. Notably, CT guidance provides 3D localization of cryoprobes and can be used to delineate frozen tissue during ablation. However, metal-induced artifacts from ablation probes can make accurate probe placement challenging and degrade ice ball conspicuity, which in combination could lead to undertreatment of potentially curable lesions. PURPOSE In this work, we propose an image-based convolutional neural network (CNN) model for metal artifact reduction for CT-guided interventional procedures. METHODS An image domain metal artifact simulation framework was developed and validated for deep-learning-based metal artifact reduction for interventional oncology (MARIO). CT scans were acquired for 19 different cryoablation probe configurations, which varied in the number of probes and their relative orientations. A combination of intensity thresholding and masking based on maximum intensity projections (MIPs) was used to segment both the probes only and the probes + artifact in each phantom image. Each of the probe and probe + artifact images was then inserted into 19 unique patient exams, in the image domain, to simulate metal artifact appearance for CT-guided interventional oncology procedures. The resulting 361 pairs of simulated image volumes were partitioned into disjoint training and test datasets of 304 and 57 volumes, respectively. From the training partition, 116 600 image patches with a shape of 128 × 128 × 5 pixels were randomly extracted as training data. The input images consisted of a superposition of the patient and probe + artifact images, and the target images consisted of a superposition of the patient and probe only images. This dataset was used to optimize a U-Net type model. The trained model was then applied to 50 independent, previously unseen CT images obtained during renal cryoablations. Three board-certified radiologists with experience in CT-guided ablations performed a blinded review of the MARIO images. A total of 100 images (50 original, 50 MARIO processed) were assessed across different aspects of image quality on a 4-point Likert-type scale. Statistical analyses were performed using the Wilcoxon signed-rank test for paired samples. RESULTS Reader scores were significantly higher for MARIO-processed images than for the original images across all metrics (all p < 0.001). The average scores for overall image quality, iceball conspicuity, overall metal artifact, needle tip visualization, target region confidence, and worst metal artifact improved by 34.91%, 36.29%, 39.94%, 34.17%, 35.13%, and 45.70%, respectively. CONCLUSIONS The proposed method of image-based metal artifact simulation can be used to train a MARIO algorithm to effectively reduce probe-related metal artifacts in CT-guided cryoablation procedures.
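The image-domain simulation of training pairs described in the methods reduces to a simple superposition of segmented probe volumes onto artifact-free patient volumes; a sketch follows (placeholder array names, not the authors' code).

```python
# Build one simulated training pair: the input contains probes plus artifacts,
# the target contains probes only, both superimposed on the same patient volume.
import numpy as np

def make_training_pair(patient_vol, probe_artifact_vol, probe_only_vol):
    net_input = patient_vol + probe_artifact_vol   # with probe-related artifacts
    net_target = patient_vol + probe_only_vol      # probes kept, artifacts removed
    return net_input, net_target
```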
Affiliation(s)
- Wenchao Cao, Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Ahmad Parvinian, Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Daniel Adamo, Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Brian Welch, Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Liqiang Ren, Department of Radiology, UT Southwestern Medical Center, Dallas, Texas, USA
- Andrew Missert, Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
8
Kim K, Cho K, Jang R, Kyung S, Lee S, Ham S, Choi E, Hong GS, Kim N. Updated Primer on Generative Artificial Intelligence and Large Language Models in Medical Imaging for Medical Professionals. Korean J Radiol 2024; 25:224-242. [PMID: 38413108] [PMCID: PMC10912493] [DOI: 10.3348/kjr.2023.0818]
Abstract
The emergence of Chat Generative Pre-trained Transformer (ChatGPT), a chatbot developed by OpenAI, has garnered interest in the application of generative artificial intelligence (AI) models in the medical field. This review summarizes different generative AI models and their potential applications in the field of medicine and explores the evolving landscape of generative adversarial networks and diffusion models since the introduction of generative AI models. These models have made valuable contributions to the field of radiology. Furthermore, this review explores the significance of synthetic data in addressing privacy concerns and augmenting data diversity and quality within the medical domain, in addition to emphasizing the role of inversion in the investigation of generative models and outlining an approach to replicate this process. We provide an overview of large language models, such as GPTs and bidirectional encoder representations from transformers (BERT), focusing on prominent representatives, and discuss recent initiatives involving language-vision models in radiology, including the Large Language and Vision Assistant for BioMedicine (LLaVA-Med), to illustrate their practical application. This comprehensive review offers insights into the wide-ranging applications of generative AI models in clinical research and emphasizes their transformative potential.
Affiliation(s)
- Kiduk Kim, Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Kyungjin Cho, Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sunggu Kyung, Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Soyoung Lee, Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sungwon Ham, Healthcare Readiness Institute for Unified Korea, Korea University Ansan Hospital, Korea University College of Medicine, Ansan, Republic of Korea
- Edward Choi, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Gil-Sun Hong, Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Namkug Kim, Department of Convergence Medicine and Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
9
Wang H, Xie Q, Zeng D, Ma J, Meng D, Zheng Y. OSCNet: Orientation-Shared Convolutional Network for CT Metal Artifact Learning. IEEE Transactions on Medical Imaging 2024; 43:489-502. [PMID: 37656650] [DOI: 10.1109/tmi.2023.3310987]
Abstract
X-ray computed tomography (CT) has been broadly adopted in clinical applications for disease diagnosis and image-guided interventions. However, metals within patients always cause unfavorable artifacts in the recovered CT images. Although existing deep-learning-based approaches attain promising reconstruction results for this metal artifact reduction (MAR) task, most have limitations. The critical issue is that most of these methods have not fully exploited the important prior knowledge underlying this specific MAR task. Therefore, in this paper, we carefully investigate the inherent characteristics of metal artifacts, which present rotationally symmetrical streaking patterns. We then propose an orientation-shared convolution representation mechanism to adapt to such physical prior structures and utilize a Fourier-series-expansion-based filter parametrization for modelling artifacts, which can finely separate metal artifacts from body tissues. By adopting the classical proximal gradient algorithm to solve the model and then utilizing the deep unfolding technique, we easily build the corresponding orientation-shared convolutional network, termed OSCNet. Furthermore, considering that different sizes and types of metals lead to different artifact patterns (e.g., different artifact intensities), and to better improve the flexibility of artifact learning and fully exploit the reconstructed results at iterative stages for information propagation, we design a simple-yet-effective sub-network for the dynamic convolution representation of artifacts. By easily integrating this sub-network into the proposed OSCNet framework, we further construct a more flexible network structure, called OSCNet+, which improves the generalization performance. Through extensive experiments conducted on synthetic and clinical datasets, we comprehensively substantiate the effectiveness of our proposed methods. Code will be released at https://github.com/hongwang01/OSCNet.
10
Liu Z, Fan Y, Lou A, Noble JH. Super-resolution segmentation network for inner-ear tissue segmentation. Simulation and Synthesis in Medical Imaging (SASHIMI workshop, held in conjunction with MICCAI) 2023; 14288:11-20. [PMID: 38560492] [PMCID: PMC10979466] [DOI: 10.1007/978-3-031-44689-4_2]
Abstract
Cochlear implants (CIs) are considered the standard-of-care treatment for profound sensorineural hearing loss. Several groups have proposed computational models of the cochlea in order to study the neural activation patterns in response to CI stimulation. However, most current implementations rely either on high-resolution histological images that cannot be customized for CI users or on CT images that lack the spatial resolution to show cochlear structures. In this work, we propose a deep learning-based method to obtain μCT-level tissue labels using patient CT images. Experiments showed that the proposed super-resolution segmentation architecture achieved very good performance on inner-ear tissue segmentation. Our best-performing model (mean Dice score 0.871) outperformed UNet (0.746), VNet (0.853), nnUNet (0.861), TransUNet (0.848), and SRGAN (0.780).
Affiliation(s)
- Ziteng Liu, Department of Computer Science, Vanderbilt University
- Yubo Fan, Department of Computer Science, Vanderbilt University
- Ange Lou, Department of Computer Science, Vanderbilt University
- Jack H Noble, Department of Computer Science, Vanderbilt University; Department of Electrical and Computer Engineering, Vanderbilt University
11
Zhou Y, Tan Z, Liu Y, Cheng H. Fully convolutional neural network and PPG signal for arterial blood pressure waveform estimation. Physiol Meas 2023; 44:075007. [PMID: 37402386] [DOI: 10.1088/1361-6579/ace414]
Abstract
Objective. The quality of the arterial blood pressure (ABP) waveform is crucial for predicting blood pressure values. The ABP waveform is first predicted, and then systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP) are estimated from it. Approach. To ensure the quality of the predicted ABP waveform, this paper carefully designs the network structure, input signals, loss function, and structural parameters. A fully convolutional neural network (CNN), MultiResUNet3+, is used as the core architecture of ABP-MultiNet3+. In addition to Kalman filtering of the original photoplethysmogram (PPG) signal, its first-order and second-order derivative signals are used as inputs to ABP-MultiNet3+. The model's loss function combines mean absolute error (MAE) and mean square error (MSE) losses to ensure that the predicted ABP waveform matches the reference waveform. Main results. The proposed ABP-MultiNet3+ model was tested on the public MIMIC II database; the MAE of MAP, DBP, and SBP was 1.88 mmHg, 3.11 mmHg, and 4.45 mmHg, respectively, indicating a small model error. The model fully meets the AAMI standard and achieves grade A for DBP and MAP prediction under the BHS standard. For SBP prediction it achieves grade B; although this does not reach grade A, it still improves on existing methods. Significance. These results show that the algorithm can achieve cuffless blood pressure estimation, which may enable mobile medical devices to monitor blood pressure continuously and greatly reduce the harm caused by cardiovascular disease (CVD).
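A minimal sketch of the combined MAE and MSE waveform loss follows; the weighting factor alpha is an assumption, since the exact weighting is not stated here.

```python
# Combined MAE + MSE loss between predicted and reference ABP waveforms.
import numpy as np

def abp_loss(pred_wave, ref_wave, alpha=0.5):  # alpha is a hypothetical weight
    mae = np.mean(np.abs(pred_wave - ref_wave))
    mse = np.mean((pred_wave - ref_wave) ** 2)
    return alpha * mae + (1.0 - alpha) * mse
```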
Affiliation(s)
- Yongan Zhou, School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing, 100044, People's Republic of China
- Zhi Tan, School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing, 100044, People's Republic of China
- Yuhong Liu, College of Pulmonary & Critical Care Medicine, 8th Medical Center, Chinese PLA General Hospital, People's Republic of China; Beijing IROT Key Laboratory, People's Republic of China
- Haibo Cheng, Jiangsu Future Network Group Co., Ltd, People's Republic of China
12
Wang H, Li Y, Zhang H, Meng D, Zheng Y. InDuDoNet+: A deep unfolding dual domain network for metal artifact reduction in CT images. Med Image Anal 2023; 85:102729. [PMID: 36623381] [DOI: 10.1016/j.media.2022.102729]
Abstract
During the computed tomography (CT) imaging process, metallic implants within patients often cause harmful artifacts, which adversely degrade the visual quality of reconstructed CT images and negatively affect the subsequent clinical diagnosis. For the metal artifact reduction (MAR) task, current deep learning based methods have achieved promising performance. However, most of them share two main limitations: (1) the CT physical imaging geometry constraint is not comprehensively incorporated into the deep network structure; (2) the entire framework has weak interpretability for the specific MAR task, so the role of each network module is difficult to evaluate. To alleviate these issues, we construct a novel deep unfolding dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded. Concretely, we derive a joint spatial and Radon domain reconstruction model and propose an optimization algorithm with only simple operators for solving it. By unfolding the iterative steps involved in the proposed algorithm into the corresponding network modules, we easily build the InDuDoNet+ with clear interpretability. Furthermore, we analyze the CT values among different tissues and merge these prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance. Comprehensive experiments on synthesized data and clinical data substantiate the superiority of the proposed method, as well as its generalization performance beyond current state-of-the-art (SOTA) MAR methods. Code is available at https://github.com/hongwang01/InDuDoNet_plus.
Affiliation(s)
- Haimiao Zhang, Beijing Information Science and Technology University, Beijing, China
- Deyu Meng, Xi'an Jiaotong University, Xi'an, China; Peng Cheng Laboratory, Shenzhen, China; Macau University of Science and Technology, Taipa, Macao
13
Zhu M, Zhu Q, Song Y, Guo Y, Zeng D, Bian Z, Wang Y, Ma J. Physics-informed sinogram completion for metal artifact reduction in CT imaging. Phys Med Biol 2023; 68. [PMID: 36808913] [DOI: 10.1088/1361-6560/acbddf]
Abstract
Objective. Metal artifacts in computed tomography (CT) imaging are inevitably detrimental to clinical diagnosis and treatment outcomes. Most metal artifact reduction (MAR) methods suffer from over-smoothing and loss of structural detail near the metal implants, especially for metal implants with irregular elongated shapes. To address this problem, we present the physics-informed sinogram completion (PISC) method for MAR in CT imaging, to reduce metal artifacts and recover more structural texture. Approach. Specifically, the original uncorrected sinogram is first completed by a normalized linear interpolation algorithm to reduce metal artifacts. Simultaneously, the uncorrected sinogram is also corrected with a beam-hardening-correction physical model, to recover the latent structural information in the metal trajectory region by leveraging the attenuation characteristics of different materials. Both corrected sinograms are fused with pixel-wise adaptive weights, which are manually designed according to the shape and material information of the metal implants. To further reduce artifacts and improve CT image quality, a post-processing frequency-split algorithm is adopted to yield the final corrected CT image after reconstructing the fused sinogram. Main results. We qualitatively and quantitatively evaluated the presented PISC method on two simulated datasets and three real datasets. All results demonstrate that the PISC method can effectively correct metal implants with various shapes and materials, in terms of both artifact suppression and structure preservation. Significance. We propose a sinogram-domain MAR method that compensates for the over-smoothing problem of most MAR methods by taking advantage of physical prior knowledge, and that has the potential to improve the performance of deep learning-based MAR approaches.
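The normalized linear interpolation step described in the approach can be sketched as follows: the sinogram is divided by a prior's forward projection, interpolated across the metal trace along the detector axis at each angle, and then de-normalized (illustrative only; not the authors' code).

```python
# Normalized linear interpolation of the metal trace in an (angles, detectors)
# sinogram; `sino_prior` is the forward projection of a prior image.
import numpy as np

def normalized_interpolation(sino, sino_prior, metal_trace, eps=1e-6):
    norm = sino / (sino_prior + eps)
    out = norm.copy()
    cols = np.arange(sino.shape[1])
    for a in range(sino.shape[0]):           # loop over projection angles
        tr = metal_trace[a]
        if tr.any():
            out[a, tr] = np.interp(cols[tr], cols[~tr], norm[a, ~tr])
    return out * (sino_prior + eps)          # de-normalize
```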
Affiliation(s)
- Manman Zhu, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Qisen Zhu, Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Yuyan Song, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Yi Guo, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Dong Zeng, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Zhaoying Bian, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Yongbo Wang, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Jianhua Ma, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
14
Zhou B, Chen X, Xie H, Zhou SK, Duncan JS, Liu C. DuDoUFNet: Dual-Domain Under-to-Fully-Complete Progressive Restoration Network for Simultaneous Metal Artifact Reduction and Low-Dose CT Reconstruction. IEEE Transactions on Medical Imaging 2022; 41:3587-3599. [PMID: 35816532] [PMCID: PMC9812027] [DOI: 10.1109/tmi.2022.3189759]
Abstract
To reduce the potential risk of radiation to the patient, low-dose computed tomography (LDCT) has been widely adopted in clinical practice for reconstructing cross-sectional images using sinograms with reduced x-ray flux. The LDCT image quality is often degraded by different levels of noise depending on the low-dose protocols, and it is further degraded when the patient has metallic implants, in which case the image suffers from additional streak artifacts along with further amplified noise levels, affecting medical diagnosis and other CT-related applications. Previous studies mainly focused either on denoising LDCT without considering metallic implants or on full-dose CT metal artifact reduction (MAR). Directly applying previous LDCT or MAR approaches to simultaneous metal artifact reduction and low-dose CT (MARLD) may yield sub-optimal reconstruction results. In this work, we develop a dual-domain under-to-fully-complete progressive restoration network, called DuDoUFNet, for MARLD. Our DuDoUFNet aims to reconstruct images with substantially reduced noise and artifacts via progressive sinogram-to-image domain restoration with a two-stage progressive network design. Our experimental results demonstrate that our method can provide high-quality reconstruction, superior to previous LDCT and MAR methods under various low-dose and metal settings.
15
Khaleghi G, Hosntalab M, Sadeghi M, Reiazi R, Mahdavi SR. Neural Network Performance Evaluation of Simulated and Genuine Head-and-Neck Computed Tomography Images to Reduce Metal Artifacts. Journal of Medical Signals & Sensors 2022; 12:269-277. [PMID: 36726421] [PMCID: PMC9885504] [DOI: 10.4103/jmss.jmss_159_21]
Abstract
Background This study evaluated the performance of neural networks in denoising metal artifacts in computed tomography (CT) images to improve diagnosis based on patients' CT images. Methods First, head-and-neck phantoms were simulated (with and without dental implants), and CT images of the phantoms were captured. Six types of neural networks were evaluated for their ability to reduce metal artifacts. In addition, CT images of 40 patients with head-and-neck cancer (with and without dental artifacts) were captured, and the mouth slices were segmented. Finally, simulated noisy and noise-free patient images were generated to provide a larger number of inputs for training and validating the generative adversarial network (GAN). Results The proposed GAN was successful in denoising artifacts caused by dental implants, with more than 84% improvement achieved for patient images with two dental implants after metal artifact reduction (MAR). Conclusion The quality of the images was affected by the positions and numbers of dental implants. The image quality metrics of all GANs improved following MAR in comparison with the other networks.
Affiliation(s)
- Goli Khaleghi, Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Mohammad Hosntalab, Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Mahdi Sadeghi, Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Reza Reiazi, Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran; Princess Margaret Cancer Center, Toronto, Ontario, Canada
- Seied Rabi Mahdavi, Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
16
Li D, Ma L, Li J, Qi S, Yao Y, Teng Y. A comprehensive survey on deep learning techniques in CT image quality improvement. Med Biol Eng Comput 2022; 60:2757-2770. [DOI: 10.1007/s11517-022-02631-y]
17
Niu C, Cong W, Fan FL, Shan H, Li M, Liang J, Wang G. Low-dimensional Manifold Constrained Disentanglement Network for Metal Artifact Reduction. IEEE Transactions on Radiation and Plasma Medical Sciences 2022; 6:656-666. [PMID: 35865007] [PMCID: PMC9295822] [DOI: 10.1109/trpms.2021.3122071]
Abstract
Deep neural network based methods have achieved promising results for CT metal artifact reduction (MAR), most of which use many synthesized paired images for supervised learning. As synthesized metal artifacts in CT images may not accurately reflect their clinical counterparts, an artifact disentanglement network (ADN) was proposed that uses unpaired clinical images directly, producing promising results on clinical datasets. However, as the discriminator can only judge whether large regions semantically look artifact-free or artifact-affected, it is difficult for ADN to recover small structural details of artifact-affected CT images based on adversarial losses alone, without sufficient constraints. To overcome the ill-posedness of this problem, here we propose a low-dimensional manifold (LDM) constrained disentanglement network (DN), leveraging the image characteristic that the patch manifold of CT images is generally low-dimensional. Specifically, we design an LDM-DN learning algorithm to empower the disentanglement network by optimizing the synergistic loss functions used in ADN while constraining the recovered images to lie on a low-dimensional patch manifold. Moreover, learning from both paired and unpaired data, an efficient hybrid optimization scheme is proposed to further improve the MAR performance on clinical datasets. Extensive experiments demonstrate that the proposed LDM-DN approach can consistently improve the MAR performance in paired and/or unpaired learning settings, outperforming competing methods on synthesized and clinical datasets.
Affiliation(s)
- Chuang Niu, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Wenxiang Cong, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Feng-Lei Fan, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Hongming Shan, Department of Biomedical Engineering, Rensselaer Polytechnic Institute (now with the Institute of Science and Technology for Brain-inspired Intelligence and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200433, China, and the Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 201210, China)
- Mengzhou Li, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Jimin Liang, School of Electronic Engineering, Xidian University, Xi'an, Shaanxi 710071, China
- Ge Wang, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
18
Wang H, Li Y, He N, Ma K, Meng D, Zheng Y. DICDNet: Deep Interpretable Convolutional Dictionary Network for Metal Artifact Reduction in CT Images. IEEE Transactions on Medical Imaging 2022; 41:869-880. [PMID: 34752391] [DOI: 10.1109/tmi.2021.3127074]
Abstract
Computed tomography (CT) images are often impaired by unfavorable artifacts caused by metallic implants within patients, which adversely affect subsequent clinical diagnosis and treatment. Although existing deep-learning-based approaches have achieved promising success in metal artifact reduction (MAR) for CT images, most of them treat the task as a general image restoration problem and utilize off-the-shelf network modules for image quality enhancement. Hence, such frameworks always suffer from a lack of sufficient model interpretability for the specific task. Moreover, existing MAR techniques largely neglect the intrinsic prior knowledge underlying metal-corrupted CT images, which is beneficial for improving MAR performance. In this paper, we specifically propose a deep interpretable convolutional dictionary network (DICDNet) for the MAR task. Particularly, we first observe that metal artifacts always present non-local streaking and star-shaped patterns in CT images. Based on these observations, a convolutional dictionary model is deployed to encode the metal artifacts. To solve the model, we propose a novel optimization algorithm based on the proximal gradient technique. With only simple operators, the iterative steps of the proposed algorithm can be easily unfolded into corresponding network modules with specific physical meanings. Comprehensive experiments on synthesized and clinical datasets substantiate the effectiveness of the proposed DICDNet as well as its superior interpretability, compared to current state-of-the-art MAR methods. Code is available at https://github.com/hongwang01/DICDNet.
19
Xu L, Zhou S, Guo J, Tian W, Tang W, Yi Z. Metal artifact reduction for oral and maxillofacial computed tomography images by a generative adversarial network. Applied Intelligence 2022. [DOI: 10.1007/s10489-021-02905-2]
20
Zhu L, Han Y, Xi X, Li L, Yan B. Completion of Metal-Damaged Traces Based on Deep Learning in Sinogram Domain for Metal Artifacts Reduction in CT Images. Sensors 2021; 21(24):8164. [PMID: 34960258] [PMCID: PMC8708215] [DOI: 10.3390/s21248164]
Abstract
In computed tomography (CT) images, the presence of metal artifacts leads to contaminated object structures. Theoretically, eliminating metal artifacts in the sinogram domain can correct projection deviations and provide more realistic reconstructed images. Contemporary methods that use deep networks to complete metal-damaged sinogram data suffer from discontinuities at the boundaries of the traces, which lead to secondary artifacts. This study modifies the traditional U-Net and adds two sinogram feature losses, namely the continuity and consistency of the projection data at each angle, improving the accuracy of the completed sinogram data. Masking the metal traces also ensures the stability and reliability of the unaffected data during metal artifact reduction. The projection and reconstruction results and various evaluation metrics reveal that the proposed method can accurately repair missing data and reduce metal artifacts in reconstructed CT images.
21
Lee J, Gu J, Ye JC. Unsupervised CT Metal Artifact Learning Using Attention-Guided β-CycleGAN. IEEE Transactions on Medical Imaging 2021; 40:3932-3944. [PMID: 34329157] [DOI: 10.1109/tmi.2021.3101363]
Abstract
Metal artifact reduction (MAR) is one of the most important research topics in computed tomography (CT). With the advance of deep learning approaches for image reconstruction, various deep learning methods have been suggested for metal artifact reduction, among which supervised learning methods are most popular. However, matched metal-artifact-free and metal-artifact-corrupted image pairs are difficult to obtain in real CT acquisition. Recently, a promising unsupervised learning method for MAR was proposed using feature disentanglement, but the resulting network architecture is so complicated that it is difficult to apply to large clinical images. To address this, here we propose a simple and effective unsupervised learning method for MAR. The proposed method is based on a novel β-cycleGAN architecture derived from optimal transport theory for appropriate feature space disentanglement. Moreover, by adding convolutional block attention module (CBAM) layers in the generator, we show that the network can focus more on the metal artifacts so that they can be effectively removed. Experimental results confirm that we can achieve improved metal artifact reduction that preserves the detailed texture of the original image.
22
Hu L, Zhou DW, Fu CX, Benkert T, Xiao YF, Wei LM, Zhao JG. Calculation of Apparent Diffusion Coefficients in Prostate Cancer Using Deep Learning Algorithms: A Pilot Study. Front Oncol 2021; 11:697721. [PMID: 34568027] [PMCID: PMC8458902] [DOI: 10.3389/fonc.2021.697721]
Abstract
Background Apparent diffusion coefficients (ADCs) obtained with diffusion-weighted imaging (DWI) are highly valuable for the detection and staging of prostate cancer and for assessing the response to treatment. However, DWI suffers from significant anatomic distortions and susceptibility artifacts, resulting in reduced accuracy and reproducibility of the ADC calculations. The current methods for improving the DWI quality are heavily dependent on software, hardware, and additional scan time. Therefore, their clinical application is limited. An accelerated ADC generation method that maintains calculation accuracy and repeatability without heavy dependence on magnetic resonance imaging scanners is of great clinical value. Objectives We aimed to establish and evaluate a supervised learning framework for synthesizing ADC images using generative adversarial networks. Methods This prospective study included 200 patients with suspected prostate cancer (training set: 150 patients; test set #1: 50 patients) and 10 healthy volunteers (test set #2) who underwent both full field-of-view (FOV) diffusion-weighted imaging (f-DWI) and zoomed-FOV DWI (z-DWI) with b-values of 50, 1000, and 1500 s/mm². ADC values based on f-DWI and z-DWI (f-ADC and z-ADC) were calculated. Herein we propose an ADC synthesis method based on generative adversarial networks that uses f-DWI with a single b-value to generate synthesized ADC (s-ADC) values using z-ADC as a reference. The image quality of the s-ADC sets was evaluated using the peak signal-to-noise ratio (PSNR), root mean squared error (RMSE), structural similarity (SSIM), and feature similarity (FSIM). The distortions of each ADC set were evaluated using the T2-weighted image reference. The calculation reproducibility of the different ADC sets was compared using the intraclass correlation coefficient. The tumor detection and classification abilities of each ADC set were evaluated using a receiver operating characteristic curve analysis and a Spearman correlation coefficient. Results The s-ADC(b=1000) had a significantly lower RMSE score and higher PSNR, SSIM, and FSIM scores than the s-ADC(b=50) and s-ADC(b=1500) (all P < 0.001). Both z-ADC and s-ADC(b=1000) had less distortion and better quantitative ADC value reproducibility for all the evaluated tissues, and they demonstrated better tumor detection and classification performance than f-ADC. Conclusion The deep learning algorithm might be a feasible method for generating ADC maps, as an alternative to z-ADC maps, without depending on hardware systems and additional scan time requirements.
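For reference, the monoexponential ADC calculation underlying the f-ADC and z-ADC maps follows from S(b) = S0 * exp(-b * ADC); a two-b-value sketch is shown below (placeholder arrays, not the authors' pipeline).

```python
# Voxel-wise ADC map (mm^2/s) from DWI signals at two b-values (s/mm^2).
import numpy as np

def adc_map(s_low, s_high, b_low=50.0, b_high=1000.0, eps=1e-6):
    return np.log((s_low + eps) / (s_high + eps)) / (b_high - b_low)
```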
Affiliation(s)
- Lei Hu, Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
- Da Wei Zhou, State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an, China
- Cai Xia Fu, Magnetic Resonance (MR) Application Development, Siemens Shenzhen Magnetic Resonance Ltd., Shenzhen, China
- Thomas Benkert, MR Application Predevelopment, Siemens Healthcare GmbH, Erlangen, Germany
- Yun Feng Xiao, Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
- Li Ming Wei, Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
- Jun Gong Zhao, Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
23
Gomi T, Sakai R, Hara H, Watanabe Y, Mizukami S. Usefulness of a Metal Artifact Reduction Algorithm in Digital Tomosynthesis Using a Combination of Hybrid Generative Adversarial Networks. Diagnostics (Basel) 2021; 11(9):1629. [PMID: 34573971] [PMCID: PMC8467368] [DOI: 10.3390/diagnostics11091629]
Abstract
In this study, a novel combination of hybrid generative adversarial networks (GANs) comprising a cycle-consistent GAN, pix2pix, and a mask pyramid network (MPN), termed CGpM metal artifact reduction (CGpM-MAR), was developed using projection data to reduce metal artifacts and the radiation dose during digital tomosynthesis. The CGpM-MAR algorithm was compared with conventional filtered back projection (FBP) without MAR, FBP with MAR, and convolutional neural network MAR. The MAR rates were compared using the artifact index (AI) and a Gumbel distribution analysis of the largest variation, using a prosthesis phantom at various radiation doses. The novel CGpM-MAR yielded adequately effective overall performance in terms of AI. The resulting images yielded good results independently of the type of metal used in the prosthesis phantom (p < 0.05) and good artifact removal at 55% radiation-dose reduction. Furthermore, the CGpM-MAR showed the minimum largest variation in the Gumbel analysis at 55% radiation-dose reduction. Regarding the AI and Gumbel distribution analysis, the novel CGpM-MAR yielded superior MAR compared with the conventional reconstruction algorithms with and without MAR at 55% radiation-dose reduction, and presented features most similar to the reference FBP. CGpM-MAR presents a promising method for metal artifact and radiation-dose reduction in clinical practice.
|
24
|
Wang J, Su D, Fan Y, Chakravorti S, Noble JH, Dawant BM. Atlas-based Segmentation of Intracochlear Anatomy in Metal Artifact Affected CT Images of the Ear with Co-trained Deep Neural Networks. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2021; 12904:14-23. [PMID: 35360271 PMCID: PMC8964077 DOI: 10.1007/978-3-030-87202-1_2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
We propose an atlas-based method to segment the intracochlear anatomy (ICA) in post-implantation CT (Post-CT) images of cochlear implant (CI) recipients that preserves the point-to-point correspondence between the meshes in the atlas and the segmented volumes. To solve this problem, which is challenging because of the strong artifacts produced by the implant, we use a pair of co-trained deep networks that generate dense deformation fields (DDFs) in opposite directions: one network registers an atlas image to the Post-CT images, and the other registers the Post-CT images to the atlas image. The networks are trained using loss functions based on voxel-wise labels, image content, fiducial registration error, and a cycle-consistency constraint. The segmentation of the ICA in the Post-CT images is subsequently obtained by transferring the predefined segmentation meshes of the ICA in the atlas image to the Post-CT images using the corresponding DDFs generated by the trained registration networks. Our model can learn the underlying geometric features of the ICA even when they are obscured by metal artifacts. We show that our end-to-end network produces results comparable to the current state of the art (SOTA), which relies on a two-step approach that first uses conditional generative adversarial networks to synthesize artifact-free images from the Post-CT images and then uses an active-shape-model-based method to segment the ICA in the synthetic images. Our method requires a fraction of the time needed by the SOTA, which is important for end-user acceptance.
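The cycle-consistency idea for paired, opposite-direction deformation fields can be sketched as below: composing a forward field with the backward field resampled at the warped positions should yield near-zero net displacement. Shapes, normalized coordinates, and the plain L1 penalty are assumptions, not the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def ddf_cycle_loss(ddf_ab, ddf_ba, identity_grid):
    """Cycle-consistency penalty for dense deformation fields predicted in
    opposite directions. Fields are assumed to be in normalized [-1, 1]
    coordinates with shape (N, 3, D, H, W); identity_grid has shape
    (N, D, H, W, 3) as expected by grid_sample."""
    # resample the backward field at the positions the forward field maps to
    warped_ba = F.grid_sample(
        ddf_ba,
        identity_grid + ddf_ab.permute(0, 2, 3, 4, 1),
        align_corners=True,
    )
    # an exact inverse pair composes to zero displacement everywhere
    return (ddf_ab + warped_ba).abs().mean()
```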
Affiliation(s)
- Jianing Wang
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA
| | - Dingjie Su
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA
| | - Yubo Fan
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA
| | - Srijata Chakravorti
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA
| | - Jack H Noble
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA
| | - Benoit M Dawant
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA
| |
|
25
|
Saha M, Guo X, Sharma A. TilGAN: GAN for Facilitating Tumor-Infiltrating Lymphocyte Pathology Image Synthesis With Improved Image Classification. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 9:79829-79840. [PMID: 34178560 PMCID: PMC8224465 DOI: 10.1109/access.2021.3084597] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Tumor-infiltrating lymphocytes (TILs) act as immune cells against cancer tissues. Manual assessment of TILs is error-prone, tedious, costly, and subject to inter- and intraobserver variability. Machine learning approaches can solve these issues, but they require a large amount of labeled data for model training, which is expensive and not readily available. In this study, we present an efficient generative adversarial network, TilGAN, to generate high-quality synthetic pathology images, followed by classification of TIL and non-TIL regions. Our proposed architecture is constructed with a generator network and a discriminator network. The novelty lies in the TilGAN architecture, loss functions, and evaluation techniques. Our TilGAN-generated images achieved a higher Inception score than the real images (2.90 vs. 2.32), as well as a lower kernel Inception distance (1.44) and a lower Fréchet Inception distance (0.312). The generated images also passed a Turing test performed by experienced pathologists and clinicians. We further extended our evaluation and used almost one million synthetic images, generated by TilGAN, to train a classification model. Our proposed classification model achieved a 97.83% accuracy, a 97.37% F1-score, and a 97% area under the curve. Our extensive experiments and superior outcomes show the efficiency and effectiveness of the proposed TilGAN architecture, which can also be applied to image synthesis for other types of images.
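For reference, the Fréchet Inception distance reported here is the Fréchet distance between Gaussians fitted to Inception feature activations of the real and generated sets. A generic sketch (not TilGAN-specific code; the feature arrays are assumed to come from a pretrained Inception network):

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(feats_real, feats_fake):
    """FID from two (n_samples, n_features) arrays of Inception activations."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):   # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```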
Affiliation(s)
- Monjoy Saha
- Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA 30322, USA
| | - Xiaoyuan Guo
- Department of Computer Science, Emory University, Atlanta, GA 30332, USA
| | - Ashish Sharma
- Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA 30322, USA
| |
|
26
|
Zhou Y, Yu K, Wang M, Ma Y, Peng Y, Chen Z, Zhu W, Shi F, Chen X. Speckle Noise Reduction for OCT Images based on Image Style Transfer and Conditional GAN. IEEE J Biomed Health Inform 2021; 26:139-150. [PMID: 33882009 DOI: 10.1109/jbhi.2021.3074852] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Raw optical coherence tomography (OCT) images are typically of low quality because speckle noise blurs retinal structures, severely compromising visual quality and degrading the performance of subsequent image analysis tasks. In our previous study, we developed a conditional generative adversarial network (cGAN) for speckle noise removal in OCT images collected by several commercial OCT scanners, which we collectively refer to as scanner T. In this paper, we improve the cGAN model and apply it to our in-house OCT scanner (scanner B) for speckle noise suppression. The proposed model consists of two steps: 1) we train a cycle-consistent GAN (CycleGAN) to learn style transfer between two OCT image datasets collected by different scanners, the purpose being to leverage the ground truth dataset created in our previous study; and 2) we train a mini-cGAN model based on the PatchGAN mechanism with the ground truth dataset to suppress speckle noise in OCT images. After training, we first apply the CycleGAN model to convert raw images collected by scanner B to match the style of the images from scanner T, and subsequently use the mini-cGAN model to suppress speckle noise in the style-transferred images. We evaluate the proposed method on a dataset collected by scanner B. Experimental results show that the improved model outperforms our previous method and other state-of-the-art models in speckle noise removal, retinal structure preservation, and contrast enhancement.
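The two-step inference chain described here, style transfer followed by denoising, reduces to composing two pretrained generators. A minimal sketch, assuming both models are available as trained torch.nn.Module instances (the function and argument names are illustrative):

```python
import torch

@torch.no_grad()
def denoise_scanner_b(raw_b, style_generator, mini_cgan):
    """Two-step inference: map a scanner-B image into the scanner-T style
    with the CycleGAN generator, then suppress speckle with the mini-cGAN
    trained on scanner-T ground truth."""
    styled = style_generator(raw_b)   # step 1: B -> T style transfer
    return mini_cgan(styled)          # step 2: speckle suppression
```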
|
27
|
Yu L, Zhang Z, Li X, Xing L. Deep Sinogram Completion With Image Prior for Metal Artifact Reduction in CT Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:228-238. [PMID: 32956044 PMCID: PMC7875504 DOI: 10.1109/tmi.2020.3025064] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
Computed tomography (CT) has been widely used for medical diagnosis, assessment, and therapy planning and guidance. In practice, CT images may be adversely affected by the presence of metallic objects, which can lead to severe metal artifacts and influence clinical diagnosis or dose calculation in radiation therapy. In this article, we propose a generalizable framework for metal artifact reduction (MAR) that simultaneously leverages the advantages of image-domain and sinogram-domain MAR techniques. We formulate our framework as a sinogram completion problem and train a neural network (SinoNet) to restore the metal-affected projections. To improve the continuity of the completed projections at the boundary of the metal trace, and thus alleviate new artifacts in the reconstructed CT images, we train another neural network (PriorNet) to generate a good prior image to guide sinogram learning, and further design a novel residual sinogram learning strategy to effectively utilize the prior image information for better sinogram completion. The two networks are jointly trained in an end-to-end fashion with a differentiable forward projection (FP) operation so that the prior image generation and deep sinogram completion procedures can benefit from each other. Finally, the artifact-reduced CT images are reconstructed using filtered back projection (FBP) from the completed sinogram. Extensive experiments on simulated and real artifact data demonstrate that our method produces superior artifact-reduced results while preserving anatomical structures, and outperforms other MAR methods.
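A schematic of the residual sinogram completion idea: predict a residual on top of the forward projection of the prior image, then splice the result into the metal trace only, keeping measured data elsewhere. Array names and the hard mask blend are simplifying assumptions, not the paper's exact strategy:

```python
import numpy as np

def complete_sinogram(sino_corrupt, prior_fp, residual, metal_trace):
    """Residual sinogram completion sketch.
    prior_fp: forward projection of the prior image;
    residual: network-predicted correction to the prior's projections;
    metal_trace: boolean mask of metal-affected detector bins."""
    completed = prior_fp + residual                       # residual learning on the prior
    return np.where(metal_trace, completed, sino_corrupt)  # keep uncorrupted measurements
```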
|
28
|
Peng C, Li B, Liang P, Zheng J, Zhang Y, Qiu B, Chen DZ. A Cross-Domain Metal Trace Restoring Network for Reducing X-Ray CT Metal Artifacts. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3831-3842. [PMID: 32746126 DOI: 10.1109/tmi.2020.3005432] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Metal artifacts commonly appear in computed tomography (CT) images of patients with metal implants and can affect disease diagnosis. Known deep learning and traditional metal trace restoring methods fail to effectively restore detail and sinogram consistency information in X-ray CT sinograms, and hence often cause considerable secondary artifacts in CT images. In this paper, we propose a new cross-domain metal trace restoring network that promotes sinogram consistency while reducing metal artifacts and recovering tissue details in CT images. Our new approach includes a cross-domain procedure that ensures information exchange between the image domain and the sinogram domain so that the two domains promote and complement each other. Under this cross-domain structure, we develop a hierarchical analytic network (HAN) to recover fine details of the metal trace, and utilize a perceptual loss to guide HAN to concentrate on absorbing the sinogram consistency information of the metal trace. To allow our entire cross-domain network to be trained end-to-end efficiently, and to reduce graphics memory usage and time cost, we propose effective and differentiable forward projection (FP) and filtered back-projection (FBP) layers based on the FP and FBP algorithms. We use both simulated and clinical datasets in three different clinical scenarios to evaluate our proposed network's practicality and universality. Both quantitative and qualitative evaluation results show that our new network outperforms state-of-the-art metal artifact reduction methods. In addition, an elapsed-time analysis shows that our proposed method meets the clinical time requirement.
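The perceptual-loss idea, supervising a network with feature-space distances from a frozen pretrained classifier, can be sketched generically as below. The VGG16 backbone and cut-off layer are assumptions for illustration; the paper states only that a perceptual loss guides HAN (this also assumes torchvision >= 0.13 for the weights API):

```python
import torch
import torch.nn.functional as F
from torchvision import models

class PerceptualLoss(torch.nn.Module):
    """Feature-space L1 distance on a frozen, ImageNet-pretrained VGG16 trunk."""
    def __init__(self, cutoff=16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:cutoff]
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.trunk = vgg.eval()

    def forward(self, pred, target):
        # replicate single-channel sinogram/CT inputs to three channels
        pred3, tgt3 = pred.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)
        return F.l1_loss(self.trunk(pred3), self.trunk(tgt3))
```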
|
29
|
|
30
|
Creating Artificial Images for Radiology Applications Using Generative Adversarial Networks (GANs) - A Systematic Review. Acad Radiol 2020; 27:1175-1185. [PMID: 32035758 DOI: 10.1016/j.acra.2019.12.024] [Citation(s) in RCA: 51] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Revised: 12/24/2019] [Accepted: 12/27/2019] [Indexed: 12/22/2022]
Abstract
RATIONALE AND OBJECTIVES Generative adversarial networks (GANs) are deep learning models aimed at generating realistic-looking synthetic images. These novel models have made a great impact on the computer vision field. Our study aims to review the literature on GAN applications in radiology. MATERIALS AND METHODS This systematic review followed the PRISMA guidelines. Electronic databases were searched for studies describing applications of GANs in radiology. We included studies published up to September 2019. RESULTS Data were extracted from 33 studies published between 2017 and 2019. Eighteen studies focused on CT image generation, ten on MRI, three on PET/MRI and PET/CT, one on ultrasound, and one on X-ray. Applications in radiology included image reconstruction and denoising for dose and scan time reduction (fourteen studies), data augmentation (six studies), transfer between modalities (eight studies), and image segmentation (five studies). All studies reported that generated images improved the performance of the developed algorithms. CONCLUSION GANs are increasingly studied for various radiology applications. They enable the creation of new data, which can be used to improve clinical care, education, and research.
|
31
|
Annala L, Neittaanmaki N, Paoli J, Zaar O, Polonen I. Generating Hyperspectral Skin Cancer Imagery using Generative Adversarial Neural Network. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2020:1600-1603. [PMID: 33018300 DOI: 10.1109/embc44109.2020.9176292] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In this study, we develop a proof of concept for producing hyperspectral skin cancer imagery with generative adversarial neural networks. A generative adversarial network consists of two competing neural networks: the generator tries to produce data similar to the measured data, and the discriminator tries to correctly classify the data as fake or real. Both networks improve based on feedback from this adversarial game. In training the discriminator, we use data measured from skin cancer patients. The aim of the study is to develop a generator for augmenting hyperspectral skin cancer imagery.
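The generator/discriminator competition described above boils down to alternating updates. A minimal PyTorch sketch of one adversarial training step (the non-saturating BCE losses, latent dimension, and function signature are generic assumptions, not the authors' setup):

```python
import torch
import torch.nn.functional as F

def gan_step(gen, disc, opt_g, opt_d, real, z_dim=128):
    """One adversarial step: the discriminator learns to separate measured
    from generated samples; the generator learns to fool it."""
    z = torch.randn(real.size(0), z_dim, device=real.device)
    fake = gen(z)

    # discriminator update: real -> 1, fake -> 0
    d_real, d_fake = disc(real), disc(fake.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator update: push discriminator output on fakes toward 1
    g_fake = disc(fake)
    g_loss = F.binary_cross_entropy_with_logits(g_fake, torch.ones_like(g_fake))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```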
|
32
|
Peng C, Li B, Li M, Wang H, Zhao Z, Qiu B, Chen DZ. An irregular metal trace inpainting network for x-ray CT metal artifact reduction. Med Phys 2020; 47:4087-4100. [DOI: 10.1002/mp.14295] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2019] [Revised: 05/10/2020] [Accepted: 05/11/2020] [Indexed: 01/08/2023] Open
Affiliation(s)
- Chengtao Peng
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
| | - Bin Li
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China
| | - Ming Li
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
| | - Hongxiao Wang
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
| | - Zhuo Zhao
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
| | - Bensheng Qiu
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China
| | - Danny Z. Chen
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
| |
|
33
|
Deep learning approach to classification of lung cytological images: Two-step training using actual and synthesized images by progressive growing of generative adversarial networks. PLoS One 2020; 15:e0229951. [PMID: 32134949 PMCID: PMC7058306 DOI: 10.1371/journal.pone.0229951] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2019] [Accepted: 02/18/2020] [Indexed: 02/08/2023] Open
Abstract
Cytology is the first pathological examination performed in the diagnosis of lung cancer. In our previous study, we introduced a deep convolutional neural network (DCNN) to automatically classify cytological images as having benign or malignant features and achieved an accuracy of 81.0%. To further improve the DCNN's performance, it is necessary to train the network using more images. However, it is difficult to acquire cell images containing various cytological features, as doing so requires many manual operations with a microscope. Therefore, in this study, we aim to improve the classification accuracy of a DCNN using actual cytological images together with images synthesized by a generative adversarial network (GAN). In the proposed method, patch images were obtained from microscopy images, and many additional similar images were generated using a GAN. In this study, we introduce progressive growing of GANs (PGGAN), which enables the generation of high-resolution images. These images were used to pretrain a DCNN, which was then fine-tuned using actual patch images. To confirm the effectiveness of the proposed method, we first evaluated the quality of the images generated by PGGAN and by a conventional deep convolutional GAN. We then evaluated the classification performance for benign and malignant cells and confirmed that the generated images had characteristics similar to those of the actual images. The overall classification accuracy of lung cells was 85.3%, an improvement of approximately 4.3% over a previous study conducted without pretraining on GAN-generated images. Based on these results, we confirmed that our proposed method is effective for the classification of cytological images in cases in which only limited data are available.
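The two-step recipe, pretraining on synthetic patches and then fine-tuning on real ones, can be sketched as below. The epoch counts, learning rates, and loader names are placeholders, not the paper's settings:

```python
import torch

def two_step_train(model, synth_loader, real_loader,
                   epochs_pretrain=10, epochs_finetune=5):
    """Pretrain a classifier on GAN-synthesized patches, then fine-tune on
    actual patches with a lower learning rate."""
    loss_fn = torch.nn.CrossEntropyLoss()

    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs_pretrain):          # step 1: synthetic images only
        for x, y in synth_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    opt = torch.optim.Adam(model.parameters(), lr=1e-5)
    for _ in range(epochs_finetune):          # step 2: fine-tune on real images
        for x, y in real_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
```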
|
34
|
Liao H, Lin WA, Zhou SK, Luo J. ADN: Artifact Disentanglement Network for Unsupervised Metal Artifact Reduction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:634-643. [PMID: 31395543 DOI: 10.1109/tmi.2019.2933425] [Citation(s) in RCA: 81] [Impact Index Per Article: 16.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Current deep neural network-based approaches to computed tomography (CT) metal artifact reduction (MAR) are supervised methods that rely on synthesized metal artifacts for training. However, as synthesized data may not accurately simulate the underlying physical mechanisms of CT imaging, the supervised methods often generalize poorly to clinical applications. To address this problem, we propose, to the best of our knowledge, the first unsupervised learning approach to MAR. Specifically, we introduce a novel artifact disentanglement network that disentangles the metal artifacts from CT images in the latent space. It supports different forms of generation (artifact reduction, artifact transfer, self-reconstruction, etc.) with specialized loss functions that obviate the need for supervision with synthesized data. Extensive experiments show that, when applied to a synthesized dataset, our method addresses metal artifacts significantly better than existing unsupervised models designed for natural image-to-image translation problems, and achieves performance comparable to existing supervised models for MAR. When applied to clinical datasets, our method demonstrates better generalization ability than the supervised models. The source code of this paper is publicly available at https://github.com/liaohaofu/adn.
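The disentanglement scheme can be sketched at a schematic level: separate encoders split an image into an anatomy code and an artifact code, and a decoder recombines them to realize the different generation modes listed above. The module structure and the zeroed artifact code are illustrative stand-ins, not the published ADN architecture (see the repository linked above for the real implementation):

```python
import torch
import torch.nn as nn

class DisentangleSketch(nn.Module):
    """Schematic latent-space artifact disentanglement."""
    def __init__(self, enc_content, enc_artifact, decoder):
        super().__init__()
        self.enc_c, self.enc_a, self.dec = enc_content, enc_artifact, decoder

    def forward(self, x_artifact, x_clean):
        c_art, a = self.enc_c(x_artifact), self.enc_a(x_artifact)
        c_cln = self.enc_c(x_clean)
        return {
            "reduced": self.dec(c_art, torch.zeros_like(a)),  # artifact removal
            "transferred": self.dec(c_cln, a),                # artifact transfer
            "self_recon": self.dec(c_art, a),                 # self-reconstruction
        }
```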
|
35
|
Armanious K, Jiang C, Fischer M, Küstner T, Hepp T, Nikolaou K, Gatidis S, Yang B. MedGAN: Medical image translation using GANs. Comput Med Imaging Graph 2020; 79:101684. [DOI: 10.1016/j.compmedimag.2019.101684] [Citation(s) in RCA: 163] [Impact Index Per Article: 32.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2019] [Revised: 10/02/2019] [Accepted: 11/13/2019] [Indexed: 11/15/2022]
|
36
|
Wei J, Suriawinata A, Liu X, Ren B, Nasir-Moin M, Tomita N, Wei J, Hassanpour S. Difficulty Translation in Histopathology Images. Artif Intell Med 2020. [DOI: 10.1007/978-3-030-59137-3_22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
37
|
Wei J, Suriawinata A, Vaickus L, Ren B, Liu X, Wei J, Hassanpour S. Generative Image Translation for Data Augmentation in Colorectal Histopathology Images. PROCEEDINGS OF MACHINE LEARNING RESEARCH 2019; 116:10-24. [PMID: 33912842 PMCID: PMC8076951] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
We present an image translation approach to generate augmented data for mitigating data imbalances in a dataset of histopathology images of colorectal polyps, adenomatous tumors that can lead to colorectal cancer if left untreated. By applying cycle-consistent generative adversarial networks (CycleGANs) to a source domain of normal colonic mucosa images, we generate synthetic colorectal polyp images that belong to diagnostically less common polyp classes. Generated images maintain the general structure of their source image but exhibit adenomatous features that can be enhanced with our proposed filtration module, called Path-Rank-Filter. We evaluate the quality of generated images through Turing tests with four gastrointestinal pathologists, finding that at least two of the four pathologists could not identify generated images at a statistically significant level. Finally, we demonstrate that using CycleGAN-generated images to augment training data improves the AUC of a convolutional neural network for detecting sessile serrated adenomas by over 10%, suggesting that our approach might warrant further research for other histopathology image classification tasks.
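The augmentation step itself, folding generated minority-class images into the training set, is straightforward. A self-contained sketch with placeholder tensors standing in for real and CycleGAN-generated patches (shapes and labels are illustrative):

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

# stand-ins: real patches (label 0 = common class) plus generated patches
# of a diagnostically less common class (label 1)
real = TensorDataset(torch.randn(500, 3, 224, 224),
                     torch.zeros(500, dtype=torch.long))
synthetic = TensorDataset(torch.randn(200, 3, 224, 224),
                          torch.ones(200, dtype=torch.long))

# augment the imbalanced training set with the generated minority-class images
loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=32, shuffle=True)
```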
Affiliation(s)
| | | | - Louis Vaickus
- Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
| | - Bing Ren
- Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
| | - Xiaoying Liu
- Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
| | | | | |
|
38
|
Wang J, Noble JH, Dawant BM. Metal artifact reduction for the segmentation of the intra cochlear anatomy in CT images of the ear with 3D-conditional GANs. Med Image Anal 2019; 58:101553. [PMID: 31525672 PMCID: PMC6815688 DOI: 10.1016/j.media.2019.101553] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2018] [Revised: 06/14/2019] [Accepted: 09/03/2019] [Indexed: 11/20/2022]
Abstract
Cochlear implants (CIs) are surgically implanted neural prosthetic devices that are used to treat severe-to-profound hearing loss. These devices are programmed post-implantation, and precise knowledge of the implant position with respect to the intracochlear anatomy (ICA) can help the programming audiologists. Over the years, we have developed algorithms that permit determining the position of implanted electrodes relative to the ICA using pre- and post-implantation CT image pairs. However, these do not extend to CI recipients for whom pre-implantation CT (Pre-CT) images are not available, because post-operative images are affected by strong artifacts introduced by the metallic implant. To overcome this issue, we have proposed two methods to segment the ICA in post-implantation CT (Post-CT) images, but they lead to segmentation errors that are substantially larger than the errors obtained with Pre-CT images. Recently, we have proposed an approach that uses 2D conditional generative adversarial networks (cGANs) to synthesize pre-operative images from post-operative images. This permits the use of segmentation algorithms designed to operate on Pre-CT images even when these are not available, and we have shown that it substantially and significantly improves the results obtained with methods designed to operate directly on Post-CT images. In this article, we expand on our earlier work by moving from a 2D architecture to a 3D architecture. We perform a large validation and comparative study which shows that the 3D architecture significantly improves the quality of the synthetic images as measured by the commonly used mean structural similarity index (MSSIM). We also show that the segmentation results obtained with the 3D architecture are better than those obtained with the 2D architecture, although the differences have not reached statistical significance.
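The MSSIM figure of merit used here can be approximated by averaging slice-wise SSIM over a volume. A simple sketch using scikit-image (a generic stand-in, not the authors' evaluation code; volumes are assumed to be co-registered NumPy arrays):

```python
import numpy as np
from skimage.metrics import structural_similarity

def mean_ssim(vol_a, vol_b):
    """Slice-wise SSIM between two co-registered CT volumes (z, y, x),
    averaged over slices."""
    scores = [structural_similarity(a, b, data_range=float(b.max() - b.min()))
              for a, b in zip(vol_a, vol_b)]
    return float(np.mean(scores))
```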
Affiliation(s)
- Jianing Wang
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
| | - Jack H Noble
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
| | - Benoit M Dawant
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
| |
|
39
|
The Role of Generative Adversarial Networks in Radiation Reduction and Artifact Correction in Medical Imaging. J Am Coll Radiol 2019; 16:1273-1278. [DOI: 10.1016/j.jacr.2019.05.040] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2019] [Accepted: 05/23/2019] [Indexed: 01/08/2023]
|