51. Wagner F, Thies M, Gu M, Huang Y, Pechmann S, Patwari M, Ploner S, Aust O, Uderhardt S, Schett G, Christiansen S, Maier A. Ultra low-parameter denoising: Trainable bilateral filter layers in computed tomography. Med Phys 2022; 49:5107-5120. [PMID: 35583171] [DOI: 10.1002/mp.15718]
Abstract
BACKGROUND Computed tomography (CT) is widely used as an imaging tool to visualize three-dimensional structures with expressive bone-soft tissue contrast. However, CT resolution can be severely degraded through low-dose acquisitions, highlighting the importance of effective denoising algorithms. PURPOSE Most data-driven denoising techniques are based on deep neural networks and, therefore, contain hundreds of thousands of trainable parameters, making them incomprehensible and prone to prediction failures. Developing understandable and robust denoising algorithms that achieve state-of-the-art performance helps to minimize radiation dose while maintaining data integrity. METHODS This work presents an open-source CT denoising framework based on the idea of bilateral filtering. We propose a bilateral filter that can be incorporated into any deep learning pipeline and optimized in a purely data-driven way by calculating the gradient flow toward its hyperparameters and its input. Denoising in pure image-to-image pipelines and across different domains, such as raw detector data and the reconstructed volume using a differentiable backprojection layer, is demonstrated. In contrast to other models, our bilateral filter layer consists of only four trainable parameters and, by design, constrains the applied operation to follow the traditional bilateral filter algorithm. RESULTS Although using only three spatial parameters and one intensity range parameter per filter layer, the proposed denoising pipelines can compete with deep state-of-the-art denoising architectures that have several hundred thousand parameters. Competitive denoising performance is achieved on x-ray microscope bone data and the 2016 Low Dose CT Grand Challenge data set, with structural similarity index measures (SSIM) of 0.7094 and 0.9674 and peak signal-to-noise ratio (PSNR) values of 33.17 and 43.07 on the respective data sets. CONCLUSIONS Due to the extremely low number of trainable parameters with well-defined effects, prediction reliability and data integrity are guaranteed at all times in the proposed pipelines, in contrast to most other deep learning-based denoising architectures.
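The core idea, a bilateral filter whose few hyperparameters are learned by backpropagation, can be sketched in a few lines of PyTorch. The layer below is a simplified 2D illustration with three trainable parameters (two spatial sigmas and one intensity-range sigma); the kernel size, initial values, and implementation details are assumptions and not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrainableBilateralFilter2d(nn.Module):
    """Bilateral filter whose spatial and intensity-range bandwidths are learned."""

    def __init__(self, kernel_size: int = 5):
        super().__init__()
        self.k = kernel_size
        # Trainable hyperparameters: two spatial sigmas and one range sigma.
        # (The paper filters 3D volumes and therefore has three spatial sigmas.)
        self.sigma_x = nn.Parameter(torch.tensor(1.0))
        self.sigma_y = nn.Parameter(torch.tensor(1.0))
        self.sigma_r = nn.Parameter(torch.tensor(0.1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W)
        b, c, h, w = x.shape
        pad = self.k // 2
        patches = F.unfold(x, self.k, padding=pad).view(b, c, self.k * self.k, h * w)
        center = x.view(b, c, 1, h * w)

        # Spatial Gaussian weights over the kernel grid.
        coords = torch.arange(self.k, device=x.device, dtype=x.dtype) - pad
        yy, xx = torch.meshgrid(coords, coords, indexing="ij")
        spatial = torch.exp(-(xx ** 2) / (2 * self.sigma_x ** 2)
                            - (yy ** 2) / (2 * self.sigma_y ** 2)).reshape(1, 1, -1, 1)

        # Intensity-range Gaussian weights.
        rng = torch.exp(-((patches - center) ** 2) / (2 * self.sigma_r ** 2))

        weights = spatial * rng
        out = (weights * patches).sum(dim=2) / weights.sum(dim=2).clamp_min(1e-8)
        return out.view(b, c, h, w)
```

Because the layer is differentiable with respect to both its input and its sigmas, it can be dropped into an image-to-image pipeline and trained with, for example, an MSE loss against normal-dose images.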
Affiliation(s)
- Fabian Wagner, Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91058, Germany
- Mareike Thies, Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91058, Germany
- Mingxuan Gu, Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91058, Germany
- Yixing Huang, Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91058, Germany
- Sabrina Pechmann, Fraunhofer Institute for Ceramic Technologies and Systems IKTS, Forchheim, 91301, Germany
- Mayank Patwari, Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91058, Germany
- Stefan Ploner, Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91058, Germany
- Oliver Aust, Department of Internal Medicine 3 - Rheumatology and Immunology, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91054, Germany; University Hospital Erlangen, Erlangen, 91054, Germany
- Stefan Uderhardt, Department of Internal Medicine 3 - Rheumatology and Immunology, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91054, Germany; University Hospital Erlangen, Erlangen, 91054, Germany
- Georg Schett, Department of Internal Medicine 3 - Rheumatology and Immunology, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91054, Germany; University Hospital Erlangen, Erlangen, 91054, Germany
- Silke Christiansen, Fraunhofer Institute for Ceramic Technologies and Systems IKTS, Forchheim, 91301, Germany; Institute for Nanotechnology and Correlative Microscopy e.V. INAM, Forchheim, 91301, Germany
- Andreas Maier, Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91058, Germany
52. Li S, Li Q, Li R, Wu W, Zhao J, Qiang Y, Tian Y. An adaptive self-guided wavelet convolutional neural network with compound loss for low-dose CT denoising. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103543]
53. Bai J, Liu Y, Yang H. Sparse-View CT Reconstruction Based on a Hybrid Domain Model with Multi-Level Wavelet Transform. Sensors 2022; 22:3228. [PMID: 35590918] [PMCID: PMC9105730] [DOI: 10.3390/s22093228]
Abstract
Reconstruction from sparsely sampled projection data generates obvious streaking artifacts, degrading image quality and affecting medical diagnosis. The wavelet transform can effectively decompose the directional components of an image, so highly directional artifact features and edge details can be better detected in the wavelet domain. Therefore, a hybrid-domain method based on the wavelet transform is proposed in this paper for sparse-view CT reconstruction. The reconstruction model combines the wavelet, spatial, and Radon domains to restore projection consistency and enhance image details. In addition, the global distribution of artifacts requires the network to have a large receptive field, so a multi-level wavelet convolutional neural network (MWCNN) is applied to the hybrid-domain model. The wavelet transform is used in the encoding part of the network to reduce the size of feature maps in place of pooling operations, and the inverse wavelet transform is deployed in the decoding part to recover image details. The proposed method achieves a PSNR of 41.049 dB and an SSIM of 0.958 with 120 projections among the three angular intervals tested, the highest values reported in this paper. Numerical analysis and the reconstructed images show that the hybrid-domain method is superior to single-domain methods, and that the multi-level wavelet transform model is more suitable for CT reconstruction than a single-level wavelet transform.
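The architectural choice highlighted in the abstract, replacing pooling with a wavelet decomposition in the encoder and recovering resolution with its inverse in the decoder, can be illustrated with the 2D Haar transform commonly used in MWCNN-style networks. This is a generic sketch under that assumption, not the authors' implementation.

```python
import torch

def haar_dwt2d(x: torch.Tensor) -> torch.Tensor:
    """One-level Haar decomposition: (B, C, H, W) -> (B, 4C, H/2, W/2)."""
    a = x[:, :, 0::2, 0::2]  # even rows, even columns
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2           # low-frequency approximation
    lh = (-a - b + c + d) / 2          # detail (high-frequency) sub-band
    hl = (-a + b - c + d) / 2          # detail (high-frequency) sub-band
    hh = (a - b - c + d) / 2           # detail (high-frequency) sub-band
    return torch.cat([ll, lh, hl, hh], dim=1)

def haar_idwt2d(y: torch.Tensor) -> torch.Tensor:
    """Exact inverse of haar_dwt2d: (B, 4C, H/2, W/2) -> (B, C, H, W)."""
    ll, lh, hl, hh = torch.chunk(y, 4, dim=1)
    b, c, h, w = ll.shape
    x = ll.new_zeros(b, c, 2 * h, 2 * w)
    x[:, :, 0::2, 0::2] = (ll - lh - hl + hh) / 2
    x[:, :, 0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[:, :, 1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[:, :, 1::2, 1::2] = (ll + lh + hl + hh) / 2
    return x
```

Each call to haar_dwt2d halves the spatial size and quadruples the channel count without discarding information, which is exactly the property that lets the decoder restore details through haar_idwt2d.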
54. Chen J, Bermejo I, Dekker A, Wee L. Generative models improve radiomics performance in different tasks and different datasets: An experimental study. Phys Med 2022; 98:11-17. [PMID: 35468494] [DOI: 10.1016/j.ejmp.2022.04.008]
Abstract
PURPOSE Radiomics is an active area of research focusing on high-throughput feature extraction from medical images, with a wide array of applications in clinical practice such as clinical decision support in oncology. However, noise in low-dose computed tomography (CT) scans can impair the accurate extraction of radiomic features. In this article, we investigate the possibility of using deep learning generative models to improve the performance of radiomics from low-dose CTs. METHODS We used two datasets of low-dose CT scans - NSCLC Radiogenomics and LIDC-IDRI - as test datasets for two tasks: pre-treatment survival prediction and lung cancer diagnosis. We used encoder-decoder networks and conditional generative adversarial networks (CGANs) trained in a previous study as generative models to transform low-dose CT images into full-dose CT images. Radiomic features extracted from the original and improved CT scans were used to build two classifiers - a support vector machine (SVM) and a deep attention-based multiple instance learning model - for survival prediction and lung cancer diagnosis, respectively. Finally, we compared the performance of the models derived from the original and improved CT scans. RESULTS Denoising with the encoder-decoder network and the CGAN improved the area under the curve (AUC) of survival prediction from 0.52 to 0.57 (p-value < 0.01). The encoder-decoder network and the CGAN improved the AUC of lung cancer diagnosis from 0.84 to 0.88 and 0.89, respectively (p-value < 0.01). Finally, there were no statistically significant improvements in AUC for the encoder-decoder network and the CGAN (p-value = 0.34) when the networks were trained for 75 and 100 epochs. CONCLUSION Generative models can improve the performance of low-dose CT-based radiomics in different tasks. Hence, denoising using generative models seems to be a necessary pre-processing step for calculating radiomic features from low-dose CTs.
Affiliation(s)
- Junhua Chen, Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht 6229 ET, Netherlands
- Inigo Bermejo, Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht 6229 ET, Netherlands
- Andre Dekker, Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht 6229 ET, Netherlands
- Leonard Wee, Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht 6229 ET, Netherlands
55. Wang H, Zhao X, Liu W, Li LC, Ma J, Guo L. Texture-Aware Dual Domain Mapping Model for Low Dose CT Reconstruction. Med Phys 2022; 49:3860-3873. [PMID: 35297051] [DOI: 10.1002/mp.15607]
Abstract
BACKGROUND Remarkable progress has been made in low-dose CT reconstruction by applying deep learning techniques. However, establishing an intrinsic link between deep learning and CT texture preservation remains a significant challenge for further improving low-dose CT reconstruction. PURPOSE Most existing deep learning-based low-dose CT reconstruction methods are derived from popular frameworks and focus on the image domain. Even the few methods that operate on dual domains (sinogram and image) by considering the processing of the data itself achieve limited performance because they lack texture perception. With this in mind, we propose a texture-aware method on dual domains, so that the reconstruction process can be uniformly driven by visual effects. METHODS The proposed method processes two domains: the sinogram domain and the image domain. For the sinogram domain, we designed a novel dilated residual network (S-DRN) that enlarges the receptive field to obtain multi-scale information. For the image domain, we propose a self-attention (SA) residual encoder-decoder network (SRED-Net) as the denoising network to obtain more acceptable edges and textures. In addition, a composite loss function, combining the feature loss constructed by the proposed boundary and texture feature aware network (BTFAN) with the mean square error (MSE), yields higher image quality while retaining more details and producing fewer artifacts, thereby improving visual image quality. RESULTS The proposed method was validated on both the AAPM-Mayo Clinic low-dose CT dataset and real clinical data. Experimental results demonstrated that the new method achieves state-of-the-art performance on objective indicators and visual metrics in terms of denoising and texture restoration. CONCLUSIONS Compared with single-domain or existing dual-domain processing strategies, the proposed texture-aware dual domain mapping network (TADDM-Net) better improves the visual quality of reconstructed CT images. We also provide intuitive evidence of model interpretability.
Affiliation(s)
- Huafeng Wang, North China University of Technology, Department of Radiology, Stony Brook University, Rm 067, HSC, T8, New York, 11790, China
- Xuemei Zhao, School of Information Technology, North China University of Technology, Beijing, 100041, China
- Wanquan Liu, School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou, 510335, China
- Lihong C Li, City University of New York at College of Staten Island, Engineering and Environmental Science, Room 1N-225, 2800 Victory Blvd, Staten Island, New York, NY, 10314, USA
- Jianhua Ma, Southern Medical University, Department of Biomedical Engineering, Shatai Road 1023, BAIYUN, Tonghe 1838, Guangzhou, Guangdong, 510515, China
- Lei Guo, School of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191, China
56. Dong G, Zhang C, Deng L, Zhu Y, Dai J, Song L, Meng R, Niu T, Liang X, Xie Y. A deep unsupervised learning framework for the 4D CBCT artifact correction. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac55a5]
Abstract
Objective. Four-dimensional cone-beam computed tomography (4D CBCT) has unique advantages in moving-target localization, tracking, and therapeutic dose accumulation in adaptive radiotherapy. However, the severe fringe artifacts and noise degradation caused by 4D CBCT reconstruction restrict its clinical application. We propose a novel deep unsupervised learning model to generate high-quality 4D CBCT images from poor-quality ones. Approach. The proposed model uses a contrastive loss function to preserve anatomical structure in the corrected image. To preserve the relationship between the input and output images, we use a multilayer, patch-based method rather than operating on entire images. Furthermore, we draw negatives from within the input 4D CBCT rather than from the rest of the dataset. Main results. The results showed that streak and motion artifacts were significantly suppressed, and the spatial resolution of pulmonary vessels and microstructure was also improved. To demonstrate the results in different directions, we provide an animation in the supplementary material showing different views of the predicted correction image. Significance. The proposed method can be integrated into any 4D CBCT reconstruction method and may be a practical way to enhance the image quality of 4D CBCT.
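The objective sketched in the abstract, a multilayer, patch-based contrastive loss whose negatives come from other locations of the same input, is close in spirit to a patch-wise InfoNCE loss. The function below shows that idea on a single feature map; the feature extractor, the number of sampled patches, and the temperature are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_in: torch.Tensor, feat_out: torch.Tensor,
                   num_patches: int = 256, tau: float = 0.07) -> torch.Tensor:
    """feat_in/feat_out: (B, C, H, W) features of the input and corrected image."""
    b, c, h, w = feat_in.shape
    n = min(num_patches, h * w)
    # Sample the same spatial locations in both feature maps.
    idx = torch.randperm(h * w, device=feat_in.device)[:n]
    q = F.normalize(feat_out.flatten(2)[:, :, idx].permute(0, 2, 1), dim=-1)  # (B, N, C)
    k = F.normalize(feat_in.flatten(2)[:, :, idx].permute(0, 2, 1), dim=-1)   # (B, N, C)
    # Each query is compared with every key from the same image, so the
    # off-diagonal entries serve as "internal" negatives.
    logits = torch.bmm(q, k.transpose(1, 2)) / tau                            # (B, N, N)
    target = torch.arange(n, device=logits.device).repeat(b)
    return F.cross_entropy(logits.flatten(0, 1), target)
```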
57. Bone and Soft Tissue Tumors. Radiol Clin North Am 2022; 60:339-358. [DOI: 10.1016/j.rcl.2021.11.011]
58. Immonen E, Wong J, Nieminen M, Kekkonen L, Roine S, Törnroos S, Lanca L, Guan F, Metsälä E. The use of deep learning towards dose optimization in low-dose computed tomography: A scoping review. Radiography (Lond) 2022; 28:208-214. [PMID: 34325998] [DOI: 10.1016/j.radi.2021.07.010]
Abstract
INTRODUCTION Low-dose computed tomography tends to produce lower image quality than normal-dose computed tomography (CT), although it can help to reduce the radiation hazards of CT scanning. Research has shown that artificial intelligence (AI) technologies, especially deep learning, can help enhance the image quality of low-dose CT by denoising images. This scoping review aims to create an overview of how AI technologies, especially deep learning, can be used in dose optimisation for low-dose CT. METHODS Literature searches of ProQuest, PubMed, Cinahl, ScienceDirect, EbscoHost Ebook Collection and Ovid were carried out to find research articles published between the years 2015 and 2020. In addition, a manual search was conducted in SweMed+, SwePub, NORA, Taylor & Francis Online and Medic. RESULTS Following a systematic search process, the review comprised 16 articles. Articles were organised according to the effects of the deep learning networks, e.g. image noise reduction and image restoration. Deep learning can be used in multiple ways to facilitate dose optimisation in low-dose CT; most articles discuss image noise reduction. CONCLUSION Deep learning can be used in the optimisation of patients' radiation dose. Nevertheless, image quality is normally lower in low-dose CT (LDCT) than in regular-dose CT scans because of the smaller radiation dose. With the help of deep learning, the image quality can be improved to match that of regular-dose CT. IMPLICATIONS TO PRACTICE A lower dose may decrease patients' radiation risk but may affect the image quality of CT scans. Artificial intelligence technologies can be used to improve image quality in low-dose CT scans. Radiologists and radiographers should have proper education and knowledge about the techniques used.
Affiliation(s)
- E Immonen, Metropolia University of Applied Sciences, Finland
- J Wong, Singapore Institute of Technology (SIT), Singapore
- M Nieminen, Metropolia University of Applied Sciences, Finland
- L Kekkonen, Metropolia University of Applied Sciences, Finland
- S Roine, Metropolia University of Applied Sciences, Finland
- S Törnroos, Metropolia University of Applied Sciences, Finland
- L Lanca, Singapore Institute of Technology (SIT), Singapore
- F Guan, Singapore Institute of Technology (SIT), Singapore
- E Metsälä, Metropolia University of Applied Sciences, Finland
59. Iterative Reconstruction for Low-Dose CT using Deep Gradient Priors of Generative Model. IEEE Trans Radiat Plasma Med Sci 2022. [DOI: 10.1109/trpms.2022.3148373]
60. Zeng D, Wang L, Geng M, Li S, Deng Y, Xie Q, Li D, Zhang H, Li Y, Xu Z, Meng D, Ma J. Noise-Generating-Mechanism-Driven Unsupervised Learning for Low-Dose CT Sinogram Recovery. IEEE Trans Radiat Plasma Med Sci 2022. [DOI: 10.1109/trpms.2021.3083361]
61. Bührer M, Xu H, Hendriksen AA, Büchi FN, Eller J, Stampanoni M, Marone F. Deep learning based classification of dynamic processes in time-resolved X-ray tomographic microscopy. Sci Rep 2021; 11:24174. [PMID: 34921184] [PMCID: PMC8683503] [DOI: 10.1038/s41598-021-03546-8]
Abstract
Time-resolved X-ray tomographic microscopy is an invaluable technique for investigating dynamic processes in 3D over extended time periods. Because of the limited signal-to-noise ratio caused by the short exposure times and sparse angular sampling frequency, obtaining quantitative information through post-processing remains challenging and requires intensive manual labor. This severely limits the accessible experimental parameter space and so prevents fully exploiting the capabilities of dedicated time-resolved X-ray tomographic stations. Although automatic approaches, often exploiting iterative reconstruction methods, are currently being developed, the required computational costs typically remain high. Here, we propose a highly efficient reconstruction and classification pipeline (SIRT-FBP-MS-D-DIFF) that combines an algebraic filter approximation and machine learning to significantly reduce the computational time. The dynamic features are reconstructed by standard filtered back-projection with an algebraic filter to approximate iterative reconstruction quality in a computationally efficient manner. The raw reconstructions are post-processed with a trained convolutional neural network to extract the dynamic features from the low signal-to-noise ratio reconstructions in a fully automatic manner. The capabilities of the proposed pipeline are demonstrated on three different dynamic fuel cell datasets, one used for training and two for testing without network retraining. The proposed approach enables automatic processing of several hundred datasets in a single day on a single GPU node readily available at most institutions, thus extending the possibilities in future dynamic X-ray tomographic investigations.
Affiliation(s)
- Minna Bührer, Swiss Light Source, Paul Scherrer Institut, Forschungsstrasse 111, 5232, Villigen, Aargau, Switzerland; Institute for Biomedical Engineering, University and ETH Zürich, 8092 Zürich, Switzerland
- Hong Xu, Electrochemistry Laboratory, Paul Scherrer Institut, Forschungsstrasse 111, 5232, Villigen, Aargau, Switzerland
- Allard A Hendriksen, Centrum Wiskunde & Informatica, Science Park 123, 1098 XG, Amsterdam, The Netherlands
- Felix N Büchi, Electrochemistry Laboratory, Paul Scherrer Institut, Forschungsstrasse 111, 5232, Villigen, Aargau, Switzerland
- Jens Eller, Electrochemistry Laboratory, Paul Scherrer Institut, Forschungsstrasse 111, 5232, Villigen, Aargau, Switzerland
- Marco Stampanoni, Swiss Light Source, Paul Scherrer Institut, Forschungsstrasse 111, 5232, Villigen, Aargau, Switzerland; Institute for Biomedical Engineering, University and ETH Zürich, 8092 Zürich, Switzerland
- Federica Marone, Swiss Light Source, Paul Scherrer Institut, Forschungsstrasse 111, 5232, Villigen, Aargau, Switzerland
62. Wang S, Cao G, Wang Y, Liao S, Wang Q, Shi J, Li C, Shen D. Review and Prospect: Artificial Intelligence in Advanced Medical Imaging. Front Radiol 2021; 1:781868. [PMID: 37492170] [PMCID: PMC10365109] [DOI: 10.3389/fradi.2021.781868]
Abstract
Artificial intelligence (AI) as an emerging technology is gaining momentum in medical imaging. Recently, deep learning-based AI techniques have been actively investigated in medical imaging, and their potential applications range from data acquisition and image reconstruction to image analysis and understanding. In this review, we focus on the use of deep learning in image reconstruction for advanced medical imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). In particular, recent deep learning-based reconstruction methods are emphasized, according to their methodological designs and their performance in handling volumetric imaging data. It is expected that this review can help relevant researchers understand how to adapt AI for medical imaging and which advantages can be achieved with its assistance.
Affiliation(s)
- Shanshan Wang, Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China; Pengcheng Laboratory, Shenzhen, China
- Guohua Cao, School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Yan Wang, School of Computer Science, Sichuan University, Chengdu, China
- Shu Liao, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Qian Wang, School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Jun Shi, School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Cheng Li, Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Dinggang Shen, School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
63. Yoo J, Jin KH, Gupta H, Yerly J, Stuber M, Unser M. Time-Dependent Deep Image Prior for Dynamic MRI. IEEE Trans Med Imaging 2021; 40:3337-3348. [PMID: 34043506] [DOI: 10.1109/tmi.2021.3084288]
Abstract
We propose a novel unsupervised deep-learning-based algorithm for dynamic magnetic resonance imaging (MRI) reconstruction. Dynamic MRI requires rapid data acquisition for the study of moving organs such as the heart. We introduce a generalized version of the deep-image-prior approach, which optimizes the weights of a reconstruction network to fit a sequence of sparsely acquired dynamic MRI measurements. Our method needs neither prior training nor additional data. In particular, for cardiac images, it does not require the marking of heartbeats or the reordering of spokes. The key ingredients of our method are threefold: 1) a fixed low-dimensional manifold that encodes the temporal variations of images; 2) a network that maps the manifold into a more expressive latent space; and 3) a convolutional neural network that generates a dynamic series of MRI images from the latent variables and that favors their consistency with the measurements in k-space. Our method outperforms the state-of-the-art methods quantitatively and qualitatively in both retrospective and real fetal cardiac datasets. To the best of our knowledge, this is the first unsupervised deep-learning-based method that can reconstruct the continuous variation of dynamic MRI sequences with high spatial resolution.
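At its core, a deep-image-prior reconstruction fits only the network weights to the measured data through the acquisition model, with no training set. The minimal, static single-image sketch below conveys that loop; the tiny generator, the forward-operator interface, and the optimizer settings are placeholders, and the paper's temporal manifold and latent mapping network are omitted.

```python
import torch
import torch.nn as nn

def dip_reconstruct(y: torch.Tensor, forward_op, latent: torch.Tensor,
                    steps: int = 2000, lr: float = 1e-3) -> torch.Tensor:
    """Fit a small generator so that forward_op(net(latent)) matches the data y.

    forward_op must be a differentiable PyTorch operator (e.g. an NUFFT or a
    Radon transform layer); both it and the architecture below are placeholders.
    """
    net = nn.Sequential(
        nn.Conv2d(latent.shape[1], 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 1, 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = net(latent)                                    # candidate image
        loss = torch.mean((forward_op(x) - y).abs() ** 2)  # data consistency
        loss.backward()
        opt.step()
    return net(latent).detach()
```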
64. Bera S, Biswas PK. Noise Conscious Training of Non Local Neural Network Powered by Self Attentive Spectral Normalized Markovian Patch GAN for Low Dose CT Denoising. IEEE Trans Med Imaging 2021; 40:3663-3673. [PMID: 34224348] [DOI: 10.1109/tmi.2021.3094525]
Abstract
The explosive rise in the use of computed tomography (CT) imaging in medical practice has heightened public concern over the associated patient radiation dose. On the other hand, reducing the radiation dose leads to increased noise and artifacts, which adversely degrade the scan's interpretability. In recent times, deep learning-based techniques have emerged as a promising approach for low-dose CT (LDCT) denoising. However, some common bottlenecks still exist that hinder deep learning-based techniques from delivering their best performance. In this study, we attempted to mitigate these problems with three novel contributions. First, we propose a novel convolutional module as a first attempt to exploit the neighborhood similarity of CT images for denoising tasks; the proposed module boosted denoising performance by a significant margin. Next, we addressed the non-stationarity of CT noise and introduced a new noise-aware mean square error loss for LDCT denoising, which also alleviates the laborious effort required when training a CT denoising network on image patches. Lastly, we propose a novel discriminator function for CT denoising tasks. A conventional vanilla discriminator tends to overlook fine structural details and focus on global agreement; our discriminator leverages self-attention and pixel-wise GANs to restore the diagnostic quality of LDCT images. Our method, validated on the publicly available 2016 NIH-AAPM-Mayo Clinic Low Dose CT Grand Challenge dataset, performed remarkably better than existing state-of-the-art methods. The corresponding source code is available at: https://github.com/reach2sbera/ldct_nonlocal.
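One way to make an MSE loss "noise conscious" for non-stationary CT noise is to down-weight pixels in regions where the local noise level is estimated to be high. The function below is an illustrative interpretation of that idea, not the authors' exact loss; the local-variance proxy, window size, and normalization are assumptions.

```python
import torch
import torch.nn.functional as F

def noise_aware_mse(pred: torch.Tensor, target: torch.Tensor,
                    ldct: torch.Tensor, window: int = 7, eps: float = 1e-6) -> torch.Tensor:
    """MSE weighted by an estimate of the local noise level of the low-dose input."""
    pad = window // 2
    mean = F.avg_pool2d(ldct, window, stride=1, padding=pad)
    var = F.avg_pool2d(ldct ** 2, window, stride=1, padding=pad) - mean ** 2
    weight = 1.0 / (var.clamp_min(0.0) + eps)   # noisier regions get smaller weights
    weight = weight / weight.mean()             # keep the overall loss scale stable
    return torch.mean(weight * (pred - target) ** 2)
```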
65. Amirrashedi M, Sarkar S, Mamizadeh H, Ghadiri H, Ghafarian P, Zaidi H, Ay MR. Leveraging deep neural networks to improve numerical and perceptual image quality in low-dose preclinical PET imaging. Comput Med Imaging Graph 2021; 94:102010. [PMID: 34784505] [DOI: 10.1016/j.compmedimag.2021.102010]
Abstract
The amount of radiotracer injected into laboratory animals is still the most daunting challenge facing translational PET studies. Since low-dose imaging is characterized by a higher level of noise, the quality of the reconstructed images leaves much to be desired. As the most ubiquitous techniques in denoising applications, edge-aware denoising filters and reconstruction-based techniques have drawn significant attention in low-count applications. However, for the last few years, much of the credit has gone to deep-learning (DL) methods, which provide more robust solutions to handle various conditions. Although extensively explored in clinical studies, to the best of our knowledge, there is a lack of studies exploring the feasibility of DL-based image denoising in low-count small-animal PET imaging. Therefore, herein, we investigated different DL frameworks to map low-dose small-animal PET images to their full-dose equivalents with quality and visual similarity on a par with those of standard acquisitions. The performance of the DL model was also compared to other well-established filters, including Gaussian smoothing, nonlocal means, and anisotropic diffusion. Visual inspection and quantitative assessment based on quality metrics proved the superior performance of the DL methods in low-count small-animal PET studies, paving the way for a more detailed exploration of DL-assisted algorithms in this domain.
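For context, the classical filters used as comparison points (Gaussian smoothing, nonlocal means, and anisotropic diffusion) are straightforward to reproduce; a possible baseline script is sketched below, with all parameter values chosen arbitrarily rather than taken from the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import denoise_nl_means, estimate_sigma

def perona_malik(img: np.ndarray, n_iter: int = 20, kappa: float = 0.05,
                 step: float = 0.2) -> np.ndarray:
    """Simple anisotropic (Perona-Malik) diffusion."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Differences toward the four neighbours.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conduction coefficients.
        cn, cs, ce, cw = (np.exp(-(d / kappa) ** 2) for d in (dn, ds, de, dw))
        u += step * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def classical_baselines(img: np.ndarray) -> dict:
    sigma = float(np.mean(estimate_sigma(img)))
    return {
        "gaussian": gaussian_filter(img, sigma=1.0),
        "nlm": denoise_nl_means(img, h=1.15 * sigma, fast_mode=True),
        "perona_malik": perona_malik(img),
    }
```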
Affiliation(s)
- Mahsa Amirrashedi, Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Saeed Sarkar, Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Hojjat Mamizadeh, Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Hossein Ghadiri, Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Pardis Ghafarian, Chronic Respiratory Diseases Research Center, National Research Institute of Tuberculosis and Lung Diseases (NRITLD), Shahid Beheshti University of Medical Sciences, Tehran, Iran; PET/CT and Cyclotron Center, Masih Daneshvari Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Habib Zaidi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva CH-1211, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- Mohammad Reza Ay, Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
66. Zhang Y, Hu D, Zhao Q, Quan G, Liu J, Liu Q, Zhang Y, Coatrieux G, Chen Y, Yu H. CLEAR: Comprehensive Learning Enabled Adversarial Reconstruction for Subtle Structure Enhanced Low-Dose CT Imaging. IEEE Trans Med Imaging 2021; 40:3089-3101. [PMID: 34270418] [DOI: 10.1109/tmi.2021.3097808]
Abstract
X-ray computed tomography (CT) is of great clinical significance in medical practice because it can provide anatomical information about the human body without invasion, while its radiation risk has continued to attract public concern. Reducing the radiation dose may introduce noise and artifacts into the reconstructed images, which will interfere with the judgments of radiologists. Previous studies have confirmed that deep learning (DL) is promising for improving low-dose CT imaging. However, almost all DL-based methods suffer from subtle structure degeneration and blurring after aggressive denoising, which has become a general challenge. This paper develops the Comprehensive Learning Enabled Adversarial Reconstruction (CLEAR) method to tackle these problems. CLEAR achieves subtle-structure-enhanced low-dose CT imaging through a progressive improvement strategy. First, the generator established on the comprehensive domain can extract more features than one built on degraded CT images and directly maps raw projections to high-quality CT images, which differs significantly from routine GAN practice. Second, a multi-level loss is assigned to the generator to push all the network components to be updated towards high-quality reconstruction, preserving the consistency between generated images and gold-standard images. Finally, following the WGAN-GP modality, CLEAR can migrate the real statistical properties to the generated images to alleviate over-smoothing. Qualitative and quantitative analyses have demonstrated the competitive performance of CLEAR in terms of noise suppression, structural fidelity and visual perception improvement.
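Because CLEAR follows the WGAN-GP modality, a gradient-penalty term is central to its adversarial training. The snippet below is the standard WGAN-GP penalty for reference only; the critic and penalty weight are placeholders, and CLEAR's comprehensive-domain generator and multi-level loss are not reproduced.

```python
import torch

def gradient_penalty(critic, real: torch.Tensor, fake: torch.Tensor,
                     lambda_gp: float = 10.0) -> torch.Tensor:
    """WGAN-GP penalty pushing the critic's gradient norm toward 1."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1.0 - alpha) * fake).requires_grad_(True)
    score = critic(interp)
    grads, = torch.autograd.grad(outputs=score.sum(), inputs=interp, create_graph=True)
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```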
67. Zhi S, Kachelrieß M, Pan F, Mou X. CycN-Net: A Convolutional Neural Network Specialized for 4D CBCT Images Refinement. IEEE Trans Med Imaging 2021; 40:3054-3064. [PMID: 34010129] [DOI: 10.1109/tmi.2021.3081824]
Abstract
Four-dimensional cone-beam computed tomography (4D CBCT) has been developed to provide a sequence of phase-resolved reconstructions in image-guided radiation therapy. However, 4D CBCT images are degraded by severe streaking artifacts and noise because phase-resolved reconstruction is an extremely sparse-view CT procedure in which only a few under-sampled projections are used to reconstruct each phase. Aiming at improving the overall quality of 4D CBCT images, we proposed two CNN models, named N-Net and CycN-Net, by fully exploiting the inherent properties of 4D CBCT. Specifically, the proposed N-Net incorporates the prior image reconstructed from the entire projection data into a U-Net backbone to boost the image quality of each phase-resolved image. Building on N-Net, the proposed CycN-Net also considers the temporal correlation among the phase-resolved images. Extensive experiments on both XCAT simulation data and real patient 4D CBCT datasets were carried out to verify the feasibility of the proposed CNNs. Both networks can effectively suppress streaking artifacts and noise while simultaneously restoring distinct features, compared with existing CNN models and two state-of-the-art iterative algorithms. Moreover, the proposed method is robust in handling complicated tasks across various patient datasets and imaging devices, which implies its excellent generalization ability.
68. Ye S, Li Z, McCann MT, Long Y, Ravishankar S. Unified Supervised-Unsupervised (SUPER) Learning for X-Ray CT Image Reconstruction. IEEE Trans Med Imaging 2021; 40:2986-3001. [PMID: 34232871] [DOI: 10.1109/tmi.2021.3095310]
Abstract
Traditional model-based image reconstruction (MBIR) methods combine forward and noise models with simple object priors. Recent machine learning methods for image reconstruction typically involve supervised learning or unsupervised learning, both of which have their advantages and disadvantages. In this work, we propose a unified supervised-unsupervised (SUPER) learning framework for X-ray computed tomography (CT) image reconstruction. The proposed learning formulation combines both unsupervised learning-based priors (or even simple analytical priors) together with (supervised) deep network-based priors in a unified MBIR framework based on a fixed point iteration analysis. The proposed training algorithm is also an approximate scheme for a bilevel supervised training optimization problem, wherein the network-based regularizer in the lower-level MBIR problem is optimized using an upper-level reconstruction loss. The training problem is optimized by alternating between updating the network weights and iteratively updating the reconstructions based on those weights. We demonstrate the learned SUPER models' efficacy for low-dose CT image reconstruction, for which we use the NIH AAPM Mayo Clinic Low Dose CT Grand Challenge dataset for training and testing. In our experiments, we studied different combinations of supervised deep network priors and unsupervised learning-based or analytical priors. Both numerical and visual results show the superiority of the proposed unified SUPER methods over standalone supervised learning-based methods, iterative MBIR methods, and variations of SUPER obtained via ablation studies. We also show that the proposed algorithm converges rapidly in practice.
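The unified framework alternates between a (supervised) network-based prior and an MBIR-style data-fidelity update. A heavily simplified test-time sketch of such an alternation is given below; the projector A, its adjoint At, the pre-trained network, and all step sizes and weights are placeholders rather than the authors' exact formulation.

```python
import torch

def super_reconstruct(y, A, At, net, x0, outer: int = 10, inner: int = 20,
                      mu: float = 0.5, step: float = 1e-3):
    """Alternate a network-based prior with gradient steps on the MBIR objective
    ||Ax - y||^2 + mu * ||x - z||^2, where z is the current network output."""
    x = x0.clone()
    for _ in range(outer):
        with torch.no_grad():
            z = net(x)                      # (pre-trained) network-based denoising prior
        for _ in range(inner):              # data-fidelity / regularized update
            grad = At(A(x) - y) + mu * (x - z)
            x = x - step * grad
    return x
```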
69. Lee M, Kim H, Cho HM, Kim HJ. Ultra-Low-Dose Spectral CT Based on a Multi-level Wavelet Convolutional Neural Network. J Digit Imaging 2021; 34:1359-1375. [PMID: 34590198] [DOI: 10.1007/s10278-021-00467-w]
Abstract
Spectral computed tomography (CT) based on a photon-counting detector (PCD) is a promising technique with the potential to improve lesion detection, tissue characterization, and material decomposition. PCD-based scanners have several technical issues, including operation in the step-and-scan mode and long data acquisition times. One straightforward solution to these issues is to reduce the number of projection views. However, if the projection data are under-sampled or noisy, it is challenging to produce a correct solution without precise prior information. Recently, deep-learning approaches have demonstrated impressive performance for under-sampled CT reconstruction. In this work, the authors present a multilevel wavelet convolutional neural network (MWCNN) to address the limitations of PCD-based scanners. Data properties of the proposed deep-learning-based image reconstruction in under-sampled spectral CT are analyzed using two measures: sampling density and data incoherence. This work compares the proposed method with four other methods for restoring sparsely sampled data; we investigate and compare these methods through simulations and real experiments. In addition, the data properties are quantitatively analyzed and compared with respect to the effect of sparse sampling on image quality. Our results indicate that both sampling density and data incoherence affect the image quality of the studied methods. Among the different methods, the proposed MWCNN shows promising results, achieving the highest performance in terms of various evaluation parameters such as structural similarity, root mean square error, and resolution. Based on the imaging results and quantitative evaluation, this study confirms that the proposed deep-learning network structure provides excellent image reconstruction in sparse-view PCD-based CT. These results demonstrate the feasibility of sparse-view PCD-based CT using the MWCNN. The advantage of sparse-view CT is that it can significantly reduce the radiation dose while, with PCDs, still providing images in several energy bands. These results indicate that the MWCNN possesses great potential for sparse-view PCD-based CT.
Affiliation(s)
- Minjae Lee, Department of Radiation Convergence Engineering, Yonsei University, 1 Yonseidae-gil, Wonju, 26493, Republic of Korea
- Hyemi Kim, Department of Radiological Science, Yonsei University, 1 Yonseidae-gil, Wonju, 26493, Republic of Korea
- Hyo-Min Cho, Korea Research Institute of Standards and Science, Daejeon, Republic of Korea
- Hee-Joung Kim, Department of Radiation Convergence Engineering, Yonsei University, 1 Yonseidae-gil, Wonju, 26493, Republic of Korea; Department of Radiological Science, Yonsei University, 1 Yonseidae-gil, Wonju, 26493, Republic of Korea
70. Low-Dose CT Image Denoising with Improving WGAN and Hybrid Loss Function. Comput Math Methods Med 2021; 2021:2973108. [PMID: 34484414] [PMCID: PMC8416402] [DOI: 10.1155/2021/2973108]
Abstract
The X-ray radiation from computed tomography (CT) carries a potential risk. Simply decreasing the dose makes CT images noisy and compromises diagnostic performance. Here, we develop a novel method for denoising low-dose CT images. Our framework is based on an improved generative adversarial network coupled with a hybrid loss function that includes adversarial, perceptual, sharpness, and structural similarity losses. Among these terms, the perceptual and structural similarity losses are used to preserve textural details, the sharpness loss makes the reconstructed images clear, and the adversarial loss sharpens boundary regions. Experimental results show that the proposed method removes noise and artifacts more effectively than state-of-the-art methods in terms of visual effect, quantitative measurements, and texture details.
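A hybrid generator objective of the kind described (adversarial, perceptual, sharpness, and structural-similarity terms) might be assembled as below. The weights, the gradient-difference definition of sharpness, and the interfaces for the perceptual feature extractor and the SSIM function are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def sharpness_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Gradient-difference term that keeps edges crisp."""
    dxp, dyp = pred[..., :, 1:] - pred[..., :, :-1], pred[..., 1:, :] - pred[..., :-1, :]
    dxt, dyt = target[..., :, 1:] - target[..., :, :-1], target[..., 1:, :] - target[..., :-1, :]
    return F.l1_loss(dxp, dxt) + F.l1_loss(dyp, dyt)

def hybrid_generator_loss(pred, target, critic_score, vgg_features, ssim_fn,
                          w_adv=1e-3, w_perc=0.1, w_sharp=0.1, w_ssim=1.0):
    l_adv = -critic_score.mean()                                   # WGAN-style adversarial term
    l_perc = F.mse_loss(vgg_features(pred), vgg_features(target))  # perceptual term
    l_sharp = sharpness_loss(pred, target)
    l_ssim = 1.0 - ssim_fn(pred, target)                           # structural similarity term
    return w_adv * l_adv + w_perc * l_perc + w_sharp * l_sharp + w_ssim * l_ssim
```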
71. Huang Z, Liu X, Wang R, Chen Z, Yang Y, Liu X, Zheng H, Liang D, Hu Z. Learning a Deep CNN Denoising Approach Using Anatomical Prior Information Implemented With Attention Mechanism for Low-Dose CT Imaging on Clinical Patient Data From Multiple Anatomical Sites. IEEE J Biomed Health Inform 2021; 25:3416-3427. [PMID: 33625991] [DOI: 10.1109/jbhi.2021.3061758]
Abstract
Dose reduction in computed tomography (CT) has gained considerable attention in clinical applications because it decreases radiation risks. However, a lower dose generates noise in low-dose computed tomography (LDCT) images. Previous deep learning (DL)-based works have investigated ways to improve diagnostic performance to address this ill-posed problem. However, most of them disregard the anatomical differences among different human body sites when constructing the mapping function between LDCT images and their high-resolution normal-dose CT (NDCT) counterparts. In this article, we propose a novel deep convolutional neural network (CNN) denoising approach that introduces anatomical prior information. Instead of designing a separate network for each human body anatomical site, a unified network framework is employed to process the anatomical information. The anatomical prior is represented as a pattern of weights on the features extracted from the corresponding LDCT image in an anatomical prior fusion module. To promote diversity in the contextual information, a spatial attention fusion mechanism is introduced to capture many local regions of interest in the attention fusion module. Although many network parameters are saved, the experimental results demonstrate that our method, which incorporates anatomical prior information, is effective in denoising LDCT images. Furthermore, the anatomical prior fusion module can be conveniently integrated into other DL-based methods and improves their performance on data from multiple anatomical sites.
72. Mohammadinejad P, Mileto A, Yu L, Leng S, Guimaraes LS, Missert AD, Jensen CT, Gong H, McCollough CH, Fletcher JG. CT Noise-Reduction Methods for Lower-Dose Scanning: Strengths and Weaknesses of Iterative Reconstruction Algorithms and New Techniques. Radiographics 2021; 41:1493-1508. [PMID: 34469209] [DOI: 10.1148/rg.2021200196]
Abstract
Iterative reconstruction (IR) algorithms are the most widely used CT noise-reduction method to improve image quality and have greatly facilitated radiation dose reduction within the radiology community. Various IR methods have different strengths and limitations. Because IR algorithms are typically nonlinear, they can modify spatial resolution and image noise texture in different regions of the CT image; hence traditional image-quality metrics are not appropriate to assess the ability of IR to preserve diagnostic accuracy, especially for low-contrast diagnostic tasks. In this review, the authors highlight emerging IR algorithms and CT noise-reduction techniques and summarize how these techniques can be evaluated to help determine the appropriate radiation dose levels for different diagnostic tasks in CT. In addition to advanced IR techniques, we describe novel CT noise-reduction methods based on convolutional neural networks (CNNs). CNN-based noise-reduction techniques may offer the ability to reduce image noise while maintaining high levels of image detail but may have unique drawbacks. Other novel CT noise-reduction methods are being developed to leverage spatial and/or spectral redundancy in multiphase or multienergy CT. Radiologists and medical physicists should be familiar with these different alternatives to adapt available CT technology for different diagnostic tasks. The scope of this article is (a) to review the clinical applications of IR algorithms as well as their strengths, weaknesses, and methods of assessment and (b) to explore new CT image reconstruction and noise-reduction techniques that promise to facilitate radiation dose reduction. ©RSNA, 2021.
Affiliation(s)
- Payam Mohammadinejad, Achille Mileto, Lifeng Yu, Shuai Leng, Luis S Guimaraes, Andrew D Missert, Corey T Jensen, Hao Gong, Cynthia H McCollough, Joel G Fletcher
- From the Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN 55905 (P.M., L.Y., S.L., A.D.M., H.G., C.H.M., J.G.F.); Department of Radiology, Harborview Medical Center, Seattle, Wash (A.M.); Department of Medical Imaging, Toronto General Hospital, Toronto, ON, Canada (L.S.G.); and Department of Abdominal Imaging, University of Texas MD Anderson Cancer Center, Houston, Tex (C.T.J.)
73. Richardson ML, Garwood ER, Lee Y, Li MD, Lo HS, Nagaraju A, Nguyen XV, Probyn L, Rajiah P, Sin J, Wasnik AP, Xu K. Noninterpretive Uses of Artificial Intelligence in Radiology. Acad Radiol 2021; 28:1225-1235. [PMID: 32059956] [DOI: 10.1016/j.acra.2020.01.012]
Abstract
We deem a computer to exhibit artificial intelligence (AI) when it performs a task that would normally require intelligent action by a human. Much of the recent excitement about AI in the medical literature has revolved around the ability of AI models to recognize anatomy and detect pathology on medical images, sometimes at the level of expert physicians. However, AI can also be used to solve a wide range of noninterpretive problems that are relevant to radiologists and their patients. This review summarizes some of the newer noninterpretive uses of AI in radiology.
Affiliation(s)
- Elisabeth R Garwood, Department of Radiology, University of Massachusetts, Worcester, Massachusetts
- Yueh Lee, Department of Radiology, University of North Carolina, Chapel Hill, North Carolina
- Matthew D Li, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Boston, Massachusetts
- Hao S Lo, Department of Radiology, University of Washington, Seattle, Washington
- Arun Nagaraju, Department of Radiology, University of Chicago, Chicago, Illinois
- Xuan V Nguyen, Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Linda Probyn, Department of Radiology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario
- Prabhakar Rajiah, Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Texas
- Jessica Sin, Department of Radiology, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Ashish P Wasnik, Department of Radiology, University of Michigan, Ann Arbor, Michigan
- Kali Xu, Department of Medicine, Santa Clara Valley Medical Center, Santa Clara, California
74. Zhang C, Li Y, Chen GH. Accurate and robust sparse-view angle CT image reconstruction using deep learning and prior image constrained compressed sensing (DL-PICCS). Med Phys 2021; 48:5765-5781. [PMID: 34458996] [DOI: 10.1002/mp.15183]
Abstract
BACKGROUND Sparse-view CT image reconstruction problems encountered in dynamic CT acquisitions are technically challenging. Recently, many deep learning strategies have been proposed to reconstruct CT images from sparse-view angle acquisitions, showing promising results. However, two fundamental problems with these deep learning reconstruction methods remain to be addressed: (1) limited reconstruction accuracy for individual patients and (2) limited generalizability across patient statistical cohorts. PURPOSE The purpose of this work is to address these challenges in current deep learning methods. METHODS A method that combines a deep learning strategy with prior image constrained compressed sensing (PICCS) was developed to address these two problems. In this method, the sparse-view CT data are first reconstructed by the conventional filtered backprojection (FBP) method and then processed by the trained deep neural network to eliminate streaking artifacts. The outputs of the deep learning architecture are then used as the prior image needed in PICCS to reconstruct the image. If the noise level of the PICCS reconstruction is not satisfactory, another light-duty deep neural network can then be used to reduce the noise level. Both extensive numerical simulation data and human subject data have been used to quantitatively and qualitatively assess the performance of the proposed DL-PICCS method in terms of reconstruction accuracy and generalizability. RESULTS Extensive evaluation studies have demonstrated that: (1) the quantitative reconstruction accuracy of DL-PICCS for individual patients is improved compared with deep learning methods and CS-based methods; (2) the false-positive lesion-like structures and false-negative missing anatomical structures produced by the deep learning approaches can be effectively eliminated in the DL-PICCS reconstructed images; and (3) DL-PICCS enables a deep learning scheme to relax its working conditions and enhance its generalizability. CONCLUSIONS DL-PICCS offers a promising opportunity to achieve personalized reconstruction with improved reconstruction accuracy and enhanced generalizability.
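For reference, the PICCS step that DL-PICCS builds on can be written as a prior-image-regularized optimization in which the deep-learning output serves as the prior image x_p (generic notation, not necessarily the authors'):

```latex
\hat{x} \;=\; \arg\min_{x}\; \alpha \,\lVert \Psi (x - x_{p}) \rVert_{1}
          \;+\; (1-\alpha)\,\lVert \Psi x \rVert_{1}
\quad \text{subject to} \quad \lVert A x - y \rVert_{2}^{2} \le \varepsilon^{2}
```

where A is the sparse-view system matrix, y the measured projection data, Psi a sparsifying transform such as the spatial gradient, and alpha balances the prior-image term against conventional compressed sensing.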
Collapse
Affiliation(s)
- Chengzhu Zhang
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Yinsheng Li
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Guang-Hong Chen
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA.,Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| |
Collapse
|
75
|
Gu J, Yang TS, Ye JC, Yang DH. CycleGAN denoising of extreme low-dose cardiac CT using wavelet-assisted noise disentanglement. Med Image Anal 2021; 74:102209. [PMID: 34450466 DOI: 10.1016/j.media.2021.102209] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 08/02/2021] [Accepted: 08/04/2021] [Indexed: 11/18/2022]
Abstract
In electrocardiography (ECG) gated cardiac CT angiography (CCTA), multiple images covering the entire cardiac cycle are taken continuously, so reduction of the accumulated radiation dose could be an important issue for patient safety. Although ECG-gated dose modulation (so-called ECG pulsing) is used to acquire many phases of CT images at a low dose, the reduction of the radiation dose introduces noise into the image reconstruction. To address this, we developed a high performance unsupervised deep learning method using noise disentanglement that can effectively learn the noise patterns even from extreme low dose CT images. For noise disentanglement, we use a wavelet transform to extract the high-frequency signals that contain the most noise. Since matched low-dose and high-dose cardiac CT data are impossible to obtain in practice, our neural network was trained in an unsupervised manner using cycleGAN for the extracted high frequency signals from the low-dose and unpaired high-dose CT images. Once the network is trained, denoised images are obtained by subtracting the estimated noise components from the input images. Image quality evaluation of the denoised images from only 4% dose CT images was performed by experienced radiologists for several anatomical structures. Visual grading analysis was conducted according to the sharpness level, noise level, and structural visibility. Also, the signal-to-noise ratio was calculated. The evaluation results showed that the quality of the images produced by the proposed method is much improved compared to low-dose CT images and to the baseline cycleGAN results. The proposed noise-disentangled cycleGAN with wavelet transform effectively removed noise from extreme low-dose CT images compared to the existing baseline algorithms. It can be an important denoising platform for low-dose CT.
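As a rough illustration of the inference path described above (wavelet decomposition, noise estimation on the high-frequency subbands, subtraction, reconstruction), here is a hedged Python sketch using PyWavelets and PyTorch. TinyGenerator is only a stand-in for the trained cycleGAN generator, and the Haar wavelet choice is an assumption of this sketch.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder for the trained generator that estimates noise in HF subbands."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1))

    def forward(self, x):
        return self.net(x)

def denoise_slice(ld_slice, generator):
    """Estimate noise on the wavelet high-frequency subbands and subtract it."""
    cA, (cH, cV, cD) = pywt.dwt2(ld_slice, 'haar')
    hf = np.stack([cH, cV, cD])                        # 3 x H/2 x W/2 subbands
    with torch.no_grad():
        noise = generator(torch.from_numpy(hf).float()[None])[0].numpy()
    cH, cV, cD = hf - noise                            # denoised subbands
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')
```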
Collapse
Affiliation(s)
- Jawook Gu
- Bio Imaging, Signal Processing and Learning Laboratory, Department of Bio and Brain Engineering, KAIST, 291, Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea.
| | - Tae Seong Yang
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88, Olympic-ro 43-gil, Songpa-gu, Seoul 05505, Republic of Korea.
| | - Jong Chul Ye
- Bio Imaging, Signal Processing and Learning Laboratory, Department of Bio and Brain Engineering, KAIST, 291, Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea.
| | - Dong Hyun Yang
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88, Olympic-ro 43-gil, Songpa-gu, Seoul 05505, Republic of Korea.
| |
Collapse
|
76
|
Chen J, Zhang C, Traverso A, Zhovannik I, Dekker A, Wee L, Bermejo I. Generative models improve radiomics reproducibility in low dose CTs: a simulation study. Phys Med Biol 2021; 66. [PMID: 34289463 DOI: 10.1088/1361-6560/ac16c0] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Accepted: 07/21/2021] [Indexed: 11/12/2022]
Abstract
Radiomics is an active area of research in medical image analysis; however, poor reproducibility of radiomics has hampered its application in clinical practice. This issue is especially prominent when radiomic features are calculated from noisy images, such as low dose computed tomography (CT) scans. In this article, we investigate the possibility of improving the reproducibility of radiomic features calculated on noisy CTs by using generative models for denoising. Our work concerns two types of generative models: the encoder-decoder network (EDN) and the conditional generative adversarial network (CGAN). We then compared their performance against a more traditional 'non-local means' denoising algorithm. We added noise to sinograms of full dose CTs to mimic low dose CTs with two levels of noise: low-noise CT and high-noise CT. Models were trained on high-noise CTs and used to denoise low-noise CTs without re-training. We also tested the performance of our models on real data, using a dataset of same-day repeated low dose CTs, in order to assess the reproducibility of radiomic features in denoised images. The EDN and the CGAN achieved similar improvements in the concordance correlation coefficient (CCC) of radiomic features, for low-noise images from 0.87 [95%CI, (0.833, 0.901)] to 0.92 [95%CI, (0.909, 0.935)] and for high-noise images from 0.68 [95%CI, (0.617, 0.745)] to 0.92 [95%CI, (0.909, 0.936)], respectively. The EDN and the CGAN also improved the test-retest reliability of radiomic features (mean CCC increased from 0.89 [95%CI, (0.881, 0.914)] to 0.94 [95%CI, (0.927, 0.951)]) on real low dose CTs. These results show that denoising using EDNs and CGANs can improve the reproducibility of radiomic features calculated from noisy CTs. Moreover, images at different noise levels can be denoised with these models to improve reproducibility without re-training, provided the noise intensity is not excessively greater than that of the high-noise CTs. To the authors' knowledge, this is the first effort to improve the reproducibility of radiomic features calculated on low dose CT scans by applying generative models.
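The concordance correlation coefficient used throughout this study can be computed with a small helper following Lin's standard definition; the variable names below are illustrative.

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between paired feature vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# e.g. feature values from a first scan vs. the same-day repeat scan:
# ccc = concordance_cc(features_scan1, features_scan2)
```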
Collapse
Affiliation(s)
- Junhua Chen
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
| | - Chong Zhang
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
| | - Alberto Traverso
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
| | - Ivan Zhovannik
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands.,Department of Radiation Oncology, Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands
| | - Andre Dekker
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
| | - Leonard Wee
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
| | - Inigo Bermejo
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
| |
Collapse
|
77
|
Zhang Z, Liang X, Zhao W, Xing L. Noise2Context: Context-assisted learning 3D thin-layer for low-dose CT. Med Phys 2021; 48:5794-5803. [PMID: 34287948 DOI: 10.1002/mp.15119] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Revised: 05/31/2021] [Accepted: 07/08/2021] [Indexed: 12/26/2022] Open
Abstract
PURPOSE Computed tomography (CT) plays a vital role in medical diagnosis, assessment, and therapy planning. In clinical practice, concerns about increased x-ray radiation exposure are attracting more and more attention. To lower the x-ray radiation, low-dose CT (LDCT) has been widely adopted in certain scenarios, but it degrades CT image quality. In this paper, we propose a deep learning-based method that can train denoising neural networks without any clean data. METHODS In this work, for 3D thin-slice LDCT scanning, we first derive an unsupervised loss function that is equivalent to a supervised loss function with paired noisy and clean samples when the noise in the different slices from a single scan is uncorrelated and zero-mean. We then train the denoising neural network to map one noisy LDCT slice to its two adjacent LDCT slices from the same 3D thin-layer LDCT scan, simultaneously. In essence, under these assumptions, we propose an unsupervised loss function that exploits the similarity between adjacent CT slices in 3D thin-layer LDCT to train the denoising neural network in an unsupervised manner. RESULTS Experiments on the Mayo LDCT dataset and a realistic pig head were carried out. On the Mayo LDCT dataset, our unsupervised method obtains performance comparable to that of the supervised baseline. On the realistic pig head, our method achieves the best performance at different noise levels among all compared methods, demonstrating the superiority and robustness of the proposed Noise2Context. CONCLUSIONS In this work, we present a generalizable LDCT image denoising method that requires no clean data. As a result, our method dispenses not only with complex hand-crafted image priors but also with large amounts of paired high-quality training data.
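A minimal sketch of the Noise2Context training target, assuming a PyTorch denoiser `model` and three consecutive noisy slices from the same thin-layer scan; this is a plausible rendering of the idea, not the authors' code.

```python
import torch
import torch.nn.functional as F

def noise2context_loss(model, slice_prev, slice_mid, slice_next):
    """Train the denoiser by mapping one noisy slice toward both neighbours.
    With zero-mean noise that is uncorrelated across slices, this surrogate
    behaves like a supervised loss against the (unknown) clean slice."""
    pred = model(slice_mid)
    return F.mse_loss(pred, slice_prev) + F.mse_loss(pred, slice_next)

# usage inside a training loop (k indexes slices of one scan):
# loss = noise2context_loss(model, x[k - 1], x[k], x[k + 1]); loss.backward()
```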
Collapse
Affiliation(s)
- Zhicheng Zhang
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
| | - Xiaokun Liang
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
| | - Wei Zhao
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
| | - Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
| |
Collapse
|
78
|
Kulathilake KASH, Abdullah NA, Sabri AQM, Lai KW. A review on Deep Learning approaches for low-dose Computed Tomography restoration. COMPLEX INTELL SYST 2021; 9:2713-2745. [PMID: 34777967 PMCID: PMC8164834 DOI: 10.1007/s40747-021-00405-x] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Accepted: 05/18/2021] [Indexed: 02/08/2023]
Abstract
Computed Tomography (CT) is a widely used medical imaging modality in clinical medicine, because it produces excellent visualizations of fine structural details of the human body. In clinical procedures, it is desirable to acquire CT scans by minimizing the X-ray flux to prevent patients from being exposed to high radiation. However, these Low-Dose CT (LDCT) scanning protocols compromise the signal-to-noise ratio of the CT images because of noise and artifacts over the image space. Thus, various restoration methods have been published over the past three decades to produce high-quality CT images from these LDCT images. More recently, as opposed to conventional LDCT restoration methods, Deep Learning (DL)-based LDCT restoration approaches have become common due to their data-driven nature, high performance, and fast execution. Thus, this study aims to elaborate on the role of DL techniques in LDCT restoration and critically review the applications of DL-based approaches for LDCT restoration. To achieve this aim, different aspects of DL-based LDCT restoration applications were analyzed. These include DL architectures, performance gains, functional requirements, and the diversity of objective functions. The outcome of the study highlights the existing limitations and future directions for DL-based LDCT restoration. To the best of our knowledge, there have been no previous reviews that specifically address this topic.
Collapse
Affiliation(s)
- K. A. Saneera Hemantha Kulathilake
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
| | - Nor Aniza Abdullah
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
| | - Aznul Qalid Md Sabri
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
| | - Khin Wee Lai
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
| |
Collapse
|
79
|
Al-Masni MA, Kim DH. CMM-Net: Contextual multi-scale multi-level network for efficient biomedical image segmentation. Sci Rep 2021; 11:10191. [PMID: 33986375 PMCID: PMC8119726 DOI: 10.1038/s41598-021-89686-3] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2020] [Accepted: 04/26/2021] [Indexed: 01/20/2023] Open
Abstract
Medical image segmentation of tissue abnormalities, key organs, or blood vascular system is of great significance for any computerized diagnostic system. However, automatic segmentation in medical image analysis is a challenging task since it requires sophisticated knowledge of the target organ anatomy. This paper develops an end-to-end deep learning segmentation method called Contextual Multi-Scale Multi-Level Network (CMM-Net). The main idea is to fuse the global contextual features of multiple spatial scales at every contracting convolutional network level in the U-Net. Also, we re-exploit the dilated convolution module that enables an expansion of the receptive field with different rates depending on the size of feature maps throughout the networks. In addition, an augmented testing scheme referred to as Inversion Recovery (IR) which uses logical "OR" and "AND" operators is developed. The proposed segmentation network is evaluated on three medical imaging datasets, namely ISIC 2017 for skin lesions segmentation from dermoscopy images, DRIVE for retinal blood vessels segmentation from fundus images, and BraTS 2018 for brain gliomas segmentation from MR scans. The experimental results showed superior state-of-the-art performance with overall dice similarity coefficients of 85.78%, 80.27%, and 88.96% on the segmentation of skin lesions, retinal blood vessels, and brain tumors, respectively. The proposed CMM-Net is inherently general and could be efficiently applied as a robust tool for various medical image segmentations.
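A hedged sketch of the kind of multi-scale context block that could be fused at each U-Net level, implemented here with parallel dilated convolutions; the channel counts and dilation rates are assumptions of this sketch, not the published CMM-Net configuration.

```python
import torch
import torch.nn as nn

class MultiScaleContext(nn.Module):
    """Parallel dilated convolutions with different rates, concatenated and fused."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for r in rates)
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# feats = MultiScaleContext(64, 64)(torch.randn(1, 64, 128, 128))  # -> (1, 64, 128, 128)
```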
Collapse
Affiliation(s)
- Mohammed A Al-Masni
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea
| | - Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea.
| |
Collapse
|
80
|
Jung KJ, Mandija S, Kim JH, Ryu K, Jung S, Cui C, Kim SY, Park M, van den Berg CAT, Kim DH. Improving phase-based conductivity reconstruction by means of deep learning-based denoising of B1+ phase data for 3T MRI. Magn Reson Med 2021; 86:2084-2094. [PMID: 33949721 DOI: 10.1002/mrm.28826] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2020] [Revised: 03/28/2021] [Accepted: 04/13/2021] [Indexed: 12/24/2022]
Abstract
PURPOSE To denoise the B1+ phase using a deep learning method for phase-based in vivo electrical conductivity reconstruction in a 3T MR system. METHODS For B1+ phase deep-learning denoising, a convolutional neural network (U-net) was chosen. Training was performed on data sets from 10 healthy volunteers. Input data were the real and imaginary components of single averaged spin-echo data (SNR = 45), which were used to approximate the B1+ phase. For label data, multiple signal-averaged spin-echo data (SNR = 128) were used. Testing was performed on in silico and in vivo data. Conductivity maps were derived using phase-based conductivity reconstruction. Additionally, we investigated the applicability of the network to various SNR levels, imaging contrasts, and anatomical sites (i.e., T1-, T2-, and proton density-weighted brain images and proton density-weighted breast images). In addition, conductivity reconstructions from deep learning-denoised data were compared with those from conventional image filters used for data denoising in electrical properties tomography (i.e., Gaussian filtering and Savitzky-Golay filtering). RESULTS The proposed deep learning-based denoising approach improved the B1+ phase in both in silico and in vivo experiments, with reduced quantitative error measures compared with the other methods. Subsequently, this resulted in improved conductivity maps reconstructed from the deep learning-denoised B1+ phase. CONCLUSION The results suggest that the proposed approach can be used as an alternative preprocessing method to denoise B1+ maps for phase-based conductivity reconstruction without relying on image filters or signal averaging.
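For orientation, phase-based electrical properties tomography commonly approximates conductivity from the Laplacian of the transceive phase, sigma ~ laplacian(phi) / (2 mu0 omega). The sketch below assumes this convention, a 3 T Larmor frequency of roughly 128 MHz, and isotropic voxels; none of these details are taken from the paper, and the factor of two depends on whether the transceive or the B1+ phase is used.

```python
import numpy as np

MU0 = 4e-7 * np.pi            # vacuum permeability (H/m)
OMEGA = 2 * np.pi * 128e6     # Larmor angular frequency at ~3 T (rad/s), assumed

def phase_based_conductivity(phase, voxel_m=1e-3):
    """sigma ~ laplacian(phi) / (2 * mu0 * omega); phase in radians, voxel size in m."""
    lap = np.zeros_like(phase, dtype=float)
    for ax in range(phase.ndim):
        lap += (np.roll(phase, 1, axis=ax) - 2 * phase
                + np.roll(phase, -1, axis=ax)) / voxel_m ** 2
    return lap / (2 * MU0 * OMEGA)

# sigma_map = phase_based_conductivity(denoised_transceive_phase)  # S/m, hypothetical input
```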
Collapse
Affiliation(s)
- Kyu-Jin Jung
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
| | - Stefano Mandija
- Computational Imaging Group for MR Diagnostic & Therapy, Center for Image Sciences, University Medical Center Utrecht, Utrecht, the Netherlands.,Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Utrecht, the Netherlands
| | - Jun-Hyeong Kim
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
| | - Kanghyun Ryu
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea.,Department of Radiology, Stanford University, Stanford, California, USA
| | - Soozy Jung
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
| | - Chuanjiang Cui
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
| | - Soo-Yeon Kim
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
| | - Mina Park
- Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
| | - Cornelis A T van den Berg
- Computational Imaging Group for MR Diagnostic & Therapy, Center for Image Sciences, University Medical Center Utrecht, Utrecht, the Netherlands.,Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Utrecht, the Netherlands
| | - Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
| |
Collapse
|
81
|
Wang X, Zheng F, Xiao R, Liu Z, Li Y, Li J, Zhang X, Hao X, Zhang X, Guo J, Zhang Y, Xue H, Jin Z. Comparison of image quality and lesion diagnosis in abdominopelvic unenhanced CT between reduced-dose CT using deep learning post-processing and standard-dose CT using iterative reconstruction: A prospective study. Eur J Radiol 2021; 139:109735. [PMID: 33932717 DOI: 10.1016/j.ejrad.2021.109735] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Revised: 04/06/2021] [Accepted: 04/19/2021] [Indexed: 11/24/2022]
Abstract
PURPOSE To compare image quality and lesion diagnosis between reduced-dose abdominopelvic unenhanced computed tomography (CT) using deep learning (DL) post-processing and standard-dose CT using iterative reconstruction (IR). METHOD In total, 251 patients underwent two consecutive abdominopelvic unenhanced CT scans of the same range, at standard and reduced doses, respectively. In group A, standard-dose data were reconstructed by (blend 30 %) IR. In group B, reduced-dose data were reconstructed by filtered back projection to obtain group B1 images, and post-processed using the DL algorithm (NeuAI denoising, Neusoft Medical, Shenyang, China) with 50 % and 100 % weights to obtain group B2 and B3 images, respectively. Then, CT values of the liver, the second lumbar vertebral centrum, the erector spinae, and abdominal subcutaneous fat were measured. CT values, noise levels, signal-to-noise ratios (SNRs), contrast-to-noise ratios (CNRs), radiation doses, and subjective scores of image quality were compared. Subjective evaluations of low-density liver lesions were compared against diagnostic results from enhanced CT or magnetic resonance imaging. RESULTS Groups B3 and B1 showed the lowest and highest noise levels, respectively (P < 0.001). The SNR and CNR in group B3 were highest (P < 0.001). The radiation dose in group B was reduced by 71.5 % on average compared to group A. Subjective scores in groups A and B2 were highest (P < 0.001). Diagnostic sensitivity and confidence for liver metastases in groups A and B2 were highest (P < 0.001). CONCLUSIONS Reduced-dose abdominopelvic unenhanced CT combined with DL post-processing could ensure image quality and satisfy diagnostic needs.
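The objective measures reported above can be obtained from simple ROI statistics; a short sketch follows, with the CNR definition (difference of ROI means over background noise) stated explicitly because conventions vary. The ROI masks are hypothetical inputs.

```python
import numpy as np

def roi_stats(image, mask):
    """Mean and standard deviation of the pixel values inside a boolean ROI mask."""
    vals = image[mask]
    return vals.mean(), vals.std()

def snr_cnr(image, tissue_mask, background_mask):
    """SNR = mean/std within the tissue ROI; CNR = |mean difference| / background std."""
    m_t, s_t = roi_stats(image, tissue_mask)
    m_b, s_b = roi_stats(image, background_mask)
    return m_t / s_t, abs(m_t - m_b) / s_b
```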
Collapse
Affiliation(s)
- Xiao Wang
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Fuling Zheng
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Ran Xiao
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Zhuoheng Liu
- From CT Business Unit, Neusoft Medical System Company, Shenyang, China
| | - Yutong Li
- From CT Business Unit, Neusoft Medical System Company, Shenyang, China
| | - Juan Li
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Xi Zhang
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Xuemin Hao
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Xinhu Zhang
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Jiawu Guo
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Yan Zhang
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Huadan Xue
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China.
| | - Zhengyu Jin
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China.
| |
Collapse
|
82
|
Deep learning-based denoising algorithm in comparison to iterative reconstruction and filtered back projection: a 12-reader phantom study. Eur Radiol 2021; 31:8755-8764. [PMID: 33885958 DOI: 10.1007/s00330-021-07810-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Revised: 01/02/2021] [Accepted: 02/17/2021] [Indexed: 10/21/2022]
Abstract
OBJECTIVES (1) To compare low-contrast detectability of a deep learning-based denoising algorithm (DLA) with ADMIRE and FBP, and (2) to compare image quality parameters of DLA with those of reconstruction methods from two different CT vendors (ADMIRE, IMR, and FBP). MATERIALS AND METHODS Using abdominal CT images of 100 patients reconstructed via ADMIRE and FBP, we trained DLA by feeding FBP images as input and ADMIRE images as the ground truth. To measure the low-contrast detectability, the randomized repeat scans of Catphan® phantom were performed under various conditions of radiation exposures. Twelve radiologists evaluated the presence/absence of a target on a five-point confidence scale. The multi-reader multi-case area under the receiver operating characteristic curve (AUC) was calculated, and non-inferiority tests were performed. Using American College of Radiology CT accreditation phantom, contrast-to-noise ratio, target transfer function, noise magnitude, and detectability index (d') of DLA, ADMIRE, IMR, and FBPs were computed. RESULTS The AUC of DLA in low-contrast detectability was non-inferior to that of ADMIRE (p < .001) and superior to that of FBP (p < .001). DLA improved the image quality in terms of all physical measurements compared to FBPs from both CT vendors and showed profiles of physical measurements similar to those of ADMIRE. CONCLUSIONS The low-contrast detectability of the proposed deep learning-based denoising algorithm was non-inferior to that of ADMIRE and superior to that of FBP. The DLA could successfully improve image quality compared with FBP while showing the similar physical profiles of ADMIRE. KEY POINTS • Low-contrast detectability in the images denoised using the deep learning algorithm was non-inferior to that in the images reconstructed using standard algorithms. • The proposed deep learning algorithm showed similar profiles of physical measurements to advanced iterative reconstruction algorithm (ADMIRE).
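Under the equal-variance Gaussian observer assumption, the detectability index relates to the area under the ROC curve as d' = sqrt(2) * inverse-normal(AUC); the small helper below is offered only as a reading aid, not as the study's exact computation.

```python
import numpy as np
from scipy.stats import norm

def detectability_from_auc(auc):
    """d' = sqrt(2) * Phi^-1(AUC), equal-variance Gaussian observer assumption."""
    return np.sqrt(2) * norm.ppf(auc)

# detectability_from_auc(0.85)  # ~1.47
```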
Collapse
|
83
|
Weakly-supervised progressive denoising with unpaired CT images. Med Image Anal 2021; 71:102065. [PMID: 33915472 DOI: 10.1016/j.media.2021.102065] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 03/16/2021] [Accepted: 03/30/2021] [Indexed: 12/12/2022]
Abstract
Although low-dose CT imaging has attracted great interest due to its reduced radiation risk to patients, it suffers from severe and complex noise. Recent fully-supervised methods have shown impressive performance on the CT denoising task. However, they require a huge amount of paired normal-dose and low-dose CT images, which is generally unavailable in real clinical practice. To address this problem, we propose a weakly-supervised denoising framework that generates paired original and noisier CT images from unpaired CT images using a physics-based noise model. Our denoising framework also includes a progressive denoising module that bypasses the challenge of mapping low-dose to normal-dose CT images directly by progressively compensating the small noise gap. To quantitatively evaluate diagnostic image quality, we report the noise power spectrum and signal detection accuracy, which correlate well with visual inspection. The experimental results demonstrate that our method achieves remarkable performance, even superior to fully-supervised CT denoising with respect to signal detectability. Moreover, our framework increases flexibility in data collection, allowing us to utilize unpaired data at any dose level.
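A common physics-based way to create a "noisier" counterpart of a CT acquisition is to inject Poisson noise in the projection domain at a lower simulated photon count. The sketch below illustrates that general idea only; the incident photon number and the log/exp conversion are assumptions, not the paper's exact noise model.

```python
import numpy as np

def add_poisson_noise(sinogram, incident_photons=1e4, rng=None):
    """Simulate a lower-dose acquisition: attenuate an assumed incident flux,
    draw Poisson counts, and convert back to line integrals."""
    rng = np.random.default_rng() if rng is None else rng
    counts = incident_photons * np.exp(-sinogram)      # expected detected photons
    noisy_counts = rng.poisson(counts).clip(min=1)     # avoid log(0)
    return -np.log(noisy_counts / incident_photons)
```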
Collapse
|
84
|
Estimating dual-energy CT imaging from single-energy CT data with material decomposition convolutional neural network. Med Image Anal 2021; 70:102001. [PMID: 33640721 DOI: 10.1016/j.media.2021.102001] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Revised: 02/06/2021] [Accepted: 02/11/2021] [Indexed: 01/12/2023]
Abstract
Dual-energy computed tomography (DECT) is of great significance for clinical practice due to its huge potential to provide material-specific information. However, DECT scanners are usually more expensive than standard single-energy CT (SECT) scanners and thus are less accessible to undeveloped regions. In this paper, we show that the energy-domain correlation and anatomical consistency between standard DECT images can be harnessed by a deep learning model to provide high-performance DECT imaging from fully-sampled low-energy data together with single-view high-energy data. We demonstrate the feasibility of the approach with two independent cohorts (the first cohort including contrast-enhanced DECT scans of 5753 image slices from 22 patients and the second cohort including spectral CT scans without contrast injection of 2463 image slices from other 22 patients) and show its superior performance on DECT applications. The deep-learning-based approach could be useful to further significantly reduce the radiation dose of current premium DECT scanners and has the potential to simplify the hardware of DECT imaging systems and to enable DECT imaging using standard SECT scanners.
Collapse
|
85
|
Han Y, Jang J, Cha E, Lee J, Chung H, Jeong M, Kim TG, Chae BG, Kim HG, Jun S, Hwang S, Lee E, Ye JC. Deep learning STEM-EDX tomography of nanocrystals. NAT MACH INTELL 2021. [DOI: 10.1038/s42256-020-00289-5] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
|
86
|
Hu D, Liu J, Lv T, Zhao Q, Zhang Y, Quan G, Feng J, Chen Y, Luo L. Hybrid-Domain Neural Network Processing for Sparse-View CT Reconstruction. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.3011413] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
87
|
Whiteley W, Panin V, Zhou C, Cabello J, Bharkhada D, Gregor J. FastPET: Near Real-Time Reconstruction of PET Histo-Image Data Using a Neural Network. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.3028364] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
88
|
Wang T, Lei Y, Fu Y, Wynne JF, Curran WJ, Liu T, Yang X. A review on medical imaging synthesis using deep learning and its clinical applications. J Appl Clin Med Phys 2021; 22:11-36. [PMID: 33305538 PMCID: PMC7856512 DOI: 10.1002/acm2.13121] [Citation(s) in RCA: 126] [Impact Index Per Article: 31.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Revised: 11/12/2020] [Accepted: 11/21/2020] [Indexed: 02/06/2023] Open
Abstract
This paper reviewed the deep learning-based studies for medical imaging synthesis and its clinical application. Specifically, we summarized the recent developments of deep learning-based methods in inter- and intra-modality image synthesis by listing and highlighting the proposed methods, study designs, and reported performances with related clinical applications on representative studies. The challenges among the reviewed studies were then summarized with discussion.
Collapse
Affiliation(s)
- Tonghe Wang
- Department of Radiation OncologyEmory UniversityAtlantaGAUSA
- Winship Cancer InstituteEmory UniversityAtlantaGAUSA
| | - Yang Lei
- Department of Radiation OncologyEmory UniversityAtlantaGAUSA
| | - Yabo Fu
- Department of Radiation OncologyEmory UniversityAtlantaGAUSA
| | - Jacob F. Wynne
- Department of Radiation OncologyEmory UniversityAtlantaGAUSA
| | - Walter J. Curran
- Department of Radiation OncologyEmory UniversityAtlantaGAUSA
- Winship Cancer InstituteEmory UniversityAtlantaGAUSA
| | - Tian Liu
- Department of Radiation OncologyEmory UniversityAtlantaGAUSA
- Winship Cancer InstituteEmory UniversityAtlantaGAUSA
| | - Xiaofeng Yang
- Department of Radiation OncologyEmory UniversityAtlantaGAUSA
- Winship Cancer InstituteEmory UniversityAtlantaGAUSA
| |
Collapse
|
89
|
|
90
|
|
91
|
Sparse-view CT reconstruction based on multi-level wavelet convolution neural network. Phys Med 2020; 80:352-362. [PMID: 33279829 DOI: 10.1016/j.ejmp.2020.11.021] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/10/2020] [Revised: 10/15/2020] [Accepted: 11/14/2020] [Indexed: 11/20/2022] Open
Abstract
Sparse-view computed tomography (CT) is a recent approach to reducing the radiation dose in patients and speeding up data acquisition. Consequently, sparse-view CT has been of particular interest among researchers within the CT community. Advanced reconstruction algorithms for sparse-view CT, such as iterative algorithms with total variation (TV), have been studied, but they suffer from an increased computational burden and blurring artifacts in the reconstructed images. Deep-learning-based approaches applying U-NET have recently achieved remarkable outcomes in various domains, including low-dose CT. In this study, we propose a new method for sparse-view CT reconstruction based on a multi-level wavelet convolutional neural network (MWCNN). First, filtered backprojection (FBP) was used to reconstruct images from sparsely sampled sinograms of 60, 120, and 180 projections. Subsequently, the sparse-view FBP reconstructions were fed to a deep-learning network, i.e., the MWCNN. Our network architecture combines a wavelet transform and a modified U-NET without pooling. By replacing the pooling function with the wavelet transform, the receptive field is enlarged to improve performance. We qualitatively and quantitatively compared the proposed method with interpolation, the iterative TV method, and the standard U-NET in terms of the reduction of streaking artifacts and the preservation of anatomical structures. When compared with the other methods, the proposed method showed the highest performance on various evaluation parameters such as structural similarity, root mean square error, and resolution. These results indicate that the MWCNN possesses powerful potential for sparse-view CT reconstruction.
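Replacing pooling with a discrete wavelet transform keeps the downsampling invertible while enlarging the receptive field. Below is a hedged PyTorch sketch of a 2-D Haar DWT implemented as fixed stride-2 convolutions; the channel layout (four subbands per input channel) is an assumption of this sketch, not necessarily the MWCNN's exact arrangement.

```python
import torch
import torch.nn.functional as F

def haar_dwt(x):
    """2-D Haar DWT as a pooling substitute: (N, C, H, W) -> (N, 4C, H/2, W/2).
    Outputs the LL, LH, HL, HH subbands of every input channel."""
    ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
    lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
    hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    kernels = torch.stack([ll, lh, hl, hh])[:, None]   # (4, 1, 2, 2)
    n, c, h, w = x.shape
    out = F.conv2d(x.reshape(n * c, 1, h, w), kernels.to(x), stride=2)
    return out.reshape(n, c * 4, h // 2, w // 2)

# y = haar_dwt(torch.randn(1, 64, 128, 128))   # -> (1, 256, 64, 64)
```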
Collapse
|
92
|
Wang S, Wu W, Feng J, Liu F, Yu H. Low-dose spectral CT reconstruction based on image-gradient L0-norm and adaptive spectral PICCS. Phys Med Biol 2020; 65:245005. [PMID: 32693399 DOI: 10.1088/1361-6560/aba7cf] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
Photon-counting-detector-based spectral computed tomography (CT) is promising for lesion detection, tissue characterization, and material decomposition. However, the low signal-to-noise ratio of the multi-energy projection dataset can result in poor reconstructed image quality. Recently, as prior information, a high-quality spectral mean image was introduced into the prior image constrained compressed sensing (PICCS) framework to suppress noise, leading to spectral PICCS (SPICCS). In the original SPICCS model, the image-gradient L1-norm is employed, which can blur edge structures in the reconstructed images. Encouraged by its advantages in edge preservation and fine-structure recovery, the image-gradient L0-norm was incorporated into the PICCS model. Furthermore, because the energy spectrum differs across channels, a weighting factor is introduced and adaptively adjusted for the different channel-wise images, leading to an L0-norm based adaptive SPICCS (L0-ASPICCS) algorithm for low-dose spectral CT reconstruction. The split-Bregman method is employed to minimize the objective function. Extensive numerical simulations and physical phantom experiments were performed to evaluate the proposed method. Comparisons with state-of-the-art algorithms, such as the simultaneous algebraic reconstruction technique, total variation minimization, and SPICCS, demonstrate the advantages of the proposed method in terms of both qualitative and quantitative evaluation results.
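Within a split-Bregman loop, the auxiliary gradient variable is updated by a shrinkage (proximal) step: soft-thresholding for an L1 gradient penalty versus hard-thresholding for an L0 penalty. A minimal sketch of the two updates follows; the threshold tau stands in for the combination of regularization and penalty parameters and is an assumption of this sketch.

```python
import numpy as np

def soft_shrink(d, tau):
    """Proximal step for an L1 gradient penalty (classic split-Bregman shrinkage)."""
    return np.sign(d) * np.maximum(np.abs(d) - tau, 0.0)

def hard_shrink(d, tau):
    """Proximal step for an L0 gradient penalty: keep only values above the threshold."""
    return np.where(np.abs(d) > tau, d, 0.0)
```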
Collapse
Affiliation(s)
- Shaoyu Wang
- Key Lab of Optoelectronic Technology and Systems, Ministry of Education, Chongqing University, Chongqing 400044, People's Republic of China. Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA 01854, United States of America. Engineering Research Center of Industrial Computed Tomography Nondestructive Testing, Ministry of Education, Chongqing University, Chongqing 400044, People's Republic of China
Collapse
|
93
|
Fu Z, Tseng HW, Vedantham S, Karellas A, Bilgin A. A residual dense network assisted sparse view reconstruction for breast computed tomography. Sci Rep 2020; 10:21111. [PMID: 33273541 PMCID: PMC7713379 DOI: 10.1038/s41598-020-77923-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2020] [Accepted: 11/18/2020] [Indexed: 12/24/2022] Open
Abstract
To develop and investigate a deep learning approach that uses sparse-view acquisition in dedicated breast computed tomography for radiation dose reduction, we propose a framework that combines 3D sparse-view cone-beam acquisition with a multi-slice residual dense network (MS-RDN) reconstruction. Projection datasets (300 views, full-scan) from 34 women were reconstructed using the FDK algorithm and served as reference. Sparse-view (100 views, full-scan) projection data were reconstructed using the FDK algorithm. The proposed MS-RDN uses the sparse-view and reference FDK reconstructions as input and label, respectively. Evaluated against the fully sampled FDK reference, our MS-RDN yields superior performance, both quantitatively and visually, compared to conventional compressed sensing methods and state-of-the-art deep learning-based methods. The proposed deep learning-driven framework can potentially enable low dose breast CT imaging.
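A residual dense block, the usual building block of an RDN, stacks densely connected convolutions, fuses them with a 1x1 convolution, and adds a local residual. The PyTorch sketch below uses an assumed growth rate and depth and is not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Dense connections within the block, 1x1 local feature fusion, local residual."""
    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1)
            for i in range(layers))
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(self.act(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))
```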
Collapse
Affiliation(s)
- Zhiyang Fu
- Department of Medical Imaging, University of Arizona, Tucson, AZ, USA.,Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA
| | - Hsin Wu Tseng
- Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
| | - Srinivasan Vedantham
- Department of Medical Imaging, University of Arizona, Tucson, AZ, USA.,Department of Biomedical Engineering, University of Arizona, Tucson, AZ, USA
| | - Andrew Karellas
- Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
| | - Ali Bilgin
- Department of Medical Imaging, University of Arizona, Tucson, AZ, USA. .,Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA. .,Department of Biomedical Engineering, University of Arizona, Tucson, AZ, USA.
| |
Collapse
|
94
|
Luo Y, Majoe S, Kui J, Qi H, Pushparajah K, Rhode K. Ultra-Dense Denoising Network: Application to Cardiac Catheter-Based X-Ray Procedures. IEEE Trans Biomed Eng 2020; 68:2626-2636. [PMID: 33259291 DOI: 10.1109/tbme.2020.3041571] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Reducing radiation dose in cardiac catheter-based X-ray procedures increases safety but also increases image noise and artifacts. Excessive noise and artifacts can compromise vital image information, which can affect clinical decision-making. Developing more effective X-ray denoising methodologies will benefit both patients and healthcare professionals by allowing imaging at lower radiation dose without compromising image information. This paper proposes a framework based on a convolutional neural network (CNN), namely the Ultra-Dense Denoising Network (UDDN), for low-dose X-ray image denoising. To promote feature extraction, we designed a novel residual block which establishes a solid correlation among multiple-path neural units via abundant cross connections in its representation enhancement section. Experiments on X-ray data with synthetic additive noise show that the UDDN achieves statistically significantly higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) than the other comparative methods. We enhanced the clinical adaptability of our framework by training with normally distributed noise and testing on clinical data taken from procedures at St. Thomas' hospital in London. The performance was assessed using local SNR and by clinical voting by ten cardiologists. The results show that the UDDN outperforms the other comparative methods and is a promising solution to this challenging but clinically impactful task.
Collapse
|
95
|
Lu J, Millioz F, Garcia D, Salles S, Liu W, Friboulet D. Reconstruction for Diverging-Wave Imaging Using Deep Convolutional Neural Networks. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2020; 67:2481-2492. [PMID: 32286972 DOI: 10.1109/tuffc.2020.2986166] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
In recent years, diverging wave (DW) ultrasound imaging has become a very promising methodology for cardiovascular imaging due to its high temporal resolution. However, if they are limited in number, DW transmits provide lower image quality compared with classical focused schemes. A conventional reconstruction approach consists in summing series of ultrasound signals coherently, at the expense of frame rate, data volume, and computation time. To deal with this limitation, we propose a convolutional neural network (CNN) architecture, Inception for DW Network (IDNet), for high-quality reconstruction of DW ultrasound images using a small number of transmissions. In order to cope with the specificities induced by the sectorial geometry associated with DW imaging, we adopted the inception model composed of the concatenation of multiscale convolution kernels. Incorporating inception modules aims at capturing different image features with multiscale receptive fields. A mapping between low-quality images and corresponding high-quality compounded reconstruction was learned by training the network using in vitro and in vivo samples. The performance of the proposed approach was evaluated in terms of contrast ratio (CR), contrast-to-noise ratio (CNR), and lateral resolution (LR), and compared with standard compounding method and conventional CNN methods. The results demonstrated that our method could produce high-quality images using only 3 DWs, yielding an image quality equivalent to that obtained with compounding of 31 DWs and outperforming more conventional CNN architectures in terms of complexity, inference time, and image quality.
Collapse
|
96
|
Zhao C, Martin T, Shao X, Alger JR, Duddalwar V, Wang DJJ. Low Dose CT Perfusion With K-Space Weighted Image Average (KWIA). IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3879-3890. [PMID: 32746131 PMCID: PMC7704693 DOI: 10.1109/tmi.2020.3006461] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
CTP (Computed Tomography Perfusion) is widely used in clinical practice for the evaluation of cerebrovascular disorders. However, CTP involves a high radiation dose (≥~200 mGy), as the X-ray source remains continuously on during the passage of the contrast media. The purpose of this study is to present a low dose CTP technique termed K-space Weighted Image Average (KWIA) using a novel projection view-shared averaging algorithm with reduced tube current. KWIA takes advantage of the k-space signal property that image contrast is primarily determined by the k-space center, which contains the low spatial frequencies and oversampled projections. KWIA divides the 2D Fourier transform (FT), or k-space, of the CTP data into multiple rings. The outer rings are averaged with neighboring time frames to achieve adequate signal-to-noise ratio (SNR), while the center region of k-space remains unchanged to preserve high temporal resolution. Reduced-dose sinogram data were simulated by adding zero-mean Poisson-distributed noise to a digital phantom and clinical CTP scans. A physical CTP phantom study was also performed with different X-ray tube currents. The sinogram data with simulated and real low doses were then reconstructed with KWIA and compared with those reconstructed by standard filtered back projection (FBP) and simultaneous algebraic reconstruction with total-variation regularization (SART-TV). Evaluation of image quality and perfusion metrics using parameters including SNR, CNR (contrast-to-noise ratio), AUC (area-under-the-curve), and CBF (cerebral blood flow) demonstrated that KWIA is able to preserve the image quality, spatial and temporal resolution, and accuracy of perfusion quantification of CTP scans with considerable (50-75%) dose savings.
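A simplified rendering of the KWIA idea follows: keep the central disc of each frame's 2-D FFT untouched and average progressively wider rings over progressively more neighbouring time frames. For simplicity this sketch operates on reconstructed image frames rather than raw sinogram data, and the ring radii and temporal widths are assumptions, not the paper's settings.

```python
import numpy as np

def kwia_frame(frames, t, radii=(0.15, 0.35, 0.6), widths=(1, 2, 3)):
    """Rebuild frame t of a (T, H, W) series: the k-space centre is kept as-is,
    each outer ring is averaged over +/- widths[i] neighbouring frames
    (radii given as fractions of the Nyquist radius)."""
    h, w = frames.shape[1:]
    ky, kx = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing='ij')
    r = np.sqrt(kx ** 2 + ky ** 2)
    ks = np.fft.fftshift(np.fft.fft2(frames, axes=(-2, -1)), axes=(-2, -1))
    out = np.zeros((h, w), complex)
    lo = 0.0
    for hi_r, width in zip(radii + (np.inf,), (0,) + widths):
        ring = (r >= lo) & (r < hi_r)
        sl = slice(max(t - width, 0), min(t + width + 1, len(frames)))
        out[ring] = ks[sl].mean(axis=0)[ring]
        lo = hi_r
    return np.abs(np.fft.ifft2(np.fft.ifftshift(out)))
```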
Collapse
|
97
|
Kawahara D, Saito A, Ozawa S, Nagata Y. Image synthesis with deep convolutional generative adversarial networks for material decomposition in dual-energy CT from a kilovoltage CT. Comput Biol Med 2020; 128:104111. [PMID: 33279790 DOI: 10.1016/j.compbiomed.2020.104111] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Revised: 10/17/2020] [Accepted: 11/05/2020] [Indexed: 02/04/2023]
Abstract
Generative Adversarial Networks (GANs) have been widely used and are expected to be applied to clinical examinations and imaging. The objective of the current study was to synthesize material decomposition images of bone-water (bone(water)) and fat-water (fat(water)) reconstructed from dual-energy computed tomography (DECT) using an equivalent kilovoltage-CT (kV-CT) image and a deep conditional GAN. The effective atomic number images were reconstructed using DECT. We used 18,084 images of 28 patients divided into two datasets: the training data for the model included 16,146 images (20 patients) and the test data for evaluation included 1,938 images (8 patients). An image-prediction framework from the equivalent single-energy CT images at 120 kVp to the effective atomic number images was created. The image-synthesis framework was based on a CNN with a generator and discriminator. The mean absolute error (MAE), relative mean square error (MSE), relative root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mutual information (MI) were evaluated. The Hounsfield unit (HU) differences between the synthesized and reference material decomposition images of bone(water) and fat(water) were within 5.3 HU and 20.3 HU, respectively. The average MAE, MSE, RMSE, SSIM, PSNR, and MI of the synthesized versus reference material decomposition bone(water) images were 0.8, 1.3, 0.9, 0.9, 55.3, and 0.8, respectively, and those of the fat(water) images were 0.0, 0.0, 0.1, 0.9, 72.1, and 1.4, respectively. The proposed model can act as a suitable alternative to existing methods for reconstructing DECT material decomposition images of bone(water) and fat(water) from kV-CT.
Collapse
Affiliation(s)
- Daisuke Kawahara
- Department of Radiation Oncology, Institute of Biomedical & Health Sciences, Hiroshima University, Hiroshima, Hiroshima, 734-8551, Japan.
| | - Akito Saito
- Department of Radiation Oncology, Institute of Biomedical & Health Sciences, Hiroshima University, Hiroshima, Hiroshima, 734-8551, Japan
| | - Shuichi Ozawa
- Department of Radiation Oncology, Institute of Biomedical & Health Sciences, Hiroshima University, Hiroshima, Hiroshima, 734-8551, Japan
| | - Yasushi Nagata
- Department of Radiation Oncology, Institute of Biomedical & Health Sciences, Hiroshima University, Hiroshima, Hiroshima, 734-8551, Japan; Hiroshima High-Precision Radiotherapy Cancer Center, Hiroshima, 732-0057, Japan
| |
Collapse
|
98
|
Yuan N, Zhou J, Qi J. Half2Half: deep neural network based CT image denoising without independent reference data. Phys Med Biol 2020; 65:215020. [DOI: 10.1088/1361-6560/aba939] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
|
99
|
Han Y, Kim J, Ye JC. Differentiated Backprojection Domain Deep Learning for Conebeam Artifact Removal. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3571-3582. [PMID: 32746105 DOI: 10.1109/tmi.2020.3000341] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Conebeam CT using a circular trajectory is quite often used for various applications due to its relatively simple geometry. For conebeam geometry, the Feldkamp, Davis, and Kress algorithm is regarded as the standard reconstruction method, but this algorithm suffers from so-called conebeam artifacts as the cone angle increases. Various model-based iterative reconstruction methods have been developed to reduce the conebeam artifacts, but these algorithms usually require multiple applications of computationally expensive forward and backprojections. In this paper, we develop a novel deep learning approach for accurate conebeam artifact removal. In particular, our deep network, designed on the differentiated backprojection domain, performs a data-driven inversion of an ill-posed deconvolution problem associated with the Hilbert transform. The reconstruction results along the coronal and sagittal directions are then combined using a spectral blending technique to minimize spectral leakage. Experimental results under various conditions confirmed that our method generalizes well and outperforms existing iterative methods despite significantly reduced runtime complexity.
Collapse
|
100
|
Shin YJ, Chang W, Ye JC, Kang E, Oh DY, Lee YJ, Park JH, Kim YH. Low-Dose Abdominal CT Using a Deep Learning-Based Denoising Algorithm: A Comparison with CT Reconstructed with Filtered Back Projection or Iterative Reconstruction Algorithm. Korean J Radiol 2020; 21:356-364. [PMID: 32090528 PMCID: PMC7039719 DOI: 10.3348/kjr.2019.0413] [Citation(s) in RCA: 48] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2019] [Accepted: 11/07/2019] [Indexed: 12/30/2022] Open
Abstract
Objective To compare the image quality of low-dose (LD) computed tomography (CT) obtained using a deep learning-based denoising algorithm (DLA) with LD CT images reconstructed with a filtered back projection (FBP) and advanced modeled iterative reconstruction (ADMIRE). Materials and Methods One hundred routine-dose (RD) abdominal CT studies reconstructed using FBP were used to train the DLA. Simulated CT images were made at dose levels of 13%, 25%, and 50% of the RD (DLA-1, -2, and -3) and reconstructed using FBP. We trained DLAs using the simulated CT images as input data and the RD CT images as ground truth. To test the DLA, the American College of Radiology CT phantom was used together with 18 patients who underwent abdominal LD CT. LD CT images of the phantom and patients were processed using FBP, ADMIRE, and DLAs (LD-FBP, LD-ADMIRE, and LD-DLA images, respectively). To compare the image quality, we measured the noise power spectrum and modulation transfer function (MTF) of phantom images. For patient data, we measured the mean image noise and performed qualitative image analysis. We evaluated the presence of additional artifacts in the LD-DLA images. Results LD-DLAs achieved lower noise levels than LD-FBP and LD-ADMIRE for both phantom and patient data (all p < 0.001). LD-DLAs trained with a lower radiation dose showed less image noise. However, the MTFs of the LD-DLAs were lower than those of LD-ADMIRE and LD-FBP (all p < 0.001) and decreased with decreasing training image dose. In the qualitative image analysis, the overall image quality of LD-DLAs was best for DLA-3 (50% simulated radiation dose) and not significantly different from LD-ADMIRE. There were no additional artifacts in LD-DLA images. Conclusion DLAs achieved less noise than FBP and ADMIRE in LD CT images, but did not maintain spatial resolution. The DLA trained with 50% simulated radiation dose showed the best overall image quality.
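The noise power spectrum used for the phantom comparison can be estimated from detrended noise-only ROIs with the standard DFT-based estimator; a sketch follows, with square equally sized ROIs and the pixel size as assumed inputs.

```python
import numpy as np

def nps_2d(noise_rois, pixel_mm=0.7):
    """2-D noise power spectrum averaged over mean-subtracted square noise ROIs:
    NPS = (pixel area / number of pixels) * <|DFT(ROI - mean)|^2>."""
    rois = [roi - roi.mean() for roi in noise_rois]    # detrend each ROI
    n = rois[0].shape[0]
    spectra = [np.abs(np.fft.fftshift(np.fft.fft2(r))) ** 2 for r in rois]
    return pixel_mm ** 2 / (n * n) * np.mean(spectra, axis=0)
```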
Collapse
Affiliation(s)
- Yoon Joo Shin
- Department of Radiology, Konkuk University Medical Center, Seoul, Korea
| | - Won Chang
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea.
| | - Jong Chul Ye
- Bio Imaging and Signal Processing Lab, Department of Bio and Brain Engineering, KAIST, Daejeon, Korea
| | - Eunhee Kang
- Bio Imaging and Signal Processing Lab, Department of Bio and Brain Engineering, KAIST, Daejeon, Korea
| | - Dong Yul Oh
- Interdisciplinary Program in Bioengineering, Seoul National University, Seoul, Korea
| | - Yoon Jin Lee
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
| | - Ji Hoon Park
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
| | - Young Hoon Kim
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
| |
Collapse
|