101
A Review of Deep Learning Methods for Compressed Sensing Image Reconstruction and Its Medical Applications. Electronics 2022. [DOI: 10.3390/electronics11040586]
Abstract
Compressed sensing (CS) and its medical applications are active areas of research. In this paper, we review recent works that use deep learning methods to solve the CS problem for images or medical imaging reconstruction, including computed tomography (CT), magnetic resonance imaging (MRI) and positron-emission tomography (PET). We propose a novel framework to unify traditional iterative algorithms and deep learning approaches. In short, we define two projection operators, one toward the image prior and one toward data consistency, and any reconstruction algorithm can be decomposed into these two parts. Although deep learning methods can be divided into several categories, they all satisfy this framework. We establish the relationships between different deep learning reconstruction methods and connect them to traditional methods through the proposed framework. This also indicates that the key to solving the CS problem and its medical applications is how to depict the image prior. Based on the framework, we analyze current deep learning methods and point out important directions for future research.
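As a self-contained illustration of the two-operator decomposition described in this abstract (not the authors' code), the following NumPy sketch reconstructs a sparse signal from underdetermined measurements by alternating a data-consistency projection with a simple prior step, with soft-thresholding standing in for a learned image prior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressed-sensing setup: y = A x, with x sparse and A underdetermined.
n, m, k = 256, 96, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

# Data-consistency operator: orthogonal projection onto {x : A x = y}.
AAt_inv = np.linalg.inv(A @ A.T)
def project_data(x):
    return x + A.T @ (AAt_inv @ (y - A @ x))

# Prior operator: soft-thresholding, a simple stand-in for a learned image prior.
def project_prior(x, tau=0.02):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x = np.zeros(n)
for it in range(300):
    x = project_prior(project_data(x))   # alternate the two projection operators

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```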
102
Sun W, Symes DR, Brenner CM, Böhnel M, Brown S, Mavrogordato MN, Sinclair I, Salamon M. Review of high energy x-ray computed tomography for non-destructive dimensional metrology of large metallic advanced manufactured components. Reports on Progress in Physics 2022; 85:016102. [PMID: 35138267] [DOI: 10.1088/1361-6633/ac43f6]
Abstract
Advanced manufacturing technologies, led by additive manufacturing, have undergone significant growth in recent years. These technologies enable engineers to design parts with reduced weight while maintaining structural and functional integrity. In particular, metal additive manufacturing parts are increasingly used in application areas such as aerospace, where the failure of a mission-critical part can have dire safety consequences. Therefore, the quality of these components is extremely important. A critical aspect of quality control is dimensional evaluation, where measurements provide quantitative results that are traceable to the standard unit of length, the metre. Dimensional measurements allow designers, manufacturers and users to check product conformity against engineering drawings and enable the same quality standard to be used across the supply chain nationally and internationally. However, there is a lack of development of measurement techniques that provide non-destructive dimensional measurements beyond common non-destructive evaluation focused on defect detection. X-ray computed tomography (XCT) technology has great potential as a non-destructive dimensional evaluation technology; however, its development lags behind the demand and growth for advanced manufactured parts. Both the size and the value of advanced manufactured parts have grown significantly in recent years, leading to new requirements for dimensional measurement technologies. This paper is a cross-disciplinary review of state-of-the-art non-destructive dimensional measuring techniques relevant to the advanced manufacturing of metallic parts at larger length scales, especially the use of high energy XCT with source energies greater than 400 kV to address the need to measure large advanced manufactured parts. Technologies considered as potential high energy x-ray generators include conventional x-ray tubes and linear accelerators, as well as alternative technologies such as inverse Compton scattering sources, synchrotron sources and laser-driven plasma sources. Their technological advances and challenges are elaborated on. The paper also outlines the development of XCT for dimensional metrology and future needs.
Affiliation(s)
- Wenjuan Sun
- National Physical Laboratory, Hampton Road, Teddington, TW11 0LW, United Kingdom
- Daniel R Symes
- Central Laser Facility, STFC Rutherford Appleton Laboratory, Didcot, OX11 0QX, United Kingdom
- Ceri M Brenner
- Central Laser Facility, STFC Rutherford Appleton Laboratory, Didcot, OX11 0QX, United Kingdom
- Michael Böhnel
- Fraunhofer-Entwicklungszentrum Röntgentechnik EZRT, Fraunhofer-Institut für Integrierte Schaltungen IIS, Flugplatzstraße 75, 90768 Fürth, Germany
- Stephen Brown
- National Physical Laboratory, Hampton Road, Teddington, TW11 0LW, United Kingdom
- Ian Sinclair
- University of Southampton, Southampton, SO17 1BJ, United Kingdom
- Michael Salamon
- Fraunhofer-Entwicklungszentrum Röntgentechnik EZRT, Fraunhofer-Institut für Integrierte Schaltungen IIS, Flugplatzstraße 75, 90768 Fürth, Germany
103
Yu Z, Rahman MA, Jha AK. Investigating the limited performance of a deep-learning-based SPECT denoising approach: An observer-study-based characterization. Proceedings of SPIE 2022; 12035:120350D. [PMID: 35847481] [PMCID: PMC9286496] [DOI: 10.1117/12.2613134]
Abstract
Multiple objective assessment of image quality (OAIQ)-based studies have reported that several deep-learning (DL)-based denoising methods show limited performance on signal-detection tasks. Our goal was to investigate the reasons for this limited performance. To achieve this goal, we conducted a task-based characterization of a DL-based denoising approach for individual signal properties. We conducted this study in the context of evaluating a DL-based approach for denoising single-photon emission computed tomography (SPECT) images. The training data consisted of signals of different sizes and shapes within a clustered-lumpy background, imaged with a 2D parallel-hole-collimator SPECT system. The projections were generated at normal and 20% low-count levels, both of which were reconstructed using an ordered-subsets expectation-maximization (OSEM) algorithm. A convolutional neural network (CNN)-based denoiser was trained to process the low-count images. The performance of this CNN was characterized for five different signal sizes and four different signal-to-background ratios (SBRs) by designing each evaluation as a signal-known-exactly/background-known-statistically (SKE/BKS) signal-detection task. Performance on this task was evaluated using an anthropomorphic channelized Hotelling observer (CHO). As in previous studies, we observed that the DL-based denoising method did not improve performance on signal-detection tasks. Evaluation using this observer-study-based characterization demonstrated that the DL-based denoising approach did not improve performance on the signal-detection task for any of the signal types. Overall, these results provide new insights into the performance of the DL-based denoising approach as a function of signal size and contrast. More generally, the observer-study-based characterization provides a mechanism to evaluate the sensitivity of the method to specific object properties, and may be explored as analogous to characterizations such as the modulation transfer function for linear systems. Finally, this work underscores the need for objective task-based evaluation of DL-based denoising approaches.
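For readers unfamiliar with the evaluation machinery referenced above, the NumPy sketch below computes a channelized Hotelling observer detectability index on a synthetic two-class image ensemble; the channel choice, image model and all parameters are illustrative placeholders rather than those used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_img = 64, 400                      # image size and images per class (illustrative)

# Synthetic ensembles: signal-absent (noise plus background variability) and signal-present.
yy, xx = np.mgrid[:N, :N] - N / 2
signal = 2.0 * np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))       # small Gaussian signal
def make_images(with_signal):
    imgs = rng.normal(0, 1, (n_img, N, N))
    imgs += rng.normal(0, 0.5, (n_img, 1, 1))                 # crude per-image background shift
    if with_signal:
        imgs += signal
    return imgs
g_absent, g_present = make_images(False), make_images(True)

# Difference-of-Gaussians channels (a common anthropomorphic channel choice).
def dog(sigma1, sigma2):
    c = np.exp(-(xx**2 + yy**2) / (2 * sigma1**2)) - np.exp(-(xx**2 + yy**2) / (2 * sigma2**2))
    return (c / np.linalg.norm(c)).ravel()
channels = np.stack([dog(w, 1.66 * w) for w in (1.5, 2.5, 4.2, 7.0)])    # K x N^2

# Channel outputs, Hotelling template statistics, and detectability index d'.
v0 = g_absent.reshape(n_img, -1) @ channels.T
v1 = g_present.reshape(n_img, -1) @ channels.T
dv = v1.mean(0) - v0.mean(0)
S = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
d_prime = np.sqrt(dv @ np.linalg.solve(S, dv))
print("CHO detectability d' =", round(float(d_prime), 3))
```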
Affiliation(s)
- Zitong Yu
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, USA
- Md Ashequr Rahman
- Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, MO, USA
- Abhinav K. Jha
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, USA
- Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, MO, USA
104
Fu Y, Zhang H, Morris ED, Glide-Hurst CK, Pai S, Traverso A, Wee L, Hadzic I, Lønne PI, Shen C, Liu T, Yang X. Artificial Intelligence in Radiation Therapy. IEEE Transactions on Radiation and Plasma Medical Sciences 2022; 6:158-181. [PMID: 35992632] [PMCID: PMC9385128] [DOI: 10.1109/trpms.2021.3107454]
Abstract
Artificial intelligence (AI) has great potential to transform the clinical workflow of radiotherapy. Since the introduction of deep neural networks, many AI-based methods have been proposed to address challenges in different aspects of radiotherapy. Commercial vendors have started to release AI-based tools that can be readily integrated into the established clinical workflow. To show the recent progress in AI-aided radiotherapy, we review AI-based studies in five major aspects of radiotherapy, including image reconstruction, image registration, image segmentation, image synthesis, and automatic treatment planning. In each section, we summarize and categorize the recently published methods, followed by a discussion of the challenges, concerns, and future development. Given the rapid development of AI-aided radiotherapy, the efficiency and effectiveness of radiotherapy in the future could be substantially improved through intelligent automation of its various aspects.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Hao Zhang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Eric D. Morris
- Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA 90095, USA
- Carri K. Glide-Hurst
- Department of Human Oncology, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53792, USA
- Suraj Pai
- Maastricht University Medical Centre, Netherlands
- Leonard Wee
- Maastricht University Medical Centre, Netherlands
- Per-Ivar Lønne
- Department of Medical Physics, Oslo University Hospital, PO Box 4953 Nydalen, 0424 Oslo, Norway
- Chenyang Shen
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75002, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
105
Montoya JC, Zhang C, Li Y, Li K, Chen GH. Reconstruction of three-dimensional tomographic patient models for radiation dose modulation in CT from two scout views using deep learning. Med Phys 2022; 49:901-916. [PMID: 34908175] [PMCID: PMC9080958] [DOI: 10.1002/mp.15414]
Abstract
BACKGROUND: A tomographic patient model is essential for radiation dose modulation in x-ray computed tomography (CT). Currently, two-view scout images (also known as topograms) are used to estimate patient models with relatively uniform attenuation coefficients. These patient models do not account for the detailed anatomical variations of human subjects and thus may limit the accuracy of intraview or organ-specific dose modulation in emerging CT technologies.
PURPOSE: The purpose of this work was to show that 3D tomographic patient models can be generated from two-view scout images using deep learning strategies, and that the reconstructed 3D patient models indeed enable accurate prescription of fluence-field-modulated or organ-specific dose delivery in the subsequent CT scans.
METHODS: CT images and the corresponding two-view scout images were retrospectively collected from 4214 individual CT exams. The collected data were curated for the training of a deep neural network architecture termed ScoutCT-NET to generate 3D tomographic attenuation models from two-view scout images. The trained network was validated using a cohort of 55 136 images from 212 individual patients. To evaluate the accuracy of the reconstructed 3D patient models, radiation delivery plans were generated using ScoutCT-NET 3D patient models and compared with plans prescribed based on true CT images (gold standard) for both fluence-field-modulated CT and organ-specific CT. Radiation dose distributions were estimated using Monte Carlo simulations and quantitatively evaluated using the gamma analysis method. Modulated dose profiles were compared against state-of-the-art tube current modulation schemes. The impact of ScoutCT-NET patient-model-based dose modulation schemes on universal-purpose CT acquisitions and organ-specific acquisitions was also compared in terms of overall image appearance, noise magnitude, and noise uniformity.
RESULTS: The results demonstrate that (1) the end-to-end trained ScoutCT-NET can be used to generate 3D patient attenuation models and demonstrates empirical generalizability; (2) the 3D patient models can be used to accurately estimate the spatial distribution of radiation dose delivered by standard helical CT prior to the actual CT acquisition; compared to the gold-standard dose distribution, 95.0% of the voxels in the ScoutCT-NET-based dose maps have acceptable gamma values for 5 mm distance-to-agreement and 10% dose difference; (3) the 3D patient models also enabled accurate prescription of fluence-field-modulated CT to generate a more uniform noise distribution across the patient body compared to tube-current-modulated CT; and (4) ScoutCT-NET 3D patient models enabled accurate prescription of organ-specific CT to boost image quality for a given body region of interest under a given radiation dose constraint.
CONCLUSION: 3D tomographic attenuation models generated by ScoutCT-NET from two-view scout images can be used to prescribe fluence-field-modulated or organ-specific CT scans with high accuracy for the overall objective of radiation dose reduction or image quality improvement for a given imaging task.
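The gamma criterion used in the results can be illustrated with a simplified global 2D gamma pass-rate computation (brute-force neighbourhood search, edge wrap-around ignored); this is a generic sketch with assumed tolerances, not the authors' evaluation code:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, pixel_mm, dta_mm=5.0, dd_frac=0.10, search_mm=10.0):
    """Fraction of evaluated pixels with gamma <= 1 (global dose-difference criterion)."""
    dd_abs = dd_frac * dose_ref.max()                 # global dose normalization
    r = int(np.ceil(search_mm / pixel_mm))            # search radius in pixels
    gamma = np.full_like(dose_ref, np.inf, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            dist2 = (dy * pixel_mm) ** 2 + (dx * pixel_mm) ** 2
            shifted = np.roll(np.roll(dose_ref, dy, axis=0), dx, axis=1)   # wrap-around ignored
            diff2 = (dose_eval - shifted) ** 2
            gamma = np.minimum(gamma, np.sqrt(dist2 / dta_mm**2 + diff2 / dd_abs**2))
    return float((gamma <= 1.0).mean())

# Tiny example: a slightly shifted, rescaled Gaussian dose distribution.
yy, xx = np.mgrid[:80, :80]
ref = np.exp(-((xx - 40) ** 2 + (yy - 40) ** 2) / (2 * 10.0 ** 2))
ev = 1.05 * np.exp(-((xx - 42) ** 2 + (yy - 40) ** 2) / (2 * 10.0 ** 2))
print("gamma pass rate:", gamma_pass_rate(ref, ev, pixel_mm=2.0))
```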
Affiliation(s)
- Juan C Montoya
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Chengzhu Zhang
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Yinsheng Li
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Ke Li
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Guang-Hong Chen
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
106
Zeng D, Wang L, Geng M, Li S, Deng Y, Xie Q, Li D, Zhang H, Li Y, Xu Z, Meng D, Ma J. Noise-Generating-Mechanism-Driven Unsupervised Learning for Low-Dose CT Sinogram Recovery. IEEE Transactions on Radiation and Plasma Medical Sciences 2022. [DOI: 10.1109/trpms.2021.3083361]
107
Fan Y, Wang H, Gemmeke H, Hopp T, Hesser J. Model-data-driven image reconstruction with neural networks for ultrasound computed tomography breast imaging. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.09.035]
108
Wu D, Kim K, Li Q. Low-dose CT reconstruction with Noise2Noise network and testing-time fine-tuning. Med Phys 2021; 48:7657-7672. [PMID: 34791655] [PMCID: PMC11216369] [DOI: 10.1002/mp.15101]
Abstract
PURPOSE: Deep learning-based image denoising and reconstruction methods have demonstrated promising performance on low-dose CT imaging in recent years. However, most existing deep learning-based low-dose CT reconstruction methods require normal-dose images for training. Sometimes such clean images do not exist, for example, in dynamic CT imaging or for very large patients. The purpose of this work is to develop a low-dose CT image reconstruction algorithm based on deep learning that does not need clean images for training.
METHODS: In this paper, we propose a novel reconstruction algorithm in which the image prior is expressed via a Noise2Noise network whose weights are fine-tuned along with the image during the iterative reconstruction. The Noise2Noise network builds a self-consistency loss by splitting the projection data and mapping the corresponding filtered backprojection (FBP) results to each other with a deep neural network. Furthermore, the network weights are optimized along with the image to be reconstructed under an alternating optimization scheme. In the proposed method, no clean image is needed for network training, and the testing-time fine-tuning leads to an optimization tailored to each reconstruction.
RESULTS: We used the 2016 Low-dose CT Challenge dataset to validate the feasibility of the proposed method. We compared its performance to several existing iterative reconstruction algorithms that do not need clean training data, including total variation, non-local means, convolutional sparse coding, and Noise2Noise denoising. The proposed Noise2Noise reconstruction achieved better RMSE, SSIM and texture preservation than the other methods. The performance is also robust to the different noise levels, hyperparameters, and network structures used in the reconstruction. Furthermore, we demonstrated that the proposed method achieves competitive results without any pre-training of the network at all, that is, using randomly initialized network weights during testing. The proposed iterative reconstruction algorithm also shows empirical convergence with and without network pre-training.
CONCLUSIONS: The proposed Noise2Noise reconstruction method can achieve promising image quality in low-dose CT image reconstruction. The method works both with and without pre-training, and only noisy data are required for pre-training.
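The overall structure of the approach, namely projection-data splitting, a Noise2Noise self-consistency loss between the two split reconstructions, and alternating updates of the network weights and the image, can be sketched on a toy linear inverse problem as below (PyTorch); the operator, loss weights and network are illustrative stand-ins, not the paper's:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 64
A = torch.randn(2 * n, n) / n ** 0.5                 # stand-in linear forward operator
x_true = torch.zeros(n); x_true[20:40] = 1.0
y = A @ x_true + 0.05 * torch.randn(2 * n)           # noisy measurements

# Split measurements (analogous to projection-data splitting) and form crude reconstructions.
idx1, idx2 = torch.arange(0, 2 * n, 2), torch.arange(1, 2 * n, 2)
A1, y1, A2, y2 = A[idx1], y[idx1], A[idx2], y[idx2]
r1 = torch.linalg.pinv(A1) @ y1                      # FBP stand-in for split 1
r2 = torch.linalg.pinv(A2) @ y2                      # FBP stand-in for split 2

net = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, n))
x = r1.clone().requires_grad_(True)
opt_w = torch.optim.Adam(net.parameters(), lr=1e-3)
opt_x = torch.optim.Adam([x], lr=1e-2)
beta = 0.5                                           # illustrative prior weight

for it in range(1000):
    # (a) Noise2Noise self-consistency: map each split reconstruction to the other.
    opt_w.zero_grad()
    loss_w = ((net(r1) - r2) ** 2).mean() + ((net(r2) - r1) ** 2).mean()
    loss_w.backward(); opt_w.step()
    # (b) Image update: data fidelity plus closeness to the network-based prior.
    opt_x.zero_grad()
    with torch.no_grad():
        x_prior = 0.5 * (net(r1) + net(r2))
    loss_x = ((A @ x - y) ** 2).mean() + beta * ((x - x_prior) ** 2).mean()
    loss_x.backward(); opt_x.step()

print("relative error:", (torch.norm(x.detach() - x_true) / torch.norm(x_true)).item())
```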
Affiliation(s)
- Dufan Wu
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Kyungsang Kim
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Quanzheng Li
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
109
Xia W, Lu Z, Huang Y, Shi Z, Liu Y, Chen H, Chen Y, Zhou J, Zhang Y. MAGIC: Manifold and Graph Integrative Convolutional Network for Low-Dose CT Reconstruction. IEEE Transactions on Medical Imaging 2021; 40:3459-3472. [PMID: 34110990] [DOI: 10.1109/tmi.2021.3088344]
Abstract
Low-dose computed tomography (LDCT) scans can effectively alleviate the radiation exposure problem, but at the cost of degraded imaging quality. In this paper, we propose a novel LDCT reconstruction network that unrolls the iterative scheme and operates in both the image and manifold spaces. Because patch manifolds of medical images have low-dimensional structures, we can build graphs from the manifolds. Then, we simultaneously leverage spatial convolution to extract local pixel-level features from the images and incorporate graph convolution to analyze nonlocal topological features in manifold space. The experiments show that our proposed method outperforms state-of-the-art methods in both quantitative and qualitative aspects. In addition, aided by a projection loss component, our proposed method also demonstrates superior performance for semi-supervised learning: the network can remove most of the noise while maintaining image details with only 10% (40 slices) of the training data labeled.
110
Lei Y, Zhang J, Shan H. Strided Self-Supervised Low-Dose CT Denoising for Lung Nodule Classification. Phenomics 2021; 1:257-268. [PMID: 36939784] [PMCID: PMC9590543] [DOI: 10.1007/s43657-021-00025-y]
Abstract
Lung nodule classification based on low-dose computed tomography (LDCT) images has attracted major attention thanks to the reduced radiation dose and its potential for early diagnosis of lung cancer from LDCT-based lung cancer screening. However, LDCT images suffer from severe noise, which largely influences the performance of lung nodule classification. Current methods combining the denoising and classification tasks typically require the corresponding normal-dose CT (NDCT) images as supervision for the denoising task, which is impractical in the context of clinical diagnosis using LDCT. To jointly train these two tasks in a unified framework without NDCT images, this paper introduces a novel self-supervised method, termed strided Noise2Neighbors or SN2N, for blind medical image denoising and lung nodule classification, where the supervision is generated from the noisy input images. More specifically, the proposed SN2N constructs the supervision information from its neighbors for LDCT denoising, so NDCT images are no longer needed. The proposed SN2N method enables joint training of the LDCT denoising and lung nodule classification tasks by using a self-supervised loss for denoising and a cross-entropy loss for classification. Extensive experimental results on the Mayo LDCT dataset demonstrate that our SN2N achieves competitive performance compared with supervised learning methods that have paired NDCT images as supervision. Moreover, our results on the LIDC-IDRI dataset show that the joint training of LDCT denoising and lung nodule classification significantly improves the performance of LDCT-based lung nodule classification.
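A compact PyTorch sketch of the underlying idea of drawing denoising supervision from neighbouring pixels of the same noisy image is given below; it is a generic neighbour-subsampling scheme in the spirit of SN2N (the strided sampler details and the joint classification branch of the paper are omitted, and all sizes are illustrative):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Noisy LDCT-like stand-in: a smooth phantom plus independent pixel noise.
N = 128
yy, xx = torch.meshgrid(torch.arange(N), torch.arange(N), indexing="ij")
clean = ((xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2).float()
noisy = (clean + 0.3 * torch.randn(N, N))[None, None]      # (B, C, H, W)

def neighbor_pair(img):
    """Two half-resolution sub-images taken from adjacent pixels of each 2x2 cell."""
    a = img[..., 0::2, 0::2]                                # top-left pixel of each cell
    b = img[..., 0::2, 1::2]                                # its right-hand neighbour
    return a, b

denoiser = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

inp, tgt = neighbor_pair(noisy)
for step in range(300):
    opt.zero_grad()
    # Self-supervised loss: predict one neighbour sub-image from the other.
    loss = ((denoiser(inp) - tgt) ** 2).mean()
    loss.backward(); opt.step()

with torch.no_grad():
    denoised = denoiser(noisy)                              # apply to the full-resolution image
print("residual error std:", (denoised - clean[None, None]).std().item())
```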
Affiliation(s)
- Yiming Lei
- Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai 200433, China
- Junping Zhang
- Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai 200433, China
- Hongming Shan
- Institute of Science and Technology for Brain-Inspired Intelligence and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200433, China
- Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 201210, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, Shanghai 201210, China
111
Su T, Cui Z, Yang J, Zhang Y, Liu J, Zhu J, Gao X, Fang S, Zheng H, Ge Y, Liang D. Generalized deep iterative reconstruction for sparse-view CT imaging. Phys Med Biol 2021; 67. [PMID: 34847538] [DOI: 10.1088/1361-6560/ac3eae]
Abstract
Sparse-view CT is a promising approach to reducing the x-ray radiation dose in clinical CT imaging. However, CT images reconstructed with the conventional filtered backprojection (FBP) algorithm suffer from severe streaking artifacts. Iterative reconstruction (IR) algorithms have been widely adopted to mitigate these streaking artifacts, but they may prolong the CT imaging time due to intense data-specific computations. Recently, model-driven deep learning (DL) CT image reconstruction methods, which unroll the iterative optimization procedure into a deep neural network, have shown exciting prospects for improving image quality and shortening reconstruction time. In this work, we explore a generalized unrolling scheme for such iterative models to further enhance their performance on sparse-view CT imaging. In this scheme, the iteration parameters, the regularizer, the data-fidelity term and even the mathematical operations are all assumed to be learned and optimized via network training. Results from numerical and experimental sparse-view CT imaging demonstrate that the newly proposed network with the maximum generalization provides the best reconstruction performance.
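A schematic PyTorch module for this kind of generalized unrolling, in which the step sizes, the data-fidelity weights and the regularizer (a small CNN here) are all learnable, is sketched below; the forward operator is an abstract placeholder and none of the architectural choices are the paper's:

```python
import torch
import torch.nn as nn

class UnrolledRecon(nn.Module):
    """x_{k+1} = x_k - alpha_k * (lam_k * A^T(A x_k - y) + R_k(x_k)), with all parameters learned."""

    def __init__(self, n_iters=5):
        super().__init__()
        self.alpha = nn.Parameter(torch.full((n_iters,), 0.1))   # learned step sizes
        self.lam = nn.Parameter(torch.ones(n_iters))             # learned data-fidelity weights
        self.reg = nn.ModuleList([                               # learned regularizer per stage
            nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
            for _ in range(n_iters)
        ])

    def forward(self, x0, y, A, AT):
        x = x0
        for k in range(len(self.reg)):
            grad_fid = AT(A(x) - y)                              # gradient of the data-fidelity term
            x = x - self.alpha[k] * (self.lam[k] * grad_fid + self.reg[k](x))
        return x

# Toy usage with a blurring operator standing in for the CT system matrix.
blur = nn.Conv2d(1, 1, 5, padding=2, bias=False)
blur.weight.data.fill_(1.0 / 25)                                 # symmetric box kernel => A^T = A
A = lambda img: blur(img)
AT = A
x_true = torch.zeros(1, 1, 64, 64); x_true[..., 20:44, 20:44] = 1.0
y = A(x_true).detach() + 0.01 * torch.randn(1, 1, 64, 64)
model = UnrolledRecon()
x_hat = model(y.clone(), y, A, AT)                               # one untrained forward pass
print(x_hat.shape)
```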
Affiliation(s)
- Ting Su
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhuoxu Cui
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Jiecheng Yang
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yunxin Zhang
- Beijing Jishuitan Hospital, Beijing, China
- Jian Liu
- Beijing Tiantan Hospital, Beijing, China
- Jiongtao Zhu
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xiang Gao
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shibo Fang
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Hairong Zheng
- Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, 1068 Xueyuan Avenue, Shenzhen University Town, Shenzhen, China
- Yongshuai Ge
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Dong Liang
- Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Avenue, Shenzhen University Town, Shenzhen 518055, China
112
Zavala-Mondragon LA, de With PHN, van der Sommen F. Image Noise Reduction Based on a Fixed Wavelet Frame and CNNs Applied to CT. IEEE Transactions on Image Processing 2021; 30:9386-9401. [PMID: 34757905] [DOI: 10.1109/tip.2021.3125489]
Abstract
Radiation exposure in CT imaging leads to increased patient risk. This motivates the pursuit of reduced-dose scanning protocols, in which noise reduction processing is indispensable to warrant clinically acceptable image quality. Convolutional Neural Networks (CNNs) have received significant attention as an alternative to conventional noise reduction and are able to achieve state-of-the-art results. However, the internal signal processing in such networks is often unknown, leading to sub-optimal network architectures. The need for better signal preservation and more transparency motivates the use of Wavelet Shrinkage Networks (WSNs), in which the Encoding-Decoding (ED) path is the fixed wavelet frame known as the Overcomplete Haar Wavelet Transform (OHWT) and the noise reduction stage is data-driven. In this work, we considerably extend the WSN framework with three main improvements. First, we simplify the computation of the OHWT so that it can be easily reproduced. Second, we update the architecture of the shrinkage stage by further incorporating knowledge of conventional wavelet shrinkage methods. Finally, we extensively test its performance and generalization by comparing it with the RED and FBPConvNet CNNs. Our results show that the proposed architecture achieves similar performance to the references in terms of MSSIM (0.667, 0.662 and 0.657 for DHSN2, FBPConvNet and RED, respectively) and achieves excellent quality when visualizing patches of clinically important structures. Furthermore, we demonstrate the enhanced generalization and further advantages of the signal flow by showing two additional potential applications in which the new DHSN2 is used as a regularizer: (1) iterative reconstruction and (2) ground-truth-free training of the proposed noise reduction architecture. The presented results prove that the tight integration of signal processing and deep learning leads to simpler models with improved generalization.
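To make the "fixed wavelet frame plus data-driven shrinkage" idea concrete, here is a minimal PyTorch sketch using a single-level decimated orthonormal Haar transform with a learned soft-threshold on the detail subbands; the paper instead uses an overcomplete (undecimated) Haar frame and a more elaborate shrinkage stage, so this is only a structural illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Orthonormal 2x2 Haar analysis kernels: LL, LH, HL, HH (each has unit norm).
_haar = 0.5 * torch.tensor([
    [[ 1.,  1.], [ 1.,  1.]],    # LL (approximation)
    [[ 1.,  1.], [-1., -1.]],    # LH
    [[ 1., -1.], [ 1., -1.]],    # HL
    [[ 1., -1.], [-1.,  1.]],    # HH
]).unsqueeze(1)                  # shape (4, 1, 2, 2)

class HaarShrinkageNet(nn.Module):
    """Fixed Haar analysis/synthesis with a learned per-subband soft-threshold."""

    def __init__(self):
        super().__init__()
        self.register_buffer("kernels", _haar)               # fixed, not trained
        self.raw_t = nn.Parameter(torch.full((3,), -3.0))    # thresholds for LH, HL, HH

    def forward(self, x):                                     # x: (B, 1, H, W), H and W even
        sub = F.conv2d(x, self.kernels, stride=2)             # analysis: (B, 4, H/2, W/2)
        ll, detail = sub[:, :1], sub[:, 1:]
        t = F.softplus(self.raw_t).view(1, 3, 1, 1)           # learned positive thresholds
        detail = torch.sign(detail) * F.relu(detail.abs() - t)
        sub = torch.cat([ll, detail], dim=1)
        return F.conv_transpose2d(sub, self.kernels, stride=2)  # exact inverse transform

net = HaarShrinkageNet()
x = torch.randn(1, 1, 64, 64)
print((net(x) - x).abs().max().item())   # small residual: only the shrinkage alters the signal
```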
113
Wang S, Chen J, Zhang F, Zhao M, Cui X, Chen S. Accelerating Monte Carlo simulation of light propagation in tissue mimicking turbid medium based on generative adversarial networks. Med Phys 2021; 49:1209-1215. [PMID: 34788482] [DOI: 10.1002/mp.15350]
Abstract
PURPOSE: Monte Carlo (MC) simulation is the most frequently used method to numerically model light propagation in biological tissues because of its high flexibility and precision. Although MC simulation can in principle achieve any desired precision, a larger number of photons is always necessary for more precise simulation, leading to its major limitation of intensive computation. In this work, the authors present a way to adapt generative adversarial networks (GANs) to accelerate MC simulation.
METHODS: The pix2pix network, a variant of GAN, was investigated to reconstruct precise MC simulation results from results roughly modeled with a small number of photons, so that the computation time could be significantly reduced. The proposed method was tested on single-layer embedded tumor models to derive the absorption distribution maps.
RESULTS: The results demonstrate that the absorption distribution maps reconstructed from simulations of only 10 000 photons were very similar to those modeled using 1 000 000 photons, based on the criteria of peak signal-to-noise ratio (PSNR) and the percentage difference of power coupling efficiencies, and the simulation process was accelerated by approximately 10² times.
CONCLUSIONS: For the first time, a GAN was adapted to save the computation time of MC simulation of light propagation. By achieving MC simulation with acceptable quality, the proposed method can speed up the computation by hundreds of times.
Affiliation(s)
- Shiyuan Wang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Jiaming Chen
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Fengdi Zhang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Mingyang Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Xiaoyu Cui
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- Shuo Chen
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
114
Zhang Y, Hu D, Zhao Q, Quan G, Liu J, Liu Q, Zhang Y, Coatrieux G, Chen Y, Yu H. CLEAR: Comprehensive Learning Enabled Adversarial Reconstruction for Subtle Structure Enhanced Low-Dose CT Imaging. IEEE Transactions on Medical Imaging 2021; 40:3089-3101. [PMID: 34270418] [DOI: 10.1109/tmi.2021.3097808]
Abstract
X-ray computed tomography (CT) is of great clinical significance in medical practice because it can provide anatomical information about the human body noninvasively, while its radiation risk has continued to attract public concern. Reducing the radiation dose may introduce noise and artifacts into the reconstructed images, which will interfere with the judgments of radiologists. Previous studies have confirmed that deep learning (DL) is promising for improving low-dose CT imaging. However, almost all DL-based methods suffer from subtle structure degradation and a blurring effect after aggressive denoising, which has become a general challenge. This paper develops the Comprehensive Learning Enabled Adversarial Reconstruction (CLEAR) method to tackle these problems. CLEAR achieves subtle-structure-enhanced low-dose CT imaging through a progressive improvement strategy. First, the generator, established on the comprehensive domain, can extract more features than one built on degraded CT images and directly maps raw projections to high-quality CT images, which is significantly different from routine GAN practice. Second, a multi-level loss is assigned to the generator to push all the network components to be updated towards high-quality reconstruction, preserving the consistency between generated images and gold-standard images. Finally, following the WGAN-GP formulation, CLEAR can migrate the real statistical properties to the generated images to alleviate over-smoothing. Qualitative and quantitative analyses have demonstrated the competitive performance of CLEAR in terms of noise suppression, structural fidelity and visual perception improvement.
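Since CLEAR follows the WGAN-GP formulation, the standard gradient-penalty term can be sketched as below (generic PyTorch WGAN-GP code with a dummy critic, not the paper's full multi-level loss):

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    """Standard WGAN-GP penalty: unit-gradient-norm constraint at interpolated samples."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    inter = eps * real + (1.0 - eps) * fake.detach()
    inter.requires_grad_(True)
    score = critic(inter)
    grads = torch.autograd.grad(score.sum(), inter, create_graph=True)[0]
    return lam * ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

# Dummy critic and data purely for demonstration.
critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 1))
real, fake = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
gp = gradient_penalty(critic, real, fake)
print(gp.item())
# A critic update would then minimize: critic(fake).mean() - critic(real).mean() + gp
```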
115
Xia W, Lu Z, Huang Y, Liu Y, Chen H, Zhou J, Zhang Y. CT Reconstruction With PDF: Parameter-Dependent Framework for Data From Multiple Geometries and Dose Levels. IEEE Transactions on Medical Imaging 2021; 40:3065-3076. [PMID: 34086564] [DOI: 10.1109/tmi.2021.3085839]
Abstract
Current mainstream computed tomography (CT) reconstruction methods based on deep learning usually need to fix the scanning geometry and dose level, which significantly increases the training cost and the amount of training data required for real clinical applications. In this paper, we propose a parameter-dependent framework (PDF) that trains a reconstruction network with data originating from multiple alternative geometries and dose levels simultaneously. In the proposed PDF, the geometry and dose level are parameterized and fed into two multilayer perceptrons (MLPs). The outputs of the MLPs are used to modulate the feature maps of the CT reconstruction network, which conditions the network outputs on the different geometries and dose levels. The experiments show that our proposed method obtains competitive performance compared to the original network trained with either a specific or a mixed geometry and dose level, while efficiently saving the extra training cost of covering multiple geometries and dose levels.
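The modulation mechanism described above, MLPs that map the scan parameters to per-channel factors applied to the feature maps, resembles feature-wise linear modulation; the PyTorch sketch below uses assumed layer sizes and a scale-plus-shift form purely for illustration, not the exact PDF design:

```python
import torch
import torch.nn as nn

class ParamModulatedBlock(nn.Module):
    """Conv block whose feature maps are scaled/shifted by MLP outputs of the scan parameters."""

    def __init__(self, channels=32, n_params=2):        # e.g. params = (geometry code, dose level)
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.mlp_scale = nn.Sequential(nn.Linear(n_params, 64), nn.ReLU(), nn.Linear(64, channels))
        self.mlp_shift = nn.Sequential(nn.Linear(n_params, 64), nn.ReLU(), nn.Linear(64, channels))

    def forward(self, feat, params):                     # feat: (B, C, H, W), params: (B, n_params)
        gamma = self.mlp_scale(params).unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
        beta = self.mlp_shift(params).unsqueeze(-1).unsqueeze(-1)
        return torch.relu(gamma * self.conv(feat) + beta)

block = ParamModulatedBlock()
feat = torch.randn(2, 32, 64, 64)
params = torch.tensor([[0.0, 0.25], [1.0, 1.00]])        # two different geometry/dose settings
print(block(feat, params).shape)
```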
116
Wu W, Hu D, Niu C, Yu H, Vardhanabhuti V, Wang G. DRONE: Dual-Domain Residual-based Optimization NEtwork for Sparse-View CT Reconstruction. IEEE Transactions on Medical Imaging 2021; 40:3002-3014. [PMID: 33956627] [PMCID: PMC8591633] [DOI: 10.1109/tmi.2021.3078067]
Abstract
Deep learning has attracted rapidly increasing attention in the field of tomographic image reconstruction, especially for CT, MRI, PET/SPECT, ultrasound and optical imaging. Among various topics, sparse-view CT remains a challenge: obtaining a decent image reconstruction from very few projections. To address this challenge, in this article we propose a Dual-domain Residual-based Optimization NEtwork (DRONE). DRONE consists of three modules for embedding, refinement, and awareness, respectively. In the embedding module, a sparse sinogram is first extended; then, sparse-view artifacts are effectively suppressed in the image domain. After that, the refinement module recovers image details in the residual data and image domains synergistically. Finally, the results from the embedding and refinement modules in the data and image domains are regularized for optimized image quality in the awareness module, which ensures the consistency between measurements and images with the kernel awareness of compressed sensing. The DRONE network is trained, validated, and tested on preclinical and clinical datasets, demonstrating its merits in edge preservation, feature recovery, and reconstruction accuracy.
117
Tao X, Wang Y, Lin L, Hong Z, Ma J. Learning to Reconstruct CT Images From the VVBP-Tensor. IEEE Transactions on Medical Imaging 2021; 40:3030-3041. [PMID: 34138703] [DOI: 10.1109/tmi.2021.3090257]
Abstract
Deep learning (DL) is driving a major shift in the field of computed tomography (CT) imaging. In general, DL for CT imaging can be applied by processing the projection or image data with trained deep neural networks (DNNs), by unrolling an iterative reconstruction as a DNN for training, or by training a well-designed DNN to directly reconstruct the image from the projection. In all of these applications, the whole or part of the DNNs work in the projection or image domain, alone or in combination. In this study, instead of focusing on the projection or image, we train DNNs to reconstruct CT images from the view-by-view backprojection tensor (VVBP-Tensor). The VVBP-Tensor is the 3D data before summation in backprojection. It contains the structures of the scanned object after applying a sorting operation. Unlike the image or projection, which provide compressed information due to the integration/summation step in forward or back projection, the VVBP-Tensor provides lossless information for processing, allowing the trained DNNs to preserve fine details of the image. We develop a learning strategy that inputs slices of the VVBP-Tensor as feature maps and outputs the image. This strategy can be viewed as a generalization of the summation step in conventional filtered backprojection reconstruction. Numerous experiments reveal that the proposed VVBP-Tensor domain learning framework obtains significant improvement over the image, projection, and hybrid projection-image domain learning frameworks. We hope the VVBP-Tensor domain learning framework can inspire algorithm development for DL-based CT imaging.
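To clarify what the VVBP-Tensor is, the NumPy sketch below builds the view-by-view backprojection tensor of a simple analytic disk phantom (parallel-beam geometry, ramp filtering omitted), applies the sorting operation along the view axis, and verifies that summing over views gives the ordinary backprojection; the geometry and phantom are illustrative only:

```python
import numpy as np

N, n_views = 128, 90
angles = np.linspace(0.0, np.pi, n_views, endpoint=False)
s = np.arange(N) - N / 2 + 0.5                       # detector coordinates (pixel units)
radius = 40.0

# Analytic parallel-beam projections of a centred disk: p(s) = 2*sqrt(r^2 - s^2).
sino = 2.0 * np.sqrt(np.clip(radius**2 - s**2, 0.0, None))   # identical for every angle
sino = np.tile(sino, (n_views, 1))                            # (n_views, N)

# View-by-view backprojection: keep each view's contribution instead of summing.
yy, xx = np.mgrid[:N, :N] - N / 2 + 0.5
vvbp = np.empty((n_views, N, N))
for i, theta in enumerate(angles):
    t = xx * np.cos(theta) + yy * np.sin(theta)               # detector coordinate of each pixel
    vvbp[i] = np.interp(t, s, sino[i])                        # one backprojected view

vvbp_sorted = np.sort(vvbp, axis=0)          # the sorting operation along the view axis
backprojection = vvbp.sum(axis=0)            # summation recovers the usual (unfiltered) BP

print(vvbp_sorted.shape, np.allclose(vvbp_sorted.sum(axis=0), backprojection))
```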
118
Huang Y, Preuhs A, Manhart M, Lauritsch G, Maier A. Data Extrapolation From Learned Prior Images for Truncation Correction in Computed Tomography. IEEE Transactions on Medical Imaging 2021; 40:3042-3053. [PMID: 33844627] [DOI: 10.1109/tmi.2021.3072568]
Abstract
Data truncation is a common problem in computed tomography (CT). Truncation causes cupping artifacts inside the field-of-view (FOV) and anatomical structures missing outside the FOV. Deep learning has achieved impressive results in CT reconstruction from limited data; however, its robustness is still a concern for clinical applications. Although the image quality of learning-based compensation schemes may be inadequate for clinical diagnosis, they can provide prior information for more accurate extrapolation than conventional heuristic extrapolation methods. With the extrapolated projections, a conventional image reconstruction algorithm can be applied to obtain a final reconstruction. In this work, a general plug-and-play (PnP) method for truncation correction is proposed based on this idea, in which various deep learning methods and conventional reconstruction algorithms can be plugged in. Such a PnP method integrates data consistency for the measured data with learned prior image information for the truncated data, and it is shown to have better robustness and interpretability than deep learning alone. To demonstrate the efficacy of the proposed PnP method, two state-of-the-art deep learning methods, FBPConvNet and Pix2pixGAN, are investigated for truncation correction in cone-beam CT in noise-free and noisy cases. Their robustness is evaluated by showing false negative and false positive lesion cases. With our proposed PnP method, false lesion structures are corrected for both deep learning methods. For FBPConvNet, the root-mean-square error (RMSE) inside the FOV can be improved from 92 HU to around 30 HU by PnP in the noisy case. Pix2pixGAN alone achieves better image quality than FBPConvNet alone for truncation correction in general, and PnP further improves the RMSE inside the FOV from 42 HU to around 27 HU for Pix2pixGAN. The efficacy of PnP is also demonstrated on real clinical head data.
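The data-consistency step at the core of the plug-and-play scheme, keeping the measured projections inside the FOV and using the learned prior only to extrapolate the truncated part, can be sketched as follows; the prior sinogram is assumed to come from forward-projecting the network output, and the edge-matching offset is an illustrative choice rather than the paper's exact procedure:

```python
import numpy as np

def merge_truncated(sino_measured, sino_prior, fov_mask):
    """Keep measured data inside the FOV; use prior-based data outside, matched at the FOV edges.

    sino_measured, sino_prior : (n_views, n_det) arrays (prior = reprojected prior image, assumed)
    fov_mask                  : boolean (n_det,), True on the contiguous in-FOV detector channels
    """
    extended = np.where(fov_mask[None, :], sino_measured, sino_prior)
    left, right = np.flatnonzero(fov_mask)[[0, -1]]
    # Shift the extrapolated parts so they are continuous with the measured edge values (per view).
    extended[:, :left] += sino_measured[:, [left]] - sino_prior[:, [left]]
    extended[:, right + 1:] += sino_measured[:, [right]] - sino_prior[:, [right]]
    return extended

# Toy example with a centred 60-channel FOV out of 128 detector channels.
rng = np.random.default_rng(0)
n_views, n_det = 90, 128
fov = np.zeros(n_det, dtype=bool); fov[34:94] = True
prior = rng.random((n_views, n_det))                           # stand-in reprojected prior image
measured = np.where(fov[None, :], prior + 0.05 * rng.standard_normal((n_views, n_det)), 0.0)
extended = merge_truncated(measured, prior, fov)
print(extended.shape)
```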
119
Zhang Z, Yu S, Qin W, Liang X, Xie Y, Cao G. Self-supervised CT super-resolution with hybrid model. Comput Biol Med 2021; 138:104775. [PMID: 34666243] [DOI: 10.1016/j.compbiomed.2021.104775]
Abstract
Software-based methods can improve CT spatial resolution without changing the hardware of the scanner or increasing the radiation dose to the object. In this work, we aim to develop a deep learning (DL)-based CT super-resolution (SR) method that can reconstruct low-resolution (LR) sinograms into high-resolution (HR) CT images. We mathematically analyzed the imaging processes in the CT SR imaging problem and synergistically integrated the SR model in the sinogram domain and the deblurring model in the image domain into a hybrid model (SADIR). SADIR incorporates CT domain knowledge and is unrolled into a DL network (SADIR-Net). SADIR-Net is a self-supervised network that can be trained and tested with a single sinogram. SADIR-Net was evaluated through SR CT imaging of a Catphan700 physical phantom and a real porcine phantom, and its performance was compared to other state-of-the-art (SotA) DL-based CT SR methods. On both phantoms, SADIR-Net obtains the highest information fidelity criterion (IFC), the highest structural similarity index (SSIM), and the lowest root-mean-square error (RMSE). As for the modulation transfer function (MTF), SADIR-Net also obtains the best result, improving MTF50% by 69.2% and MTF10% by 69.5% compared with FBP. Moreover, the spatial resolutions at MTF50% and MTF10% from SADIR-Net reach 91.3% and 89.3% of their counterparts reconstructed from the HR sinogram with FBP. The results show that SADIR-Net can provide performance comparable to the other SotA methods for CT SR reconstruction, especially in the case of extremely limited training data or even no data at all. Thus, the SADIR method could find use in improving CT resolution without changing the hardware of the scanner or increasing the radiation dose to the object.
Affiliation(s)
- Zhicheng Zhang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305-5847, USA
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Shaode Yu
- College of Information and Communication Engineering, Communication University of China, Beijing 100024, China
- Wenjian Qin
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Xiaokun Liang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Yaoqin Xie
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Guohua Cao
- Virginia Polytechnic Institute & State University, Blacksburg, VA 24061, USA
120
Bai T, Wang B, Nguyen D, Wang B, Dong B, Cong W, Kalra MK, Jiang S. Deep Interactive Denoiser (DID) for X-Ray Computed Tomography. IEEE Transactions on Medical Imaging 2021; 40:2965-2975. [PMID: 34329156] [DOI: 10.1109/tmi.2021.3101241]
Abstract
Low-dose computed tomography (LDCT) is desirable for both diagnostic imaging and image-guided interventions. Denoisers are widely used to improve the quality of LDCT. Deep learning (DL)-based denoisers have shown state-of-the-art performance and are becoming mainstream methods. However, there are two challenges to using DL-based denoisers: 1) a trained model typically does not generate different image candidates with different noise-resolution tradeoffs, which are sometimes needed for different clinical tasks; and 2) the model's generalizability might be an issue when the noise level in the testing images differs from that in the training dataset. To address these two challenges, in this work, we introduce a lightweight optimization process that can run on top of any existing DL-based denoiser during the testing phase to generate multiple image candidates with different noise-resolution tradeoffs suitable for different clinical tasks in real time. Consequently, our method allows users to interact with the denoiser to efficiently review various image candidates and quickly pick the desired one; thus, we termed this method deep interactive denoiser (DID). Experimental results demonstrated that DID can deliver multiple image candidates with different noise-resolution tradeoffs and shows great generalizability across various network architectures, as well as training and testing datasets with various noise levels.
121
Ye S, Li Z, McCann MT, Long Y, Ravishankar S. Unified Supervised-Unsupervised (SUPER) Learning for X-Ray CT Image Reconstruction. IEEE Transactions on Medical Imaging 2021; 40:2986-3001. [PMID: 34232871] [DOI: 10.1109/tmi.2021.3095310]
Abstract
Traditional model-based image reconstruction (MBIR) methods combine forward and noise models with simple object priors. Recent machine learning methods for image reconstruction typically involve supervised learning or unsupervised learning, both of which have their advantages and disadvantages. In this work, we propose a unified supervised-unsupervised (SUPER) learning framework for X-ray computed tomography (CT) image reconstruction. The proposed learning formulation combines unsupervised learning-based priors (or even simple analytical priors) with (supervised) deep network-based priors in a unified MBIR framework based on a fixed-point iteration analysis. The proposed training algorithm is also an approximate scheme for a bilevel supervised training optimization problem, wherein the network-based regularizer in the lower-level MBIR problem is optimized using an upper-level reconstruction loss. The training problem is optimized by alternating between updating the network weights and iteratively updating the reconstructions based on those weights. We demonstrate the learned SUPER models' efficacy for low-dose CT image reconstruction, for which we use the NIH AAPM Mayo Clinic Low Dose CT Grand Challenge dataset for training and testing. In our experiments, we studied different combinations of supervised deep network priors and unsupervised learning-based or analytical priors. Both numerical and visual results show the superiority of the proposed unified SUPER methods over standalone supervised learning-based methods, iterative MBIR methods, and variations of SUPER obtained via ablation studies. We also show that the proposed algorithm converges rapidly in practice.
122
He J, Chen S, Zhang H, Tao X, Lin W, Zhang S, Zeng D, Ma J. Downsampled Imaging Geometric Modeling for Accurate CT Reconstruction via Deep Learning. IEEE Transactions on Medical Imaging 2021; 40:2976-2985. [PMID: 33881992] [DOI: 10.1109/tmi.2021.3074783]
Abstract
X-ray computed tomography (CT) is widely used clinically to diagnose a variety of diseases by reconstructing the tomographic images of a living subject using penetrating X-rays. For accurate CT image reconstruction, a precise imaging geometric model for the radiation attenuation process is usually required to solve the inversion problem of CT scanning, which encodes the subject into a set of intermediate representations in different angular positions. Here, we show that accurate CT image reconstruction can be subsequently achieved by downsampled imaging geometric modeling via deep-learning techniques. Specifically, we first propose a downsampled imaging geometric modeling approach for the data acquisition process and then incorporate it into a hierarchical neural network, which simultaneously combines both geometric modeling knowledge of the CT imaging system and prior knowledge gained from a data-driven training process for accurate CT image reconstruction. The proposed neural network is denoted as DSigNet, i.e., downsampled-imaging-geometry-based network for CT image reconstruction. We demonstrate the feasibility of the proposed DSigNet for accurate CT image reconstruction with clinical patient data. In addition to improving the CT image quality, the proposed DSigNet might help reduce the computational complexity and accelerate the reconstruction speed for modern CT imaging systems.
123
Ketola JHJ, Heino H, Juntunen MAK, Nieminen MT, Siltanen S, Inkinen SI. Generative adversarial networks improve interior computed tomography angiography reconstruction. Biomed Phys Eng Express 2021; 7. [PMID: 34673559] [DOI: 10.1088/2057-1976/ac31cb]
Abstract
In interior computed tomography (CT), the x-ray beam is collimated to a limited field-of-view (FOV) (e.g. the volume of the heart) to decrease exposure to adjacent organs, but the resulting image has a severe truncation artifact when reconstructed with traditional filtered back-projection (FBP) type algorithms. In some examinations, such as cardiac or dentomaxillofacial imaging, interior CT could be used to achieve further dose reductions. In this work, we describe a deep learning (DL) method to obtain artifact-free images from interior CT angiography. Our method employs the Pix2Pix generative adversarial network (GAN) in a two-stage process: (1) an extended sinogram is computed from a truncated sinogram with one GAN model, and (2) the FBP reconstruction obtained from that extended sinogram is used as an input to another GAN model that improves the quality of the interior reconstruction. Our double GAN (DGAN) model was trained with 10 000 truncated sinograms simulated from real computed tomography angiography slice images. Truncated sinograms (input) were used with the original slice images (target) in training to yield an improved reconstruction (output). DGAN performance was compared with the adaptive de-truncation method, total variation regularization, and two reference DL methods: FBPConvNet and U-Net-based sinogram extension (ES-UNet). Our DGAN method and ES-UNet yielded the best root-mean-square error (RMSE) (0.03 ± 0.01) and structural similarity index (SSIM) (0.92 ± 0.02) values, and the reference DL methods also yielded good results. Furthermore, we performed an extended-FOV analysis by increasing the reconstruction area by 10% and 20%. In both cases, the DGAN approach yielded the best results in RMSE (0.03 ± 0.01 and 0.04 ± 0.01 for the 10% and 20% cases, respectively), peak signal-to-noise ratio (PSNR) (30.5 ± 2.6 dB and 28.6 ± 2.6 dB), and SSIM (0.90 ± 0.02 and 0.87 ± 0.02). In conclusion, our method was able not only to reconstruct the interior region with improved image quality, but also to extend the reconstructed FOV by 20%.
Affiliation(s)
- Juuso H J Ketola
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, FI-90014, Finland
- The South Savo Social and Health Care Authority, Mikkeli Central Hospital, FI-50100, Finland
- Helinä Heino
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, FI-90014, Finland
- Mikael A K Juntunen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, FI-90014, Finland
- Department of Diagnostic Radiology, Oulu University Hospital, FI-90029, Finland
- Miika T Nieminen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, FI-90014, Finland
- Department of Diagnostic Radiology, Oulu University Hospital, FI-90029, Finland
- Medical Research Center Oulu, University of Oulu and Oulu University Hospital, FI-90014, Finland
- Samuli Siltanen
- Department of Mathematics and Statistics, University of Helsinki, Helsinki, FI-00014, Finland
- Satu I Inkinen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, FI-90014, Finland
124
Abstract
Learned image reconstruction techniques using deep neural networks have recently gained popularity and have delivered promising empirical results. However, most approaches focus on one single recovery for each observation and thus neglect uncertainty information. In this work, we develop a novel computational framework that approximates the posterior distribution of the unknown image at each query observation. The proposed framework is very flexible: it handles implicit noise models and priors, it incorporates the data formation process (i.e., the forward operator), and the learned reconstructive properties are transferable between different datasets. Once the network is trained using the conditional variational autoencoder loss, it provides a computationally efficient sampler for the approximate posterior distribution via feed-forward propagation, and the summarizing statistics of the generated samples are used for both point estimation and uncertainty quantification. We illustrate the proposed framework with extensive numerical experiments on positron emission tomography (with both moderate and low count levels), showing that the framework generates high-quality samples when compared with state-of-the-art methods.
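A stripped-down PyTorch sketch of the test-time usage described above: once a conditional VAE has been trained, posterior samples are drawn by pushing the observation and latent draws through the decoder, and their mean and standard deviation serve as the point estimate and uncertainty map; the tiny fully connected decoder and all sizes are placeholders, not the paper's architecture:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_obs, n_img, n_lat = 32, 64, 8            # observation, image and latent sizes (illustrative)

# Decoder of a conditional VAE: maps (observation y, latent z) -> image sample x.
decoder = nn.Sequential(nn.Linear(n_obs + n_lat, 128), nn.ReLU(), nn.Linear(128, n_img))

# (Training with the CVAE loss, i.e. a reconstruction term plus the KL divergence between
#  the encoder's q(z | x, y) and the latent prior, is assumed to have happened already.)

@torch.no_grad()
def sample_posterior(y, n_samples=200):
    z = torch.randn(n_samples, n_lat)                          # latent draws
    y_rep = y.expand(n_samples, -1)                            # condition on the observation
    samples = decoder(torch.cat([y_rep, z], dim=1))            # feed-forward sampling
    return samples.mean(0), samples.std(0)                     # point estimate, uncertainty map

y = torch.randn(1, n_obs)                                      # a query observation
x_hat, x_std = sample_posterior(y)
print(x_hat.shape, x_std.shape)
```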
Collapse
|
125
|
Wu W, Hu D, Niu C, Broeke LV, Butler APH, Cao P, Atlas J, Chernoglazov A, Vardhanabhuti V, Wang G. Deep learning based spectral CT imaging. Neural Netw 2021; 144:342-358. [PMID: 34560584 DOI: 10.1016/j.neunet.2021.08.026] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Revised: 07/14/2021] [Accepted: 08/20/2021] [Indexed: 10/20/2022]
Abstract
Spectral computed tomography (CT) has attracted much attention in radiation dose reduction, metal artifact removal, tissue quantification and material discrimination. The x-ray energy spectrum is divided into several bins, and each energy-bin-specific projection has a lower signal-to-noise ratio (SNR) than its current-integrating counterpart, which makes image reconstruction a unique challenge. Traditional wisdom is to use prior-knowledge-based iterative methods; however, these methods demand a high computational cost. Inspired by deep learning, here we first develop a deep learning based reconstruction method, i.e., U-net with Lp-norm, Total variation, Residual learning, and Anisotropic adaption (ULTRA). Specifically, we emphasize multi-scale feature fusion and multichannel filtering enhancement with a densely connected encoding architecture for residual learning and feature fusion. To address the image deblurring problem associated with the squared L2 loss, we propose a general Lp loss, p > 0. Furthermore, since the images from different energy bins share similar structures of the same object, a regularization term characterizing the correlations between energy bins is incorporated into the Lp loss function, which helps unify the deep learning based methods with traditional compressed sensing based methods. Finally, an anisotropically weighted total variation is employed to characterize the sparsity in the spatial-spectral domain to regularize the proposed network. In particular, we validate our ULTRA networks on three large-scale spectral CT datasets, and obtain excellent results relative to the competing algorithms. In conclusion, our quantitative and qualitative results in numerical simulations and preclinical experiments demonstrate that the proposed approach is accurate, efficient and robust for high-quality spectral CT image reconstruction.
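A minimal sketch of such a generalized Lp loss is shown below; the epsilon term is an assumption added for numerical stability of the gradient when p < 1, and the paper's full loss additionally includes the inter-bin regularization and anisotropic total variation terms.

import torch

def lp_loss(pred, target, p=1.5, eps=1e-8):
    # generalized L_p^p loss, p > 0; p = 2 recovers the usual squared error,
    # while smaller p penalizes large residuals less aggressively, which
    # counteracts the over-smoothing associated with the plain L2 loss
    return (pred - target).abs().add(eps).pow(p).mean()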
Collapse
Affiliation(s)
- Weiwen Wu
- Department of Diagnostic Radiology, Queen Mary Hospital, University of Hong Kong, Hong Kong, People's Republic of China; Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, School of Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
| | - Dianlin Hu
- The Laboratory of Image Science and Technology, Southeast University, Nanjing, People's Republic of China
| | - Chuang Niu
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, School of Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
| | - Lieza Vanden Broeke
- Department of Diagnostic Radiology, Queen Mary Hospital, University of Hong Kong, Hong Kong, People's Republic of China
| | | | - Peng Cao
- Department of Diagnostic Radiology, Queen Mary Hospital, University of Hong Kong, Hong Kong, People's Republic of China
| | - James Atlas
- Department of Radiology, University of Otago, Christchurch, New Zealand
| | | | - Varut Vardhanabhuti
- Department of Diagnostic Radiology, Queen Mary Hospital, University of Hong Kong, Hong Kong, People's Republic of China.
| | - Ge Wang
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, School of Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
| |
Collapse
|
126
|
Low-Dose CT Image Denoising with Improving WGAN and Hybrid Loss Function. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:2973108. [PMID: 34484414 PMCID: PMC8416402 DOI: 10.1155/2021/2973108] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Revised: 07/12/2021] [Accepted: 08/12/2021] [Indexed: 11/17/2022]
Abstract
The X-ray radiation from computed tomography (CT) carries a potential risk, yet simply decreasing the dose makes the CT images noisy and compromises diagnostic performance. Here, we develop a novel method for denoising low-dose CT images. Our framework is based on an improved generative adversarial network coupled with a hybrid loss function comprising an adversarial loss, a perceptual loss, a sharpness loss, and a structural similarity loss. Among these terms, the perceptual loss and structural similarity loss are used to preserve textural details, the sharpness loss makes the reconstructed images clearer, and the adversarial loss sharpens the boundary regions. Experimental results show that the proposed method removes noise and artifacts more effectively than state-of-the-art methods in terms of visual quality, quantitative measurements, and texture detail.
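To illustrate how such terms might be combined, here is a small PyTorch sketch of a weighted hybrid generator loss; the weights are illustrative assumptions, the sharpness term is implemented with finite-difference image gradients, and a plain MSE term stands in for the perceptual/structural-similarity components, which the paper computes with a pretrained feature network and SSIM.

import torch
import torch.nn.functional as F

def sharpness_loss(x, y):
    # penalize differences between finite-difference image gradients
    dx_x, dy_x = x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]
    dx_y, dy_y = y[..., :, 1:] - y[..., :, :-1], y[..., 1:, :] - y[..., :-1, :]
    return F.l1_loss(dx_x, dx_y) + F.l1_loss(dy_x, dy_y)

def hybrid_generator_loss(denoised, target, d_score,
                          w_adv=1e-3, w_sharp=0.1, w_fid=1.0):
    adv = -d_score.mean()                     # WGAN-style generator term
    sharp = sharpness_loss(denoised, target)  # edge preservation
    fid = F.mse_loss(denoised, target)        # stand-in for perceptual/SSIM terms
    return w_adv * adv + w_sharp * sharp + w_fid * fid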
Collapse
|
127
|
Ma G, Zhang Y, Zhao X, Wang T, Li H. A neural network with encoded visible edge prior for limited-angle computed tomography reconstruction. Med Phys 2021; 48:6464-6481. [PMID: 34482570 DOI: 10.1002/mp.15205] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2020] [Revised: 08/09/2021] [Accepted: 08/27/2021] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Limited-angle computed tomography is a challenging but important task in certain medical and industrial applications for nondestructive testing. The limited-angle reconstruction problem is highly ill-posed, and conventional reconstruction algorithms introduce heavy artifacts. Various models and methods have been proposed to improve the quality of reconstructions by introducing different priors regarding the projection data or the ideal images. However, the assumed priors might not be practically applicable to all limited-angle reconstruction problems. Convolutional neural networks (CNNs) exhibit great promise in the modeling of data coupling and have recently become an important technique in medical imaging applications. Although existing CNN methods have demonstrated promising results, their robustness is still a concern. In this paper, in light of the theory of visible and invisible boundaries, we propose an alternating edge-preserving diffusion and smoothing neural network (AEDSNN) for limited-angle reconstruction that builds the visible boundaries as priors into its structure. The proposed method generalizes the alternating edge-preserving diffusion and smoothing (AEDS) method for limited-angle reconstruction developed in the literature by replacing its regularization terms with CNNs, by which the piecewise constant assumption made by AEDS is effectively relaxed. METHODS The AEDSNN is derived by unrolling the AEDS algorithm. AEDSNN consists of several blocks, and each block corresponds to one iteration of the AEDS algorithm. In each iteration of the AEDS algorithm, three subproblems are solved sequentially, so each block of AEDSNN possesses three main layers: a data matching layer, an x-direction regularization layer for visible-edge diffusion, and a y-direction regularization layer for artifact suppression. The data matching layer is implemented with the conventional ordered-subset simultaneous algebraic reconstruction technique (OS-SART) algorithm, while the two regularization layers are modeled by CNNs for more intelligent and better encoding of priors regarding the reconstructed images. To further strengthen the visible edge prior, attention mechanisms and pooling layers are incorporated into AEDSNN to facilitate edge-preserving diffusion from the visible edges. RESULTS We have evaluated the performance of AEDSNN by comparing it with popular algorithms for limited-angle reconstruction. Experiments on the medical dataset show that the proposed AEDSNN effectively breaks through the piecewise constant assumption usually made by conventional reconstruction algorithms, and works much better for piecewise smooth images with nonsharp edges. Experiments on the printed circuit board (PCB) dataset show that AEDSNN can better encode and utilize the visible edge prior, and its reconstructions are consistently better than those of the competing algorithms. CONCLUSIONS A deep-learning approach for limited-angle reconstruction is proposed in this paper, which significantly outperforms existing methods. The superiority of AEDSNN consists of three aspects. First, by virtue of CNNs, AEDSNN is free of parameter tuning, a great advantage over conventional reconstruction methods. Second, AEDSNN is quite fast: conventional reconstruction methods usually need hundreds or even thousands of iterations, while AEDSNN needs just three to five iterations (i.e., blocks). Third, the regularizer learned by AEDSNN has broader applicability; it works well for piecewise smooth images and goes beyond the piecewise constant assumption frequently made for computed tomography images.
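A much-simplified sketch of one such unrolled block is given below, assuming user-supplied forward/back-projection callables A and At; a plain gradient step replaces the OS-SART data matching layer, and small residual CNNs stand in for the learned regularization layers (attention and pooling omitted).

import torch
import torch.nn as nn

class RegLayer(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))
    def forward(self, x):
        return x + self.net(x)   # residual refinement

class UnrolledBlock(nn.Module):
    def __init__(self, step=0.1):
        super().__init__()
        self.step = step
        self.reg_x = RegLayer()   # visible-edge diffusion along x
        self.reg_y = RegLayer()   # artifact suppression along y
    def forward(self, x, A, At, sino):
        # A: image -> sinogram, At: sinogram -> image (user-supplied callables)
        x = x - self.step * At(A(x) - sino)   # data-matching (gradient) step
        x = self.reg_x(x)
        return self.reg_y(x)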
Collapse
Affiliation(s)
- Genwei Ma
- School of Mathematical Sciences, Capital Normal University, Beijing, China.,Beijing Advanced Innovation Center for Imaging Technology, Capital Normal University, Beijing, China
| | - Yinghui Zhang
- School of Mathematical Sciences, Capital Normal University, Beijing, China.,Beijing Advanced Innovation Center for Imaging Technology, Capital Normal University, Beijing, China
| | - Xing Zhao
- School of Mathematical Sciences, Capital Normal University, Beijing, China.,Beijing Advanced Innovation Center for Imaging Technology, Capital Normal University, Beijing, China
| | - Tong Wang
- School of Mathematical Sciences, Capital Normal University, Beijing, China.,Beijing Advanced Innovation Center for Imaging Technology, Capital Normal University, Beijing, China
| | - Hongwei Li
- School of Mathematical Sciences, Capital Normal University, Beijing, China.,Beijing Advanced Innovation Center for Imaging Technology, Capital Normal University, Beijing, China
| |
Collapse
|
128
|
Juntunen MAK, Kotiaho AO, Nieminen MT, Inkinen SI. Optimizing iterative reconstruction for quantification of calcium hydroxyapatite with photon counting flat-detector computed tomography: a cardiac phantom study. J Med Imaging (Bellingham) 2021; 8:052102. [PMID: 33718518 PMCID: PMC7946398 DOI: 10.1117/1.jmi.8.5.052102] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2020] [Accepted: 01/28/2021] [Indexed: 11/28/2022] Open
Abstract
Purpose: Coronary artery calcium (CAC) scoring with computed tomography (CT) has been proposed as a screening tool for coronary artery disease, but concerns remain regarding the radiation dose of CT CAC scoring. Photon counting detectors and iterative reconstruction (IR) are promising approaches for patient dose reduction, yet the preservation of CAC scores with IR has been questioned. The purpose of this study was to investigate the applicability of IR for quantification of CAC using a photon counting flat-detector. Approach: We imaged a cardiac rod phantom with calcium hydroxyapatite (CaHA) inserts at different noise levels using an experimental photon counting flat-detector CT setup to simulate the clinical CAC scoring protocol. We applied filtered back projection (FBP) and two IR algorithms with different regularization strengths. We compared the air kerma values, image quality parameters [noise magnitude, noise power spectrum, modulation transfer function (MTF), and contrast-to-noise ratio], and CaHA quantification accuracy between FBP and IR. Results: IR regularization strength influenced CAC scores significantly (p < 0.05). The CAC volumes and scores between FBP and IR were most similar when the IR regularization strength was chosen to match the MTF of the FBP reconstruction. Conclusion: When the regularization strength is selected to produce spatial resolution comparable to FBP, IR can yield CAC scores and volumes comparable to FBP. Nonetheless, at the lowest radiation dose setting, FBP produced more accurate CAC volumes and scores than IR, and no improvement in CAC scoring accuracy at low dose was demonstrated with the IR methods used.
Collapse
Affiliation(s)
- Mikael A. K. Juntunen
- University of Oulu, Research Unit of Medical Imaging, Physics, and Technology, Oulu, Finland
- Oulu University Hospital, Department of Diagnostic Radiology, Oulu, Finland
| | - Antti O. Kotiaho
- Oulu University Hospital, Department of Diagnostic Radiology, Oulu, Finland
| | - Miika T. Nieminen
- University of Oulu, Research Unit of Medical Imaging, Physics, and Technology, Oulu, Finland
- Oulu University Hospital, Department of Diagnostic Radiology, Oulu, Finland
- Medical Research Center, University of Oulu, Oulu University Hospital, Oulu, Finland
| | - Satu I. Inkinen
- University of Oulu, Research Unit of Medical Imaging, Physics, and Technology, Oulu, Finland
| |
Collapse
|
129
|
Zhang C, Li Y, Chen GH. Accurate and robust sparse-view angle CT image reconstruction using deep learning and prior image constrained compressed sensing (DL-PICCS). Med Phys 2021; 48:5765-5781. [PMID: 34458996 DOI: 10.1002/mp.15183] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 07/09/2021] [Accepted: 08/02/2021] [Indexed: 12/27/2022] Open
Abstract
BACKGROUND Sparse-view CT image reconstruction problems encountered in dynamic CT acquisitions are technically challenging. Recently, many deep learning strategies have been proposed to reconstruct CT images from sparse-view angle acquisitions with promising results. However, two fundamental problems with these deep learning reconstruction methods remain to be addressed: (1) limited reconstruction accuracy for individual patients and (2) limited generalizability across patient statistical cohorts. PURPOSE The purpose of this work is to address these challenges in current deep learning methods. METHODS A method that combines a deep learning strategy with prior image constrained compressed sensing (PICCS) was developed to address these two problems. In this method, the sparse-view CT data are first reconstructed by the conventional filtered backprojection (FBP) method and then processed by the trained deep neural network to eliminate streaking artifacts. The output of the deep learning architecture is then used as the needed prior image in PICCS to reconstruct the image. If the noise level of the PICCS reconstruction is not satisfactory, another light-duty deep neural network can then be used to reduce the noise level. Both extensive numerical simulation data and human subject data have been used to quantitatively and qualitatively assess the performance of the proposed DL-PICCS method in terms of reconstruction accuracy and generalizability. RESULTS Extensive evaluation studies have demonstrated that: (1) the quantitative reconstruction accuracy of DL-PICCS for individual patients is improved compared with deep learning methods and CS-based methods; (2) the false-positive lesion-like structures and false-negative missing anatomical structures seen in the deep learning approaches can be effectively eliminated in the DL-PICCS reconstructed images; and (3) DL-PICCS enables a deep learning scheme to relax its working conditions and thereby enhance its generalizability. CONCLUSIONS DL-PICCS offers a promising opportunity to achieve personalized reconstruction with improved reconstruction accuracy and enhanced generalizability.
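For reference, the standard PICCS formulation commonly given in the literature reads as follows, with the deep-learning output serving as the prior image in the DL-PICCS scheme; the exact constraint form and parameter choices of the paper may differ.

\min_{x}\; \alpha\,\|\Psi(x - x_{\mathrm{prior}})\|_{1} + (1-\alpha)\,\|\Psi x\|_{1}
\quad \text{subject to} \quad \|A x - y\|_{2}^{2} \le \varepsilon,

where A is the system matrix, y the sparse-view projection data, \Psi a sparsifying transform (typically the spatial gradient), x_{\mathrm{prior}} the prior image, and \alpha \in [0, 1] weights the prior-image term against the conventional sparsity term.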
Collapse
Affiliation(s)
- Chengzhu Zhang
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Yinsheng Li
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Guang-Hong Chen
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA.,Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| |
Collapse
|
130
|
Zhang Z, Liang X, Zhao W, Xing L. Noise2Context: Context-assisted learning 3D thin-layer for low-dose CT. Med Phys 2021; 48:5794-5803. [PMID: 34287948 DOI: 10.1002/mp.15119] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Revised: 05/31/2021] [Accepted: 07/08/2021] [Indexed: 12/26/2022] Open
Abstract
PURPOSE Computed tomography (CT) has played a vital role in medical diagnosis, assessment, and therapy planning. In clinical practice, concerns about increased x-ray radiation exposure are attracting more and more attention. To lower the x-ray radiation, low-dose CT (LDCT) has been widely adopted in certain scenarios, although it degrades CT image quality. In this paper, we propose a deep learning-based method that can train denoising neural networks without any clean data. METHODS For 3D thin-slice LDCT scanning, we first derive an unsupervised loss function that is equivalent to a supervised loss function with paired noisy and clean samples when the noise in different slices from a single scan is uncorrelated and zero-mean. We then train the denoising neural network to simultaneously map one noisy LDCT slice to its two adjacent slices in a single 3D thin-layer LDCT scan. In essence, under these assumptions, the proposed unsupervised loss function trains the denoising neural network by exploiting the similarity between adjacent CT slices in 3D thin-layer LDCT. RESULTS Experiments were carried out on the Mayo LDCT dataset and a realistic pig head. On the Mayo LDCT dataset, our unsupervised method obtains performance comparable to that of the supervised baseline. On the realistic pig head, our method achieves the best performance at different noise levels among all compared methods, demonstrating the superiority and robustness of the proposed Noise2Context. CONCLUSIONS In this work, we present a generalizable LDCT image denoising method that requires no clean data. As a result, our method not only avoids complex hand-crafted image priors but also removes the need for large amounts of paired high-quality training data.
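A minimal sketch of the adjacent-slice training objective described above is given below: the network f maps the k-th noisy slice toward its two neighbours, which behaves like a supervised loss when the per-slice noise is zero-mean and uncorrelated across slices (the network definition, data loading, and the paper's exact weighting are omitted).

import torch
import torch.nn.functional as F

def noise2context_loss(f, volume):
    # volume: tensor of shape (num_slices, 1, H, W), a stack of noisy thin-layer slices
    center = volume[1:-1]                  # slices k = 1 .. N-2
    prev, nxt = volume[:-2], volume[2:]    # slices k-1 and k+1
    pred = f(center)                       # denoiser applied to the central slices
    return F.mse_loss(pred, prev) + F.mse_loss(pred, nxt)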
Collapse
Affiliation(s)
- Zhicheng Zhang
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
| | - Xiaokun Liang
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
| | - Wei Zhao
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
| | - Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
| |
Collapse
|
131
|
Liu J, Kang Y, Qiang J, Wang Y, Hu D, Chen Y. Low-dose CT imaging via cascaded ResUnet with spectrum loss. Methods 2021; 202:78-87. [PMID: 33992773 DOI: 10.1016/j.ymeth.2021.05.005] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2020] [Revised: 04/07/2021] [Accepted: 05/10/2021] [Indexed: 11/29/2022] Open
Abstract
The suppression of artifact noise in computed tomography (CT) with a low-dose scan protocol is challenging. Conventional statistical iterative algorithms can improve reconstruction but cannot substantially eliminate large streaks and strong noise. In this paper, we present a 3D cascaded ResUnet neural network (Ca-ResUnet) strategy with a modified noise power spectrum loss for reducing artifact noise in low-dose CT imaging. The imaging workflow consists of four components. The first is filtered backprojection (FBP) reconstruction via a domain transformation module for suppressing artifact noise. The second is a ResUnet neural network that operates on the CT image. The third is an image compensation module that compensates for the loss of tiny structures, and the last is a second ResUnet neural network with the modified spectrum loss for fine-tuning the reconstructed image. Verification results based on the American Association of Physicists in Medicine (AAPM) and United Imaging Healthcare (UIH) datasets confirm that the proposed strategy significantly reduces serious artifact noise while retaining the desired structures.
Collapse
Affiliation(s)
- Jin Liu
- College of Computer and Information, Anhui Polytechnic University, Wuhu, China; Key Laboratory of Computer Network and Information Integration (Southeast University) Ministry of Education Nanjing, China.
| | - Yanqin Kang
- College of Computer and Information, Anhui Polytechnic University, Wuhu, China; Key Laboratory of Computer Network and Information Integration (Southeast University) Ministry of Education Nanjing, China
| | - Jun Qiang
- College of Computer and Information, Anhui Polytechnic University, Wuhu, China
| | - Yong Wang
- College of Computer and Information, Anhui Polytechnic University, Wuhu, China
| | - Dianlin Hu
- Key Laboratory of Computer Network and Information Integration (Southeast University) Ministry of Education Nanjing, China; School of Cyber Science and Engineering, Southeast University, Nanjing, China; School of Computer Science and Engineering, Southeast University, Nanjing, China
| | - Yang Chen
- Key Laboratory of Computer Network and Information Integration (Southeast University) Ministry of Education Nanjing, China; School of Cyber Science and Engineering, Southeast University, Nanjing, China; School of Computer Science and Engineering, Southeast University, Nanjing, China
| |
Collapse
|
132
|
Xiang J, Dong Y, Yang Y. FISTA-Net: Learning a Fast Iterative Shrinkage Thresholding Network for Inverse Problems in Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1329-1339. [PMID: 33493113 DOI: 10.1109/tmi.2021.3054167] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Inverse problems are essential to imaging applications. In this letter, we propose a model-based deep learning network, named FISTA-Net, which combines the interpretability and generality of the model-based Fast Iterative Shrinkage/Thresholding Algorithm (FISTA) with the strong regularization and tuning-free advantages of data-driven neural networks. By unfolding FISTA into a deep network, the architecture of FISTA-Net consists of multiple gradient descent, proximal mapping, and momentum modules in cascade. Different from FISTA, the gradient matrix in FISTA-Net can be updated during iteration, and a proximal operator network is developed for nonlinear thresholding, which can be learned through end-to-end training. Key parameters of FISTA-Net, including the gradient step size, thresholding value and momentum scalar, are tuning-free and learned from training data rather than hand-crafted. We further impose positivity and monotonicity constraints on these parameters to ensure they converge properly. The experimental results, evaluated both visually and quantitatively, show that FISTA-Net can optimize its parameters for different imaging tasks, i.e. Electromagnetic Tomography (EMT) and X-ray Computed Tomography (X-ray CT). It outperforms the state-of-the-art model-based and deep learning methods and exhibits good generalization ability over other competitive learning-based approaches under different noise levels.
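For context, the classical FISTA iteration that FISTA-Net unrolls can be sketched as follows for a generic LASSO problem min_x 0.5*||Ax - b||^2 + lam*||x||_1; in FISTA-Net the step size, threshold, and momentum below become learned, layer-wise parameters and the soft-thresholding is replaced by a proximal network (this is a generic textbook sketch, not the paper's code).

import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=100):
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)               # gradient descent module
        x_new = soft_threshold(y - grad / L, lam / L)   # proximal (shrinkage) module
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)     # momentum module
        x, t = x_new, t_new
    return x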
Collapse
|
133
|
Shao W, Rowe SP, Du Y. SPECTnet: a deep learning neural network for SPECT image reconstruction. ANNALS OF TRANSLATIONAL MEDICINE 2021; 9:819. [PMID: 34268432 PMCID: PMC8246183 DOI: 10.21037/atm-20-3345] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Accepted: 07/30/2020] [Indexed: 12/22/2022]
Abstract
Background Single photon emission computed tomography (SPECT) is an important functional tool for clinical diagnosis and scientific research of brain disorders, but it suffers from limited spatial resolution and high noise due to hardware design and imaging physics. The present study aims to develop a deep learning technique for SPECT image reconstruction that directly converts raw projection data to images with high resolution and low noise, while presenting an efficient training method specifically applicable to medical image reconstruction. Methods Custom software was developed to generate 20,000 2D brain phantoms, of which 16,000 were used to train the neural network, 2,000 for validation, and the final 2,000 for testing. To reduce development difficulty, a two-step training strategy for network design was adopted. We first compressed the full-size activity image (128×128 pixels) to a 1D vector of 256×1 pixels, accomplished by an autoencoder (AE) consisting of an encoder and a decoder. The vector is a good representation of the full-size image in a lower-dimensional space and was used as a compact label to develop the second network, which maps between the projection-data domain and the vector domain. Since the label had only 256 pixels, the second network was compact and easy to converge. The second network, once successfully developed, was connected to the decoder (a portion of the AE) to decompress the vector to a regular 128×128 image. Therefore, a complex network was essentially divided into two compact neural networks trained separately in sequence but eventually connectable. Results A total of 2,000 test examples, a synthetic brain phantom, and de-identified patient data were used to validate SPECTnet. Results obtained from SPECTnet were compared with those obtained from our clinical OS-EM method. Images with lower noise and more accurate information in the uptake areas were obtained by SPECTnet. Conclusions The challenge of developing a complex deep neural network is reduced by training two separate compact connectable networks. The combination of the two networks forms the full version of SPECTnet. Results show that the developed neural network can produce more accurate SPECT images.
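The two-step strategy can be illustrated schematically as below: an autoencoder first learns a 256-element code for 128×128 activity images, a second network then learns to predict that code from projection data, and the trained decoder turns the predicted code back into an image. The layer sizes and the 120-view projection shape are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(128 * 128, 256))
decoder = nn.Sequential(nn.Linear(256, 128 * 128), nn.Unflatten(1, (128, 128)))
proj_to_code = nn.Sequential(nn.Flatten(), nn.Linear(128 * 120, 256))  # assumes 120 views

image = torch.randn(1, 128, 128)
code = encoder(image)                  # step 1: train encoder/decoder as an AE
recon = decoder(code)

projections = torch.randn(1, 128, 120)
pred_code = proj_to_code(projections)  # step 2: train against the AE code as label
spect_image = decoder(pred_code)       # full network = proj_to_code + decoder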
Collapse
Affiliation(s)
- Wenyi Shao
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
| | - Steven P Rowe
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
| | - Yong Du
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
| |
Collapse
|
134
|
Xue Y, Qin W, Luo C, Yang P, Jiang Y, Tsui T, He H, Wang L, Qin J, Xie Y, Niu T. Multi-Material Decomposition for Single Energy CT Using Material Sparsity Constraint. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1303-1318. [PMID: 33460369 DOI: 10.1109/tmi.2021.3051416] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Multi-material decomposition (MMD) decomposes CT images into basis material images and is a promising technique in clinical diagnostic CT for identifying material compositions within the human body. MMD can be implemented on measurements obtained with a spectral CT protocol, although spectral CT data acquisition is not readily available in most clinical environments. MMD methods using single energy CT (SECT), broadly applied in the radiological departments of most hospitals, have been proposed in the literature but are challenged by inferior decomposition accuracy and a limited number of material bases due to the constrained material information in the SECT measurement. In this paper, we propose an image-domain SECT MMD method using material sparsity as an assistance, under the condition that each voxel of the CT image contains at most two different elemental materials. An L0 norm represents the material sparsity constraint (MSC) and is integrated into the decomposition objective function together with a least-squares data fidelity term, a total variation term, and a sum-to-one constraint on the material volume fractions. An accelerated primal-dual (APD) algorithm with a line-search scheme is applied to solve the problem. The pixelwise direct inversion method with the two-material assumption (TMA) is applied to estimate the initial values. We validate the proposed method on phantom and patient data. Compared with the TMA method, the proposed MSC method increases the volume fraction accuracy (VFA) from 92.0% to 98.5% in the phantom study. In the patient study, the calcification area can be clearly visualized in the virtual non-contrast image generated by the proposed method and has a shape similar to that in the ground-truth contrast-free CT image. The high decomposition image quality from the proposed method substantially facilitates SECT-based MMD clinical applications.
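Schematically, and consistent with the description above, the decomposition objective can be written as follows; the exact weighting and constraint handling in the paper may differ.

\min_{\{x_m\}}\; \tfrac{1}{2}\Big\|\sum_{m} \mu_m\, x_m - I\Big\|_{2}^{2}
+ \lambda \sum_{m} \mathrm{TV}(x_m)
\quad \text{s.t.} \quad \sum_{m} x_m(j) = 1,\;\; x_m(j) \ge 0,\;\;
\big\|[x_1(j),\dots,x_M(j)]\big\|_{0} \le 2 \;\; \forall j,

where x_m(j) is the volume fraction of material m at pixel j, \mu_m the corresponding material CT value, I the measured SECT image, TV the total variation term, and the L0 constraint enforces at most two materials per pixel.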
Collapse
|
135
|
Weakly-supervised progressive denoising with unpaired CT images. Med Image Anal 2021; 71:102065. [PMID: 33915472 DOI: 10.1016/j.media.2021.102065] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 03/16/2021] [Accepted: 03/30/2021] [Indexed: 12/12/2022]
Abstract
Although low-dose CT imaging has attracted great interest due to its reduced radiation risk to patients, it suffers from severe and complex noise. Recent fully-supervised methods have shown impressive performance on the CT denoising task. However, they require a huge amount of paired normal-dose and low-dose CT images, which is generally unavailable in real clinical practice. To address this problem, we propose a weakly-supervised denoising framework that generates paired original and noisier CT images from unpaired CT images using a physics-based noise model. Our denoising framework also includes a progressive denoising module that bypasses the challenge of mapping directly from low-dose to normal-dose CT images by progressively compensating for the small noise gap. To quantitatively evaluate diagnostic image quality, we report the noise power spectrum and signal detection accuracy, which correlate well with visual inspection. The experimental results demonstrate that our method achieves remarkable performance, even superior to fully-supervised CT denoising with respect to signal detectability. Moreover, our framework increases flexibility in data collection, allowing us to utilize any unpaired data at any dose level.
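The "noisier from noisy" pairing can be prototyped with a simple Poisson model in the projection domain; the sketch below injects additional photon noise into line-integral (sinogram) data at an assumed incident photon count I0, which is an illustrative surrogate for the paper's physics-based noise model.

import numpy as np

def make_noisier(sino, I0=1e4):
    # sino: 2D array of line integrals; a lower I0 simulates a lower-dose scan
    expected_counts = I0 * np.exp(-sino)          # Beer-Lambert forward model
    noisy_counts = np.random.poisson(expected_counts)
    noisy_counts = np.maximum(noisy_counts, 1)    # avoid log(0)
    return -np.log(noisy_counts / I0)             # back to line integrals

noisier_sino = make_noisier(np.abs(np.random.rand(180, 256)))  # toy example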
Collapse
|
136
|
Willemink MJ, Varga-Szemes A, Schoepf UJ, Codari M, Nieman K, Fleischmann D, Mastrodicasa D. Emerging methods for the characterization of ischemic heart disease: ultrafast Doppler angiography, micro-CT, photon-counting CT, novel MRI and PET techniques, and artificial intelligence. Eur Radiol Exp 2021; 5:12. [PMID: 33763754 PMCID: PMC7991013 DOI: 10.1186/s41747-021-00207-3] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2019] [Accepted: 01/22/2021] [Indexed: 12/24/2022] Open
Abstract
After an ischemic event, disruptive changes in the healthy myocardium may gradually develop and may ultimately turn into fibrotic scar. While these structural changes have been described by conventional imaging modalities mostly on a macroscopic scale (i.e., late gadolinium enhancement at magnetic resonance imaging (MRI)), in recent years novel imaging methods have shown the potential to unveil an even more detailed picture of the postischemic myocardial phenomena. These new methods may bring advances in the understanding of ischemic heart disease with potential major changes in the current clinical practice. In this review article, we provide an overview of the emerging methods for the non-invasive characterization of ischemic heart disease, including coronary ultrafast Doppler angiography, photon-counting computed tomography (CT), micro-CT (for preclinical studies), low-field and ultrahigh-field MRI, and 11C-methionine positron emission tomography. In addition, we discuss new opportunities brought by artificial intelligence, while addressing promising future scenarios and the challenges for the application of artificial intelligence in the field of cardiac imaging.
Collapse
Affiliation(s)
- Martin J Willemink
- Department of Radiology, Stanford University School of Medicine, 300 Pasteur Drive, Stanford, CA, 94035, USA
| | - Akos Varga-Szemes
- Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
| | - U Joseph Schoepf
- Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
| | - Marina Codari
- Department of Radiology, Stanford University School of Medicine, 300 Pasteur Drive, Stanford, CA, 94035, USA
| | - Koen Nieman
- Division of Cardiovascular Medicine, Stanford University School of Medicine, Stanford, CA, USA
- Stanford Cardiovascular Institute, Stanford, CA, 94305, USA
| | - Dominik Fleischmann
- Department of Radiology, Stanford University School of Medicine, 300 Pasteur Drive, Stanford, CA, 94035, USA
- Stanford Cardiovascular Institute, Stanford, CA, 94305, USA
| | - Domenico Mastrodicasa
- Department of Radiology, Stanford University School of Medicine, 300 Pasteur Drive, Stanford, CA, 94035, USA.
- Stanford Cardiovascular Institute, Stanford, CA, 94305, USA.
| |
Collapse
|
137
|
Zhang Z, Yu L, Zhao W, Xing L. Modularized data-driven reconstruction framework for nonideal focal spot effect elimination in computed tomography. Med Phys 2021; 48:2245-2257. [PMID: 33595900 DOI: 10.1002/mp.14785] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2020] [Revised: 01/17/2021] [Accepted: 02/12/2021] [Indexed: 01/05/2023] Open
Abstract
PURPOSE High-performance computed tomography (CT) plays a vital role in clinical decision-making. However, the performance of CT imaging is adversely affected by a nonideal focal spot size of the x-ray source, or degraded by an enlarged focal spot size due to aging. In this work, we aim to develop a deep learning-based strategy to mitigate this problem so that high spatial resolution CT images can be obtained even in the case of a nonideal x-ray source. METHODS To reconstruct high-quality CT images from blurred sinograms via joint image and sinogram learning, a cross-domain hybrid model is formulated via deep learning into a modularized data-driven reconstruction (MDR) framework. The proposed MDR framework comprises several blocks, and all the blocks share the same network architecture and network parameters. In essence, each block utilizes two sub-models to generate an estimated blur kernel and a high-quality CT image simultaneously. In this way, our framework generates not only a final high-quality CT image but also a series of intermediate images with gradually improved anatomical details, enhancing the visual perception for clinicians through the dynamic process. We used simulated training datasets to train our model in an end-to-end manner and tested our model on both simulated and realistic experimental datasets. RESULTS On the simulated testing datasets, our approach increases the information fidelity criterion (IFC) by up to 34.2%, the universal quality index (UQI) by up to 20.3%, and the signal-to-noise ratio (SNR) by up to 6.7%, and reduces the root mean square error (RMSE) by up to 10.5% as compared with FBP. Compared with the iterative deconvolution method (NSM), MDR increases IFC by up to 24.7%, UQI by up to 16.7%, and SNR by up to 6.0%, and reduces RMSE by up to 9.4%. In the modulation transfer function (MTF) experiment, our method improves the MTF50% by 34.5% and the MTF10% by 18.7% as compared with FBP. Similarly, our method improves MTF50% by 14.3% and MTF10% by 0.9% as compared with NSM. Our method also shows better imaging results at the edges of bony structures and other tiny structures in experiments using a phantom consisting of a ham and a bottle of peanuts. CONCLUSIONS A modularized data-driven CT reconstruction framework is established to mitigate the blurring effect caused by a nonideal x-ray source with a relatively large focal spot. The proposed method enables us to obtain high-resolution images with a less ideal x-ray source.
Collapse
Affiliation(s)
- Zhicheng Zhang
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
| | - Lequan Yu
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
| | - Wei Zhao
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
| | - Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
| |
Collapse
|
138
|
Chang S, Chen X, Duan J, Mou X. A CNN-Based Hybrid Ring Artifact Reduction Algorithm for CT Images. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.2983391] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
139
|
Considering anatomical prior information for low-dose CT image enhancement using attribute-augmented Wasserstein generative adversarial networks. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.10.077] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
140
|
Bazrafkan S, Van Nieuwenhove V, Soons J, De Beenhouwer J, Sijbers J. To recurse or not to recurse: a low-dose CT study. PROGRESS IN ARTIFICIAL INTELLIGENCE 2021. [DOI: 10.1007/s13748-020-00224-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
141
|
Hu D, Liu J, Lv T, Zhao Q, Zhang Y, Quan G, Feng J, Chen Y, Luo L. Hybrid-Domain Neural Network Processing for Sparse-View CT Reconstruction. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.3011413] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
142
|
Ran M, Xia W, Huang Y, Lu Z, Bao P, Liu Y, Sun H, Zhou J, Zhang Y. MD-Recon-Net: A Parallel Dual-Domain Convolutional Neural Network for Compressed Sensing MRI. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.2991877] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
143
|
Drukker K, Yan P, Sibley A, Wang G. Biomedical imaging and analysis through deep learning. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00004-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|
144
|
Sheng W, Zhao X, Li M. A sequential regularization based image reconstruction method for limited-angle spectral CT. Phys Med Biol 2020; 65:235038. [PMID: 32464621 DOI: 10.1088/1361-6560/ab9771] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
Abstract
In spectral computed tomography (CT), the object is scanned under several different x-ray spectra. The multiple projection datasets can be used collectively to reconstruct basis images and virtual monochromatic images, which have been used in material decomposition, beam-hardening correction, bone removal, and so on. In practice, projection data may be acquired over a limited scanning angular range. Images reconstructed from limited-angle data by conventional spectral CT reconstruction methods are deteriorated by limited-angle artifacts and basis-image decomposition errors. Motivated by observations of limited-angle spectral CT, we propose a sequential regularization-based limited-angle spectral CT reconstruction model and its numerical solver. Both simulated and real data experiments validate that our method is capable of suppressing artifacts, preserving edges and reducing decomposition errors.
Collapse
Affiliation(s)
- Wenjuan Sheng
- School of Mathematical Sciences, Capital Normal University, Beijing 100048, People's Republic of China. Beijing Advanced Innovation Center for Imaging Technology, Capital Normal University, Beijing 100048, People's Republic of China
| | | | | |
Collapse
|
145
|
Zheng A, Gao H, Zhang L, Xing Y. A dual-domain deep learning-based reconstruction method for fully 3D sparse data helical CT. Phys Med Biol 2020; 65:245030. [PMID: 32365345 DOI: 10.1088/1361-6560/ab8fc1] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
Helical CT has been widely used in clinical diagnosis. In this work, we focus on a new prototype of helical CT equipped with a sparsely spaced multidetector and a multi-slit collimator (MSC) in the axial direction. This type of system not only lowers the radiation dose and suppresses scattering via the MSC, but also cuts down the manufacturing cost of the detector. The major problem to overcome with such a system, however, is that of insufficient data for reconstruction. Hence, we propose a deep learning-based function optimization method for this ill-posed inverse problem. By incorporating a Radon inverse operator and disentangling each slice, we significantly simplify the complexity of our network for 3D reconstruction. The network is composed of three subnetworks. Firstly, a convolutional neural network (CNN) in the projection domain is constructed to estimate missing projection data and to convert helical projection data to 2D fan-beam projection data. This is followed by the deployment of an analytical linear operator to transfer the data from the projection domain to the image domain. Finally, an additional CNN in the image domain is added for further image refinement. These three steps work collectively and can be trained end to end. The overall network is trained on a simulated CT dataset based on eight patients from the American Association of Physicists in Medicine (AAPM) Low Dose CT Grand Challenge. We evaluate the trained network on both simulated and clinical datasets. Extensive experimental studies have yielded very encouraging results, based on both visual examination and quantitative evaluation. These results demonstrate the effectiveness of our method and its potential for clinical usage. The proposed method provides a new solution for a fully 3D ill-posed problem.
Collapse
Affiliation(s)
- Ao Zheng
- Department of Engineering Physics, Tsinghua University, Beijing 100084, People's Republic of China. Key Laboratory of Particle & Radiation Imaging, Tsinghua University, Ministry of Education, Beijing 100084, People's Republic of China
| | | | | | | |
Collapse
|
146
|
|
147
|
Xie N, Gong K, Guo N, Qin Z, Wu Z, Liu H, Li Q. Penalized-likelihood PET Image Reconstruction Using 3D Structural Convolutional Sparse Coding. IEEE Trans Biomed Eng 2020; 69:4-14. [PMID: 33284746 DOI: 10.1109/tbme.2020.3042907] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Positron emission tomography (PET) is widely used for clinical diagnosis. As PET suffers from low resolution and high noise, numerous efforts have been made to incorporate anatomical priors into PET image reconstruction, especially with the development of hybrid PET/CT and PET/MRI systems. In this work, we propose a cube-based 3D structural convolutional sparse coding (CSC) concept for penalized-likelihood PET image reconstruction, named 3D PET-CSC. The proposed 3D PET-CSC takes advantage of the convolutional operation and manages to incorporate anatomical priors without the need for registration or supervised training. As 3D PET-CSC codes the whole 3D PET image, instead of patches, it alleviates the staircase artifacts commonly present in traditional patch-based sparse coding methods. Compared with traditional coding methods in the Fourier domain, the proposed method extends 3D CSC to a straightforward approach based on the pursuit of localized cubes. Moreover, we develop residual-image and ordered-subset mechanisms to further reduce the computational cost and accelerate the convergence of the proposed 3D PET-CSC method. Experiments based on computer simulations and clinical datasets demonstrate the superiority of 3D PET-CSC over other reference methods.
Collapse
|
148
|
Knoll F, Murrell T, Sriram A, Yakubova N, Zbontar J, Rabbat M, Defazio A, Muckley MJ, Sodickson DK, Zitnick CL, Recht MP. Advancing machine learning for MR image reconstruction with an open competition: Overview of the 2019 fastMRI challenge. Magn Reson Med 2020; 84:3054-3070. [PMID: 32506658 PMCID: PMC7719611 DOI: 10.1002/mrm.28338] [Citation(s) in RCA: 99] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2020] [Revised: 04/28/2020] [Accepted: 04/30/2020] [Indexed: 12/22/2022]
Abstract
PURPOSE To advance research in the field of machine learning for MR image reconstruction with an open challenge. METHODS We provided participants with a dataset of raw k-space data from 1,594 consecutive clinical exams of the knee. The goal of the challenge was to reconstruct images from these data. In order to strike a balance between realistic data and a shallow learning curve for those not already familiar with MR image reconstruction, we ran multiple tracks for multi-coil and single-coil data. We performed a two-stage evaluation based on quantitative image metrics followed by evaluation by a panel of radiologists. The challenge ran from June to December of 2019. RESULTS We received a total of 33 challenge submissions. All participants chose to submit results from supervised machine learning approaches. CONCLUSIONS The challenge led to new developments in machine learning for image reconstruction, provided insight into the current state of the art in the field, and highlighted remaining hurdles for clinical adoption.
Collapse
Affiliation(s)
- Florian Knoll
- Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, New York University Grossman School of Medicine, New York, NY, 10016 United States
| | - Tullie Murrell
- Facebook AI Research, Menlo Park, CA, 94025 United States
| | - Anuroop Sriram
- Facebook AI Research, Menlo Park, CA, 94025 United States
| | | | - Jure Zbontar
- Facebook AI Research, Menlo Park, CA, 94025 United States
| | - Michael Rabbat
- Facebook AI Research, Menlo Park, CA, 94025 United States
| | - Aaron Defazio
- Facebook AI Research, Menlo Park, CA, 94025 United States
| | - Matthew J. Muckley
- Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, New York University Grossman School of Medicine, New York, NY, 10016 United States
| | - Daniel K. Sodickson
- Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, New York University Grossman School of Medicine, New York, NY, 10016 United States
| | | | - Michael P. Recht
- Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, New York University Grossman School of Medicine, New York, NY, 10016 United States
| |
Collapse
|
149
|
Zhao C, Martin T, Shao X, Alger JR, Duddalwar V, Wang DJJ. Low Dose CT Perfusion With K-Space Weighted Image Average (KWIA). IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3879-3890. [PMID: 32746131 PMCID: PMC7704693 DOI: 10.1109/tmi.2020.3006461] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
CTP (Computed Tomography Perfusion) is widely used in clinical practice for the evaluation of cerebrovascular disorders. However, CTP involves a high radiation dose (≥~200 mGy) as the X-ray source remains continuously on during the passage of the contrast media. The purpose of this study is to present a low-dose CTP technique termed K-space Weighted Image Average (KWIA), which uses a novel projection view-shared averaging algorithm with reduced tube current. KWIA takes advantage of the k-space signal property that image contrast is primarily determined by the k-space center, which contains low spatial frequencies and oversampled projections. KWIA divides the 2D Fourier transform (FT), or k-space, of each CTP frame into multiple rings. The outer rings are averaged with neighboring time frames to achieve adequate signal-to-noise ratio (SNR), while the center region of k-space remains unchanged to preserve high temporal resolution. Reduced-dose sinogram data were simulated by adding zero-mean Poisson-distributed noise to digital phantom and clinical CTP scans. A physical CTP phantom study was also performed with different X-ray tube currents. The sinogram data with simulated and real low doses were then reconstructed with KWIA and compared with those reconstructed by standard filtered back projection (FBP) and simultaneous algebraic reconstruction with total variation regularization (SART-TV). Evaluation of image quality and perfusion metrics using parameters including SNR, CNR (contrast-to-noise ratio), AUC (area-under-the-curve), and CBF (cerebral blood flow) demonstrated that KWIA is able to preserve the image quality, spatial and temporal resolution, and accuracy of perfusion quantification of CTP scans with considerable (50-75%) dose savings.
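A minimal NumPy sketch of this ring-based view sharing is given below; the ring radii and the temporal averaging windows are illustrative assumptions, with the central ring taken only from the current frame and progressively wider temporal averages applied to the outer rings.

import numpy as np

def kwia_frame(kspace_frames, t, radii=(0.15, 0.5, 1.0), windows=(0, 1, 2)):
    # kspace_frames: complex array of shape (T, H, W); returns the KWIA k-space at time t
    T, H, W = kspace_frames.shape
    ky, kx = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    r = np.sqrt(kx**2 + ky**2) / np.sqrt(2)   # normalized radius in [0, 1]
    out = np.zeros((H, W), dtype=complex)
    inner = 0.0
    for radius, half_win in zip(radii, windows):
        ring = (r >= inner) & (r < radius + 1e-9)
        t0, t1 = max(0, t - half_win), min(T, t + half_win + 1)
        out[ring] = kspace_frames[t0:t1].mean(axis=0)[ring]  # wider average further out
        inner = radius
    return out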
Collapse
|
150
|
Cong W, Xi Y, Fitzgerald P, De Man B, Wang G. Virtual Monoenergetic CT Imaging via Deep Learning. PATTERNS (NEW YORK, N.Y.) 2020; 1:100128. [PMID: 33294869 PMCID: PMC7691386 DOI: 10.1016/j.patter.2020.100128] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/20/2020] [Revised: 08/15/2020] [Accepted: 09/22/2020] [Indexed: 01/12/2023]
Abstract
Conventional single-spectrum computed tomography (CT) reconstructs a spectrally integrated attenuation image and reveals tissue morphology without any information about the elemental composition of the tissues. Dual-energy CT (DECT) acquires two spectrally distinct datasets and reconstructs energy-selective (virtual monoenergetic [VM]) and material-selective (material decomposition) images. However, DECT increases system complexity and radiation dose compared with single-spectrum CT. In this paper, a deep learning approach is presented to produce VM images from single-spectrum CT images. Specifically, a modified residual neural network (ResNet) model is developed to map single-spectrum CT images to VM images at pre-specified energy levels. This network is trained on clinical DECT data and shows excellent convergence behavior and image accuracy compared with VM images produced by DECT. The trained model produces high-quality approximations of VM images with a relative error of less than 2%. This method enables multi-material decomposition into three tissue classes, with accuracy comparable to DECT.
Collapse
Affiliation(s)
- Wenxiang Cong
- Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary, Department of Biomedical Engineering, School of Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
| | - Yan Xi
- Shanghai First-Imaging Tech, Shanghai, China
| | | | - Bruno De Man
- GE Research, One Research Circle, Niskayuna, NY 12309, USA
| | - Ge Wang
- Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary, Department of Biomedical Engineering, School of Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
| |
Collapse
|