151. Singh R, Wu W, Wang G, Kalra MK. Artificial intelligence in image reconstruction: The change is here. Phys Med 2020;79:113-125. [DOI: 10.1016/j.ejmp.2020.11.012]
152. Chen G, Zhao Y, Huang Q, Gao H. 4D-AirNet: a temporally-resolved CBCT slice reconstruction method synergizing analytical and iterative method with deep learning. Phys Med Biol 2020;65:175020. [DOI: 10.1088/1361-6560/ab9f60]
153. An K, Wang J, Zhou R, Liu F, Wu W. Ring-artifacts removal for photon-counting CT. Opt Express 2020;28:25180-25193. [PMID: 32907045] [DOI: 10.1364/oe.400108]
Abstract
Ring artifacts often appear in photon-counting computed tomography (PCCT) images, where they compromise image quality and introduce non-uniformity bias. This study proposes a fast ring-artifact removal method for PCCT that exploits the correlation among projections acquired at different views. The method has three advantages. First, it employs only the mean projection of the current scan to correct projections, requiring no additional scans. Second, it corrects the inconsistency of all detector pixels simultaneously, without locating the inconsistently responding pixels. Third, it preserves reconstructed image detail without extra computational cost. Both numerical and preclinical experiments demonstrate that the proposed method suppresses ring artifacts more effectively than competing methods.
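The core idea this abstract describes (correcting detector-pixel inconsistencies from the mean projection of the current scan, without locating faulty pixels) can be illustrated with a minimal NumPy sketch. This is a generic mean-projection correction, not the authors' exact algorithm; the sinogram layout and smoothing kernel are illustrative assumptions.

```python
import numpy as np

def remove_rings(sino, kernel=9):
    # sino: sinogram with shape (views, detector pixels).
    # A faulty detector pixel adds a nearly view-independent offset to its
    # column, which reconstructs as a ring. The column-wise mean over all
    # views (the "mean projection") isolates that offset; smoothing the
    # mean keeps genuine object structure, and only the residual bias is
    # subtracted from every view.
    col_mean = sino.mean(axis=0)
    pad = kernel // 2
    padded = np.pad(col_mean, pad, mode="edge")
    smooth = np.convolve(padded, np.ones(kernel) / kernel, mode="valid")
    return sino - (col_mean - smooth)

# Toy demo: a smooth sinogram plus one detector column with biased response.
views, dets = 180, 64
sino = np.sin(np.linspace(0, np.pi, dets))[None, :] * np.ones((views, 1))
sino[:, 30] += 0.5                    # inconsistent pixel response
clean = remove_rings(sino)
```

Because the correction uses only a column mean and a 1-D convolution, it adds essentially no computational cost on top of the scan itself.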
154. Zhang Q, Hu Z, Jiang C, Zheng H, Ge Y, Liang D. Artifact removal using a hybrid-domain convolutional neural network for limited-angle computed tomography imaging. Phys Med Biol 2020;65:155010. [PMID: 32369793] [DOI: 10.1088/1361-6560/ab9066]
Abstract
The suppression of streak artifacts in computed tomography with a limited-angle configuration is challenging. Conventional analytical algorithms, such as filtered backprojection (FBP), are not successful due to incomplete projection data. Moreover, model-based iterative total variation algorithms effectively reduce small streaks but do not work well at eliminating large streaks. In contrast, FBP mapping networks and deep-learning-based postprocessing networks are outstanding at removing large streak artifacts; however, these methods perform processing in separate domains, and the advantages of multiple deep learning algorithms operating in different domains have not been simultaneously explored. In this paper, we present a hybrid-domain convolutional neural network (hdNet) for the reduction of streak artifacts in limited-angle computed tomography. The network consists of three components: the first component is a convolutional neural network operating in the sinogram domain, the second is a domain transformation operation, and the last is a convolutional neural network operating in the CT image domain. After training the network, we can obtain artifact-suppressed CT images directly from the sinogram domain. Verification results based on numerical, experimental and clinical data confirm that the proposed method can significantly reduce serious artifacts.
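The three-component pipeline described above (sinogram-domain network, domain transformation, image-domain network) can be sketched with toy stand-ins. In this sketch each "network" is a fixed smoothing filter and the domain transform is a truncated pseudoinverse of a small linear forward model; the operators, sizes, and noise level are assumptions for illustration, not the hdNet architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth(v, k=5):
    # fixed smoothing filter, standing in for a trained CNN
    pad = k // 2
    return np.convolve(np.pad(v, pad, mode="edge"), np.ones(k) / k, mode="valid")

n_pix, n_rays = 32, 48
centers = np.linspace(0, n_pix - 1, n_rays)
# toy linear forward model: each "ray" is a Gaussian-weighted sum of pixels
A = np.exp(-0.5 * ((np.arange(n_pix)[None, :] - centers[:, None]) / 2.0) ** 2)
x_true = np.exp(-0.5 * ((np.arange(n_pix) - 16) / 4.0) ** 2)   # smooth phantom
y = A @ x_true + 0.01 * rng.normal(size=n_rays)                # noisy data

y_net = smooth(y)                              # component 1: sinogram-domain net
x_bp = np.linalg.pinv(A, rcond=0.05) @ y_net   # component 2: domain transform
x_out = smooth(x_bp)                           # component 3: image-domain net
```

The `rcond` truncation regularizes the transform so that measurement noise is not amplified, playing the role of the differentiable reconstruction layer that connects the two network domains.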
Affiliation(s)
- Qiyang Zhang
- Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China. Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China. Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
155. Sheng K. Artificial intelligence in radiotherapy: a technological review. Front Med 2020;14:431-449. [PMID: 32728877] [DOI: 10.1007/s11684-020-0761-1]
Abstract
Radiation therapy (RT) is widely used to treat cancer. Technological advances in RT have occurred over the past 30 years. These advances, such as three-dimensional image guidance, intensity modulation, and robotics, have created challenges and opportunities for the next breakthrough, in which artificial intelligence (AI) will likely play important roles. AI will replace certain repetitive and labor-intensive tasks and improve the accuracy and consistency of others, particularly those whose complexity has increased because of technological advances. The improvement in efficiency and consistency is important for managing the increasing cancer burden on society. Furthermore, AI may provide new functionalities that facilitate satisfactory RT, including superior images for real-time intervention and adaptive and personalized RT; AI may effectively synthesize and analyze big data for such purposes. This review describes the RT workflow and identifies areas, including imaging, treatment planning, quality assurance, and outcome prediction, that benefit from AI. It primarily focuses on deep-learning techniques, although conventional machine-learning techniques are also mentioned.
Affiliation(s)
- Ke Sheng
- Department of Radiation Oncology, University of California, Los Angeles, CA, 90095, USA.
156. Shao W, Du Y. Microwave Imaging by Deep Learning Network: Feasibility and Training Method. IEEE Trans Antennas Propag 2020;68:5626-5635. [PMID: 34113046] [PMCID: PMC8189033] [DOI: 10.1109/tap.2020.2978952]
Abstract
Microwave image reconstruction based on a deep-learning method is investigated in this paper. The neural network is capable of converting measured microwave signals acquired from a 24×24 antenna array at 4 GHz into a 128×128 image. To reduce the training difficulty, we first developed an autoencoder by which high-resolution images (128×128) were represented with 256×1 vectors; we then developed a second neural network to map the microwave signals to the compressed features (the 256×1 vectors). Once both are successfully trained, the two neural networks are combined into a full network for reconstruction. This two-stage training method reduces the difficulty of training deep learning networks (DLNs) for inverse reconstruction. The developed neural network is validated with simulation examples and experimental data featuring objects of different shapes and sizes, placed at different locations, with dielectric constants ranging from 2 to 6. Comparisons between the imaging results of the present method and two conventional approaches, the distorted Born iterative method (DBIM) and the phase confocal method (PCM), are also provided.
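The two-stage strategy in this abstract (compress images with an autoencoder, then learn a signal-to-code mapping, then chain the decoder) can be sketched with linear stand-ins: PCA plays the autoencoder and least squares plays the second network. All sizes and the toy data are assumptions; the actual work uses deep nonlinear networks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 1: learn a compact image code (PCA via SVD stands in for the autoencoder).
n_img, img_dim, code_dim, sig_dim = 200, 64, 8, 24
codes_src = rng.normal(size=(n_img, code_dim))
basis = rng.normal(size=(code_dim, img_dim))
images = codes_src @ basis                 # toy image set with low-rank structure

U, S, Vt = np.linalg.svd(images, full_matrices=False)

def encode(im):                            # image -> compressed code
    return im @ Vt[:code_dim].T

def decode(c):                             # code -> image
    return c @ Vt[:code_dim]

# Stage 2: map measured signals to the compressed code
# (least squares stands in for the second network).
M = rng.normal(size=(sig_dim, img_dim))
signals = images @ M.T                     # simulated measurements
W, *_ = np.linalg.lstsq(signals, encode(images), rcond=None)

# Full pipeline, as in the paper's combined network: signal -> code -> image
recon = decode(signals @ W)
err = np.linalg.norm(recon - images) / np.linalg.norm(images)
```

Training the signal-to-code map against an 8-dimensional target rather than the full 64-dimensional image is exactly what makes the second stage easier, which is the point of the two-stage scheme.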
Affiliation(s)
- Wenyi Shao
- Department of Radiology and Radiological Science, Johns Hopkins University, Baltimore, MD 21287 USA
- Yong Du
- Department of Radiology and Radiological Science, Johns Hopkins University, Baltimore, MD 21287 USA
157. Shan H, Jia X, Yan P, Li Y, Paganetti H, Wang G. Synergizing medical imaging and radiotherapy with deep learning. Mach Learn Sci Technol 2020. [DOI: 10.1088/2632-2153/ab869f]
158. Yang C, Lan H, Gao F. Accelerated Photoacoustic Tomography Reconstruction via Recurrent Inference Machines. Annu Int Conf IEEE Eng Med Biol Soc 2020;2019:6371-6374. [PMID: 31947300] [DOI: 10.1109/embc.2019.8856290]
Abstract
Accelerated photoacoustic tomography (PAT) reconstruction is important for real-time photoacoustic imaging (PAI) applications. PAT requires a reconstruction algorithm to recover an image of the tissue from the detected photoacoustic signals, which is typically an inverse problem. Unlike the usual approach to inverse problems, which defines a model and then chooses an inference procedure, we propose to use Recurrent Inference Machines (RIM) as a framework for PAT reconstruction. Our model performs an accelerated iterative reconstruction and directly learns to solve the inverse problem in PAT using information from a forward model based on k-space methods. As shown in experiments, our method achieves faster high-resolution PAT reconstruction and outperforms another deep-neural-network-based method in several respects.
159. Ding Q, Chen G, Zhang X, Huang Q, Ji H, Gao H. Low-dose CT with deep learning regularization via proximal forward-backward splitting. Phys Med Biol 2020;65:125009. [PMID: 32209742] [DOI: 10.1088/1361-6560/ab831a]
Abstract
Low-dose x-ray computed tomography (LDCT) is desirable for reducing patient dose. This work develops new image reconstruction methods with deep learning (DL) regularization for LDCT. Our methods are based on unrolling a proximal forward-backward splitting (PFBS) framework with data-driven image regularization via deep neural networks. In contrast to PFBS-IR, which uses standard data-fidelity updates via an iterative reconstruction (IR) method, PFBS-AIR involves preconditioned data-fidelity updates that fuse the analytical reconstruction (AR) and IR methods in a synergistic way, i.e., fused analytical and iterative reconstruction (AIR). The results suggest that the DL-regularized methods (PFBS-IR and PFBS-AIR) provide better reconstruction quality than conventional methods (AR or IR). In addition, owing to AIR, PFBS-AIR noticeably outperformed PFBS-IR and another DL-based postprocessing method, FBPConvNet.
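PFBS unrolling alternates a data-fidelity (forward) gradient step with a regularization (backward) proximal step; in the paper the proximal operator is a trained network. A minimal sketch follows, with soft-thresholding standing in for the learned prior; the toy forward model, phantom, and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

n_pix, n_rays = 40, 60
A = rng.normal(size=(n_rays, n_pix)) / np.sqrt(n_rays)   # toy forward model
x_true = np.zeros(n_pix)
x_true[10:20] = 1.0                                      # piecewise-constant phantom
y = A @ x_true + 0.01 * rng.normal(size=n_rays)

step = 1.0 / np.linalg.norm(A.T @ A, 2)                  # stable gradient step size

def denoise(x, lam=0.005):
    # stand-in for the learned proximal/denoising network
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.zeros(n_pix)
for _ in range(300):                                     # unrolled PFBS iterations
    grad = A.T @ (A @ x - y)                             # forward: data fidelity
    x = denoise(x - step * grad)                         # backward: prior/proximal
```

In the DL-regularized variants, `denoise` becomes a network with trained weights per unrolled iteration, and the PFBS-AIR variant additionally preconditions the gradient step with an analytical-reconstruction operator.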
Affiliation(s)
- Qiaoqiao Ding
- Department of Mathematics, National University of Singapore, Singapore 119076, Singapore
160. Fan F, Shan H, Kalra MK, Singh R, Qian G, Getzin M, Teng Y, Hahn J, Wang G. Quadratic Autoencoder (Q-AE) for Low-Dose CT Denoising. IEEE Trans Med Imaging 2020;39:2035-2050. [PMID: 31902758] [PMCID: PMC7376975] [DOI: 10.1109/tmi.2019.2963248]
Abstract
Inspired by the complexity and diversity of biological neurons, our group proposed quadratic neurons, which replace the inner product in conventional artificial neurons with a quadratic operation on the input data, thereby enhancing the capability of an individual neuron. Along this direction, we are motivated to evaluate the power of quadratic neurons in popular network architectures, simulating human-like learning in the form of "quadratic-neuron-based deep learning". Our prior theoretical studies have shown important merits of quadratic neurons and networks in representation, efficiency, and interpretability. In this paper, we use quadratic neurons to construct an encoder-decoder structure, referred to as the quadratic autoencoder (Q-AE), and apply it to low-dose CT denoising. Experimental results on the Mayo low-dose CT dataset demonstrate the utility and robustness of the quadratic autoencoder in terms of image denoising and model efficiency. To the best of our knowledge, this is the first time a deep learning approach has been implemented with a new type of neuron, demonstrating significant potential in the medical imaging field.
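The quadratic-neuron idea is concrete enough to state in a few lines: the inner-product pre-activation w·x + b is replaced by a product of two affine terms plus a power term. The parameterization below follows the form reported in the quadratic-neuron literature, but exact details may differ between papers; it is an illustrative sketch only.

```python
import numpy as np

def quadratic_neuron(x, wr, br, wg, bg, wb, c):
    # Pre-activation of one quadratic neuron on input rows x:
    # (x.wr + br) * (x.wg + bg)  -- product of two affine terms
    # + (x*x).wb + c             -- elementwise power term
    return (x @ wr + br) * (x @ wg + bg) + (x * x) @ wb + c

# A single quadratic neuron can represent x**2 exactly, which a single
# conventional neuron (pre-activation w.x + b) cannot:
x = np.linspace(-2, 2, 9).reshape(-1, 1)
out = quadratic_neuron(x, np.ones(1), 0.0, np.ones(1), 0.0, np.zeros(1), 0.0)
```

Stacking such neurons in place of conventional ones gives the encoder-decoder layers of the quadratic autoencoder described above.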
Affiliation(s)
- Fenglei Fan
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Hongming Shan
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Mannudeep K. Kalra
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Ramandeep Singh
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Guhan Qian
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Matthew Getzin
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Yueyang Teng
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang 110169, China
- Juergen Hahn
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
161. Shao W, Pomper MG, Du Y. A Learned Reconstruction Network for SPECT Imaging. IEEE Trans Radiat Plasma Med Sci 2020;5:26-34. [PMID: 33403244] [DOI: 10.1109/trpms.2020.2994041]
Abstract
A neural network designed specifically for SPECT image reconstruction was developed. The network reconstructs activity images directly from SPECT projection data. Training used a corpus of activity images derived from digital phantoms generated with custom software, together with the corresponding projection data obtained from simulation. During reconstruction, input projection data are first fed to two fully connected (FC) layers that perform a basic reconstruction. The output of the FC layers and an attenuation map are then passed to five convolutional layers for signal-decay compensation and image optimization. To validate the system, data not used in training, simulated data from the Zubal human brain phantom, and clinical patient data were used to test reconstruction performance. Images reconstructed by the network proved closer to the truth, with higher resolution and quantitative accuracy, than those from conventional OS-EM reconstruction. To better understand how the network performs reconstruction, intermediate results from the hidden layers were examined at each processing step. The network was also retrained with noisy projection data and compared with the version trained on noise-free data; the retrained network proved even more robust, having learned to filter noise. Finally, we showed that the network still provides sharp images when using reduced-view projection data (when retrained with reduced-view data).
Affiliation(s)
- Wenyi Shao
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287 USA
- Martin G Pomper
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287 USA
- Yong Du
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287 USA
162. Deep Learning-based Inaccuracy Compensation in Reconstruction of High Resolution XCT Data. Sci Rep 2020;10:7682. [PMID: 32376852] [PMCID: PMC7203197] [DOI: 10.1038/s41598-020-64733-7]
Abstract
While X-ray computed tomography (XCT) is pushed further into the micro- and nanoscale, the limitations of various tool components and object motion become more apparent. For high-resolution XCT, it is necessary but practically difficult to align these tool components with sub-micron precision. The aim is to develop a novel reconstruction methodology that accounts for unavoidable misalignment and object motion during data acquisition in order to obtain high-quality three-dimensional images, and that is applicable to data recovery from incomplete datasets. Reconstruction software empowered by sophisticated correction modules that autonomously estimate and compensate artefacts using gradient descent and deep learning algorithms has been developed and applied. For motion estimation, a novel computer-vision methodology coupled with a deep convolutional neural network provides estimates of object motion by tracking features across adjacent projections. The model is trained using forward projections of simulated phantoms consisting of several simple geometrical features, such as spheres, triangles, and rectangles. The feature maps extracted by the neural network are used to detect features, which are then classified by a support vector machine. For missing-data recovery, a novel deep convolutional neural network is used to infer high-quality reconstruction data from incomplete sets of projections. Forward and back projections of simulated geometric shapes over a range of angular coverages are used to train the model. The model learns the angular dependency from limited angle coverage and proposes a new set of projections to suppress artefacts.
High-quality three-dimensional images demonstrate that it is possible to effectively suppress artefacts caused by motion from thermomechanical instability of tool components and objects, by center-of-rotation misalignment, and by inaccuracy in the detector position, without additional computational effort. Data recovery from incomplete sets of projections yields directly corrected projections rather than suppressing artefacts in the final reconstructed images. The proposed methodology has been validated and is demonstrated on a ball-bearing sample. The reconstruction results are compared to prior corrections and benchmarked against a commercially available reconstruction software. Compared with conventional approaches in XCT imaging and data analysis, the proposed methodology for generating high-quality three-dimensional X-ray images is fully autonomous. The methodology has been proven for high-resolution micro-XCT and nano-XCT; however, it is applicable at all length scales.
163. Gao Y, Tan J, Shi Y, Lu S, Gupta A, Li H, Liang Z. Constructing a tissue-specific texture prior by machine learning from previous full-dose scan for Bayesian reconstruction of current ultralow-dose CT images. J Med Imaging (Bellingham) 2020;7:032502. [PMID: 32118093] [PMCID: PMC7040436] [DOI: 10.1117/1.jmi.7.3.032502]
Abstract
Purpose: Bayesian theory provides a sound framework for ultralow-dose computed tomography (ULdCT) image reconstruction, with two terms modeling the statistical property of the data and incorporating a priori knowledge of the image to be reconstructed. We investigate the feasibility of using a machine learning (ML) strategy, particularly a convolutional neural network (CNN), to construct a tissue-specific texture prior from a previous full-dose computed tomography scan. Approach: Our study constructs four tissue-specific texture priors, corresponding to lung, bone, fat, and muscle, and integrates the prior with the pre-log shifted-Poisson (SP) data property for Bayesian reconstruction of ULdCT images. The Bayesian reconstruction was implemented by an algorithm called SP-CNN-T and compared with our previous Markov random field (MRF)-based tissue-specific texture prior algorithm, SP-MRF-T. Results: In addition to the conventional quantitative measures of mean squared error and peak signal-to-noise ratio, the structural similarity index, feature similarity, and Haralick texture features were used to measure the performance difference between the SP-CNN-T and SP-MRF-T algorithms in terms of structure and tissue-texture preservation, demonstrating the feasibility and potential of the investigated ML approach. Conclusions: Both the training performance and the image reconstruction results showed the feasibility of constructing a CNN texture prior model and its potential for improving the structure preservation of nodules compared with our previous regional tissue-specific MRF texture prior model.
Affiliation(s)
- Yongfeng Gao
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- Jiaxing Tan
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- Yongyi Shi
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- Siming Lu
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- State University of New York, Department of Biomedical Engineering, Stony Brook, New York, United States
- Amit Gupta
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- Haifang Li
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- Zhengrong Liang
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- State University of New York, Department of Biomedical Engineering, Stony Brook, New York, United States
164. Chen G, Hong X, Ding Q, Zhang Y, Chen H, Fu S, Zhao Y, Zhang X, Ji H, Wang G, Huang Q, Gao H. AirNet: Fused analytical and iterative reconstruction with deep neural network regularization for sparse-data CT. Med Phys 2020;47:2916-2930. [DOI: 10.1002/mp.14170]
Affiliation(s)
- Gaoyu Chen
- Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA 30322, USA
- Xiang Hong
- Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Qiaoqiao Ding
- Department of Mathematics, National University of Singapore, 119077, Singapore
- Yi Zhang
- College of Computer Science, Sichuan University, Chengdu, Sichuan 610065, China
- Hu Chen
- College of Computer Science, Sichuan University, Chengdu, Sichuan 610065, China
- Shujun Fu
- School of Mathematics, Shandong University, Jinan, Shandong 250100, China
- Yunsong Zhao
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA 30322, USA
- School of Mathematical Sciences, Capital Normal University, Beijing 100048, China
- Xiaoqun Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Hui Ji
- Department of Mathematics, National University of Singapore, 119077, Singapore
- Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Qiu Huang
- Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Hao Gao
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA 30322, USA
165. Ye S, Ravishankar S, Long Y, Fessler JA. SPULTRA: Low-Dose CT Image Reconstruction With Joint Statistical and Learned Image Models. IEEE Trans Med Imaging 2020;39:729-741. [PMID: 31425021] [PMCID: PMC7170173] [DOI: 10.1109/tmi.2019.2934933]
Abstract
Low-dose CT image reconstruction has been a popular research topic in recent years. A typical reconstruction method based on post-log measurements is penalized weighted least squares (PWLS). Due to the underlying limitations of the post-log statistical model, PWLS reconstruction quality is often degraded in low-dose scans. This paper investigates a shifted-Poisson (SP) model-based likelihood function that uses the pre-log raw measurements, which better represent the measurement statistics, together with a data-driven regularizer exploiting a Union of Learned TRAnsforms (SPULTRA). Both the SP-induced data-fidelity term and the regularizer in the proposed framework are nonconvex. The SPULTRA algorithm uses quadratic surrogate functions for the SP-induced data-fidelity term. Each iteration involves a quadratic subproblem for updating the image and a sparse coding and clustering subproblem that has a closed-form solution. SPULTRA has a similar computational cost per iteration as its recent counterpart PWLS-ULTRA, which uses post-log measurements, and it provides better image reconstruction quality than PWLS-ULTRA, especially in low-dose scans.
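For reference, the pre-log shifted-Poisson model this abstract refers to is commonly written as follows in the low-dose CT literature; this is a sketch of the standard form, and the paper's exact notation and regularizer are not reproduced here.

```latex
Y_i + \sigma^2 \;\sim\; \mathrm{Poisson}\!\left\{\bar{y}_i(x) + \sigma^2\right\},
\qquad
\bar{y}_i(x) = I_0\, e^{-[Ax]_i},
```

where $Y_i$ is the pre-log measurement on ray $i$, $A$ is the forward projector, $I_0$ the incident photon count, and $\sigma^2$ the electronic-noise variance. The corresponding negative log-likelihood (up to constants),

```latex
L(x) \;=\; \sum_i \left[\bar{y}_i(x) + \sigma^2\right]
      \;-\; \left(Y_i + \sigma^2\right)\log\!\left[\bar{y}_i(x) + \sigma^2\right],
```

is the nonconvex data-fidelity term that SPULTRA majorizes with iteration-wise quadratic surrogates.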
166. Xie H, Shan H, Cong W, Liu C, Zhang X, Liu S, Ning R, Wang GE. Deep Efficient End-to-end Reconstruction (DEER) Network for Few-view Breast CT Image Reconstruction. IEEE Access 2020;8:196633-196646. [PMID: 33251081] [PMCID: PMC7695229] [DOI: 10.1109/access.2020.3033795]
Abstract
Breast CT provides image volumes with isotropic resolution and high contrast, enabling detection of small calcifications (down to a few hundred microns in size) and subtle density differences. Since the breast is sensitive to x-ray radiation, dose reduction in breast CT is an important topic, and few-view scanning is a main approach for this purpose. In this article, we propose a Deep Efficient End-to-end Reconstruction (DEER) network for few-view breast CT image reconstruction. The major merits of our network include high dose efficiency, excellent image quality, and low model complexity. By design, the proposed network can learn the reconstruction process with as few as O(N) parameters, where N is the side length of the image to be reconstructed, representing an orders-of-magnitude improvement over state-of-the-art deep-learning-based reconstruction methods that map raw data directly to tomographic images. Validated on a cone-beam breast CT dataset prepared by Koning Corporation on a commercial scanner, our method demonstrates competitive performance against state-of-the-art reconstruction networks in terms of image quality. The source code of this paper is available at: https://github.com/HuidongXie/DEER.
Affiliation(s)
- Huidong Xie
- Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY USA
- Hongming Shan
- Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY USA
- Wenxiang Cong
- Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY USA
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Ruola Ning
- Koning Corporation, West Henrietta, NY USA
- G E Wang
- Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY USA
167. Shi L, Liu B, Yu H, Wei C, Wei L, Zeng L, Wang G. Review of CT image reconstruction open source toolkits. J Xray Sci Technol 2020;28:619-639. [PMID: 32390648] [DOI: 10.3233/xst-200666]
Abstract
Computed tomography (CT) has been widely applied in medical diagnosis, nondestructive evaluation, homeland security, and other science and engineering applications. Image reconstruction is one of the core CT imaging technologies. In this review paper, we systematically review the currently publicly available open-source CT image reconstruction toolkits with respect to their environments, object models, imaging geometries, and algorithms. In addition to analytic and iterative algorithms, deep-learning reconstruction networks and their open code are also reviewed as a third category of reconstruction algorithms. This systematic summary of publicly available software platforms will help facilitate CT research and development.
Affiliation(s)
- Liu Shi
- Beijing Engineering Research Center of Radiographic Techniques and Equipment, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China
- School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing, China
- Baodong Liu
- Beijing Engineering Research Center of Radiographic Techniques and Equipment, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China
- School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing, China
- Hengyong Yu
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, USA
- Cunfeng Wei
- Beijing Engineering Research Center of Radiographic Techniques and Equipment, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China
- School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing, China
- Long Wei
- Beijing Engineering Research Center of Radiographic Techniques and Equipment, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China
- School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing, China
- Li Zeng
- College of Mathematics and Statistics, Chongqing University, Chongqing, China
- Engineering Research Center of Industrial Computed Tomography Nondestructive Testing of the Education Ministry of China, Chongqing University, Chongqing, China
- Ge Wang
- Biomedical Imaging Center, AI-based X-ray Imaging System (AXIS) Lab, Rensselaer Polytechnic Institute, Troy, NY, USA
Collapse
|
168
|
Boink YE, Manohar S, Brune C. A Partially-Learned Algorithm for Joint Photo-acoustic Reconstruction and Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:129-139. [PMID: 31180846 DOI: 10.1109/tmi.2019.2922026] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
In an inhomogeneously illuminated photoacoustic image, important information like vascular geometry is not readily available when only the initial pressure is reconstructed. To obtain the desired information, algorithms for image segmentation are often applied as a post-processing step. In this article, we propose to perform the photoacoustic reconstruction and segmentation jointly, by modifying a recently developed partially learned algorithm based on a convolutional neural network. We investigate the stability of the algorithm against changes in initial pressures and photoacoustic system settings. These insights are used to develop an algorithm that is robust to input and system settings. Our approach can easily be applied to other imaging modalities and can be modified to perform other high-level tasks different from segmentation. The method is validated on challenging synthetic and experimental photoacoustic tomography data in limited angle and limited view scenarios. It is computationally less expensive than classical iterative methods and enables higher quality reconstructions and segmentations than the state-of-the-art learned and non-learned methods.
|
169
|
Ravishankar S, Ye JC, Fessler JA. Image Reconstruction: From Sparsity to Data-adaptive Methods and Machine Learning. PROCEEDINGS OF THE IEEE. INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS 2020; 108:86-109. [PMID: 32095024 PMCID: PMC7039447 DOI: 10.1109/jproc.2019.2936204] [Citation(s) in RCA: 91] [Impact Index Per Article: 18.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
The field of medical image reconstruction has seen roughly four types of methods. The first type tended to be analytical methods, such as filtered back-projection (FBP) for X-ray computed tomography (CT) and the inverse Fourier transform for magnetic resonance imaging (MRI), based on simple mathematical models for the imaging systems. These methods are typically fast, but have suboptimal properties such as poor resolution-noise trade-off for CT. A second type is iterative reconstruction methods based on more complete models for the imaging system physics and, where appropriate, models for the sensor statistics. These iterative methods improved image quality by reducing noise and artifacts. The FDA-approved methods among these have been based on relatively simple regularization models. A third type of methods has been designed to accommodate modified data acquisition methods, such as reduced sampling in MRI and CT to reduce scan time or radiation dose. These methods typically involve mathematical image models involving assumptions such as sparsity or low-rank. A fourth type of methods replaces mathematically designed models of signals and systems with data-driven or adaptive models inspired by the field of machine learning. This paper focuses on the two most recent trends in medical image reconstruction: methods based on sparsity or low-rank models, and data-driven methods based on machine learning techniques.
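The contrast between the first two method types can be sketched with a toy linear model: a direct (unregularized) solve versus an iterative, regularized reconstruction. Everything below — the random operator standing in for a CT system matrix, the Tikhonov penalty, the step size — is an illustrative assumption, not a method from the paper:

```python
import numpy as np

# Toy model y = A x + noise; A stands in for a discretized imaging operator.
rng = np.random.default_rng(0)
n, m = 32, 24                        # 32 unknowns, 24 measurements (undersampled)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[10:20] = 1.0                  # simple piecewise-constant "image"
y = A @ x_true + 0.05 * rng.standard_normal(m)

def iterative_recon(y, A, lam=0.1, iters=500):
    """Gradient descent on the regularized objective ||Ax - y||^2 + lam*||x||^2."""
    x = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)   # 1/Lipschitz: guarantees descent
    for _ in range(iters):
        x -= step * (A.T @ (A @ x - y) + lam * x)
    return x

x_direct = np.linalg.lstsq(A, y, rcond=None)[0]   # direct minimum-norm solve
x_reg = iterative_recon(y, A)                     # regularized iterative solve
```

The regularized solve trades exact data fit for noise robustness, which is the essential point of the "second type" in the taxonomy above; learned methods replace the hand-chosen penalty with data-driven models.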
Affiliation(s)
- Saiprasad Ravishankar: Departments of Computational Mathematics, Science and Engineering, and Biomedical Engineering, Michigan State University, East Lansing, MI 48824, USA
- Jong Chul Ye: Department of Bio and Brain Engineering and Department of Mathematical Sciences, Korea Advanced Institute of Science & Technology (KAIST), Daejeon, South Korea
- Jeffrey A Fessler: Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, USA
|
170
|
Xie H, Shan H, Wang G. Deep Encoder-Decoder Adversarial Reconstruction(DEAR) Network for 3D CT from Few-View Data. Bioengineering (Basel) 2019; 6:E111. [PMID: 31835430 PMCID: PMC6956312 DOI: 10.3390/bioengineering6040111] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2019] [Revised: 11/20/2019] [Accepted: 12/05/2019] [Indexed: 11/16/2022] Open
Abstract
X-ray computed tomography (CT) is widely used in clinical practice. The involved ionizing X-ray radiation, however, could increase cancer risk. Hence, the reduction of the radiation dose has been an important topic in recent years. Few-view CT image reconstruction is one of the main ways to minimize radiation dose and potentially allow a stationary CT architecture. In this paper, we propose a deep encoder-decoder adversarial reconstruction (DEAR) network for 3D CT image reconstruction from few-view data. Since the artifacts caused by few-view reconstruction appear in 3D instead of 2D geometry, a 3D deep network has a great potential for improving the image quality in a data-driven fashion. More specifically, our proposed DEAR-3D network aims at reconstructing a 3D volume directly from clinical 3D spiral cone-beam image data. DEAR is validated on a publicly available abdominal CT dataset prepared and authorized by Mayo Clinic. Compared with other 2D deep learning methods, the proposed DEAR-3D network can utilize 3D information to produce promising reconstruction results.
Affiliation(s)
- Ge Wang: Biomedical Imaging Center, Department of Biomedical Engineering, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY 12180, USA; (H.X.); (H.S.)
|
171
|
Bao P, Xia W, Yang K, Chen W, Chen M, Xi Y, Niu S, Zhou J, Zhang H, Sun H, Wang Z, Zhang Y. Convolutional Sparse Coding for Compressed Sensing CT Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2607-2619. [PMID: 30908204 DOI: 10.1109/tmi.2019.2906853] [Citation(s) in RCA: 49] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Over the past few years, dictionary learning (DL)-based methods have been successfully used in various image reconstruction problems. However, the traditional DL-based computed tomography (CT) reconstruction methods are patch-based and ignore the consistency of pixels in overlapped patches. In addition, the features learned by these methods always contain shifted versions of the same features. In recent years, convolutional sparse coding (CSC) has been developed to address these problems. In this paper, inspired by several successful applications of CSC in the field of signal processing, we explore the potential of CSC in sparse-view CT reconstruction. By directly working on the whole image, without the necessity of dividing the image into overlapped patches in DL-based methods, the proposed methods can maintain more details and avoid artifacts caused by patch aggregation. With predetermined filters, an alternating scheme is developed to optimize the objective function. Extensive experiments with simulated and real CT data were performed to validate the effectiveness of the proposed methods. The qualitative and quantitative results demonstrate that the proposed methods achieve better performance than the several existing state-of-the-art methods.
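The convolutional sparse coding model referred to above represents the whole image as a sum of convolutions rather than as a collection of patches. A representative form of the objective (generic notation; not necessarily the paper's exact formulation) is:

```latex
\min_{\{d_k\},\{z_k\}} \; \frac{1}{2}\Big\| x - \sum_{k=1}^{K} d_k \ast z_k \Big\|_2^2
  + \lambda \sum_{k=1}^{K} \| z_k \|_1
  \quad \text{s.t.} \quad \| d_k \|_2 \le 1, \; k = 1,\dots,K,
```

where $x$ is the whole image, $d_k$ are the (here, predetermined) filters, $z_k$ the coefficient maps, and $\lambda$ balances data fidelity against sparsity. In the sparse-view reconstruction setting the data-fidelity term couples this prior to the CT forward model rather than to $x$ directly.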
|
172
|
Syben C, Michen M, Stimpel B, Seitz S, Ploner S, Maier AK. Technical Note: PYRO-NN: Python reconstruction operators in neural networks. Med Phys 2019; 46:5110-5115. [PMID: 31389023 PMCID: PMC6899669 DOI: 10.1002/mp.13753] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Revised: 07/23/2019] [Accepted: 07/24/2019] [Indexed: 11/24/2022] Open
Abstract
PURPOSE Recently, several attempts have been made to transfer deep learning to medical image reconstruction. An increasing number of publications follow the concept of embedding the computed tomography (CT) reconstruction as a known operator into a neural network. However, most of the approaches presented lack an efficient CT reconstruction framework fully integrated into deep learning environments. As a result, many approaches use workarounds for mathematically unambiguously solvable problems. METHODS PYRO-NN is a generalized framework to embed known operators into the prevalent deep learning framework Tensorflow. The current status includes state-of-the-art parallel-, fan-, and cone-beam projectors and back-projectors accelerated with CUDA, provided as Tensorflow layers. On top, the framework provides a high-level Python API to conduct FBP and iterative reconstruction experiments with data from real CT systems. RESULTS The framework provides all necessary algorithms and tools to design end-to-end neural network pipelines with integrated CT reconstruction algorithms. The high-level Python API allows simple use of the layers in the manner familiar from Tensorflow. All algorithms and tools are referenced to a scientific publication and are compared to existing non-deep-learning reconstruction frameworks. To demonstrate the capabilities of the layers, the framework comes with baseline experiments, which are described in the supplementary material. The framework is available as open-source software under the Apache 2.0 licence at https://github.com/csyben/PYRO-NN. CONCLUSIONS PYRO-NN integrates with the prevalent deep learning framework Tensorflow and makes it possible to set up end-to-end trainable neural networks in the medical image reconstruction context. We believe that the framework will be a step toward reproducible research and give the medical physics community a toolkit to elevate medical image reconstruction with new deep learning techniques.
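The "known operator" idea — fixed, differentiable projection/back-projection layers with trainable components in between — can be sketched in plain NumPy. PYRO-NN itself provides CUDA-accelerated Tensorflow layers; the toy operator, problem sizes, and hand-written training loop below are assumptions for illustration, not the PYRO-NN API:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 24
# Fixed, known forward operator ("projector"); orthonormal columns keep the
# toy problem well conditioned. In PYRO-NN this would be a projector layer.
A, _ = np.linalg.qr(rng.standard_normal((m, n)))
C = np.zeros((m, m))                # trainable linear "filter" layer

X = rng.standard_normal((n, 200))   # training images as columns
Y = A @ X                           # simulated measurements (noiseless)

N = X.shape[1]
for _ in range(200):
    R = A.T @ (C @ Y) - X                    # reconstruction residual
    C -= 0.2 * (2.0 / N) * (A @ R) @ Y.T     # gradient step on the filter ONLY;
                                             # the projector A stays fixed (known)

err = np.linalg.norm(A.T @ (C @ Y) - X) / np.linalg.norm(X)
```

Only the filter between back-projection and projection is learned, while the physics (the operator `A`) is hard-wired — the same division of labor the framework advocates, here reduced to a single trainable linear layer.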
Affiliation(s)
- Christopher Syben: Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Markus Michen: Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Bernhard Stimpel: Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Stephan Seitz: Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Stefan Ploner: Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Andreas K. Maier: Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
|
173
|
Zhu H, Tong D, Zhang L, Wang S, Wu W, Tang H, Chen Y, Luo L, Zhu J, Li B. Temporally downsampled cerebral CT perfusion image restoration using deep residual learning. Int J Comput Assist Radiol Surg 2019; 15:193-201. [DOI: 10.1007/s11548-019-02082-1] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2019] [Accepted: 10/18/2019] [Indexed: 12/27/2022]
|
174
|
Li Y, Li K, Zhang C, Montoya J, Chen GH. Learning to Reconstruct Computed Tomography Images Directly From Sinogram Data Under A Variety of Data Acquisition Conditions. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2469-2481. [PMID: 30990179 PMCID: PMC7962902 DOI: 10.1109/tmi.2019.2910760] [Citation(s) in RCA: 81] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Computed tomography (CT) is widely used in medical diagnosis and non-destructive detection. Image reconstruction in CT aims to accurately recover pixel values from measured line integrals, i.e., the summed pixel values along straight lines. Provided that the acquired data satisfy the data sufficiency condition as well as other conditions regarding the view angle sampling interval and the severity of transverse data truncation, researchers have discovered many solutions to accurately reconstruct the image. However, if these conditions are violated, accurate image reconstruction from line integrals remains an intellectual challenge. In this paper, a deep learning method with a common network architecture, termed iCT-Net, was developed and trained to accurately reconstruct images for previously solved and unsolved CT reconstruction problems with high quantitative accuracy. Particularly, accurate reconstructions were achieved for the case when the sparse view reconstruction problem (i.e., compressed sensing problem) is entangled with the classical interior tomographic problems.
Affiliation(s)
- Yinsheng Li: Department of Medical Physics, University of Wisconsin-Madison
- Ke Li: Department of Medical Physics, University of Wisconsin-Madison; Department of Radiology, University of Wisconsin-Madison
- Chengzhu Zhang: Department of Medical Physics, University of Wisconsin-Madison
- Juan Montoya: Department of Medical Physics, University of Wisconsin-Madison
- Guang-Hong Chen: Department of Medical Physics, University of Wisconsin-Madison; Department of Radiology, University of Wisconsin-Madison
|
175
|
Wu D, Kim K, Li Q. Computationally efficient deep neural network for computed tomography image reconstruction. Med Phys 2019; 46:4763-4776. [PMID: 31132144 DOI: 10.1002/mp.13627] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2019] [Revised: 04/22/2019] [Accepted: 05/14/2019] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Deep neural network-based image reconstruction has demonstrated promising performance in medical imaging for undersampled and low-dose scenarios. However, it requires a large amount of memory and extensive time for training. It is especially challenging to train reconstruction networks for three-dimensional computed tomography (CT) because of the high resolution of CT images. The purpose of this work is to reduce the memory and time consumption of training reconstruction networks for CT, to make it practical for current hardware while maintaining the quality of the reconstructed images. METHODS We unrolled the proximal gradient descent algorithm for iterative image reconstruction to finite iterations and replaced the terms related to the penalty function with trainable convolutional neural networks (CNN). The network was trained greedily, iteration by iteration, in the image domain on patches, which requires a reasonable amount of memory and time on a mainstream graphics processing unit (GPU). To overcome the local-minimum problem caused by greedy learning, we used a deep UNet as the CNN and incorporated separable quadratic surrogates with ordered subsets for data fidelity, so that the solution could escape from shallow local minima and achieve better image quality. RESULTS The proposed method achieved image quality comparable to state-of-the-art neural networks for CT image reconstruction on two-dimensional (2D) sparse-view and limited-angle problems on the low-dose CT challenge dataset. The difference in root-mean-square error (RMSE) and structural similarity index (SSIM) was within [-0.23, 0.47] HU and [0, 0.001], respectively, at the 95% confidence level. For three-dimensional (3D) image reconstruction with an ordinary-size CT volume, the proposed method needed only 2 GB of GPU memory and 0.45 s per training iteration as the minimum requirement, whereas existing methods may require 417 GB and 31 min. The proposed method achieved improved performance compared to total variation- and dictionary learning-based iterative reconstruction for both 2D and 3D problems. CONCLUSIONS We proposed a training-time computationally efficient neural network for CT image reconstruction. The proposed method achieved image quality comparable to state-of-the-art neural networks for CT reconstruction, with significantly reduced memory and time requirements during training. The proposed method is applicable to 3D image reconstruction problems such as cone-beam CT and tomosynthesis on mainstream GPUs.
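The unrolling idea the authors build on — proximal gradient descent with the proximal step replaced by a trainable network — can be sketched with a fixed soft-thresholding step standing in for the CNN. The paper's greedy training, UNet prox, and separable quadratic surrogates are not reproduced here; the problem sizes and sparsity penalty are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 40                         # underdetermined: fewer measurements than pixels
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 6, replace=False)] = 1.0   # sparse "image"
y = A @ x_true + 0.01 * rng.standard_normal(m)

def soft_threshold(v, t):
    # Fixed stand-in for the trainable CNN prox: the proximal operator of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_pgd(y, A, stages=200, lam=0.05):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/Lipschitz step size
    for _ in range(stages):                      # each loop body = one unrolled stage
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), lam * step)
    return x

x_hat = unrolled_pgd(y, A)
```

With a learnable denoiser in place of `soft_threshold`, each stage becomes a network layer, which is what makes greedy stage-by-stage training (and hence the memory savings described above) possible.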
Affiliation(s)
- Dufan Wu: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Kyungsang Kim: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Quanzheng Li: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
|
176
|
Chun IY, Fessler JA. Convolutional Analysis Operator Learning: Acceleration and Convergence. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 29:2108-2122. [PMID: 31484120 PMCID: PMC7170176 DOI: 10.1109/tip.2019.2937734] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
Convolutional operator learning is gaining attention in many signal processing and computer vision applications. Learning kernels has mostly relied on so-called patch-domain approaches that extract and store many overlapping patches across training signals. Due to memory demands, patch-domain methods have limitations when learning kernels from large datasets - particularly with multi-layered structures, e.g., convolutional neural networks - or when applying the learned kernels to high-dimensional signal recovery problems. The so-called convolution approach does not store many overlapping patches, and thus overcomes the memory problems particularly with careful algorithmic designs; it has been studied within the "synthesis" signal model, e.g., convolutional dictionary learning. This paper proposes a new convolutional analysis operator learning (CAOL) framework that learns an analysis sparsifying regularizer with the convolution perspective, and develops a new convergent Block Proximal Extrapolated Gradient method using a Majorizer (BPEG-M) to solve the corresponding block multi-nonconvex problems. To learn diverse filters within the CAOL framework, this paper introduces an orthogonality constraint that enforces a tight-frame filter condition, and a regularizer that promotes diversity between filters. Numerical experiments show that, with sharp majorizers, BPEG-M significantly accelerates the CAOL convergence rate compared to the state-of-the-art block proximal gradient (BPG) method. Numerical experiments for sparse-view computational tomography show that a convolutional sparsifying regularizer learned via CAOL significantly improves reconstruction quality compared to a conventional edge-preserving regularizer. Using more and wider kernels in a learned regularizer better preserves edges in reconstructed images.
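A representative form of the convolutional analysis operator learning objective reads as follows; the constraint scaling and the choice of sparsity penalty here follow the general analysis-operator literature and should be checked against the paper's exact formulation:

```latex
\min_{\{d_k\},\{z_{l,k}\}} \; \sum_{l=1}^{L}\sum_{k=1}^{K}
  \frac{1}{2}\,\big\| d_k \circledast x_l - z_{l,k} \big\|_2^2
  + \alpha \, \| z_{l,k} \|_0
\quad \text{s.t.} \quad D D^{\top} = \frac{1}{R}\, I,
```

where the $x_l$ are training signals, the $d_k \in \mathbb{R}^R$ are the analysis filters stacked into $D$, the $z_{l,k}$ are sparse feature maps, and the tight-frame constraint on $D$ is what enforces filter diversity as described in the abstract.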
|
177
|
Zhu G, Jiang B, Tong L, Xie Y, Zaharchuk G, Wintermark M. Applications of Deep Learning to Neuro-Imaging Techniques. Front Neurol 2019; 10:869. [PMID: 31474928 PMCID: PMC6702308 DOI: 10.3389/fneur.2019.00869] [Citation(s) in RCA: 70] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2019] [Accepted: 07/26/2019] [Indexed: 12/12/2022] Open
Abstract
Many deep learning applications pertaining to radiology have been proposed and studied for classification, risk assessment, segmentation tasks, diagnosis, prognosis, and even prediction of therapy responses. There are many other innovative applications of AI in various technical aspects of medical imaging, particularly applied to the acquisition of images: removing image artifacts, normalizing/harmonizing images, improving image quality, lowering radiation and contrast dose, and shortening the duration of imaging studies. This article addresses this topic and seeks to present an overview of deep learning applied to neuroimaging techniques.
Affiliation(s)
- Max Wintermark: Neuroradiology Section, Department of Radiology, Stanford Healthcare, Stanford, CA, United States
|
178
|
Liu J, Zhang Y, Zhao Q, Lv T, Wu W, Cai N, Quan G, Yang W, Chen Y, Luo L, Shu H, Coatrieux JL. Deep iterative reconstruction estimation (DIRE): approximate iterative reconstruction estimation for low dose CT imaging. Phys Med Biol 2019; 64:135007. [DOI: 10.1088/1361-6560/ab18db] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
|
179
|
Gong K, Catana C, Qi J, Li Q. PET Image Reconstruction Using Deep Image Prior. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:1655-1665. [PMID: 30575530 PMCID: PMC6584077 DOI: 10.1109/tmi.2018.2888491] [Citation(s) in RCA: 121] [Impact Index Per Article: 20.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Recently, deep neural networks have been widely and successfully applied in computer vision tasks and have attracted growing interest in medical imaging. One barrier for the application of deep neural networks to medical imaging is the need for large amounts of prior training pairs, which is not always feasible in clinical practice. This is especially true for medical image reconstruction problems, where raw data are needed. Inspired by the deep image prior framework, in this paper, we proposed a personalized network training method where no prior training pairs are needed, but only the patient's own prior information. The network is updated during the iterative reconstruction process using the patient-specific prior information and measured data. We formulated the maximum-likelihood estimation as a constrained optimization problem and solved it using the alternating direction method of multipliers algorithm. Magnetic resonance imaging guided positron emission tomography reconstruction was employed as an example to demonstrate the effectiveness of the proposed framework. Quantification results based on simulation and real data show that the proposed reconstruction framework can outperform Gaussian post-smoothing and anatomically guided reconstructions using the kernel method or the neural-network penalty.
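The constrained formulation described can be written compactly; the notation below is assumed, following the deep-image-prior convention:

```latex
\hat{x} = f(\hat{\theta} \mid z), \qquad
\hat{\theta} = \arg\max_{\theta} \; L\big(y \mid f(\theta \mid z)\big),
```

where $y$ is the measured PET data, $L$ the Poisson log-likelihood under the PET forward model, $f(\cdot \mid z)$ the network with trainable weights $\theta$, and $z$ the patient's own prior image (e.g., an MR volume) used as the fixed network input. Rather than optimizing this directly, the paper treats $x = f(\theta \mid z)$ as a constraint and alternates updates with ADMM.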
|
180
|
Häggström I, Schmidtlein CR, Campanella G, Fuchs TJ. DeepPET: A deep encoder-decoder network for directly solving the PET image reconstruction inverse problem. Med Image Anal 2019; 54:253-262. [PMID: 30954852 DOI: 10.1016/j.media.2019.03.013] [Citation(s) in RCA: 141] [Impact Index Per Article: 23.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2018] [Revised: 03/29/2019] [Accepted: 03/30/2019] [Indexed: 01/01/2023]
Abstract
The purpose of this research was to implement a deep learning network to overcome two of the major bottlenecks in improved image reconstruction for clinical positron emission tomography (PET): the lack of an automated means for the optimization of advanced image reconstruction algorithms, and the computational expense associated with these state-of-the-art methods. We thus present a novel end-to-end PET image reconstruction technique, called DeepPET, based on a deep convolutional encoder-decoder network, which takes PET sinogram data as input and directly and quickly outputs high-quality, quantitative PET images. Using simulated data derived from a whole-body digital phantom, we randomly sampled the configurable parameters to generate realistic images, which were each augmented to a total of more than 291,000 reference images. Realistic PET acquisitions of these images were simulated, resulting in noisy sinogram data used for training, validation, and testing of the DeepPET network. We demonstrated that DeepPET generates higher quality images compared to conventional techniques in terms of relative root mean squared error (11%/53% lower than ordered subset expectation maximization (OSEM) and filtered back-projection (FBP), respectively), structural similarity index (1%/11% higher than OSEM/FBP), and peak signal-to-noise ratio (1.1/3.8 dB higher than OSEM/FBP). In addition, we show that DeepPET reconstructs images 108 and 3 times faster than OSEM and FBP, respectively. Finally, DeepPET was successfully applied to real clinical data. This study shows that an end-to-end encoder-decoder network can produce high-quality PET images in a fraction of the time required by conventional methods.
Affiliation(s)
- Ida Häggström: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- C Ross Schmidtlein: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Gabriele Campanella: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States; Department of Physiology and Biophysics, Weill Cornell Medicine, New York, NY 10065, United States
- Thomas J Fuchs: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States; Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States; Department of Physiology and Biophysics, Weill Cornell Medicine, New York, NY 10065, United States
|
181
|
Mardani M, Gong E, Cheng JY, Vasanawala SS, Zaharchuk G, Xing L, Pauly JM. Deep Generative Adversarial Neural Networks for Compressive Sensing MRI. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:167-179. [PMID: 30040634 PMCID: PMC6542360 DOI: 10.1109/tmi.2018.2858752] [Citation(s) in RCA: 240] [Impact Index Per Article: 40.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/19/2023]
Abstract
Undersampled magnetic resonance image (MRI) reconstruction is typically an ill-posed linear inverse task. The time and resource intensive computations require tradeoffs between accuracy and speed. In addition, state-of-the-art compressed sensing (CS) analytics are not cognizant of the image diagnostic quality. To address these challenges, we propose a novel CS framework that uses generative adversarial networks (GAN) to model the (low-dimensional) manifold of high-quality MR images. Leveraging a mixture of least-squares (LS) GANs and pixel-wise l1/l2 cost, a deep residual network with skip connections is trained as the generator that learns to remove the aliasing artifacts by projecting onto the image manifold. The LSGAN learns the texture details, while the l1/l2 cost suppresses high-frequency noise. A discriminator network, which is a multilayer convolutional neural network (CNN), plays the role of a perceptual cost that is then jointly trained based on high-quality MR images to score the quality of retrieved images. In the operational phase, an initial aliased estimate (e.g., simply obtained by zero-filling) is propagated into the trained generator to output the desired reconstruction. This demands a very low computational overhead. Extensive evaluations are performed on a large contrast-enhanced MR dataset of pediatric patients. Images rated by expert radiologists corroborate that GANCS retrieves higher quality images with improved fine texture details compared with conventional Wavelet-based and dictionary-learning-based CS schemes as well as with deep-learning-based schemes using pixel-wise training. In addition, it offers reconstruction times of under a few milliseconds, which are two orders of magnitude faster than the current state-of-the-art CS-MRI schemes.
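The generator objective sketched in the abstract combines the pixel-wise costs with the least-squares adversarial term. In schematic form (the weights and exact expectation are illustrative, not the paper's notation):

```latex
\min_{G} \; \mathbb{E}_{(x,\tilde{x})}\Big[
  \lambda_1 \, \| x - G(\tilde{x}) \|_1
  + \lambda_2 \, \| x - G(\tilde{x}) \|_2^2
  + \eta \, \big( D(G(\tilde{x})) - 1 \big)^2 \Big],
```

where $\tilde{x}$ is the aliased (e.g., zero-filled) input, $x$ the fully sampled reference, $G$ the residual generator, and $D$ the convolutional discriminator trained with the complementary LSGAN loss; the $\ell_1/\ell_2$ terms suppress high-frequency noise while the adversarial term restores texture.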
|
182
|
Huang Y, Preuhs A, Lauritsch G, Manhart M, Huang X, Maier A. Data Consistent Artifact Reduction for Limited Angle Tomography with Deep Learning Prior. MACHINE LEARNING FOR MEDICAL IMAGE RECONSTRUCTION 2019. [DOI: 10.1007/978-3-030-33843-5_10] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
|
183
|
Maier J, Eulig E, Vöth T, Knaup M, Kuntz J, Sawall S, Kachelrieß M. Real-time scatter estimation for medical CT using the deep scatter estimation: Method and robustness analysis with respect to different anatomies, dose levels, tube voltages, and data truncation. Med Phys 2018; 46:238-249. [DOI: 10.1002/mp.13274] [Citation(s) in RCA: 42] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2018] [Revised: 10/01/2018] [Accepted: 10/29/2018] [Indexed: 01/02/2023] Open
Affiliation(s)
- Joscha Maier: German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany; Department of Physics and Astronomy, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 226, 69120 Heidelberg, Germany
- Elias Eulig: German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany; Department of Physics and Astronomy, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 226, 69120 Heidelberg, Germany
- Tim Vöth: German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany; Department of Physics and Astronomy, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 226, 69120 Heidelberg, Germany
- Michael Knaup: German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Jan Kuntz: German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Stefan Sawall: German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany; Medical Faculty, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
- Marc Kachelrieß: German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany; Medical Faculty, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
|
184
|
Hauptmann A, Lucka F, Betcke M, Huynh N, Adler J, Cox B, Beard P, Ourselin S, Arridge S. Model-Based Learning for Accelerated, Limited-View 3-D Photoacoustic Tomography. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:1382-1393. [PMID: 29870367 PMCID: PMC7613684 DOI: 10.1109/tmi.2018.2820382] [Citation(s) in RCA: 135] [Impact Index Per Article: 19.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Recent advances in deep learning for tomographic reconstruction have shown great potential to create accurate, high-quality images with a considerable speed-up. In this paper, we present a deep neural network that is specifically designed to provide high-resolution 3-D images from restricted photoacoustic measurements. The network is designed to represent an iterative scheme and incorporates gradient information of the data fit to compensate for limited-view artifacts. Because of the high complexity of the photoacoustic forward operator, we separate training from computation of the gradient information. A suitable prior for the desired image structures is learned as part of the training. The resulting network is trained and tested on a set of segmented vessels from lung computed tomography scans and then applied to in vivo photoacoustic measurement data.
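The unrolled scheme described in this abstract, where each iterate is updated from the gradient of the data-fit term, can be sketched in a few lines of NumPy. This is a toy illustration, not the authors' trained network: the matrix `A` stands in for the photoacoustic forward operator, and a hypothetical fixed scalar step size replaces the learned CNN update.

```python
import numpy as np

def unrolled_reconstruction(A, y, step, n_iters):
    """Learned-gradient iteration: each step feeds the gradient of the
    data-fit term 0.5*||Ax - y||^2 into an update rule. In the paper the
    update is a trained CNN; here a fixed scalar step stands in for it."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)   # gradient of the data-fit term
        x = x - step * grad        # CNN update replaced by a plain step
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))      # toy stand-in for the forward operator
y = A @ rng.normal(size=10)        # simulated measurements
x_rec = unrolled_reconstruction(A, y, step=0.01, n_iters=2000)
```

With a trained network in place of the fixed step, the same loop structure lets a learned prior compensate for the limited-view artifacts that plain gradient descent cannot remove.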
Affiliation(s)
- Andreas Hauptmann: Department of Computer Science, University College London, London WC1E 6BT, U.K.
- Felix Lucka: Department of Computer Science, University College London, London WC1E 6BT, U.K.; Centrum Wiskunde & Informatica, 1098 XG Amsterdam, The Netherlands
- Marta Betcke: Department of Computer Science, University College London, London WC1E 6BT, U.K.
- Nam Huynh: Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, U.K.
- Jonas Adler: Department of Mathematics, KTH Royal Institute of Technology, 100 44 Stockholm, Sweden; Elekta, 103 93 Stockholm, Sweden
- Ben Cox: Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, U.K.
- Paul Beard: Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, U.K.
- Sebastien Ourselin: Department of Computer Science, University College London, London WC1E 6BT, U.K.
- Simon Arridge: Department of Computer Science, University College London, London WC1E 6BT, U.K.
|
186
|
Zhang Y, Yu H. Convolutional Neural Network Based Metal Artifact Reduction in X-Ray Computed Tomography. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:1370-1381. [PMID: 29870366 PMCID: PMC5998663 DOI: 10.1109/tmi.2018.2823083] [Citation(s) in RCA: 207] [Impact Index Per Article: 29.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
In the presence of metal implants, metal artifacts are introduced into x-ray computed tomography (CT) images. Although a large number of metal artifact reduction (MAR) methods have been proposed in past decades, MAR remains one of the major problems in clinical x-ray CT. In this paper, we develop a convolutional neural network (CNN)-based open MAR framework, which fuses information from the original and corrected images to suppress artifacts. The proposed approach consists of two phases. In the CNN training phase, we build a database of metal-free, metal-inserted, and pre-corrected CT images, from which image patches are extracted for CNN training. In the MAR phase, the uncorrected and pre-corrected images are used as the input of the trained CNN to generate a CNN image with reduced artifacts. To further reduce the remaining artifacts, water-equivalent tissues in the CNN image are set to a uniform value to yield a CNN prior, whose forward projections replace the metal-affected projections, followed by FBP reconstruction. The effectiveness of the proposed method is validated on both simulated and real data. Experimental results demonstrate the superior MAR capability of the proposed method over its competitors in terms of artifact suppression and preservation of anatomical structures in the vicinity of metal implants.
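The prior-generation and projection-replacement steps this abstract describes can be sketched compactly. Everything below is a hypothetical toy, not the authors' implementation: the random matrix stands in for the CT projector, the threshold segmentation and the uniform water value are illustrative placeholders, and the final FBP step is omitted.

```python
import numpy as np

def cnn_prior(cnn_image, water_mask, water_value):
    """Build the CNN prior: flatten water-equivalent tissue to a uniform value."""
    prior = cnn_image.copy()
    prior[water_mask] = water_value
    return prior

def correct_projections(sino, prior_sino, metal_trace):
    """Replace metal-affected projection bins with the prior's projections."""
    out = sino.copy()
    out[metal_trace] = prior_sino[metal_trace]
    return out

rng = np.random.default_rng(1)
A = rng.random((30, 16))                  # toy matrix stand-in for the projector
image = rng.random(16)
sino = A @ image                          # metal-affected measurements
cnn_image = image + 0.01 * rng.normal(size=16)  # hypothetical CNN output
water = cnn_image < 0.5                   # crude water-equivalent segmentation
prior = cnn_prior(cnn_image, water, water_value=0.2)
metal_trace = np.zeros(30, dtype=bool)
metal_trace[5:12] = True                  # bins whose rays intersect the metal
corrected = correct_projections(sino, A @ prior, metal_trace)
# `corrected` would then be passed to FBP to reconstruct the final image.
```

The key design point the abstract highlights is that only the metal-affected bins are replaced, so projections untouched by metal keep their measured values and anatomical detail near the implant is preserved.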
|
187
|
|