1. Kazimierczak W, Wajer R, Komisarek O, Dyszkiewicz-Konwińska M, Wajer A, Kazimierczak N, Janiszewska-Olszowska J, Serafin Z. Evaluation of a Vendor-Agnostic Deep Learning Model for Noise Reduction and Image Quality Improvement in Dental CBCT. Diagnostics (Basel) 2024; 14:2410. PMID: 39518377; PMCID: PMC11545169; DOI: 10.3390/diagnostics14212410.
Abstract
BACKGROUND/OBJECTIVES To assess the impact of a vendor-agnostic deep learning model (DLM) on image quality parameters and noise reduction in dental cone-beam computed tomography (CBCT) reconstructions. METHODS This retrospective study was conducted on CBCT scans of 93 patients (41 males and 52 females, mean age 41.2 years, SD 15.8 years) from a single center using the inclusion criteria of standard radiation dose protocol images. Objective and subjective image quality was assessed in three predefined landmarks through contrast-to-noise ratio (CNR) measurements and visual assessment using a 5-point scale by three experienced readers. The inter-reader reliability and repeatability were calculated. RESULTS Eighty patients (30 males and 50 females; mean age 41.5 years, SD 15.94 years) were included in this study. The CNR in DLM reconstructions was significantly greater than in native reconstructions, and the mean CNR in regions of interest 1-3 (ROI1-3) in DLM images was 11.12 ± 9.29, while in the case of native reconstructions, it was 7.64 ± 4.33 (p < 0.001). The noise level in native reconstructions was significantly higher than in the DLM reconstructions, and the mean noise level in ROI1-3 in native images was 45.83 ± 25.89, while in the case of DLM reconstructions, it was 35.61 ± 24.28 (p < 0.05). Subjective image quality assessment revealed no statistically significant differences between native and DLM reconstructions. CONCLUSIONS The use of deep learning-based image reconstruction algorithms for CBCT imaging of the oral cavity can improve image quality by enhancing the CNR and lowering the noise.
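The objective metric in this entry is the contrast-to-noise ratio. As a rough orientation only, the sketch below shows one common way CNR is computed from an ROI and a background region; the exact ROI placement and CNR convention used in the study above are not reproduced here, and all names in the snippet are hypothetical.

```python
import numpy as np

def contrast_to_noise_ratio(image, roi_mask, background_mask):
    """Generic CNR estimate: |mean(ROI) - mean(background)| / std(background).

    One common CNR definition; the cited study may use a different convention.
    """
    roi_mean = image[roi_mask].mean()
    bg_mean = image[background_mask].mean()
    noise = image[background_mask].std()
    return abs(roi_mean - bg_mean) / noise

# Toy example: a bright square patch on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(40.0, 25.0, size=(128, 128))    # noisy background
img[40:60, 40:60] += 120.0                       # high-contrast region of interest
roi = np.zeros(img.shape, dtype=bool); roi[40:60, 40:60] = True
bg = np.zeros(img.shape, dtype=bool);  bg[90:110, 90:110] = True
print(f"CNR = {contrast_to_noise_ratio(img, roi, bg):.2f}")
```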
Affiliation(s)
- Wojciech Kazimierczak
- Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
- Department of Radiology and Diagnostic Imaging, University Hospital No. 1 in Bydgoszcz, Marii Skłodowskiej-Curie 9, 85-094 Bydgoszcz, Poland
- Kazimierczak Private Medical Practice, Dworcowa 13/u6a, 85-009 Bydgoszcz, Poland
- Róża Wajer
- Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
- Department of Radiology and Diagnostic Imaging, University Hospital No. 1 in Bydgoszcz, Marii Skłodowskiej-Curie 9, 85-094 Bydgoszcz, Poland
- Oskar Komisarek
- Department of Otolaryngology, Audiology and Phoniatrics, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
- Adrian Wajer
- Dental Primus, Poznańska 18, 88-100 Inowrocław, Poland
- Natalia Kazimierczak
- Kazimierczak Private Medical Practice, Dworcowa 13/u6a, 85-009 Bydgoszcz, Poland
- Joanna Janiszewska-Olszowska
- Department of Interdisciplinary Dentistry, Pomeranian Medical University in Szczecin, Al. Powstańców Wlkp. 72, 70-111 Szczecin, Poland
- Zbigniew Serafin
- Faculty of Medicine, Bydgoszcz University of Science and Technology, Kaliskiego 7, 85-796 Bydgoszcz, Poland
2. Kazimierczak W, Kędziora K, Janiszewska-Olszowska J, Kazimierczak N, Serafin Z. Noise-Optimized CBCT Imaging of Temporomandibular Joints-The Impact of AI on Image Quality. J Clin Med 2024; 13:1502. PMID: 38592413; PMCID: PMC10932444; DOI: 10.3390/jcm13051502.
Abstract
Background: Temporomandibular joint disorder (TMD) is a common medical condition. Cone beam computed tomography (CBCT) is effective in assessing TMD-related bone changes, but image noise may impair diagnosis. Emerging deep learning reconstruction algorithms (DLRs) could minimize noise and improve CBCT image clarity. This study compares standard and deep learning-enhanced CBCT images in terms of image quality for detecting osteoarthritis-related degeneration in temporomandibular joints (TMJs). Methods: This study analyzed CBCT images of patients with suspected temporomandibular joint degenerative joint disease (TMJ DJD). The deep learning model (DLM) reconstructions were performed with ClariCT.AI software. Image quality was evaluated objectively via the contrast-to-noise ratio (CNR) in target areas and subjectively by two experts using a five-point scale. Both readers also assessed TMJ DJD lesions. The study involved 50 patients with a mean age of 28.29 years. Results: Objective analysis revealed significantly better image quality in DLM reconstructions (CNR levels; p < 0.001). Subjective assessment showed high inter-reader agreement (κ = 0.805) but no significant difference in image quality between the reconstruction types (p = 0.055). Lesion counts were not significantly correlated with the reconstruction type (p > 0.05). Conclusions: The analyzed DLM reconstruction notably enhanced the objective image quality of TMJ CBCT images but did not significantly alter the subjective quality or DJD lesion diagnosis. However, the readers favored DLM images, indicating the potential for better TMD diagnosis with CBCT and meriting further study.
Affiliation(s)
- Wojciech Kazimierczak
- Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
- Department of Interdisciplinary Dentistry, Pomeranian Medical University in Szczecin, 70-111 Szczecin, Poland
- Kamila Kędziora
- Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
- Natalia Kazimierczak
- Kazimierczak Private Medical Practice, Dworcowa 13/u6a, 85-009 Bydgoszcz, Poland
- Zbigniew Serafin
- Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
3. Tan XI, Liu X, Xiang K, Wang J, Tan S. Deep Filtered Back Projection for CT Reconstruction. IEEE Access 2024; 12:20962-20972. PMID: 39211346; PMCID: PMC11361368; DOI: 10.1109/access.2024.3357355.
Abstract
Filtered back projection (FBP) is a classic analytical algorithm for computed tomography (CT) reconstruction with high computational efficiency. However, images reconstructed by FBP often suffer from excessive noise and artifacts. The original FBP algorithm uses a window function to smooth signals and linear interpolation to estimate projection values at unsampled locations. In this study, we propose a novel framework named DeepFBP in which an optimized filter and an optimized nonlinear interpolation operator are learned with neural networks. Specifically, the learned filter can be considered the product of an optimized window function and the ramp filter, and the learned interpolation can be considered an optimized way to utilize projection information from nearby locations through nonlinear combination. The proposed method retains the high computational efficiency of the original FBP and achieves much better reconstruction quality at different noise levels. It also outperforms the TV-based statistical iterative algorithm, with computational time reduced by roughly two orders of magnitude, as well as state-of-the-art post-processing deep learning methods that have deeper and more complicated network structures.
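For orientation, the fixed filter that DeepFBP replaces with a learned one is the ramp filter multiplied by a window (apodization) function. The sketch below is a minimal, generic implementation of that conventional filtering step for a single parallel-beam projection; it is not taken from the authors' code, and the learned nonlinear interpolation is not shown.

```python
import numpy as np

def filter_projection(proj, window="hann"):
    """Classic FBP filtering of one 1D projection: ramp filter times a window.

    DeepFBP, as described above, learns the window/filter and the interpolation
    instead of fixing them by hand; this is only the conventional baseline.
    """
    n = proj.shape[-1]
    freqs = np.fft.fftfreq(n)              # normalized frequencies
    ramp = np.abs(freqs)                   # ideal ramp filter
    if window == "hann":
        ramp *= 0.5 * (1.0 + np.cos(2.0 * np.pi * freqs))  # Hann apodization
    return np.real(np.fft.ifft(np.fft.fft(proj) * ramp))

# Toy usage: filter a noisy projection of a centered disc.
s = np.linspace(-1.0, 1.0, 256)
proj = 2.0 * np.sqrt(np.clip(0.25 - s**2, 0.0, None))     # analytic disc projection
proj += np.random.default_rng(1).normal(0.0, 0.01, proj.shape)
filtered = filter_projection(proj)
```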
Affiliation(s)
- X I Tan
- College of Electrical and Information Engineering, Hunan University of Technology, Zhuzhou 80305, China
- Xuan Liu
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Kai Xiang
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Jing Wang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Shan Tan
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
4. Zhu J, Su T, Zhang X, Cui H, Tan Y, Zheng H, Liang D, Guo J, Ge Y. Super-resolution dual-layer CBCT imaging with model-guided deep learning. Phys Med Biol 2023; 69:015016. PMID: 38048627; DOI: 10.1088/1361-6560/ad1211.
Abstract
Objective. This study aims at investigating a novel super-resolution CBCT imaging approach with a dual-layer flat panel detector (DL-FPD). Approach. With DL-FPD, the low-energy and high-energy projections acquired from the top and bottom detector layers contain over-sampled spatial information, from which super-resolution CT images can be reconstructed. A simple mathematical model is proposed to explain the signal formation procedure in DL-FPD, and a dedicated recurrent neural network, named suRi-Net, is developed based upon the above imaging model to nonlinearly retrieve the high-resolution dual-energy information. Physical benchtop experiments are conducted to validate the performance of this newly developed super-resolution CBCT imaging method. Main Results. The results demonstrate that the proposed suRi-Net can accurately retrieve high spatial resolution information from the low-energy and high-energy projections of low spatial resolution. Quantitatively, the spatial resolution of the reconstructed CBCT images from the top and bottom detector layers is increased by about 45% and 54%, respectively. Significance. In the future, suRi-Net will provide a new approach to perform high spatial resolution dual-energy imaging in DL-FPD-based CBCT systems.
Affiliation(s)
- Jiongtao Zhu
- Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, People's Republic of China
- Ting Su
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
- Xin Zhang
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
- Han Cui
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
- Yuhang Tan
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
- Hairong Zheng
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
- Dong Liang
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
- Jinchuan Guo
- Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, People's Republic of China
- Yongshuai Ge
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
5. Moriakov N, Sonke JJ, Teuwen J. End-to-end memory-efficient reconstruction for cone beam CT. Med Phys 2023; 50:7579-7593. PMID: 37846969; DOI: 10.1002/mp.16779.
Abstract
BACKGROUND Cone beam computed tomography (CBCT) plays an important role in many medical fields nowadays. Unfortunately, the potential of this imaging modality is hampered by lower image quality compared to the conventional CT, and producing accurate reconstructions remains challenging. A lot of recent research has been directed towards reconstruction methods relying on deep learning, which have shown great promise for various imaging modalities. However, practical application of deep learning to CBCT reconstruction is complicated by several issues, such as exceedingly high memory costs of deep learning methods when working with fully 3D data. Additionally, deep learning methods proposed in the literature are often trained and evaluated only on data from a specific region of interest, thus raising concerns about possible lack of generalization to other regions. PURPOSE In this work, we aim to address these limitations and propose LIRE: a learned invertible primal-dual iterative scheme for CBCT reconstruction. METHODS LIRE is a learned invertible primal-dual iterative scheme for CBCT reconstruction, wherein we employ a U-Net architecture in each primal block and a residual convolutional neural network (CNN) architecture in each dual block. Memory requirements of the network are substantially reduced while preserving its expressive power through a combination of invertible residual primal-dual blocks and patch-wise computations inside each of the blocks during both forward and backward pass. These techniques enable us to train on data with isotropic 2 mm voxel spacing, clinically-relevant projection count and detector panel resolution on current hardware with 24 GB video random access memory (VRAM). RESULTS Two LIRE models for small and for large field-of-view (FoV) setting were trained and validated on a set of 260 + 22 thorax CT scans and tested using a set of 142 thorax CT scans plus an out-of-distribution dataset of 79 head and neck CT scans. For both settings, our method surpasses the classical methods and the deep learning baselines on both test sets. On the thorax CT set, our method achieves peak signal-to-noise ratio (PSNR) of 33.84 ± 2.28 for the small FoV setting and 35.14 ± 2.69 for the large FoV setting; U-Net baseline achieves PSNR of 33.08 ± 1.75 and 34.29 ± 2.71 respectively. On the head and neck CT set, our method achieves PSNR of 39.35 ± 1.75 for the small FoV setting and 41.21 ± 1.41 for the large FoV setting; U-Net baseline achieves PSNR of 33.08 ± 1.75 and 34.29 ± 2.71 respectively. Additionally, we demonstrate that LIRE can be finetuned to reconstruct high-resolution CBCT data with the same geometry but 1 mm voxel spacing and higher detector panel resolution, where it outperforms the U-Net baseline as well. CONCLUSIONS Learned invertible primal-dual schemes with additional memory optimizations can be trained to reconstruct CBCT volumes directly from the projection data with clinically-relevant geometry and resolution. Such methods can offer better reconstruction quality and generalization compared to classical deep learning baselines.
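The PSNR figures quoted above follow the usual definition; as a reference point, a minimal implementation is sketched below. The choice of data range is an assumption here (papers differ on this convention), so absolute values are only comparable when the convention matches.

```python
import numpy as np

def psnr(reference, reconstruction, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and a reconstruction."""
    reference = np.asarray(reference, dtype=np.float64)
    reconstruction = np.asarray(reconstruction, dtype=np.float64)
    if data_range is None:
        # Assumption: use the dynamic range of the reference volume as the peak.
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(data_range**2 / mse)
```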
Affiliation(s)
- Nikita Moriakov
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
- Jan-Jakob Sonke
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
- Jonas Teuwen
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
6. Gao Y, Tan J, Shi Y, Zhang H, Lu S, Gupta A, Li H, Reiter M, Liang Z. Machine Learned Texture Prior From Full-Dose CT Database via Multi-Modality Feature Selection for Bayesian Reconstruction of Low-Dose CT. IEEE Trans Med Imaging 2023; 42:3129-3139. PMID: 34968178; PMCID: PMC9243192; DOI: 10.1109/tmi.2021.3139533.
Abstract
In our earlier study, we proposed a regional Markov random field type, tissue-specific texture prior derived from a previous full-dose computed tomography (FdCT) scan for current low-dose CT (LdCT) imaging, which showed clinical benefits through task-based evaluation. Nevertheless, two assumptions were made in that earlier study: that the center pixel has a linear relationship with its nearby neighbors, and that previous FdCT scans of the same subject are available. To eliminate these two assumptions, we propose a database-assisted end-to-end LdCT reconstruction framework that includes a deep learning texture prior model and a multi-modality feature-based candidate selection model. A convolutional neural network-based texture prior is proposed to eliminate the linear relationship assumption. For scenarios in which the subject has no previous FdCT scans, we propose selecting a suitable prior candidate from the FdCT database using multi-modality features. Features from three modalities are used, including the subjects' physiological factors, the CT scan protocol, and a novel feature named Lung Mark, which is deliberately designed to reflect the z-axial property of human anatomy. Moreover, a majority vote strategy is designed to overcome the noise effect from LdCT scans. Experimental results showed the effectiveness of Lung Mark. The selection model achieved an accuracy of 84% when tested on 1,470 images from 49 subjects. The texture prior learned from the FdCT database provided reconstructions comparable to those of subjects with their own corresponding FdCT scans. This study demonstrated the feasibility of bringing clinically relevant textures from an available FdCT database to Bayesian reconstruction of any current LdCT scan.
7. Liu X, Liang X, Deng L, Tan S, Xie Y. Learning low-dose CT degradation from unpaired data with flow-based model. Med Phys 2022; 49:7516-7530. PMID: 35880375; DOI: 10.1002/mp.15886.
Abstract
BACKGROUND There has been growing interest in low-dose computed tomography (LDCT) for reducing the X-ray radiation to patients. However, LDCT always suffers from complex noise in reconstructed images. Although deep learning-based methods have shown strong performance in LDCT denoising, most of them require a large number of paired training data of normal-dose CT (NDCT) images and LDCT images, which are hard to acquire in the clinic. Lack of paired training data significantly undermines the practicability of supervised deep learning-based methods. To alleviate this problem, unsupervised or weakly supervised deep learning-based methods are required. PURPOSE We aimed to propose a method that achieves LDCT denoising without training pairs. Specifically, we first trained a neural network in a weakly supervised manner to simulate LDCT images from NDCT images. Then, the simulated training pairs could be used for supervised deep denoising networks. METHODS We proposed a weakly supervised method to learn the degradation of LDCT from unpaired LDCT and NDCT images. Concretely, LDCT and normal-dose images were fed into one shared flow-based model and projected to the latent space. Then, the degradation between low-dose and normal-dose images was modeled in the latent space. Finally, the model was trained by minimizing the negative log-likelihood loss with no requirement for paired training data. After training, an NDCT image can be input to the trained flow-based model to generate the corresponding LDCT image. The simulated image pairs of NDCT and LDCT can then be used to train supervised denoising neural networks. RESULTS Our method achieved much better performance on LDCT image simulation than the most widely used image-to-image translation method, CycleGAN, according to the radial noise power spectrum. The simulated image pairs could be used for any supervised LDCT denoising neural network. We validated the effectiveness of our generated image pairs on a classic convolutional neural network, REDCNN, and a novel transformer-based model, TransCT. Our method achieved a mean peak signal-to-noise ratio (PSNR) of 24.43 dB and a mean structural similarity (SSIM) of 0.785 on an abdomen CT dataset, and a mean PSNR of 33.88 dB and a mean SSIM of 0.797 on a chest CT dataset, which outperformed several traditional CT denoising methods, the same network trained on CycleGAN-generated data, and a novel transfer learning method. Besides, our method was on par with the supervised networks in terms of visual effects. CONCLUSION We proposed a flow-based method to learn LDCT degradation from only unpaired training data. It achieved impressive performance on LDCT synthesis. Neural networks can then be trained with the generated paired data for LDCT denoising. The denoising results are better than those of traditional and weakly supervised methods and comparable to those of supervised deep learning methods.
Affiliation(s)
- Xuan Liu
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
- Xiaokun Liang
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Lei Deng
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shan Tan
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
- Yaoqin Xie
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
8. Rusanov B, Hassan GM, Reynolds M, Sabet M, Kendrick J, Farzad PR, Ebert M. Deep learning methods for enhancing cone-beam CT image quality towards adaptive radiation therapy: A systematic review. Med Phys 2022; 49:6019-6054. PMID: 35789489; PMCID: PMC9543319; DOI: 10.1002/mp.15840.
Abstract
The use of deep learning (DL) to improve cone-beam CT (CBCT) image quality has gained popularity as computational resources and algorithmic sophistication have advanced in tandem. CBCT imaging has the potential to facilitate online adaptive radiation therapy (ART) by utilizing up-to-date patient anatomy to modify treatment parameters before irradiation. Poor CBCT image quality has been an impediment to realizing ART due to the increased scatter conditions inherent to cone-beam acquisitions. Given the recent interest in DL applications in radiation oncology, and specifically DL for CBCT correction, we provide a systematic theoretical and literature review for future stakeholders. The review encompasses DL approaches for synthetic CT generation, as well as projection domain methods employed in the CBCT correction literature. We review trends pertaining to publications from January 2018 to April 2022 and condense their major findings, with emphasis on study design and deep learning techniques. Clinically relevant endpoints relating to image quality and dosimetric accuracy are summarised, highlighting gaps in the literature. Finally, we make recommendations for both clinicians and DL practitioners based on literature trends and the current state-of-the-art DL methods utilized in radiation oncology.
Affiliation(s)
- Branimir Rusanov
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
- Ghulam Mubashar Hassan
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Mark Reynolds
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Mahsheed Sabet
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
- Jake Kendrick
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
- Pejman Rowshan Farzad
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
- Martin Ebert
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
9. Xu S, Yang B, Xu C, Tian J, Liu Y, Yin L, Liu S, Zheng W, Liu C. Sparse Angle CBCT Reconstruction Based on Guided Image Filtering. Front Oncol 2022; 12:832037. PMID: 35574417; PMCID: PMC9093219; DOI: 10.3389/fonc.2022.832037.
Abstract
Cone-beam computed tomography (CBCT) has the advantages of high ray utilization and detection efficiency, short scan time, and high, isotropic spatial resolution. However, the X-rays emitted during a CBCT examination are harmful to the human body, so reducing the radiation dose without damaging reconstruction quality is the key challenge in CBCT reconstruction. In this paper, we propose a sparse angle CBCT reconstruction algorithm based on Guided Image Filtering (GIF), which combines the classic Simultaneous Algebraic Reconstruction Technique (SART) and Total p-Variation (TpV) minimization. Owing to the good edge-preserving ability of SART and the noise-suppression ability of TpV minimization, the proposed method can suppress noise and artifacts while preserving edge and texture information in reconstructed images. Experimental results based on simulated and real-measured CBCT datasets demonstrate the advantages of the proposed method.
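For readers unfamiliar with the guided filter itself, the sketch below is a minimal gray-scale implementation of the standard guided image filter (local linear model with box-filter statistics). It is a generic illustration with assumed parameter names; how the cited method couples GIF with the SART/TpV iterations is not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Minimal gray-scale guided image filter.

    guide, src : 2D float arrays of equal shape; radius sets the box window,
    eps controls edge preservation (smaller eps follows edges more closely).
    """
    size = 2 * radius + 1
    box = lambda x: uniform_filter(x, size=size)     # local box means
    mean_I, mean_p = box(guide), box(src)
    cov_Ip = box(guide * src) - mean_I * mean_p
    var_I = box(guide * guide) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)                       # local linear coefficients
    b = mean_p - a * mean_I
    return box(a) * guide + box(b)                   # edge-aware smoothed output
```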
Affiliation(s)
- Siyuan Xu
- School of Automation, University of Electronic Science and Technology of China, Chengdu, China
- Bo Yang
- School of Automation, University of Electronic Science and Technology of China, Chengdu, China
- Congcong Xu
- School of Automation, University of Electronic Science and Technology of China, Chengdu, China
- Jiawei Tian
- School of Automation, University of Electronic Science and Technology of China, Chengdu, China
- Yan Liu
- School of Automation, University of Electronic Science and Technology of China, Chengdu, China
- Lirong Yin
- Department of Geography and Anthropology, Louisiana State University, Baton Rouge, LA, United States
- Shan Liu
- School of Automation, University of Electronic Science and Technology of China, Chengdu, China
- Wenfeng Zheng
- School of Automation, University of Electronic Science and Technology of China, Chengdu, China
- Chao Liu
- Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM), Unité Mixte de Recherche (UMR) 5506, French National Center for Scientific Research (CNRS) - University of Montpellier (UM), Montpellier, France
10. Fu Y, Zhang H, Morris ED, Glide-Hurst CK, Pai S, Traverso A, Wee L, Hadzic I, Lønne PI, Shen C, Liu T, Yang X. Artificial Intelligence in Radiation Therapy. IEEE Trans Radiat Plasma Med Sci 2022; 6:158-181. PMID: 35992632; PMCID: PMC9385128; DOI: 10.1109/trpms.2021.3107454.
Abstract
Artificial intelligence (AI) has great potential to transform the clinical workflow of radiotherapy. Since the introduction of deep neural networks, many AI-based methods have been proposed to address challenges in different aspects of radiotherapy. Commercial vendors have started to release AI-based tools that can be readily integrated into the established clinical workflow. To show the recent progress in AI-aided radiotherapy, we have reviewed AI-based studies in five major aspects of radiotherapy: image reconstruction, image registration, image segmentation, image synthesis, and automatic treatment planning. In each section, we summarize and categorize the recently published methods, followed by a discussion of the challenges, concerns, and future development. Given the rapid development of AI-aided radiotherapy, the efficiency and effectiveness of radiotherapy in the future could be substantially improved through intelligent automation of its various aspects.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Hao Zhang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Eric D. Morris
- Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA 90095, USA
- Carri K. Glide-Hurst
- Department of Human Oncology, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53792, USA
- Suraj Pai
- Maastricht University Medical Centre, Netherlands
- Leonard Wee
- Maastricht University Medical Centre, Netherlands
- Per-Ivar Lønne
- Department of Medical Physics, Oslo University Hospital, PO Box 4953 Nydalen, 0424 Oslo, Norway
- Chenyang Shen
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75002, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
11. Rabbani H, Teyfouri N, Jabbari I. Low-dose cone-beam computed tomography reconstruction through a fast three-dimensional compressed sensing method based on the three-dimensional pseudo-polar Fourier transform. J Med Signals Sens 2022; 12:8-24. PMID: 35265461; PMCID: PMC8804585; DOI: 10.4103/jmss.jmss_114_21.
Abstract
Background: Reconstruction of high-quality two-dimensional images from fan-beam computed tomography (CT) with a limited number of projections is already feasible through Fourier-based iterative reconstruction methods. This article, however, focuses on the more complicated reconstruction of three-dimensional (3D) images in sparse-view cone-beam computed tomography (CBCT) by utilizing compressive sensing (CS) based on the 3D pseudo-polar Fourier transform (PPFT). Method: Compared with the prevalent Cartesian grid, PPFT regridding removes rebinning and interpolation errors. Furthermore, using the PPFT-based Radon transform as the measurement matrix reduces the computational complexity. Results: To show the computational efficiency of the proposed method, we compared it with an algebraic reconstruction technique and a CS-type algorithm. Our algorithm converged in fewer than 20 iterations, whereas the others needed at least 50 iterations to reconstruct a qualified phantom image. Furthermore, using a fast composite splitting algorithm solver in each iteration makes it a fast CBCT reconstruction algorithm. The algorithm minimizes a linear combination of three terms: a least-squares data-fitting term, a Hessian (HS) penalty, and l1-norm wavelet regularization; we named it PP-based compressed sensing-HS-W. With 120 projections over a 360° rotation, the image quality is visually similar to images reconstructed by the Feldkamp-Davis-Kress algorithm using 720 projections, representing a substantial dose reduction. Conclusion: The main achievement of this work is reducing the radiation dose without degrading image quality. Its ability to remove the staircase effect, preserve edges and regions with smooth intensity transitions, and produce high-resolution, low-noise reconstructions at low dose levels is also shown.
12. Zeng D, Wang L, Geng M, Li S, Deng Y, Xie Q, Li D, Zhang H, Li Y, Xu Z, Meng D, Ma J. Noise-Generating-Mechanism-Driven Unsupervised Learning for Low-Dose CT Sinogram Recovery. IEEE Trans Radiat Plasma Med Sci 2022. DOI: 10.1109/trpms.2021.3083361.
13. Tao X, Wang Y, Lin L, Hong Z, Ma J. Learning to Reconstruct CT Images From the VVBP-Tensor. IEEE Trans Med Imaging 2021; 40:3030-3041. PMID: 34138703; DOI: 10.1109/tmi.2021.3090257.
Abstract
Deep learning (DL) is driving major advances in the field of computed tomography (CT) imaging. In general, DL for CT imaging can be applied by processing the projection or the image data with trained deep neural networks (DNNs), unrolling the iterative reconstruction as a DNN for training, or training a well-designed DNN to directly reconstruct the image from the projection. In all of these applications, the whole or part of the DNNs work in the projection or image domain alone or in combination. In this study, instead of focusing on the projection or image, we train DNNs to reconstruct CT images from the view-by-view backprojection tensor (VVBP-Tensor). The VVBP-Tensor is the 3D data before summation in backprojection. It contains structures of the scanned object after applying a sorting operation. Unlike the image or projection, which provides compressed information due to the integration/summation step in forward or back projection, the VVBP-Tensor provides lossless information for processing, allowing the trained DNNs to preserve fine details of the image. We develop a learning strategy by inputting slices of the VVBP-Tensor as feature maps and outputting the image. Such a strategy can be viewed as a generalization of the summation step in conventional filtered backprojection reconstruction. Numerous experiments reveal that the proposed VVBP-Tensor domain learning framework obtains significant improvement over the image, projection, and hybrid projection-image domain learning frameworks. We hope the VVBP-Tensor domain learning framework will inspire algorithm development for DL-based CT imaging.
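To make the VVBP-Tensor idea concrete, the toy sketch below builds the tensor for a 2D parallel-beam geometry: each filtered projection is backprojected (smeared and rotated) into its own slice rather than summed, and the stack is then sorted per pixel along the view axis as the abstract describes. Real fan/cone-beam geometry and the authors' implementation details are not reproduced; this is a conceptual illustration only.

```python
import numpy as np
from scipy.ndimage import rotate

def vvbp_tensor(filtered_sinogram, angles_deg):
    """Toy view-by-view backprojection tensor for 2D parallel-beam CT.

    filtered_sinogram : array of shape (n_views, n_det), already ramp-filtered.
    Returns an array of shape (n_views, n_det, n_det): one backprojection per
    view, sorted per pixel along the view axis instead of summed.
    """
    n_views, n_det = filtered_sinogram.shape
    tensor = np.empty((n_views, n_det, n_det), dtype=np.float64)
    for k, (proj, ang) in enumerate(zip(filtered_sinogram, angles_deg)):
        smear = np.tile(proj, (n_det, 1))                       # backproject one view
        tensor[k] = rotate(smear, ang, reshape=False, order=1)  # orient along its angle
    return np.sort(tensor, axis=0)                              # per-pixel sorting step

# Conventional FBP corresponds to tensor.sum(axis=0) before any sorting.
```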
14. Zhang C, Li Y, Chen GH. Accurate and robust sparse-view angle CT image reconstruction using deep learning and prior image constrained compressed sensing (DL-PICCS). Med Phys 2021; 48:5765-5781. PMID: 34458996; DOI: 10.1002/mp.15183.
Abstract
BACKGROUND Sparse-view CT image reconstruction problems encountered in dynamic CT acquisitions are technically challenging. Recently, many deep learning strategies have been proposed to reconstruct CT images from sparse-view angle acquisitions, showing promising results. However, two fundamental problems with these deep learning reconstruction methods remain to be addressed: (1) limited reconstruction accuracy for individual patients and (2) limited generalizability for patient statistical cohorts. PURPOSE The purpose of this work is to address the previously mentioned challenges in current deep learning methods. METHODS A method that combines a deep learning strategy with prior image constrained compressed sensing (PICCS) was developed to address these two problems. In this method, the sparse-view CT data were reconstructed by the conventional filtered backprojection (FBP) method first and then processed by the trained deep neural network to eliminate streaking artifacts. The outputs of the deep learning architecture were then used as the needed prior image in PICCS to reconstruct the image. If the noise level from the PICCS reconstruction is not satisfactory, another light-duty deep neural network can then be used to reduce the noise level. Both extensive numerical simulation data and human subject data have been used to quantitatively and qualitatively assess the performance of the proposed DL-PICCS method in terms of reconstruction accuracy and generalizability. RESULTS Extensive evaluation studies have demonstrated that: (1) the quantitative reconstruction accuracy of DL-PICCS for individual patients is improved when compared with deep learning methods and CS-based methods; (2) the false-positive lesion-like structures and false-negative missing anatomical structures in the deep learning approaches can be effectively eliminated in the DL-PICCS reconstructed images; and (3) DL-PICCS enables a deep learning scheme to relax its working conditions to enhance its generalizability. CONCLUSIONS DL-PICCS offers a promising opportunity to achieve personalized reconstruction with improved reconstruction accuracy and enhanced generalizability.
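As background for the DL-PICCS combination described above, the conventional PICCS objective that the deep-learning output enters as a prior image can be written in its commonly used form (generic notation, not taken verbatim from this paper):

```latex
\min_{x}\;\alpha \,\bigl\lVert \Psi\,(x - x_{\text{prior}}) \bigr\rVert_{1}
        + (1-\alpha)\,\bigl\lVert \Psi\, x \bigr\rVert_{1}
\quad \text{subject to}\quad A x = y
```

Here x_prior would be the streak-suppressed image produced by the deep network, Ψ a sparsifying transform, A the system matrix, y the sparse-view data, and α the weight balancing fidelity to the prior image against conventional compressed-sensing regularization.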
Affiliation(s)
- Chengzhu Zhang
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Yinsheng Li
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Guang-Hong Chen
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
15. Whiteley W, Luk WK, Gregor J. DirectPET: full-size neural network PET reconstruction from sinogram data. J Med Imaging (Bellingham) 2020; 7:032503. PMID: 32206686; PMCID: PMC7048204; DOI: 10.1117/1.jmi.7.3.032503.
Abstract
Purpose: Neural network image reconstruction directly from measurement data is a relatively new field of research, which until now has been limited to producing small single-slice images (e.g., 1 × 128 × 128). We proposed a more efficient network design for positron emission tomography called DirectPET, which is capable of reconstructing multislice image volumes (i.e., 16 × 400 × 400) from sinograms. Approach: Large-scale direct neural network reconstruction is accomplished by addressing the associated memory space challenge through the introduction of a specially designed Radon inversion layer. Using patient data, we compare the proposed method to the benchmark ordered subsets expectation maximization (OSEM) algorithm using signal-to-noise ratio, bias, mean absolute error, and structural similarity measures. In addition, line profiles and full-width half-maximum measurements are provided for a sample of lesions. Results: DirectPET is shown capable of producing images that are quantitatively and qualitatively similar to the OSEM target images in a fraction of the time. We also report on an experiment where DirectPET is trained to map low-count raw data to normal count target images, demonstrating the method's ability to maintain image quality under a low-dose scenario. Conclusion: The ability of DirectPET to quickly reconstruct high-quality, multislice image volumes suggests potential clinical viability of the method. However, design parameters and performance boundaries need to be fully established before adoption can be considered.
Affiliation(s)
- William Whiteley
- The University of Tennessee, Department of Electrical Engineering and Computer Science, Knoxville, Tennessee, United States
- Siemens Medical Solutions USA, Inc., Knoxville, Tennessee, United States
- Wing K. Luk
- Siemens Medical Solutions USA, Inc., Knoxville, Tennessee, United States
- Jens Gregor
- The University of Tennessee, Department of Electrical Engineering and Computer Science, Knoxville, Tennessee, United States
16. Yang F, Zhang D, Zhang H, Huang K, Du Y, Teng M. Streaking artifacts suppression for cone-beam computed tomography with the residual learning in neural network. Neurocomputing 2020. DOI: 10.1016/j.neucom.2019.09.087.
17. Tschauner S, Marterer R, Nagy E, Singer G, Riccabona M, Sorantin E. Experiences with image quality and radiation dose of cone beam computed tomography (CBCT) and multidetector computed tomography (MDCT) in pediatric extremity trauma. Skeletal Radiol 2020; 49:1939-1949. PMID: 32535775; PMCID: PMC7652807; DOI: 10.1007/s00256-020-03506-9.
Abstract
INTRODUCTION Novel dedicated extremity cone beam computed tomography (CBCT) devices, recently introduced to the market, have raised attention as a possible alternative in advanced diagnostic pediatric trauma imaging, today usually performed by multidetector computed tomography (MDCT). This work aimed to compare the image quality and radiation dose of CBCT and MDCT. MATERIALS AND METHODS Fifty-four CBCT-MDCT examination pairs, containing nine MDCTs acquired in parallel prospectively and 45 MDCTs matched in retrospect, were included in this study. Image quality was analyzed semi-objectively by measuring noise, contrast-to-noise ratio (CNR), and signal-to-noise ratios (SNR), and subjectively by performing image impression ratings. CT dose records were read out. RESULTS Image noise was significantly lower in CBCT compared with MDCT, both semi-objectively and subjectively (both p < 0.001). CNR and SNRs were also in favor of CBCT, though CBCT examinations exhibited significantly more beam hardening artifacts that diminished the advantages of the superior semi-objective image quality. These artifacts were believed to occur more often in children due to numerous bone-cartilage transitions in open growth plates and may have led to a better subjective diagnostic certainty rating (p = 0.001). Motion artifacts were infrequent but were observed exclusively in CBCT. The CT dose index (CTDIvol) was substantially lower in CBCT (p < 0.001). CONCLUSION Dedicated extremity CBCT could be an alternative low-dose modality in the diagnostic pathway of pediatric fractures. At lower doses compared with MDCT, and although commonly affected by beam hardening artifacts, semi-objective CBCT image quality parameters were generally better than in MDCT.
Affiliation(s)
- Sebastian Tschauner
- Division of Pediatric Radiology, Department of Radiology, Medical University of Graz, Auenbruggerplatz 34, 8036 Graz, Austria
- Robert Marterer
- Division of Pediatric Radiology, Department of Radiology, Medical University of Graz, Auenbruggerplatz 34, 8036 Graz, Austria
- Eszter Nagy
- Division of Pediatric Radiology, Department of Radiology, Medical University of Graz, Auenbruggerplatz 34, 8036 Graz, Austria
- Georg Singer
- Department of Paediatric and Adolescent Surgery, Medical University of Graz, Auenbruggerplatz 34, 8036 Graz, Austria
- Michael Riccabona
- Division of Pediatric Radiology, Department of Radiology, Medical University of Graz, Auenbruggerplatz 34, 8036 Graz, Austria
- Erich Sorantin
- Division of Pediatric Radiology, Department of Radiology, Medical University of Graz, Auenbruggerplatz 34, 8036 Graz, Austria
18. Jiang Z, Chen Y, Zhang Y, Ge Y, Yin FF, Ren L. Augmentation of CBCT Reconstructed From Under-Sampled Projections Using Deep Learning. IEEE Trans Med Imaging 2019; 38:2705-2715. PMID: 31021791; PMCID: PMC6812588; DOI: 10.1109/tmi.2019.2912791.
Abstract
Edges tend to be over-smoothed in total variation (TV) regularized under-sampled images. In this paper, symmetric residual convolutional neural network (SR-CNN), a deep learning based model, was proposed to enhance the sharpness of edges and detailed anatomical structures in under-sampled cone-beam computed tomography (CBCT). For training, CBCT images were reconstructed using TV-based method from limited projections simulated from the ground truth CT, and were fed into SR-CNN, which was trained to learn a restoring pattern from under-sampled images to the ground truth. For testing, under-sampled CBCT was reconstructed using TV regularization and was then augmented by SR-CNN. Performance of SR-CNN was evaluated using phantom and patient images of various disease sites acquired at different institutions both qualitatively and quantitatively using structure similarity (SSIM) and peak signal-to-noise ratio (PSNR). SR-CNN substantially enhanced image details in the TV-based CBCT across all experiments. In the patient study using real projections, SR-CNN augmented CBCT images reconstructed from as low as 120 half-fan projections to image quality comparable to the reference fully-sampled FDK reconstruction using 900 projections. In the tumor localization study, improvements in the tumor localization accuracy were made by the SR-CNN augmented images compared with the conventional FDK and TV-based images. SR-CNN demonstrated robustness against noise levels and projection number reductions and generalization for various disease sites and datasets from different institutions. Overall, the SR-CNN-based image augmentation technique was efficient and effective in considerably enhancing edges and anatomical structures in under-sampled 3D/4D-CBCT, which can be very valuable for image-guided radiotherapy.
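The SR-CNN described above is built on residual learning (the network predicts a correction that is added back to its input). The authors' exact architecture is not reproduced here; the sketch below is only a generic 2D residual convolutional block in PyTorch, with assumed channel counts, to illustrate the idea.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Generic residual block: output = input + learned correction.

    Illustrates residual learning as used for sharpening under-sampled CBCT;
    it is not the SR-CNN architecture from the cited paper.
    """
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)   # skip connection carries the input through

# A sharpening network could lift a 1-channel CBCT slice into feature space,
# apply several such blocks, and project back to predict the residual.
```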
Affiliation(s)
- Zhuoran Jiang
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, North Carolina, 27710, USA
- School of Electronic Science and Engineering, Nanjing University, 163 Xianlin Road, Nanjing, Jiangsu, 210046, China
- Yingxuan Chen
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA
- Yawei Zhang
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, North Carolina, 27710, USA
- Yun Ge
- School of Electronic Science and Engineering, Nanjing University, 163 Xianlin Road, Nanjing, Jiangsu, 210046, China
- Fang-Fang Yin
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, North Carolina, 27710, USA
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, 215316, China
- Lei Ren
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, North Carolina, 27710, USA
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA
19. Li Y, Li K, Zhang C, Montoya J, Chen GH. Learning to Reconstruct Computed Tomography Images Directly From Sinogram Data Under A Variety of Data Acquisition Conditions. IEEE Trans Med Imaging 2019; 38:2469-2481. PMID: 30990179; PMCID: PMC7962902; DOI: 10.1109/tmi.2019.2910760.
Abstract
Computed tomography (CT) is widely used in medical diagnosis and non-destructive detection. Image reconstruction in CT aims to accurately recover pixel values from measured line integrals, i.e., the summed pixel values along straight lines. Provided that the acquired data satisfy the data sufficiency condition as well as other conditions regarding the view angle sampling interval and the severity of transverse data truncation, researchers have discovered many solutions to accurately reconstruct the image. However, if these conditions are violated, accurate image reconstruction from line integrals remains an intellectual challenge. In this paper, a deep learning method with a common network architecture, termed iCT-Net, was developed and trained to accurately reconstruct images for previously solved and unsolved CT reconstruction problems with high quantitative accuracy. Particularly, accurate reconstructions were achieved for the case when the sparse view reconstruction problem (i.e., compressed sensing problem) is entangled with the classical interior tomographic problems.
Affiliation(s)
- Yinsheng Li
- Department of Medical Physics at the University of Wisconsin-Madison
- Ke Li
- Department of Medical Physics at the University of Wisconsin-Madison
- Department of Radiology at the University of Wisconsin-Madison
- Chengzhu Zhang
- Department of Medical Physics at the University of Wisconsin-Madison
- Juan Montoya
- Department of Medical Physics at the University of Wisconsin-Madison
- Guang-Hong Chen
- Department of Medical Physics at the University of Wisconsin-Madison
- Department of Radiology at the University of Wisconsin-Madison
20. Häggström I, Schmidtlein CR, Campanella G, Fuchs TJ. DeepPET: A deep encoder-decoder network for directly solving the PET image reconstruction inverse problem. Med Image Anal 2019; 54:253-262. PMID: 30954852; DOI: 10.1016/j.media.2019.03.013.
Abstract
The purpose of this research was to implement a deep learning network to overcome two of the major bottlenecks in improved image reconstruction for clinical positron emission tomography (PET). These are the lack of an automated means for the optimization of advanced image reconstruction algorithms, and the computational expense associated with these state-of-the-art methods. We thus present a novel end-to-end PET image reconstruction technique, called DeepPET, based on a deep convolutional encoder-decoder network, which takes PET sinogram data as input and directly and quickly outputs high-quality, quantitative PET images. Using simulated data derived from a whole-body digital phantom, we randomly sampled the configurable parameters to generate realistic images, which were each augmented to a total of more than 291,000 reference images. Realistic PET acquisitions of these images were simulated, resulting in noisy sinogram data, used for training, validation, and testing the DeepPET network. We demonstrated that DeepPET generates higher quality images compared to conventional techniques, in terms of relative root mean squared error (11%/53% lower than ordered subset expectation maximization (OSEM)/filtered back-projection (FBP)), structural similarity index (1%/11% higher than OSEM/FBP), and peak signal-to-noise ratio (1.1/3.8 dB higher than OSEM/FBP). In addition, we show that DeepPET reconstructs images 108 and 3 times faster than OSEM and FBP, respectively. Finally, DeepPET was successfully applied to real clinical data. This study shows that an end-to-end encoder-decoder network can produce high quality PET images at a fraction of the time compared to conventional methods.
Affiliation(s)
- Ida Häggström
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- C Ross Schmidtlein
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Gabriele Campanella
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States; Department of Physiology and Biophysics, Weill Cornell Medicine, New York, NY 10065, United States
- Thomas J Fuchs
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States; Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States; Department of Physiology and Biophysics, Weill Cornell Medicine, New York, NY 10065, United States
21. Potential of a machine-learning model for dose optimization in CT quality assurance. Eur Radiol 2019; 29:3705-3713. DOI: 10.1007/s00330-019-6013-6.