1
Zhang R, Szczykutowicz TP, Toia GV. Artificial Intelligence in Computed Tomography Image Reconstruction: A Review of Recent Advances. J Comput Assist Tomogr 2025:00004728-990000000-00429. [PMID: 40008975] [DOI: 10.1097/rct.0000000000001734]
Abstract
The development of novel image reconstruction algorithms has been pivotal in enhancing image quality and reducing radiation dose in computed tomography (CT) imaging. Traditional techniques like filtered back projection perform well under ideal conditions but fail to generate high-quality images under low-dose, sparse-view, and limited-angle conditions. Iterative reconstruction methods improve upon filtered back projection by incorporating system models and assumptions about the patient, yet they can suffer from patchy image textures. The emergence of artificial intelligence (AI), particularly deep learning, has further advanced CT reconstruction. AI techniques have demonstrated great potential in reducing radiation dose while preserving image quality and noise texture. Moreover, AI has exhibited unprecedented performance in addressing challenging CT reconstruction problems, including low-dose CT, sparse-view CT, limited-angle CT, and interior tomography. This review focuses on the latest advances in AI-based CT reconstruction under these challenging conditions.
Affiliation(s)
- Ran Zhang
- Departments of Radiology and Medical Physics, University of Wisconsin, Madison, WI
2
Li G, Deng Z, Ge Y, Luo S. HEAL: High-Frequency Enhanced and Attention-Guided Learning Network for Sparse-View CT Reconstruction. Bioengineering (Basel) 2024; 11:646. [PMID: 39061728] [PMCID: PMC11273693] [DOI: 10.3390/bioengineering11070646]
Abstract
X-ray computed tomography (CT) has become an indispensable diagnostic tool in clinical examination. However, it carries a risk from ionizing radiation, making radiation dose reduction one of the current research hotspots in CT imaging. Sparse-view imaging, one of the main approaches to dose reduction, has made significant progress in recent years; in particular, sparse-view reconstruction methods based on deep learning have shown promising results. Nevertheless, efficiently recovering image details under ultra-sparse conditions remains a challenge. To address this challenge, this paper proposes a high-frequency enhanced and attention-guided learning network (HEAL). HEAL includes three optimization strategies to achieve detail enhancement: first, a dual-domain progressive enhancement module that leverages fidelity constraints within each domain and consistency constraints across domains to effectively narrow the solution space; second, channel and spatial attention mechanisms that improve the network's feature-scaling process; and third, a high-frequency component enhancement regularization term that integrates residual learning with direction-weighted total variation, using directional cues to effectively distinguish between noise and texture. The HEAL network is trained, validated and tested under ultra-sparse configurations of 60 views and 30 views, demonstrating its advantages in reconstruction accuracy and detail enhancement.
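The direction-weighted total variation idea behind the third strategy can be illustrated with a toy sketch. This is not the authors' code: the function name and weights below are hypothetical, and the paper's full regularizer also integrates residual learning.

```python
import numpy as np

def directional_tv(img, w_h=1.0, w_v=1.0):
    """Direction-weighted total variation: the horizontal and vertical
    finite differences are weighted separately, so directions dominated
    by texture can be penalized less than directions dominated by noise."""
    dx = np.abs(np.diff(img, axis=1))  # horizontal differences
    dy = np.abs(np.diff(img, axis=0))  # vertical differences
    return w_h * dx.sum() + w_v * dy.sum()

# A flat image has zero TV; a vertical edge contributes only to the
# horizontal-difference term, so only w_h scales its penalty.
flat = np.zeros((4, 4))
edge = flat.copy()
edge[:, 2:] = 1.0
print(directional_tv(flat))           # 0.0
print(directional_tv(edge, w_h=2.0))  # 8.0 (4 row-wise jumps of 1, weighted by 2)
```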
Affiliation(s)
- Guang Li
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China; (G.L.); (Z.D.)
- Zhenhao Deng
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China; (G.L.); (Z.D.)
- Yongshuai Ge
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen 518055, China
- Shouhua Luo
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China; (G.L.); (Z.D.)
3
Kim S, Kim B, Lee J, Baek J. Sparsier2Sparse: Self-supervised convolutional neural network-based streak artifacts reduction in sparse-view CT images. Med Phys 2023; 50:7731-7747. [PMID: 37303108] [DOI: 10.1002/mp.16552]
Abstract
BACKGROUND Sparse-view computed tomography (CT) has attracted much attention for reducing both scanning time and radiation dose. However, sparsely sampled projection data generate severe streak artifacts in the reconstructed images. In recent years, many sparse-view CT reconstruction techniques based on fully supervised learning have been proposed and have shown promising results. However, acquiring paired full-view and sparse-view CT images is not feasible in real clinical practice. PURPOSE In this study, we propose a novel self-supervised convolutional neural network (CNN) method to reduce streak artifacts in sparse-view CT images. METHODS We generate the training dataset using only sparse-view CT data and train the CNN via self-supervised learning. Since the streak artifacts can be estimated using prior images under the same CT geometry, we acquire prior images by iteratively applying the trained network to the given sparse-view CT images. We then subtract the estimated streak artifacts from the given sparse-view CT images to produce the final results. RESULTS We validated the imaging performance of the proposed method using the extended cardiac-torso (XCAT) phantom and the 2016 AAPM Low-Dose CT Grand Challenge dataset from Mayo Clinic. On visual inspection and modulation transfer function (MTF) analysis, the proposed method preserved anatomical structures effectively and showed higher image resolution than various streak artifact reduction methods for all projection views. CONCLUSIONS We propose a new framework for streak artifact reduction when only sparse-view CT data are available. Although no full-view CT data are used for CNN training, the proposed method achieved the highest performance in preserving fine details. By overcoming the dataset requirements of fully supervised methods, we expect that our framework can be utilized in the medical imaging field.
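The pair-generation step — making training data from sparse-view measurements alone — can be sketched with scikit-image's Radon tools. This is a toy illustration under an assumed geometry, not the authors' pipeline, and the CNN itself is omitted; a known phantom stands in for the patient only so the sketch runs end to end.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

# Stand-in object; the method itself never needs full-view data.
img = resize(shepp_logan_phantom(), (128, 128))

# "Measured" sparse-view sinogram: 60 views over 180 degrees.
theta_60 = np.linspace(0.0, 180.0, 60, endpoint=False)
sino_60 = radon(img, theta=theta_60)

# Self-supervised pair: subsample the measured data again ("sparsier",
# 30 views) to form the network input; the 60-view FBP reconstruction
# serves as the training target.
theta_30 = theta_60[::2]
net_input = iradon(sino_60[:, ::2], theta=theta_30)   # streakier
net_target = iradon(sino_60, theta=theta_60)          # less streaky
```

A network trained to map the 30-view reconstruction toward the 60-view one learns to suppress streaks without ever seeing a full-view image.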
Affiliation(s)
- Seongjun Kim
- School of Integrated Technology, Yonsei University, Incheon, South Korea
- Byeongjoon Kim
- Department of Artificial Intelligence, College of Computing, Yonsei University, Seoul, South Korea
- Jooho Lee
- Department of Artificial Intelligence, College of Computing, Yonsei University, Seoul, South Korea
- Jongduk Baek
- Department of Artificial Intelligence, College of Computing, Yonsei University, Seoul, South Korea
- Bareunex Imaging, Inc., Seoul, South Korea
4
Chan Y, Liu X, Wang T, Dai J, Xie Y, Liang X. An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction. Comput Biol Med 2023; 161:106888. [DOI: 10.1016/j.compbiomed.2023.106888]
5
Kim B, Shim H, Baek J. A streak artifact reduction algorithm in sparse-view CT using a self-supervised neural representation. Med Phys 2022; 49:7497-7515. [PMID: 35880806] [DOI: 10.1002/mp.15885]
Abstract
PURPOSE Sparse-view computed tomography (CT) has been attracting attention for its reduced radiation dose and scanning time. However, analytical image reconstruction methods suffer from streak artifacts due to insufficient projection views. Recently, various deep learning-based methods have been developed to solve this ill-posed inverse problem. Despite their promising results, they are easily overfitted to the training data, showing limited generalizability to unseen systems and patients. In this work, we propose a novel streak artifact reduction algorithm that provides a system- and patient-specific solution. METHODS Motivated by the fact that streak artifacts are deterministic errors, we regenerate the same artifacts from a prior CT image under the same system geometry. This prior image need not be perfect but should contain patient-specific information and be consistent with full-view projection data for accurate regeneration of the artifacts. To this end, we use a coordinate-based neural representation that often causes image blur but can greatly suppress the streak artifacts while having multiview consistency. By employing techniques in neural radiance fields originally proposed for scene representations, the neural representation is optimized to the measured sparse-view projection data via self-supervised learning. Then, we subtract the regenerated artifacts from the analytically reconstructed original image to obtain the final corrected image. RESULTS To validate the proposed method, we used simulated data of extended cardiac-torso phantoms and the 2016 NIH-AAPM-Mayo Clinic Low-Dose CT Grand Challenge and experimental data of physical pediatric and head phantoms. The performance of the proposed method was compared with a total variation-based iterative reconstruction method, naive application of the neural representation, and a convolutional neural network-based method. 
On visual inspection, small anatomical features were best preserved by the proposed method, which also achieved the best scores in visual information fidelity, modulation transfer function, and lung nodule segmentation. CONCLUSIONS The results on both simulated and experimental data suggest that the proposed method can effectively reduce streak artifacts while preserving small anatomical structures that are easily blurred or replaced with misleading features by existing methods. Since the proposed method does not require any additional training datasets, it would be useful in clinical practice, where large datasets cannot always be collected.
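The core "regenerate and subtract" step can be sketched in a few lines. Here a hand-blurred image stands in for the optimized neural representation — an assumption made purely for illustration; the paper instead fits a coordinate-based network to the measured sparse-view projections.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

img = resize(shepp_logan_phantom(), (128, 128))
theta = np.linspace(0.0, 180.0, 30, endpoint=False)  # 30-view sparse scan

# Analytic (FBP) reconstruction from too few views: heavy streaks.
fbp_sparse = iradon(radon(img, theta=theta), theta=theta)

# Stand-in prior: blurred but streak-free and multiview-consistent.
prior = gaussian_filter(img, sigma=1.0)

# Streaks are deterministic for a fixed geometry, so regenerate them
# from the prior and subtract them from the original FBP image.
streaks = iradon(radon(prior, theta=theta), theta=theta) - prior
corrected = fbp_sparse - streaks
```

Because the forward and inverse projections are linear, the residual error of `corrected` is the streak pattern of only the detail the prior misses, which is why the prior "need not be perfect" as the abstract notes.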
Affiliation(s)
- Byeongjoon Kim
- School of Integrated Technology, Yonsei University, Incheon, South Korea
- Hyunjung Shim
- School of Integrated Technology, Yonsei University, Incheon, South Korea
- Jongduk Baek
- School of Integrated Technology, Yonsei University, Incheon, South Korea
6
Minnema J, Ernst A, van Eijnatten M, Pauwels R, Forouzanfar T, Batenburg KJ, Wolff J. A review on the application of deep learning for CT reconstruction, bone segmentation and surgical planning in oral and maxillofacial surgery. Dentomaxillofac Radiol 2022; 51:20210437. [PMID: 35532946] [PMCID: PMC9522976] [DOI: 10.1259/dmfr.20210437]
Abstract
Computer-assisted surgery (CAS) allows clinicians to personalize treatments and surgical interventions and has therefore become an increasingly popular treatment modality in maxillofacial surgery. Current maxillofacial CAS consists of three main steps: (1) CT image reconstruction, (2) bone segmentation, and (3) surgical planning. However, each of these steps can introduce errors that heavily affect the treatment outcome. As a consequence, tedious and time-consuming manual post-processing is often necessary to ensure that each step is performed adequately. One way to overcome this issue is by developing and implementing neural networks (NNs) within the maxillofacial CAS workflow. These learning algorithms can be trained to perform specific tasks without the need for explicitly defined rules. In recent years, a very large number of novel NN approaches have been proposed for a wide variety of applications, making it difficult to keep up with all relevant developments. This study therefore aimed to summarize and review all relevant NN approaches applied to CT image reconstruction, bone segmentation, and surgical planning. After full-text screening, 76 publications were identified: 32 focusing on CT image reconstruction, 33 on bone segmentation, and 11 on surgical planning. Convolutional NNs were the most widely used in the identified studies, although the multilayer perceptron was most commonly applied in surgical planning tasks. Moreover, the drawbacks of current approaches and promising research avenues are discussed.
Affiliation(s)
- Jordi Minnema
- Department of Oral and Maxillofacial Surgery/Pathology, Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, 3D Innovationlab, Amsterdam Movement Sciences, Amsterdam, The Netherlands
- Anne Ernst
- Institute for Medical Systems Biology, University Hospital Hamburg-Eppendorf, Hamburg, Germany
- Maureen van Eijnatten
- Department of Oral and Maxillofacial Surgery/Pathology, Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, 3D Innovationlab, Amsterdam Movement Sciences, Amsterdam, The Netherlands
- Ruben Pauwels
- Aarhus Institute of Advanced Studies, Aarhus University, Aarhus, Denmark
- Tymour Forouzanfar
- Department of Oral and Maxillofacial Surgery/Pathology, Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, 3D Innovationlab, Amsterdam Movement Sciences, Amsterdam, The Netherlands
- Kees Joost Batenburg
- Department of Oral and Maxillofacial Surgery/Pathology, Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, 3D Innovationlab, Amsterdam Movement Sciences, Amsterdam, The Netherlands
- Jan Wolff
- Department of Dentistry and Oral Health, Aarhus University, Vennelyst Boulevard, Aarhus, Denmark
7
Kim S, Ahn J, Kim B, Kim C, Baek J. Convolutional neural network-based metal and streak artifacts reduction in dental CT images with sparse-view sampling scheme. Med Phys 2022; 49:6253-6277. [DOI: 10.1002/mp.15884]
Affiliation(s)
- Seongjun Kim
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
- Junhyun Ahn
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
- Byeongjoon Kim
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
- Chulhong Kim
- Departments of Electrical Engineering, Convergence IT Engineering, and Mechanical Engineering, School of Interdisciplinary Bioscience and Bioengineering, and Medical Device Innovation Center, Pohang University of Science and Technology, Pohang 37673, South Korea
- Jongduk Baek
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
8
The use of deep learning methods in low-dose computed tomography image reconstruction: a systematic review. Complex Intell Syst 2022. [DOI: 10.1007/s40747-022-00724-7]
Abstract
Conventional reconstruction techniques such as filtered back projection (FBP) and iterative reconstruction (IR), which have been widely used in computed tomography (CT) image reconstruction, are not suitable for low-dose CT applications because of the unsatisfactory quality of the reconstructed images and inefficient reconstruction times. Therefore, as the demand for CT radiation dose reduction continues to increase, the use of artificial intelligence (AI) in image reconstruction has attracted more and more attention. This systematic review examined various deep learning methods to determine their characteristics, availability, intended use and expected outputs concerning low-dose CT image reconstruction. Following the methodology of Kitchenham and Charters, we performed a systematic search of the literature from 2016 to 2021 in Springer, Science Direct, arXiv, PubMed, ACM, IEEE, and Scopus. The review showed that algorithms using deep learning are superior to traditional IR methods in noise suppression, artifact reduction and structure preservation, in terms of improving the image quality of low-dose reconstructed images. In conclusion, we provide an overview of deep learning approaches in low-dose CT image reconstruction together with their benefits, limitations, and opportunities for improvement.
9
Okamoto T, Kumakiri T, Haneishi H. Patch-based artifact reduction for three-dimensional volume projection data of sparse-view micro-computed tomography. Radiol Phys Technol 2022; 15:206-223. [PMID: 35622229] [DOI: 10.1007/s12194-022-00661-7]
Abstract
Micro-computed tomography (micro-CT) enables the non-destructive acquisition of three-dimensional (3D) morphological structures at the micrometer scale. Although it is expected to be used in pathology and histology to analyze the 3D microstructure of tissues, micro-CT imaging of tissue specimens requires a long scan time. A high-speed imaging method, sparse-view CT, can reduce the total scan time and radiation dose; however, it causes severe streak artifacts on tomographic images reconstructed with analytical algorithms due to insufficient sampling. In this paper, we propose an artifact reduction method for 3D volume projection data from sparse-view micro-CT. Specifically, we developed a patch-based lightweight fully convolutional network to estimate full-view 3D volume projection data from sparse-view 3D volume projection data. We evaluated the effectiveness of the proposed method using physically acquired datasets. The qualitative and quantitative results showed that the proposed method achieved high estimation accuracy and suppressed streak artifacts in the reconstructed images. In addition, we confirmed that the proposed method requires both short training and prediction times. Our study demonstrates that the proposed method has great potential for artifact reduction for 3D volume projection data under sparse-view conditions.
Affiliation(s)
- Takayuki Okamoto
- Graduate School of Science and Engineering, Chiba University, Chiba, 263-8522, Japan.
- Toshio Kumakiri
- Graduate School of Science and Engineering, Chiba University, Chiba, 263-8522, Japan
- Hideaki Haneishi
- Center for Frontier Medical Engineering, Chiba University, Chiba, 263-8522, Japan
10
Bai J, Liu Y, Yang H. Sparse-View CT Reconstruction Based on a Hybrid Domain Model with Multi-Level Wavelet Transform. Sensors 2022; 22:3228. [PMID: 35590918] [PMCID: PMC9105730] [DOI: 10.3390/s22093228]
Abstract
Reconstruction from sparsely sampled projection data generates obvious streak artifacts, degrading image quality and affecting medical diagnosis. Wavelet transforms can effectively decompose the directional components of an image, so highly directional artifact features and edge details can be better detected in the wavelet domain. Therefore, this paper proposes a wavelet-based hybrid-domain method for sparse-view CT reconstruction. The reconstruction model combines the wavelet, spatial, and Radon domains to restore projection consistency and enhance image details. In addition, the global distribution of artifacts requires a network with a large receptive field, so a multi-level wavelet transform network (MWCNN) is applied to the hybrid-domain model. Wavelet transforms are used in the encoding part of the network to reduce the size of feature maps in place of pooling operations, and inverse wavelet transforms are deployed in the decoding part to recover image details. The proposed method achieves a PSNR of 41.049 dB and an SSIM of 0.958 with 120 projections (3° angular interval), the highest values in this paper. Numerical analysis and the reconstructed images show that the hybrid-domain method is superior to single-domain methods, and that the multi-level wavelet transform model is more suitable for CT reconstruction than a single-level wavelet transform.
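The pooling-replacement idea — down-sampling that loses nothing because it is invertible — can be sketched with a single-level 2-D Haar transform in plain NumPy. This is an illustrative stand-in, not MWCNN itself, which embeds multi-level wavelet transforms inside a CNN.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar transform: halves each spatial dimension and
    returns four subbands (LL, LH, HL, HH). Unlike max-pooling, nothing
    is discarded: the four subbands together hold every input sample."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical average
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: exactly recovers the input image."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :] = a + d
    x[1::2, :] = a - d
    return x
```

Applying `haar_dwt2` again to the LL subband gives the next level of the multi-level decomposition.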
11
Emerging and future use of intra-surgical volumetric X-ray imaging and adjuvant tools for decision support in breast-conserving surgery. Current Opinion in Biomedical Engineering 2022; 22. [DOI: 10.1016/j.cobme.2022.100382]
12
Deng K, Sun C, Gong W, Liu Y, Yang H. A Limited-View CT Reconstruction Framework Based on Hybrid Domains and Spatial Correlation. Sensors 2022; 22:1446. [PMID: 35214348] [PMCID: PMC8875841] [DOI: 10.3390/s22041446]
Abstract
Limited-view computed tomography (CT) can effectively reduce radiation dose in clinical diagnosis; it is also adopted when unavoidable mechanical and physical limitations arise in industrial inspection. However, limited-view CT produces severe artifacts in the reconstructed images, a major issue for low-dose protocols, so exploiting the limited prior information to obtain high-quality CT images becomes a crucial problem. We note that almost all existing methods focus solely on a single CT image, neglecting the fact that scanned objects are highly spatially correlated: abundant spatial information between consecutive CT images remains largely unexploited. In this paper, we propose a novel hybrid-domain structure composed of fully convolutional networks that explores the three-dimensional neighborhood and works in a "coarse-to-fine" manner. We first conduct data completion in the Radon domain and transform the completed full-view Radon data into images through FBP. Subsequently, we exploit the spatial correlation between consecutive CT images to restore them and then refine the image texture to obtain the final high-quality CT images, achieving a PSNR of 40.209 and an SSIM of 0.943. Moreover, unlike other current limited-view CT reconstruction methods, we adopt FBP (implemented on GPUs) instead of SART-TV to significantly accelerate the overall procedure and realize it in an end-to-end manner.
13
Kim H, Yoon H, Thakur N, Hwang G, Lee EJ, Kim C, Chong Y. Deep learning-based histopathological segmentation for whole slide images of colorectal cancer in a compressed domain. Sci Rep 2021; 11:22520. [PMID: 34795365] [PMCID: PMC8602325] [DOI: 10.1038/s41598-021-01905-z]
Abstract
Automatic pattern recognition using deep learning techniques has become increasingly important. Unfortunately, due to limited system memory, general preprocessing methods for high-resolution images in the spatial domain can lose important information such as high-frequency content and the region of interest. To overcome these limitations, we propose an image segmentation approach in the compressed domain based on principal component analysis (PCA) and discrete wavelet transform (DWT). After inference for each tile using neural networks, the whole prediction image is reconstructed by a wavelet-weighted ensemble (WWE) based on the inverse discrete wavelet transform (IDWT). Training and validation were performed using 351 colorectal biopsy specimens, pathologically confirmed by two pathologists. For 39 test datasets, the average Dice score, pixel accuracy, and Jaccard score were 0.804 ± 0.125, 0.957 ± 0.025, and 0.690 ± 0.174, respectively. In the compressed domain we can train the networks on high-resolution images with large regions of interest, in contrast to the low-resolution, small-region-of-interest training feasible in the spatial domain; the average Dice score, pixel accuracy, and Jaccard score increased significantly by 2.7%, 0.9%, and 2.7%, respectively. We believe that our approach has great potential for accurate diagnosis.
Affiliation(s)
- Hyeongsub Kim
- Departments of Electrical Engineering, Creative IT Engineering, Mechanical Engineering, School of Interdisciplinary Bioscience and Bioengineering, Medical Device Innovation Center, and Graduate School of Artificial Intelligence, Pohang University of Science and Technology (POSTECH), Pohang, 37674, South Korea
- Deepnoid Inc., Seoul, 08376, South Korea
- Nishant Thakur
- Department of Hospital Pathology, The Catholic University of Korea, College of Medicine, Uijeongbu St. Mary's Hospital, Seoul, South Korea
- Gyoyeon Hwang
- Department of Hospital Pathology, The Catholic University of Korea, College of Medicine, Yeouido St. Mary's Hospital, Seoul, South Korea
- Eun Jung Lee
- Department of Hospital Pathology, The Catholic University of Korea, College of Medicine, Yeouido St. Mary's Hospital, Seoul, South Korea
- Department of Pathology, Shinwon Medical Foundation, Gwangmyeong-si, Gyeonggi-do, South Korea
- Chulhong Kim
- Departments of Electrical Engineering, Creative IT Engineering, Mechanical Engineering, School of Interdisciplinary Bioscience and Bioengineering, Medical Device Innovation Center, and Graduate School of Artificial Intelligence, Pohang University of Science and Technology (POSTECH), Pohang, 37674, South Korea
- Yosep Chong
- Department of Hospital Pathology, The Catholic University of Korea, College of Medicine, Uijeongbu St. Mary's Hospital, Seoul, South Korea
14
Park SB. Advances in deep learning for computed tomography denoising. World J Clin Cases 2021; 9:7614-7619. [PMID: 34621813] [PMCID: PMC8462260] [DOI: 10.12998/wjcc.v9.i26.7614]
Abstract
Computed tomography (CT) has seen a rapid increase in use in recent years, and radiation from CT accounts for a significant proportion of total medical radiation. Given the known harmful impact of radiation exposure on the human body, concerns over increasing CT use and its associated radiation burden have prompted efforts to reduce the dose delivered during the procedure. Low-dose CT has therefore attracted major attention in radiology, since CT-associated x-ray radiation carries health risks for patients. Reducing the CT radiation dose, however, compromises the signal-to-noise ratio, which affects image quality and diagnostic performance. Several denoising methods have consequently been developed and applied in image processing with the goal of reducing image noise. Recently, deep learning applications that improve image quality by reducing noise and artifacts have become commercially available for diagnostic imaging. Deep learning image reconstruction shows great potential as an advanced reconstruction method to improve the quality of clinical CT images. These improvements can provide significant benefit to patients regardless of their disease, and further advances are expected in the near future.
Affiliation(s)
- Sung Bin Park
- Department of Radiology, Chung-Ang University Hospital, Seoul 06973, South Korea
15
Addressing signal alterations induced in CT images by deep learning processing: A preliminary phantom study. Phys Med 2021; 83:88-100. [DOI: 10.1016/j.ejmp.2021.02.022]
16
Machine Learning and Deep Neural Networks: Applications in Patient and Scan Preparation, Contrast Medium, and Radiation Dose Optimization. J Thorac Imaging 2021; 35 Suppl 1:S17-S20. [PMID: 32079904] [DOI: 10.1097/rti.0000000000000482]
Abstract
Artificial intelligence (AI) algorithms depend on large amounts of robust data and on appropriate computational power and software. AI offers the potential for major changes in cardiothoracic imaging. Beyond image processing, machine learning and deep learning can support the image acquisition process itself. AI applications may improve patient care through superior image quality, can lower radiation dose via AI-driven reconstruction algorithms, and may help avoid overscanning. This review summarizes recent promising applications of AI in patient and scan preparation as well as in contrast medium and radiation dose optimization.
17
|
Enjilela E, Lee TY, Wisenberg G, Teefy P, Bagur R, Islam A, Hsieh J, So A. Cubic-Spline Interpolation for Sparse-View CT Image Reconstruction With Filtered Backprojection in Dynamic Myocardial Perfusion Imaging. Tomography 2020; 5:300-307. [PMID: 31572791 PMCID: PMC6752292 DOI: 10.18383/j.tom.2019.00013] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
We investigated a projection interpolation method for reconstructing dynamic contrast-enhanced (DCE) heart images from undersampled x-ray projections with filtered backprojection (FBP). This method may facilitate the application of sparse-view dynamic acquisition for ultralow-dose quantitative computed tomography (CT) myocardial perfusion (MP) imaging. We conducted CT perfusion studies on 5 pigs with a standard full-view acquisition protocol (984 projections). We reconstructed DCE heart images with FBP from all of the measured projections and from a quarter of them evenly distributed over 360°. We also interpolated the sparse-view (quarter) projections to a full-view setting with a cubic-spline interpolation method before applying FBP to reconstruct the DCE heart images (synthesized full-view). To generate MP maps, we used the 3 sets of DCE heart images and compared mean MP values and biases among the 3 protocols. Compared with synthesized full-view DCE images, sparse-view DCE images were more affected by streak artifacts arising from projection undersampling. Relative to the full-view protocol, the mean bias in MP measurement associated with the sparse-view protocol was 10.0 mL/min/100 g (95% CI: −8.9 to 28.9), more than 3 times higher than that associated with the synthesized full-view protocol (3.3 mL/min/100 g, 95% CI: −6.7 to 13.2). The cubic-spline interpolation method thus improved MP measurement from DCE heart images reconstructed from only a quarter of the full projection set. It can be used with the industry-standard FBP algorithm to reconstruct DCE images of the heart and can reduce the radiation dose of a whole-heart quantitative CT MP study to <2 mSv (at 8-cm coverage).
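The authors' implementation is not given in the abstract; the sketch below is a minimal numpy/scipy illustration of the core idea, upsampling a quarter-view sinogram to the 984-view full protocol by cubic-spline interpolation along the angular axis. The function and array names are hypothetical, and the toy sinogram is a smooth sinusoid rather than measured projection data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def synthesize_full_view(sparse_sino, n_full):
    """Interpolate a sparse-view sinogram (n_sparse x n_det) up to n_full
    views along the angular axis with a periodic cubic spline, producing
    a 'synthesized full-view' sinogram suitable for standard FBP."""
    n_sparse, _ = sparse_sino.shape
    sparse_angles = np.linspace(0.0, 360.0, n_sparse, endpoint=False)
    full_angles = np.linspace(0.0, 360.0, n_full, endpoint=False)
    # Close the 360-degree loop so the spline can use periodic boundaries
    cs = CubicSpline(np.append(sparse_angles, 360.0),
                     np.vstack([sparse_sino, sparse_sino[:1]]),
                     bc_type="periodic")
    return cs(full_angles)

# Toy example: 246 views (a quarter of the 984-projection protocol),
# 8 detector bins, each following a smooth sinusoid over 360 degrees
angles = np.linspace(0.0, 360.0, 246, endpoint=False)
sparse = np.sin(np.deg2rad(angles))[:, None] * np.ones((1, 8))
full = synthesize_full_view(sparse, 984)
```

Because the interpolation acts view-to-view on the sinogram, the result can be fed directly to an unmodified FBP implementation, which is the practical appeal of this approach.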
Affiliation(s)
- Esmaeil Enjilela
- Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, ON, Canada
- Ting-Yim Lee
- Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, ON, Canada; Imaging Program, Lawson Health Research Institute, London, ON, Canada
- Gerald Wisenberg
- Department of Cardiology, London Health Sciences Centre, London, ON, Canada
- Patrick Teefy
- Department of Cardiology, London Health Sciences Centre, London, ON, Canada
- Rodrigo Bagur
- Department of Cardiology, London Health Sciences Centre, London, ON, Canada
- Ali Islam
- Department of Radiology, St. Joseph's Healthcare London, London, ON, Canada
- Jiang Hsieh
- Department of Molecular Imaging & Computed Tomography, GE Healthcare, Waukesha, WI
- Aaron So
- Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, ON, Canada; Imaging Program, Lawson Health Research Institute, London, ON, Canada
|
18
|
Farhat H, Sakr GE, Kilany R. Deep learning applications in pulmonary medical imaging: recent updates and insights on COVID-19. Mach Vis Appl 2020; 31:53. [PMID: 32834523 PMCID: PMC7386599 DOI: 10.1007/s00138-020-01101-5] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Revised: 06/21/2020] [Accepted: 07/07/2020] [Indexed: 05/07/2023]
Abstract
Shortly after deep learning algorithms were applied to image analysis, and more importantly to medical imaging, their applications increased significantly and became a trend. Likewise, deep learning (DL) applications on pulmonary medical images achieved remarkable advances leading to promising clinical trials, and the coronavirus may be the real trigger that opens the route for fast integration of DL into hospitals and medical centers. This paper reviews the development of deep learning applications in medical image analysis targeting pulmonary imaging and gives insights into contributions to COVID-19. It covers more than 160 contributions and surveys in this field, all issued between February 2017 and May 2020 inclusive, highlighting various deep learning tasks such as classification, segmentation, and detection, as well as different pulmonary pathologies such as airway diseases, lung cancer, COVID-19, and other infections. It summarizes and discusses the current state-of-the-art approaches in this research domain, highlighting the challenges, especially given the current COVID-19 pandemic.
Affiliation(s)
- Hanan Farhat
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
- George E. Sakr
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
- Rima Kilany
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
|
19
|
Zeng GL. Sparse-view tomography via displacement function interpolation. Vis Comput Ind Biomed Art 2019; 2:13. [PMID: 32240401 PMCID: PMC7099552 DOI: 10.1186/s42492-019-0024-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2019] [Accepted: 10/09/2019] [Indexed: 11/10/2022] Open
Abstract
Sparse-view tomography has many applications, such as low-dose computed tomography (CT). With undersampled data, a perfect image cannot be expected. The goal of this paper is to obtain a tomographic image better than the naïve filtered backprojection (FBP) reconstruction that uses linear interpolation to complete the measurements. This paper proposes a method to estimate the unmeasured projections by displacement function interpolation. Displacement function estimation is a nonlinear procedure, and the linear interpolation is performed on the displacement function instead of on the sinogram itself. As a result, the estimated measurements are not a linear transformation of the measured data. The proposed method is compared with linear interpolation methods and shows superior performance.
|
20
|
Using deep learning techniques in medical imaging: a systematic review of applications on CT and PET. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09788-3] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
|
21
|
Lee D, Park C, Lim Y, Cho H. A Metal Artifact Reduction Method Using a Fully Convolutional Network in the Sinogram and Image Domains for Dental Computed Tomography. J Digit Imaging 2019; 33:538-546. [PMID: 31720891 DOI: 10.1007/s10278-019-00297-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022] Open
Abstract
The reconstruction quality of dental computed tomography (DCT) is vulnerable to metal implants because dense metallic objects cause beam hardening and streak artifacts in the reconstructed images. These metal artifacts degrade the images and decrease the clinical usefulness of DCT. Although interpolation-based metal artifact reduction (MAR) methods have been introduced, they may not be efficient in DCT because teeth, as well as metallic objects, have high x-ray attenuation. In this study, we investigated an effective MAR method based on a fully convolutional network (FCN) in both the sinogram and image domains. The method consists of three main steps: (1) segmentation of the metal trace, (2) FCN-based restoration in the sinogram domain, and (3) FCN-based restoration in the image domain followed by metal insertion. We performed a computational simulation and an experiment to investigate image quality and evaluate the effectiveness of the proposed method. The results were compared with those of the normalized MAR method and a deep learning-based MAR algorithm in the sinogram domain with respect to root-mean-square error and structural similarity. Our results indicate that the proposed MAR method significantly reduced metal artifacts in DCT images and performed better than the other algorithms in reducing streak artifacts without introducing any contrast anomaly.
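A minimal skeleton of the three-step pipeline described above is sketched below. Step 1 (metal-trace segmentation) is assumed already done, with the trace given as a boolean sinogram mask; step 2's sinogram-domain FCN is stood in for by simple per-view linear interpolation; step 3 (the image-domain FCN and metal re-insertion) is omitted. The function name and data layout are assumptions for illustration, not the authors' code.

```python
import numpy as np

def mar_sinogram_restore(sinogram, metal_trace):
    """Restore metal-corrupted sinogram bins view by view.
    `sinogram` is (n_views, n_detectors); `metal_trace` is a boolean
    mask of the same shape marking corrupted bins. Linear interpolation
    across each projection stands in for the trained sinogram-domain FCN."""
    restored = sinogram.astype(float).copy()
    for view in range(sinogram.shape[0]):
        bad = metal_trace[view]
        if bad.any() and not bad.all():
            good_idx = np.flatnonzero(~bad)
            restored[view, bad] = np.interp(
                np.flatnonzero(bad), good_idx, sinogram[view, good_idx])
    return restored

# Toy check: corrupt part of a linear ramp and restore it
sino = np.tile(np.linspace(0.0, 9.0, 10), (3, 1))
trace = np.zeros_like(sino, dtype=bool)
trace[1, 4:7] = True                 # metal trace in the middle view
corrupted = sino.copy()
corrupted[1, 4:7] = 1e3              # metal-corrupted bins
restored = mar_sinogram_restore(corrupted, trace)
```

In the paper, an FCN replaces the interpolation in step 2 precisely because teeth also attenuate strongly, which makes naive interpolation across the trace unreliable in DCT.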
Affiliation(s)
- Dongyeon Lee
- Department of Radiation Convergence Engineering, Yonsei University, 1 Yonseidae-gil, Wonju, 26493, South Korea
- Chulkyu Park
- Department of Radiation Convergence Engineering, Yonsei University, 1 Yonseidae-gil, Wonju, 26493, South Korea
- Younghwan Lim
- Department of Radiation Convergence Engineering, Yonsei University, 1 Yonseidae-gil, Wonju, 26493, South Korea
- Hyosung Cho
- Department of Radiation Convergence Engineering, Yonsei University, 1 Yonseidae-gil, Wonju, 26493, South Korea
|
22
|
Zhu H, Tong D, Zhang L, Wang S, Wu W, Tang H, Chen Y, Luo L, Zhu J, Li B. Temporally downsampled cerebral CT perfusion image restoration using deep residual learning. Int J Comput Assist Radiol Surg 2019; 15:193-201. [DOI: 10.1007/s11548-019-02082-1] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2019] [Accepted: 10/18/2019] [Indexed: 12/27/2022]
|
23
|
Fu J, Dong J, Zhao F. A Deep Learning Reconstruction Framework for Differential Phase-Contrast Computed Tomography With Incomplete Data. IEEE Trans Image Process 2019; 29:2190-2202. [PMID: 31647435 DOI: 10.1109/tip.2019.2947790] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Differential phase-contrast computed tomography (DPC-CT) is a powerful analysis tool for soft-tissue and low-atomic-number samples. Owing to practical implementation constraints, DPC-CT with incomplete projections occurs quite often. Conventional reconstruction algorithms struggle with incomplete data: they usually involve complicated parameter selection, are sensitive to noise, and are time-consuming. In this paper, we report a new deep learning reconstruction framework for incomplete-data DPC-CT. It tightly couples a deep neural network with the DPC-CT reconstruction algorithm in the domain of DPC projection sinograms. The network's output is not an artifact-laden result derived from the incomplete data but a complete phase-contrast projection sinogram. Once trained, the framework can reconstruct the final DPC-CT images from a given incomplete projection sinogram. Taking sparse-view, limited-view, and missing-view DPC-CT as examples, the framework is validated and demonstrated with synthetic and experimental data sets. Compared with other methods, it achieves the best imaging quality at a faster speed and with fewer parameters. This work supports the application of state-of-the-art deep learning theory in the field of DPC-CT.
|
24
|
Lee D, Kim H, Choi B, Kim HJ. Development of a deep neural network for generating synthetic dual-energy chest x-ray images with single x-ray exposure. Phys Med Biol 2019; 64:115017. [PMID: 31026841 DOI: 10.1088/1361-6560/ab1cee] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Dual-energy chest radiography (DECR) is a medical imaging technology that can improve diagnostic accuracy by decomposing single-energy chest radiography (SECR) images into separate bone-only and soft tissue-only images. It can, however, double the radiation exposure to the patient. To address this limitation, we developed a deep learning algorithm for synthesizing DECR from a SECR. To predict high-resolution images, we developed a novel architecture by modifying a conventional U-net to take advantage of the high frequency-dominant information that propagates from the encoding part to the decoding part. In addition, we used the anticorrelated relationship (ACR) of DECR to improve the quality of the predicted images. For training, 300 pairs of SECR and corresponding DECR images were used. To test the trained model, 50 DECR images from Yonsei University Severance Hospital and 662 publicly accessible SECRs were used. To evaluate performance, we compared DECR and predicted images using the structural similarity index (SSIM) and quantitatively evaluated image quality by calculating the modulation transfer function and coefficient of variation. The proposed model selectively predicted the bone-only and soft tissue-only CR images from an SECR image, and the ACR strategy for improving spatial resolution was effective. Quantitative evaluation showed that the proposed method with ACR achieved relatively high SSIM (over 0.85), and its predicted images achieved better image quality measures than those of U-net. In conclusion, the proposed method can obtain high-quality bone-only and soft tissue-only CR images without additional hardware for double x-ray exposures in clinical practice.
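The SSIM threshold quoted in this abstract (over 0.85) comes from the standard structural similarity definition. As an illustration only, the sketch below implements a single-window (global-statistics) SSIM with the usual constants K1=0.01 and K2=0.03; published evaluations, including presumably this one, use a sliding-window form, and the test images here are synthetic stand-ins.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM: luminance, contrast, and structure terms
    computed from whole-image statistics with standard stabilizing
    constants c1 = (K1*L)^2 and c2 = (K2*L)^2."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Synthetic stand-ins for a reference image and a degraded prediction
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0.0, 1.0)
score_identical = ssim_global(ref, ref)    # ~1.0 for identical images
score_noisy = ssim_global(ref, noisy)      # below 1.0 for a degraded image
```

An SSIM near 1 indicates the predicted decomposition preserves the structure of the reference, which is the sense in which values over 0.85 are reported as high.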
Affiliation(s)
- Donghoon Lee
- Department of Radiation Convergence Engineering, Research Institute of Health Science, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon, Republic of Korea
|