1
Fallahpoor M, Chakraborty S, Pradhan B, Faust O, Barua PD, Chegeni H, Acharya R. Deep learning techniques in PET/CT imaging: A comprehensive review from sinogram to image space. Comput Methods Programs Biomed 2024; 243:107880. [PMID: 37924769; DOI: 10.1016/j.cmpb.2023.107880]
Abstract
Positron emission tomography/computed tomography (PET/CT) is increasingly used in oncology, neurology, cardiology, and emerging medical fields. This success stems from the cohesive information that hybrid PET/CT imaging offers, surpassing the capabilities of either modality used in isolation across different malignancies. However, manual image interpretation requires extensive disease-specific knowledge and is a time-consuming part of physicians' daily routines. Deep learning algorithms, much like a practitioner during training, extract knowledge from images and support the diagnostic process through symptom detection and image enhancement. Existing review papers on PET/CT imaging either include additional modalities or examine a broad range of AI applications; a comprehensive investigation focused specifically on deep learning applied to PET/CT images has been lacking. This review fills that gap by examining the characteristics of the approaches used in papers that employed deep learning for PET/CT imaging. We identified 99 studies published between 2017 and 2022 that applied deep learning to PET/CT images, along with the best pre-processing algorithms and the most effective deep learning models reported for PET/CT, while highlighting current limitations. Our review underscores the potential of deep learning (DL) in PET/CT imaging, with successful applications in lesion detection, tumor segmentation, and disease classification in both sinogram and image spaces. Common and specific pre-processing techniques are also discussed. DL algorithms excel at extracting meaningful features, enhancing accuracy and efficiency in diagnosis.
However, limitations arise from the scarcity of annotated datasets and from challenges in explainability and uncertainty estimation. Recent DL models, such as attention-based models, generative models, multi-modal models, graph convolutional networks, and transformers, are promising avenues for improving PET/CT studies. Additionally, radiomics has garnered attention for tumor classification and for predicting patient outcomes. Ongoing research is crucial to explore new applications and improve the accuracy of DL models in this rapidly evolving field.
Affiliation(s)
- Maryam Fallahpoor
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Civil and Environmental Engineering, University of Technology Sydney, Ultimo, NSW 2007, Australia
- Subrata Chakraborty
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Civil and Environmental Engineering, University of Technology Sydney, Ultimo, NSW 2007, Australia; School of Science and Technology, Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
- Biswajeet Pradhan
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Civil and Environmental Engineering, University of Technology Sydney, Ultimo, NSW 2007, Australia; Earth Observation Centre, Institute of Climate Change, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia.
- Oliver Faust
- School of Computing and Information Science, Anglia Ruskin University Cambridge Campus, United Kingdom
- Prabal Datta Barua
- School of Science and Technology, Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia; Faculty of Engineering and Information Technology, University of Technology Sydney, Australia; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Australia
- Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Toowoomba, QLD, Australia
2
Chen X, Liu C. Deep-learning-based methods of attenuation correction for SPECT and PET. J Nucl Cardiol 2023; 30:1859-1878. [PMID: 35680755; DOI: 10.1007/s12350-022-03007-3]
Abstract
Attenuation correction (AC) is essential for quantitative analysis and clinical diagnosis in single-photon emission computed tomography (SPECT) and positron emission tomography (PET). In clinical practice, computed tomography (CT) is used to generate attenuation maps (μ-maps) for AC on hybrid SPECT/CT and PET/CT scanners. However, CT-based AC methods frequently produce artifacts caused by CT artifacts and by misregistration of the SPECT-CT and PET-CT scans. Segmentation-based AC methods using magnetic resonance imaging (MRI) for PET/MRI scanners are inaccurate and complicated, since MRI contains no direct information about photon attenuation. Computational AC methods for SPECT and PET estimate attenuation coefficients directly from raw emission data but suffer from low accuracy, cross-talk artifacts, high computational complexity, and high noise levels. Recently evolving deep-learning-based methods have shown promising results for AC of SPECT and PET and can generally be divided into two categories: indirect and direct strategies. Indirect AC strategies apply neural networks to transform emission, transmission, or MR images into synthetic μ-maps or CT images, which are then incorporated into AC reconstruction. Direct AC strategies skip the intermediate step of generating μ-maps or CT images and predict AC SPECT or PET images directly from non-attenuation-corrected (NAC) SPECT or PET images. These deep-learning-based AC methods show comparable, and even superior, performance relative to non-deep-learning methods. In this article, we first discuss the principles and limitations of non-deep-learning AC methods, and then review the status and prospects of deep-learning-based methods for AC of SPECT and PET.
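The indirect strategies described above ultimately feed a μ-map into reconstruction, where the attenuation along each line of response is compensated by an attenuation correction factor (ACF), the exponential of the line integral of μ. A minimal sketch of that final step (the μ-values and count scale are illustrative, not from the paper):

```python
import math

def acf_along_lor(mu_values, step_cm):
    """Attenuation correction factor for one line of response (LOR):
    ACF = exp(integral of mu dl), approximated here by a Riemann sum."""
    return math.exp(sum(mu_values) * step_cm)

# Toy example: 20 cm of water-equivalent tissue (mu ~ 0.096 /cm at 511 keV),
# sampled from a hypothetical mu-map along one LOR.
mu_samples = [0.096] * 200           # 200 samples spaced 0.1 cm apart
acf = acf_along_lor(mu_samples, 0.1)
corrected = 1000.0 * acf             # scale the measured counts by the ACF
```

Errors in the predicted μ-map propagate exponentially into the ACF, which is why small μ-map artifacts can cause large PET quantification errors.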
Affiliation(s)
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA.
- Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT, 06520, USA.
3
Hossen L, Wechalekar K. Motion correction for diagnosis of cardiac sarcoidosis-do we have all the answers? J Nucl Cardiol 2023; 30:1886-1889. [PMID: 37491509; DOI: 10.1007/s12350-023-03330-3]
Affiliation(s)
- Lucy Hossen
- Department of Nuclear Medicine, Royal Brompton and Harefield Hospitals, Part of Guy's and St Thomas' NHS Foundation Trust, London, UK.
- Kshama Wechalekar
- Department of Nuclear Medicine, Royal Brompton and Harefield Hospitals, Part of Guy's and St Thomas' NHS Foundation Trust, London, UK
4
Krokos G, MacKewn J, Dunn J, Marsden P. A review of PET attenuation correction methods for PET-MR. EJNMMI Phys 2023; 10:52. [PMID: 37695384; PMCID: PMC10495310; DOI: 10.1186/s40658-023-00569-0]
Abstract
Thirteen years after the installation of the first PET-MR system, these scanners still constitute a very small proportion of the total hybrid PET systems installed. This is in stark contrast to the rapid expansion of PET-CT, which quickly established its importance in patient diagnosis within a similar timeframe. One of the main hurdles is the development of an accurate, reproducible and easy-to-use method for attenuation correction. Quantitative discrepancies in PET images between the manufacturer-provided MR methods and the more established CT- or transmission-based attenuation correction methods have driven the scientific community in a continuous effort to develop a robust and accurate alternative. These can be divided into four broad categories: (i) MR-based, (ii) emission-based, (iii) atlas-based and (iv) machine learning-based attenuation correction, the last of which is rapidly gaining momentum. The first is based on segmenting the MR images into various tissues and allocating a predefined attenuation coefficient to each tissue. Emission-based attenuation correction methods aim to utilise the PET emission data by simultaneously reconstructing the radioactivity distribution and the attenuation image. Atlas-based attenuation correction methods aim to predict a CT or transmission image for a new patient, given an MR image, by using databases containing CT or transmission images from the general population. Finally, in machine learning methods, a model that predicts the required image from the acquired MR or non-attenuation-corrected PET image is developed by exploiting the underlying features of the images. Deep learning methods are the dominant approach in this category. Compared to more traditional machine learning, which uses structured data for building a model, deep learning makes direct use of the acquired images to identify underlying features.
This up-to-date review categorises the attenuation correction approaches reported in the PET-MR literature, then describes and discusses the various approaches in each category. After exploring each category separately, a general overview is given of the current status and potential future approaches, along with a comparison of the four outlined categories.
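The segmentation-based approach in category (i) reduces, at its core, to a label-to-coefficient lookup. A toy sketch, with assumed tissue labels and approximate linear attenuation coefficients at 511 keV (illustrative values, not taken from the review):

```python
# Illustrative linear attenuation coefficients (per cm, 511 keV).
# The label names and values below are assumptions for demonstration.
MU_511KEV = {"air": 0.0, "lung": 0.022, "soft_tissue": 0.096, "bone": 0.151}

def labels_to_mu_map(label_volume):
    """Replace each tissue label from an MR segmentation with its predefined
    linear attenuation coefficient, yielding a mu-map for PET AC."""
    return [[MU_511KEV[lbl] for lbl in row] for row in label_volume]

# A hypothetical 2x2 segmented slice
segmentation = [["air", "soft_tissue"], ["bone", "lung"]]
mu_map = labels_to_mu_map(segmentation)
```

The known weakness of this scheme, as the review notes, is that every voxel of a tissue class receives the same μ, so patient-specific density variation (especially in bone and lung) is lost.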
Affiliation(s)
- Georgios Krokos
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK.
- Jane MacKewn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Joel Dunn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Paul Marsden
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
5
Park J, Kang SK, Hwang D, Choi H, Ha S, Seo JM, Eo JS, Lee JS. Automatic Lung Cancer Segmentation in [18F]FDG PET/CT Using a Two-Stage Deep Learning Approach. Nucl Med Mol Imaging 2023; 57:86-93. [PMID: 36998591; PMCID: PMC10043063; DOI: 10.1007/s13139-022-00745-7]
Abstract
Purpose: Since accurate lung cancer segmentation is required to determine the functional volume of a tumor in [18F]FDG PET/CT, we propose a two-stage U-Net architecture to enhance the performance of lung cancer segmentation using [18F]FDG PET/CT. Methods: The whole-body [18F]FDG PET/CT scan data of 887 patients with lung cancer were retrospectively used for network training and evaluation. The ground-truth tumor volume of interest (VOI) was drawn using the LifeX software. The dataset was randomly partitioned into training, validation, and test sets: of the 887 PET/CT and VOI datasets, 730 were used to train the proposed models, 81 served as the validation set, and the remaining 76 were used to evaluate the model. In Stage 1, the global U-Net receives the 3D PET/CT volume as input and extracts the preliminary tumor area, generating a 3D binary volume as output. In Stage 2, the regional U-Net receives eight consecutive PET/CT slices around the slice selected by the global U-Net in Stage 1 and generates a 2D binary image as output. Results: The proposed two-stage U-Net architecture outperformed the conventional one-stage 3D U-Net in primary lung cancer segmentation. The two-stage U-Net model successfully predicted the detailed margin of the tumors, which had been determined by manually drawing spherical VOIs and applying an adaptive threshold. Quantitative analysis using the Dice similarity coefficient confirmed the advantages of the two-stage U-Net. Conclusion: The proposed method will be useful for reducing the time and effort required for accurate lung cancer segmentation in [18F]FDG PET/CT.
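The Dice similarity coefficient used in the quantitative analysis above has a standard definition, DSC = 2|A ∩ B| / (|A| + |B|). A minimal implementation on flattened binary masks (the example masks below are made up for illustration):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks given as flat
    lists of 0/1: DSC = 2 * |A intersect B| / (|A| + |B|)."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

pred  = [0, 1, 1, 1, 0, 0]   # hypothetical network output
truth = [0, 1, 1, 0, 1, 0]   # hypothetical ground-truth VOI
dsc = dice_coefficient(pred, truth)   # 2*2 / (3+3) = 0.666...
```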
Affiliation(s)
- Junyoung Park
- Department of Electrical and Computer Engineering, Seoul National University College of Engineering, Seoul, 08826 Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 Korea
- Seung Kwan Kang
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 Korea
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, 03080 Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, 08826 Korea
- Brightonix Imaging Inc., Seoul, 03080 Korea
- Donghwi Hwang
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 Korea
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, 03080 Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, 08826 Korea
- Hongyoon Choi
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 Korea
- Seunggyun Ha
- Division of Nuclear Medicine, Department of Radiology, Seoul St Mary’s Hospital, The Catholic University of Korea, Seoul, 06591 Korea
- Jong Mo Seo
- Department of Electrical and Computer Engineering, Seoul National University College of Engineering, Seoul, 08826 Korea
- Jae Seon Eo
- Department of Nuclear Medicine, Korea University Guro Hospital, 148 Gurodong-ro, Guro-gu, Seoul, 08308 Korea
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 Korea
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, 03080 Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, 08826 Korea
- Brightonix Imaging Inc., Seoul, 03080 Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, 03080 Korea
6
Kim KM, Lee MS, Suh MS, Cheon GJ, Lee JS. Voxel-Based Internal Dosimetry for 177Lu-Labeled Radiopharmaceutical Therapy Using Deep Residual Learning. Nucl Med Mol Imaging 2023; 57:94-102. [PMID: 36998593; PMCID: PMC10043146; DOI: 10.1007/s13139-022-00769-z]
Abstract
Purpose: In this study, we propose a deep learning (DL)-based voxel-based dosimetry method in which dose maps acquired using the multiple voxel S-value (VSV) approach are used for residual learning. Methods: Twenty-two SPECT/CT datasets from seven patients who underwent 177Lu-DOTATATE treatment were used in this study. Dose maps generated from Monte Carlo (MC) simulations served as the reference and as the target images for network training. The multiple-VSV approach was used for residual learning and compared with the dose maps generated by deep learning. A conventional 3D U-Net network was modified for residual learning. The absorbed dose in each organ was calculated as the mass-weighted average over the volume of interest (VOI). Results: The DL approach provided slightly more accurate estimates than the multiple-VSV approach, although the difference was not statistically significant. The single-VSV approach yielded relatively inaccurate estimates. No significant difference between the multiple-VSV and DL approaches was observed in the dose maps, but the difference was prominent in the error maps. The multiple-VSV and DL approaches showed similar correlation. However, the multiple-VSV approach underestimated doses in the low-dose range, whereas the DL approach compensated for this underestimation. Conclusion: Dose estimates from the deep learning-based approach closely matched those from the MC simulation. Accordingly, the proposed deep learning network is useful for accurate and fast dosimetry after radiation therapy using 177Lu-labeled radiopharmaceuticals.
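The voxel S-value approach that supplies the network's input can be sketched as a convolution of the cumulated-activity map with an S-value kernel. A 1D toy version with made-up kernel values (real VSV dosimetry uses 3D kernels derived from MC simulation, and the numbers below are purely illustrative):

```python
def vsv_dose(activity, s_kernel):
    """Voxel S-value dosimetry, reduced to 1D for illustration: the dose map
    is the convolution of the cumulated-activity map with the S-value kernel."""
    half = len(s_kernel) // 2
    dose = []
    for i in range(len(activity)):
        d = 0.0
        for k, s in enumerate(s_kernel):
            j = i + k - half          # neighbouring source voxel index
            if 0 <= j < len(activity):
                d += activity[j] * s  # dose contribution from voxel j
        dose.append(d)
    return dose

activity = [0.0, 10.0, 0.0, 0.0]         # cumulated activity per voxel (a.u.)
s_kernel = [0.1, 0.8, 0.1]               # toy S-values: self-dose + neighbours
dose_map = vsv_dose(activity, s_kernel)  # approx. [1.0, 8.0, 1.0, 0.0]
```

The residual-learning network in the paper then learns the difference between such VSV dose maps and the MC reference, rather than the full dose map itself.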
Affiliation(s)
- Keon Min Kim
- Interdisciplinary Program in Bioengineering, Seoul National University Graduate School, Seoul, 03080 South Korea
- Integrated Major in Innovative Medical Science, Seoul National University Graduate School, Seoul, 03080 South Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, 08826 South Korea
- Min Sun Lee
- Environmental Radioactivity Assessment Team, Nuclear Emergency & Environmental Protection Division, Korea Atomic Energy Research Institute, Daejeon, 34057 Korea
- Min Seok Suh
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 South Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul, 03080 South Korea
- Gi Jeong Cheon
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 South Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul, 03080 South Korea
- Jae Sung Lee
- Interdisciplinary Program in Bioengineering, Seoul National University Graduate School, Seoul, 03080 South Korea
- Integrated Major in Innovative Medical Science, Seoul National University Graduate School, Seoul, 03080 South Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, 08826 South Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 South Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul, 03080 South Korea
7
Shi L, Zhang J, Toyonaga T, Shao D, Onofrey JA, Lu Y. Deep learning-based attenuation map generation with simultaneously reconstructed PET activity and attenuation and low-dose application. Phys Med Biol 2023; 68. [PMID: 36584395; DOI: 10.1088/1361-6560/acaf49]
Abstract
Objective: In PET/CT imaging, CT is used for positron emission tomography (PET) attenuation correction (AC). CT artifacts or misalignment between PET and CT can cause AC artifacts and quantification errors in PET. Simultaneous reconstruction (MLAA) of PET activity (λ-MLAA) and attenuation (μ-MLAA) maps was proposed to solve those issues using the time-of-flight PET raw data only. However, λ-MLAA still suffers from quantification error compared to reconstruction using the gold-standard CT-based attenuation map (μ-CT). Recently, a deep learning (DL)-based framework was proposed to improve MLAA by predicting μ-DL from λ-MLAA and μ-MLAA using an image-domain loss function (IM-loss). However, IM-loss does not directly measure the AC errors according to the PET attenuation physics. Our preliminary studies showed that an additional physics-based loss function can lead to more accurate PET AC. The main objective of this study is to optimize the attenuation map generation framework for clinical full-dose 18F-FDG studies. We also investigate the effectiveness of the optimized network in predicting attenuation maps for synthetic low-dose oncological PET studies. Approach: We optimized the proposed DL framework by applying different preprocessing steps and hyperparameter optimization, including the patch size, the weights of the loss terms, and the number of angles in the projection-domain loss term. The optimization was performed on 100 skull-to-toe 18F-FDG PET/CT scans with minimal misalignment. The optimized framework was further evaluated on 85 clinical full-dose neck-to-thigh 18F-FDG cancer datasets, as well as on synthetic low-dose studies using only 10% of the full-dose raw data. Main results: Clinical evaluation of tumor quantification, together with a physics-based figure-of-merit evaluation, validated the promising performance of the proposed method.
For both full-dose and low-dose studies, the proposed framework achieved <1% error in tumor standardized uptake value measures. Significance: It is of great clinical interest to achieve CT-less PET reconstruction, especially for low-dose PET studies.
Affiliation(s)
- Luyao Shi
- Department of Biomedical Engineering, Yale University, New Haven, CT, United States of America
- Jiazhen Zhang
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Takuya Toyonaga
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Dan Shao
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America; Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong, People's Republic of China
- John A Onofrey
- Department of Biomedical Engineering, Yale University, New Haven, CT, United States of America; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America; Department of Urology, Yale University, New Haven, CT, United States of America
- Yihuan Lu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
8
Leynes AP, Ahn S, Wangerin KA, Kaushik SS, Wiesinger F, Hope TA, Larson PEZ. Attenuation Coefficient Estimation for PET/MRI With Bayesian Deep Learning Pseudo-CT and Maximum-Likelihood Estimation of Activity and Attenuation. IEEE Trans Radiat Plasma Med Sci 2022; 6:678-689. [PMID: 38223528; PMCID: PMC10785227; DOI: 10.1109/trpms.2021.3118325]
Abstract
A major remaining challenge for magnetic resonance-based attenuation correction (MRAC) methods is their susceptibility to sources of magnetic resonance imaging (MRI) artifacts (e.g., implants and motion) and to uncertainties arising from the limitations of MRI contrast (e.g., accurate bone delineation and density, and the separation of air and bone). We propose a Bayesian deep convolutional neural network that, in addition to generating an initial pseudo-CT from MR data, also produces uncertainty estimates of the pseudo-CT to quantify the limitations of the MR data. These outputs are combined with maximum-likelihood estimation of activity and attenuation (MLAA) reconstruction, which uses the PET emission data to improve the attenuation maps. With the proposed approach, uncertainty estimation and pseudo-CT prior for robust MLAA (UpCT-MLAA), we demonstrate accurate estimation of PET uptake in pelvic lesions and show recovery of metal implants. In patients without implants, UpCT-MLAA had acceptable but slightly higher root-mean-squared error (RMSE) than zero-echo-time and Dixon deep pseudo-CT when compared to CT-based AC. In patients with metal implants, MLAA recovered the metal implant; however, the anatomy outside the implant region was obscured by noise and crosstalk artifacts. Attenuation coefficients from the Dixon MRI pseudo-CT were accurate in normal anatomy; however, the metal implant region was assigned the attenuation coefficients of air. UpCT-MLAA estimated the attenuation coefficients of metal implants while accurately depicting the anatomy outside the implant regions.
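A common way to realize the per-voxel uncertainty estimates that a Bayesian network like this produces is to draw several stochastic forward passes over the same input (e.g. via Monte Carlo dropout) and take the per-voxel mean as the prediction and the standard deviation as the uncertainty. The sketch below illustrates that generic mechanism under stated assumptions; it is not the paper's exact implementation, and the HU values are invented:

```python
import statistics

def mc_uncertainty(stochastic_passes):
    """Given several stochastic forward passes over the same input (each a
    flat list of voxel predictions), return the per-voxel mean prediction
    and the per-voxel standard deviation as an uncertainty estimate."""
    means, stds = [], []
    for voxel_samples in zip(*stochastic_passes):
        means.append(statistics.fmean(voxel_samples))
        stds.append(statistics.pstdev(voxel_samples))
    return means, stds

# Three hypothetical pseudo-CT passes over a 2-voxel patch (HU values);
# the second voxel disagrees across passes, so it is flagged as uncertain.
passes = [[100.0, 300.0], [110.0, 500.0], [90.0, 400.0]]
mean_hu, sigma_hu = mc_uncertainty(passes)
```

In UpCT-MLAA, high-uncertainty regions are exactly where the PET emission data (via MLAA) is allowed to override the pseudo-CT prior.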
Affiliation(s)
- Andrew P Leynes
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158 USA
- UC Berkeley-UC San Francisco Joint Graduate Program in Bioengineering, University of California at Berkeley, Berkeley, CA 94720 USA
- Sangtae Ahn
- Biology and Physics Department, GE Research, Niskayuna, NY 12309 USA
- Sandeep S Kaushik
- MR Applications Science Laboratory Europe, GE Healthcare, 80807 Munich, Germany
- Department of Computer Science, Technical University of Munich, 80333 Munich, Germany
- Department of Quantitative Biomedicine, University of Zurich, 8057 Zurich, Switzerland
- Florian Wiesinger
- MR Applications Science Laboratory Europe, GE Healthcare, 80807 Munich, Germany
- Thomas A Hope
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA, USA
- Department of Radiology, San Francisco VA Medical Center, San Francisco, CA 94121 USA
- Peder E Z Larson
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158 USA
- UC Berkeley-UC San Francisco Joint Graduate Program in Bioengineering, University of California at Berkeley, Berkeley, CA 94720 USA
9
Hwang D, Kang SK, Kim KY, Choi H, Lee JS. Comparison of deep learning-based emission-only attenuation correction methods for positron emission tomography. Eur J Nucl Med Mol Imaging 2021; 49:1833-1842. [PMID: 34882262; DOI: 10.1007/s00259-021-05637-0]
Abstract
PURPOSE: This study compares two approaches that use only emission PET data and a convolutional neural network (CNN) to correct for the attenuation (μ) of the annihilation photons in PET. METHODS: One approach uses a CNN to generate μ-maps from the non-attenuation-corrected (NAC) PET images (μ-CNNNAC). In the other, a CNN is used to improve the accuracy of μ-maps generated using maximum likelihood estimation of activity and attenuation (MLAA) reconstruction (μ-CNNMLAA). We investigated the improvement in CNN performance obtained by combining the two methods (μ-CNNMLAA+NAC) and the suitability of μ-CNNNAC for providing the scatter distribution required for MLAA reconstruction. Image data from 18F-FDG (n = 100) or 68Ga-DOTATOC (n = 50) PET/CT scans were used for neural network training and testing. RESULTS: The error of the attenuation correction factors estimated using μ-CT and μ-CNNNAC was over 7%, but that of the scatter estimates was only 2.5%, indicating the validity of scatter estimation from μ-CNNNAC. However, CNNNAC provided less accurate bone structures in the μ-maps, while the best recovery of fine bone structures was obtained by applying CNNMLAA+NAC. Additionally, the μ-values in the lungs were overestimated by CNNNAC. Activity images (λ) corrected for attenuation using μ-CNNMLAA and μ-CNNMLAA+NAC were superior to those corrected using μ-CNNNAC in terms of their similarity to λ-CT. However, the improvement in similarity with λ-CT from combining the CNNNAC and CNNMLAA approaches was insignificant (percent error for lung cancer lesions: λ-CNNNAC = 5.45% ± 7.88%, λ-CNNMLAA = 1.21% ± 5.74%, λ-CNNMLAA+NAC = 1.91% ± 4.78%; percent error for bone cancer lesions: λ-CNNNAC = 1.37% ± 5.16%, λ-CNNMLAA = 0.23% ± 3.81%, λ-CNNMLAA+NAC = 0.05% ± 3.49%). CONCLUSION: The use of CNNNAC was feasible for scatter estimation to address the chicken-and-egg dilemma in MLAA reconstruction, but CNNMLAA outperformed CNNNAC.
Affiliation(s)
- Donghwi Hwang
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, South Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, South Korea
- Seung Kwan Kang
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, South Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, South Korea
- Brightonix Imaging Inc., Seoul, South Korea
- Kyeong Yun Kim
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, South Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Brightonix Imaging Inc., Seoul, South Korea
- Hongyoon Choi
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Jae Sung Lee
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, South Korea.
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea.
- Artificial Intelligence Institute, Seoul National University, Seoul, South Korea.
- Brightonix Imaging Inc., Seoul, South Korea.
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, South Korea.
10
Lee JS, Kim KM, Choi Y, Kim HJ. A Brief History of Nuclear Medicine Physics, Instrumentation, and Data Sciences in Korea. Nucl Med Mol Imaging 2021; 55:265-284. [PMID: 34868376; DOI: 10.1007/s13139-021-00721-7]
Abstract
We review the history of nuclear medicine physics, instrumentation, and data sciences in Korea to commemorate the 60th anniversary of the Korean Society of Nuclear Medicine. In the 1970s and 1980s, the development of SPECT, nuclear stethoscope, and bone densitometry systems, as well as kidney and cardiac image analysis technology, marked the beginning of nuclear medicine physics and engineering in Korea. With the introduction of PET and cyclotron in Korea in 1994, nuclear medicine imaging research was further activated. With the support of large-scale government projects, the development of gamma camera, SPECT, and PET systems was carried out. Exploiting the use of PET scanners in conjunction with cyclotrons, extensive studies on myocardial blood flow quantification and brain image analysis were also actively pursued. In 2005, Korea's first domestic cyclotron succeeded in producing radioactive isotopes, and the cyclotron was provided to six universities and university hospitals, thereby facilitating the nationwide supply of PET radiopharmaceuticals. Since the late 2000s, research on PET/MRI has been actively conducted, and the advanced research results of Korean scientists in the fields of silicon photomultiplier PET and simultaneous PET/MRI have attracted significant attention from the academic community. Currently, Korean researchers are actively involved in endeavors to solve a variety of complex problems in nuclear medicine using artificial intelligence and deep learning technologies.
Affiliation(s)
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 Korea
- Kyeong Min Kim
- Department of Isotopic Drug Development, Korea Radioisotope Center for Pharmaceuticals, Korea Institute of Radiological and Medical Sciences, Seoul, Korea
- Yong Choi
- Department of Electronic Engineering, Sogang University, Seoul, Korea
- Hee-Joung Kim
- Department of Radiological Science, Yonsei University, Wonju, Korea