1
Ying C, Chen Y, Yan Y, Flores S, Laforest R, Benzinger TLS, An H. Accuracy and Longitudinal Consistency of PET/MR Attenuation Correction in Amyloid PET Imaging amid Software and Hardware Upgrades. AJNR Am J Neuroradiol 2025; 46:635-642. [PMID: 39251256] [PMCID: PMC11979810] [DOI: 10.3174/ajnr.a8490]
Abstract
BACKGROUND AND PURPOSE Integrated PET/MR allows the simultaneous acquisition of PET biomarkers and structural and functional MRI to study Alzheimer disease (AD). Attenuation correction (AC), crucial for PET quantification, can be performed by using a deep learning approach, DL-Dixon, based on standard Dixon images. Longitudinal amyloid PET imaging, which provides important information about disease progression or treatment responses in AD, is usually acquired over several years. Hardware and software upgrades often occur during a multiyear study period, resulting in data variability. This study aims to harmonize PET/MR DL-Dixon AC amid software and head coil updates and evaluate its accuracy and longitudinal consistency. MATERIALS AND METHODS Tri-modality PET/MR and CT images were obtained from 329 participants, with a subset of 38 undergoing tri-modality scans twice within approximately 3 years. Transfer learning was used to fine-tune DL-Dixon models on images from 2 scanner software versions (VB20P and VE11P) and 2 head coils (16-channel and 32-channel coils). The accuracy and longitudinal consistency of the DL-Dixon AC were evaluated. Power analyses were performed to estimate the sample size needed to detect various levels of longitudinal changes in the PET standardized uptake value ratio (SUVR). RESULTS The DL-Dixon method demonstrated high accuracy across all data, irrespective of scanner software versions and head coils. More than 95.6% of brain voxels showed less than 10% PET relative absolute error in all participants. The median [interquartile range] PET mean relative absolute error was 1.10% [0.93%, 1.26%], 1.24% [1.03%, 1.54%], 0.99% [0.86%, 1.13%] in the cortical summary region, and 1.04% [0.83%, 1.36%], 1.08% [0.84%, 1.34%], 1.05% [0.72%, 1.32%] in the cerebellum by using the DL-Dixon models for the VB20P 16-channel coil, VE11P 16-channel coil, and VE11P 32-channel coil data, respectively. The within-subject coefficient of variation and intraclass correlation coefficient of PET SUVR in the cortical regions were comparable between the DL-Dixon and CT AC. Power analysis indicated that similar numbers of participants would be needed to detect the same level of PET changes by using DL-Dixon and CT AC. CONCLUSIONS DL-Dixon exhibited excellent accuracy and longitudinal consistency across the 2 software versions and head coils, demonstrating its robustness for longitudinal PET/MR neuroimaging studies in AD.
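The longitudinal-consistency statistics named above can be made concrete with a short sketch. This is illustrative only, not the authors' code: it assumes paired SUVR values from two visits and uses the standard root-mean-square form of the within-subject CV and a one-way random-effects ICC.

```python
import numpy as np

def within_subject_cv(visit1, visit2):
    """Within-subject coefficient of variation for paired measurements.

    Root-mean-square form: wCV = sqrt(mean(var_i / mean_i^2)), where each
    subject contributes a two-point variance (d^2 / 2) and a pair mean.
    """
    v1, v2 = np.asarray(visit1, float), np.asarray(visit2, float)
    pair_mean = (v1 + v2) / 2.0
    pair_var = (v1 - v2) ** 2 / 2.0
    return float(np.sqrt(np.mean(pair_var / pair_mean ** 2)))

def icc_oneway(visit1, visit2):
    """ICC(1,1): one-way random-effects, single-measurement agreement."""
    x = np.stack([visit1, visit2], axis=1).astype(float)
    n, k = x.shape
    ms_between = k * np.sum((x.mean(axis=1) - x.mean()) ** 2) / (n - 1)
    ms_within = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Toy data standing in for cortical SUVR from 38 repeat participants.
rng = np.random.default_rng(0)
true_suvr = rng.normal(1.4, 0.25, size=38)
v1 = true_suvr + rng.normal(0, 0.02, size=38)
v2 = true_suvr + rng.normal(0, 0.02, size=38)
print(f"wCV = {within_subject_cv(v1, v2):.2%}, ICC = {icc_oneway(v1, v2):.3f}")
```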
Affiliation(s)
- Chunwei Ying
- Mallinckrodt Institute of Radiology (C.Y., S.F., R.L., T.L.S.B., H.A.), Washington University School of Medicine, St. Louis, Missouri
- Yasheng Chen
- Department of Neurology (Y.C., H.A.), Washington University School of Medicine, St. Louis, Missouri
- Yan Yan
- Department of Surgery (Y.Y., T.L.S.B.), Washington University School of Medicine, St. Louis, Missouri
- Shaney Flores
- Mallinckrodt Institute of Radiology (C.Y., S.F., R.L., T.L.S.B., H.A.), Washington University School of Medicine, St. Louis, Missouri
- Richard Laforest
- Mallinckrodt Institute of Radiology (C.Y., S.F., R.L., T.L.S.B., H.A.), Washington University School of Medicine, St. Louis, Missouri
- Tammie L S Benzinger
- Mallinckrodt Institute of Radiology (C.Y., S.F., R.L., T.L.S.B., H.A.), Washington University School of Medicine, St. Louis, Missouri
- Knight Alzheimer Disease Research Center (T.L.S.B.), Washington University School of Medicine, St. Louis, Missouri
- Department of Neurosurgery (T.L.S.B.), Washington University School of Medicine, St. Louis, Missouri
- Hongyu An
- Mallinckrodt Institute of Radiology (C.Y., S.F., R.L., T.L.S.B., H.A.), Washington University School of Medicine, St. Louis, Missouri
- Department of Neurology (Y.C., H.A.), Washington University School of Medicine, St. Louis, Missouri
2
Miller RJH, Slomka PJ. Artificial Intelligence in Nuclear Cardiology: An Update and Future Trends. Semin Nucl Med 2024; 54:648-657. [PMID: 38521708] [DOI: 10.1053/j.semnuclmed.2024.02.005]
Abstract
Myocardial perfusion imaging (MPI), using either single photon emission computed tomography (SPECT) or positron emission tomography (PET), is one of the most commonly ordered cardiac imaging tests, with prominent clinical roles for disease diagnosis and risk prediction. Artificial intelligence (AI) could potentially play a role in many steps along the typical MPI workflow, from image acquisition through to clinical reporting and risk estimation. AI can be utilized to improve image quality, reducing radiation exposure and image acquisition times. Once images are acquired, AI can help optimize motion correction and image registration during image reconstruction or provide direct image attenuation correction. Utilizing these image sets, AI can segment a number of anatomic features from associated computed tomographic imaging or even generate synthetic attenuation imaging. Lastly, AI may play an important role in disease diagnosis or risk prediction by combining the large number of potentially important clinical, stress, and imaging-related variables. This review focuses on the most recent developments in the field, providing clinicians and researchers with a timely update. Additionally, it discusses future trends, including applications of AI at multiple points of the typical MPI workflow to maximize clinical utility and methods to maximize the information that can be obtained from hybrid imaging.
Affiliation(s)
- Robert J H Miller
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, Los Angeles, CA; Department of Cardiac Sciences, University of Calgary, Calgary, AB, Canada
- Piotr J Slomka
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, Los Angeles, CA
3
Miller RJH, Slomka PJ. Current status and future directions in artificial intelligence for nuclear cardiology. Expert Rev Cardiovasc Ther 2024; 22:367-378. [PMID: 39001698] [DOI: 10.1080/14779072.2024.2380764]
Abstract
INTRODUCTION Myocardial perfusion imaging (MPI) is one of the most commonly ordered cardiac imaging tests. Accurate motion correction, image registration, and reconstruction are critical for high-quality imaging, but these steps can be technically challenging and have traditionally relied on expert manual processing. With accurate processing, there is a rich variety of clinical, stress, functional, and anatomic data that can be integrated to guide patient management. AREAS COVERED PubMed and Google Scholar were reviewed for articles related to artificial intelligence in nuclear cardiology published between 2020 and 2024. We outline the prominent roles for artificial intelligence (AI) solutions in motion correction, image registration, and reconstruction. We review the role for AI in extracting anatomic data from hybrid MPI, information that is otherwise neglected. Lastly, we discuss AI methods to integrate this wealth of data to improve disease diagnosis or risk stratification. EXPERT OPINION There is growing evidence that AI will transform the performance of MPI by automating and improving on aspects of image acquisition and reconstruction. Physicians and researchers will need to understand the potential strengths of AI in order to benefit from the full clinical utility of MPI.
Affiliation(s)
- Robert J H Miller
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Department of Cardiac Sciences, University of Calgary, Calgary, Canada
- Piotr J Slomka
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, Los Angeles, CA, USA
4
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Frontiers in Radiology 2024; 4:1385742. [PMID: 38601888] [PMCID: PMC11004271] [DOI: 10.3389/fradi.2024.1385742]
Abstract
The aim of this systematic review is to determine whether Deep Learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic Computed Tomography (sCT). The following categories are presented in this study:
- MR-based treatment planning and synthetic CT generation techniques.
- Generation of synthetic CT images based on Cone Beam CT images.
- Low-dose CT to high-dose CT generation.
- Attenuation correction for PET images.
To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning-based approaches to inter-modality and intra-modality image synthesis. This was accomplished by contrasting the provided methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all the cited works were analyzed from various aspects, revealing that DL-based sCTs have achieved considerable popularity while also showing the potential of this technology. To assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani
- Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
5
Li X, Johnson JM, Strigel RM, Bancroft LCH, Hurley SA, Estakhraji SIZ, Kumar M, Fowler AM, McMillan AB. Attenuation correction and truncation completion for breast PET/MR imaging using deep learning. Phys Med Biol 2024; 69:045031. [PMID: 38252969] [DOI: 10.1088/1361-6560/ad2126]
Abstract
Objective. Simultaneous PET/MR scanners combine the high sensitivity of MR imaging with the functional imaging of PET. However, attenuation correction of breast PET/MR imaging is technically challenging. The purpose of this study is to establish a robust attenuation correction algorithm for breast PET/MR images that relies on deep learning (DL) to recreate the missing portions of the patient's anatomy (truncation completion), as well as to provide bone information for attenuation correction from only the PET data. Approach. Data acquired from 23 female subjects with invasive breast cancer scanned with 18F-fluorodeoxyglucose PET/CT and PET/MR localized to the breast region were used for this study. Three DL models, a U-Net with mean absolute error loss (DL-MAE) model, a U-Net with mean squared error loss (DL-MSE) model, and a U-Net with perceptual loss (DL-Perceptual) model, were trained to predict synthetic CT images (sCT) for PET attenuation correction (AC) given non-attenuation-corrected (NAC) PET/MR images as inputs. The DL and Dixon-based sCT reconstructed PET images were compared against those reconstructed from CT images by calculating the percent error of the standardized uptake value (SUV) and conducting Wilcoxon signed rank statistical tests. Main results. sCT images from the DL-MAE model, the DL-MSE model, and the DL-Perceptual model were similar in mean absolute error (MAE), peak signal-to-noise ratio, and normalized cross-correlation. No significant difference in SUV was found between the PET images reconstructed using the DL-MSE and DL-Perceptual sCTs compared to the reference CT for AC in all tissue regions. All DL methods performed better than the Dixon-based method according to the SUV analysis. Significance. A 3D U-Net with an MSE or perceptual loss model can be implemented into a reconstruction workflow, and the derived sCT images allow successful truncation completion and attenuation correction for breast PET/MR images.
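The three training objectives named above differ only in their loss term. A minimal PyTorch sketch of those losses follows; the U-Net itself is omitted, and the use of VGG16 features for the perceptual loss is an assumed illustration, not a detail taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

mae_loss = nn.L1Loss()   # DL-MAE variant
mse_loss = nn.MSELoss()  # DL-MSE variant

class PerceptualLoss(nn.Module):
    """Feature-space MSE between prediction and target (DL-Perceptual).

    Early VGG16 layers serve as a fixed feature extractor (an assumed
    choice); single-channel slices are repeated to three channels to
    match VGG's expected input.
    """
    def __init__(self, n_layers=16):
        super().__init__()
        self.features = vgg16(weights="IMAGENET1K_V1").features[:n_layers].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, pred, target):
        f_pred = self.features(pred.repeat(1, 3, 1, 1))
        f_target = self.features(target.repeat(1, 3, 1, 1))
        return torch.mean((f_pred - f_target) ** 2)

# Usage sketch: sct = unet(nac_input); loss = mae_loss(sct, ct)  # or MSE/perceptual
```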
Affiliation(s)
- Xue Li
- Department of Electrical and Computer Engineering, University of Wisconsin, Madison, WI, United States of America
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Jacob M Johnson
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Roberta M Strigel
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Department of Medical Physics, University of Wisconsin, Madison, WI, United States of America
- University of Wisconsin Carbone Cancer Center, Madison, WI, United States of America
- Leah C Henze Bancroft
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Samuel A Hurley
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- S Iman Zare Estakhraji
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Manoj Kumar
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- ICTR Graduate Program in Clinical Investigation, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Amy M Fowler
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Department of Medical Physics, University of Wisconsin, Madison, WI, United States of America
- University of Wisconsin Carbone Cancer Center, Madison, WI, United States of America
- Alan B McMillan
- Department of Electrical and Computer Engineering, University of Wisconsin, Madison, WI, United States of America
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Department of Medical Physics, University of Wisconsin, Madison, WI, United States of America
- University of Wisconsin Carbone Cancer Center, Madison, WI, United States of America
6
Krokos G, MacKewn J, Dunn J, Marsden P. A review of PET attenuation correction methods for PET-MR. EJNMMI Phys 2023; 10:52. [PMID: 37695384] [PMCID: PMC10495310] [DOI: 10.1186/s40658-023-00569-0]
Abstract
Despite it being thirteen years since the installation of the first PET-MR system, these scanners constitute a very small proportion of the total hybrid PET systems installed. This is in stark contrast to the rapid expansion of PET-CT scanners, which quickly established their importance in patient diagnosis within a similar timeframe. One of the main hurdles is the development of an accurate, reproducible and easy-to-use method for attenuation correction. Quantitative discrepancies in PET images between the manufacturer-provided MR methods and the more established CT- or transmission-based attenuation correction methods have led the scientific community in a continuous effort to develop a robust and accurate alternative. These can be divided into four broad categories: (i) MR-based, (ii) emission-based, (iii) atlas-based and (iv) machine learning-based attenuation correction, which is rapidly gaining momentum. The first is based on segmenting the MR images into various tissues and allocating a predefined attenuation coefficient to each tissue. Emission-based attenuation correction methods aim to utilise the PET emission data by simultaneously reconstructing the radioactivity distribution and the attenuation image. Atlas-based attenuation correction methods aim to predict a CT or transmission image given an MR image of a new patient, by using databases containing CT or transmission images from the general population. Finally, in machine learning methods, a model that can predict the required image given the acquired MR or non-attenuation-corrected PET image is developed by exploiting the underlying features of the images. Deep learning methods are the dominant approach in this category. Compared to more traditional machine learning, which uses structured data for building a model, deep learning makes direct use of the acquired images to identify underlying features. This up-to-date review goes through the literature of attenuation correction approaches in PET-MR after categorising them. The various approaches in each category are described and discussed. After exploring each category separately, a general overview is given of the current status and potential future approaches, along with a comparison of the four outlined categories.
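As a concrete illustration of the first (segmentation-based) category, the sketch below maps tissue labels to predefined 511 keV linear attenuation coefficients. The numeric values are typical literature figures used for illustration only, not values taken from this review.

```python
import numpy as np

# Predefined linear attenuation coefficients (LAC, cm^-1 at 511 keV).
# Illustrative textbook-style values; vendor methods differ in detail.
LAC_511KEV = {
    0: 0.0,      # air
    1: 0.0975,   # lung
    2: 0.0864,   # fat
    3: 0.1000,   # soft tissue
    4: 0.1510,   # bone (often omitted by basic Dixon-based methods)
}

def labels_to_mu_map(label_volume: np.ndarray) -> np.ndarray:
    """Convert a tissue-label volume into a 511 keV attenuation map."""
    mu = np.zeros(label_volume.shape, dtype=np.float32)
    for label, lac in LAC_511KEV.items():
        mu[label_volume == label] = lac
    return mu

labels = np.random.randint(0, 5, size=(4, 64, 64))  # toy segmentation
mu_map = labels_to_mu_map(labels)
print(mu_map.shape, mu_map.max())
```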
Affiliation(s)
- Georgios Krokos
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Jane MacKewn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Joel Dunn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Paul Marsden
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
7
Shi L, Zhang J, Toyonaga T, Shao D, Onofrey JA, Lu Y. Deep learning-based attenuation map generation with simultaneously reconstructed PET activity and attenuation and low-dose application. Phys Med Biol 2023; 68. [PMID: 36584395] [DOI: 10.1088/1361-6560/acaf49]
Abstract
Objective. In PET/CT imaging, CT is used for positron emission tomography (PET) attenuation correction (AC). CT artifacts or misalignment between PET and CT can cause AC artifacts and quantification errors in PET. Simultaneous reconstruction (MLAA) of PET activity (λ-MLAA) and attenuation (μ-MLAA) maps was proposed to solve those issues using the time-of-flight PET raw data only. However, λ-MLAA still suffers from quantification error as compared to reconstruction using the gold-standard CT-based attenuation map (μ-CT). Recently, a deep learning (DL)-based framework was proposed to improve MLAA by predicting μ-DL from λ-MLAA and μ-MLAA using an image domain loss function (IM-loss). However, IM-loss does not directly measure the AC errors according to the PET attenuation physics. Our preliminary studies showed that an additional physics-based loss function can lead to more accurate PET AC. The main objective of this study is to optimize the attenuation map generation framework for clinical full-dose 18F-FDG studies. We also investigate the effectiveness of the optimized network on predicting attenuation maps for synthetic low-dose oncological PET studies. Approach. We optimized the proposed DL framework by applying different preprocessing steps and hyperparameter optimization, including patch size, weights of the loss terms, and the number of angles in the projection-domain loss term. The optimization was performed based on 100 skull-to-toe 18F-FDG PET/CT scans with minimal misalignment. The optimized framework was further evaluated on 85 clinical full-dose neck-to-thigh 18F-FDG cancer datasets as well as synthetic low-dose studies with only 10% of the full-dose raw data. Main results. Clinical evaluation of tumor quantification as well as physics-based figure-of-merit metric evaluation validated the promising performance of our proposed method. For both full-dose and low-dose studies, the proposed framework achieved <1% error in tumor standardized uptake value measures. Significance. It is of great clinical interest to achieve CT-less PET reconstruction, especially for low-dose PET studies.
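The idea behind the projection-domain loss term can be sketched in NumPy: alongside the usual image-domain error, attenuation maps are compared through their line integrals over a handful of angles, which is closer to how μ-map errors propagate into AC. The weighting and the use of scikit-image's radon transform here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from skimage.transform import radon

def combined_mu_loss(mu_pred, mu_ct, n_angles=8, w_img=1.0, w_proj=1.0):
    """Image-domain L1 plus a projection-domain L1 over a few angles."""
    img_term = np.mean(np.abs(mu_pred - mu_ct))
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    proj_term = np.mean(np.abs(radon(mu_pred, theta) - radon(mu_ct, theta)))
    return w_img * img_term + w_proj * proj_term

# Toy 2D attenuation maps (cm^-1): a soft-tissue disc with a 2% bias.
yy, xx = np.mgrid[-32:32, -32:32]
mu_ct = np.where(xx ** 2 + yy ** 2 < 28 ** 2, 0.096, 0.0)
mu_pred = mu_ct * 1.02
print(f"loss = {combined_mu_loss(mu_pred, mu_ct):.4f}")
```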
Affiliation(s)
- Luyao Shi
- Department of Biomedical Engineering, Yale University, New Haven, CT, United States of America
- Jiazhen Zhang
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Takuya Toyonaga
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Dan Shao
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong, People's Republic of China
- John A Onofrey
- Department of Biomedical Engineering, Yale University, New Haven, CT, United States of America
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Department of Urology, Yale University, New Haven, CT, United States of America
- Yihuan Lu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
8
Ghezzo S, Bezzi C, Neri I, Mapelli P, Presotto L, Gajate AMS, Bettinardi V, Garibotto V, De Cobelli F, Scifo P, Picchio M. Radiomics and artificial intelligence. Clinical PET/MRI 2023:365-401. [DOI: 10.1016/b978-0-323-88537-9.00002-7]
9
A Systematic Literature Review on Applications of GAN-Synthesized Images for Brain MRI. Future Internet 2022. [DOI: 10.3390/fi14120351]
Abstract
With the advances in brain imaging, magnetic resonance imaging (MRI) is evolving as a popular radiological tool in clinical diagnosis. Deep learning (DL) methods can detect abnormalities in brain images without an extensive manual feature extraction process. Generative adversarial network (GAN)-synthesized images have many applications in this field besides augmentation, such as image translation, registration, super-resolution, denoising, motion correction, segmentation, reconstruction, and contrast enhancement. The existing literature was reviewed systematically to understand the role of GAN-synthesized dummy images in brain disease diagnosis. Web of Science and Scopus databases were extensively searched to find relevant studies from the last 6 years to write this systematic literature review (SLR). Predefined inclusion and exclusion criteria helped in filtering the search results. Data extraction is based on related research questions (RQ). This SLR identifies various loss functions used in the above applications and software to process brain MRIs. A comparative study of existing evaluation metrics for GAN-synthesized images helps choose the proper metric for an application. GAN-synthesized images will have a crucial role in the clinical sector in the coming years, and this paper gives a baseline for other researchers in the field.
10
Ahangari S, Beck Olin A, Kinggård Federspiel M, Jakoby B, Andersen TL, Hansen AE, Fischer BM, Littrup Andersen F. A deep learning-based whole-body solution for PET/MRI attenuation correction. EJNMMI Phys 2022; 9:55. [PMID: 35978211] [PMCID: PMC9385907] [DOI: 10.1186/s40658-022-00486-8]
Abstract
BACKGROUND Deep convolutional neural networks have demonstrated robust and reliable PET attenuation correction (AC) as an alternative to conventional AC methods in integrated PET/MRI systems. However, whole-body implementation is still challenging due to anatomical variations and the limited MRI field of view. The aim of this study is to investigate a deep learning (DL) method to generate voxel-based synthetic CT (sCT) from Dixon MRI and use it as a whole-body solution for PET AC in a PET/MRI system. MATERIALS AND METHODS Fifteen patients underwent PET/CT followed by PET/MRI with whole-body coverage from skull to feet. We performed MRI truncation correction and employed co-registered MRI and CT images for training and leave-one-out cross-validation. The network was pretrained with region-specific images. The accuracy of the AC maps and the reconstructed PET images was assessed by performing a voxel-wise analysis and calculating the quantification error in SUV obtained using DL-based sCT (PETsCT) and a vendor-provided atlas-based method (PETAtlas), with the CT-based reconstruction (PETCT) serving as the reference. In addition, region-specific analysis was performed to compare the performance of the methods in brain, lung, liver, spine, pelvic bone, and aorta. RESULTS Our DL-based method resulted in better estimates of AC maps, with a mean absolute error of 62 HU, compared to 109 HU for the atlas-based method. We found an excellent voxel-by-voxel correlation between PETCT and PETsCT (R2 = 0.98). The absolute percentage difference in PET quantification for the entire image was 6.1% for PETsCT and 11.2% for PETAtlas. The regional analysis showed that the average errors and the variability for PETsCT were lower than for PETAtlas in all regions. The largest errors were observed in the lung, while the smallest biases were observed in the brain and liver. CONCLUSIONS Experimental results demonstrated that a DL approach for whole-body PET AC in PET/MRI is feasible and allows for more accurate results compared with conventional methods. Further evaluation using a larger training cohort is required for more accurate and robust performance.
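With only fifteen patients, the leave-one-out protocol described above is worth making explicit. The loop below is a minimal sketch; the train/evaluate functions are hypothetical stand-ins for the Dixon-to-sCT network and its HU-error evaluation, and the numbers are synthetic.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

def train_sct_model(train_subjects):
    """Hypothetical stand-in for fitting the Dixon-MR -> sCT network."""
    return {"n_train": len(train_subjects)}

def evaluate_mae_hu(model, test_subject, rng):
    """Hypothetical stand-in for the held-out HU-error evaluation."""
    return rng.normal(62.0, 10.0)  # synthetic MAE in HU

rng = np.random.default_rng(42)
subjects = np.arange(15)  # one entry per patient
fold_maes = [
    evaluate_mae_hu(train_sct_model(subjects[tr]), subjects[te], rng)
    for tr, te in LeaveOneOut().split(subjects)
]
print(f"LOO mean MAE: {np.mean(fold_maes):.1f} HU over {len(fold_maes)} folds")
```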
Affiliation(s)
- Sahar Ahangari
- Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Anders Beck Olin
- Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Thomas Lund Andersen
- Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Adam Espe Hansen
- Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Department of Diagnostic Radiology, Rigshospitalet, Copenhagen, Denmark
- Barbara Malene Fischer
- Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Flemming Littrup Andersen
- Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
11
Leynes AP, Ahn S, Wangerin KA, Kaushik SS, Wiesinger F, Hope TA, Larson PEZ. Attenuation Coefficient Estimation for PET/MRI With Bayesian Deep Learning Pseudo-CT and Maximum-Likelihood Estimation of Activity and Attenuation. IEEE Transactions on Radiation and Plasma Medical Sciences 2022; 6:678-689. [PMID: 38223528] [PMCID: PMC10785227] [DOI: 10.1109/trpms.2021.3118325]
Abstract
A major remaining challenge for magnetic resonance-based attenuation correction methods (MRAC) is their susceptibility to sources of magnetic resonance imaging (MRI) artifacts (e.g., implants and motion) and uncertainties due to the limitations of MRI contrast (e.g., accurate bone delineation and density, and separation of air/bone). We propose using a Bayesian deep convolutional neural network that, in addition to generating an initial pseudo-CT from MR data, also produces uncertainty estimates of the pseudo-CT to quantify the limitations of the MR data. These outputs are combined with the maximum-likelihood estimation of activity and attenuation (MLAA) reconstruction, which uses the PET emission data to improve the attenuation maps. With the proposed approach, uncertainty estimation and pseudo-CT prior for robust MLAA (UpCT-MLAA), we demonstrate accurate estimation of PET uptake in pelvic lesions and show recovery of metal implants. In patients without implants, UpCT-MLAA had acceptable but slightly higher root-mean-squared error (RMSE) than zero-echo-time and Dixon deep pseudo-CT methods when compared to CTAC. In patients with metal implants, MLAA recovered the metal implant; however, anatomy outside the implant region was obscured by noise and crosstalk artifacts. Attenuation coefficients from the pseudo-CT derived from Dixon MRI were accurate in normal anatomy; however, the metal implant region was estimated to have the attenuation coefficients of air. UpCT-MLAA estimated attenuation coefficients of metal implants alongside accurate anatomic depiction outside of implant regions.
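One common way to obtain a "Bayesian" pseudo-CT mean and a voxelwise uncertainty map of the kind described above is Monte Carlo dropout. The toy network below only illustrates that sampling procedure under that assumption; it is not the authors' architecture.

```python
import torch
import torch.nn as nn

# Toy conv net with dropout kept active at inference (MC dropout).
net = nn.Sequential(
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.2),                 # source of stochasticity
    nn.Conv2d(16, 1, 3, padding=1),
)

def mc_dropout_predict(model, x, n_samples=20):
    """Predictive mean and std over stochastic forward passes."""
    model.train()  # keeps dropout on; fine here since there is no batchnorm
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

mr = torch.randn(1, 2, 64, 64)           # e.g., a Dixon water/fat pair
pseudo_ct_mean, pseudo_ct_std = mc_dropout_predict(net, mr)
print(pseudo_ct_mean.shape, pseudo_ct_std.mean().item())
```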
Affiliation(s)
- Andrew P Leynes
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158 USA
- UC Berkeley-UC San Francisco Joint Graduate Program in Bioengineering, University of California at Berkeley, Berkeley, CA 94720 USA
- Sangtae Ahn
- Biology and Physics Department, GE Research, Niskayuna, NY 12309 USA
- Sandeep S Kaushik
- MR Applications Science Laboratory Europe, GE Healthcare, 80807 Munich, Germany
- Department of Computer Science, Technical University of Munich, 80333 Munich, Germany
- Department of Quantitative Biomedicine, University of Zurich, 8057 Zurich, Switzerland
- Florian Wiesinger
- MR Applications Science Laboratory Europe, GE Healthcare, 80807 Munich, Germany
- Thomas A Hope
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA, USA
- Department of Radiology, San Francisco VA Medical Center, San Francisco, CA 94121 USA
- Peder E Z Larson
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158 USA
- UC Berkeley-UC San Francisco Joint Graduate Program in Bioengineering, University of California at Berkeley, Berkeley, CA 94720 USA
12
Applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging: A review. Eur J Nucl Med Mol Imaging 2022; 49:3717-3739. [PMID: 35451611] [DOI: 10.1007/s00259-022-05805-w]
Abstract
PURPOSE This paper reviews recent applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging. Recent advances in Deep Learning (DL) and GANs have catalysed research into their applications in medical imaging modalities. As a result, several unique GAN topologies have emerged and been assessed in an experimental environment over the last two years. METHODS The present work extensively describes GAN architectures and their applications in PET imaging. The identification of relevant publications was performed via approved publication indexing websites and repositories. Web of Science, Scopus, and Google Scholar were the major sources of information. RESULTS The research identified one hundred articles that address PET imaging applications such as attenuation correction, de-noising, scatter correction, removal of artefacts, image fusion, high-dose image estimation, super-resolution, segmentation, and cross-modality synthesis. These applications are presented and accompanied by the corresponding research works. CONCLUSION GANs are being rapidly adopted for PET imaging tasks. However, specific limitations must be eliminated to reach their full potential and gain the medical community's trust in everyday clinical practice.
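For readers unfamiliar with the adversarial setup these applications share, a bare-bones PyTorch sketch follows. Shapes and networks are toy stand-ins, and the NAC-PET-to-CT pairing is chosen only as an example domain.

```python
import torch
import torch.nn as nn

# Generator maps source images (e.g., NAC PET) to synthetic targets;
# discriminator scores realism of target-domain images.
G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1))
D = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Flatten(), nn.LazyLinear(1))
bce = nn.BCEWithLogitsLoss()

x = torch.randn(4, 1, 32, 32)      # source domain batch (e.g., NAC PET)
y = torch.randn(4, 1, 32, 32)      # target domain batch (e.g., CT)

fake = G(x)
# Discriminator: real -> 1, fake -> 0 (fake detached for D's update).
d_loss = bce(D(y), torch.ones(4, 1)) + bce(D(fake.detach()), torch.zeros(4, 1))
# Generator: tries to make D label its output as real.
g_loss = bce(D(fake), torch.ones(4, 1))
print(f"d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")
```

In a real training loop the two losses drive alternating optimizer steps; conditional variants such as pix2pix add a paired reconstruction term.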
13
Usui K, Ogawa K, Goto M, Sakano Y, Kyougoku S, Daida H. A cycle generative adversarial network for improving the quality of four-dimensional cone-beam computed tomography images. Radiat Oncol 2022; 17:69. [PMID: 35392947] [PMCID: PMC8991563] [DOI: 10.1186/s13014-022-02042-1]
Abstract
BACKGROUND Four-dimensional cone-beam computed tomography (4D-CBCT) can visualize moving tumors; thus, adaptive radiation therapy (ART) could be improved if 4D-CBCT were used. However, 4D-CBCT images suffer from severe imaging artifacts. The aim of this study is to investigate the use of synthetic 4D-CBCT (sCT) images created by a cycle generative adversarial network (CycleGAN) for ART for lung cancer. METHODS Unpaired thoracic 4D-CBCT images and four-dimensional multislice computed tomography (4D-MSCT) images of 20 lung-cancer patients were used for training. High-quality sCT lung images generated by the CycleGAN model were tested on another 10 cases. The mean and mean absolute errors were calculated to assess changes in the computed tomography number. The structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) were used to compare the sCT and original 4D-CBCT images. Moreover, a volumetric modulated arc therapy plan with a dose of 48 Gy in four fractions was recalculated using the sCT images and compared with ideal dose distributions observed in 4D-MSCT images. RESULTS The generated sCT images had fewer artifacts, and lung tumor regions were clearly observed in the sCT images. The mean and mean absolute errors were near 0 Hounsfield units in all organ regions. The SSIM and PSNR results were significantly improved in the sCT images, by approximately 51% and 18%, respectively. Moreover, the results of gamma analysis were significantly improved; the pass rate reached over 90% in the doses recalculated using the sCT images. Each organ dose index of the sCT images agreed well with those of the 4D-MSCT images, within approximately 5%. CONCLUSIONS The proposed CycleGAN enhances the quality of 4D-CBCT images, making them comparable to 4D-MSCT images. Thus, clinical implementation of sCT-based ART for lung cancer is feasible.
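The cycle-consistency constraint at the heart of CycleGAN can be sketched briefly: two generators map between the unpaired domains, and each round trip must reproduce its input. Adversarial terms are omitted and the networks are toy stand-ins, so this is illustrative only, not the paper's model.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1))   # CBCT -> synthetic CT
F = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1))   # CT -> synthetic CBCT
l1 = nn.L1Loss()
lambda_cyc = 10.0  # common weighting; a tunable hyperparameter

cbct = torch.randn(2, 1, 64, 64)   # unpaired samples from each domain
ct = torch.randn(2, 1, 64, 64)

# Round trips must reconstruct the inputs: F(G(cbct)) ~ cbct, G(F(ct)) ~ ct.
cycle_loss = l1(F(G(cbct)), cbct) + l1(G(F(ct)), ct)
total_g_loss = lambda_cyc * cycle_loss   # plus adversarial terms in practice
print(f"cycle loss = {cycle_loss.item():.3f}")
```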
Affiliation(s)
- Keisuke Usui
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, 2-1-1, Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
- Koichi Ogawa
- Department of Applied Informatics, Faculty of Science and Engineering, Hosei University, 3-7-3, Kajino, Koganei, Tokyo, 184-8584, Japan
- Masami Goto
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, 2-1-1, Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
- Yasuaki Sakano
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, 2-1-1, Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
- Shinsuke Kyougoku
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, 2-1-1, Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
- Hiroyuki Daida
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, 2-1-1, Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
14
Generative Adversarial Networks in Brain Imaging: A Narrative Review. J Imaging 2022; 8:83. [PMID: 35448210] [PMCID: PMC9028488] [DOI: 10.3390/jimaging8040083]
Abstract
Artificial intelligence (AI) is expected to have a major effect on radiology, as it has demonstrated remarkable progress in many clinical tasks, mostly regarding the detection, segmentation, classification, monitoring, and prediction of diseases. Generative adversarial networks (GANs) have been proposed as one of the most exciting applications of deep learning in radiology. GANs are a new approach to deep learning that leverages adversarial learning to tackle a wide array of computer vision challenges. Brain radiology was one of the first fields where GANs found their application. In neuroradiology, indeed, GANs open unexplored scenarios, allowing new processes such as image-to-image and cross-modality synthesis, image reconstruction, image segmentation, image synthesis, data augmentation, disease progression models, and brain decoding. In this narrative review, we provide an introduction to GANs in brain imaging, discussing the clinical potential of GANs, future clinical applications, as well as pitfalls that radiologists should be aware of.
15
Minoshima S, Cross D. Application of artificial intelligence in brain molecular imaging. Ann Nucl Med 2022; 36:103-110. [PMID: 35028878] [DOI: 10.1007/s12149-021-01697-2]
Abstract
Initial development of artificial intelligence (AI) and machine learning (ML) dates back to the mid-twentieth century. A growing awareness of the potential for AI, as well as increases in computational resources, research, and investment, are rapidly advancing AI applications to medical imaging and, specifically, brain molecular imaging. AI/ML can improve imaging operations and decision making, and potentially perform tasks that are not readily possible for physicians, such as predicting disease prognosis and identifying latent relationships from multi-modal clinical information. The number of applications of image-based AI algorithms, such as convolutional neural networks (CNNs), is increasing rapidly. The applications for brain molecular imaging (MI) include image denoising, PET and PET/MRI attenuation correction, image segmentation and lesion detection, parametric image formation, and the detection/diagnosis of Alzheimer's disease and other brain disorders. When effectively used, AI will likely improve the quality of patient care, instead of replacing radiologists. A regulatory framework is being developed to facilitate AI adaptation for medical imaging.
Affiliation(s)
- Satoshi Minoshima
- Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A071, Salt Lake City, UT, 84132, USA
- Donna Cross
- Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A071, Salt Lake City, UT, 84132, USA
16
Decuyper M, Maebe J, Van Holen R, Vandenberghe S. Artificial intelligence with deep learning in nuclear medicine and radiology. EJNMMI Phys 2021; 8:81. [PMID: 34897550] [PMCID: PMC8665861] [DOI: 10.1186/s40658-021-00426-y]
Abstract
The use of deep learning in medical imaging has increased rapidly over the past few years, finding applications throughout the entire radiology pipeline, from improved scanner performance to automatic disease detection and diagnosis. These advancements have resulted in a wide variety of deep learning approaches being developed, solving unique challenges for various imaging modalities. This paper provides a review of these developments from a technical point of view, categorizing the different methodologies and summarizing their implementation. We provide an introduction to the design of neural networks and their training procedure, after which we take an extended look at their uses in medical imaging. We cover the different sections of the radiology pipeline, highlighting some influential works and discussing the merits and limitations of deep learning approaches compared to other traditional methods. As such, this review is intended to provide a broad yet concise overview for the interested reader, facilitating adoption and interdisciplinary research of deep learning in the field of medical imaging.
Affiliation(s)
- Milan Decuyper
- Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
- Jens Maebe
- Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
- Roel Van Holen
- Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
- Stefaan Vandenberghe
- Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
17
Optical to Planar X-ray Mouse Image Mapping in Preclinical Nuclear Medicine Using Conditional Adversarial Networks. J Imaging 2021; 7:262. [PMID: 34940729] [PMCID: PMC8704599] [DOI: 10.3390/jimaging7120262]
Abstract
In the current work, a pix2pix conditional generative adversarial network has been evaluated as a potential solution for generating adequately accurate synthesized morphological X-ray images by translating standard photographic images of mice. Such an approach will benefit 2D functional molecular imaging techniques, such as planar radioisotope and/or fluorescence/bioluminescence imaging, by providing high-resolution information for anatomical mapping, but not for diagnosis, using conventional photographic sensors. Planar functional imaging offers an efficient alternative to ex vivo biodistribution studies and/or 3D high-end molecular imaging systems, since it can be effectively used to track new tracers and study accumulation from time zero post-injection. The superimposition of functional information with an artificially produced X-ray image may enhance overall image information in such systems without added complexity and cost. The network was trained on 700 paired mouse images (photographic input/X-ray ground truth) and evaluated using a test dataset composed of 80 photographic images and 80 ground truth X-ray images. Performance metrics such as peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM) and Fréchet inception distance (FID) were used to quantitatively evaluate the proposed approach on the acquired dataset.
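Two of the reported metrics are straightforward to compute with scikit-image, as the sketch below shows on synthetic arrays; FID additionally requires a pretrained Inception network and sets of images, so it is omitted here.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Toy paired evaluation: a ground-truth "X-ray" and a noisy synthesized one.
rng = np.random.default_rng(0)
gt = rng.random((256, 256)).astype(np.float32)
synth = np.clip(gt + rng.normal(0, 0.05, gt.shape), 0, 1).astype(np.float32)

psnr = peak_signal_noise_ratio(gt, synth, data_range=1.0)
ssim = structural_similarity(gt, synth, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```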
18
McMillan AB, Bradshaw TJ. Artificial Intelligence-Based Data Corrections for Attenuation and Scatter in Positron Emission Tomography and Single-Photon Emission Computed Tomography. PET Clin 2021; 16:543-552. [PMID: 34364816] [PMCID: PMC10562009] [DOI: 10.1016/j.cpet.2021.06.010]
Abstract
Recent developments in artificial intelligence (AI) technology have enabled new approaches that can improve attenuation and scatter correction in PET and single-photon emission computed tomography (SPECT). These technologies will enable accurate and quantitative imaging without the need to acquire a computed tomography image, greatly expanding the capability of PET/MR imaging, PET-only, and SPECT-only scanners. The use of AI to aid in scatter correction will lead to improvements in image reconstruction speed and patient throughput. This article outlines the use of these new tools, surveys contemporary implementation, and discusses their limitations.
Affiliation(s)
- Alan B McMillan
- Department of Radiology, University of Wisconsin, 3252 Clinical Science Center, 600 Highland Avenue, Madison, WI 53792, USA
- Tyler J Bradshaw
- Department of Radiology, University of Wisconsin, 3252 Clinical Science Center, 600 Highland Avenue, Madison, WI 53792, USA
19
Spadea MF, Maspero M, Zaffino P, Seco J. Deep learning based synthetic-CT generation in radiotherapy and PET: A review. Med Phys 2021; 48:6537-6566. [PMID: 34407209] [DOI: 10.1002/mp.15150]
Abstract
Recently, deep learning (DL)-based methods for the generation of synthetic computed tomography (sCT) have received significant research attention as an alternative to classical ones. We present here a systematic review of these methods by grouping them into three categories, according to their clinical applications: (i) to replace computed tomography in magnetic resonance (MR)-based treatment planning, (ii) to facilitate cone-beam computed tomography-based image-guided adaptive radiotherapy, and (iii) to derive attenuation maps for the correction of positron emission tomography. Appropriate database searching was performed on journal articles published between January 2014 and December 2020. The DL methods' key characteristics were extracted from each eligible study, and a comprehensive comparison among network architectures and metrics was reported. A detailed review of each category was given, highlighting essential contributions, identifying specific challenges, and summarizing the achievements. Lastly, the statistics of all the cited works were analyzed from various aspects, revealing the popularity, future trends, and potential of DL-based sCT generation. The current status of DL-based sCT generation was evaluated, assessing the clinical readiness of the presented methods.
Affiliation(s)
- Maria Francesca Spadea
- Department of Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
- Matteo Maspero
- Division of Imaging & Oncology, Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands
- Paolo Zaffino
- Department of Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
- Joao Seco
- Division of Biomedical Physics in Radiation Oncology, DKFZ German Cancer Research Center, Heidelberg, Germany
- Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
20
Sari H, Reaungamornrat J, Catalano O, Vera-Olmos J, Izquierdo-Garcia D, Morales MA, Torrado-Carvajal A, Ng SCT, Malpica N, Kamen A, Catana C. Evaluation of Deep Learning-based Approaches to Segment Bowel Air Pockets and Generate Pelvis Attenuation Maps from CAIPIRINHA-accelerated Dixon MR Images. J Nucl Med 2021; 63:468-475. [PMID: 34301782] [DOI: 10.2967/jnumed.120.261032]
Abstract
Attenuation correction (AC) remains a challenge in pelvis PET/MR imaging. In addition to the segmentation/model-based approaches, deep learning methods have shown promise in synthesizing accurate pelvis attenuation maps (μ-maps). However, these methods often misclassify air pockets in the digestive tract, which can introduce bias in the reconstructed PET images. The aims of this work were to develop deep learning-based methods to automatically segment air pockets and generate pseudo-CT images from CAIPIRINHA-accelerated MR Dixon images. Methods: A convolutional neural network (CNN) was trained to segment air pockets using 3D CAIPIRINHA-accelerated MR Dixon datasets from 35 subjects and was evaluated against semi-automated segmentations. A separate CNN was trained to synthesize pseudo-CT μ-maps from the Dixon images. Its accuracy was evaluated by comparing the deep learning-, model- and CT-based μ-maps using data from 30 of the subjects. Finally, the impact of different μ-maps and air pocket segmentation methods on the PET quantification was investigated. Results: Air pockets segmented using the CNN agreed well with semi-automated segmentations, with a mean Dice similarity coefficient of 0.75. The volumetric similarity score between the two segmentations was 0.85 ± 0.14. The mean absolute relative changes (RCs) with respect to the CT-based μ-maps were 2.6% and 5.1% in the whole pelvis for the deep learning- and model-based μ-maps, respectively. The average RC between PET images reconstructed with deep learning- and CT-based μ-maps was 2.6%. Conclusion: We presented a deep learning-based method to automatically segment air pockets from CAIPIRINHA-accelerated Dixon images with accuracy comparable to semi-automatic segmentations. We also showed that μ-maps synthesized from CAIPIRINHA-accelerated Dixon images using a deep learning-based method are more accurate than those generated with the model-based approach available on the integrated PET/MRI scanner.
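For reference, the two segmentation-agreement scores reported above can be computed as follows; the masks here are synthetic toys, not study data.

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def volumetric_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """VS = 1 - |V_a - V_b| / (V_a + V_b), a volume-only agreement score."""
    va, vb = int(a.astype(bool).sum()), int(b.astype(bool).sum())
    return 1.0 - abs(va - vb) / (va + vb) if (va + vb) else 1.0

# Toy air-pocket masks: CNN output vs. semi-automated reference.
rng = np.random.default_rng(1)
ref = rng.random((32, 32, 32)) > 0.7
pred = np.logical_xor(ref, rng.random(ref.shape) > 0.95)  # perturbed copy
print(f"Dice = {dice_coefficient(pred, ref):.2f}, "
      f"VS = {volumetric_similarity(pred, ref):.2f}")
```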
Affiliation(s)
- Hasan Sari
- Athinoula A. Martinos Center for Biomedical Imaging, United States
- Onofrio Catalano
- Athinoula A. Martinos Center for Biomedical Imaging, United States
- Ali Kamen
- Siemens Corporate Research, United States
- Ciprian Catana
- Athinoula A. Martinos Center for Biomedical Imaging, United States
21
Kläser K, Varsavsky T, Markiewicz P, Vercauteren T, Hammers A, Atkinson D, Thielemans K, Hutton B, Cardoso MJ, Ourselin S. Imitation learning for improved 3D PET/MR attenuation correction. Med Image Anal 2021; 71:102079. [PMID: 33951598] [PMCID: PMC7611431] [DOI: 10.1016/j.media.2021.102079]
Abstract
The assessment of the quality of synthesised/pseudo computed tomography (pCT) images is commonly measured by an intensity-wise similarity between the ground truth CT and the pCT. However, when using the pCT as an attenuation map (μ-map) for PET reconstruction in positron emission tomography/magnetic resonance imaging (PET/MRI), minimising the error between pCT and CT neglects the main objective: predicting a pCT that, when used as a μ-map, reconstructs a pseudo PET (pPET) that is as similar as possible to the gold-standard CT-derived PET reconstruction. This observation motivated us to propose a novel multi-hypothesis deep learning framework explicitly aimed at the PET reconstruction application. A convolutional neural network (CNN) synthesises pCTs by minimising a combination of the pixel-wise error between pCT and CT and a novel metric-loss that is itself defined by a CNN and aims to minimise the consequent PET residuals. Training is performed on a database of twenty 3D MR/CT/PET brain image pairs. Quantitative results on a fully independent dataset of twenty-three 3D MR/CT/PET image pairs show that the network is able to synthesise more accurate pCTs, with a mean absolute error on the pCT of 110.98 HU ± 19.22 HU compared to a baseline CNN (172.12 HU ± 19.61 HU) and a multi-atlas propagation approach (153.40 HU ± 18.68 HU). This subsequently leads to a significant improvement in the PET reconstruction error (4.74% ± 1.52%, compared to 13.72% ± 2.48% for the baseline and 6.68% ± 2.06% for multi-atlas propagation).
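A schematic of the two-term objective described above follows, with toy networks standing in for both the synthesis CNN and a pretrained, frozen metric-loss CNN; the architectures and the weighting are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Synthesis network: MR -> pseudo-CT (toy stand-in).
synthesis_net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(8, 1, 3, padding=1))
# Metric network: scores how much PET-reconstruction error a pCT would
# cause (toy stand-in; assumed pretrained, hence frozen below).
metric_net = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                           nn.Linear(8, 1))
for p in metric_net.parameters():
    p.requires_grad_(False)

mr = torch.randn(2, 1, 64, 64)
ct = torch.randn(2, 1, 64, 64)
alpha = 0.1  # assumed weighting between the two terms

pct = synthesis_net(mr)
pixel_loss = nn.functional.l1_loss(pct, ct)
metric_loss = metric_net(torch.cat([pct, ct], dim=1)).abs().mean()
loss = pixel_loss + alpha * metric_loss
loss.backward()  # gradients reach synthesis_net through both terms
```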
Affiliation(s)
- Kerstin Kläser
- Department of Medical Physics & Biomedical Engineering, University College London, London WC1E 6BT, UK
- School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
- Thomas Varsavsky
- Department of Medical Physics & Biomedical Engineering, University College London, London WC1E 6BT, UK
- School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
- Pawel Markiewicz
- Department of Medical Physics & Biomedical Engineering, University College London, London WC1E 6BT, UK
- School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
- Tom Vercauteren
- School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
- Alexander Hammers
- School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
- King's College London & GSTT PET Centre, St. Thomas' Hospital, London, UK
- David Atkinson
- Centre for Medical Imaging, University College London, London W1W 7TS, UK
- Kris Thielemans
- Institute of Nuclear Medicine, University College London, London NW1 2BU, UK
- Brian Hutton
- Institute of Nuclear Medicine, University College London, London NW1 2BU, UK
- M J Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
- Sébastien Ourselin
- School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
22
Gong K, Han PK, Johnson KA, El Fakhri G, Ma C, Li Q. Attenuation correction using deep learning and integrated UTE/multi-echo Dixon sequence: evaluation in amyloid and tau PET imaging. Eur J Nucl Med Mol Imaging 2021; 48:1351-1361. [PMID: 33108475] [PMCID: PMC8411350] [DOI: 10.1007/s00259-020-05061-w]
Abstract
PURPOSE PET measures of amyloid and tau pathologies are powerful biomarkers for the diagnosis and monitoring of Alzheimer's disease (AD). Because cortical regions are close to bone, the quantitation accuracy of amyloid and tau PET imaging can be significantly influenced by errors in attenuation correction (AC). This work presents an MR-based AC method that combines deep learning with a novel ultrashort time-to-echo (UTE)/multi-echo Dixon (mUTE) sequence for amyloid and tau imaging. METHODS Thirty-five subjects who underwent both 11C-PiB and 18F-MK6240 scans were included in this study. The proposed method was compared with the Dixon-based atlas method as well as magnetization-prepared rapid acquisition with gradient echo (MPRAGE)- or Dixon-based deep learning methods. The Dice coefficient and validation loss of the generated pseudo-CT images were used for comparison. PET error images regarding the standardized uptake value ratio (SUVR) were quantified through regional and surface analysis to evaluate the final AC accuracy. RESULTS The Dice coefficients of the deep learning methods based on MPRAGE, Dixon, and mUTE images were 0.84 (0.91), 0.84 (0.92), and 0.87 (0.94) for the whole-brain (above-eye) bone regions, respectively, higher than the atlas method's 0.52 (0.64). The regional SUVR error for the atlas method was around 6%, higher than the regional SUV error. The regional SUV and SUVR errors for all deep learning methods were below 2%, with the mUTE-based deep learning method performing best. As for the surface analysis, the atlas method showed the largest error (>10%) near vertices inside the superior frontal, lateral occipital, superior parietal, and inferior temporal cortices. The mUTE-based deep learning method resulted in the fewest regions with error higher than 1%, with the largest error (>5%) showing up near the inferior temporal and medial orbitofrontal cortices. CONCLUSION Deep learning with mUTE can generate accurate AC for amyloid and tau imaging in PET/MR.
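The SUVR-based error reporting used above reduces to a simple ratio computation, sketched here on synthetic volumes; the region masks and the ~1% perturbation are illustrative only.

```python
import numpy as np

def suvr(pet: np.ndarray, target: np.ndarray, reference: np.ndarray) -> float:
    """Mean uptake in a target region divided by mean uptake in a reference."""
    return float(pet[target].mean() / pet[reference].mean())

rng = np.random.default_rng(7)
pet_ct_ac = rng.gamma(4.0, 1.0, size=(32, 32, 32))              # CT-based AC (reference)
pet_dl_ac = pet_ct_ac * rng.normal(1.0, 0.01, pet_ct_ac.shape)  # ~1% voxelwise perturbation

target = np.zeros(pet_ct_ac.shape, dtype=bool)
target[8:16, 8:16, 8:16] = True          # toy cortical target region
reference = np.zeros(pet_ct_ac.shape, dtype=bool)
reference[20:28, 20:28, 20:28] = True    # toy cerebellar reference region

error_pct = 100 * (suvr(pet_dl_ac, target, reference) /
                   suvr(pet_ct_ac, target, reference) - 1)
print(f"regional SUVR error: {error_pct:+.2f}%")
```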
Affiliation(s)
- Kuang Gong
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, USA
- Paul Kyu Han
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, USA
- Keith A Johnson
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, USA
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, USA
- Center for Alzheimer Research and Treatment, Department of Neurology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, 02115, USA
- Georges El Fakhri
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, USA
- Chao Ma
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, USA
- Quanzheng Li
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, USA
23
Lee JS. A Review of Deep-Learning-Based Approaches for Attenuation Correction in Positron Emission Tomography. IEEE Transactions on Radiation and Plasma Medical Sciences 2021. [DOI: 10.1109/trpms.2020.3009269]
24
Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008]
25
Torres-Velázquez M, Chen WJ, Li X, McMillan AB. Application and Construction of Deep Learning Networks in Medical Imaging. IEEE Transactions on Radiation and Plasma Medical Sciences 2021; 5:137-159. [PMID: 34017931] [PMCID: PMC8132932] [DOI: 10.1109/trpms.2020.3030611]
Abstract
Deep learning (DL) approaches are part of the machine learning (ML) subfield concerned with the development of computational models to train artificial intelligence systems. DL models are characterized by automatically extracting high-level features from the input data to learn the relationship between matching datasets. Thus, their implementation offers an advantage over common ML methods that often require the practitioner to have some domain knowledge of the input data to select the best latent representation. As a result of this advantage, DL has been successfully applied within the medical imaging field to address problems, such as disease classification and tumor segmentation, for which it is difficult or impossible to determine which image features are relevant. Therefore, taking into consideration the positive impact of DL on the medical imaging field, this article reviews the key concepts associated with its evolution and implementation. The sections of this review summarize the milestones related to the development of the DL field, followed by a description of the elements of deep neural networks and an overview of their application within the medical imaging field. Subsequently, the key steps necessary to implement a supervised DL application are defined, and associated limitations are discussed.
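To make the implementation steps the review enumerates concrete, here is a minimal supervised training loop on synthetic tensors; a real medical-imaging application would swap in an appropriate dataset, architecture, and validation protocol.

```python
import torch
import torch.nn as nn

# Tiny 2-class classifier; synthetic tensors stand in for labeled scans.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 32), nn.ReLU(),
                      nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(128, 1, 64, 64)     # toy "scans"
labels = torch.randint(0, 2, (128,))     # toy diagnoses

for epoch in range(5):
    for i in range(0, len(images), 32):  # mini-batches of 32
        x, y = images[i:i + 32], labels[i:i + 32]
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.3f}")
```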
Affiliation(s)
- Maribel Torres-Velázquez
- Department of Biomedical Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705 USA
- Wei-Jie Chen
- Department of Electrical and Computer Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705 USA
- Xue Li
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705 USA
- Alan B McMillan
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705 USA
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI 53705 USA