1. Shang C, Sakurai K, Nihashi T, Arahata Y, Takeda A, Ishii K, Ishii K, Matsuda H, Ito K, Kato T, Toyama H, Nakamura A. Comparison of consistency in centiloid scale among different analytical methods in amyloid PET: the CapAIBL, VIZCalc, and Amyquant methods. Ann Nucl Med 2024; 38:460-467. PMID: 38512444; PMCID: PMC11108942; DOI: 10.1007/s12149-024-01919-3.
Abstract
OBJECTIVE The Centiloid (CL) scale is a standardized measure for quantifying amyloid deposition in amyloid positron emission tomography (PET) imaging. We aimed to assess the agreement among three CL calculation methods: CapAIBL, VIZCalc, and Amyquant. METHODS This study included 192 participants (mean age: 71.5 years, range: 50-87 years), comprising 55 with Alzheimer's disease, 65 with mild cognitive impairment, 13 with non-Alzheimer's dementia, and 59 cognitively normal participants. All participants were assessed using the three CL calculation methods. Spearman's rank correlation, linear regression, Friedman tests, Wilcoxon signed-rank tests, and Bland-Altman analysis were employed to assess correlations, linear associations, differences between methods, and systematic bias. RESULTS Strong correlations (rho = 0.99, p < .001) were observed among the CL values calculated using the three methods. Scatter plots and regression lines visually confirmed these strong correlations and met the validation criteria. Despite the robust correlations, a significant difference in CL values between CapAIBL and Amyquant was observed (36.1 ± 39.7 vs. 34.9 ± 39.4; p < .001). In contrast, no significant differences were found between CapAIBL and VIZCalc or between VIZCalc and Amyquant. The Bland-Altman analysis showed no observable systematic bias between the methods. CONCLUSIONS The study demonstrated strong agreement among the three methods for calculating CL values. Despite minor variations in the absolute Centiloid scores obtained with these methods, the overall agreement suggests that they are interchangeable.
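As a rough illustration of the Bland-Altman agreement analysis described in this abstract, the sketch below computes the bias and 95% limits of agreement for two paired series of Centiloid values. The `bland_altman` helper and the example values are hypothetical, not study data:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman statistics for two paired measurement series.

    Returns the mean difference (bias) and the 95% limits of
    agreement, i.e. bias +/- 1.96 * SD of the pairwise differences.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired Centiloid values from two hypothetical pipelines
cl_method1 = np.array([10.2, 35.5, 60.1, 88.0, 5.3])
cl_method2 = np.array([11.0, 34.8, 61.5, 86.9, 6.0])
bias, (lo, hi) = bland_altman(cl_method1, cl_method2)
```

When the bias is near zero and the limits of agreement are narrow, the two pipelines can be treated as interchangeable, which is the criterion the abstract applies.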
Affiliation(s)
- Cong Shang
  - Department of Radiology, National Center for Geriatrics and Gerontology, 7-430 Morioka-Cho, Obu, Aichi, 474-8511, Japan
  - Department of Radiology, Fujita Health University School of Medicine, Toyoake, Japan
- Keita Sakurai
  - Department of Radiology, National Center for Geriatrics and Gerontology, 7-430 Morioka-Cho, Obu, Aichi, 474-8511, Japan
- Takashi Nihashi
  - Department of Radiology, National Center for Geriatrics and Gerontology, 7-430 Morioka-Cho, Obu, Aichi, 474-8511, Japan
- Yutaka Arahata
  - Department of Neurology, National Center for Geriatrics and Gerontology, Obu, Japan
- Akinori Takeda
  - Department of Neurology, National Center for Geriatrics and Gerontology, Obu, Japan
- Kazunari Ishii
  - Department of Radiology, Faculty of Medicine, Kindai University, Osakasayama, Japan
- Kenji Ishii
  - Team for Neuroimaging Research, Tokyo Metropolitan Institute of Gerontology, Tokyo, Japan
- Hiroshi Matsuda
  - Department of Biofunctional Imaging, Fukushima Medical University, Fukushima, Japan
  - Drug Discovery and Cyclotron Research Center, Southern Tohoku Research Institute for Neuroscience, Koriyama, Japan
- Kengo Ito
  - Department of Radiology, National Center for Geriatrics and Gerontology, 7-430 Morioka-Cho, Obu, Aichi, 474-8511, Japan
  - Department of Clinical and Experimental Neuroimaging, National Center for Geriatrics and Gerontology, Obu, Japan
- Takashi Kato
  - Department of Radiology, National Center for Geriatrics and Gerontology, 7-430 Morioka-Cho, Obu, Aichi, 474-8511, Japan
  - Department of Clinical and Experimental Neuroimaging, National Center for Geriatrics and Gerontology, Obu, Japan
- Hiroshi Toyama
  - Department of Radiology, Fujita Health University School of Medicine, Toyoake, Japan
- Akinori Nakamura
  - Department of Clinical and Experimental Neuroimaging, National Center for Geriatrics and Gerontology, Obu, Japan
  - Department of Biomarker Research, National Center for Geriatrics and Gerontology, Obu, Japan
2. Jiang J, Shi R, Lu J, Wang M, Zhang Q, Zhang S, Wang L, Alberts I, Rominger A, Zuo C, Shi K. Detection of individual brain tau deposition in Alzheimer's disease based on latent feature-enhanced generative adversarial network. Neuroimage 2024; 291:120593. PMID: 38554780; DOI: 10.1016/j.neuroimage.2024.120593.
Abstract
OBJECTIVE Conventional methods for interpreting tau PET imaging in Alzheimer's disease (AD), including visual assessment and semi-quantitative analysis of fixed hallmark regions, are insensitive to small individual lesions because of the spatiotemporal heterogeneity of the neuropathology. In this study, we proposed a latent feature-enhanced generative adversarial network model for the automatic extraction of individual brain tau deposition regions. METHODS The proposed latent feature-enhanced generative adversarial network learns the distribution characteristics of tau PET images of cognitively normal individuals and outputs the abnormal distribution regions of patients. The model was trained and validated using 1131 tau PET images from multiple centres (covering distinct races, i.e., Caucasian and Mongoloid) and different tau PET ligands. The overall quality of the synthesized images was evaluated using the structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and mean squared error (MSE). The model was compared to the fixed-template method for diagnosing and predicting AD. RESULTS The reconstructed images achieved good quality, with SSIM = 0.967 ± 0.008, PSNR = 31.377 ± 3.633, and MSE = 0.0011 ± 0.0007 in the independent test set. The model showed higher classification accuracy (AUC = 0.843, 95% CI = 0.796-0.890) and a stronger correlation with clinical scales (r = 0.508, P < 0.0001). The model also achieved superior predictive performance in the survival analysis of cognitive decline (hazard ratio = 3.662, P < 0.001). INTERPRETATION The LFGAN4Tau model presents a promising new approach for more accurate detection of individualized tau deposition. Its robustness across tracers and races makes it a potentially reliable diagnostic tool for AD in practice.
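The image-quality metrics reported in this abstract (SSIM, PSNR, MSE) can be sketched as follows. This is a simplified illustration: the single-window `global_ssim` below applies the SSIM formula to the whole image, whereas standard implementations average SSIM over local sliding windows:

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images."""
    return float(np.mean((x - y) ** 2))

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio: 10 * log10(data_range^2 / MSE)."""
    return 10.0 * np.log10(data_range ** 2 / mse(x, y))

def global_ssim(x, y, data_range=1.0):
    """SSIM formula applied once over the whole image (simplified).

    Uses the standard stabilizing constants c1 = (0.01 * L)^2 and
    c2 = (0.03 * L)^2 where L is the data range.
    """
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Higher SSIM and PSNR and lower MSE indicate closer agreement between a synthesized image and its reference, which is how the abstract's reconstruction-quality figures should be read.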
Affiliation(s)
- Jiehui Jiang
  - Institute of Biomedical Engineering, School of Life Sciences, Shanghai University, Shanghai, China
- Rong Shi
  - School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Jiaying Lu
  - Department of Nuclear Medicine & PET Center, Huashan Hospital, Fudan University, Shanghai, China
  - National Research Center for Aging and Medicine and National Center for Neurological Disorders, Huashan Hospital, Fudan University, Shanghai, China
- Min Wang
  - Institute of Biomedical Engineering, School of Life Sciences, Shanghai University, Shanghai, China
- Qi Zhang
  - School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Shuoyan Zhang
  - School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Luyao Wang
  - Institute of Biomedical Engineering, School of Life Sciences, Shanghai University, Shanghai, China
- Ian Alberts
  - Department of Nuclear Medicine, Inselspital, University of Bern, Bern, Switzerland
- Axel Rominger
  - Department of Nuclear Medicine, Inselspital, University of Bern, Bern, Switzerland
- Chuantao Zuo
  - Department of Nuclear Medicine & PET Center, Huashan Hospital, Fudan University, Shanghai, China
  - National Research Center for Aging and Medicine and National Center for Neurological Disorders, Huashan Hospital, Fudan University, Shanghai, China
  - Human Phenome Institute, Fudan University, Shanghai, China
- Kuangyu Shi
  - Department of Nuclear Medicine, Inselspital, University of Bern, Bern, Switzerland
  - Department of Informatics, Technical University of Munich, Munich, Germany
3. Bollack A, Markiewicz PJ, Wink AM, Prosser L, Lilja J, Bourgeat P, Schott JM, Coath W, Collij LE, Pemberton HG, Farrar G, Barkhof F, Cash DM. Evaluation of novel data-driven metrics of amyloid β deposition for longitudinal PET studies. Neuroimage 2023; 280:120313. PMID: 37595816; DOI: 10.1016/j.neuroimage.2023.120313.
Abstract
PURPOSE Positron emission tomography (PET) provides in vivo quantification of amyloid-β (Aβ) pathology. Established methods for assessing Aβ burden can be affected by physiological and technical factors. Novel, data-driven metrics have been developed to account for these sources of variability. We aimed to evaluate the performance of four of these amyloid PET metrics against conventional techniques, using a common set of criteria. METHODS Three cohorts were used for evaluation: Insight 46 (N=464, [18F]florbetapir), AIBL (N=277, [18F]flutemetamol), and an independent test-retest dataset (N=10, [18F]flutemetamol). Established metrics of amyloid tracer uptake included the Centiloid (CL) and, where dynamic data were available, the non-displaceable binding potential (BPND). The four data-driven metrics computed were the amyloid load (Aβ load), the Aβ-PET pathology accumulation index (Aβ index), the Centiloid derived from non-negative matrix factorisation (CLNMF), and the amyloid pattern similarity score (AMPSS). These metrics were evaluated on reliability and repeatability in test-retest data, associations with BPND and CL, variability of the rate of change, and sample size estimates to detect a 25% slowing in Aβ accumulation. RESULTS All metrics showed good reliability. Aβ load, Aβ index and CLNMF were strongly associated with BPND. The associations with CL suggest that cross-sectional measures of CLNMF, Aβ index and Aβ load are robust across studies. Sample size estimates for secondary prevention trial scenarios were lowest for CLNMF and Aβ load compared to the CL. CONCLUSION Among the novel data-driven metrics evaluated, the Aβ load, the Aβ index and the CLNMF can provide performance comparable to more established quantification methods of Aβ PET tracer uptake. The CLNMF and Aβ load could offer a more precise alternative to the CL, although further studies in larger cohorts should be conducted.
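The "sample size estimates to detect a 25% slowing in Aβ accumulation" correspond to a standard two-arm power calculation on the annual rate of change. A minimal sketch, assuming a two-sided z-test with equal arms; the function name and its defaults are illustrative, not the paper's exact procedure:

```python
from statistics import NormalDist

def n_per_arm(mean_rate, sd_rate, slowing=0.25, alpha=0.05, power=0.8):
    """Sample size per arm to detect a `slowing` fraction reduction
    in the mean annual rate of change, for a two-sided test with
    equal-sized arms: n = 2 * (sd * (z_a + z_b) / delta)^2.
    """
    z = NormalDist().inv_cdf
    delta = slowing * mean_rate            # treatment effect on the rate
    za, zb = z(1 - alpha / 2), z(power)    # critical values for alpha, power
    return 2 * (sd_rate * (za + zb) / delta) ** 2
```

Under this formula, a metric with less between-subject variability in its rate of change (smaller `sd_rate` relative to `mean_rate`) needs fewer participants, which is why the abstract ranks the metrics by this criterion.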
Affiliation(s)
- Ariane Bollack
  - Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, UCL, London, UK
- Pawel J Markiewicz
  - Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, UCL, London, UK
- Alle Meije Wink
  - Amsterdam UMC, location VUmc, Department of Radiology and Nuclear Medicine, Amsterdam, the Netherlands
- Lloyd Prosser
  - Dementia Research Centre, UCL Queen Square Institute of Neurology, London, UK
- Jonathan M Schott
  - Dementia Research Centre, UCL Queen Square Institute of Neurology, London, UK
- William Coath
  - Dementia Research Centre, UCL Queen Square Institute of Neurology, London, UK
- Lyduine E Collij
  - Amsterdam UMC, location VUmc, Department of Radiology and Nuclear Medicine, Amsterdam, the Netherlands
  - Clinical Memory Research Unit, Department of Clinical Sciences, Lund University, Malmö, Sweden
- Hugh G Pemberton
  - Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, UCL, London, UK
  - GE HealthCare, Amersham, UK
  - Queen Square Institute of Neurology, University College London, UK
- Frederik Barkhof
  - Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, UCL, London, UK
  - Amsterdam UMC, location VUmc, Department of Radiology and Nuclear Medicine, Amsterdam, the Netherlands
  - Queen Square Institute of Neurology, University College London, UK
- David M Cash
  - Queen Square Institute of Neurology, University College London, UK
  - UK Dementia Research Institute at University College London, London, UK
4. Seo SY, Oh JS, Chung J, Kim SY, Kim JS. MR Template-Based Individual Brain PET Volumes-of-Interest Generation Neither Using MR nor Using Spatial Normalization. Nucl Med Mol Imaging 2023; 57:73-85. PMID: 36998592; PMCID: PMC10043100; DOI: 10.1007/s13139-022-00772-4.
Abstract
For anatomically precise quantitation of mouse brain PET, spatial normalization (SN) of PET onto an MR template and subsequent template volumes-of-interest (VOI)-based analysis are commonly used. However, this creates a dependency on the corresponding MR and on the SN process, and routine preclinical/clinical PET studies cannot always provide a corresponding MR and relevant VOIs. To resolve this issue, we propose deep learning (DL)-based generation of individual-brain-specific VOIs (cortex, hippocampus, striatum, thalamus, and cerebellum) directly from PET images, using inverse-spatial-normalization (iSN)-based VOI labels and a deep convolutional neural network (deep CNN) model. Our technique was applied to a mutated amyloid precursor protein and presenilin-1 mouse model of Alzheimer's disease. Eighteen mice underwent T2-weighted MRI and [18F]FDG PET scans before and after the administration of human immunoglobulin or antibody-based treatments. To train the CNN, PET images were used as inputs and MR iSN-based target VOIs as labels. The devised method achieved good performance not only in VOI agreement (Dice similarity coefficient) but also in the correlation of mean counts and the standardized uptake value ratio (SUVR), and the CNN-based VOIs were highly concordant with the ground truth (VOIs based on the corresponding MR and the MR template). Moreover, the performance metrics were comparable to those of VOIs generated by an MR-based deep CNN. In conclusion, we established a novel MR-less and SN-less quantitative analysis method that generates individual-brain-space VOIs from MR template-based VOIs for PET image quantification. Supplementary Information The online version contains supplementary material available at 10.1007/s13139-022-00772-4.
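The VOI agreement reported in this abstract is measured by the Dice similarity coefficient, which for two binary masks A and B is 2|A ∩ B| / (|A| + |B|). A minimal sketch with a hypothetical helper, not the authors' code:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|), in [0, 1].
    """
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / denom)
```

A Dice of 1.0 means the predicted VOI exactly matches the reference VOI; values near 0 mean almost no overlap.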
Affiliation(s)
- Seung Yeon Seo
  - Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, South Korea
  - Department of Biomedical Engineering, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Jungsu S. Oh
  - Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, South Korea
- Jinwha Chung
  - Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, South Korea
  - Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Seog-Young Kim
  - Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, South Korea
  - Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Jae Seung Kim
  - Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, South Korea
5. Park J, Kang SK, Hwang D, Choi H, Ha S, Seo JM, Eo JS, Lee JS. Automatic Lung Cancer Segmentation in [18F]FDG PET/CT Using a Two-Stage Deep Learning Approach. Nucl Med Mol Imaging 2023; 57:86-93. PMID: 36998591; PMCID: PMC10043063; DOI: 10.1007/s13139-022-00745-7.
Abstract
Purpose Since accurate lung cancer segmentation is required to determine the functional volume of a tumor in [18F]FDG PET/CT, we propose a two-stage U-Net architecture to enhance the performance of lung cancer segmentation using [18F]FDG PET/CT. Methods The whole-body [18F]FDG PET/CT scan data of 887 patients with lung cancer were retrospectively used for network training and evaluation. The ground-truth tumor volume of interest (VOI) was drawn using the LifeX software. The dataset was randomly partitioned into training, validation, and test sets: of the 887 PET/CT and VOI datasets, 730 were used to train the proposed models, 81 were used as the validation set, and the remaining 76 were used to evaluate the model. In Stage 1, the global U-Net receives the 3D PET/CT volume as input and extracts the preliminary tumor area, generating a 3D binary volume as output. In Stage 2, the regional U-Net receives eight consecutive PET/CT slices around the slice selected by the global U-Net in Stage 1 and generates a 2D binary image as output. Results The proposed two-stage U-Net architecture outperformed the conventional one-stage 3D U-Net in primary lung cancer segmentation. The two-stage U-Net model successfully predicted the detailed margins of the tumors, which were determined by manually drawing spherical VOIs and applying an adaptive threshold. Quantitative analysis using the Dice similarity coefficient confirmed the advantages of the two-stage U-Net. Conclusion The proposed method will be useful for reducing the time and effort required for accurate lung cancer segmentation in [18F]FDG PET/CT.
Affiliation(s)
- Junyoung Park
  - Department of Electrical and Computer Engineering, Seoul National University College of Engineering, Seoul, 08826, Korea
  - Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Seung Kwan Kang
  - Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
  - Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, 03080, Korea
  - Artificial Intelligence Institute, Seoul National University, Seoul, 08826, Korea
  - Brightonix Imaging Inc., Seoul, 03080, Korea
- Donghwi Hwang
  - Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
  - Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, 03080, Korea
  - Artificial Intelligence Institute, Seoul National University, Seoul, 08826, Korea
- Hongyoon Choi
  - Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Seunggyun Ha
  - Division of Nuclear Medicine, Department of Radiology, Seoul St Mary's Hospital, The Catholic University of Korea, Seoul, 06591, Korea
- Jong Mo Seo
  - Department of Electrical and Computer Engineering, Seoul National University College of Engineering, Seoul, 08826, Korea
- Jae Seon Eo
  - Department of Nuclear Medicine, Korea University Guro Hospital, 148 Gurodong-ro, Guro-gu, Seoul, 08308, Korea
- Jae Sung Lee
  - Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
  - Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, 03080, Korea
  - Artificial Intelligence Institute, Seoul National University, Seoul, 08826, Korea
  - Brightonix Imaging Inc., Seoul, 03080, Korea
  - Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, 03080, Korea
6. A Systematic Literature Review on Applications of GAN-Synthesized Images for Brain MRI. Future Internet 2022. DOI: 10.3390/fi14120351.
Abstract
With the advances in brain imaging, magnetic resonance imaging (MRI) is evolving as a popular radiological tool in clinical diagnosis. Deep learning (DL) methods can detect abnormalities in brain images without an extensive manual feature extraction process. Generative adversarial network (GAN)-synthesized images have many applications in this field beyond augmentation, such as image translation, registration, super-resolution, denoising, motion correction, segmentation, reconstruction, and contrast enhancement. The existing literature was reviewed systematically to understand the role of GAN-synthesized images in brain disease diagnosis. The Web of Science and Scopus databases were extensively searched for relevant studies from the last six years for this systematic literature review (SLR). Predefined inclusion and exclusion criteria helped in filtering the search results, and data extraction was based on the related research questions (RQs). This SLR identifies the various loss functions used in the above applications and the software used to process brain MRIs. A comparative study of existing evaluation metrics for GAN-synthesized images helps in choosing the proper metric for an application. GAN-synthesized images will have a crucial role in the clinical sector in the coming years, and this paper provides a baseline for other researchers in the field.
7. Rana A, Dumka A, Singh R, Panda MK, Priyadarshi N. A Computerized Analysis with Machine Learning Techniques for the Diagnosis of Parkinson's Disease: Past Studies and Future Perspectives. Diagnostics (Basel) 2022; 12:2708. PMID: 36359550; PMCID: PMC9689408; DOI: 10.3390/diagnostics12112708.
Abstract
According to the World Health Organization (WHO), Parkinson's disease (PD) is a neurodegenerative disease of the brain that causes motor symptoms, including slowed movement, rigidity, tremor, and imbalance, in addition to other problems such as cognitive impairment resembling Alzheimer's disease (AD), psychiatric problems, insomnia, anxiety, and sensory abnormalities. To address these problems and improve the diagnostic procedure for PD, techniques including artificial intelligence (AI), machine learning (ML), and deep learning (DL) have been established for distinguishing PD from normal controls (NC) with similar clinical appearances. In this article, we examine a literature survey of research articles published up to September 2022 in order to present an in-depth analysis of the datasets, modalities, experimental setups, and architectures that have been applied to the diagnosis of the disease. This analysis covers a total of 217 research publications, with a list of the various datasets, methodologies, and features. These findings suggest that ML/DL methods and novel biomarkers hold promising results for application in medical decision-making, leading to a more methodical and thorough detection of PD. Finally, we highlight the challenges and provide appropriate recommendations on selecting approaches that might be used for subgrouping and connection analysis with structural magnetic resonance imaging (sMRI), DaTSCAN, and single-photon emission computerized tomography (SPECT) data for future Parkinson's research.
Affiliation(s)
- Arti Rana
  - Computer Science & Engineering, Veer Madho Singh Bhandari Uttarakhand Technical University, Dehradun 248007, Uttarakhand, India
- Ankur Dumka
  - Department of Computer Science and Engineering, Women Institute of Technology, Dehradun 248007, Uttarakhand, India
  - Department of Computer Science & Engineering, Graphic Era Deemed to be University, Dehradun 248001, Uttarakhand, India
- Rajesh Singh
  - Division of Research and Innovation, Uttaranchal Institute of Technology, Uttaranchal University, Dehradun 248007, Uttarakhand, India
  - Department of Project Management, Universidad Internacional Iberoamericana, Campeche 24560, Mexico
- Manoj Kumar Panda
  - Department of Electrical Engineering, G.B. Pant Institute of Engineering and Technology, Pauri 246194, Uttarakhand, India
- Neeraj Priyadarshi
  - Department of Electrical Engineering, JIS College of Engineering, Kolkata 741235, West Bengal, India
8. Khojaste-Sarakhsi M, Haghighi SS, Ghomi SF, Marchiori E. Deep learning for Alzheimer's disease diagnosis: A survey. Artif Intell Med 2022; 130:102332. DOI: 10.1016/j.artmed.2022.102332.
9. Validation of deep learning-based nonspecific estimates for amyloid burden quantification with longitudinal data. Phys Med 2022; 99:85-93. PMID: 35665624; DOI: 10.1016/j.ejmp.2022.05.016.
Abstract
PURPOSE To validate our previously proposed method of quantifying amyloid-beta (Aβ) load using nonspecific (NS) estimates generated with convolutional neural networks (CNNs), using [18F]florbetapir scans from longitudinal and multicenter ADNI data. METHODS 188 paired MR (T1-weighted and T2-weighted) and PET images were downloaded from the ADNI3 dataset; 49 subjects had scans at two time points. 40 Aβ- subjects with low specific uptake were selected for training. A multimodal ScaleNet (SN) and a monomodal HighRes3DNet (HRN, using either T1-weighted or T2-weighted MR images as input) were trained to map structural MR to NS-PET images. The optimized SN and HRN networks were used to estimate the NS component for all scans, which was then subtracted from the SUVr images to obtain specific amyloid load (SAβL) images. The association of SAβL with various cognitive and functional test scores was evaluated using Spearman analysis; changes in SAβL and cognitive test scores were also examined for the 49 subjects with two time-point scans, together with a sensitivity analysis. RESULTS SAβL derived from both SN and HRN showed higher associations with memory-related cognitive test scores than SUVr. However, for the longitudinal scans, only SAβL estimated from the multimodal SN consistently performed better than SUVr for all memory-related cognitive test scores. CONCLUSIONS Our proposed method of quantifying Aβ load using NS estimates from CNNs correlated better than SUVr with cognitive decline for both static and longitudinal data and was able to estimate the NS component of [18F]florbetapir. We suggest employing multimodal networks with both T1-weighted and T2-weighted MR images for better NS estimation.
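The core quantification steps this abstract describes (forming an SUVr and subtracting the CNN-estimated nonspecific component) can be sketched as follows; the helper names and the choice of reference region are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def suvr(pet, target_mask, ref_mask):
    """Standardized uptake value ratio: mean uptake in the target
    region divided by mean uptake in a reference region
    (e.g. cerebellum); masks are boolean arrays over the PET volume.
    """
    return float(pet[target_mask].mean() / pet[ref_mask].mean())

def specific_load(suvr_img, ns_img):
    """Specific amyloid load image: voxelwise SUVr minus the
    network-predicted nonspecific (NS) component.
    """
    return suvr_img - ns_img
```

The point of the subtraction is that whatever uptake the CNN attributes to nonspecific binding is removed voxel by voxel, leaving an image intended to reflect specific Aβ signal only.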
10. Applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging: A review. Eur J Nucl Med Mol Imaging 2022; 49:3717-3739. PMID: 35451611; DOI: 10.1007/s00259-022-05805-w.
Abstract
PURPOSE This paper reviews recent applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging. Recent advances in Deep Learning (DL) and GANs have catalysed research into their applications in medical imaging modalities. As a result, several unique GAN topologies have emerged and been assessed in experimental environments over the last two years. METHODS The present work extensively describes GAN architectures and their applications in PET imaging. The identification of relevant publications was performed via approved publication indexing websites and repositories. Web of Science, Scopus, and Google Scholar were the major sources of information. RESULTS The search identified one hundred articles that address PET imaging applications such as attenuation correction, denoising, scatter correction, removal of artefacts, image fusion, high-dose image estimation, super-resolution, segmentation, and cross-modality synthesis. These applications are presented and accompanied by the corresponding research works. CONCLUSION GANs are rapidly being employed in PET imaging tasks. However, specific limitations must be eliminated for them to reach their full potential and gain the medical community's trust in everyday clinical practice.
11. Unified spatial normalization method of brain PET images using adaptive probabilistic brain atlas. Eur J Nucl Med Mol Imaging 2022; 49:3073-3085. PMID: 35258689; DOI: 10.1007/s00259-022-05752-6.
Abstract
PURPOSE A unique advantage of brain positron emission tomography (PET) imaging is the ability to image different biological processes with different radiotracers. However, the diversity of brain PET image patterns also makes their spatial normalization challenging. Since structural MR images are not always available in clinical practice, this study proposed a PET-only spatial normalization method based on an adaptive probabilistic brain atlas. METHODS The proposed method (atlas-based method) consists of two parts: an adaptive probabilistic brain atlas generation algorithm, and a probabilistic framework for registering a PET image to the generated atlas. To validate the method, the results of an MRI-based method and a template-based method (a widely used PET-only method) were treated as the gold standard and the control, respectively. A total of 286 brain PET images, covering seven radiotracers (FDG, PIB, FBB, AV-45, AV-1451, AV-133, and [18F]altanserin) and four groups of subjects (Alzheimer's disease, Parkinson's disease, frontotemporal dementia, and healthy controls), were spatially normalized using the three methods. The results were then quantitatively compared using correlation analysis, meta region of interest (meta-ROI) standardized uptake value ratio (SUVR) analysis, and statistical parametric mapping (SPM) analysis. RESULTS The Pearson correlation coefficient between the images computed by the atlas-based method and the gold standard was 0.908 ± 0.005. The relative error of the meta-ROI SUVR computed by the atlas-based method was 2.12 ± 0.18%. Compared with the template-based method, the atlas-based method was also more consistent with the gold standard in the SPM analysis. CONCLUSION The proposed method provides a unified approach to accurately spatially normalize brain PET images of different radiotracers without MR images. A free MATLAB toolbox for the method has been provided.
12
|
Seo SY, Kim SJ, Oh JS, Chung J, Kim SY, Oh SJ, Joo S, Kim JS. Unified Deep Learning-Based Mouse Brain MR Segmentation: Template-Based Individual Brain Positron Emission Tomography Volumes-of-Interest Generation Without Spatial Normalization in Mouse Alzheimer Model. Front Aging Neurosci 2022; 14:807903. [PMID: 35309883 PMCID: PMC8931825 DOI: 10.3389/fnagi.2022.807903] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Accepted: 01/17/2022] [Indexed: 02/03/2023] Open
Abstract
Although skull-stripping and brain region segmentation are essential for precise quantitative analysis of positron emission tomography (PET) of mouse brains, deep learning (DL)-based unified solutions, particularly for spatial normalization (SN), have posed a challenging problem in DL-based image processing. In this study, we propose a DL-based approach to resolve these issues. We generated both skull-stripping masks and individual brain-specific volumes-of-interest (VOIs: cortex, hippocampus, striatum, thalamus, and cerebellum) based on inverse spatial normalization (iSN) and deep convolutional neural network (deep CNN) models. We applied the proposed methods to a mutated amyloid precursor protein and presenilin-1 mouse model of Alzheimer's disease. Eighteen mice underwent T2-weighted MRI and 18F-FDG PET scans twice, before and after the administration of human immunoglobulin or antibody-based treatments. For training the CNN, manually traced brain masks and iSN-based target VOIs were used as labels. We compared our CNN-based VOIs with conventional (template-based) VOIs in terms of the correlation of standardized uptake value ratios (SUVRs) obtained by both methods and two-sample t-tests of SUVR % changes in target VOIs before and after treatment. Our deep CNN-based method successfully generated brain parenchyma masks and target VOIs that showed no significant difference from the conventional VOI method in SUVR correlation analysis, thus establishing a template-based VOI method that requires no SN.
Affiliation(s)
- Seung Yeon Seo
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Department of Biomedical Engineering, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Soo-Jong Kim
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Department of Biomedical Engineering, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Department of Health Sciences and Technology, Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Songpa-gu, South Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Suwon-si, South Korea
- Jungsu S. Oh
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Correspondence: Jungsu S. Oh
- Jinwha Chung
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Seog-Young Kim
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Seung Jun Oh
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Segyeong Joo
- Department of Biomedical Engineering, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Jae Seung Kim
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
|
13
|
Minoshima S, Cross D. Application of artificial intelligence in brain molecular imaging. Ann Nucl Med 2022; 36:103-110. [PMID: 35028878 DOI: 10.1007/s12149-021-01697-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2021] [Accepted: 11/15/2021] [Indexed: 12/22/2022]
Abstract
Initial development of artificial intelligence (AI) and machine learning (ML) dates back to the mid-twentieth century. A growing awareness of the potential of AI, together with increases in computational resources, research, and investment, is rapidly advancing AI applications in medical imaging and, specifically, brain molecular imaging. AI/ML can improve imaging operations and decision making, and can potentially perform tasks that are not readily possible for physicians, such as predicting disease prognosis and identifying latent relationships in multi-modal clinical information. The number of applications of image-based AI algorithms, such as convolutional neural networks (CNNs), is increasing rapidly. Applications in brain molecular imaging (MI) include image denoising, PET and PET/MRI attenuation correction, image segmentation and lesion detection, parametric image formation, and the detection/diagnosis of Alzheimer's disease and other brain disorders. When effectively used, AI will likely improve the quality of patient care rather than replace radiologists. A regulatory framework is being developed to facilitate the adoption of AI in medical imaging.
Affiliation(s)
- Satoshi Minoshima
- Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A071, Salt Lake City, UT, 84132, USA.
- Donna Cross
- Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A071, Salt Lake City, UT, 84132, USA
|
14
|
Matsubara K, Ibaraki M, Nemoto M, Watabe H, Kimura Y. A review on AI in PET imaging. Ann Nucl Med 2022; 36:133-143. [PMID: 35029818 DOI: 10.1007/s12149-021-01710-8] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Accepted: 12/09/2021] [Indexed: 12/16/2022]
Abstract
Artificial intelligence (AI) has been applied to various medical imaging tasks, such as computer-aided diagnosis. In particular, deep learning techniques such as convolutional neural networks (CNNs) and generative adversarial networks (GANs) have been extensively used for medical image generation, and image generation with deep learning has been investigated in studies using positron emission tomography (PET). This article reviews studies that applied deep learning techniques to PET image generation. We categorize these studies into three themes: (1) recovering full PET data from noisy data by denoising with deep learning, (2) PET image reconstruction and attenuation correction with deep learning, and (3) PET image translation and synthesis with deep learning. We introduce recent studies in each of these three categories. Finally, we discuss the limitations of applying deep learning techniques to PET image generation and future prospects for the field.
Affiliation(s)
- Keisuke Matsubara
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Masanobu Ibaraki
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Mitsutaka Nemoto
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
- Hiroshi Watabe
- Cyclotron and Radioisotope Center (CYRIC), Tohoku University, Miyagi, Japan
- Yuichi Kimura
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
|
15
|
Iaccarino L, La Joie R, Koeppe R, Siegel BA, Hillner BE, Gatsonis C, Whitmer RA, Carrillo MC, Apgar C, Camacho MR, Nosheny R, Rabinovici GD. rPOP: Robust PET-only processing of community acquired heterogeneous amyloid-PET data. Neuroimage 2021; 246:118775. [PMID: 34890793 DOI: 10.1016/j.neuroimage.2021.118775] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Revised: 11/12/2021] [Accepted: 11/30/2021] [Indexed: 11/17/2022] Open
Abstract
The reference standard for amyloid-PET quantification requires structural MRI (sMRI) for preprocessing in both multi-site research studies and clinical trials. Here we describe rPOP (robust PET-Only Processing), a MATLAB-based, MRI-free pipeline implementing non-linear warping and differential smoothing of amyloid-PET scans performed with any of the FDA-approved radiotracers (18F-florbetapir/FBP, 18F-florbetaben/FBB, or 18F-flutemetamol/FLUTE). Each image undergoes spatial normalization based on weighted PET templates and data-driven differential smoothing, after which users can perform their quantification of choice. Prior to normalization, users can choose whether to automatically reset the origin of the image to the center of mass or to proceed with the image as it is. We validated rPOP with n = 740 (514 FBP, 182 FBB, 44 FLUTE) amyloid-PET scans from the Imaging Dementia-Evidence for Amyloid Scanning - Brain Health Registry sub-study (IDEAS-BHR) and n = 1,518 scans from the Alzheimer's Disease Neuroimaging Initiative (n = 1,249 FBP, n = 269 FBB), including heterogeneous acquisition and reconstruction protocols. After running rPOP, a standard quantification was performed to extract Standardized Uptake Value Ratios and the corresponding Centiloid conversion. rPOP-based amyloid status (using an independent pathology-based threshold of ≥24.4 Centiloid units) was compared with either local visual reads (IDEAS-BHR, n = 663 with complete valid data and reads available) or amyloid status derived from an MRI-based PET processing pipeline (ADNI, thresholds of >20/>18 Centiloids for FBP/FBB). Finally, within the ADNI dataset, we tested the linear association between rPOP- and MRI-based Centiloid values. rPOP achieved accurate warping for N = 2,233/2,258 (98.9%) scans on the first pass. Of the N = 25 warping failures, 24 were rescued with manual reorientation and origin reset prior to warping.
We observed high concordance between rPOP-based amyloid status and both visual reads (IDEAS-BHR, Cohen's k = 0.72 [0.7-0.74], ~86% concordance) and MRI-pipeline-based amyloid status (ADNI, k = 0.88 [0.87-0.89], ~94% concordance). rPOP- and MRI-pipeline-based Centiloids were strongly linearly related (R2 = 0.95, p < 0.001), with this association significantly modulated by estimated PET resolution (β = -0.016, p < 0.001). rPOP provides reliable MRI-free amyloid-PET warping and quantification, leveraging widely available software and requiring only an attenuation-corrected amyloid-PET image as input. The rPOP pipeline enables the comparison and merging of heterogeneous datasets and is publicly available at https://github.com/leoiacca/rPOP.
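The amyloid-positivity call described in this entry is a linear SUVR-to-Centiloid conversion followed by a threshold; a minimal sketch, where the `slope` and `intercept` values are hypothetical (real calibration constants are tracer- and pipeline-specific):

```python
def suvr_to_centiloid(suvr, slope, intercept):
    """Linear Centiloid calibration: CL = slope * SUVR + intercept.
    The slope/intercept must come from a tracer-specific calibration;
    the values used below are for illustration only."""
    return slope * suvr + intercept

def amyloid_status(cl, threshold=24.4):
    """Binary amyloid call against a Centiloid threshold
    (24.4 CL is the pathology-based cutoff cited in the abstract)."""
    return cl >= threshold

# Hypothetical calibration applied to a hypothetical SUVR of 1.20
cl = suvr_to_centiloid(1.20, slope=183.0, intercept=-177.0)
positive = amyloid_status(cl)
```

The cohorts above use different cutoffs (≥24.4 CL for IDEAS-BHR; >20/>18 CL for FBP/FBB in ADNI), which in this sketch is simply a different `threshold` argument.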
Affiliation(s)
- Leonardo Iaccarino
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California San Francisco, San Francisco, CA, United States
- Renaud La Joie
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California San Francisco, San Francisco, CA, United States
- Robert Koeppe
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
- Barry A Siegel
- Edward Mallinckrodt Institute of Radiology, Washington University School of Medicine in St Louis, St Louis, MO, United States
- Bruce E Hillner
- Department of Medicine, Virginia Commonwealth University, Richmond, VA, United States
- Constantine Gatsonis
- Center for Statistical Sciences, Brown University School of Public Health, Providence, RI, United States; Department of Biostatistics, Brown University School of Public Health, Providence, RI, United States
- Rachel A Whitmer
- Division of Research, Kaiser Permanente, Oakland, CA, United States; Department of Public Health Sciences, University of California Davis, Davis, CA, United States
- Maria C Carrillo
- Medical and Scientific Relations Division, Alzheimer's Association, Chicago, IL, United States
- Charles Apgar
- American College of Radiology, Reston, VA, United States
- Monica R Camacho
- San Francisco VA Medical Center, San Francisco, CA, United States; Northern California Institute for Research and Education (NCIRE), San Francisco, CA, United States
- Rachel Nosheny
- San Francisco VA Medical Center, San Francisco, CA, United States; Department of Psychiatry, University of California San Francisco, San Francisco, CA, United States
- Gil D Rabinovici
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California San Francisco, San Francisco, CA, United States; Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, United States
|
16
|
Lee JS, Kim KM, Choi Y, Kim HJ. A Brief History of Nuclear Medicine Physics, Instrumentation, and Data Sciences in Korea. Nucl Med Mol Imaging 2021; 55:265-284. [PMID: 34868376 DOI: 10.1007/s13139-021-00721-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Revised: 10/14/2021] [Accepted: 10/18/2021] [Indexed: 10/19/2022] Open
Abstract
We review the history of nuclear medicine physics, instrumentation, and data sciences in Korea to commemorate the 60th anniversary of the Korean Society of Nuclear Medicine. In the 1970s and 1980s, the development of SPECT, nuclear stethoscope, and bone densitometry systems, as well as kidney and cardiac image analysis technology, marked the beginning of nuclear medicine physics and engineering in Korea. With the introduction of PET and cyclotron in Korea in 1994, nuclear medicine imaging research was further activated. With the support of large-scale government projects, the development of gamma camera, SPECT, and PET systems was carried out. Exploiting the use of PET scanners in conjunction with cyclotrons, extensive studies on myocardial blood flow quantification and brain image analysis were also actively pursued. In 2005, Korea's first domestic cyclotron succeeded in producing radioactive isotopes, and the cyclotron was provided to six universities and university hospitals, thereby facilitating the nationwide supply of PET radiopharmaceuticals. Since the late 2000s, research on PET/MRI has been actively conducted, and the advanced research results of Korean scientists in the fields of silicon photomultiplier PET and simultaneous PET/MRI have attracted significant attention from the academic community. Currently, Korean researchers are actively involved in endeavors to solve a variety of complex problems in nuclear medicine using artificial intelligence and deep learning technologies.
Affiliation(s)
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 Korea
- Kyeong Min Kim
- Department of Isotopic Drug Development, Korea Radioisotope Center for Pharmaceuticals, Korea Institute of Radiological and Medical Sciences, Seoul, Korea
- Yong Choi
- Department of Electronic Engineering, Sogang University, Seoul, Korea
- Hee-Joung Kim
- Department of Radiological Science, Yonsei University, Wonju, Korea
|
17
|
Qu C, Zou Y, Dai Q, Ma Y, He J, Liu Q, Kuang W, Jia Z, Chen T, Gong Q. Advancing diagnostic performance and clinical applicability of deep learning-driven generative adversarial networks for Alzheimer's disease. PSYCHORADIOLOGY 2021; 1:225-248. [PMID: 38666217 PMCID: PMC10917234 DOI: 10.1093/psyrad/kkab017] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 11/18/2021] [Accepted: 11/25/2021] [Indexed: 02/05/2023]
Abstract
Alzheimer's disease (AD) is a neurodegenerative disease that severely affects the activities of daily living in aged individuals and typically needs to be diagnosed at an early stage. Generative adversarial networks (GANs) are a recent deep learning method that shows good performance in image processing, but it remains to be verified whether GANs bring benefits to AD diagnosis. The purpose of this research was to systematically review psychoradiological studies on the application of GANs to the diagnosis of AD, covering both classification of AD state and AD-related image processing, in comparison with other methods. In addition, we evaluated the research methodology of these studies and provided suggestions from the perspective of clinical application. Compared with other methods, GANs achieved higher accuracy in the classification of AD state and better performance in AD-related image processing (e.g., image denoising and segmentation). Most studies used data from public databases but lacked clinical validation, and the processes of quantitative assessment and comparison in these studies lacked clinicians' participation, which may limit improvements in the generation quality and generalization ability of GAN models. The application value of GANs in the classification of AD state and in AD-related image processing was confirmed in the reviewed studies, and methods for improving GAN architectures were also discussed in this paper. In sum, the present study demonstrated the advancing diagnostic performance and clinical applicability of GANs for AD, and suggested that future researchers consider recruiting clinicians to compare algorithms with clinician manual methods and to evaluate the clinical effect of the algorithms.
Affiliation(s)
- Changxing Qu
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu 610044, China
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China School of Stomatology, Sichuan University, Chengdu 610044, China
- Yinxi Zou
- West China School of Medicine, Sichuan University, Chengdu 610044, China
- Qingyi Dai
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China School of Stomatology, Sichuan University, Chengdu 610044, China
- Yingqiao Ma
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu 610044, China
- Jinbo He
- School of Psychology, Central China Normal University, Wuhan 430079, China
- Qihong Liu
- College of Biomedical Engineering, Sichuan University, Chengdu 610065, China
- Weihong Kuang
- Department of Psychiatry, West China Hospital of Sichuan University, Chengdu 610065, China
- Zhiyun Jia
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu 610044, China
- Taolin Chen
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu 610044, China
- Research Unit of Psychoradiology, Chinese Academy of Medical Sciences, Chengdu 610041, Sichuan, P.R. China
- Functional and Molecular Imaging Key Laboratory of Sichuan Province, Department of Radiology, West China Hospital of Sichuan University, Chengdu 610041, Sichuan, P.R. China
- Qiyong Gong
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu 610044, China
- Research Unit of Psychoradiology, Chinese Academy of Medical Sciences, Chengdu 610041, Sichuan, P.R. China
- Functional and Molecular Imaging Key Laboratory of Sichuan Province, Department of Radiology, West China Hospital of Sichuan University, Chengdu 610041, Sichuan, P.R. China
|
18
|
Pegueroles J, Montal V, Bejanin A, Vilaplana E, Aranha M, Santos‐Santos MA, Alcolea D, Carrió I, Camacho V, Blesa R, Lleó A, Fortea J. AMYQ: An index to standardize quantitative amyloid load across PET tracers. Alzheimers Dement 2021; 17:1499-1508. [PMID: 33797846 PMCID: PMC8519100 DOI: 10.1002/alz.12317] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2020] [Revised: 01/21/2021] [Accepted: 01/31/2021] [Indexed: 12/17/2022]
Abstract
INTRODUCTION Positron emission tomography (PET) amyloid quantification methods require magnetic resonance imaging (MRI) for spatial registration and an a priori reference region to scale the images. Furthermore, different tracers have distinct thresholds for positivity. We propose the AMYQ index, a new measure of amyloid burden, to overcome these limitations. METHODS We selected 18F-amyloid scans from ADNI and the Australian Imaging, Biomarker & Lifestyle Flagship Study of Ageing (AIBL) with the corresponding T1 MRI. A subset also had neuropathological data. PET images were normalized, and AMYQ was calculated based on an adaptive template. We compared AMYQ with the Centiloid scale on clinical and neuropathological diagnostic performance. RESULTS AMYQ was related to neuropathological amyloid burden and had excellent diagnostic performance in discriminating controls from patients with Alzheimer's disease (AD) (area under the curve [AUC] = 0.86). AMYQ had high agreement with the Centiloid scale (intraclass correlation coefficient [ICC] = 0.88) and AUCs between 0.94 and 0.99 for discriminating PET positivity when using different Centiloid cutoffs. DISCUSSION AMYQ is a new MRI-independent index for standardizing and quantifying amyloid load across tracers.
Affiliation(s)
- Jordi Pegueroles
- Sant Pau Memory Unit, Department of Neurology, Hospital de la Santa Creu i Sant Pau, Biomedical Research Institute Sant Pau, Universitat Autònoma de Barcelona, Barcelona, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), Madrid, Spain
- Victor Montal
- Sant Pau Memory Unit, Department of Neurology, Hospital de la Santa Creu i Sant Pau, Biomedical Research Institute Sant Pau, Universitat Autònoma de Barcelona, Barcelona, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), Madrid, Spain
- Alexandre Bejanin
- Sant Pau Memory Unit, Department of Neurology, Hospital de la Santa Creu i Sant Pau, Biomedical Research Institute Sant Pau, Universitat Autònoma de Barcelona, Barcelona, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), Madrid, Spain
- Eduard Vilaplana
- Sant Pau Memory Unit, Department of Neurology, Hospital de la Santa Creu i Sant Pau, Biomedical Research Institute Sant Pau, Universitat Autònoma de Barcelona, Barcelona, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), Madrid, Spain
- Mateus Aranha
- Sant Pau Memory Unit, Department of Neurology, Hospital de la Santa Creu i Sant Pau, Biomedical Research Institute Sant Pau, Universitat Autònoma de Barcelona, Barcelona, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), Madrid, Spain
- Miguel Angel Santos-Santos
- Sant Pau Memory Unit, Department of Neurology, Hospital de la Santa Creu i Sant Pau, Biomedical Research Institute Sant Pau, Universitat Autònoma de Barcelona, Barcelona, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), Madrid, Spain
- Daniel Alcolea
- Sant Pau Memory Unit, Department of Neurology, Hospital de la Santa Creu i Sant Pau, Biomedical Research Institute Sant Pau, Universitat Autònoma de Barcelona, Barcelona, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), Madrid, Spain
- Ignasi Carrió
- Department of Nuclear Medicine, Hospital de la Santa Creu i Sant Pau, Universitat Autònoma de Barcelona, Barcelona, Spain
- Valle Camacho
- Department of Nuclear Medicine, Hospital de la Santa Creu i Sant Pau, Universitat Autònoma de Barcelona, Barcelona, Spain
- Rafael Blesa
- Sant Pau Memory Unit, Department of Neurology, Hospital de la Santa Creu i Sant Pau, Biomedical Research Institute Sant Pau, Universitat Autònoma de Barcelona, Barcelona, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), Madrid, Spain
- Alberto Lleó
- Sant Pau Memory Unit, Department of Neurology, Hospital de la Santa Creu i Sant Pau, Biomedical Research Institute Sant Pau, Universitat Autònoma de Barcelona, Barcelona, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), Madrid, Spain
- Juan Fortea
- Sant Pau Memory Unit, Department of Neurology, Hospital de la Santa Creu i Sant Pau, Biomedical Research Institute Sant Pau, Universitat Autònoma de Barcelona, Barcelona, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), Madrid, Spain
|
19
|
Accurate Transmission-Less Attenuation Correction Method for Amyloid-β Brain PET Using Deep Neural Network. ELECTRONICS 2021. [DOI: 10.3390/electronics10151836] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
The lack of physically measured attenuation maps (μ-maps) for attenuation and scatter correction is an important technical challenge in brain-dedicated stand-alone positron emission tomography (PET) scanners. The accuracy of calculated attenuation correction is limited by the nonuniformity of tissue composition due to pathologic conditions and the complex structure of facial bones. The aim of this study was to develop an accurate transmission-less attenuation correction method for amyloid-β (Aβ) brain PET studies. We investigated the validity of a deep convolutional neural network trained to produce a CT-derived μ-map (μ-CT) from activity and attenuation maps simultaneously reconstructed using the MLAA (maximum likelihood reconstruction of activity and attenuation) algorithm for Aβ brain PET. The performance of three different U-net structures (2D, 2.5D, and 3D) was compared. The U-net models generated less noisy and more uniform μ-maps than MLAA μ-maps. Among the three models, the patch-based 3D U-net reduced noise and cross-talk artifacts most effectively. The Dice similarity coefficients between the μ-map generated by the 3D U-net and μ-CT were 0.83 in bone segments and 0.67 in air segments. All three U-net models showed better voxel-wise correlation of the μ-maps than MLAA, with the patch-based 3D U-net performing best. Whereas the uptake values from MLAA yielded high percentage errors of 20% or more, those from the 3D U-net stayed within 5%. The proposed deep learning approach, which requires no transmission data, anatomic image, or atlas/template for PET attenuation correction, remarkably enhanced the quantitative accuracy of the simultaneously estimated MLAA μ-maps from Aβ brain PET.
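The Dice similarity coefficient used above to compare generated μ-maps with μ-CT has the simple set-overlap form 2|A∩B| / (|A| + |B|); a minimal sketch on toy binary masks (not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy masks: 2 overlapping voxels, each mask has 3 voxels set
a = np.array([1, 1, 1, 0, 0, 0])
b = np.array([1, 1, 0, 0, 0, 1])
d = dice(a, b)  # 2*2 / (3+3) = 2/3
```

Applied to the bone and air segments of the 3D U-net μ-map against μ-CT, this same quantity gives the 0.83 and 0.67 values the abstract reports.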
|
20
|
Kang SK, An HJ, Jin H, Kim JI, Chie EK, Park JM, Lee JS. Synthetic CT generation from weakly paired MR images using cycle-consistent GAN for MR-guided radiotherapy. Biomed Eng Lett 2021; 11:263-271. [PMID: 34350052 DOI: 10.1007/s13534-021-00195-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Revised: 06/01/2021] [Accepted: 06/11/2021] [Indexed: 12/22/2022] Open
Abstract
Although MR-guided radiotherapy (MRgRT) is advancing rapidly, generating accurate synthetic CT (sCT) from MRI remains challenging. Previous approaches using deep neural networks require large datasets of precisely co-registered CT and MRI pairs, which are difficult to obtain due to respiration and peristalsis. Here, we propose a method to generate sCT by training on weakly paired CT and MR images of the abdomen and thorax acquired from an MRgRT system, using a cycle-consistent GAN (CycleGAN) framework that allows unpaired image-to-image translation. Data from 90 cancer patients who underwent MRgRT were retrospectively used. CT images of the patients were aligned to the corresponding MR images using deformable registration, and the deformed CT (dCT) and MRI pairs were used for network training and testing. A 2.5D CycleGAN was constructed to generate sCT from the MRI input. To improve sCT generation performance, a perceptual loss that explores the discrepancy between high-dimensional representations of images extracted from a well-trained classifier was incorporated into the CycleGAN. The CycleGAN with perceptual loss outperformed the U-net in terms of errors and similarities between sCT and dCT, and in dose estimation for treatment planning of the thorax and abdomen. The sCT generated using the CycleGAN produced virtually identical dose distribution maps and dose-volume histograms compared to dCT. The CycleGAN with perceptual loss outperformed the U-net in sCT generation when trained with weakly paired dCT-MRI for MRgRT. The proposed method will be useful for increasing the treatment accuracy of MR-only or MR-guided adaptive radiotherapy. Supplementary Information The online version contains supplementary material available at 10.1007/s13534-021-00195-8.
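The perceptual loss described above compares two images through the feature space of a fixed, pretrained network rather than pixel by pixel. A hedged structural sketch, where `feature_extractor` is a crude stand-in (a real implementation would use intermediate activations of a trained classifier):

```python
import numpy as np

def feature_extractor(img):
    """Stand-in for a pretrained classifier's intermediate features:
    crude multi-scale averages of a 1-D signal. Illustration only."""
    half = len(img) // 2
    return np.array([img.mean(), img[:half].mean(), img[half:].mean()])

def perceptual_loss(x, y):
    """Discrepancy between high-dimensional representations of two
    images, the quantity added to the CycleGAN objective here."""
    fx, fy = feature_extractor(x), feature_extractor(y)
    return float(np.mean((fx - fy) ** 2))

x = np.array([1.0, 2.0, 3.0, 4.0])
loss_same = perceptual_loss(x, x)        # identical inputs: zero loss
loss_diff = perceptual_loss(x, x + 1.0)  # shifted input: positive loss
```

The design intent is that matching in feature space penalizes structural discrepancies that a plain pixel-wise loss under-weights; the stand-in extractor above only illustrates the shape of the computation.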
Affiliation(s)
- Seung Kwan Kang
- Department of Biomedical Sciences and Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 South Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, 03080 South Korea
- Hyun Joon An
- Department of Radiation Oncology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080 South Korea
- Hyeongmin Jin
- Department of Radiation Oncology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080 South Korea
- Jung-In Kim
- Department of Radiation Oncology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080 South Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, 03080 South Korea
- Eui Kyu Chie
- Department of Radiation Oncology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080 South Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, 03080 South Korea
- Jong Min Park
- Department of Radiation Oncology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080 South Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, 03080 South Korea
- Jae Sung Lee
- Department of Biomedical Sciences and Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 South Korea; Department of Nuclear Medicine, Seoul National University Hospital, Seoul, 03080 South Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, 03080 South Korea
Collapse
|
21
|
Guo XY, Chang Y, Kim Y, Rhee HY, Cho AR, Park S, Ryu CW, San Lee J, Lee KM, Shin W, Park KC, Kim EJ, Jahng GH. Development and evaluation of a T1 standard brain template for Alzheimer disease. Quant Imaging Med Surg 2021; 11:2224-2244. [PMID: 34079697 DOI: 10.21037/qims-20-710] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Background Patients with Alzheimer disease (AD) and mild cognitive impairment (MCI) have high variability in brain tissue loss, making it difficult to use a disease-specific standard brain template. The objective of this study was to develop an AD-specific three-dimensional (3D) T1 brain tissue template and to evaluate the characteristics of the populations used to form the template. Methods We obtained 3D T1-weighted images from 294 individuals, including 101 AD, 96 amnestic MCI, and 97 cognitively normal (CN) elderly individuals, and segmented them into different brain tissues to generate AD-specific brain tissue templates. Demographic data and clinical outcome scores were compared between the three groups. Voxel-based analyses and regions-of-interest-based analyses were performed to compare gray matter volume (GMV) and white matter volume (WMV) between the three participant groups and to evaluate the relationship of GMV and WMV loss with age, years of education, and Mini-Mental State Examination (MMSE) scores. Results We created high-resolution AD-specific tissue probability maps (TPMs). In the AD and MCI groups, losses of both GMV and WMV were found with respect to the CN group in the hippocampus (F >44.60, P<0.001). GMV was lower with increasing age in all individuals in the left (r=-0.621, P<0.001) and right (r=-0.632, P<0.001) hippocampi. In the left hippocampus, GMV was positively correlated with years of education in the CN groups (r=0.345, P<0.001) but not in the MCI (r=0.223, P=0.0293) or AD (r=-0.021, P=0.835) groups. WMV of the corpus callosum was not significantly correlated with years of education in any of the three subject groups (r=0.035 and P=0.549 for left, r=0.013 and P=0.821 for right). In all individuals, GMV of the hippocampus was significantly correlated with MMSE scores (left, r=0.710 and P<0.001; right, r=0.680 and P<0.001), while WMV of the corpus callosum showed a weak correlation (left, r=0.142 and P=0.015; right, r=0.123 and P=0.035). 
Conclusions A 3D T1 brain tissue template was created using imaging data from CN, MCI, and AD participants, taking into account the participants' age, sex, and years of education. This disease-specific template can aid the evaluation of brain images, supporting early diagnosis in MCI individuals and treatment of MCI and AD individuals.
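The template-construction step described in the Methods, averaging spatially normalized tissue segmentations across participants to form tissue probability maps (TPMs), reduces at its core to a voxel-wise mean followed by renormalization. A minimal sketch with synthetic segmentations (the shapes, tissue classes, and Dirichlet toy data are illustrative only, not the study's pipeline):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy spatially normalized segmentations: (subjects, tissues, voxels), with
# tissues = (GM, WM, CSF) probabilities summing to 1 at every voxel.
seg = rng.dirichlet(alpha=[3.0, 2.0, 1.0], size=(294, 1000)).transpose(0, 2, 1)

# A study-specific TPM is, at its core, the voxel-wise average of the
# individual segmentations, renormalized so the classes still sum to 1.
tpm = seg.mean(axis=0)
tpm /= tpm.sum(axis=0, keepdims=True)

print(tpm.shape)                         # (3, 1000)
print(np.allclose(tpm.sum(axis=0), 1))   # True
```

Real pipelines (e.g. SPM-style template building) iterate this with repeated registration to the evolving template; the averaging step itself is as simple as above.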
Affiliation(s)
- Xiao-Yi Guo: Department of Medicine, Graduate School, Kyung Hee University, Seoul, Republic of Korea
- Yunjung Chang: Department of Biomedical Engineering, Undergraduate School, College of Electronics and Information, Kyung Hee University, Gyeonggi-do, Republic of Korea
- Yehee Kim: Department of Biomedical Engineering, Undergraduate School, College of Electronics and Information, Kyung Hee University, Gyeonggi-do, Republic of Korea
- Hak Young Rhee: Department of Neurology, Kyung Hee University Hospital at Gangdong, College of Medicine, Kyung Hee University, Seoul, Republic of Korea
- Ah Rang Cho: Department of Psychiatry, Kyung Hee University Hospital at Gangdong, College of Medicine, Kyung Hee University, Seoul, Republic of Korea
- Soonchan Park: Department of Radiology, Kyung Hee University Hospital at Gangdong, College of Medicine, Kyung Hee University, Seoul, Republic of Korea
- Chang-Woo Ryu: Department of Radiology, Kyung Hee University Hospital at Gangdong, College of Medicine, Kyung Hee University, Seoul, Republic of Korea
- Jin San Lee: Department of Neurology, Kyung Hee University Hospital, College of Medicine, Kyung Hee University, Seoul, Republic of Korea
- Kyung Mi Lee: Department of Radiology, Kyung Hee University Hospital, College of Medicine, Kyung Hee University, Seoul, Republic of Korea
- Wonchul Shin: Department of Neurology, Kyung Hee University Hospital at Gangdong, College of Medicine, Kyung Hee University, Seoul, Republic of Korea
- Key-Chung Park: Department of Neurology, Kyung Hee University Hospital, College of Medicine, Kyung Hee University, Seoul, Republic of Korea
- Eui Jong Kim: Department of Radiology, Kyung Hee University Hospital, College of Medicine, Kyung Hee University, Seoul, Republic of Korea
- Geon-Ho Jahng: Department of Radiology, Kyung Hee University Hospital at Gangdong, College of Medicine, Kyung Hee University, Seoul, Republic of Korea

22
Hwang D, Kang SK, Kim KY, Choi H, Seo S, Lee JS. Data-driven respiratory phase-matched PET attenuation correction without CT. Phys Med Biol 2021; 66. [PMID: 33910170 DOI: 10.1088/1361-6560/abfc8f] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Accepted: 04/28/2021] [Indexed: 12/20/2022]
Abstract
We propose a deep learning-based, data-driven respiratory phase-matched gated-PET attenuation correction (AC) method that does not need a gated CT. The proposed method is a multi-step process consisting of data-driven respiratory gating, gated attenuation map estimation using the maximum-likelihood reconstruction of attenuation and activity (MLAA) algorithm, and enhancement of the gated attenuation maps using a convolutional neural network (CNN). The gated MLAA attenuation maps enhanced by the CNN allowed phase-matched AC of the gated-PET images. We conducted a non-rigid registration of the gated-PET images to generate motion-free PET images. We trained the CNN by 3D patch-based learning with 80 oncologic whole-body 18F-fluorodeoxyglucose (18F-FDG) PET/CT scans and applied it to seven regional PET/CT scans covering the lower lung and upper liver. We investigated the impact of the proposed CT-less respiratory phase-matched AC on tumor size, standardized uptake value (SUV) assessment, and PET image quality (%STD). The attenuation-corrected gated and motion-free PET images generated using the proposed method yielded sharper organ boundaries and better noise characteristics than conventional gated and ungated PET images. A banana artifact observed with phase-mismatched CT-based AC was not observed with the proposed approach. With the proposed method, tumor size was reduced by 12.3% and SUV90% was increased by 13.3% in tumors with movements larger than 5 mm, and the %STD of liver uptake was reduced by 11.1%. The deep learning-based, data-driven respiratory phase-matched AC method improved PET image quality and reduced motion artifacts.
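The data-driven gating idea, deriving a respiratory surrogate from the PET data itself and binning frames by amplitude, can be sketched as follows. The center-of-mass surrogate, the synthetic data, and the 4-bin quartile gating are illustrative simplifications, not the authors' exact pipeline:

```python
import numpy as np

# Toy dynamic PET frames: an activity profile along 32 axial positions that
# shifts with a ~4 s breathing cycle (entirely synthetic data).
t = np.arange(200)
true_phase = np.sin(2 * np.pi * t / 25)
z = np.arange(32)
counts = np.exp(-0.5 * (z[None, :] - (16 + 4 * true_phase[:, None])) ** 2)

# Data-driven surrogate: axial centre of mass of the counts in each frame.
surrogate = (counts * z).sum(axis=1) / counts.sum(axis=1)

# Amplitude-based gating into 4 bins by surrogate quartiles (the authors'
# gating pipeline differs in detail; this only shows the principle).
edges = np.quantile(surrogate, [0.25, 0.5, 0.75])
gate = np.digitize(surrogate, edges)

print(np.bincount(gate))  # roughly equal numbers of frames per gate
```

Each gate then collects frames at a similar respiratory amplitude, which is what makes phase-matched attenuation maps (one per gate) meaningful.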
Affiliation(s)
- Donghwi Hwang: Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- Seung Kwan Kang: Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- Kyeong Yun Kim: Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- Hongyoon Choi: Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- Seongho Seo: Department of Electronic Engineering, Pai Chai University, Daejeon, Republic of Korea
- Jae Sung Lee: Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, Republic of Korea

23
Yin L, Cao Z, Wang K, Tian J, Yang X, Zhang J. A review of the application of machine learning in molecular imaging. Ann Transl Med 2021; 9:825. [PMID: 34268438 PMCID: PMC8246214 DOI: 10.21037/atm-20-5877] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Accepted: 10/02/2020] [Indexed: 12/12/2022]
Abstract
Molecular imaging (MI) uses imaging methods to visualize molecular-level changes in the living state and to study their biological behavior qualitatively and quantitatively. Optical molecular imaging (OMI) and nuclear medicine imaging are two key research fields of MI. OMI collects the optical signals generated by an imaging target (such as a tumor) in response to drug intervention or other perturbations; from this information, researchers can track the motion trajectory of the imaging target at the molecular level. Owing to its high specificity and sensitivity, OMI has been widely used in preclinical research and clinical surgery. Nuclear medicine imaging mainly detects ionizing radiation emitted by radioactive substances and can provide molecular information for the early diagnosis, effective treatment, and basic research of diseases, making it one of the frontier topics in medicine today. Both OMI and nuclear medicine imaging require extensive data processing and analysis. In recent years, artificial intelligence, especially neural network-based machine learning (ML), has been widely applied in MI because of its powerful data-processing capability, providing a feasible strategy for handling the large and complex datasets that MI produces. In this review, we focus on the applications of ML methods in OMI and nuclear medicine imaging.
Affiliation(s)
- Lin Yin: Key Laboratory of Molecular Imaging of Chinese Academy of Sciences, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Zhen Cao: Peking University First Hospital, Beijing, China
- Kun Wang: Key Laboratory of Molecular Imaging of Chinese Academy of Sciences, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Jie Tian: Key Laboratory of Molecular Imaging of Chinese Academy of Sciences, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, China
- Xing Yang: Peking University First Hospital, Beijing, China

24
Kang SK, Lee JS. Anatomy-guided PET reconstruction using l1 Bowsher prior. Phys Med Biol 2021; 66. [PMID: 33780912 DOI: 10.1088/1361-6560/abf2f7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2020] [Accepted: 03/29/2021] [Indexed: 12/22/2022]
Abstract
Advances in simultaneous positron emission tomography/magnetic resonance imaging (PET/MRI) technology have led to active investigation of anatomy-guided regularized PET image reconstruction algorithms based on MR images. Among the various priors proposed for anatomy-guided regularized PET image reconstruction, Bowsher's method based on second-order smoothing priors sometimes suffers from over-smoothing of detailed structures. Therefore, in this study, we propose a Bowsher prior based on the l1-norm and an iterative reweighting scheme to overcome the limitations of the original Bowsher method. In addition, we derived a closed-form solution for iterative image reconstruction based on this non-smooth prior. A comparison study between the original l2 and proposed l1 Bowsher priors was conducted using computer simulation and real human data. In both the simulation and the real data application, small lesions with abnormal PET uptake were better detected by the proposed l1 Bowsher prior methods than by the original Bowsher prior. The original l2 Bowsher prior leads to decreased PET intensity in small lesions when there is no clear separation between the lesions and surrounding tissue in the anatomical prior. The proposed l1 Bowsher prior methods, however, showed better contrast between tumors and surrounding tissues owing to the intrinsic edge-preserving property of the prior, which is attributed to the sparseness induced by the l1-norm, especially in the iterative reweighting scheme. Moreover, the proposed methods demonstrated lower bias and less hyper-parameter dependency in PET intensity estimation in regions with matched anatomical boundaries in PET and MRI. Therefore, these methods will be useful for improving PET image quality based on anatomical side information.
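The Bowsher prior selects, for each voxel, the anatomically most similar neighbours and penalizes PET differences only over those pairs; the variant proposed here replaces the quadratic penalty with the l1-norm. A toy 1D sketch of that selection-plus-penalty structure (the neighbourhood window and B are arbitrary choices for illustration, not the paper's settings):

```python
import numpy as np

def bowsher_penalty(pet, mri, B=2, p=1):
    """Bowsher prior on a 1D signal: for each voxel, keep the B neighbours
    (from a +/-2 window) most similar in the anatomical MRI, then sum
    |pet[j] - pet[k]|**p over the kept pairs. p=1 gives the l1 variant,
    p=2 the original quadratic form."""
    n = len(pet)
    total = 0.0
    for j in range(n):
        nbrs = [k for k in range(max(0, j - 2), min(n, j + 3)) if k != j]
        # Bowsher selection: neighbours ranked by anatomical similarity.
        nbrs.sort(key=lambda k: abs(mri[k] - mri[j]))
        for k in nbrs[:B]:
            total += abs(pet[j] - pet[k]) ** p
    return total

mri = np.array([0., 0., 0., 1., 1., 1.])        # sharp anatomical edge
pet_sharp = np.array([5., 5., 5., 9., 9., 9.])  # PET edge aligned with MRI
pet_blur = np.array([5., 5., 7., 7., 9., 9.])   # PET smoothed across the edge

print(bowsher_penalty(pet_sharp, mri), bowsher_penalty(pet_blur, mri))  # 0.0 16.0
```

Because differences are only penalized between anatomically similar neighbours, a PET edge aligned with the MRI edge costs nothing, while smoothing across it is penalized; this is the edge-preserving behaviour the abstract describes.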
Affiliation(s)
- Seung Kwan Kang: Department of Nuclear Medicine, Seoul National University Hospital, Seoul 03080, Republic of Korea; Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul 03080, Republic of Korea; Brightonix Imaging Inc., Seoul 04793, Republic of Korea
- Jae Sung Lee: Department of Nuclear Medicine, Seoul National University Hospital, Seoul 03080, Republic of Korea; Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul 03080, Republic of Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul 03080, Republic of Korea; Brightonix Imaging Inc., Seoul 04793, Republic of Korea

25
Tsubaki Y, Kitamura T, Shimokawa N, Akamatsu G, Sasaki M. Improved Accuracy of Amyloid PET Quantification with Adaptive Template-Based Anatomic Standardization. J Nucl Med Technol 2021; 49:256-261. [PMID: 33820861 DOI: 10.2967/jnmt.120.261701] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2020] [Accepted: 03/01/2021] [Indexed: 11/16/2022] Open
Abstract
Amyloid PET noninvasively visualizes amyloid-β accumulation in the brain. Visual binary reading is the standard method for interpreting amyloid PET, whereas objective quantitative evaluation is required in research and clinical trials. Anatomic standardization is important for quantitative analysis, and various standard templates are used for this purpose. To address the large differences in radioactivity distribution between amyloid-positive and amyloid-negative participants, an adaptive-template method has been proposed for the anatomic standardization of amyloid PET. In this study, we investigated the difference between the adaptive-template method and the single-template methods (use of a positive or a negative template) in amyloid PET quantitative evaluation, focusing on the accuracy in diagnosing Alzheimer's disease (AD). Methods: In total, 166 participants (58 healthy controls [HCs], 62 patients with mild cognitive impairment [MCI], and 46 patients with AD) who underwent 11C-Pittsburgh compound B (11C-PiB) PET through the Japanese Alzheimer's Disease Neuroimaging Initiative study were examined. For the anatomic standardization of 11C-PiB PET images, we applied 3 methods: a positive-template-based method, a negative-template-based method, and an adaptive-template-based method. The positive template was created by averaging the PET images for 4 patients with AD and 7 patients with MCI. Conversely, the negative template was created by averaging the PET images for 8 HCs. In the adaptive-template-based method, either of the templates was used on the basis of the similarity (normalized cross-correlation [NCC]) between the individual standardized image and the corresponding template. An empiric PiB-prone region of interest was used to evaluate specific regions where amyloid-β accumulates. 
The reference region was the cerebellar cortex, and the evaluated regions were the posterior cingulate gyrus and precuneus and the frontal, lateral temporal, lateral parietal, and occipital lobes. The mean cortical SUV ratio (mcSUVR) was calculated for quantitative evaluation. Results: The NCCs of single-template-based methods (the positive template or negative template) showed a significant difference among the HC, MCI, and AD groups (P < 0.05), whereas the NCC of the adaptive-template-based method did not (P > 0.05). The mcSUVR exhibited significant differences among the HC, MCI, and AD groups with all methods (P < 0.05). The mcSUVR area under the curve by receiver operating characteristic analysis between the positive group (MCI and AD) and the HC group did not significantly differ among templates. With regard to diagnostic accuracy based on mcSUVR, the sensitivity of the negative-template-based and adaptive-template-based methods was superior to that of the positive-template-based method (P < 0.05); however, there was no significant difference in specificity between them. Conclusion: In quantitative evaluation of AD by amyloid PET, the adaptive-template-based anatomic standardization method had greater diagnostic accuracy than the single-template-based methods.
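The adaptive-template rule described above is essentially a normalized cross-correlation (NCC) comparison between the standardized image and each candidate template. A self-contained sketch with toy data (the templates, "uptake" gradient, and noise level below are invented for illustration, not the study's images):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def pick_template(img, pos_tpl, neg_tpl):
    """Adaptive-template rule: keep the template the image resembles more."""
    return "positive" if ncc(img, pos_tpl) >= ncc(img, neg_tpl) else "negative"

rng = np.random.default_rng(0)
neg_tpl = rng.random((16, 16))                        # toy amyloid-negative template
pos_tpl = neg_tpl + 1.5 * np.arange(16) / 15          # toy positive template: extra "uptake"
scan = pos_tpl + 0.1 * rng.standard_normal((16, 16))  # noisy amyloid-positive scan

print(pick_template(scan, pos_tpl, neg_tpl))  # positive
```

In the study this choice is made after anatomic standardization with each template, so that the radioactivity distribution of the template matches that of the individual scan.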
Affiliation(s)
- Yuma Tsubaki: Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Takayoshi Kitamura: Department of Health Sciences, School of Medicine, Kyushu University, Fukuoka, Japan
- Natsumi Shimokawa: Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Go Akamatsu: National Institute of Radiological Sciences, National Institutes for Quantum and Radiological Science and Technology, Chiba, Japan
- Masayuki Sasaki: Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan

26
Josselyn N, MacLean MT, Jean C, Fuchs B, Moon BF, Hwuang E, Iyer SK, Litt H, Han Y, Kaghazchi F, Bravo PE, Witschey WR. Classification of Myocardial 18F-FDG PET Uptake Patterns Using Deep Learning. Radiol Artif Intell 2021; 3:e200148. [PMID: 34350405 DOI: 10.1148/ryai.2021200148] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Revised: 02/17/2021] [Accepted: 03/11/2021] [Indexed: 11/11/2022]
Abstract
Purpose To perform automated myocardial segmentation and uptake classification from whole-body fluorine 18 fluorodeoxyglucose (FDG) PET. Materials and Methods In this retrospective study, consecutive patients who underwent FDG PET imaging for oncologic indications were included (July-August 2018). The left ventricle (LV) on whole-body FDG PET images was manually segmented and classified as showing no myocardial uptake, diffuse uptake, or partial uptake. A total of 609 patients (mean age, 64 years ± 14 [standard deviation]; 309 women) were included and split between training (60%, 365 patients), validation (20%, 122 patients), and testing (20%, 122 patients) datasets. Two sequential neural networks were developed to automatically segment the LV and classify the myocardial uptake pattern using segmentation and classification training data provided by human experts. Linear regression was performed to correlate findings from human experts and deep learning. Classification performance was evaluated using receiver operating characteristic (ROC) analysis. Results There was moderate agreement of uptake pattern between experts and deep learning (as a fraction of correctly categorized images): 78% (36 of 46) for no uptake, 71% (34 of 48) for diffuse uptake, and 71% (20 of 28) for partial uptake. There was no bias in LV volume for partial or diffuse uptake categories (P = .56); however, deep learning underestimated LV volumes in the no uptake category. There was good correlation for LV volume (R2 = 0.35, b = 0.71). ROC analysis showed that the area under the curve for classifying no uptake and diffuse uptake was high (>0.90) but lower for partial uptake (0.77). The feasibility of a myocardial uptake index (MUI) for quantifying the degree of myocardial activity patterns was shown, and there was excellent visual agreement between MUI and uptake patterns. Conclusion Deep learning was able to segment and classify myocardial uptake patterns on FDG PET images. Keywords: PET, Heart, Computer Aided Diagnosis, Computer Application-Detection/Diagnosis. Supplemental material is available for this article. ©RSNA, 2021.
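The ROC analysis reported above reduces, for each uptake class, to computing an area under the curve; one standard route is the Mann-Whitney rank-sum identity. The scores and labels below are hypothetical, purely to show the computation:

```python
import numpy as np

def auc(scores, labels):
    """ROC AUC via the Mann-Whitney rank-sum identity (no ties assumed)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical classifier scores for one uptake class (1 = that class).
scores = np.array([0.1, 0.4, 0.35, 0.8])
labels = np.array([0, 0, 1, 1])
print(auc(scores, labels))  # 0.75
```

The identity says the AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case, which is why it can be read off the ranks directly.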
Affiliation(s)
- Nicholas Josselyn, Matthew T MacLean, Christopher Jean, Ben Fuchs, Brianna F Moon, Eileen Hwuang, Srikant Kamesh Iyer, Harold Litt, Yuchi Han, Fatemeh Kaghazchi, Paco E Bravo, Walter R Witschey: Departments of Radiology (N.J., M.T.M., C.J., B.F., H.L., Y.H., F.K., P.E.B., W.R.W.), Bioengineering (B.F.M., E.H., S.K.I.), and Medicine (Y.H., P.B.), Perelman School of Medicine, University of Pennsylvania, 3400 Civic Center Blvd, South Pavilion, Room 11-155, Philadelphia, PA 19104

27
Lee JS. A Review of Deep-Learning-Based Approaches for Attenuation Correction in Positron Emission Tomography. IEEE Trans Radiat Plasma Med Sci 2021. [DOI: 10.1109/trpms.2020.3009269] [Citation(s) in RCA: 35] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
28
Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008] [Citation(s) in RCA: 84] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Revised: 02/18/2021] [Accepted: 03/03/2021] [Indexed: 02/06/2023] Open
29
Translating amyloid PET of different radiotracers by a deep generative model for interchangeability. Neuroimage 2021; 232:117890. [PMID: 33617991 DOI: 10.1016/j.neuroimage.2021.117890] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Revised: 12/31/2020] [Accepted: 02/15/2021] [Indexed: 11/24/2022] Open
Abstract
It is challenging to compare amyloid PET images obtained with different radiotracers. Here, we introduce a new approach to improve the interchangeability of amyloid PET acquired with different radiotracers through image-level translation. Deep generative networks were developed using unpaired PET datasets consisting of 203 [11C]PIB and 850 [18F]florbetapir brain PET images. Using 15 paired PET datasets, the standardized uptake value ratio (SUVR) values obtained from pseudo-PIB or pseudo-florbetapir PET images translated by the generative networks were compared with those obtained from the original images. The generated amyloid PET images showed distribution patterns similar to original amyloid PET images of the other radiotracer. The SUVR obtained from the original [18F]florbetapir PET was lower than that obtained from the original [11C]PIB PET, and the translated amyloid PET images reduced this difference. The SUVR obtained from the pseudo-PIB PET images generated from [18F]florbetapir PET showed good agreement with those of the original PIB PET (ICC = 0.87 for global SUVR). The SUVR obtained from the pseudo-florbetapir PET also showed good agreement with those of the original [18F]florbetapir PET (ICC = 0.85 for global SUVR). The ICC values between the original and generated PET images were higher than those between the original [11C]PIB and [18F]florbetapir images (ICC = 0.65 for global SUVR). Our approach provides image-level translation of amyloid PET images obtained using different radiotracers. It may facilitate clinical studies that must handle heterogeneous amyloid PET images, whether from long-term clinical follow-up or from multicenter designs, by enabling translation between different types of amyloid PET.
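The ICC agreement values quoted above can be computed, for example, with the two-way random-effects, absolute-agreement, single-measures form ICC(2,1); the abstract does not state which ICC variant the authors used, so treat this as one plausible illustration. The SUVR numbers below are made up:

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single measures
    (Shrout-Fleiss). Y has shape (n subjects, k methods)."""
    n, k = Y.shape
    grand = Y.mean()
    row_m, col_m = Y.mean(axis=1), Y.mean(axis=0)
    msr = k * ((row_m - grand) ** 2).sum() / (n - 1)  # between-subjects mean square
    msc = n * ((col_m - grand) ** 2).sum() / (k - 1)  # between-methods mean square
    mse = ((Y - row_m[:, None] - col_m[None, :] + grand) ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Made-up SUVR pairs: original [11C]PIB vs pseudo-PIB translated from
# [18F]florbetapir (illustrative numbers, not the paper's data).
orig = np.array([1.10, 1.35, 1.80, 2.10, 1.05, 1.60])
pseudo = np.array([1.12, 1.30, 1.85, 2.05, 1.10, 1.55])

icc = icc_2_1(np.column_stack([orig, pseudo]))
print(round(icc, 3))  # close to 1 for these nearly identical toy columns
```

Unlike a plain correlation, this absolute-agreement form also penalizes systematic offsets between the two methods, which is the relevant property when asking whether translated and original SUVRs are interchangeable.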
30
Kang SK, Shin SA, Seo S, Byun MS, Lee DY, Kim YK, Lee DS, Lee JS. Deep learning-Based 3D inpainting of brain MR images. Sci Rep 2021; 11:1673. [PMID: 33462321 PMCID: PMC7814079 DOI: 10.1038/s41598-020-80930-w] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2019] [Accepted: 12/14/2020] [Indexed: 12/22/2022] Open
Abstract
The detailed anatomical information of the brain provided by 3D magnetic resonance imaging (MRI) enables a wide range of neuroscience research. However, because of the long scan time for 3D MR images, 2D images are mainly obtained in clinical environments. The purpose of this study is to generate 3D images from sparsely sampled 2D images using an inpainting deep neural network that has a U-net-like structure and DenseNet sub-blocks. To train the network, both a fidelity loss and a perceptual loss based on the VGG network were considered. Various methods were used to assess the overall similarity between the inpainted and original 3D data. In addition, morphological analyses were performed to investigate whether the inpainted data reproduced local features of the original 3D data. The diagnostic ability of the inpainted data was also evaluated by investigating the pattern of morphological changes in disease groups. Brain anatomy details were efficiently recovered by the proposed neural network. In voxel-based analyses of gray matter volume and cortical thickness, differences between the inpainted data and the original 3D data were observed only in small clusters. The proposed method will be useful for applying advanced neuroimaging techniques to 2D MRI data.
Affiliation(s)
- Seung Kwan Kang: Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, Korea
- Seong A Shin: Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, Korea
- Seongho Seo: Department of Electronic Engineering, Pai Chai University, Daejeon, Korea
- Min Soo Byun: Institute of Human Behavioral Medicine, Medical Research Center, Seoul National University, Seoul, Korea
- Dong Young Lee: Department of Psychiatry, Seoul National University College of Medicine, Seoul, Korea
- Yu Kyeong Kim: Department of Nuclear Medicine, SMG-SNU Boramae Medical Center, Seoul, Korea
- Dong Soo Lee: Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, Republic of Korea
- Jae Sung Lee: Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, Republic of Korea

31
Delso G, Cirillo D, Kaggie JD, Valencia A, Metser U, Veit-Haibach P. How to Design AI-Driven Clinical Trials in Nuclear Medicine. Semin Nucl Med 2020; 51:112-119. [PMID: 33509367 DOI: 10.1053/j.semnuclmed.2020.09.003] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Artificial intelligence (AI) is an overarching term for a multitude of technologies that are currently being discussed and introduced in several areas of medicine, and in medical imaging specifically. There is, however, limited literature and information about how AI techniques can be integrated into the design of clinical imaging trials. This article presents several aspects of AI being used in trials today and describes how imaging departments, and especially nuclear medicine departments, can prepare themselves to be at the forefront of AI-driven clinical trials. Beginning with a basic explanation of the AI techniques currently in use and the challenges of implementing them, it also covers the logistical prerequisites that must be in place for nuclear medicine departments to participate successfully in AI-driven clinical trials.
Affiliation(s)
- Joshua D Kaggie
- Department of Radiology, University of Cambridge, Cambridge, UK
- Ur Metser
- Joint Department of Medical Imaging, University Health Network, Toronto, Canada
32
Duffy IR, Boyle AJ, Vasdev N. Improving PET Imaging Acquisition and Analysis With Machine Learning: A Narrative Review With Focus on Alzheimer's Disease and Oncology. Mol Imaging 2020; 18:1536012119869070. [PMID: 31429375 PMCID: PMC6702769 DOI: 10.1177/1536012119869070] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
Machine learning (ML) algorithms have found increasing utility in the medical imaging field, and numerous applications in the analysis of digital biomarkers within positron emission tomography (PET) imaging have emerged. Interest in the use of artificial intelligence in PET imaging for the study of neurodegenerative diseases and oncology stems from the potential of such techniques to streamline decision support for physicians by providing early and accurate diagnosis and enabling personalized treatment regimens. In this review, the use of ML to improve PET image acquisition and reconstruction is presented, along with an overview of its applications in the analysis of PET images for the study of Alzheimer's disease and oncology.
Affiliation(s)
- Ian R Duffy
- Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Amanda J Boyle
- Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Neil Vasdev
- Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
33
Gupta A, Lee MS, Kim JH, Lee DS, Lee JS. Preclinical Voxel-Based Dosimetry in Theranostics: a Review. Nucl Med Mol Imaging 2020; 54:86-97. [PMID: 32377260 DOI: 10.1007/s13139-020-00640-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2020] [Revised: 03/27/2020] [Accepted: 03/31/2020] [Indexed: 12/22/2022] Open
Abstract
Due to the increasing use of preclinical targeted radionuclide therapy (TRT) studies for the development of novel theranostic agents, several studies have been performed to accurately estimate absorbed doses in mice at the voxel level using reference mouse phantoms and Monte Carlo (MC) simulations. Accurate dosimetry is important in preclinical theranostics to interpret radiobiological dose-response relationships and to translate results for clinical use. Direct MC (DMC) simulation is believed to produce more realistic voxel-level dose distributions with high precision because it accounts for tissue heterogeneities and nonuniform source distributions in patients or animals. Although MC simulation is considered an accurate method for voxel-based absorbed dose calculation, it is time-consuming, computationally demanding, and often impractical in daily practice. In this review, we focus on the current status of voxel-based dosimetry methods applied in preclinical theranostics and discuss the need for accurate and fast voxel-based dosimetry methods for pretherapy absorbed dose calculation to optimize the dose computation time in preclinical TRT.
Affiliation(s)
- Arun Gupta
- Department of Radiology & Imaging, B.P. Koirala Institute of Health Sciences, Dharan, Nepal
- Min Sun Lee
- Department of Radiology, School of Medicine, Stanford University, Stanford, CA, USA
- Joong Hyun Kim
- Center for Ionizing Radiation, Korea Research Institute of Standards and Science, Daejeon, South Korea
- Dong Soo Lee
- Department of Nuclear Medicine, College of Medicine, Seoul National University, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Jae Sung Lee
- Department of Nuclear Medicine, College of Medicine, Seoul National University, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Interdisciplinary Program in Radiation Applied Life Science, Seoul National University, Seoul, South Korea
- Department of Biomedical Sciences, College of Medicine, Seoul National University, Seoul, South Korea
34
Robust nonlinear parameter estimation in tracer kinetic analysis using infinity norm regularization and particle swarm optimization. Phys Med 2020; 72:60-72. [PMID: 32200299 DOI: 10.1016/j.ejmp.2020.03.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Revised: 03/06/2020] [Accepted: 03/08/2020] [Indexed: 11/23/2022] Open
Abstract
In positron emission tomography (PET) studies, the voxel-wise calculation of individual rate constants describing the tracer kinetics is quite challenging because of the nonlinear relationship between the rate constants and PET data and the high noise level in voxel data. Based on preliminary simulations using a standard two-tissue compartment model, we hypothesized that errors in the rate constant estimates can be reduced by constraining the overestimation of the larger of the two exponents in the model equation. We thus propose a novel approach based on infinity-norm regularization for limiting this exponent. Owing to the non-smooth cost function of this regularization scheme, which prevents the use of conventional Jacobian-based optimization methods, we examined a proximal gradient algorithm and particle swarm optimization (PSO) through a simulation study. Because it exploits multiple initial values, the PSO method shows much better convergence than the proximal gradient algorithm, which is susceptible to the choice of initial values. In the implementation of PSO, using a Gamma distribution to govern random movements was shown to improve the convergence rate and stability compared with a uniform distribution. Consequently, Gamma-based PSO with regularization outperformed all other methods tested, including the conventional basis function method and the Levenberg-Marquardt algorithm, in terms of its statistical properties.
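As a rough illustration of the approach this abstract describes, the sketch below fits a noisy bi-exponential model with a minimal particle swarm while a non-smooth, infinity-norm-style penalty caps the larger exponent. Everything here (the model, parameter values, swarm settings, and the `B_MAX` cap) is an invented toy for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic time-activity curve: C(t) = a1*exp(-b1*t) + a2*exp(-b2*t)
t = np.linspace(0.1, 60, 40)
true_p = np.array([1.0, 0.05, 0.5, 0.8])          # a1, b1, a2, b2

def model(p, t):
    a1, b1, a2, b2 = p
    return a1 * np.exp(-b1 * t) + a2 * np.exp(-b2 * t)

y = model(true_p, t) + rng.normal(0, 0.01, t.size)

B_MAX = 2.0  # cap on the larger exponent (the non-smooth constraint)

def cost(p):
    # Least-squares data term plus a non-smooth penalty on max(b1, b2)
    pen = 1e3 * max(0.0, max(p[1], p[3]) - B_MAX)
    return float(np.sum((model(p, t) - y) ** 2) + pen)

# Minimal particle swarm: multiple initial values explore the cost surface
n, dim, iters = 30, 4, 200
lo, hi = np.zeros(dim), np.array([2.0, 1.0, 2.0, 2 * B_MAX])
pos = rng.uniform(lo, hi, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_c = np.array([cost(p) for p in pos])
g = pbest[np.argmin(pbest_c)].copy()              # global best
for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, lo, hi)
    c = np.array([cost(p) for p in pos])
    imp = c < pbest_c
    pbest[imp], pbest_c[imp] = pos[imp], c[imp]
    g = pbest[np.argmin(pbest_c)].copy()
```

The penalty makes the cost non-smooth exactly where the constraint activates, which is why derivative-free PSO is a natural fit here.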
35
Highly multiplexed SiPM signal readout for brain-dedicated TOF-DOI PET detectors. Phys Med 2019; 68:117-123. [DOI: 10.1016/j.ejmp.2019.11.016] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/03/2019] [Revised: 11/12/2019] [Accepted: 11/18/2019] [Indexed: 12/22/2022] Open
36
Wenzel M, Milletari F, Krüger J, Lange C, Schenk M, Apostolova I, Klutmann S, Ehrenburg M, Buchert R. Automatic classification of dopamine transporter SPECT: deep convolutional neural networks can be trained to be robust with respect to variable image characteristics. Eur J Nucl Med Mol Imaging 2019; 46:2800-2811. [DOI: 10.1007/s00259-019-04502-5] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2019] [Accepted: 08/22/2019] [Indexed: 01/29/2023]
37
Deep-dose: a voxel dose estimation method using deep convolutional neural network for personalized internal dosimetry. Sci Rep 2019; 9:10308. [PMID: 31311963 PMCID: PMC6635490 DOI: 10.1038/s41598-019-46620-y] [Citation(s) in RCA: 48] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2019] [Accepted: 06/27/2019] [Indexed: 12/22/2022] Open
Abstract
Personalized dosimetry with high accuracy is crucial owing to the growing interest in personalized medicine. Direct Monte Carlo simulation is considered the state-of-the-art voxel-based dosimetry technique; however, it incurs excessive computational cost and time. To overcome the limitations of the direct Monte Carlo approach, we propose using a deep convolutional neural network (CNN) for voxel dose prediction. PET and CT image patches were used as inputs to the CNN, with the ground truth given by direct Monte Carlo. The predicted voxel dose-rate maps from the CNN were compared with the ground truth and with dose-rate maps generated using the voxel S-value (VSV) kernel convolution method, one of the common voxel-based dosimetry techniques. The CNN-based dose-rate map agreed well with the ground truth, with voxel dose-rate errors of 2.54% ± 2.09%. The VSV kernel approach showed a voxel error of 9.97% ± 1.79%. In the whole-body dosimetry study, the average organ absorbed-dose errors were 1.07%, 9.43%, and 34.22% for the CNN, VSV, and OLINDA/EXM dosimetry software, respectively. The proposed CNN-based dosimetry method improved on the conventional dosimetry approaches and produced results comparable to those of direct Monte Carlo simulation with significantly lower calculation time.
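The voxel S-value (VSV) kernel convolution baseline mentioned in this abstract can be sketched in a few lines: the dose-rate map is the activity map convolved with a dose-deposition kernel. The kernel shape and array sizes below are illustrative assumptions, not the paper's actual S-value kernel:

```python
import numpy as np
from scipy.ndimage import convolve

# Toy "voxel S-value" kernel: inverse-square falloff on a 5x5x5 grid
r = np.indices((5, 5, 5)) - 2
dist2 = (r ** 2).sum(axis=0).astype(float)
dist2[2, 2, 2] = 0.25            # avoid division by zero at the source voxel
kernel = 1.0 / dist2
kernel /= kernel.sum()           # normalize so total deposited dose is conserved

# Activity map: a single hot source at the center of a 16^3 volume
activity = np.zeros((16, 16, 16))
activity[8, 8, 8] = 100.0

# Voxel-level dose-rate map = activity convolved with the kernel
dose = convolve(activity, kernel, mode='constant', cval=0.0)
```

Unlike direct Monte Carlo, the convolution assumes a homogeneous medium, which is precisely the limitation the CNN approach in the paper targets.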
38
Park J, Bae S, Seo S, Park S, Bang JI, Han JH, Lee WW, Lee JS. Measurement of Glomerular Filtration Rate using Quantitative SPECT/CT and Deep-learning-based Kidney Segmentation. Sci Rep 2019; 9:4223. [PMID: 30862873 PMCID: PMC6414660 DOI: 10.1038/s41598-019-40710-7] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2018] [Accepted: 02/12/2019] [Indexed: 12/01/2022] Open
Abstract
Quantitative SPECT/CT is potentially useful for more accurate and reliable measurement of glomerular filtration rate (GFR) than conventional planar scintigraphy. However, manual drawing of a volume of interest (VOI) on renal parenchyma in CT images is a labor-intensive and time-consuming task. The aim of this study is to develop a fully automated GFR quantification method based on a deep learning approach to the 3D segmentation of kidney parenchyma in CT. We automatically segmented the kidneys in CT images using the proposed method with remarkably high Dice similarity coefficient relative to the manual segmentation (mean = 0.89). The GFR values derived using manual and automatic segmentation methods were strongly correlated (R2 = 0.96). The absolute difference between the individual GFR values using manual and automatic methods was only 2.90%. Moreover, the two segmentation methods had comparable performance in the urolithiasis patients and kidney donors. Furthermore, both segmentation modalities showed significantly decreased individual GFR in symptomatic kidneys compared with the normal or asymptomatic kidney groups. The proposed approach enables fast and accurate GFR measurement.
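The Dice similarity coefficient used above to compare manual and automatic segmentations is simple to compute; here is a minimal sketch with toy 2D masks (the paper works with 3D CT volumes):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Two toy "kidney" masks that overlap partially
manual = np.zeros((10, 10), dtype=bool)
manual[2:8, 2:8] = True          # 36 voxels
auto = np.zeros((10, 10), dtype=bool)
auto[3:9, 3:9] = True            # 36 voxels, 25 of them shared with manual
```

A Dice value of 0.89, as reported in the abstract, indicates that the automatic mask recovers most of the manually drawn parenchyma.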
Affiliation(s)
- Junyoung Park
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Korea
- Sungwoo Bae
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Korea
- Department of Nuclear Medicine, Seoul National University Bundang Hospital, Seongnam-si, Gyeonggi-do, Korea
- Seongho Seo
- Department of Neuroscience, College of Medicine, Gachon University, Incheon, Korea
- Sohyun Park
- Department of Nuclear Medicine, National Cancer Center, Goyang-si, Gyeonggi-do, Korea
- Ji-In Bang
- Department of Nuclear Medicine, Ewha Womans University School of Medicine, Seoul, Korea
- Jeong Hee Han
- Department of Nuclear Medicine, Seoul National University Bundang Hospital, Seongnam-si, Gyeonggi-do, Korea
- Won Woo Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Korea
- Department of Nuclear Medicine, Seoul National University Bundang Hospital, Seongnam-si, Gyeonggi-do, Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul, Korea
- Jae Sung Lee
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul, Korea
39
Abstract
PURPOSE We propose a multi-atlas based segmentation method for cardiac PET and SPECT images to deal with the high variability of tracer uptake characteristics in the myocardium. In addition, we verify its performance by comparing it to manual segmentation and a single-atlas based approach, using dynamic myocardial PET. METHODS Twelve left coronary artery-ligated SD rats underwent ([18F]fluoropentyl)triphenylphosphonium salt PET/CT scans. Atlas-based segmentation relies on a spatially normalized template with a pre-defined region-of-interest (ROI) for each anatomical or functional structure. To generate multiple left ventricular (LV) atlases, each LV image was segmented manually and divided into angular segments. The segmentation methods' performances were compared in terms of regional count information using leave-one-out cross-validation. Additionally, polar maps of kinetic parameters were estimated. RESULTS In all images, the template with the highest r2 also yielded the lowest root-mean-square error (RMSE); the r2 and RMSE between the source image and the best-matching templates ranged between 0.91-0.97 and 0.06-0.11, respectively. The single-atlas and multi-atlas based ROIs yielded remarkably different perfusion distributions: only the multi-atlas based segmentation showed high correlation with the manual segmentation (r2 = 0.92, versus r2 = 0.88 for the single-atlas based approach). Underestimation of high perfusion values was notable in the single-atlas based segmentation. CONCLUSIONS The main advantage of the proposed multi-atlas based cardiac segmentation method is that it does not require any prior information on the tracer distribution to be incorporated into the image segmentation algorithm. Therefore, the same procedure suggested here is applicable to any other cardiac PET or SPECT imaging agent without modification.
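The template-selection step described above, choosing the best-matching atlas by r², can be illustrated with synthetic 1D profiles; the data, noise level, and score threshold below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three toy atlas "templates" and a noisy source image; pick the template
# with the highest r^2 (squared Pearson correlation) against the source.
templates = [rng.random(64) for _ in range(3)]
source = templates[1] + rng.normal(0, 0.05, 64)   # noisy copy of template 1

def r2(a, b):
    return float(np.corrcoef(a, b)[0, 1] ** 2)

scores = [r2(source, t) for t in templates]
best = int(np.argmax(scores))                      # index of best-matching atlas
```

With multiple atlases, this per-image matching lets each source image borrow the ROI definition of the template whose uptake pattern it most resembles.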
40
Clinical Personal Connectomics Using Hybrid PET/MRI. Nucl Med Mol Imaging 2019; 53:153-163. [PMID: 31231434 DOI: 10.1007/s13139-019-00572-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2018] [Revised: 12/31/2018] [Accepted: 01/04/2019] [Indexed: 01/08/2023] Open
Abstract
Brain connectivity can now be studied with topological analysis using persistent homology, which overcomes the arbitrariness of the thresholding used to make binary graphs for comparison between disease and normal control groups. Resting-state fMRI can yield personal interregional brain connectivity based on the perfusion signal on an individual-subject basis, and FDG PET produces the topography of glucose metabolism. Assuming metabolism-perfusion coupling, and disregarding the slight difference between the time represented by metabolism (before image acquisition) and the time represented by perfusion (during image acquisition), the topography of brain metabolism on FDG PET and the topologically analyzed brain connectivity on resting-state fMRI might be combined to yield the personal connectomics of individual subjects and even individual patients. Work on associating FDG PET with resting-state fMRI is still needed; however, the statistics behind group comparisons of connectivity on FDG PET or resting-state fMRI have already been developed. Before going further into connectomics construction using directed weighted brain graphs from FDG PET or resting-state fMRI, I detail in this review the plausibility of using hybrid PET/MRI to enable the interpretation of personal connectomics, which could lead to the clinical use of brain connectivity in the near future.
41
Kang SK, Seo S, Shin SA, Byun MS, Lee DY, Kim YK, Lee DS, Lee JS. Adaptive template generation for amyloid PET using a deep learning approach. Hum Brain Mapp 2018; 39:3769-3778. [PMID: 29752765 PMCID: PMC6866631 DOI: 10.1002/hbm.24210] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2017] [Revised: 04/27/2018] [Accepted: 05/01/2018] [Indexed: 12/26/2022] Open
Abstract
Accurate spatial normalization (SN) of amyloid positron emission tomography (PET) images for Alzheimer's disease assessment without coregistered anatomical magnetic resonance imaging (MRI) of the same individual is technically challenging. In this study, we applied deep neural networks to generate individually adaptive PET templates for robust and accurate SN of amyloid PET without using matched 3D MR images. Using 681 pairs of simultaneously acquired 11C-PIB PET and T1-weighted 3D MRI scans of AD, MCI, and cognitively normal subjects, we trained and tested two deep neural networks [a convolutional auto-encoder (CAE) and a generative adversarial network (GAN)] that produce individually adaptive PET templates. More specifically, the networks were trained using 685,100 pieces of augmented data generated by rotating 527 randomly selected datasets, and validated using 154 datasets. The input to the supervised neural networks was the 3D PET volume in native space, and the label was the spatially normalized 3D PET image obtained using the transformation parameters from MRI-based SN. The proposed deep learning approach significantly enhanced the quantitative accuracy of MRI-less amyloid PET assessment by reducing the SN error observed when an average amyloid PET template is used. Given an input image, the trained deep neural networks rapidly provide individually adaptive 3D PET templates without any discontinuity between slices (in 0.02 s). As the proposed method does not require 3D MRI for the SN of PET images, it has great potential for use in the routine analysis of amyloid PET images in clinical practice and research.
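Template-based spatial normalization ultimately amounts to registering the native PET image to a template. As a 1D toy version of that registration step, the sketch below recovers a known shift by minimizing a mean-squared-error cost; the signals and cost function are illustrative assumptions, not the paper's method:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy 1D "registration": recover the translation between a native image
# and a template by minimizing the mean-squared error.
x = np.linspace(-10, 10, 201)
template = np.exp(-x ** 2 / 4)              # template in "standard space"
native = np.exp(-(x - 1.5) ** 2 / 4)        # same pattern shifted by 1.5

def mse(s):
    # Shift the native signal by s and compare against the template
    moved = np.interp(x, x - s, native)
    return float(np.mean((moved - template) ** 2))

res = minimize_scalar(mse, bounds=(-5, 5), method='bounded')
```

When the template is a population average, a mismatched individual anatomy biases this optimum; generating an individually adaptive template, as the paper does, removes that bias.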
Affiliation(s)
- Seung Kwan Kang
- Department of Biomedical Sciences, Seoul National University, Seoul, Korea
- Department of Nuclear Medicine, Seoul National University, Seoul, Korea
- Seongho Seo
- Department of Neuroscience, College of Medicine, Gachon University, Incheon, Korea
- Seong A. Shin
- Department of Biomedical Sciences, Seoul National University, Seoul, Korea
- Department of Nuclear Medicine, Seoul National University Boramae Medical Center, Seoul, Korea
- Min Soo Byun
- Department of Neuropsychiatry, Seoul National University, Seoul, Korea
- Dong Young Lee
- Department of Neuropsychiatry, Seoul National University, Seoul, Korea
- Yu Kyeong Kim
- Department of Nuclear Medicine, Seoul National University Boramae Medical Center, Seoul, Korea
- Dong Soo Lee
- Department of Nuclear Medicine, Seoul National University, Seoul, Korea
- Department of Molecular Medicine and Biopharmaceutical Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Suwon, Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul, Korea
- Jae Sung Lee
- Department of Biomedical Sciences, Seoul National University, Seoul, Korea
- Department of Nuclear Medicine, Seoul National University, Seoul, Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul, Korea