1. Sun Y, Cheng Z, Qiu J, Lu W. Performance and application of the total-body PET/CT scanner: a literature review. EJNMMI Res 2024;14:38. PMID: 38607510; PMCID: PMC11014840; DOI: 10.1186/s13550-023-01059-1.
Abstract
BACKGROUND The total-body positron emission tomography/computed tomography (PET/CT) system, with its long axial field of view, represents the state of the art in PET imaging and has recently become commercially available. It enables high-resolution whole-body imaging even under extreme conditions such as ultra-low dose, very fast acquisition, delayed imaging more than 10 h after tracer injection, and total-body dynamic scanning. By providing a real-time picture of tracer distribution in all organs across the body, it not only helps to explain normal human physiological processes but also facilitates the comprehensive assessment of systemic diseases. In addition, the total-body PET/CT system may play critical roles in other medical fields, including cancer imaging, drug development and immunology. MAIN BODY It is therefore worthwhile to summarize the existing studies of total-body PET/CT systems and point out future directions. This review collected research literature from the PubMed database, from the advent of commercially available total-body PET/CT systems to the present, and is organized as follows: first, a brief introduction to the total-body PET/CT system; second, a summary of the literature on its performance evaluation; third, a discussion of its research and clinical applications; fourth, a review of deep learning studies based on total-body PET imaging; and finally, a discussion of the shortcomings of existing research and future directions for the total-body PET/CT. CONCLUSION Owing to its technical advantages, the total-body PET/CT system is bound to play a greater role in clinical practice in the future.
Affiliation(s)
- Yuanyuan Sun
- Department of Radiology, Shandong First Medical University & Shandong Academy of Medical Sciences, Taian, 271016, China
| | - Zhaoping Cheng
- Department of PET-CT, The First Affiliated Hospital of Shandong First Medical University, Shandong Provincial Qianfoshan Hospital Affiliated to Shandong University, Jinan, 250014, China
| | - Jianfeng Qiu
- Department of Radiology, Shandong First Medical University & Shandong Academy of Medical Sciences, Taian, 271016, China
| | - Weizhao Lu
- Department of Radiology, The Second Affiliated Hospital of Shandong First Medical University, No. 366 Taishan Street, Taian, 271000, China.
| |
2. Roberts EJ, Chavez T, Hexemer A, Zwart PH. DLSIA: Deep Learning for Scientific Image Analysis. J Appl Crystallogr 2024;57:392-402. PMID: 38596727; PMCID: PMC11001410; DOI: 10.1107/s1600576724001390.
Abstract
DLSIA (Deep Learning for Scientific Image Analysis) is a Python-based machine learning library that provides scientists and researchers across diverse scientific domains with a range of customizable convolutional neural network (CNN) architectures for image-analysis tasks in downstream data processing. DLSIA features easy-to-use architectures such as autoencoders, tunable U-Nets and parameter-lean mixed-scale dense networks (MSDNets). Additionally, this article introduces sparse mixed-scale networks (SMSNets), generated using random graphs, sparse connections and dilated convolutions that connect different length scales. For verification, several DLSIA-instantiated networks and training scripts are employed in multiple applications, including inpainting of X-ray scattering data using U-Nets and MSDNets, segmentation of 3D fibers in X-ray tomographic reconstructions of concrete using an ensemble of SMSNets, and use of autoencoder latent spaces for data compression and clustering. As experimental data continue to grow in scale and complexity, DLSIA provides accessible CNN construction and abstracts away CNN complexities, allowing scientists to tailor their machine learning approaches, accelerate discoveries, foster interdisciplinary collaboration and advance research in scientific image analysis.
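DLSIA's own API is not reproduced here, but the dilated-convolution building block that MSDNets and SMSNets rely on can be sketched in a few lines of NumPy; the function name and toy signal below are illustrative only:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Same'-size 1-D convolution with a dilated kernel.

    Dilation d spaces the kernel taps d samples apart, so a length-k
    kernel covers a receptive field of (k - 1) * d + 1 samples -- the
    mechanism mixed-scale dense networks use to mix length scales
    without pooling layers.
    """
    k = len(kernel)
    span = (k - 1) * dilation
    pad_left = span // 2
    xp = np.pad(x, (pad_left, span - pad_left))
    out = np.zeros(len(x))
    for i in range(len(x)):
        for j in range(k):
            out[i] += kernel[j] * xp[i + j * dilation]
    return out

signal = np.zeros(11)
signal[5] = 1.0                                      # unit impulse
kernel = np.array([1.0, 1.0, 1.0])
narrow = dilated_conv1d(signal, kernel, dilation=1)  # receptive field 3
wide = dilated_conv1d(signal, kernel, dilation=3)    # receptive field 7
print(np.nonzero(narrow)[0], np.nonzero(wide)[0])
```

Increasing the dilation spreads the same three taps over a wider neighbourhood without adding parameters, which is why stacking layers with varying dilations captures multiple length scales cheaply.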
Affiliation(s)
- Eric J. Roberts
- Center for Advanced Mathematics for Energy Research Applications, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Molecular Biophysics and Integrated Bioimaging Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Tanny Chavez
- Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Alexander Hexemer
- Center for Advanced Mathematics for Energy Research Applications, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Petrus H. Zwart
- Center for Advanced Mathematics for Energy Research Applications, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Molecular Biophysics and Integrated Bioimaging Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Berkeley Synchrotron Infrared Structural Biology Program, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
3. Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024;17:24-46. PMID: 38319563; PMCID: PMC10902118; DOI: 10.1007/s12194-024-00780-3.
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
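As a concrete anchor for the conventional iterative methods this review starts from, here is a minimal sketch of the classic ML-EM update (a standard textbook algorithm, not code from the paper); the 3x2 system matrix and counts are toy values:

```python
import numpy as np

# Minimal ML-EM sketch for the iterative-reconstruction family surveyed
# above. The 3x2 system matrix (3 lines of response, 2 pixels) and the
# counts are toy values, not real PET data; the update is the standard
# Poisson ML-EM step:  x <- x / (A^T 1) * A^T (y / (A x)).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x_true = np.array([4.0, 2.0])
y = A @ x_true                      # noiseless "measured" counts

x = np.ones(2)                      # uniform, strictly positive start
sens = A.T @ np.ones(len(y))        # sensitivity image A^T 1
for _ in range(200):
    ratio = y / (A @ x)             # measured / forward-projected
    x = x / sens * (A.T @ ratio)    # multiplicative EM update

print(np.round(x, 3))               # converges toward x_true = [4, 2]
```

The multiplicative form keeps the estimate non-negative at every iteration, which is one reason EM-type updates remain the backbone that the deep-learning-regularized methods in the third category build on.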
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan.
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan.
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan.
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
4. Artesani A, Bruno A, Gelardi F, Chiti A. Empowering PET: harnessing deep learning for improved clinical insight. Eur Radiol Exp 2024;8:17. PMID: 38321340; PMCID: PMC10847083; DOI: 10.1186/s41747-023-00413-1.
Abstract
This review takes a journey through the transformative impact of artificial intelligence (AI) on positron emission tomography (PET) imaging. To this end, it presents a broad overview of AI applications in nuclear medicine and a thorough exploration of deep learning (DL) implementations in cancer diagnosis and therapy through PET imaging. We first describe the behind-the-scenes use of AI for image generation, including acquisition (event positioning, noise reduction through time-of-flight estimation and scatter correction), reconstruction (data-driven and model-driven approaches), restoration (supervised and unsupervised methods), and motion correction. Thereafter, we outline the integration of AI into clinical practice through applications to segmentation, detection and classification, quantification, treatment planning, dosimetry, and radiomics/radiogenomics combined with tumour biological characteristics. The review thus seeks to showcase the overarching transformation of the field, ultimately leading to tangible improvements in patient treatment and response assessment. Finally, limitations and ethical considerations of applying AI to PET imaging and future directions of multimodal data mining in this discipline are briefly discussed, including pressing challenges to the adoption of AI in molecular imaging, such as access to, and interoperability of, huge amounts of data, as well as the "black-box" problem, contributing to the ongoing dialogue on the transformative potential of AI in nuclear medicine.
Relevance statement: AI is rapidly revolutionising the world of medicine, including the fields of radiology and nuclear medicine. In the near future, AI will be used to support healthcare professionals. These advances will lead to improvements in diagnosis, in the assessment of response to treatment, in clinical decision making and in patient management.
Key points:
• Applying AI has the potential to enhance the entire PET imaging pipeline.
• AI may support several clinical tasks in both PET diagnosis and prognosis.
• Interpreting the relationships between imaging and multiomics data will heavily rely on AI.
Affiliation(s)
- Alessia Artesani
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Milan, Pieve Emanuele, 20090, Italy
- Alessandro Bruno
- Department of Business, Law, Economics and Consumer Behaviour "Carlo A. Ricciardi", IULM Libera Università Di Lingue E Comunicazione, Via P. Filargo 38, Milan, 20143, Italy
- Fabrizia Gelardi
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Milan, Pieve Emanuele, 20090, Italy
- Vita-Salute San Raffaele University, Via Olgettina 58, Milan, 20132, Italy
- Arturo Chiti
- Vita-Salute San Raffaele University, Via Olgettina 58, Milan, 20132, Italy
- Department of Nuclear Medicine, IRCCS Ospedale San Raffaele, Via Olgettina 60, Milan, 20132, Italy
5. Usanase N, Uzun B, Ozsahin DU, Ozsahin I. A look at radiation detectors and their applications in medical imaging. Jpn J Radiol 2024;42:145-157. PMID: 37733205; DOI: 10.1007/s11604-023-01486-z.
Abstract
The effectiveness and precision of disease diagnosis and treatment have increased thanks to developments in clinical imaging over the past few decades. Imaging modalities are advancing steadily, with shorter scanning times and higher-resolution images yielding increasingly effective outcomes. The choice of one clinical device over another is influenced by technical differences among the equipment, such as detection medium, scan time, patient comfort, cost-effectiveness, accessibility, sensitivity and specificity, and spatial resolution. Lately, computational algorithms, artificial intelligence (AI) in particular, have been incorporated into diagnostic and treatment techniques, including imaging systems. AI is a discipline comprising multiple computational and mathematical models; its applications have aided the handling of sophisticated data in imaging workflows and increased the accuracy and precision of imaging tests during diagnosis. Computed tomography (CT), positron emission tomography (PET), and single-photon emission computed tomography (SPECT), along with their corresponding radiation detectors, are reviewed in this study. This review provides an in-depth explanation of these imaging modalities as well as the radiation detectors that are their essential components. From the early development of these medical instruments until now, various modifications and improvements have been made, and more remains to be established; capturing the available information and recording the gaps still to be filled is therefore necessary for future advances.
Affiliation(s)
- Natacha Usanase
- Operational Research Centre in Healthcare, Near East University, Mersin 10, Nicosia, Turkey.
- Berna Uzun
- Operational Research Centre in Healthcare, Near East University, Mersin 10, Nicosia, Turkey
- Department of Statistics, Carlos III Madrid University, Getafe, Madrid, Spain
- Dilber Uzun Ozsahin
- Operational Research Centre in Healthcare, Near East University, Mersin 10, Nicosia, Turkey
- Medical Diagnostic Imaging Department, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
- Ilker Ozsahin
- Operational Research Centre in Healthcare, Near East University, Mersin 10, Nicosia, Turkey
- Brain Health Imaging Institute, Department of Radiology, Weill Cornell Medicine, New York, NY, 10065, USA
6. Kobayashi T, Shigeki Y, Yamakawa Y, Tsutsumida Y, Mizuta T, Hanaoka K, Watanabe S, Morimoto-Ishikawa D, Yamada T, Kaida H, Ishii K. Generating PET Attenuation Maps via Sim2Real Deep Learning-Based Tissue Composition Estimation Combined with MLACF. J Imaging Inform Med 2024;37:167-179. PMID: 38343219; DOI: 10.1007/s10278-023-00902-0.
Abstract
Deep learning (DL) has recently attracted attention for data processing in positron emission tomography (PET). Attenuation correction (AC) without computed tomography (CT) data is one of the interests. Here, we present, to our knowledge, the first attempt to generate an attenuation map of the human head via Sim2Real DL-based tissue composition estimation from model training using only the simulated PET dataset. The DL model accepts a two-dimensional non-attenuation-corrected PET image as input and outputs a four-channel tissue-composition map of soft tissue, bone, cavity, and background. Then, an attenuation map is generated by a linear combination of the tissue composition maps and, finally, used as input for scatter+random estimation and as an initial estimate for attenuation map reconstruction by the maximum likelihood attenuation correction factor (MLACF), i.e., the DL estimate is refined by the MLACF. Preliminary results using clinical brain PET data showed that the proposed DL model tended to estimate anatomical details inaccurately, especially in the neck-side slices. However, it succeeded in estimating overall anatomical structures, and the PET quantitative accuracy with DL-based AC was comparable to that with CT-based AC. Thus, the proposed DL-based approach combined with the MLACF is also a promising CT-less AC approach.
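The linear-combination step described above can be sketched directly: given the DL model's four-channel tissue-composition output (soft tissue, bone, cavity, background), the attenuation map is a per-pixel weighted sum. The 511 keV attenuation coefficients below are illustrative round numbers in cm^-1, assumed for the demonstration, not values taken from the paper:

```python
import numpy as np

# Illustrative 511 keV linear attenuation coefficients (cm^-1); cavity and
# background contribute no attenuation. These are assumed round numbers.
MU_511 = {"soft": 0.096, "bone": 0.17, "cavity": 0.0, "background": 0.0}

def attenuation_map(fractions):
    """fractions: channel name -> 2-D array; fractions sum to 1 per pixel.

    Returns mu(x) = sum_c fraction_c(x) * mu_c.
    """
    shape = next(iter(fractions.values())).shape
    mu = np.zeros(shape)
    for name, frac in fractions.items():
        mu += frac * MU_511[name]
    return mu

# one pixel of pure soft tissue, one pixel half soft tissue / half bone
fractions = {
    "soft":       np.array([[1.0, 0.5]]),
    "bone":       np.array([[0.0, 0.5]]),
    "cavity":     np.array([[0.0, 0.0]]),
    "background": np.array([[0.0, 0.0]]),
}
mu = attenuation_map(fractions)
print(mu)
```

In the paper's pipeline this map is only an initial estimate: it seeds scatter+random estimation and is then refined by the MLACF reconstruction.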
Affiliation(s)
- Tetsuya Kobayashi
- Technology Research Laboratory, Shimadzu Corporation, 3-9-4, Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0237, Japan.
- Yui Shigeki
- Technology Research Laboratory, Shimadzu Corporation, 3-9-4, Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0237, Japan
- Yoshiyuki Yamakawa
- Medical Systems Division, Shimadzu Corporation, 1, Nishinokyo Kuwabara-cho, Nakagyo-ku, Kyoto, 604-8511, Japan
- Yumi Tsutsumida
- Medical Systems Division, Shimadzu Corporation, 1, Nishinokyo Kuwabara-cho, Nakagyo-ku, Kyoto, 604-8511, Japan
- Tetsuro Mizuta
- Medical Systems Division, Shimadzu Corporation, 1, Nishinokyo Kuwabara-cho, Nakagyo-ku, Kyoto, 604-8511, Japan
- Kohei Hanaoka
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, 377-2, Onohigashi, Osakasayama, Osaka, 589-8511, Japan
- Shota Watanabe
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, 377-2, Onohigashi, Osakasayama, Osaka, 589-8511, Japan
- Daisuke Morimoto-Ishikawa
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, 377-2, Onohigashi, Osakasayama, Osaka, 589-8511, Japan
- Takahiro Yamada
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, 377-2, Onohigashi, Osakasayama, Osaka, 589-8511, Japan
- Hayato Kaida
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, 377-2, Onohigashi, Osakasayama, Osaka, 589-8511, Japan
- Department of Radiology, Faculty of Medicine, Kindai University, 377-2, Onohigashi, Osakasayama, Osaka, 589-8511, Japan
- Kazunari Ishii
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, 377-2, Onohigashi, Osakasayama, Osaka, 589-8511, Japan
- Department of Radiology, Faculty of Medicine, Kindai University, 377-2, Onohigashi, Osakasayama, Osaka, 589-8511, Japan
7. Izadi S, Shiri I, Uribe CF, Geramifar P, Zaidi H, Rahmim A, Hamarneh G. Enhanced direct joint attenuation and scatter correction of whole-body PET images via context-aware deep networks. Z Med Phys 2024:S0939-3889(24)00002-3. PMID: 38302292; DOI: 10.1016/j.zemedi.2024.01.002.
Abstract
In positron emission tomography (PET), attenuation and scatter corrections are necessary steps toward accurate quantitative reconstruction of the radiopharmaceutical distribution. Inspired by recent advances in deep learning, many algorithms based on convolutional neural networks have been proposed for automatic attenuation and scatter correction, enabling applications to CT-less or MR-less PET scanners and improving performance in the presence of CT-related artifacts. A known characteristic of PET imaging is varying tracer uptake across patients and/or anatomical regions. However, existing deep learning-based algorithms apply a fixed model across different subjects and/or anatomical regions during inference, which can produce spurious outputs. In this work, we present a novel deep learning-based framework for the direct reconstruction of attenuation- and scatter-corrected PET from non-attenuation-corrected images, in the absence of structural information at inference. To deal with inter-subject and intra-subject uptake variations in PET imaging, we propose a novel model that performs subject- and region-specific filtering by modulating the convolution kernels in accordance with the contextual coherency of the neighboring slices. In this way, the context-aware convolution can guide the composition of intermediate features in favor of regressing input-conditioned and/or region-specific tracer uptakes. We also utilized a large cohort of 910 whole-body studies for training and evaluation, more than one order of magnitude larger than previous works. In our experimental studies, qualitative assessments showed that our proposed CT-free method is capable of producing corrected PET images that accurately resemble ground truth images corrected with the aid of CT scans. For quantitative assessment, we evaluated our proposed method over 112 held-out subjects and achieved an absolute relative error of 14.30±3.88% and a relative error of -2.11%±2.73% across the whole body.
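The two figures of merit quoted here, relative error (signed) and absolute relative error, can be computed as below. The formulas are the conventional definitions, assumed here rather than taken from the paper's code, and the uptake values are toy numbers:

```python
import numpy as np

# Relative error (RE%) and absolute relative error (ARE%) between a
# predicted attenuation-corrected PET image and a CT-based reference.
# Conventional definitions, assumed for illustration.
def relative_error_percent(pred, ref, eps=1e-8):
    return 100.0 * (pred - ref) / (ref + eps)

ref = np.array([10.0, 20.0, 40.0])     # toy reference uptake values
pred = np.array([9.0, 22.0, 40.0])     # toy predicted uptake values
re = relative_error_percent(pred, ref)
are = np.abs(re)
# mean RE is signed so over- and under-estimation can cancel;
# mean ARE cannot, which is why both are reported
print(round(re.mean(), 3), round(are.mean(), 3))
```

Reporting both numbers, as the abstract does, separates systematic bias (RE) from overall voxel-wise deviation (ARE).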
Affiliation(s)
- Saeed Izadi
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Canada
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Geneva, Switzerland; Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Carlos F Uribe
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, Canada; Department of Radiology, University of British Columbia, Vancouver, Canada; Molecular Imaging and Therapy, BC Cancer, Vancouver, BC, Canada
- Parham Geramifar
- Research Center for Nuclear Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark; University Research and Innovation Center, Óbuda University, Budapest, Hungary
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, Canada; Department of Radiology, University of British Columbia, Vancouver, Canada; Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada
- Ghassan Hamarneh
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Canada
8. Bousse A, Kandarpa VSS, Shi K, Gong K, Lee JS, Liu C, Visvikis D. A Review on Low-Dose Emission Tomography Post-Reconstruction Denoising with Neural Network Approaches. arXiv 2024; arXiv:2401.00232v2. PMID: 38313194; PMCID: PMC10836084.
Abstract
Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness in the photon counting process is a source of noise which is amplified in low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches. Furthermore, we explore future directions in the field of NN-based low-dose ET. This comprehensive examination sheds light on the potential of deep learning in enhancing the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.
Affiliation(s)
- Kuangyu Shi
- Lab for Artificial Intelligence & Translational Theranostics, Dept. Nuclear Medicine, Inselspital, University of Bern, 3010 Bern, Switzerland
- Kuang Gong
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Korea
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
9. Lee JS, Lee MS. Advancements in Positron Emission Tomography Detectors: From Silicon Photomultiplier Technology to Artificial Intelligence Applications. PET Clin 2024;19:1-24. PMID: 37802675; DOI: 10.1016/j.cpet.2023.06.003.
Abstract
This review article focuses on PET detector technology, the most crucial factor in determining PET image quality. It highlights the desired properties of PET detectors, including high detection efficiency, spatial resolution, energy resolution, and timing resolution, and discusses recent advancements aimed at improving these properties, including silicon photomultiplier technology, progress in depth-of-interaction and time-of-flight PET detectors, and the use of artificial intelligence for detector development. Together, these advancements can significantly enhance PET image quality.
Affiliation(s)
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, South Korea; Brightonix Imaging Inc., Seoul 04782, South Korea
- Min Sun Lee
- Environmental Radioactivity Assessment Team, Nuclear Emergency & Environmental Protection Division, Korea Atomic Energy Research Institute, Daejeon 34057, South Korea
10. Jimenez-Mesa C, Arco JE, Martinez-Murcia FJ, Suckling J, Ramirez J, Gorriz JM. Applications of machine learning and deep learning in SPECT and PET imaging: General overview, challenges and future prospects. Pharmacol Res 2023;197:106984. PMID: 37940064; DOI: 10.1016/j.phrs.2023.106984.
Abstract
The integration of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging techniques with machine learning (ML) algorithms, including deep learning (DL) models, is a promising approach. This integration enhances the precision and efficiency of current diagnostic and treatment strategies while offering invaluable insights into disease mechanisms. In this comprehensive review, we delve into the transformative impact of ML and DL in this domain. First, we briefly analyse how these algorithms have evolved and which are most widely applied in this domain. We then discuss their potential applications in nuclear imaging, such as optimization of image acquisition or reconstruction, biomarker identification, multimodal fusion, and the development of diagnostic, prognostic and disease-progression evaluation systems, all enabled by their ability to analyse complex patterns and relationships within imaging data and to extract quantitative, objective measures. Furthermore, we discuss challenges in implementation, such as data standardization and limited sample sizes, and explore clinical opportunities and future horizons, including data augmentation and explainable AI. Together, these factors are propelling the continuous advancement of more robust, transparent, and reliable systems.
Affiliation(s)
- Carmen Jimenez-Mesa
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan E Arco
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Communications Engineering, University of Malaga, 29010, Spain
- John Suckling
- Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK
- Javier Ramirez
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan Manuel Gorriz
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK
11. Cherry SR, Diekmann J, Bengel FM. Total-Body Positron Emission Tomography: Adding New Perspectives to Cardiovascular Research. JACC Cardiovasc Imaging 2023;16:1335-1347. PMID: 37676207; DOI: 10.1016/j.jcmg.2023.06.022.
Abstract
The recent advent of positron emission tomography (PET) scanners that can image the entire human body opens up intriguing possibilities for cardiovascular research and future clinical applications. These new systems permit radiotracer kinetics to be measured in all organs simultaneously. They are particularly well suited to study cardiovascular disease and its effects on the entire body. They could also play a role in quantitatively measuring physiologic, metabolic, and immunologic responses in healthy individuals to a variety of stressors and lifestyle interventions, and may ultimately be instrumental for evaluating novel therapeutic agents and their molecular effects across different tissues. In this review, we summarize recent progress in PET technology and methodology, discuss several emerging cardiovascular applications for total-body PET, and place this in the context of multiorgan and systems medicine. Finally, we discuss opportunities that will be enabled by the technology, while also pointing to some of the challenges that still need to be addressed.
Affiliation(s)
- Simon R Cherry
- Departments of Biomedical Engineering and Radiology, University of California, Davis, California, USA.
- Johanna Diekmann
- Departments of Biomedical Engineering and Radiology, University of California, Davis, California, USA; Department of Nuclear Medicine, Hannover Medical School, Hannover, Germany
- Frank M Bengel
- Department of Nuclear Medicine, Hannover Medical School, Hannover, Germany
12. Zhu W, Lee SJ. Similarity-Driven Fine-Tuning Methods for Regularization Parameter Optimization in PET Image Reconstruction. Sensors (Basel) 2023;23:5783. PMID: 37447633; DOI: 10.3390/s23135783.
Abstract
We present an adaptive method for fine-tuning hyperparameters in edge-preserving regularization for PET image reconstruction. For edge-preserving regularization, in addition to the smoothing parameter that balances data fidelity and regularization, one or more control parameters are typically incorporated to adjust the sensitivity of edge preservation by modifying the shape of the penalty function. Although there have been efforts to develop automated methods for tuning the hyperparameters in regularized PET reconstruction, the majority of these methods primarily focus on the smoothing parameter. However, it is challenging to obtain high-quality images without appropriately selecting the control parameters that adjust the edge preservation sensitivity. In this work, we propose a method to precisely tune the hyperparameters, which are initially set with a fixed value for the entire image, either manually or using an automated approach. Our core strategy involves adaptively adjusting the control parameter at each pixel, taking into account the degree of patch similarities calculated from the previous iteration within the pixel's neighborhood that is being updated. This approach allows our new method to integrate with a wide range of existing parameter-tuning techniques for edge-preserving regularization. Experimental results demonstrate that our proposed method effectively enhances the overall reconstruction accuracy across multiple image quality metrics, including peak signal-to-noise ratio, structural similarity, visual information fidelity, mean absolute error, root-mean-square error, and mean percentage error.
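The core strategy, turning neighbourhood patch similarities into a per-pixel control parameter so that flat regions are smoothed more strongly than edges, can be illustrated with a 1-D toy. This is not the authors' code; the exponential similarity-to-weight mapping and all constants are assumptions made for the sketch:

```python
import numpy as np

def patch_similarity_map(img, patch=1, h=0.1):
    """Per-pixel smoothing weight from patch similarity (1-D sketch).

    Each pixel's patch is compared with the two shifted neighbouring
    patches; flat regions give high similarity (weight near 1), edges
    give low similarity (weight near 0). The exp(-delta/h) mapping is
    an assumed choice, not the paper's.
    """
    n = len(img)
    m = patch + 1                        # pad enough for shifted patches
    padded = np.pad(img, m, mode="edge")
    delta = np.zeros(n)
    for i in range(n):
        c = i + m                        # centre index in padded array
        p0 = padded[c - patch:c + patch + 1]
        d = 0.0
        for off in (-1, 1):              # the two neighbouring patches
            p1 = padded[c + off - patch:c + off + patch + 1]
            d += np.mean((p0 - p1) ** 2)
        delta[i] = d / 2.0
    return np.exp(-delta / h)

img = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # step edge
w = patch_similarity_map(img)
print(np.round(w, 3))   # high in the flat halves, low at the edge
```

In the paper this kind of map modulates the control parameter of the edge-preserving penalty at each pixel, using patch similarities computed from the previous iteration's image.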
Affiliation(s)
- Wen Zhu
- Department of Electrical and Electronic Engineering, Pai Chai University, Daejeon 35345, Republic of Korea
- Soo-Jin Lee
- Department of Electrical and Electronic Engineering, Pai Chai University, Daejeon 35345, Republic of Korea

13
Yang C, Zannoni EM, Meng LJ. Joint estimation of interaction position and energy deposition in semiconductor SPECT imaging sensors using fully connected neural network. Phys Med Biol 2023; 68. [PMID: 36595331 PMCID: PMC10329845 DOI: 10.1088/1361-6560/aca740]
Abstract
Objective. Pixelated semiconductor detectors such as CdTe and CZT sensors suffer spatial-resolution and spectral-performance degradation induced by charge-sharing effects, so it is critical to recover the deposited energy and interaction position to restore detector performance. Approach. In this work, we propose a fully-connected-neural-network-based charge-sharing reconstruction algorithm that corrects the charge loss and estimates the sub-pixel position of every multi-pixel charge-sharing event. Main results. A clear energy-resolution improvement can be observed by comparing the spectra produced by a simple charge-sharing addition method and by the proposed energy-correction method. We also demonstrate that sub-pixel resolution can be achieved in projections obtained with a small pinhole collimator and an innovative micro-ring collimator. Significance. These achievements are crucial for multiple-tracer SPECT imaging applications and for other semiconductor-detector-based imaging modalities.
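The "simple charge-sharing addition" baseline that the abstract compares against could be sketched as follows; the function name and the (x, y, charge) event format are illustrative assumptions, and the paper's neural network replaces this with a learned mapping that also compensates the charge lost between pixels.

```python
import numpy as np

def charge_sharing_addition(pixel_signals):
    # Baseline "simple addition": sum the charges collected by all pixels of a
    # multi-pixel event to estimate the deposited energy, and take the
    # charge-weighted centroid as a crude interaction-position estimate.
    signals = np.asarray(pixel_signals, dtype=float)  # shape (n_pixels, 3): x, y, charge
    charge = signals[:, 2]
    energy = charge.sum()
    centroid = (signals[:, :2] * charge[:, None]).sum(axis=0) / energy
    return energy, centroid
```

For example, an event split 60/40 between two adjacent pixels yields their summed charge as the energy and a centroid pulled toward the pixel that collected more charge.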
Affiliation(s)
- Can Yang
- Department of Nuclear, Plasma, and Radiological Engineering, University of Illinois at Urbana-Champaign, Urbana, United States of America
- Elena Maria Zannoni
- Department of Nuclear, Plasma, and Radiological Engineering, University of Illinois at Urbana-Champaign, Urbana, United States of America
- Ling-Jian Meng
- Department of Nuclear, Plasma, and Radiological Engineering, University of Illinois at Urbana-Champaign, Urbana, United States of America
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, United States of America
- Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, United States of America

14
Li S, Gong K, Badawi RD, Kim EJ, Qi J, Wang G. Neural KEM: A Kernel Method With Deep Coefficient Prior for PET Image Reconstruction. IEEE Trans Med Imaging 2023; 42:785-796. [PMID: 36288234 PMCID: PMC10081957 DOI: 10.1109/tmi.2022.3217543]
Abstract
Image reconstruction of low-count positron emission tomography (PET) data is challenging. Kernel methods address the challenge by incorporating image prior information into the forward model of iterative PET image reconstruction. The kernelized expectation-maximization (KEM) algorithm has been developed and demonstrated to be effective and easy to implement. A common approach to further improving the kernel method is to add an explicit regularization, which however leads to a complex optimization problem. In this paper, we propose an implicit regularization for the kernel method by using a deep coefficient prior, which represents the kernel coefficient image in the PET forward model using a convolutional neural network. To solve the maximum-likelihood, neural-network-based reconstruction problem, we apply the principle of optimization transfer to derive a neural KEM algorithm. Each iteration of the algorithm consists of two separate steps: a KEM step that updates the image from the projection data, and a deep-learning step in the image domain that updates the kernel coefficient image using the neural network. This optimization algorithm is guaranteed to monotonically increase the data likelihood. Results from computer simulations and real patient data demonstrate that the neural KEM can outperform existing KEM and deep-image-prior methods.
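The KEM step of the alternating algorithm, a multiplicative EM update of the kernel coefficient image with the forward model x = Kα folded in, can be sketched on a toy, noise-free system. The matrix sizes, the random system matrix `P`, and the simple kernel `K` are illustrative assumptions (a real kernel matrix would be built from prior-image patch similarities), and the deep-learning step of neural KEM is only indicated by a comment.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_bins = 16, 32

P = rng.random((n_bins, n_pix))        # toy system (projection) matrix
K = 0.8 * np.eye(n_pix) + 0.2 / n_pix  # toy kernel matrix (rows sum to 1)
a_true = rng.random(n_pix) + 0.5
y = P @ (K @ a_true)                   # noise-free toy projection data

a = np.ones(n_pix)                     # kernel coefficient image
sens = K.T @ (P.T @ np.ones(n_bins))   # sensitivity (normalisation) term
for _ in range(2000):
    # KEM step: multiplicative EM update of a from the projection data
    ratio = y / (P @ (K @ a) + 1e-12)
    a *= (K.T @ (P.T @ ratio)) / sens
    # Neural KEM would follow with an image-domain deep-learning step:
    # refit a CNN to the updated a and replace a by the network output.

x = K @ a                              # reconstructed activity image
```

On consistent data the EM update keeps the coefficients positive and drives the projection residual down monotonically in likelihood, which is the monotonicity property the paper proves for the full neural KEM.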
15
Daube-Witherspoon ME, Pantel AR, Pryma DA, Karp JS. Total-body PET: a new paradigm for molecular imaging. Br J Radiol 2022; 95:20220357. [PMID: 35993615 PMCID: PMC9733603 DOI: 10.1259/bjr.20220357]
Abstract
Total-body (TB) positron emission tomography (PET) instruments have dramatically changed the paradigm of PET clinical and research studies owing to their very high sensitivity and their capability to image dynamic radiopharmaceutical distributions in the major organs of the body simultaneously. In this manuscript, we review the design of these systems and discuss general challenges and trade-offs in maximizing the performance gains of current TB-PET systems. We then describe new concepts and technology that may impact future TB-PET systems. The manuscript summarizes what has been learned from the initial sites with TB-PET and explores potential research and clinical applications of TB-PET. The current generation of TB-PET systems ranges in axial field of view (AFOV) from 1 to 2 m and serves to illustrate the benefits and opportunities of a longer AFOV for various applications in PET. In only a few years of use, these new TB-PET systems have shown that they will play an important role in expanding the field of molecular imaging and benefiting clinical practice.
Affiliation(s)
- Austin R Pantel
- Department of Radiology, University of Pennsylvania, Philadelphia, United States
- Daniel A Pryma
- Department of Radiology, University of Pennsylvania, Philadelphia, United States
- Joel S Karp
- Department of Radiology, University of Pennsylvania, Philadelphia, United States

16
Leal JP, Rowe SP, Stearns V, Connolly RM, Vaklavas C, Liu MC, Storniolo AM, Wahl RL, Pomper MG, Solnes LB. Automated lesion detection of breast cancer in [18F]FDG PET/CT using a novel AI-based workflow. Front Oncol 2022; 12:1007874. [PMID: 36457510 PMCID: PMC9705734 DOI: 10.3389/fonc.2022.1007874]
Abstract
UNLABELLED Applications based on artificial intelligence (AI) and deep learning (DL) are rapidly being developed to assist in the detection and characterization of lesions on medical images. In this study, we developed and examined an image-processing workflow that incorporates both traditional image processing and AI technology, and utilizes a standards-based approach for disease identification and quantitation, to segment and classify tissue within a whole-body [18F]FDG PET/CT study. METHODS One hundred thirty baseline PET/CT studies from two multi-institutional preoperative clinical trials in early-stage breast cancer were semi-automatically segmented using techniques based on PERCIST v1.0 thresholds, and the individual segmentations were classified as to tissue type by an experienced nuclear medicine physician. These classifications were then used to train a convolutional neural network (CNN) to accomplish the same tasks automatically. RESULTS Our CNN-based workflow demonstrated a sensitivity for detecting disease (either primary lesion or lymphadenopathy) of 0.96 (95% CI [0.90, 1.00], 99% CI [0.87, 1.00]), a specificity of 1.00 (95% CI [1.00, 1.00], 99% CI [1.00, 1.00]), a Dice score of 0.94 (95% CI [0.89, 0.99], 99% CI [0.86, 1.00]), and a Jaccard score of 0.89 (95% CI [0.80, 0.98], 99% CI [0.74, 1.00]). CONCLUSION This pilot work has demonstrated the ability of an AI-based workflow using DL-CNNs to specifically identify breast cancer tissue, as determined by [18F]FDG avidity, in a PET/CT study. The high sensitivity and specificity of the network support the idea that AI can be trained to recognize specific tissue signatures, both normal and diseased, in molecular imaging studies using radiopharmaceuticals. Future work will explore the applicability of these techniques to other disease types and alternative radiotracers, as well as the accuracy of fully automated, quantitative detection and response assessment.
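As a rough illustration of the standards-based segmentation step, a PERCIST v1.0-style minimum-uptake threshold can be derived from a liver reference region and applied to an SUV image. The function names are hypothetical, and this plain voxel cut simplifies the study's semi-automatic segmentation around such thresholds.

```python
import numpy as np

def percist_threshold(liver_suv_mean, liver_suv_sd):
    # PERCIST v1.0 minimum measurable tumour uptake:
    # 1.5 * liver SUVmean + 2 * SD of the liver reference region.
    return 1.5 * liver_suv_mean + 2.0 * liver_suv_sd

def candidate_lesion_mask(suv_image, liver_suv_mean, liver_suv_sd):
    # Simplified voxel-wise cut; the study grew regions semi-automatically
    # from voxels exceeding this threshold before classifying them.
    return suv_image > percist_threshold(liver_suv_mean, liver_suv_sd)
```

With a liver SUVmean of 2.0 and SD of 0.3, for example, the threshold works out to 3.6, and only voxels above it become segmentation candidates.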
Affiliation(s)
- Jeffrey P. Leal
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Steven P. Rowe
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Department of Oncology, Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Vered Stearns
- Department of Oncology, Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Roisin M. Connolly
- Cancer Research @ UCC, College of Medicine and Health, University College Cork, Cork, Ireland
- Christos Vaklavas
- Huntsville Cancer Institute, University of Alabama, Birmingham, AL, United States
- Minetta C. Liu
- Division of Medical Oncology, Mayo Clinic, Rochester, MN, United States
- Anna Maria Storniolo
- Melvin and Bren Simon Cancer Center, Indiana University, Indianapolis, IN, United States
- Richard L. Wahl
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, United States
- Martin G. Pomper
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Department of Oncology, Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Lilja B. Solnes
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Department of Oncology, Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, MD, United States

17
Mehranian A, Wollenweber SD, Walker MD, Bradley KM, Fielding PA, Huellner M, Kotasidis F, Su KH, Johnsen R, Jansen FP, McGowan DR. Deep learning-based time-of-flight (ToF) image enhancement of non-ToF PET scans. Eur J Nucl Med Mol Imaging 2022; 49:3740-3749. [PMID: 35507059 PMCID: PMC9399038 DOI: 10.1007/s00259-022-05824-7]
Abstract
PURPOSE To improve the quantitative accuracy and diagnostic confidence of PET images reconstructed without time-of-flight (ToF), using deep learning models trained for ToF image enhancement (DL-ToF). METHODS A total of 273 [18F]-FDG PET scans were used, including data from 6 centres equipped with GE Discovery MI ToF scanners. PET data were reconstructed using the block-sequential-regularised-expectation-maximisation (BSREM) algorithm with and without ToF. The images were then split into training (n = 208), validation (n = 15), and testing (n = 50) sets. Three DL-ToF models were trained to transform non-ToF BSREM images into their target ToF images with different levels of DL-ToF strength (low, medium, high). The models were objectively evaluated on the testing set based on standardised uptake value (SUV) in 139 identified lesions and in normal regions of liver and lungs. Three radiologists subjectively rated the models on the testing set based on lesion detectability, diagnostic confidence, and image noise/quality. RESULTS The non-ToF, DL-ToF low, medium, and high methods resulted in −28 ± 18%, −28 ± 19%, −8 ± 22%, and 1.7 ± 24% differences (mean ± SD) in lesion SUVmax in the testing set, compared to the ToF-BSREM images. In background lung VOIs, the SUVmean differences were 7 ± 15%, 0.6 ± 12%, 1 ± 13%, and 1 ± 11%, respectively. In normal liver, SUVmean differences were 4 ± 5%, 0.7 ± 4%, 0.8 ± 4%, and 0.1 ± 4%. Visual inspection showed that DL-ToF improved feature sharpness and convergence towards the ToF reconstruction. Blinded clinical readings of the testing set for diagnostic confidence (scale 0-5) showed that non-ToF, DL-ToF low, medium, and high, and ToF images scored 3.0, 3.0, 4.1, 3.8, and 3.5, respectively; DL-ToF medium therefore scored highest for diagnostic confidence. CONCLUSION Deep learning-based image enhancement models may provide converged ToF-equivalent image quality without ToF reconstruction. In clinical scoring, DL-ToF-enhanced non-ToF images (medium and high) on average scored as high as, or higher than, ToF images. The model is generalisable and could hence be applied to non-ToF images from BGO-based PET/CT scanners.
Affiliation(s)
- Matthew D Walker
- Department of Medical Physics and Clinical Engineering, Oxford University Hospitals NHS FT, Oxford, UK
- Kevin M Bradley
- Wales Research and Diagnostic PET Imaging Centre, University Hospital of Wales, Cardiff, UK
- Daniel R McGowan
- Department of Medical Physics and Clinical Engineering, Oxford University Hospitals NHS FT, Oxford, UK
- Department of Oncology, University of Oxford, Oxford, UK

18
Xie E, Sung E, Saad E, Trayanova N, Wu KC, Chrispin J. Advanced imaging for risk stratification for ventricular arrhythmias and sudden cardiac death. Front Cardiovasc Med 2022; 9:884767. [PMID: 36072882 PMCID: PMC9441865 DOI: 10.3389/fcvm.2022.884767]
Abstract
Sudden cardiac death (SCD) is a leading cause of mortality, comprising approximately half of all deaths from cardiovascular disease. In the US, the majority of SCD (85%) occurs in patients with ischemic cardiomyopathy (ICM) and a subset in patients with non-ischemic cardiomyopathy (NICM), who tend to be younger and whose risk of mortality is less clearly delineated than in ischemic cardiomyopathies. The conventional means of SCD risk stratification has been the determination of the ejection fraction (EF), typically via echocardiography, which is currently a means of determining candidacy for primary prevention in the form of implantable cardiac defibrillators (ICDs). Advanced cardiac imaging methods such as cardiac magnetic resonance imaging (CMR), single-photon emission computerized tomography (SPECT) and positron emission tomography (PET), and computed tomography (CT) have emerged as promising and non-invasive means of risk stratification for sudden death through their characterization of the underlying myocardial substrate that predisposes to SCD. Late gadolinium enhancement (LGE) on CMR detects myocardial scar, which can inform ICD decision-making. Overall scar burden, region-specific scar burden, and scar heterogeneity have all been studied in risk stratification. PET and SPECT are nuclear methods that determine myocardial viability and innervation, as well as inflammation. CT can be used for assessment of myocardial fat and its association with reentrant circuits. Emerging methodologies include the development of "virtual hearts" using complex electrophysiologic modeling derived from CMR to attempt to predict arrhythmic susceptibility. Recent developments have paired novel machine learning (ML) algorithms with established imaging techniques to improve predictive performance. The use of advanced imaging to augment risk stratification for sudden death is increasingly well-established and may soon have an expanded role in clinical decision-making. 
ML could help shift this paradigm further by advancing variable discovery and data analysis.
Affiliation(s)
- Eric Xie
- Division of Cardiology, Department of Medicine, Section of Cardiac Electrophysiology, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Eric Sung
- Division of Cardiology, Department of Medicine, Section of Cardiac Electrophysiology, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Elie Saad
- Division of Cardiology, Department of Medicine, Section of Cardiac Electrophysiology, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Natalia Trayanova
- Division of Cardiology, Department of Medicine, Section of Cardiac Electrophysiology, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Katherine C. Wu
- Division of Cardiology, Department of Medicine, Section of Cardiac Electrophysiology, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Jonathan Chrispin
- Division of Cardiology, Department of Medicine, Section of Cardiac Electrophysiology, Johns Hopkins University School of Medicine, Baltimore, MD, United States

19
Sanaat A, Jamalizadeh M, Khanmohammadi H, Arabi H, Zaidi H. Active-PET: a multifunctional PET scanner with dynamic gantry size featuring high-resolution and high-sensitivity imaging: a Monte Carlo simulation study. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac7fd8]
Abstract
Organ-specific PET scanners have been developed to provide both high spatial resolution and sensitivity, although the deployment of several dedicated PET scanners at the same center is costly and space-consuming. Active-PET is a multifunctional PET scanner design exploiting the advantages of two different types of detector modules and mechanical arm mechanisms that reposition the detectors to implement different geometries/configurations. Active-PET can be used for different applications, including brain, axilla, breast, prostate, whole-body, preclinical, and pediatric imaging, cell tracking, and image guidance for therapy. Monte Carlo techniques were used to simulate a PET scanner with two sets of high-resolution and high-sensitivity pixelated lutetium oxyorthosilicate (LSO(Ce)) detector blocks (24 for each group, 48 detector modules in total for each ring), one with large pixel size (4 × 4 mm²) and crystal thickness (20 mm), and another with small pixel size (2 × 2 mm²) and thickness (10 mm). Each row of detector modules is connected to a linear motor that can displace the detectors forward and backward along the radial axis to achieve a variable gantry diameter, in order to image the target subject at the optimal/desired resolution and/or sensitivity. At the center of the field of view, the highest sensitivity (15.98 kcps MBq−1) was achieved by the scanner with a small gantry and high-sensitivity detectors, while the best spatial resolution was obtained by the scanner with a small gantry and high-resolution detectors (2.2 mm, 2.3 mm, and 2.5 mm FWHM in the tangential, radial, and axial directions, respectively). The large-bore configuration (combining high-resolution and high-sensitivity detectors) achieved better performance and provided higher image quality than the Biograph mCT, as reflected by the 3D Hoffman brain phantom simulation study. We introduce the concept of a non-static PET scanner capable of switching between large and small fields of view as well as between high-resolution and high-sensitivity imaging.
20
De Luca GMR, Habraken JBA. Method to determine the statistical technical variability of SUV metrics. EJNMMI Phys 2022; 9:40. [PMID: 35666316 PMCID: PMC9170854 DOI: 10.1186/s40658-022-00470-2]
Abstract
BACKGROUND The standardized uptake value (SUV) metrics SUVMax, SUVMean, and SUVPeak are used to quantify positron emission tomography (PET) images. In order to assess the significance of a change in these metrics for diagnostic purposes, it is relevant to know their variation. The sources of variation can be biological or technical. In this study, we present a method to determine the statistical technical variation of SUV in PET images. RESULTS The method was tested on a NEMA quality phantom with spheres of various diameters, with a full-length acquisition time of 150 s per bed position and a foreground-to-background activity ratio of F18-2-fluoro-2-deoxy-D-glucose (FDG) of 10:1. Our method divides the 150 s acquisition into subsets of statistically independent frames of shorter reconstruction length. SUVMax, SUVMean, and SUVPeak were calculated for each reconstructed image in a subset. The coefficient of variation of SUV within each subset was used to estimate the expected coefficient of variation at the 150 s reconstruction length. We report the largest coefficient of variation of the SUV metrics for the smallest sphere and the smallest variation for the largest sphere. The expected variation at 150 s reconstruction length does not exceed 6% for the smallest sphere and 2% for the largest sphere. CONCLUSIONS With the presented method, we aim to determine the statistical technical variation of SUV. The method enables evaluation of the effect of SUV metric choice (Max, Mean, Peak) and lesion size on the technical variation and, therefore, of its relevance to the total variation of the SUV value between clinical studies.
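The scaling step described here, estimating the expected coefficient of variation (CoV) at the full 150 s reconstruction length from the spread over statistically independent sub-frames, can be sketched as follows. The 1/sqrt(n) scaling is a simplifying counting-statistics assumption, the function name is illustrative, and the SUV values are toy numbers, not data from the paper.

```python
import numpy as np

def expected_cov_full_length(suv_subframes):
    # suv_subframes: one SUV metric (Max, Mean or Peak) measured on each of n
    # statistically independent short reconstructions of the same acquisition.
    suv = np.asarray(suv_subframes, dtype=float)
    n = suv.size
    cov_sub = suv.std(ddof=1) / suv.mean()  # CoV at the sub-frame length
    # Counting statistics: combining n independent frames back into the full
    # duration scales the statistical CoV by 1/sqrt(n).
    return cov_sub / np.sqrt(n)

# Five independent 30 s sub-frames of a 150 s acquisition (toy SUVmax values).
cov = expected_cov_full_length([10.0, 10.5, 9.5, 10.2, 9.8])
```

For these toy values the sub-frame CoV of about 3.8% scales down to roughly 1.7% at the full length, illustrating why the technical variation at 150 s is much smaller than the spread seen across short frames.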
Affiliation(s)
- Giulia M R De Luca
- Department of Medical Physics, St. Antonius Hospital, Nieuwegein, The Netherlands
- Jan B A Habraken
- Department of Medical Physics, St. Antonius Hospital, Nieuwegein, The Netherlands

21
Dal Toso L, Chalampalakis Z, Buvat I, Comtat C, Cook G, Goh V, Schnabel JA, Marsden PK. Improved 3D tumour definition and quantification of uptake in simulated lung tumours using deep learning. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac65d6]
Abstract
Objective. In clinical positron emission tomography (PET) imaging, quantification of radiotracer uptake in tumours is often performed using semi-quantitative measurements such as the standardised uptake value (SUV). For small objects, the accuracy of SUV estimates is limited by the noise properties of PET images and the partial volume effect. There is a need for methods that provide more accurate and reproducible quantification of radiotracer uptake. Approach. In this work, we present a deep learning approach aimed at improving the quantification of lung tumour radiotracer uptake and tumour shape definition. A set of simulated tumours, assigned 'ground truth' radiotracer distributions, is used to generate realistic PET raw data, which are then reconstructed into PET images. The ground truth images are generated by placing simulated tumours characterised by different sizes and activity distributions in the left lung of an anthropomorphic phantom. These images are then used as input to an analytical simulator to simulate realistic raw PET data. The PET images reconstructed from the simulated raw data and the corresponding ground truth images are used to train a 3D convolutional neural network. Results. When tested on an unseen set of reconstructed PET phantom images, the network yields improved estimates of the corresponding ground truth. The same network is then applied to reconstructed PET data generated with different point spread functions. Overall, the network is able to recover better-defined tumour shapes and improved estimates of tumour maximum and median activities. Significance. Our results suggest that the proposed approach, trained on data simulated with one scanner geometry, has the potential to restore PET data acquired with different scanners.
22
Bradshaw TJ, Boellaard R, Dutta J, Jha AK, Jacobs P, Li Q, Liu C, Sitek A, Saboury B, Scott PJH, Slomka PJ, Sunderland JJ, Wahl RL, Yousefirizi F, Zuehlsdorff S, Rahmim A, Buvat I. Nuclear Medicine and Artificial Intelligence: Best Practices for Algorithm Development. J Nucl Med 2022; 63:500-510. [PMID: 34740952 PMCID: PMC10949110 DOI: 10.2967/jnumed.121.262567]
Abstract
The nuclear medicine field has seen a rapid expansion of academic and commercial interest in developing artificial intelligence (AI) algorithms. Users and developers can avoid some of the pitfalls of AI by recognizing and following best practices in AI algorithm development. In this article, recommendations on technical best practices for developing AI algorithms in nuclear medicine are provided, beginning with general recommendations and then continuing with descriptions of how one might practice these principles for specific topics within nuclear medicine. This report was produced by the AI Task Force of the Society of Nuclear Medicine and Molecular Imaging.
Affiliation(s)
- Tyler J Bradshaw
- Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin
- Ronald Boellaard
- Department of Radiology and Nuclear Medicine, Cancer Centre Amsterdam, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Joyita Dutta
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, Massachusetts
- Abhinav K Jha
- Department of Biomedical Engineering and Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri
- Quanzheng Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Peter J H Scott
- Department of Radiology, University of Michigan Medical School, Ann Arbor, Michigan
- Piotr J Slomka
- Department of Imaging, Medicine, and Cardiology, Cedars-Sinai Medical Center, Los Angeles, California
- John J Sunderland
- Departments of Radiology and Physics, University of Iowa, Iowa City, Iowa
- Richard L Wahl
- Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada
- Arman Rahmim
- Departments of Radiology and Physics, University of British Columbia, Vancouver, British Columbia, Canada
- Irène Buvat
- Institut Curie, Université PSL, INSERM, Université Paris-Saclay, Orsay, France

23
Onishi Y, Hashimoto F, Ote K, Ota R. Unbiased TOF estimation using leading-edge discriminator and convolutional neural network trained by single-source-position waveforms. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac508f]
Abstract
Objective. Convolutional neural networks (CNNs) are a strong tool for improving the coincidence time resolution (CTR) of time-of-flight (TOF) positron emission tomography detectors. However, many signal waveforms from multiple source positions are required for CNN training. Furthermore, there is concern that TOF estimation is biased near the edge of the training space, despite the reduced estimation variance (i.e. timing uncertainty). Approach. We propose a simple method for unbiased TOF estimation by combining a conventional leading-edge discriminator (LED) with a CNN that can be trained on waveforms collected from a single source position. The proposed method estimates and corrects the error of the time difference calculated by the LED, rather than the absolute time difference. This model can eliminate the TOF estimation bias, as the combination with the LED converts the distribution of the label data from discrete values at each position into a continuous symmetric distribution. Main results. Evaluation on signal waveforms collected from scintillation detectors shows that the proposed method can correctly estimate all source positions, without bias, from a single training position. Moreover, the proposed method improves the CTR of the conventional LED. Significance. We believe that the improved CTR will not only increase the signal-to-noise ratio but also contribute significantly to direct positron emission imaging.
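The conventional leading-edge discriminator that the CNN correction builds on can be sketched as a threshold crossing with linear interpolation. The CNN that estimates the residual error of the LED time difference is only indicated by a comment, and the function names, threshold, and ramp waveforms are illustrative assumptions.

```python
import numpy as np

def leading_edge_time(t, waveform, threshold):
    # Time of the first threshold crossing, linearly interpolated between
    # the two samples bracketing the crossing.
    above = waveform >= threshold
    if not above.any():
        return np.nan
    i = int(np.argmax(above))
    if i == 0:
        return t[0]
    frac = (threshold - waveform[i - 1]) / (waveform[i] - waveform[i - 1])
    return t[i - 1] + frac * (t[i] - t[i - 1])

# Toy linear-ramp pulses from two detectors, 0.1 ns sampling.
t = np.arange(0.0, 10.0, 0.1)
w1 = np.clip(t - 2.0, 0.0, None)  # pulse starting at 2.0 ns
w2 = np.clip(t - 2.3, 0.0, None)  # pulse starting at 2.3 ns
tof_led = leading_edge_time(t, w1, 0.5) - leading_edge_time(t, w2, 0.5)
# The proposed method would then add a CNN-predicted correction of the
# *error* of tof_led, rather than regressing the absolute time difference.
```

Correcting the LED's residual error instead of the absolute time difference is what makes the label distribution continuous and symmetric around zero, which is the mechanism the abstract credits for removing the bias.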
24
Mehranian A, Wollenweber SD, Walker MD, Bradley KM, Fielding PA, Su KH, Johnsen R, Kotasidis F, Jansen FP, McGowan DR. Image enhancement of whole-body oncology [18F]-FDG PET scans using deep neural networks to reduce noise. Eur J Nucl Med Mol Imaging 2022; 49:539-549. [PMID: 34318350 PMCID: PMC8803788 DOI: 10.1007/s00259-021-05478-x]
Abstract
PURPOSE To enhance the image quality of oncology [18F]-FDG PET scans acquired in shorter times and reconstructed by faster algorithms, using deep neural networks. METHODS List-mode data from 277 [18F]-FDG PET/CT scans, from six centres using GE Discovery PET/CT scanners, were split into ¾-, ½- and ¼-duration scans. Full-duration datasets were reconstructed using the convergent block sequential regularised expectation maximisation (BSREM) algorithm. Short-duration datasets were reconstructed with the faster OSEM algorithm. The 277 examinations were divided into training (n = 237), validation (n = 15) and testing (n = 25) sets. Three deep learning enhancement (DLE) models were trained to map full- and partial-duration OSEM images to their target full-duration BSREM images. In addition to standardised uptake value (SUV) evaluations in lesions, liver and lungs, two experienced radiologists scored the quality of the testing-set images and the BSREM images in a blinded clinical reading (175 series). RESULTS OSEM reconstructions demonstrated up to a 22% difference in lesion SUVmax across scan durations compared to full-duration BSREM. Application of the DLE models reduced this difference significantly for full-, ¾- and ½-duration scans, while simultaneously reducing the noise in the liver. The clinical reading showed that the standard DLE model with full- or ¾-duration scans provided image quality substantially comparable to full-duration scans with BSREM reconstruction, yet in a shorter reconstruction time. CONCLUSION Deep learning-based image enhancement models may allow a reduction in scan time (or injected activity) of up to 50%, and can decrease reconstruction time to a third, while maintaining image quality.
Affiliation(s)
- Kevin M Bradley
- Wales Research and Diagnostic PET Imaging Centre, University Hospital of Wales, Cardiff, UK
- Daniel R McGowan
- Oxford University Hospitals NHS FT, Oxford, UK.
- Department of Oncology, University of Oxford, Oxford, UK.
25
Abstract
Medical imaging is considered one of the most important advances in the history of medicine and has become an essential part of the diagnosis and treatment of patients. Earlier prediction and treatment have been driving the acquisition of higher image resolutions as well as the fusion of different modalities, raising the need for sophisticated hardware and software systems for medical image registration, storage, analysis, and processing. In this scenario, and given the new clinical pipelines and the huge clinical burden of hospitals, these systems are often required to provide both highly accurate and real-time processing of large amounts of imaging data. Additionally, lowering the price of each component of imaging equipment, as well as of its development and implementation, and extending its lifespan are crucial to minimizing cost and making healthcare more accessible. This paper focuses on the evolution and application of different hardware architectures (namely, CPU, GPU, DSP, FPGA, and ASIC) in medical imaging through specific examples, discussing the options available depending on the application. The main purpose is to provide a general introduction to hardware acceleration techniques for medical imaging researchers and developers who need to accelerate their implementations.
26
Abstract
Artificial intelligence (AI) has been widely used throughout medical imaging, including PET, for data correction, image reconstruction, and image processing tasks. However, there are a number of opportunities for the application of AI in photon detector performance or the data collection process, such as to improve detector spatial resolution, time-of-flight information, or other PET system performance characteristics. This review outlines current topics, research highlights, and future directions of AI in PET instrumentation.
Affiliation(s)
- Craig S Levin
- Department of Radiology, Stanford University, Stanford, CA 94305, USA; Department of Bioengineering, Stanford University, Stanford, CA 94305, USA; Department of Physics, Stanford University, Stanford, CA 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA.
27
Shida JF, Spieglan E, Adams BW, Angelico E, Domurat-Sousa K, Elagin A, Frisch HJ, La Riviere P, Squires AH. Low-Dose High-Resolution TOF-PET Using Ionization-activated Multi-State Low-Z Detector Media. Nucl Instrum Methods Phys Res A 2021; 1017:165801. [PMID: 34690392 PMCID: PMC8530277 DOI: 10.1016/j.nima.2021.165801]
Abstract
We propose PET scanners using low atomic number media that undergo a persistent local change of state along the paths of the Compton recoil electrons. Measurement of the individual scattering locations and angles, deposited energies, and recoil electron directions allows using the kinematical constraints of the 2-body Compton scattering process to perform a statistical time-ordering of the scatterings, with a high probability of precisely identifying where the gamma first interacted in the detector. In these cases the Line-of-Response is measured with high resolution, determined by the underlying physics processes and not the detector segmentation. There are multiple such media that act through different mechanisms. As an example in which the change of state is quantum-mechanical through a change in molecular configuration, rather than thermodynamic, as in a bubble chamber, we present simulations of a two-state photoswitchable organic dye, a 'Switchillator', that is activated to a fluorescent-capable state by the ionization of the recoil electrons. The activated state is persistent, and can be optically excited multiple times to image individual activated molecules. Energy resolution is provided by counting the activated molecules. Location along the LOR is implemented by large-area time-of-flight MCP-PMT photodetectors with single photon time resolution in the tens of ps and sub-mm spatial resolution. Simulations indicate a large reduction of dose.
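The statistical time-ordering described here rests on the two-body Compton relation between the energy deposited on a recoil electron and the photon's scattering angle. A minimal numeric sketch of that kinematic constraint for 511 keV annihilation photons (the function name is illustrative; this is not the authors' reconstruction code):

```python
ME_C2 = 511.0  # electron rest energy in keV; equal to the annihilation photon energy

def compton_cos_theta(e_deposited_kev, e0_kev=511.0):
    """Scattering-angle cosine implied by the energy left on the Compton recoil electron."""
    e_scattered = e0_kev - e_deposited_kev       # energy of the scattered photon
    return 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e0_kev)

# Limiting cases: zero deposit means forward scatter; a Compton-edge deposit
# (2/3 of 511 keV when the incident energy equals the electron rest energy)
# means the photon was turned around.
edge = 2.0 * ME_C2 / 3.0
cos_forward = compton_cos_theta(0.0)             # cos(theta) = +1, undeflected
cos_backscatter = compton_cos_theta(edge)        # cos(theta) = -1, 180-degree scatter
```

Checking measured scattering locations and deposited energies against this relation is what lets a detector rank candidate orderings of the interactions.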
Affiliation(s)
- J F Shida
- Enrico Fermi Institute, The University of Chicago, 5640 S Ellis Ave, Chicago, IL 60637
- E Spieglan
- Enrico Fermi Institute, The University of Chicago, 5640 S Ellis Ave, Chicago, IL 60637
- B W Adams
- Quantum Optics Applied Research, Naperville, IL 60564
- E Angelico
- Enrico Fermi Institute, The University of Chicago, 5640 S Ellis Ave, Chicago, IL 60637
- K Domurat-Sousa
- Enrico Fermi Institute, The University of Chicago, 5640 S Ellis Ave, Chicago, IL 60637
- A Elagin
- Enrico Fermi Institute, The University of Chicago, 5640 S Ellis Ave, Chicago, IL 60637
- H J Frisch
- Enrico Fermi Institute, The University of Chicago, 5640 S Ellis Ave, Chicago, IL 60637
- P La Riviere
- Department of Radiology, The University of Chicago, Billings Hospital, P220, 5841 South Maryland Avenue, MC2026, Chicago, IL 60637
- A H Squires
- Pritzker School of Molecular Engineering, The University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637
28
Amirrashedi M, Sarkar S, Mamizadeh H, Ghadiri H, Ghafarian P, Zaidi H, Ay MR. Leveraging deep neural networks to improve numerical and perceptual image quality in low-dose preclinical PET imaging. Comput Med Imaging Graph 2021; 94:102010. [PMID: 34784505 DOI: 10.1016/j.compmedimag.2021.102010]
Abstract
The amount of radiotracer injected into laboratory animals is still the most daunting challenge facing translational PET studies. Since low-dose imaging is characterized by a higher level of noise, the quality of the reconstructed images leaves much to be desired. Being the most ubiquitous techniques in denoising applications, edge-aware denoising filters, and reconstruction-based techniques have drawn significant attention in low-count applications. However, for the last few years, much of the credit has gone to deep-learning (DL) methods, which provide more robust solutions to handle various conditions. Albeit being extensively explored in clinical studies, to the best of our knowledge, there is a lack of studies exploring the feasibility of DL-based image denoising in low-count small animal PET imaging. Therefore, herein, we investigated different DL frameworks to map low-dose small animal PET images to their full-dose equivalent with quality and visual similarity on a par with those of standard acquisition. The performance of the DL model was also compared to other well-established filters, including Gaussian smoothing, nonlocal means, and anisotropic diffusion. Visual inspection and quantitative assessment based on quality metrics proved the superior performance of the DL methods in low-count small animal PET studies, paving the way for a more detailed exploration of DL-assisted algorithms in this domain.
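The DL models here were benchmarked against conventional edge-aware filters such as Gaussian smoothing and nonlocal means. The simplest such baseline can be sketched on a synthetic low-count image (a 5 × 5 mean filter stands in for the paper's filters; the phantom and noise level are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "low-dose" frame: a uniform phantom corrupted by Poisson noise.
clean = 20.0 * np.ones((128, 128))
low_dose = rng.poisson(clean).astype(float)

# Conventional smoothing baseline: a 5 x 5 mean filter implemented with
# periodic shifts (np.roll); edges wrap, which is harmless on a flat phantom.
denoised = sum(
    np.roll(np.roll(low_dose, i, axis=0), j, axis=1)
    for i in range(-2, 3) for j in range(-2, 3)
) / 25.0

# Root-mean-square error against the known ground truth, before and after.
rmse_noisy = np.sqrt(np.mean((low_dose - clean) ** 2))
rmse_denoised = np.sqrt(np.mean((denoised - clean) ** 2))
```

On structured images such filters trade noise for blur, which is precisely the failure mode the learned methods in this study aim to avoid.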
Affiliation(s)
- Mahsa Amirrashedi
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran.
- Saeed Sarkar
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran.
- Hojjat Mamizadeh
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran.
- Hossein Ghadiri
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran.
- Pardis Ghafarian
- Chronic Respiratory Diseases Research Center, National Research Institute of Tuberculosis and Lung Diseases (NRITLD), Shahid Beheshti University of Medical Sciences, Tehran, Iran; PET/CT and Cyclotron Center, Masih Daneshvari Hospital, Shahid Beheshti University of Medical, Tehran, Iran.
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva CH-1211, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark.
- Mohammad Reza Ay
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran.
29
Wang Y, Li E, Cherry SR, Wang G. Total-Body PET Kinetic Modeling and Potential Opportunities Using Deep Learning. PET Clin 2021; 16:613-625. [PMID: 34353745 PMCID: PMC8453049 DOI: 10.1016/j.cpet.2021.06.009]
Abstract
The uEXPLORER total-body PET/CT system provides a very high level of detection sensitivity and simultaneous coverage of the entire body for dynamic imaging for quantification of tracer kinetics. This article describes the fundamentals and potential benefits of total-body kinetic modeling and parametric imaging focusing on the noninvasive derivation of blood input function, multiparametric imaging, and high-temporal resolution kinetic modeling. Along with its attractive properties, total-body kinetic modeling also brings significant challenges, such as the large scale of total-body dynamic PET data, the need for organ and tissue appropriate input functions and kinetic models, and total-body motion correction. These challenges, and the opportunities using deep learning, are discussed.
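One of the classical graphical techniques underlying the parametric imaging discussed here is the Patlak plot for irreversible tracers, whose slope gives the net influx rate Ki. A minimal sketch with a simulated input function (the rate constants and mono-exponential input shape are illustrative assumptions, not values from the article):

```python
import numpy as np

# Simulated arterial input function on a 60-minute grid (illustrative shape).
t = np.linspace(0.1, 60.0, 200)                 # minutes
cp = 10.0 * np.exp(-0.1 * t) + 1.0              # plasma tracer concentration

# Tissue time-activity curve generated from the irreversible Patlak model:
# Ct(t) = Ki * integral(Cp) + V * Cp(t).
ki_true, v0 = 0.05, 0.4
int_cp = np.cumsum(cp) * (t[1] - t[0])          # running integral of the input
ct = ki_true * int_cp + v0 * cp

# Patlak transform: y = Ct/Cp versus x = integral(Cp)/Cp is linear at late
# times, and the fitted slope recovers Ki (done here on t > 20 min).
x, y = int_cp / cp, ct / cp
late = t > 20.0
ki_fit, v_fit = np.polyfit(x[late], y[late], 1)
```

With total-body coverage this fit can be performed voxel-by-voxel across all organs simultaneously, which is what makes the noninvasive input-function derivation mentioned above so valuable.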
Affiliation(s)
- Yiran Wang
- Department of Biomedical Engineering, University of California, 451 E. Health Sciences Drive, Davis, CA 95616, USA; Department of Radiology, University of California Davis Medical Center, Ambulatory Care Center, Building Suite 3100, 4860 Y Street, Sacramento, CA 95817, USA
- Elizabeth Li
- Department of Biomedical Engineering, University of California, 451 E. Health Sciences Drive, Davis, CA 95616, USA
- Simon R Cherry
- Department of Biomedical Engineering, University of California, 451 E. Health Sciences Drive, Davis, CA 95616, USA; Department of Radiology, University of California Davis Medical Center, Ambulatory Care Center, Building Suite 3100, 4860 Y Street, Sacramento, CA 95817, USA
- Guobao Wang
- Department of Radiology, University of California Davis Medical Center, Ambulatory Care Center, Building Suite 3100, 4860 Y Street, Sacramento, CA 95817, USA.
30
Gong K, Kim K, Cui J, Wu D, Li Q. The Evolution of Image Reconstruction in PET: From Filtered Back-Projection to Artificial Intelligence. PET Clin 2021; 16:533-542. [PMID: 34537129 DOI: 10.1016/j.cpet.2021.06.004]
Abstract
PET can provide functional images revealing physiologic processes in vivo. Although PET has many applications, there are still some limitations that compromise its precision: the absorption of photons in the body causes signal attenuation; the dead-time limit of system components leads to the loss of count rate; the scattered and random events received by the detector introduce additional noise; the characteristics of the detector limit the spatial resolution; and the low signal-to-noise ratio is caused by the scan-time limit (eg, dynamic scans) and dose concerns. The early PET reconstruction methods were analytical approaches based on an idealized mathematical model.
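The statistical reconstruction that succeeded those analytical approaches is typified by the MLEM update, x ← x / (Aᵀ1) · Aᵀ(y / Ax). A toy sketch on a tiny random system matrix (sizes and noiseless data are illustrative, chosen only to show the iteration's behaviour):

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny system model: A maps 10 image pixels to 30 detector bins.
A = rng.uniform(0.1, 1.0, size=(30, 10))
x_true = rng.uniform(1.0, 5.0, size=10)
y = A @ x_true                                   # noiseless projection data

# MLEM: multiplicative update that stays non-negative and, with this
# normalisation, exactly preserves the total detected counts each iteration.
sens = A.sum(axis=0)                             # sensitivity image, A^T 1
x = np.ones(10)                                  # flat initial estimate
for _ in range(500):
    x *= (A.T @ (y / (A @ x))) / sens
```

The non-negativity and count preservation shown here are the properties that made MLEM (and its accelerated OSEM variant) the clinical workhorse before the AI-based methods this article surveys.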
Affiliation(s)
- Kuang Gong
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Kyungsang Kim
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jianan Cui
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Dufan Wu
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Quanzheng Li
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
31
Lee S, Lee JS. Inter-crystal scattering recovery of light-sharing PET detectors using convolutional neural networks. Phys Med Biol 2021; 66. [PMID: 34438380 DOI: 10.1088/1361-6560/ac215d]
Abstract
Inter-crystal scattering (ICS) is a type of Compton scattering of photons from one crystal to adjacent crystals and causes inaccurate assignment of the annihilation photon interaction position in positron emission tomography (PET). Because ICS frequently occurs in highly light-shared PET detectors, its recovery is crucial for the spatial resolution improvement. In this study, we propose two different convolutional neural networks (CNNs) for ICS recovery, exploiting the good pattern recognition ability of CNN techniques. Using the signal distribution of a photosensor array as input, one network estimates the energy deposition in each crystal (ICS-eNet) and another network chooses the first-interacted crystal (ICS-cNet). We performed GATE Monte Carlo simulations with optical photon tracking to test PET detectors comprising different crystal arrays (8 × 8 to 21 × 21) with lengths of 20 mm and the same photosensor array (3 mm 8 × 8 array) covering an area of 25.8 × 25.8 mm2. For each detector design, we trained ICS-eNet and ICS-cNet and evaluated their respective performance. ICS-eNet accurately identified whether the events were ICS (accuracy > 90%) and selected interacted crystals (accuracy > 60%) with appropriate energy estimation performance (R2 > 0.7) in the 8 × 8, 12 × 12, and 16 × 16 arrays. ICS-cNet also exhibited satisfactory performance, which was less dependent on the crystal-to-sensor ratio, with an accuracy enhancement that exceeds 10% in selecting the first-interacted crystal and a reduction in error distances compared with when no recovery was applied. Both ICS-eNet and ICS-cNet exhibited consistent performance under various optical property settings of the crystals. For spatial resolution measurements in PET rings, both networks achieved significant enhancements, particularly for highly pixelated arrays. We also discuss approaches for training the networks in an actual experimental setup. This proof-of-concept study demonstrated the feasibility of CNNs for ICS recovery in various light-sharing designs to efficiently improve the spatial resolution of PET in various applications.
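The classical, non-learned way to assign a position from the signal distribution of a photosensor array, which CNNs such as these aim to improve on, is the energy-weighted centroid (Anger logic). A minimal sketch with an illustrative 8 × 8 array on a 3 mm pitch and a Gaussian light-spread model (all parameters are assumptions for illustration, not the paper's simulation settings):

```python
import numpy as np

# Sensor-centre coordinates of an 8 x 8 photosensor array on a 3 mm pitch, mm.
pitch = 3.0
coords = (np.arange(8) - 3.5) * pitch

def anger_centroid(signals):
    """Energy-weighted centroid (Anger logic) of an 8 x 8 signal distribution."""
    total = signals.sum()
    x = (signals.sum(axis=0) * coords).sum() / total   # column profile -> x
    y = (signals.sum(axis=1) * coords).sum() / total   # row profile    -> y
    return x, y

# Illustrative light spread peaked over the point (+1.5 mm, -4.5 mm).
xx, yy = np.meshgrid(coords, coords)
signals = np.exp(-((xx - 1.5) ** 2 + (yy + 4.5) ** 2) / (2.0 * 4.0))
x_est, y_est = anger_centroid(signals)
```

The centroid works well for a single interaction like this one; it is precisely the multi-crystal ICS events, whose light distributions are superpositions of two such spreads, that break the centroid and motivate the CNN approach.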
Affiliation(s)
- Seungeun Lee
- Department of Nuclear Medicine, Seoul National University, Seoul, 03080, Republic of Korea; Department of Biomedical Sciences, Seoul National University, Seoul, 03080, Republic of Korea
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University, Seoul, 03080, Republic of Korea; Brightonix Imaging Inc., Seoul, 04782, Republic of Korea
32
Xie Z, Li T, Zhang X, Qi W, Asma E, Qi J. Anatomically aided PET image reconstruction using deep neural networks. Med Phys 2021; 48:5244-5258. [PMID: 34129690 PMCID: PMC8510002 DOI: 10.1002/mp.15051]
Abstract
PURPOSE The developments of PET/CT and PET/MR scanners provide opportunities for improving PET image quality by using anatomical information. In this paper, we propose a novel co-learning three-dimensional (3D) convolutional neural network (CNN) to extract modality-specific features from PET/CT image pairs and integrate complementary features into an iterative reconstruction framework to improve PET image reconstruction. METHODS We used a pretrained deep neural network to represent PET images. The network was trained using low-count PET and CT image pairs as inputs and high-count PET images as labels. This network was then incorporated into a constrained maximum likelihood framework to regularize PET image reconstruction. Two different network structures were investigated for the integration of anatomical information from CT images. One was a multichannel CNN, which treated PET and CT volumes as separate channels of the input. The other one was a multibranch CNN, which implemented separate encoders for PET and CT images to extract latent features and fed the combined latent features into a decoder. Using computer-based Monte Carlo simulations and two real patient datasets, the proposed method has been compared with existing methods, including the maximum likelihood expectation maximization (MLEM) reconstruction, a kernel-based reconstruction and a CNN-based deep penalty method with and without anatomical guidance. RESULTS Reconstructed images showed that the proposed constrained ML reconstruction approach produced higher quality images than the competing methods. The tumors in the lung region have higher contrast in the proposed constrained ML reconstruction than in the CNN-based deep penalty reconstruction. The image quality was further improved by incorporating the anatomical information. Moreover, the liver standard deviation was lower in the proposed approach than all the competing methods at a matched lesion contrast. CONCLUSIONS The supervised co-learning strategy can improve the performance of constrained maximum likelihood reconstruction. Compared with existing techniques, the proposed method produced a better lesion contrast versus background standard deviation trade-off curve, which can potentially improve lesion detection.
Affiliation(s)
- Zhaoheng Xie
- Department of Biomedical Engineering, University of California, Davis, CA, USA
- Tiantian Li
- Department of Biomedical Engineering, University of California, Davis, CA, USA
- Xuezhu Zhang
- Department of Biomedical Engineering, University of California, Davis, CA, USA
- Wenyuan Qi
- Canon Medical Research USA, Inc., Vernon Hills, IL, USA
- Evren Asma
- Canon Medical Research USA, Inc., Vernon Hills, IL, USA
- Jinyi Qi
- Department of Biomedical Engineering, University of California, Davis, CA, USA
33
Sitek A, Ahn S, Asma E, Chandler A, Ihsani A, Prevrhal S, Rahmim A, Saboury B, Thielemans K. Artificial Intelligence in PET: An Industry Perspective. PET Clin 2021; 16:483-492. [PMID: 34353746 DOI: 10.1016/j.cpet.2021.06.006]
Abstract
Artificial intelligence (AI) has significant potential to positively impact and advance medical imaging, including positron emission tomography (PET) imaging applications. AI has the ability to enhance and optimize all aspects of the PET imaging chain, from patient scheduling, patient setup, protocoling, data acquisition, detector signal processing, and reconstruction to image processing and interpretation. AI poses industry-specific challenges which will need to be addressed and overcome to maximize its future potential in PET. This article provides an overview of these industry-specific challenges for the development, standardization, commercialization, and clinical adoption of AI and explores the potential enhancements to PET imaging brought on by AI in the near future. In particular, the combination of on-demand image reconstruction, AI, and custom-designed data-processing workflows may open new possibilities for innovation which would positively impact the industry and ultimately patients.
Affiliation(s)
- Arkadiusz Sitek
- Sano Centre for Computational Medicine, Nawojki 11 Street, Kraków 30-072, Poland.
- Sangtae Ahn
- GE Research, 1 Research Circle KWC-1310C, Niskayuna, NY 12309, USA
- Evren Asma
- Canon Medical Research, 706 N Deerpath Drive, Vernon Hills, IL 60061, USA
- Adam Chandler
- Global Scientific Collaborations Group, United Imaging Healthcare, America, 9230 Kirby Drive, Houston, TX 77054, USA
- Alvin Ihsani
- NVIDIA, 2 Technology Park Drive, Westford, MA 01886, USA
- Sven Prevrhal
- Philips Research Europe, Röntgenstr. 22, Hamburg 22335, Germany
- Arman Rahmim
- Department of Radiology, University of British Columbia, BC Cancer, BC Cancer Research Institute, 675 West 10th Avenue, Office 6-112, Vancouver, British Columbia V5Z 1L3, Canada; Department of Physics, University of British Columbia, BC Cancer, BC Cancer Research Institute, 675 West 10th Avenue, Office 6-112, Vancouver, British Columbia V5Z 1L3, Canada
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Kris Thielemans
- Institute of Nuclear Medicine, University College London, UCL Hospital Tower 5, 235 Euston Road, London NW1 2BU, UK; Algorithms and Software Consulting Ltd, 10 Laneway, London SW15 5HX, UK
34
Loignon-Houle F, Gundacker S, Toussaint M, Camirand Lemyre F, Auffray E, Fontaine R, Charlebois SA, Lecoq P, Lecomte R. DOI estimation through signal arrival time distribution: a theoretical description including proof of concept measurements. Phys Med Biol 2021; 66. [PMID: 33831858 DOI: 10.1088/1361-6560/abf604]
Abstract
The challenge to reach 10 ps coincidence time resolution (CTR) in time-of-flight positron emission tomography (TOF-PET) is triggering major efforts worldwide, but timing improvements of scintillation detectors will remain elusive without depth-of-interaction (DOI) correction in long crystals. Nonetheless, this momentum opportunely brings up the prospect of a fully time-based DOI estimation since fast timing signals intrinsically carry DOI information, even with a traditional single-ended readout. Consequently, extracting features of the detected signal time distribution could uncover the spatial origin of the interaction and in return, provide enhancement on the timing precision of detectors. We demonstrate the validity of a time-based DOI estimation concept in two steps. First, experimental measurements were carried out with current LSO:Ce:Ca crystals coupled to FBK NUV-HD SiPMs read out by fast high-frequency electronics to provide new evidence of a distinct DOI effect on CTR not observable before with slower electronics. Using this detector, a DOI discrimination using a double-threshold scheme on the analog timing signal together with the signal intensity information was also developed without any complex readout or detector modification. As a second step, we explored by simulation the anticipated performance requirements of future detectors to efficiently estimate the DOI and we proposed four estimators that exploit either more generic or more precise features of the DOI-dependent timestamp distribution. A simple estimator using the time difference between two timestamps provided enhanced CTR. Additional improvements were achieved with estimators using multiple timestamps (e.g. kernel density estimation and neural network) converging to the Cramér-Rao lower bound developed in this work for a time-based DOI estimation. This two-step study provides insights on current and future possibilities in exploiting the timing signal features for DOI estimation aiming at ultra-fast CTR while maintaining detection efficiency for TOF PET.
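The "simple estimator using the time difference between two timestamps" can be illustrated with a toy optical-path model: direct light travels a distance z to the single-ended readout while far-face-reflected light travels 2L − z, so their arrival-time difference is linear in DOI. All geometry, propagation speed and jitter values below are illustrative assumptions, not the paper's detector parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy single-ended readout of a 20 mm crystal; light propagates at ~180 mm/ns.
L, v = 20.0, 180.0
z_true = rng.uniform(0.0, L, 2000)               # depths of interaction, mm

# Two timestamps per event: "direct" light (path z) and light reflected once
# off the far face (path 2L - z), each with 20 ps single-photon jitter.
jitter = rng.normal(0.0, 0.02, (2, 2000))        # ns
t_direct = z_true / v + jitter[0]
t_reflected = (2.0 * L - z_true) / v + jitter[1]

# Time-based DOI estimator: linear calibration of the timestamp difference,
# dt = 2(L - z)/v + noise, against known depths.
dt = t_reflected - t_direct
slope, intercept = np.polyfit(dt, z_true, 1)
z_est = slope * dt + intercept
rmse = np.sqrt(np.mean((z_est - z_true) ** 2))
```

Even this crude two-timestamp feature recovers the depth far better than assuming a fixed interaction point, which is the intuition behind the richer multi-timestamp estimators studied in the paper.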
Affiliation(s)
- Francis Loignon-Houle
- Sherbrooke Molecular Imaging Center, CRCHUS, and Department of Nuclear Medicine and Radiobiology, Université de Sherbrooke, Sherbrooke, Canada
- Stefan Gundacker
- CERN, 1211 Geneva 23, Switzerland; UniMIB, Piazza dell'Ateneo Nuovo, I-120126, Milano, Italy
- Maxime Toussaint
- Department of Computer Science, Université de Sherbrooke, Sherbrooke, Canada
- Réjean Fontaine
- Interdisciplinary Institute for Technological Innovation and Department of Electrical and Computer Engineering, Université de Sherbrooke, Sherbrooke, Canada
- Serge A Charlebois
- Interdisciplinary Institute for Technological Innovation and Department of Electrical and Computer Engineering, Université de Sherbrooke, Sherbrooke, Canada
- Paul Lecoq
- CERN, 1211 Geneva 23, Switzerland; Polytechnic University, I3M laboratory, Valencia, Spain
- Roger Lecomte
- Sherbrooke Molecular Imaging Center, CRCHUS, and Department of Nuclear Medicine and Radiobiology, Université de Sherbrooke, Sherbrooke, Canada
35
Zaidi H, El Naqa I. Quantitative Molecular Positron Emission Tomography Imaging Using Advanced Deep Learning Techniques. Annu Rev Biomed Eng 2021; 23:249-276. [PMID: 33797938 DOI: 10.1146/annurev-bioeng-082420-020343]
Abstract
The widespread availability of high-performance computing and the popularity of artificial intelligence (AI) with machine learning and deep learning (ML/DL) algorithms at the helm have stimulated the development of many applications involving the use of AI-based techniques in molecular imaging research. Applications reported in the literature encompass various areas, including innovative design concepts in positron emission tomography (PET) instrumentation, quantitative image reconstruction and analysis techniques, computer-aided detection and diagnosis, as well as modeling and prediction of outcomes. This review reflects the tremendous interest in quantitative molecular imaging using ML/DL techniques during the past decade, ranging from the basic principles of ML/DL techniques to the various steps required for obtaining quantitatively accurate PET data, including algorithms used to denoise or correct for physical degrading factors as well as to quantify tracer uptake and metabolic tumor volume for treatment monitoring or radiation therapy treatment planning and response prediction. This review also addresses future opportunities and current challenges facing the adoption of ML/DL approaches and their role in multimodality imaging.
Affiliation(s)
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211 Geneva, Switzerland; Geneva Neuroscience Centre, University of Geneva, 1205 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, 9700 RB Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, DK-5000 Odense, Denmark
- Issam El Naqa
- Department of Machine Learning, Moffitt Cancer Center, Tampa, Florida 33612, USA; Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48109, USA; Department of Oncology, McGill University, Montreal, Quebec H3A 1G5, Canada
36
Meikle SR, Sossi V, Roncali E, Cherry SR, Banati R, Mankoff D, Jones T, James M, Sutcliffe J, Ouyang J, Petibon Y, Ma C, El Fakhri G, Surti S, Karp JS, Badawi RD, Yamaya T, Akamatsu G, Schramm G, Rezaei A, Nuyts J, Fulton R, Kyme A, Lois C, Sari H, Price J, Boellaard R, Jeraj R, Bailey DL, Eslick E, Willowson KP, Dutta J. Quantitative PET in the 2020s: a roadmap. Phys Med Biol 2021; 66:06RM01. [PMID: 33339012 PMCID: PMC9358699 DOI: 10.1088/1361-6560/abd4f7]
Abstract
Positron emission tomography (PET) plays an increasingly important role in research and clinical applications, catalysed by remarkable technical advances and a growing appreciation of the need for reliable, sensitive biomarkers of human function in health and disease. Over the last 30 years, a large amount of the physics and engineering effort in PET has been motivated by the dominant clinical application during that period, oncology. This has led to important developments such as PET/CT, whole-body PET, 3D PET, accelerated statistical image reconstruction, and time-of-flight PET. Despite impressive improvements in image quality as a result of these advances, the emphasis on static, semi-quantitative 'hot spot' imaging for oncologic applications has meant that the capability of PET to quantify biologically relevant parameters based on tracer kinetics has not been fully exploited. More recent advances, such as PET/MR and total-body PET, have opened up the ability to address a vast range of new research questions, from which a future expansion of applications and radiotracers appears highly likely. Many of these new applications and tracers will, at least initially, require quantitative analyses that more fully exploit the exquisite sensitivity of PET and the tracer principle on which it is based. It is also expected that they will require more sophisticated quantitative analysis methods than those that are currently available. At the same time, artificial intelligence is revolutionizing data analysis and impacting the relationship between the statistical quality of the acquired data and the information we can extract from the data. In this roadmap, leaders of the key sub-disciplines of the field identify the challenges and opportunities to be addressed over the next ten years that will enable PET to realise its full quantitative potential, initially in research laboratories and, ultimately, in clinical practice.
Affiliation(s)
- Steven R Meikle: Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Australia; Brain and Mind Centre, The University of Sydney, Australia
- Vesna Sossi: Department of Physics and Astronomy, University of British Columbia, Canada
- Emilie Roncali: Department of Biomedical Engineering, University of California, Davis, United States of America
- Simon R Cherry: Department of Biomedical Engineering, University of California, Davis, United States of America; Department of Radiology, University of California, Davis, United States of America
- Richard Banati: Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Australia; Brain and Mind Centre, The University of Sydney, Australia; Australian Nuclear Science and Technology Organisation, Sydney, Australia
- David Mankoff: Department of Radiology, University of Pennsylvania, United States of America
- Terry Jones: Department of Radiology, University of California, Davis, United States of America
- Michelle James: Department of Radiology, Molecular Imaging Program at Stanford (MIPS), CA, United States of America; Department of Neurology and Neurological Sciences, Stanford University, CA, United States of America
- Julie Sutcliffe: Department of Biomedical Engineering, University of California, Davis, United States of America; Department of Internal Medicine, University of California, Davis, CA, United States of America
- Jinsong Ouyang: Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
- Yoann Petibon: Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
- Chao Ma: Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
- Georges El Fakhri: Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
- Suleman Surti: Department of Radiology, University of Pennsylvania, United States of America
- Joel S Karp: Department of Radiology, University of Pennsylvania, United States of America
- Ramsey D Badawi: Department of Biomedical Engineering, University of California, Davis, United States of America; Department of Radiology, University of California, Davis, United States of America
- Taiga Yamaya: National Institute of Radiological Sciences (NIRS), National Institutes for Quantum and Radiological Science and Technology (QST), Chiba, Japan
- Go Akamatsu: National Institute of Radiological Sciences (NIRS), National Institutes for Quantum and Radiological Science and Technology (QST), Chiba, Japan
- Georg Schramm: Department of Imaging and Pathology, Nuclear Medicine & Molecular imaging, KU Leuven, Belgium
- Ahmadreza Rezaei: Department of Imaging and Pathology, Nuclear Medicine & Molecular imaging, KU Leuven, Belgium
- Johan Nuyts: Department of Imaging and Pathology, Nuclear Medicine & Molecular imaging, KU Leuven, Belgium
- Roger Fulton: Brain and Mind Centre, The University of Sydney, Australia; Department of Medical Physics, Westmead Hospital, Sydney, Australia
- André Kyme: Brain and Mind Centre, The University of Sydney, Australia; School of Biomedical Engineering, Faculty of Engineering and IT, The University of Sydney, Australia
- Cristina Lois: Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
- Hasan Sari: Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America; Athinoula A. Martinos Center, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Julie Price: Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America; Athinoula A. Martinos Center, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Ronald Boellaard: Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam University Medical Center, location VUMC, Netherlands
- Robert Jeraj: Departments of Medical Physics, Human Oncology and Radiology, University of Wisconsin, United States of America; Faculty of Mathematics and Physics, University of Ljubljana, Slovenia
- Dale L Bailey: Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Australia; Department of Nuclear Medicine, Royal North Shore Hospital, Sydney, Australia; Faculty of Science, The University of Sydney, Australia
- Enid Eslick: Department of Nuclear Medicine, Royal North Shore Hospital, Sydney, Australia
- Kathy P Willowson: Department of Nuclear Medicine, Royal North Shore Hospital, Sydney, Australia; Faculty of Science, The University of Sydney, Australia
- Joyita Dutta: Department of Electrical and Computer Engineering, University of Massachusetts Lowell, United States of America
|
37
|
Gong K, Yang J, Larson PEZ, Behr SC, Hope TA, Seo Y, Li Q. MR-based Attenuation Correction for Brain PET Using 3D Cycle-Consistent Adversarial Network. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021; 5:185-192. [PMID: 33778235 PMCID: PMC7993643 DOI: 10.1109/trpms.2020.3006844] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Attenuation correction (AC) is important for the quantitative merits of positron emission tomography (PET). However, attenuation coefficients cannot be derived from magnetic resonance (MR) images directly for PET/MR systems. In this work, we aimed to derive continuous AC maps from Dixon MR images without the requirement of MR and computed tomography (CT) image registration. To achieve this, a 3D generative adversarial network with both discriminative and cycle-consistency loss (Cycle-GAN) was developed. The modified 3D U-net was employed as the structure of the generative networks to generate the pseudo CT/MR images. The 3D patch-based discriminative networks were used to distinguish the generated pseudo CT/MR images from the true CT/MR images. To evaluate its performance, datasets from 32 patients were used in the experiment. The Dixon segmentation and atlas methods provided by the vendor and the convolutional neural network (CNN) method which utilized registered MR and CT images were employed as the reference methods. Dice coefficients of the pseudo-CT image and the regional quantification in the reconstructed PET images were compared. Results show that the Cycle-GAN framework can generate better AC compared to the Dixon segmentation and atlas methods, and shows comparable performance compared to the CNN method.
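The cycle-consistency objective described in this abstract can be illustrated with a toy sketch (a minimal NumPy version, not the authors' implementation; `g_mr2ct` and `g_ct2mr` are hypothetical stand-ins for the two 3D U-net generators):

```python
import numpy as np

def cycle_consistency_loss(g_mr2ct, g_ct2mr, mr, ct, lam=10.0):
    """L1 cycle loss: translating MR -> pseudo-CT -> pseudo-MR
    (and CT -> pseudo-MR -> pseudo-CT) should recover the input."""
    mr_cycle = g_ct2mr(g_mr2ct(mr))
    ct_cycle = g_mr2ct(g_ct2mr(ct))
    return lam * (np.mean(np.abs(mr_cycle - mr)) +
                  np.mean(np.abs(ct_cycle - ct)))

# Sanity check with identity "generators": the cycle loss is exactly zero.
identity = lambda x: x
mr = np.random.rand(8, 8, 8)   # stand-in 3D MR patch
ct = np.random.rand(8, 8, 8)   # stand-in 3D CT patch
print(cycle_consistency_loss(identity, identity, mr, ct))  # 0.0
```

In the full Cycle-GAN this term is combined with the adversarial (discriminator) losses, which push the generated pseudo-CT/MR patches toward the true image distributions.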
Affiliation(s)
- Kuang Gong: Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 USA
- Jaewon Yang: Physics Research Laboratory, Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143 USA
- Peder E Z Larson: Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143 USA
- Spencer C Behr: Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143 USA
- Thomas A Hope: Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143 USA
- Youngho Seo: Physics Research Laboratory, Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143 USA
- Quanzheng Li: Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 USA
|
38
|
Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008] [Citation(s) in RCA: 84] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Revised: 02/18/2021] [Accepted: 03/03/2021] [Indexed: 02/06/2023] Open
|
39
|
Hashimoto F, Ohba H, Ote K, Kakimoto A, Tsukada H, Ouchi Y. 4D deep image prior: dynamic PET image denoising using an unsupervised four-dimensional branch convolutional neural network. Phys Med Biol 2021; 66:015006. [PMID: 33227725 DOI: 10.1088/1361-6560/abcd1a] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
Although convolutional neural networks (CNNs) demonstrate the superior performance in denoising positron emission tomography (PET) images, a supervised training of the CNN requires a pair of large, high-quality PET image datasets. As an unsupervised learning method, a deep image prior (DIP) has recently been proposed; it can perform denoising with only the target image. In this study, we propose an innovative procedure for the DIP approach with a four-dimensional (4D) branch CNN architecture in end-to-end training to denoise dynamic PET images. Our proposed 4D CNN architecture can be applied to end-to-end dynamic PET image denoising by introducing a feature extractor and a reconstruction branch for each time frame of the dynamic PET image. In the proposed DIP method, it is not necessary to prepare high-quality and large patient-related PET images. Instead, a subject's own static PET image is used as additional information, dynamic PET images are treated as training labels, and denoised dynamic PET images are obtained from the CNN outputs. Both simulation with [18F]fluoro-2-deoxy-D-glucose (FDG) and preclinical data with [18F]FDG and [11C]raclopride were used to evaluate the proposed framework. The results showed that our 4D DIP framework quantitatively and qualitatively outperformed 3D DIP and other unsupervised denoising methods. The proposed 4D DIP framework thus provides a promising procedure for dynamic PET image denoising.
Affiliation(s)
- Fumio Hashimoto: Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
|
40
|
Abstract
Total-body PET image reconstruction follows a similar procedure to the image reconstruction process for standard whole-body PET scanners. One unique aspect of total-body imaging is simultaneous coverage of the entire human body, which makes it convenient to perform total-body dynamic PET scans. Therefore, four-dimensional dynamic PET reconstruction and parametric imaging are of great interest in total-body imaging. This article covers some basics of PET image reconstruction and then focuses on three- and four-dimensional PET reconstruction for total-body imaging. Methods for image formation from raw measurements in total-body PET are described. Challenges and opportunities in total-body PET image reconstruction are discussed.
Affiliation(s)
- Jinyi Qi: Department of Biomedical Engineering, University of California, One Shields Avenue, Davis, CA 95616, USA
- Samuel Matej: Department of Radiology, University of Pennsylvania, 3620 Hamilton Walk, John Morgan Building, Room 156A, Philadelphia, PA 19104-6061, USA
- Guobao Wang: Department of Radiology, University of California Davis Medical Center, Lawrence J. Ellison Ambulatory Care Center Building, Suite 3100, 4860 Y Street, Sacramento, CA 95817, USA
- Xuezhu Zhang: Department of Biomedical Engineering, University of California, One Shields Avenue, Davis, CA 95616, USA
|
41
|
Abstract
This article describes aspects of PET scanner design for long axial field-of-view systems and how these choices have an impact on scanner performance.
Affiliation(s)
- Margaret E Daube-Witherspoon: Department of Radiology, University of Pennsylvania, 3620 Hamilton Walk, Room 156H, Philadelphia, PA 19104, USA
- Simon R Cherry: Department of Biomedical Engineering, University of California, 451 Health Sciences Drive, Davis, CA 95616, USA
|
42
|
Reader AJ, Corda G, Mehranian A, Costa-Luis CD, Ellis S, Schnabel JA. Deep Learning for PET Image Reconstruction. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.3014786] [Citation(s) in RCA: 65] [Impact Index Per Article: 21.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
43
|
DPIR-Net: Direct PET Image Reconstruction Based on the Wasserstein Generative Adversarial Network. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.2995717] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
44
|
Qiao T, Liu S, Cui Z, Yu X, Cai H, Zhang H, Sun M, Lv Z, Li D. Deep learning for intelligent diagnosis in thyroid scintigraphy. J Int Med Res 2021; 49:300060520982842. [PMID: 33445994 PMCID: PMC7812409 DOI: 10.1177/0300060520982842] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Accepted: 11/30/2020] [Indexed: 11/16/2022] Open
Abstract
OBJECTIVE To construct deep learning (DL) models to improve the accuracy and efficiency of thyroid disease diagnosis by thyroid scintigraphy. METHODS We constructed DL models with AlexNet, VGGNet, and ResNet. The models were trained separately with transfer learning. We measured each model's performance with six indicators: recall, precision, negative predictive value (NPV), specificity, accuracy, and F1-score. We also compared the diagnostic performances of first- and third-year nuclear medicine (NM) residents with assistance from the best-performing DL-based model. The Kappa coefficient and average classification time of each model were compared with those of two NM residents. RESULTS The recall, precision, NPV, specificity, accuracy, and F1-score of the three models ranged from 73.33% to 97.00%. The Kappa coefficient of all three models was >0.710. All models performed better than the first-year NM resident but not as well as the third-year NM resident in terms of diagnostic ability. However, the ResNet model provided "diagnostic assistance" to the NM residents. The models provided results at speeds 400 to 600 times faster than the NM residents. CONCLUSION DL-based models perform well in diagnostic assessment by thyroid scintigraphy. These models may serve as tools for NM residents in the diagnosis of Graves' disease and subacute thyroiditis.
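The six indicators reported in this abstract follow directly from a binary confusion matrix; a small sketch with illustrative counts (not the study's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the six indicators named in the abstract from
    true/false positive and negative counts."""
    recall = tp / (tp + fn)                    # sensitivity
    precision = tp / (tp + fp)                 # positive predictive value
    npv = tn / (tn + fn)                       # negative predictive value
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"recall": recall, "precision": precision, "npv": npv,
            "specificity": specificity, "accuracy": accuracy, "f1": f1}

# Illustrative counts only: 90 TP, 10 FP, 85 TN, 15 FN
m = classification_metrics(90, 10, 85, 15)
print(round(m["recall"], 3), round(m["specificity"], 3))  # 0.857 0.895
```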
Affiliation(s)
- Tingting Qiao: Department of Nuclear Medicine, Shanghai Tenth People’s Hospital, Tongji University School of Medicine, Shanghai, China
- Simin Liu: Department of Nuclear Medicine, Shanghai Tenth People’s Hospital, Tongji University School of Medicine, Shanghai, China
- Zhijun Cui: Department of Medicine Imaging, the Chongming Branch of Shanghai Tenth People’s Hospital, Tongji University, Shanghai, China
- Xiaqing Yu: Department of Nuclear Medicine, Shanghai Tenth People’s Hospital, Tongji University School of Medicine, Shanghai, China
- Haidong Cai: Department of Nuclear Medicine, Shanghai Tenth People’s Hospital, Tongji University School of Medicine, Shanghai, China
- Huijuan Zhang: School of Software Engineering, Tongji University, Shanghai, China
- Ming Sun: Department of Nuclear Medicine, Shanghai Tenth People’s Hospital, Tongji University School of Medicine, Shanghai, China
- Zhongwei Lv: Department of Nuclear Medicine, Shanghai Tenth People’s Hospital, Tongji University School of Medicine, Shanghai, China
- Dan Li: Department of Nuclear Medicine, Shanghai Tenth People’s Hospital, Tongji University School of Medicine, Shanghai, China
|
45
|
Herraiz JL, Bembibre A, López-Montes A. Deep-Learning Based Positron Range Correction of PET Images. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app11010266] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Positron emission tomography (PET) is a molecular imaging technique that provides a 3D image of functional processes in the body in vivo. Some of the radionuclides proposed for PET imaging emit high-energy positrons, which travel some distance before they annihilate (positron range), creating significant blurring in the reconstructed images. Their large positron range compromises the achievable spatial resolution of the system, which is more significant when using high-resolution scanners designed for the imaging of small animals. In this work, we trained a deep neural network named Deep-PRC to correct PET images for positron range effects. Deep-PRC was trained with modeled cases using a realistic Monte Carlo simulation tool that considers the positron energy distribution and the materials and tissues it propagates into. Quantification of the reconstructed PET images corrected with Deep-PRC showed that it was able to restore the images by up to 95% without any significant noise increase. The proposed method, which is accessible via Github, can provide an accurate positron range correction in a few seconds for a typical PET acquisition.
|
46
|
|
47
|
|
48
|
Xiang H, Lim H, Fessler JA, Dewaraja YK. A deep neural network for fast and accurate scatter estimation in quantitative SPECT/CT under challenging scatter conditions. Eur J Nucl Med Mol Imaging 2020; 47:2956-2967. [PMID: 32415551 PMCID: PMC7666660 DOI: 10.1007/s00259-020-04840-9] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2020] [Accepted: 04/24/2020] [Indexed: 12/18/2022]
Abstract
PURPOSE A major challenge for accurate quantitative SPECT imaging of some radionuclides is the inadequacy of simple energy window-based scatter estimation methods, widely available on clinic systems. A deep learning approach for SPECT/CT scatter estimation is investigated as an alternative to computationally expensive Monte Carlo (MC) methods for challenging SPECT radionuclides, such as 90Y. METHODS A deep convolutional neural network (DCNN) was trained to separately estimate each scatter projection from the measured 90Y bremsstrahlung SPECT emission projection and CT attenuation projection that form the network inputs. The 13-layer deep architecture consisted of separate paths for the emission and attenuation projection that are concatenated before the final convolution steps. The training label consisted of MC-generated "true" scatter projections in phantoms (MC is needed only for training) with the mean square difference relative to the model output serving as the loss function. The test data set included a simulated sphere phantom with a lung insert, measurements of a liver phantom, and patients after 90Y radioembolization. OS-EM SPECT reconstruction without scatter correction (NO-SC), with the true scatter (TRUE-SC) (available for simulated data only), with the DCNN estimated scatter (DCNN-SC), and with a previously developed MC scatter model (MC-SC) were compared, including with 90Y PET when available. RESULTS The contrast recovery (CR) vs. noise and lung insert residual error vs. noise curves for images reconstructed with DCNN-SC and MC-SC estimates were similar. At the same noise level of 10% (across multiple realizations), the average sphere CR was 24%, 52%, 55%, and 67% for NO-SC, MC-SC, DCNN-SC, and TRUE-SC, respectively. 
For the liver phantom, the average CR for liver inserts were 32%, 73%, and 65% for NO-SC, MC-SC, and DCNN-SC, respectively while the corresponding values for average contrast-to-noise ratio (visibility index) in low-concentration extra-hepatic inserts were 2, 19, and 61, respectively. In patients, there was high concordance between lesion-to-liver uptake ratios for SPECT reconstruction with DCNN-SC (median 4.8, range 0.02-13.8) compared with MC-SC (median 4.0, range 0.13-12.1; CCC = 0.98) and with 90Y PET (median 4.9, range 0.02-11.2; CCC = 0.96) while the concordance with NO-SC was poor (median 2.8, range 0.3-7.2; CCC = 0.59). The trained DCNN took ~ 40 s (using a single i5 processor on a desktop computer) to generate the scatter estimates for all 128 views in a patient scan, compared to ~ 80 min for the MC scatter model using 12 processors. CONCLUSIONS For diverse 90Y test data that included patient studies, we demonstrated comparable performance between images reconstructed with deep learning and MC-based scatter estimates using metrics relevant for dosimetry and for safety. This approach that can be generalized to other radionuclides by changing the training data is well suited for real-time clinical use because of the high speed, orders of magnitude faster than MC, while maintaining high accuracy.
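Contrast recovery as used in such phantom evaluations is commonly defined from the measured and true sphere-to-background ratios (a generic sketch of this common definition; the paper's exact formula may differ):

```python
def contrast_recovery(sphere_mean, bkg_mean, true_ratio):
    """Percent contrast recovery for a hot sphere: 100% means the
    measured sphere/background ratio equals the true activity ratio."""
    return 100.0 * (sphere_mean / bkg_mean - 1.0) / (true_ratio - 1.0)

# A sphere filled at a true 8:1 activity ratio but measured at 4.5:1
# is recovered at 50%.
print(contrast_recovery(sphere_mean=4.5, bkg_mean=1.0, true_ratio=8.0))  # 50.0
```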
Affiliation(s)
- Haowei Xiang: Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109, USA
- Hongki Lim: Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109, USA
- Jeffrey A Fessler: Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109, USA
- Yuni K Dewaraja: Department of Radiology, University of Michigan, 1301 Catherine, 2276 Medical Science I/5610, Ann Arbor, MI, 48109, USA
|
49
|
Wang G, Rahmim A, Gunn RN. PET Parametric Imaging: Past, Present, and Future. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2020; 4:663-675. [PMID: 33763624 PMCID: PMC7983029 DOI: 10.1109/trpms.2020.3025086] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
Positron emission tomography (PET) is actively used in a diverse range of applications in oncology, cardiology, and neurology. The use of PET in the clinical setting focuses on static (single time frame) imaging at a specific time-point post radiotracer injection and is typically considered as semi-quantitative; e.g. standardized uptake value (SUV) measures. In contrast, dynamic PET imaging requires increased acquisition times but has the advantage that it measures the full spatiotemporal distribution of a radiotracer and, in combination with tracer kinetic modeling, enables the generation of multiparametric images that more directly quantify underlying biological parameters of interest, such as blood flow, glucose metabolism, and receptor binding. Parametric images have the potential for improved detection and for more accurate and earlier therapeutic response assessment. Parametric imaging with dynamic PET has witnessed extensive research in the past four decades. In this paper, we provide an overview of past and present activities and discuss emerging opportunities in the field of parametric imaging for the future.
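As a concrete example of the graphical kinetic modeling surveyed here, the Patlak plot estimates the net influx rate Ki for irreversible tracers by a linear fit after an equilibration time t* (a minimal NumPy sketch under idealized, noise-free conditions, not a production implementation):

```python
import numpy as np

def patlak_ki(c_tissue, c_plasma, t, t_star_idx):
    """Patlak graphical analysis: after t*, plotting tissue/plasma
    against integral(plasma)/plasma is linear with slope Ki."""
    # Trapezoidal integral of the plasma input function over time
    integral = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 *
                               (c_plasma[1:] + c_plasma[:-1]))))
    x = integral[t_star_idx:] / c_plasma[t_star_idx:]
    y = c_tissue[t_star_idx:] / c_plasma[t_star_idx:]
    ki, intercept = np.polyfit(x, y, 1)   # slope is the net influx rate
    return ki

# Synthetic check: constant plasma input, irreversible uptake with known Ki
t = np.linspace(0, 60, 61)          # minutes
cp = np.ones_like(t)                # constant plasma concentration
ki_true, v0 = 0.05, 0.2
ct = ki_true * t * cp + v0 * cp     # Patlak model: Ct = Ki*integral(Cp) + V0*Cp
print(round(patlak_ki(ct, cp, t, 10), 4))  # 0.05
```

Applying such a fit voxel-by-voxel to a dynamic scan yields a parametric Ki image, which is one of the multiparametric outputs this review describes.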
Affiliation(s)
- Guobao Wang: Department of Radiology, University of California Davis Health, Sacramento, CA 95817, USA
- Arman Rahmim: University of British Columbia, Vancouver, BC, Canada
|
50
|
Shiyam Sundar LK, Muzik O, Buvat I, Bidaut L, Beyer T. Potentials and caveats of AI in hybrid imaging. Methods 2020; 188:4-19. [PMID: 33068741 DOI: 10.1016/j.ymeth.2020.10.004] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2020] [Revised: 10/05/2020] [Accepted: 10/07/2020] [Indexed: 12/18/2022] Open
Abstract
State-of-the-art patient management frequently mandates the investigation of both anatomy and physiology of the patients. Hybrid imaging modalities such as the PET/MRI, PET/CT and SPECT/CT have the ability to provide both structural and functional information of the investigated tissues in a single examination. With the introduction of such advanced hardware fusion, new problems arise such as the exceedingly large amount of multi-modality data that requires novel approaches of how to extract a maximum of clinical information from large sets of multi-dimensional imaging data. Artificial intelligence (AI) has emerged as one of the leading technologies that has shown promise in facilitating highly integrative analysis of multi-parametric data. Specifically, the usefulness of AI algorithms in the medical imaging field has been heavily investigated in the realms of (1) image acquisition and reconstruction, (2) post-processing and (3) data mining and modelling. Here, we aim to provide an overview of the challenges encountered in hybrid imaging and discuss how AI algorithms can facilitate potential solutions. In addition, we highlight the pitfalls and challenges in using advanced AI algorithms in the context of hybrid imaging and provide suggestions for building robust AI solutions that enable reproducible and transparent research.
Affiliation(s)
- Lalith Kumar Shiyam Sundar: QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Irène Buvat: Laboratoire d'Imagerie Translationnelle en Oncologie, Inserm, Institut Curie, Orsay, France
- Luc Bidaut: College of Science, University of Lincoln, Lincoln, UK
- Thomas Beyer: QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
|