1
Ai X, Huang B, Chen F, Shi L, Li B, Wang S, Liu Q. RED: Residual estimation diffusion for low-dose PET sinogram reconstruction. Med Image Anal 2025; 102:103558. [PMID: 40121810] [DOI: 10.1016/j.media.2025.103558]
Abstract
Recent advances in diffusion models have demonstrated exceptional performance in generative tasks across many fields. In positron emission tomography (PET), reducing the tracer dose leads to information loss in sinograms, and diffusion models can be used to reconstruct the missing information and improve imaging quality. Traditional diffusion models rely on Gaussian noise for image reconstruction; in low-dose PET reconstruction, however, Gaussian noise can worsen the already sparse data by introducing artifacts and inconsistencies. To address this issue, we propose a diffusion model named residual estimation diffusion (RED). From the perspective of the diffusion mechanism, RED replaces the Gaussian noise in the diffusion process with the residual between sinograms, setting the low-dose and full-dose sinograms as the starting point and endpoint of reconstruction, respectively. This mechanism helps preserve the original information in the low-dose sinogram, thereby enhancing reconstruction reliability. From the perspective of data consistency, RED introduces a drift-correction strategy to reduce accumulated prediction errors during the reverse process: calibrating the intermediate results of the reverse iterations maintains data consistency and stabilizes the reconstruction. In the experiments, RED achieved the best performance across all metrics; in particular, PSNR improved by 2.75, 5.45, and 8.08 dB at dose reduction factors (DRF) of 4, 20, and 100, respectively, compared with traditional methods. The code is available at: https://github.com/yqx7150/RED.
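As a rough illustration of this residual mechanism (the linear schedule, function names, and step rule below are our own assumptions for exposition, not the authors' implementation), the forward process can be read as a deterministic walk along the residual between the two sinograms:

```python
import numpy as np

def forward_state(x_full, x_low, t, T):
    """Hypothetical forward process: move from the full-dose sinogram
    (t=0) toward the low-dose sinogram (t=T) along their residual,
    instead of adding Gaussian noise."""
    residual = x_low - x_full
    return x_full + (t / T) * residual

def reverse_step(x_t, est_residual, T):
    """Hypothetical reverse step: remove a 1/T fraction of the
    network-estimated residual at each iteration."""
    return x_t - est_residual / T
```

With a perfect residual estimate, T reverse steps starting from the low-dose sinogram recover the full-dose one exactly; the paper's drift-correction strategy would additionally recalibrate the intermediate states, which this sketch omits.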
Affiliation(s)
- Xingyu Ai
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Bin Huang
- School of Mathematics and Computer Sciences, Nanchang University, Nanchang 330031, China
- Fang Chen
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Liu Shi
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Binxuan Li
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230000, China
- Shaoyu Wang
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Qiegen Liu
- School of Information Engineering, Nanchang University, Nanchang 330031, China
2
Zhang Q, Huang Z, Jin Y, Li W, Zheng H, Liang D, Hu Z. Total-Body PET/CT: A Role of Artificial Intelligence? Semin Nucl Med 2025; 55:124-136. [PMID: 39368911] [DOI: 10.1053/j.semnuclmed.2024.09.002]
Abstract
The purpose of this paper is to provide an overview of cutting-edge applications of artificial intelligence (AI) in total-body positron emission tomography/computed tomography (PET/CT) and their profound impact on medical imaging. The introduction of total-body PET/CT scanners marked a major breakthrough in medical imaging: their superior sensitivity and ultralong axial fields of view allow high-quality PET images of the entire body to be obtained in a single scan, greatly enhancing diagnostic efficiency and accuracy. However, this advance brings ever-larger and more complex datasets, which pose severe challenges for traditional image processing and analysis methods. Given the excellent ability of AI to process massive, high-dimensional data, the combination of AI and ultrasensitive PET/CT is a natural complement, opening a new path to rapidly improving the efficiency of PET-based diagnosis. Recently, AI has demonstrated extraordinary potential in several key areas of total-body PET/CT, including radiation dose reduction, dynamic parametric imaging refinement, quantitative analysis accuracy, and image quality enhancement. The accelerated adoption of AI in clinical practice is directly driven by rapid progress in interpretability; that is, the decision-making processes of algorithms and models have become more transparent and understandable. In the future, we believe AI will fundamentally reshape the use of PET/CT, not only playing a more critical role in clinical diagnosis but also enabling the customization and implementation of personalized healthcare solutions, providing patients with safer, more accurate, and more efficient care.
Affiliation(s)
- Qiyang Zhang
- The Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhenxing Huang
- The Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yuxi Jin
- The Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wenbo Li
- The Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hairong Zheng
- The Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
- Dong Liang
- The Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhanli Hu
- The Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
3
Tang Y, Liang H, Yang X, Xue X, Zhan J. The metaverse in nuclear medicine: transformative applications, challenges, and future directions. Front Med (Lausanne) 2024; 11:1459701. [PMID: 39371341] [PMCID: PMC11452868] [DOI: 10.3389/fmed.2024.1459701]
Abstract
The metaverse, a rapidly evolving virtual reality space, holds immense potential to revolutionize nuclear medicine by enhancing education, training, diagnostics, and therapeutics. This review explores the transformative applications of the metaverse in nuclear medicine, where immersive virtual learning environments, simulation-based training, artificial intelligence (AI)-powered decision support systems integrated into interactive three-dimensional (3D) visualizations, and personalized dosimetry using realistic patient-specific virtual models are seamlessly incorporated into the metaverse ecosystem, creating a synergistic platform for healthcare professionals and patients alike. However, the responsible and sustainable adoption of the metaverse in nuclear medicine requires a multidisciplinary approach to address challenges related to standardization, accessibility, data security, and ethical concerns. The formation of cross-disciplinary consortia, increased research and development (R&D) investment, and the strengthening of data governance and cybersecurity measures are crucial steps in ensuring the safe and effective integration of the metaverse in healthcare. As the metaverse continues to evolve, researchers, practitioners, and policymakers must collaborate and explore its potential, navigate the challenges, and shape a future where technology and medicine seamlessly integrate to enhance patient care and outcomes in nuclear medicine. Further research is needed to fully understand the implications of the metaverse in clinical practice, education, and research, as well as to develop evidence-based guidelines for its responsible implementation. By embracing responsible innovation and collaboration, the nuclear medicine community can harness the power of the metaverse to transform and improve patient care.
Affiliation(s)
- Xiangming Xue
- Division of Radiology and Environmental Medicine, China Institute for Radiation Protection, Taiyuan, China
- Jingming Zhan
- Division of Radiology and Environmental Medicine, China Institute for Radiation Protection, Taiyuan, China
4
uz Zaman M, Fatima N. Artificial Intelligence (AI) in Nuclear Medicine: Is a Friend Not Foe. World J Nucl Med 2024; 23:1-2. [PMID: 38595844] [PMCID: PMC11001459] [DOI: 10.1055/s-0043-1777698]
Affiliation(s)
- Maseeh uz Zaman
- Department of Radiology, Aga Khan University Hospital, Karachi, Pakistan
- Nosheen Fatima
- Department of Radiology, Aga Khan University Hospital, Karachi, Pakistan
5
Galve P, Rodriguez-Vila B, Herraiz J, García-Vázquez V, Malpica N, Udias J, Torrado-Carvajal A. Recent advances in combined Positron Emission Tomography and Magnetic Resonance Imaging. J Instrum 2024; 19:C01001. [DOI: 10.1088/1748-0221/19/01/c01001]
Abstract
Hybrid imaging modalities combine two or more medical imaging techniques, offering exciting possibilities to image the structure, function, and biochemistry of the human body in far greater detail than previously possible and thereby improve patient diagnosis. In this context, simultaneous Positron Emission Tomography and Magnetic Resonance (PET/MR) imaging offers complementary information, but it also poses hardware and software compatibility challenges: the PET signal may interfere with the MR magnetic field and vice versa, imposing several constraints on PET instrumentation for PET/MR systems. Additionally, anatomical maps are needed to properly apply attenuation and scatter corrections to the reconstructed PET images, as well as motion estimates to minimize the effects of movement during the acquisition. In this review, we summarize the instrumentation implemented in modern PET scanners to overcome these limitations, describing the historical development of hybrid PET/MR scanners. We pay special attention to the methods used to achieve attenuation, scatter, and motion correction when PET is combined with MR, and to how both imaging modalities may be combined in PET image reconstruction algorithms.
6
Li J, Yang G, Zhang L. Artificial Intelligence Empowered Nuclear Medicine and Molecular Imaging in Cardiology: A State-of-the-Art Review. Phenomics (Cham) 2023; 3:586-596. [PMID: 38223683] [PMCID: PMC10781930] [DOI: 10.1007/s43657-023-00137-7]
Abstract
Nuclear medicine and molecular imaging play a significant role in the detection and management of cardiovascular disease (CVD). With recent advancements in computing power and the availability of digital archives, artificial intelligence (AI) is rapidly gaining traction in medical imaging, including nuclear medicine and molecular imaging. However, the complex and time-consuming workflow and interpretation involved limit their extensive use in clinical practice. To address this challenge, AI has emerged as a fundamental tool for enhancing the role of nuclear medicine and molecular imaging. It has shown promising applications in crucial aspects of nuclear cardiology, such as optimizing imaging protocols, facilitating data processing, and aiding CVD diagnosis, risk classification, and prognosis. In this review, we introduce the key concepts of AI, provide an overview of its current progress in nuclear cardiology, and discuss future perspectives for AI in this domain.
Affiliation(s)
- Junhao Li
- Department of Nuclear Medicine, Jinling Hospital, Affiliated Hospital of Medical School, Nanjing University, 305 Zhongshan East Road, Xuanwu District, Nanjing, 210002 Jiangsu China
- Guifen Yang
- Department of Nuclear Medicine, Jinling Hospital, Affiliated Hospital of Medical School, Nanjing University, 305 Zhongshan East Road, Xuanwu District, Nanjing, 210002, Jiangsu, China
- Longjiang Zhang
- Department of Radiology, Jinling Hospital, Affiliated Hospital of Medical School, Nanjing University, 305 Zhongshan East Road, Xuanwu District, Nanjing, 210002, Jiangsu, China
7
You S, Lei B, Wang S, Chui CK, Cheung AC, Liu Y, Gan M, Wu G, Shen Y. Fine Perceptive GANs for Brain MR Image Super-Resolution in Wavelet Domain. IEEE Trans Neural Netw Learn Syst 2023; 34:8802-8814. [PMID: 35254996] [DOI: 10.1109/tnnls.2022.3153088]
Abstract
Magnetic resonance (MR) imaging plays an important role in clinical practice and brain exploration. However, limited by factors such as imaging hardware, scanning time, and cost, it is challenging to acquire high-resolution MR images clinically. In this article, fine perceptive generative adversarial networks (FP-GANs) are proposed to produce super-resolution (SR) MR images from their low-resolution counterparts. Adopting a divide-and-conquer scheme, FP-GANs process the low-frequency (LF) and high-frequency (HF) components of MR images separately and in parallel. Specifically, FP-GANs first decompose an MR image into an LF global-approximation subband and HF anatomical-texture subbands in the wavelet domain; each subband generative adversarial network (GAN) then concentrates on super-resolving its corresponding subband image. In the generator, multiple residual-in-residual dense blocks are introduced for better feature extraction, and a texture-enhancing module is designed to trade off global topology against detailed textures. Finally, the whole image is reconstructed by integrating the inverse discrete wavelet transform into FP-GANs. Comprehensive experiments on the MultiRes_7T and ADNI datasets demonstrate that the proposed model achieves finer structure recovery and outperforms competing methods quantitatively and qualitatively. Moreover, FP-GANs demonstrate further value when the SR results are applied to classification tasks.
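The wavelet split at the core of FP-GANs can be illustrated with a single-level 2D Haar transform: one low-frequency approximation subband (LL) plus three high-frequency detail subbands (LH, HL, HH), invertible for reconstruction. This is a generic minimal implementation of the decomposition, not the authors' code:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT: returns the LL approximation subband
    and the (LH, HL, HH) detail subbands (image sides must be even)."""
    a, b = img[0::2, :], img[1::2, :]
    lo, hi = (a + b) / 2.0, (a - b) / 2.0          # row-wise average / difference
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2.0          # low-low: global approximation
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2.0          # detail subbands
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def haar_idwt2(ll, details):
    """Inverse of haar_dwt2: perfect reconstruction from the subbands."""
    lh, hl, hh = details
    lo = np.zeros((ll.shape[0], ll.shape[1] * 2))
    hi = np.zeros_like(lo)
    lo[:, 0::2], lo[:, 1::2] = ll + lh, ll - lh     # undo column split
    hi[:, 0::2], hi[:, 1::2] = hl + hh, hl - hh
    out = np.zeros((lo.shape[0] * 2, lo.shape[1]))
    out[0::2, :], out[1::2, :] = lo + hi, lo - hi   # undo row split
    return out
```

In the FP-GANs scheme, each of the four subbands would be fed to its own GAN, and `haar_idwt2` corresponds to the final inverse-DWT recombination step.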
8
Küper A, Blanc-Durand P, Gafita A, Kersting D, Fendler WP, Seibold C, Moraitis A, Lückerath K, James ML, Seifert R. Is There a Role of Artificial Intelligence in Preclinical Imaging? Semin Nucl Med 2023; 53:687-693. [PMID: 37037684] [DOI: 10.1053/j.semnuclmed.2023.03.003]
Abstract
This review provides an overview of current opportunities for integrating artificial intelligence methods into preclinical imaging research in nuclear medicine. The growing demand for imaging agents and therapeutics adapted to specific tumor phenotypes can be excellently served by the evolving capabilities of molecular imaging and theranostics. However, the rapid development of novel, specific radioligands with minimal side effects, which excel in diagnostic imaging and achieve significant therapeutic effects, requires a challenging preclinical pipeline: from target identification through chemical, physical, and biological development to clinical trials, coupled with dosimetry and various pre-, interim-, and post-treatment staging images that create a translational feedback loop for evaluating the efficacy of diagnostic or therapeutic ligands. In virtually all areas of this pipeline, artificial intelligence, and in particular deep-learning systems such as neural networks, could not only address these challenges but also provide insights that would not have been possible otherwise. In the future, we expect that not only the clinical aspects of nuclear medicine will be supported by artificial intelligence, but that there will also be a general shift toward AI-assisted in silico research addressing the increasingly complex task of identifying targets for cancer patients and developing radioligands.
Affiliation(s)
- Alina Küper
- Department of Nuclear Medicine, University Hospital Essen; West German Cancer Center; German Cancer Consortium (DKTK), Essen, Germany
- Paul Blanc-Durand
- Department of Nuclear Medicine, Assistance Publique - Hôpitaux de Paris, Paris, France
- Andrei Gafita
- Division of Nuclear Medicine and Molecular Imaging, The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD
- David Kersting
- Department of Nuclear Medicine, University Hospital Essen; West German Cancer Center; German Cancer Consortium (DKTK), Essen, Germany
- Wolfgang P Fendler
- Department of Nuclear Medicine, University Hospital Essen; West German Cancer Center; German Cancer Consortium (DKTK), Essen, Germany
- Constantin Seibold
- Computer Vision for Human-Computer Interaction Lab, Karlsruhe Institute of Technology, Karlsruhe, Germany
- Alexandros Moraitis
- Department of Nuclear Medicine, University Hospital Essen; West German Cancer Center; German Cancer Consortium (DKTK), Essen, Germany
- Katharina Lückerath
- Department of Nuclear Medicine, University Hospital Essen; West German Cancer Center; German Cancer Consortium (DKTK), Essen, Germany
- Michelle L James
- Department of Radiology, Stanford University School of Medicine, Stanford, CA; Department of Neurology and Neurological Sciences, Stanford University School of Medicine, Stanford, CA
- Robert Seifert
- Department of Nuclear Medicine, University Hospital Essen; West German Cancer Center; German Cancer Consortium (DKTK), Essen, Germany
9
Zhu W, Lee SJ. Similarity-Driven Fine-Tuning Methods for Regularization Parameter Optimization in PET Image Reconstruction. Sensors (Basel) 2023; 23:5783. [PMID: 37447633] [DOI: 10.3390/s23135783]
Abstract
We present an adaptive method for fine-tuning hyperparameters in edge-preserving regularization for PET image reconstruction. In edge-preserving regularization, in addition to the smoothing parameter that balances data fidelity and regularization, one or more control parameters are typically incorporated to adjust the sensitivity of edge preservation by modifying the shape of the penalty function. Although automated methods have been developed for tuning hyperparameters in regularized PET reconstruction, most focus solely on the smoothing parameter; yet it is difficult to obtain high-quality images without appropriately selecting the control parameters that govern edge-preservation sensitivity. In this work, we propose a method to precisely tune hyperparameters that are initially set to a fixed value for the entire image, either manually or automatically. Our core strategy is to adaptively adjust the control parameter at each pixel, according to the degree of patch similarity, computed from the previous iteration, within the neighborhood of the pixel being updated. This allows our method to integrate with a wide range of existing parameter-tuning techniques for edge-preserving regularization. Experimental results demonstrate that the proposed method effectively enhances overall reconstruction accuracy across multiple image quality metrics, including peak signal-to-noise ratio, structural similarity, visual information fidelity, mean absolute error, root-mean-square error, and mean percentage error.
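A schematic of the per-pixel adaptation described above; the Gaussian similarity kernel, window sizes, and the way similarity rescales the control parameter are illustrative assumptions of ours, not the authors' formula:

```python
import numpy as np

def adaptive_control_map(img, delta0, patch=1, search=2, h=0.05):
    """Per-pixel control parameter map: in locally uniform regions the
    mean patch similarity is near 1 and delta stays near delta0; near
    edges similarity drops, shrinking delta to preserve the edge.
    A schematic sketch, not the paper's exact rule."""
    H, W = img.shape
    pad = patch + search
    padded = np.pad(img, pad, mode='edge')
    delta = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = padded[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            sims = []
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    if di == 0 and dj == 0:
                        continue
                    nb = padded[ci + di - patch:ci + di + patch + 1,
                                cj + dj - patch:cj + dj + patch + 1]
                    d2 = np.mean((ref - nb) ** 2)       # patch distance
                    sims.append(np.exp(-d2 / h ** 2))   # similarity in (0, 1]
            delta[i, j] = delta0 * np.mean(sims)        # shrink delta near edges
    return delta
```

In an iterative reconstruction, this map would be recomputed from the previous iterate and plugged into the edge-preserving penalty in place of the single global control parameter.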
Affiliation(s)
- Wen Zhu
- Department of Electrical and Electronic Engineering, Pai Chai University, Daejeon 35345, Republic of Korea
- Soo-Jin Lee
- Department of Electrical and Electronic Engineering, Pai Chai University, Daejeon 35345, Republic of Korea
10
Weyts K, Quak E, Licaj I, Ciappuccini R, Lasnon C, Corroyer-Dulmont A, Foucras G, Bardet S, Jaudet C. Deep Learning Denoising Improves and Homogenizes Patient [18F]FDG PET Image Quality in Digital PET/CT. Diagnostics (Basel) 2023; 13:1626. [PMID: 37175017] [PMCID: PMC10177812] [DOI: 10.3390/diagnostics13091626]
Abstract
Given the constant pressure to increase patient throughput while respecting radiation protection, global body PET image quality (IQ) is not satisfactory in all patients. We first studied the association between IQ and other variables, in particular body habitus, on a digital PET/CT. Second, to improve and homogenize IQ, we evaluated a deep learning PET denoising solution (Subtle PET™) based on convolutional neural networks. In 113 patients, we retrospectively analysed visual IQ (5-point Likert score, two readers) and semi-quantitative IQ (coefficient of variation in the liver, CVliv), as well as lesion detection and quantification, in native and denoised PET. In native PET, visual and semi-quantitative IQ were lower in patients with larger body habitus (p < 0.0001 for both) and in men versus women (p ≤ 0.03 for CVliv). After PET denoising, visual IQ scores increased and became more homogeneous between patients (4.8 ± 0.3 in denoised vs. 3.6 ± 0.6 in native PET; p < 0.0001). CVliv was lower in denoised than in native PET (6.9 ± 0.9% vs. 12.2 ± 1.6%; p < 0.0001). The slope of CVliv against weight, calculated by linear regression, was significantly lower in denoised than in native PET (p = 0.0002), demonstrating more uniform CVliv. The lesion concordance rate between the two PET series was 369/371 (99.5%), with two lesions detected exclusively in native PET. SUVmax and SUVpeak of up to the five most intense native PET lesions per patient were lower in denoised PET (p < 0.001), with average relative biases of -7.7% and -2.8%, respectively. DL-based PET denoising with Subtle PET™ improved and homogenized global [18F]FDG PET image quality while maintaining satisfactory lesion detection and quantification. DL-based denoising may render body-habitus-adaptive PET protocols unnecessary and pave the way for the improvement and homogenization of PET modalities.
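The semi-quantitative metric used above, the coefficient of variation in a liver ROI, is simply the ROI standard deviation divided by its mean; a minimal sketch (the function name is ours):

```python
import numpy as np

def cv_percent(roi_values):
    """Coefficient of variation of voxel values in an ROI, in percent.
    A lower CV in a homogeneous organ such as the liver indicates a
    less noisy image."""
    roi = np.asarray(roi_values, dtype=float)
    return 100.0 * roi.std() / roi.mean()
```

For example, liver voxels with mean 10 and standard deviation 1 give a CV of 10%, and the study's drop from about 12% to 7% after denoising corresponds to a visibly smoother liver.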
Affiliation(s)
- Kathleen Weyts
- Department of Nuclear Medicine, Baclesse Cancer Centre, 14076 Caen, France
- Elske Quak
- Department of Nuclear Medicine, Baclesse Cancer Centre, 14076 Caen, France
- Idlir Licaj
- Department of Biostatistics, Baclesse Cancer Centre, 14076 Caen, France
- Department of Community Medicine, Faculty of Health Sciences, UiT The Arctic University of Norway, 9019 Tromsø, Norway
- Renaud Ciappuccini
- Department of Nuclear Medicine, Baclesse Cancer Centre, 14076 Caen, France
- Charline Lasnon
- Department of Nuclear Medicine, Baclesse Cancer Centre, 14076 Caen, France
- Aurélien Corroyer-Dulmont
- Department of Medical Physics, Baclesse Cancer Centre, 14076 Caen, France
- ISTCT Unit, CNRS, UNICAEN, Normandy University, GIP CYCERON, 14074 Caen, France
- Gauthier Foucras
- Department of Nuclear Medicine, Baclesse Cancer Centre, 14076 Caen, France
- Stéphane Bardet
- Department of Nuclear Medicine, Baclesse Cancer Centre, 14076 Caen, France
- Cyril Jaudet
- Department of Nuclear Medicine, Baclesse Cancer Centre, 14076 Caen, France
- Department of Medical Physics, Baclesse Cancer Centre, 14076 Caen, France
11
Fang R, Guo R, Zhao M, Yao M. FBP-CNN: A Direct PET Image Reconstruction Network for Flow Visualization. Adv Theory Simul 2023. [DOI: 10.1002/adts.202200604]
12
Sanaat A, Jamalizadeh M, Khanmohammadi H, Arabi H, Zaidi H. Active-PET: a multifunctional PET scanner with dynamic gantry size featuring high-resolution and high-sensitivity imaging: a Monte Carlo simulation study. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac7fd8]
Abstract
Organ-specific PET scanners have been developed to provide both high spatial resolution and sensitivity, although deploying several dedicated PET scanners at the same center is costly and space-consuming. Active-PET is a multifunctional PET scanner design that exploits the advantages of two different types of detector modules and mechanical arm mechanisms enabling repositioning of the detectors, allowing different geometries/configurations to be implemented. Active-PET can be used for different applications, including brain, axilla, breast, prostate, whole-body, preclinical, and pediatric imaging, cell tracking, and image guidance for therapy. Monte Carlo techniques were used to simulate a PET scanner with two sets of high-resolution and high-sensitivity pixelated lutetium oxyorthosilicate (LSO(Ce)) detector blocks (24 in each group, 48 detector modules per ring in total), one with large pixel size (4 × 4 mm²) and crystal thickness (20 mm), and another with small pixel size (2 × 2 mm²) and thickness (10 mm). Each row of detector modules is connected to a linear motor that can displace the detectors forward and backward along the radial axis to achieve a variable gantry diameter, so that the target subject can be imaged at the optimal/desired resolution and/or sensitivity. At the center of the field of view, the highest sensitivity (15.98 kcps MBq⁻¹) was achieved by the scanner with a small gantry and high-sensitivity detectors, while the best spatial resolution was obtained with a small gantry and high-resolution detectors (2.2 mm, 2.3 mm, and 2.5 mm FWHM in the tangential, radial, and axial directions, respectively). The large-bore configuration (combining high-resolution and high-sensitivity detectors) achieved better performance and provided higher image quality than the Biograph mCT, as reflected by a 3D Hoffman brain phantom simulation study. We thus introduce the concept of a non-static PET scanner capable of switching between large and small fields of view as well as between high-resolution and high-sensitivity imaging.
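The trade-off driving the movable-detector design can be seen in a textbook solid-angle estimate for a point source at the centre of a cylindrical detector ring (a first-order geometric approximation of ours, not the paper's Monte Carlo model): shrinking the ring radius R raises geometric efficiency, at the cost of a smaller bore.

```python
import math

def geometric_efficiency(R, L):
    """Fraction of isotropic emissions from a centred point source whose
    line of flight crosses a detector ring of radius R and axial length L
    (same units): L / sqrt(L^2 + 4 R^2)."""
    return L / math.sqrt(L ** 2 + 4.0 * R ** 2)
```

As R shrinks toward zero the fraction approaches 1, which is why pulling the detector modules radially inward for small-organ imaging boosts sensitivity even with the same crystals.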
13
Manimegalai P, Suresh Kumar R, Valsalan P, Dhanagopal R, Vasanth Raj PT, Christhudass J. 3D Convolutional Neural Network Framework with Deep Learning for Nuclear Medicine. Scanning 2022; 2022:9640177. [PMID: 35924105] [PMCID: PMC9308558] [DOI: 10.1155/2022/9640177]
Abstract
Though artificial intelligence (AI) has been used in nuclear medicine for more than 50 years, recent progress in deep learning (DL) and machine learning (ML) has driven the development of new AI capabilities in the field. Artificial neural networks (ANNs) underlie both deep learning and machine learning in nuclear medicine. When a 3D convolutional neural network (CNN) is used, the inputs may be the images themselves rather than a set of precomputed features. In nuclear medicine, artificial intelligence reimagines and reengineers the field's therapeutic and scientific capabilities. Understanding the concepts of 3D CNNs and U-Net in the context of nuclear medicine allows deeper engagement with clinical and research applications, as well as the ability to troubleshoot problems when they emerge. Business analytics, risk assessment, quality assurance, and basic classification are examples of simple ML applications. General nuclear medicine, SPECT, PET, MRI, and CT may benefit from more advanced DL applications for classification, detection, localization, segmentation, quantification, and radiomic feature extraction using 3D CNNs. An ANN may be used to analyze small datasets, alongside traditional statistical methods, as well as bigger ones. Nuclear medicine's clinical and research practices had been largely unaffected by the introduction of AI until the advent of 3D CNN and U-Net applications, which have fundamentally altered the clinical and research landscapes. Nuclear medicine professionals must now have at least an elementary understanding of AI principles such as ANNs and CNNs.
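The core operation behind the 3D CNNs discussed above is a convolution over a volume rather than a 2D slice; a minimal valid-mode sketch, written with naive loops for clarity rather than as a production implementation:

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid-mode 3D convolution (strictly, cross-correlation, as in
    CNN frameworks): slide the kernel over every position of the volume
    and sum the element-wise products."""
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i + d, j:j + h, k:k + w] * kernel)
    return out
```

A 3D CNN layer is this operation applied with many learned kernels (plus bias and nonlinearity); on SPECT/PET volumes it lets the network exploit inter-slice context that a 2D CNN cannot see.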
Affiliation(s)
- P. Manimegalai
- Department of Biomedical Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
- R. Suresh Kumar
- Center for System Design, Chennai Institute of Technology, Chennai, India
- Prajoona Valsalan
- Department of Electrical and Computer Engineering, Dhofar University, Salalah, Oman
- R. Dhanagopal
- Center for System Design, Chennai Institute of Technology, Chennai, India
- P. T. Vasanth Raj
- Center for System Design, Chennai Institute of Technology, Chennai, India
- Jerome Christhudass
- Department of Biomedical Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
14
Tamam MO, Tamam MC. Artificial intelligence technologies in nuclear medicine. World J Radiol 2022; 14:151-154. [PMID: 35978976] [PMCID: PMC9258309] [DOI: 10.4329/wjr.v14.i6.151]
Abstract
The use of artificial intelligence plays a crucial role in developing precision medicine in nuclear medicine. Artificial intelligence refers to a field of computer science aimed at imitating the performance of tasks that typically require human intelligence. From machine learning to generative adversarial networks, artificial intelligence has automated much of the medical imaging workflow. In this mini-review, we summarize artificial intelligence models and their use in the nuclear medicine imaging workflow.
Affiliation(s)
- Muge Oner Tamam: Department of Nuclear Medicine, Prof. Dr. Cemil Tascioglu City Hospital, İstanbul 34381, Turkey

15
Pan B, Qi N, Meng Q, Wang J, Peng S, Qi C, Gong NJ, Zhao J. Ultra high speed SPECT bone imaging enabled by a deep learning enhancement method: a proof of concept. EJNMMI Phys 2022; 9:43. [PMID: 35698006; PMCID: PMC9192886; DOI: 10.1186/s40658-022-00472-0]
Abstract
Background: To generate high-quality bone scan SPECT images from SPECT images acquired in only 1/7 of the scan time, using a deep learning-based enhancement method. Materials and methods: Normal-dose (925–1110 MBq) clinical technetium-99m methyl diphosphonate (99mTc-MDP) SPECT/CT images and corresponding 1/7-scan-time SPECT/CT images from 20 adult patients with bone disease and a phantom were collected to develop a lesion-attention weighted U2-Net (Qin et al., Pattern Recognit 106:107404, 2020), which produces high-quality SPECT images from fast SPECT/CT images. The quality of SPECT images synthesized by different deep learning models was compared using PSNR and SSIM. Clinical evaluation on a 5-point Likert scale (5 = excellent) was performed by two experienced nuclear physicians. Average scores and the Wilcoxon test were used to assess the image quality of 1/7-time SPECT, DL-enhanced SPECT, and standard SPECT. SUVmax, SUVmean, SSIM, and PSNR for each detectable sphere filled with imaging agent were measured and compared across the different images. Results: The U2-Net-based model reached the best PSNR (40.8) and SSIM (0.788) compared with other advanced deep learning methods. Clinical evaluation showed that the quality of the synthesized SPECT images was much higher than that of the fast SPECT images (P < 0.05). Compared to the standard SPECT images, the enhanced images exhibited the same general image quality (P > 0.999), similar 99mTc-MDP detail (P = 0.125), and the same diagnostic confidence (P = 0.1875). Four, five, and six spheres could be distinguished on 1/7-time SPECT, DL-enhanced SPECT, and standard SPECT, respectively. The DL-enhanced phantom image outperformed 1/7-time SPECT in SUVmax, SUVmean, SSIM, and PSNR in quantitative assessment. Conclusions: The proposed method yields significant image quality improvements in noise level, anatomical detail, and SUV accuracy, enabling ultra-fast SPECT bone imaging in real clinical settings.
Affiliation(s)
- Boyang Pan: RadioDynamic Healthcare, Shanghai, China
- Na Qi: Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, No. 150 Jimo Road, Pudong New District, Shanghai, China
- Qingyuan Meng: Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, No. 150 Jimo Road, Pudong New District, Shanghai, China
- Siyue Peng: RadioDynamic Healthcare, Shanghai, China
- Nan-Jie Gong: Vector Lab for Intelligent Medical Imaging and Neural Engineering, International Innovation Center of Tsinghua University, No. 602 Tongpu Street, Putuo District, Shanghai, China
- Jun Zhao: Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, No. 150 Jimo Road, Pudong New District, Shanghai, China

16
Artificial intelligence-based PET denoising could allow a two-fold reduction in [18F]FDG PET acquisition time in digital PET/CT. Eur J Nucl Med Mol Imaging 2022; 49:3750-3760. [PMID: 35593925; PMCID: PMC9399218; DOI: 10.1007/s00259-022-05800-1]
Abstract
Purpose: We investigated whether artificial intelligence (AI)-based denoising halves PET acquisition time in digital PET/CT. Methods: One hundred ninety-five patients referred for [18F]FDG PET/CT were prospectively included. Body PET acquisitions were performed in list mode. The original "PET90" (90 s/bed position) was compared to reconstructed ½-duration PET (45 s/bed position) with and without AI denoising, "PET45AI" and "PET45". Denoising was performed by SubtlePET™ using deep convolutional neural networks. Visual global image quality (IQ) on a 3-point score and lesion detectability were evaluated. Lesion maximal and peak standardized uptake values using lean body mass (SULmax and SULpeak), metabolic volumes (MV), and liver SULmean were measured, including both standard and EARL1 (European Association of Nuclear Medicine Research Ltd) compliant SUL. Lesion-to-liver SUL ratios (LLR) and liver coefficients of variation (CVliv) were calculated. Results: PET45 showed mediocre IQ (scored poor in 8% and moderate in 68%) and a lesion concordance rate with PET90 of 88.7%. In PET45AI, IQ scores were similar to PET90 (P = 0.80), good in 92% and moderate in 8% for both. The lesion concordance rate between PET90 and PET45AI was 836/856 (97.7%), with 7 lesions (0.8%) detected only in PET90 and 13 (1.5%) exclusively in PET45AI. Lesion EARL1 SULpeak was not significantly different between the two PET series (P = 0.09). Lesion standard SULpeak, standard and EARL1 SULmax, LLR and CVliv were lower in PET45AI than in PET90 (P < 0.0001), while lesion MV and liver SULmean were higher (P < 0.0001). Good to excellent intraclass correlation coefficients (ICC) between PET90 and PET45AI were observed for lesion SUL and MV (ICC ≥ 0.97) and for liver SULmean (ICC ≥ 0.87). Conclusion: AI allows [18F]FDG PET duration in digital PET/CT to be halved while restoring the degraded ½-duration PET image quality. Future multicentric studies, including other PET radiopharmaceuticals, are warranted.
Supplementary Information: The online version contains supplementary material available at 10.1007/s00259-022-05800-1.
17
Pain CD, Egan GF, Chen Z. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement. Eur J Nucl Med Mol Imaging 2022; 49:3098-3118. [PMID: 35312031; PMCID: PMC9250483; DOI: 10.1007/s00259-022-05746-4]
Abstract
Image processing plays a crucial role in maximising the diagnostic quality of positron emission tomography (PET) images. Recently, deep learning methods developed across many fields have shown tremendous potential when applied to medical image enhancement, resulting in a rich and rapidly advancing literature on the subject. This review encapsulates methods for integrating deep learning into PET image reconstruction and post-processing for low-dose imaging and resolution enhancement. A brief introduction to conventional image processing techniques in PET is first presented. We then review methods that integrate deep learning into the image reconstruction framework, either as deep learning-based regularisation or as a fully data-driven mapping from measured signal to images. Deep learning-based post-processing methods for low-dose imaging, temporal resolution enhancement and spatial resolution enhancement are also reviewed. Finally, the challenges associated with applying deep learning to enhance PET images in the clinical setting are discussed, and future research directions to address these challenges are presented.
Affiliation(s)
- Cameron Dennis Pain: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
- Gary F Egan: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia
- Zhaolin Chen: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Department of Data Science and AI, Monash University, Melbourne, Australia

18
Luo Y, Zhou L, Zhan B, Fei Y, Zhou J, Wang Y, Shen D. Adaptive rectification based adversarial network with spectrum constraint for high-quality PET image synthesis. Med Image Anal 2021; 77:102335. [PMID: 34979432; DOI: 10.1016/j.media.2021.102335]
Abstract
Positron emission tomography (PET) is a typical nuclear imaging technique that can provide crucial functional information for early brain disease diagnosis. Generally, clinically acceptable PET images are obtained by injecting a standard-dose radioactive tracer into the human body, while the cumulative radiation exposure inevitably raises concerns about potential health risks. However, reducing the tracer dose increases the noise and artifacts of the reconstructed PET image. For the purpose of acquiring high-quality PET images while reducing radiation exposure, in this paper we present an adaptive rectification based generative adversarial network with spectrum constraint, named AR-GAN, which uses low-dose PET (LPET) images to synthesize high-quality standard-dose PET (SPET) images. Specifically, considering the differences between SPET images synthesized by a traditional GAN and real SPET images, an adaptive rectification network (AR-Net) is devised to estimate the residual between the preliminarily predicted image and the real SPET image, based on the hypothesis that a more realistic rectified image can be obtained by incorporating both the residual and the preliminarily predicted PET image. Moreover, to address the issue of high-frequency distortions in the output image, we employ a spectral regularization term in the training objective to constrain the consistency of the synthesized and real images in the frequency domain, which further preserves high-frequency detail and improves synthesis performance. Validations on both a phantom dataset and a clinical dataset show that the proposed AR-GAN estimates SPET images from LPET images effectively and outperforms other state-of-the-art image synthesis approaches.
Affiliation(s)
- Yanmei Luo: School of Computer Science, Sichuan University, China
- Luping Zhou: School of Electrical and Information Engineering, University of Sydney, Australia
- Bo Zhan: School of Computer Science, Sichuan University, China
- Yuchen Fei: School of Computer Science, Sichuan University, China
- Jiliu Zhou: School of Computer Science, Sichuan University, China; School of Computer Science, Chengdu University of Information Technology, China
- Yan Wang: School of Computer Science, Sichuan University, China
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China

19
Wang S, Cao G, Wang Y, Liao S, Wang Q, Shi J, Li C, Shen D. Review and Prospect: Artificial Intelligence in Advanced Medical Imaging. Front Radiol 2021; 1:781868. [PMID: 37492170; PMCID: PMC10365109; DOI: 10.3389/fradi.2021.781868]
Abstract
Artificial intelligence (AI) as an emerging technology is gaining momentum in medical imaging. Recently, deep learning-based AI techniques have been actively investigated in medical imaging, with potential applications ranging from data acquisition and image reconstruction to image analysis and understanding. In this review, we focus on the use of deep learning in image reconstruction for advanced medical imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). In particular, recent deep learning-based reconstruction methods are emphasized, organized according to their methodological designs and their performance in handling volumetric imaging data. We expect this review to help relevant researchers understand how to adapt AI for medical imaging and what advantages can be achieved with its assistance.
Affiliation(s)
- Shanshan Wang: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China; Pengcheng Laboratory, Shenzhen, China
- Guohua Cao: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Yan Wang: School of Computer Science, Sichuan University, Chengdu, China
- Shu Liao: Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Qian Wang: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Jun Shi: School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Cheng Li: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China

20
Sanaat A, Shooli H, Ferdowsi S, Shiri I, Arabi H, Zaidi H. DeepTOFSino: A deep learning model for synthesizing full-dose time-of-flight bin sinograms from their corresponding low-dose sinograms. Neuroimage 2021; 245:118697. [PMID: 34742941; DOI: 10.1016/j.neuroimage.2021.118697]
Abstract
PURPOSE: Reducing the injected activity and/or the scanning time is a desirable goal to minimize radiation exposure and maximize patients' comfort. To achieve this goal, we developed a deep neural network (DNN) model for synthesizing full-dose (FD) time-of-flight (TOF) bin sinograms from their corresponding fast/low-dose (LD) TOF bin sinograms. METHODS: Clinical brain PET/CT raw data of 140 normal and abnormal patients were employed to create LD and FD TOF bin sinograms. The LD TOF sinograms were created through 5% undersampling of FD list-mode PET data. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). Residual network (ResNet) algorithms were trained separately to generate FD bins from LD bins. An extra ResNet model was trained to synthesize FD images from LD images in order to compare the performance of the DNN in sinogram space (SS) versus implementation in image space (IS). Comprehensive quantitative and statistical analysis was performed to assess the performance of the proposed model using established metrics, including the peak signal-to-noise ratio (PSNR), the structural similarity index metric (SSIM), region-wise standardized uptake value (SUV) bias, and statistical analysis for 83 brain regions. RESULTS: SSIM and PSNR values of 0.97 ± 0.01, 0.98 ± 0.01 and 33.70 ± 0.32, 39.36 ± 0.21 were obtained for IS and SS, respectively, compared to 0.86 ± 0.02 and 31.12 ± 0.22 for reference LD images. The absolute average SUV bias was 0.96 ± 0.95% and 1.40 ± 0.72% for the SS and IS implementations, respectively. The joint histogram analysis revealed that the lowest mean square error (MSE) and highest correlation (R2 = 0.99, MSE = 0.019) were achieved by SS compared to IS (R2 = 0.97, MSE = 0.028). The Bland-Altman analysis showed that the lowest SUV bias (-0.4%) and minimum variance (95% CI: -2.6%, +1.9%) were achieved by SS images. The voxel-wise t-test analysis revealed voxels with statistically significantly lower values in LD, IS, and SS images compared to FD images. CONCLUSION: The results demonstrated that images reconstructed from the predicted TOF FD sinograms using the SS approach led to higher image quality and lower bias compared to images predicted from LD images.
Affiliation(s)
- Amirhossein Sanaat: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Shooli: Persian Gulf Nuclear Medicine Research Center, Department of Molecular Imaging and Radionuclide Therapy (MIRT), Bushehr Medical University Hospital, Faculty of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Sohrab Ferdowsi: University of Applied Sciences and Arts of Western Switzerland, Geneva, Switzerland
- Isaac Shiri: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, University of Geneva, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark

21
Cheng Z, Wen J, Huang G, Yan J. Applications of artificial intelligence in nuclear medicine image generation. Quant Imaging Med Surg 2021; 11:2792-2822. [PMID: 34079744; PMCID: PMC8107336; DOI: 10.21037/qims-20-1078]
Abstract
Recently, the application of artificial intelligence (AI) in medical imaging, including nuclear medicine imaging, has developed rapidly. Most AI applications in nuclear medicine imaging have focused on diagnosis, treatment monitoring, and correlation analyses with pathology or specific gene mutations. AI can also be used for image generation to shorten the time of image acquisition, reduce the dose of injected tracer, and enhance image quality. This work provides an overview of the application of AI in image generation for single-photon emission computed tomography (SPECT) and positron emission tomography (PET), either with or without anatomical information [CT or magnetic resonance imaging (MRI)]. This review focuses on four aspects: imaging physics, image reconstruction, image postprocessing, and internal dosimetry. AI applications in generating attenuation maps, estimating scatter events, boosting image quality, and predicting internal dose maps are summarized and discussed.
Affiliation(s)
- Zhibiao Cheng: Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
- Junhai Wen: Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
- Gang Huang: Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
- Jianhua Yan: Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China

22
Jha AK, Mithun S, Rangarajan V, Wee L, Dekker A. Emerging role of artificial intelligence in nuclear medicine. Nucl Med Commun 2021; 42:592-601. [PMID: 33660696; DOI: 10.1097/mnm.0000000000001381]
Abstract
The role of artificial intelligence is increasing in all branches of medicine. Emerging artificial intelligence applications are going to improve the nuclear medicine clinical workflow in the coming years. Initial research outcomes suggest an increasing role for artificial intelligence in the nuclear medicine workflow, particularly for selective automation tasks. Artificial intelligence-assisted planning, dosimetry, and procedure execution appear to be areas for rapid and significant development. The role of artificial intelligence in more directly imaging-related tasks, such as dose optimization, image correction, and image reconstruction, has been a particularly strong point of artificial intelligence research in nuclear medicine. Natural language processing (NLP)-based text processing is another area of interest for artificial intelligence implementation in nuclear medicine.
Affiliation(s)
- Ashish Kumar Jha: Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre, Maastricht, The Netherlands; Department of Nuclear Medicine and Molecular Imaging, Tata Memorial Hospital
- Sneha Mithun: Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre, Maastricht, The Netherlands; Department of Nuclear Medicine and Molecular Imaging, Tata Memorial Hospital; Homi Bhabha National Institute (HBNI), Deemed University, Mumbai, India
- Venkatesh Rangarajan: Department of Nuclear Medicine and Molecular Imaging, Tata Memorial Hospital; Homi Bhabha National Institute (HBNI), Deemed University, Mumbai, India
- Leonard Wee: Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre, Maastricht, The Netherlands
- Andre Dekker: Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre, Maastricht, The Netherlands

23
Lv Y, Xi C. PET image reconstruction with deep progressive learning. Phys Med Biol 2021; 66. [PMID: 33892485; DOI: 10.1088/1361-6560/abfb17]
Abstract
Convolutional neural networks (CNNs) have recently achieved state-of-the-art results for positron emission tomography (PET) imaging problems. However, direct learning from an input image to a target image is challenging when the gap between the two images is large. Previous studies have shown that CNNs can reduce image noise, but they can also degrade contrast recovery for small lesions. In this work, a deep progressive learning (DPL) method for PET image reconstruction is proposed to reduce background noise and improve image contrast. DPL bridges the gap between low-quality and high-quality images through two learning steps. In the iterative reconstruction process, two pre-trained neural networks are introduced to control the image noise and contrast in turn. A feedback structure is adopted in the network design, which greatly reduces the number of parameters. The training data come from uEXPLORER, the world's first total-body PET scanner, whose PET images show high contrast and very low image noise. We conducted extensive phantom and patient studies to test the algorithm for PET image quality improvement. The experimental results show that DPL is promising for reducing noise and improving contrast in PET images. Moreover, the proposed method is versatile enough to address various imaging and image processing problems.
Affiliation(s)
- Yang Lv: United Imaging Healthcare, Shanghai, People's Republic of China
- Chen Xi: United Imaging Healthcare, Shanghai, People's Republic of China

24
Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008]
25
Deep learning in Nuclear Medicine—focus on CNN-based approaches for PET/CT and PET/MR: where do we stand? Clin Transl Imaging 2021. [DOI: 10.1007/s40336-021-00411-6]
26
Shirakawa S. [6. Programming in the Field of Nuclear Medicine]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2021; 77:51-58. [PMID: 33473079; DOI: 10.6009/jjrt.2021_jsrt_77.1.51]
Affiliation(s)
- Seiji Shirakawa: Faculty of Radiological Technology, School of Medical Science, Fujita Health University

27
Schramm G, Rigie D, Vahle T, Rezaei A, Van Laere K, Shepherd T, Nuyts J, Boada F. Approximating anatomically-guided PET reconstruction in image space using a convolutional neural network. Neuroimage 2021; 224:117399. [PMID: 32971267; PMCID: PMC7812485; DOI: 10.1016/j.neuroimage.2020.117399]
Abstract
In the last two decades, it has been shown that anatomically-guided PET reconstruction can lead to improved bias-noise characteristics in brain PET imaging. However, despite promising results in simulations and first studies, anatomically-guided PET reconstructions are not yet available for routine clinical use, for several reasons. In light of this, we investigate whether the improvements of anatomically-guided PET reconstruction methods can be achieved entirely in the image domain with a convolutional neural network (CNN). An entirely image-based CNN post-reconstruction approach has the advantage that no access to PET raw data is needed; moreover, the prediction times of trained CNNs are extremely fast on state-of-the-art GPUs, which will substantially facilitate the evaluation, fine-tuning and application of anatomically-guided PET reconstruction in real-world clinical settings. In this work, we demonstrate that anatomically-guided PET reconstruction using the asymmetric Bowsher prior can be well approximated by a purely shift-invariant convolutional neural network in image space, allowing the generation of anatomically-guided PET images in almost real time. We show that applying dedicated data augmentation techniques in the training phase, in which 16 [18F]FDG and 10 [18F]PE2I data sets were used, leads to a CNN that is robust against the PET tracer used, the noise level of the input PET images, and the input MRI contrast. A detailed analysis of our CNN in 36 [18F]FDG, 18 [18F]PE2I, and 7 [18F]FET test data sets demonstrates that the image quality of our trained CNN is very close to that of the target reconstructions in terms of regional mean recovery and regional structural similarity.
Affiliation(s)
- Georg Schramm: Department of Imaging and Pathology, Division of Nuclear Medicine, KU/UZ Leuven, Leuven, Belgium
- David Rigie: Center for Advanced Imaging Innovation and Research (CAI2R) and Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, NYC, US
- Ahmadreza Rezaei: Department of Imaging and Pathology, Division of Nuclear Medicine, KU/UZ Leuven, Leuven, Belgium
- Koen Van Laere: Department of Imaging and Pathology, Division of Nuclear Medicine, KU/UZ Leuven, Leuven, Belgium
- Timothy Shepherd: Department of Neuroradiology, NYU Langone Health, Department of Radiology, New York University School of Medicine, New York, US
- Johan Nuyts: Department of Imaging and Pathology, Division of Nuclear Medicine, KU/UZ Leuven, Leuven, Belgium
- Fernando Boada: Center for Advanced Imaging Innovation and Research (CAI2R) and Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, NYC, US

28
Galve P, Udias JM, Lopez-Montes A, Arias-Valcayo F, Vaquero JJ, Desco M, Herraiz JL. Super-Iterative Image Reconstruction in PET. IEEE Trans Comput Imaging 2021; 7:248-257. [DOI: 10.1109/tci.2021.3059107]
29
Drukker K, Yan P, Sibley A, Wang G. Biomedical imaging and analysis through deep learning. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00004-1]
30
Shiyam Sundar LK, Muzik O, Buvat I, Bidaut L, Beyer T. Potentials and caveats of AI in hybrid imaging. Methods 2020; 188:4-19. [PMID: 33068741; DOI: 10.1016/j.ymeth.2020.10.004]
Abstract
State-of-the-art patient management frequently mandates investigation of both the anatomy and the physiology of the patient. Hybrid imaging modalities such as PET/MRI, PET/CT and SPECT/CT can provide both structural and functional information about the investigated tissues in a single examination. With the introduction of such advanced hardware fusion, new problems arise, such as the exceedingly large amount of multi-modality data, which requires novel approaches to extracting a maximum of clinical information from large sets of multi-dimensional imaging data. Artificial intelligence (AI) has emerged as one of the leading technologies showing promise in facilitating highly integrative analysis of multi-parametric data. Specifically, the usefulness of AI algorithms in the medical imaging field has been heavily investigated in the realms of (1) image acquisition and reconstruction, (2) post-processing and (3) data mining and modelling. Here, we aim to provide an overview of the challenges encountered in hybrid imaging and discuss how AI algorithms can facilitate potential solutions. In addition, we highlight the pitfalls and challenges of using advanced AI algorithms in the context of hybrid imaging and provide suggestions for building robust AI solutions that enable reproducible and transparent research.
Affiliation(s)
- Lalith Kumar Shiyam Sundar: QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Irène Buvat: Laboratoire d'Imagerie Translationnelle en Oncologie, Inserm, Institut Curie, Orsay, France
- Luc Bidaut: College of Science, University of Lincoln, Lincoln, UK
- Thomas Beyer: QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
31
Arabi H, Zaidi H. Applications of artificial intelligence and deep learning in molecular imaging and radiotherapy. Eur J Hybrid Imaging 2020; 4:17. [PMID: 34191161] [PMCID: PMC8218135] [DOI: 10.1186/s41824-020-00086-8]
Abstract
This brief review summarizes the major applications of artificial intelligence (AI), in particular deep learning approaches, in molecular imaging and radiation therapy research. To this end, the applications of AI in five generic fields of molecular imaging and radiation therapy are discussed: PET instrumentation design; PET image reconstruction, quantification and segmentation; image denoising (low-dose imaging); radiation dosimetry and computer-aided diagnosis; and outcome prediction. The review sets out to briefly cover the fundamental concepts of AI and deep learning, followed by a presentation of seminal achievements and the challenges facing their adoption in the clinical setting.
Affiliation(s)
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland; Geneva University Neurocenter, Geneva University, CH-1205 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
32
Sanaat A, Arabi H, Mainta I, Garibotto V, Zaidi H. Projection Space Implementation of Deep Learning-Guided Low-Dose Brain PET Imaging Improves Performance over Implementation in Image Space. J Nucl Med 2020; 61:1388-1396. [PMID: 31924718] [DOI: 10.2967/jnumed.119.239327]
Abstract
Our purpose was to assess the performance of full-dose (FD) PET image synthesis in both image and sinogram space from low-dose (LD) PET images and sinograms without sacrificing diagnostic quality using deep learning techniques. Methods: Clinical brain PET/CT studies of 140 patients were retrospectively used for LD-to-FD PET conversion. Five percent of the events were randomly selected from the FD list-mode PET data to simulate a realistic LD acquisition. A modified 3-dimensional U-Net model was implemented to predict FD sinograms in the projection space (PSS) and FD images in image space (PIS) from their corresponding LD sinograms and images, respectively. The quality of the predicted PET images was assessed by 2 nuclear medicine specialists using a 5-point grading scheme. Quantitative analysis using established metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), regionwise SUV bias, and first-, second- and high-order texture radiomic features in 83 brain regions, was also performed for the test and evaluation datasets. Results: All PSS images were scored 4 or higher (good to excellent) by the nuclear medicine specialists. PSNR and SSIM values of 31.70 ± 0.75 and 0.96 ± 0.03, respectively, were obtained for PIS, and values of 37.30 ± 0.71 and 0.97 ± 0.02, respectively, were obtained for PSS. The average SUV bias calculated over all brain regions was 0.24% ± 0.96% and 1.05% ± 1.44% for PSS and PIS, respectively. The Bland-Altman plots reported the lowest SUV bias (0.02) and variance (95% confidence interval, -0.92 to +0.84) for PSS, compared with the reference FD images. The relative error of the homogeneity radiomic feature, belonging to the gray-level co-occurrence matrix category, was -1.07 ± 1.77 for PIS and 0.28 ± 1.4 for PSS.
Conclusion: The qualitative assessment and quantitative analysis demonstrated that the FD PET PSS led to superior performance, resulting in higher image quality and lower SUV bias and variance than for FD PET PIS.
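The PSNR figures quoted above follow the standard peak signal-to-noise ratio definition; a minimal sketch of that computation (function name and data-range convention are illustrative, not taken from the paper):

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    if data_range is None:
        # default: dynamic range of the reference image
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)
```

Higher values indicate closer agreement with the reference FD image; the dB scale is why small mean-squared-error improvements translate into several-dB gains.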
Affiliation(s)
- Amirhossein Sanaat: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ismini Mainta: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Valentina Garibotto: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
33
Song TA, Chowdhury SR, Yang F, Dutta J. Super-Resolution PET Imaging Using Convolutional Neural Networks. IEEE Trans Comput Imaging 2020; 6:518-528. [PMID: 32055649] [PMCID: PMC7017584] [DOI: 10.1109/tci.2020.2964229]
Abstract
Positron emission tomography (PET) suffers from severe resolution limitations which reduce its quantitative accuracy. In this paper, we present a super-resolution (SR) imaging technique for PET based on convolutional neural networks (CNNs). To facilitate the resolution recovery process, we incorporate high-resolution (HR) anatomical information based on magnetic resonance (MR) imaging. We introduce the spatial location information of the input image patches as additional CNN inputs to accommodate the spatially-variant nature of the blur kernels in PET. We compared the performance of shallow (3-layer) and very deep (20-layer) CNNs with various combinations of the following inputs: low-resolution (LR) PET, radial locations, axial locations, and HR MR. To validate the CNN architectures, we performed both realistic simulation studies using the BrainWeb digital phantom and clinical studies using neuroimaging datasets. For both simulation and clinical studies, the LR PET images were based on the Siemens HR+ scanner. Two different scenarios were examined in simulation: one where the target HR image is the ground-truth phantom image and another where the target HR image is based on the Siemens HRRT scanner - a high-resolution dedicated brain PET scanner. The latter scenario was also examined using clinical neuroimaging datasets. A number of factors affected relative performance of the different CNN designs examined, including network depth, target image quality, and the resemblance between the target and anatomical images. In general, however, all deep CNNs outperformed classical penalized deconvolution and partial volume correction techniques by large margins both qualitatively (e.g., edge and contrast recovery) and quantitatively (as indicated by three metrics: peak signal-to-noise-ratio, structural similarity index, and contrast-to-noise ratio).
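The spatial-location inputs described above can be pictured as extra coordinate channels stacked onto the LR PET input so the network can learn a spatially-variant blur model. A hedged NumPy sketch of such input construction (function name, normalization, and slice-index convention are assumptions, not the authors' code):

```python
import numpy as np

def with_location_channels(lr_slice, slice_index, n_slices):
    """Stack radial- and axial-location maps onto one LR PET slice.

    lr_slice: (H, W) array. Returns a (3, H, W) network input where
    channel 1 encodes in-plane radial distance from the slice centre
    and channel 2 encodes the normalized axial position of the slice.
    """
    h, w = lr_slice.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radial = np.hypot(ys - cy, xs - cx)
    radial /= radial.max()  # normalize radial distance to [0, 1]
    axial = np.full((h, w), slice_index / max(n_slices - 1, 1))
    return np.stack([lr_slice, radial, axial])
```

The constant axial channel and the radial map give the CNN an explicit cue about where in the scanner field of view each patch lies, which is the stated motivation for the radial/axial inputs.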
Affiliation(s)
- Tzu-An Song: Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA 01854, USA; co-affiliated with Massachusetts General Hospital, Boston, MA 02114
- Samadrita Roy Chowdhury: Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA 01854, USA; co-affiliated with Massachusetts General Hospital, Boston, MA 02114
- Fan Yang: Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA 01854, USA; co-affiliated with Massachusetts General Hospital, Boston, MA 02114
- Joyita Dutta: Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA 01854, USA; co-affiliated with Massachusetts General Hospital, Boston, MA 02114
34
Gong K, Berg E, Cherry SR, Qi J. Machine Learning in PET: from Photon Detection to Quantitative Image Reconstruction. Proc IEEE 2020; 108:51-68. [PMID: 38045770] [PMCID: PMC10691821] [DOI: 10.1109/jproc.2019.2936809]
Abstract
Machine learning has found unique applications in nuclear medicine, from photon detection to quantitative image reconstruction. While there have been impressive strides in detector development for time-of-flight positron emission tomography, most detectors still make use of simple signal processing methods to extract the time and position information from the detector signals. Now, with the availability of fast waveform digitizers, machine learning techniques have been applied to estimate the position and arrival time of high-energy photons. In quantitative image reconstruction, machine learning has been used to estimate various correction factors, including scattered events and attenuation images, as well as to reduce statistical noise in reconstructed images. Here machine learning either provides a faster alternative to an existing time-consuming computation, such as in the case of scatter estimation, or creates a data-driven approach to map an implicitly defined function, such as in the case of estimating the attenuation map for PET/MR scans. In this article, we review the abovementioned applications of machine learning in nuclear medicine.
Affiliation(s)
- Kuang Gong: Department of Biomedical Engineering, University of California, Davis, CA, USA (now with Massachusetts General Hospital, Boston, MA, USA)
- Eric Berg: Department of Biomedical Engineering, University of California, Davis, CA, USA
- Simon R. Cherry: Department of Biomedical Engineering and Department of Radiology, University of California, Davis, CA, USA
- Jinyi Qi: Department of Biomedical Engineering, University of California, Davis, CA, USA
35
Nensa F, Demircioglu A, Rischpler C. Artificial Intelligence in Nuclear Medicine. J Nucl Med 2019; 60:29S-37S. [DOI: 10.2967/jnumed.118.220590]
36
Visvikis D, Cheze Le Rest C, Jaouen V, Hatt M. Artificial intelligence, machine (deep) learning and radio(geno)mics: definitions and nuclear medicine imaging applications. Eur J Nucl Med Mol Imaging 2019; 46:2630-2637. [DOI: 10.1007/s00259-019-04373-w]
37
Wu Y, Yang F, Huang J, Liu Y. [Super-resolution construction of intravascular ultrasound images using generative adversarial networks]. Nan Fang Yi Ke Da Xue Xue Bao 2019; 39:82-87. [PMID: 30692071] [PMCID: PMC6765585] [DOI: 10.12122/j.issn.1673-4254.2019.01.13]
Abstract
Low-resolution intravascular ultrasound images have poor visual quality. Herein we propose a method for generating clearer intravascular ultrasound images based on super-resolution reconstruction combined with generative adversarial networks, in which a generator produces the images and a discriminator estimates their authenticity. Specifically, the low-resolution image is passed through a sub-pixel convolution layer with r² feature channels to generate r² feature maps of the same size; the corresponding pixels of the feature maps are then realigned into r × r sub-blocks, each corresponding to a sub-block of the high-resolution image, so that an image with r²-times the resolution is generated. Through continuous adversarial optimization, the network obtains a clearer image. We compared the method (SRGAN) with other methods, including bicubic interpolation, the super-resolution convolutional network (SRCNN) and the efficient sub-pixel convolutional network (ESPCN); the proposed method yielded obvious improvements, raising the peak signal-to-noise ratio (PSNR) by 2.369 dB and the structural similarity index by 1.79%, thereby enhancing the diagnostic visual quality of intravascular ultrasound images.
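The sub-pixel realignment described above is the standard pixel-shuffle operation: r² same-size feature maps are interleaved into one map upscaled by r in each spatial dimension. A minimal NumPy sketch (not the authors' code; channel-first layout assumed):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r^2, H, W) feature maps into (C, H*r, W*r).

    Each group of r^2 channels is interleaved into an r x r sub-block
    of the output, which is how a sub-pixel convolution layer upscales.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    out = x.reshape(c, r, r, h, w)      # split channels into (C, r, r)
    out = out.transpose(0, 3, 1, 4, 2)  # interleave: (C, H, r, W, r)
    return out.reshape(c, h * r, w * r)
```

Because the rearrangement happens after the convolutions, all heavy computation runs at low resolution, which is the efficiency argument behind ESPCN-style networks.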
Affiliation(s)
- Wu Yangyang: Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Yang Feng: Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Huang Jing: Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Liu Yaqin: Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China