1.
Gautier V, Bousse A, Sureau F, Comtat C, Maxim V, Sixou B. Bimodal PET/MRI generative reconstruction based on VAE architectures. Phys Med Biol 2024;69:245019. [PMID: 39527911] [DOI: 10.1088/1361-6560/ad9133]
Abstract
Objective. In this study, we explore positron emission tomography (PET)/magnetic resonance imaging (MRI) joint reconstruction within a deep learning framework, introducing a novel synergistic method. Approach. We propose a new approach based on a variational autoencoder (VAE) constraint combined with the alternating direction method of multipliers (ADMM) optimization technique. We explore three VAE architectures, joint VAE, product-of-experts VAE and multimodal JS divergence (MMJSD), to determine the optimal latent representation for the two modalities. We then trained and evaluated the architectures on a brain PET/MRI dataset. Main results. We showed that our approach takes advantage of the modalities sharing information with each other, which results in improved peak signal-to-noise ratio and structural similarity compared with traditional reconstruction, particularly for short acquisition times. We find that one particular architecture, MMJSD, is the most effective for our methodology. Significance. The proposed method outperforms conventional approaches, especially in noisy and undersampled conditions, by using the two modalities together to compensate for missing information.
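The VAE-constrained ADMM scheme summarized in this abstract can be illustrated with a plug-and-play ADMM sketch for a generic linear inverse problem. The `vae_project` step below is a stand-in (simple soft-thresholding) for encoding/decoding through the trained joint VAE, and all names, sizes and parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def vae_project(v, tau=0.1):
    """Placeholder for the learned VAE constraint (here: soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_reconstruct(A, y, rho=1.0, n_iter=50):
    """Plug-and-play ADMM for min_x 0.5*||A x - y||^2 + prior(x)."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                      # scaled dual variable
    M = A.T @ A + rho * np.eye(n)        # x-update system matrix
    Aty = A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(M, Aty + rho * (z - u))   # data-fidelity step
        z = vae_project(x + u)                        # prior ("VAE") step
        u = u + x - z                                 # dual ascent
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
y = A @ x_true + 0.01 * rng.normal(size=30)
x_hat = admm_reconstruct(A, y)
```

Swapping `vae_project` for a learned projection, and the least-squares fidelity for Poisson (PET) and k-space (MRI) fidelities, recovers the general shape of such a bimodal scheme.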
Affiliation(s)
- V Gautier: Université de Lyon, INSA-Lyon, UCBL 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621 Lyon, France
- A Bousse: Univ. Brest, LaTIM, Inserm UMR 1101, 29238 Brest, France
- F Sureau: BioMaps, Université Paris-Saclay, CEA, CNRS, Inserm, SHFJ, 91401 Orsay, France
- C Comtat: BioMaps, Université Paris-Saclay, CEA, CNRS, Inserm, SHFJ, 91401 Orsay, France
- V Maxim: Université de Lyon, INSA-Lyon, UCBL 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621 Lyon, France
- B Sixou: Université de Lyon, INSA-Lyon, UCBL 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621 Lyon, France
2.
Miranda EA, Basarab A, Lavarello R. Enhancing ultrasonic attenuation images through multi-frequency coupling with total nuclear variation. J Acoust Soc Am 2024;156:2805-2815. [PMID: 39436361] [DOI: 10.1121/10.0032458]
Abstract
Quantitative ultrasound is a non-invasive imaging modality that numerically characterizes tissues for medical diagnosis using acoustical parameters such as the attenuation coefficient slope. A previous study introduced the total variation spectral log difference (TVSLD) method, which denoises spectral log ratios on a single-channel basis without inter-channel coupling. This work therefore proposes a multi-frequency joint framework that couples information across frequency channels, exploiting structural similarities among the spectral ratios to increase the quality of the attenuation images. A modification based on the total nuclear variation (TNV) was considered. Metrics were compared to the TVSLD method with simulated and experimental phantoms and two samples of in vivo breast tissue with fibroadenoma. The TNV demonstrated superior performance, yielding enhanced attenuation coefficient slope maps with fewer artifacts at boundaries and a stable error. In terms of contrast-to-noise ratio enhancement, the TNV approach obtained an average percentage improvement of 34% in simulation, 38% in the experimental phantom, and 89% in the two in vivo breast tissue samples compared to TVSLD, showing potential to enhance the visual clarity and depiction of attenuation images.
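The total nuclear variation that couples the frequency channels can be computed directly: at each pixel, stack the per-channel gradients into a small Jacobian and sum its singular values over the image. A minimal NumPy sketch (illustrative of the regularizer only, not the paper's optimization code):

```python
import numpy as np

def total_nuclear_variation(channels):
    """TNV of a multi-channel image: at each pixel, stack the per-channel
    spatial gradients into a Jacobian and sum its nuclear norm (sum of
    singular values) over all pixels."""
    grads = []
    for c in channels:
        gy, gx = np.gradient(np.asarray(c, dtype=float))
        grads.append(np.stack([gy, gx], axis=-1))     # (H, W, 2)
    jac = np.stack(grads, axis=-2)                    # (H, W, C, 2)
    sing = np.linalg.svd(jac, compute_uv=False)       # (H, W, min(C, 2))
    return float(sing.sum())

# Duplicating a channel scales TNV by sqrt(2), not 2: the per-pixel
# Jacobian stays rank one when edges are shared across channels.
edge = np.zeros((8, 8))
edge[4:, :] = 1.0
```

Minimizing this quantity subject to data fidelity favors attenuation maps whose edges are aligned across frequency channels, which is the coupling effect described above.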
Affiliation(s)
- Edmundo A Miranda: Laboratorio de Imágenes Médicas, Departamento de Ingeniería, Pontificia Universidad Católica del Perú, San Miguel 15088, Peru
- Adrian Basarab: INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621 Lyon, France
- Roberto Lavarello: Laboratorio de Imágenes Médicas, Departamento de Ingeniería, Pontificia Universidad Católica del Perú, San Miguel 15088, Peru
3.
Sharma V, Awate SP. Adversarial EM for variational deep learning: Application to semi-supervised image quality enhancement in low-dose PET and low-dose CT. Med Image Anal 2024;97:103291. [PMID: 39121545] [DOI: 10.1016/j.media.2024.103291]
Abstract
In positron emission tomography (PET) and X-ray computed tomography (CT), reducing the radiation dose can cause significant degradation in image quality. For image quality enhancement in low-dose PET and CT, we propose a novel adversarial and variational deep neural network (DNN) framework relying on expectation-maximization (EM) based learning, termed adversarial EM (AdvEM). AdvEM uses an encoder-decoder architecture with a multiscale latent space, and generalized-Gaussian models enabling datum-specific robust statistical modeling in both latent space and image space. Model robustness is further enhanced by including adversarial learning in the training protocol. Unlike typical variational-DNN learning, AdvEM samples the latent space from the posterior distribution using a Metropolis-Hastings scheme. Unlike existing schemes for PET or CT image enhancement that train on pairs of low-dose images and their corresponding normal-dose versions, we propose a semi-supervised AdvEM (ssAdvEM) framework that enables learning from a small number of normal-dose images. AdvEM and ssAdvEM provide per-pixel uncertainty estimates for their outputs. Empirical analyses on real-world PET and CT data involving many baselines, out-of-distribution data, and ablation studies show the benefits of the proposed framework.
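The Metropolis-Hastings posterior sampling that AdvEM performs in latent space can be sketched with a random-walk sampler. The target here is a toy 2-D Gaussian standing in for the DNN posterior; the step size, seed and sample counts are illustrative assumptions:

```python
import numpy as np

def log_post(z):
    """Stand-in log posterior: standard 2-D Gaussian N(0, I)."""
    return -0.5 * float(np.sum(z ** 2))

def metropolis_hastings(log_p, z0, n_samples=5000, step=0.8, seed=0):
    """Random-walk Metropolis-Hastings over the latent variable z."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z0, dtype=float)
    lp = log_p(z)
    out = np.empty((n_samples, z.size))
    for i in range(n_samples):
        prop = z + step * rng.normal(size=z.size)   # symmetric proposal
        lp_prop = log_p(prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept with prob min(1, ratio)
            z, lp = prop, lp_prop
        out[i] = z                                  # keep current state either way
    return out

samples = metropolis_hastings(log_post, np.array([3.0, -3.0]))
```

Because the proposal is symmetric, the acceptance test needs only the log-posterior difference, which is why such a scheme can sit on top of any DNN that evaluates an (unnormalized) posterior density.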
Affiliation(s)
- Vatsala Sharma: Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India
- Suyash P Awate: Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India
4.
Xie T, Cui ZX, Luo C, Wang H, Liu C, Zhang Y, Wang X, Zhu Y, Chen G, Liang D, Jin Q, Zhou Y, Wang H. Joint diffusion: mutual consistency-driven diffusion model for PET-MRI co-reconstruction. Phys Med Biol 2024;69:155019. [PMID: 38981592] [DOI: 10.1088/1361-6560/ad6117]
Abstract
Objective. Positron emission tomography and magnetic resonance imaging (PET-MRI) systems can obtain functional and anatomical scans, but PET suffers from a low signal-to-noise ratio, while MRI acquisition is time-consuming. An effective strategy for shortening acquisition is to collect less k-space data, albeit at the cost of lower image quality. This study aims to leverage the inherent complementarity within PET-MRI data to enhance the image quality of both modalities. Approach. A novel PET-MRI joint reconstruction model, termed MC-Diffusion, is proposed in the Bayesian framework. The joint reconstruction problem is transformed into a joint regularization problem, where the data fidelity terms of PET and MRI are expressed independently. The regularization term, the derivative of the logarithm of the joint probability distribution of PET and MRI, is learned with a joint score-based diffusion model. The diffusion model involves a forward process and a reverse process: the forward process adds noise to transform the complex joint data distribution of PET and MRI into a known joint prior distribution, and the reverse process removes noise with a learned denoiser to revert the joint prior distribution to the original joint data distribution. The joint probability distribution thus captures the correlations between PET and MRI and improves the quality of the joint reconstruction. Main results. Qualitative and quantitative improvements are observed with the MC-Diffusion model. Comparative analysis against LPLS and Joint ISTA-Net on the ADNI dataset demonstrates superior performance obtained by exploiting the complementary information between PET and MRI. Significance. This study employs the MC-Diffusion model to enhance the quality of PET-MRI images by integrating the fundamental principles of the two modalities and leveraging their inherent complementarity. Furthermore, using the diffusion model to learn the joint probability distribution of PET and MRI, and thereby their latent correlation, facilitates a deeper understanding of the priors obtained through deep learning, in contrast with black-box priors or artificially constructed structural similarities.
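The forward/reverse mechanics of such a joint score-based model can be sketched with annealed Langevin sampling, using an analytic Gaussian score in place of the trained joint network so the behavior is checkable. The mean `MU`, noise schedule, and step sizes are illustrative assumptions, not the paper's model:

```python
import numpy as np

MU = np.array([2.0, -1.0])   # toy joint data mean: one (PET, MRI) value pair

def joint_score(x, sigma):
    """Analytic score of the noised toy distribution N(MU, (1 + sigma^2) I);
    stands in for the trained joint score network."""
    return (MU - x) / (1.0 + sigma ** 2)

def reverse_diffusion(n=2000, n_steps=100, sigma_max=5.0, seed=0):
    """Annealed Langevin dynamics: start from a broad prior and follow the
    score down a decreasing noise schedule."""
    rng = np.random.default_rng(seed)
    x = MU + sigma_max * rng.normal(size=(n, 2))       # samples from the prior
    for sigma in np.geomspace(sigma_max, 0.01, n_steps):
        eps = 0.5 * sigma ** 2                         # step size shrinks with noise
        x = x + eps * joint_score(x, sigma) \
              + np.sqrt(2.0 * eps) * rng.normal(size=x.shape)
    return x

x_joint = reverse_diffusion()
```

Starting from pure noise, the samples drift back toward the data distribution around `MU`; in the real method the analytic score is replaced by the network and interleaved with the PET and MRI data-fidelity updates.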
Affiliation(s)
- Taofeng Xie: School of Mathematical Sciences, Inner Mongolia University, Hohhot, People's Republic of China; College of Computer and Information, Inner Mongolia Medical University, Hohhot, People's Republic of China; Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, People's Republic of China
- Zhuo-Xu Cui: Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, People's Republic of China
- Chen Luo: School of Mathematical Sciences, Inner Mongolia University, Hohhot, People's Republic of China
- Huayu Wang: School of Mathematical Sciences, Inner Mongolia University, Hohhot, People's Republic of China
- Congcong Liu: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, People's Republic of China
- Yuanzhi Zhang: Inner Mongolia Medical University Affiliated Hospital, Hohhot, People's Republic of China
- Xuemei Wang: Inner Mongolia Medical University Affiliated Hospital, Hohhot, People's Republic of China
- Yanjie Zhu: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, People's Republic of China
- Guoqing Chen: School of Mathematical Sciences, Inner Mongolia University, Hohhot, People's Republic of China
- Dong Liang: Paul C. Lauterbur Research Center for Biomedical Imaging and Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, People's Republic of China
- Qiyu Jin: School of Mathematical Sciences, Inner Mongolia University, Hohhot, People's Republic of China
- Yihang Zhou: Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, People's Republic of China
- Haifeng Wang: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, People's Republic of China
5.
Bousse A, Kandarpa VSS, Shi K, Gong K, Lee JS, Liu C, Visvikis D. A review on low-dose emission tomography post-reconstruction denoising with neural network approaches. IEEE Trans Radiat Plasma Med Sci 2024;8:333-347. [PMID: 39429805] [PMCID: PMC11486494] [DOI: 10.1109/trpms.2023.3349194]
Abstract
Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness of the photon counting process is a source of noise that is amplified in low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches. Furthermore, we explore future directions in the field of NN-based low-dose ET. This comprehensive examination sheds light on the potential of deep learning to enhance the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.
Affiliation(s)
- Kuangyu Shi: Lab for Artificial Intelligence & Translational Theranostics, Dept. Nuclear Medicine, Inselspital, University of Bern, 3010 Bern, Switzerland
- Kuang Gong: The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Jae Sung Lee: Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Korea
- Chi Liu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
6.
Fu M, Zhang N, Huang Z, Zhou C, Zhang X, Yuan J, He Q, Yang Y, Zheng H, Liang D, Wu FX, Fan W, Hu Z. OIF-Net: an optical flow registration-based PET/MR cross-modal interactive fusion network for low-count brain PET image denoising. IEEE Trans Med Imaging 2024;43:1554-1567. [PMID: 38096101] [DOI: 10.1109/tmi.2023.3342809]
Abstract
The short frames of low-count positron emission tomography (PET) images generally cause high levels of statistical noise. Improving the quality of low-count images with image postprocessing algorithms to achieve better clinical diagnoses has therefore attracted widespread attention in the medical imaging community. Most existing deep learning-based low-count PET image enhancement methods have achieved satisfying results; however, few of them focus on denoising low-count PET images with the magnetic resonance (MR) image modality as guidance. The prior context features contained in MR images can provide abundant and complementary information for single low-count PET image denoising, especially in ultralow-count (2.5%) cases. To this end, we propose a novel two-stream dual PET/MR cross-modal interactive fusion network with an optical flow pre-alignment module, named OIF-Net. Specifically, the learnable optical flow registration module enables the spatial manipulation of MR imaging inputs within the network without any extra training supervision. Registered MR images fundamentally solve the problem of feature misalignment in the multimodal fusion stage, which greatly benefits the subsequent denoising process. In addition, we design a spatial-channel feature enhancement module (SC-FEM) that considers the interactive impacts of multiple modalities and provides additional information flexibility in both the spatial and channel dimensions. Furthermore, instead of simply concatenating the two extracted modality features as an intermediate fusion step, the proposed cross-modal feature fusion module (CM-FFM) applies cross-attention at multiple feature levels, which greatly improves the fusion of the two modalities' features. Extensive experimental assessments on real clinical datasets, as well as an independent clinical testing dataset, demonstrate that the proposed OIF-Net outperforms the state-of-the-art methods.
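The cross-attention underlying a module like CM-FFM can be sketched in a few lines: PET feature tokens act as queries over MR key/value tokens, so each PET location pulls in the MR context that best matches it. A single-head NumPy sketch with random illustrative weights (the real module is learned, multi-level, and not reproduced here):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))   # stable softmax
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(pet_tokens, mr_tokens, Wq, Wk, Wv):
    """PET tokens query MR tokens; each PET location receives a convex
    combination of MR values weighted by feature similarity."""
    q = pet_tokens @ Wq                                  # (N_pet, d)
    k = mr_tokens @ Wk                                   # (N_mr, d)
    v = mr_tokens @ Wv                                   # (N_mr, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[1]))        # (N_pet, N_mr)
    return attn @ v, attn

rng = np.random.default_rng(0)
d = 8
pet_feats = rng.normal(size=(16, d))   # 16 PET feature tokens (illustrative)
mr_feats = rng.normal(size=(32, d))    # 32 MR feature tokens (illustrative)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
fused, attn = cross_attention(pet_feats, mr_feats, Wq, Wk, Wv)
```

Each row of `attn` sums to one, so the fused PET features are convex mixtures of MR features, which is what makes this a softer alternative to plain channel concatenation.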
7.
Bousse A, Kandarpa VSS, Rit S, Perelli A, Li M, Wang G, Zhou J, Wang G. Systematic review on learning-based spectral CT. IEEE Trans Radiat Plasma Med Sci 2024;8:113-137. [PMID: 38476981] [PMCID: PMC10927029] [DOI: 10.1109/trpms.2023.3314131]
Abstract
Spectral computed tomography (CT) has recently emerged as an advanced version of medical CT and significantly improves conventional (single-energy) CT. Spectral CT has two main forms: dual-energy computed tomography (DECT) and photon-counting computed tomography (PCCT), which offer image improvement, material decomposition, and feature quantification relative to conventional CT. However, the inherent challenges of spectral CT, evidenced by data and image artifacts, remain a bottleneck for clinical applications. To address these problems, machine learning techniques have been widely applied to spectral CT. In this review, we present the state-of-the-art data-driven techniques for spectral CT.
Affiliation(s)
- Alexandre Bousse: LaTIM, Inserm UMR 1101, Université de Bretagne Occidentale, 29238 Brest, France
- Simon Rit: Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Étienne, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69373 Lyon, France
- Alessandro Perelli: Department of Biomedical Engineering, School of Science and Engineering, University of Dundee, DD1 4HN, UK
- Mengzhou Li: Biomedical Imaging Center, Rensselaer Polytechnic Institute, Troy, New York, USA
- Guobao Wang: Department of Radiology, University of California Davis Health, Sacramento, USA
- Jian Zhou: CTIQ, Canon Medical Research USA, Inc., Vernon Hills, 60061, USA
- Ge Wang: Biomedical Imaging Center, Rensselaer Polytechnic Institute, Troy, New York, USA
8.
Wang D, Jiang C, He J, Teng Y, Qin H, Liu J, Yang X. M3S-Net: multi-modality multi-branch multi-self-attention network with structure-promoting loss for low-dose PET/CT enhancement. Phys Med Biol 2024;69:025001. [PMID: 38086073] [DOI: 10.1088/1361-6560/ad14c5]
Abstract
Objective. PET (positron emission tomography) inherently involves radiotracer injections and long scanning times, which raises concerns about radiation exposure and patient comfort. Reducing the radiotracer dosage and the acquisition time can lower the potential risk and improve patient comfort, respectively, but both also reduce photon counts and hence degrade image quality. It is therefore of interest to improve the quality of low-dose PET images. Approach. A supervised multi-modality deep learning model, named M3S-Net, was proposed to generate standard-dose PET images (60 s per bed position) from low-dose ones (10 s per bed position) and the corresponding CT images. Specifically, we designed a multi-branch convolutional neural network with multi-self-attention mechanisms, which first extracts features from the PET and CT images in two separate branches and then fuses the features to generate the final PET images. Moreover, a novel multi-modality structure-promoting term was proposed in the loss function to learn the anatomical information contained in the CT images. Main results. We conducted extensive numerical experiments on real clinical data collected from local hospitals. Compared with state-of-the-art methods, the proposed M3S-Net not only achieved higher objective metrics and better recovery of tumors, but also performed better in preserving edges and suppressing noise and artifacts. Significance. The quantitative metrics and qualitative results demonstrate that M3S-Net can generate high-quality PET images from low-dose ones that are comparable to standard-dose PET images. This is valuable in reducing PET acquisition time and has potential applications in dynamic PET imaging.
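One common way to encode such anatomical guidance from CT is an edge-aware penalty: discourage gradients in the generated PET wherever the CT is flat, and relax the penalty across CT edges. The paper's exact structure-promoting term is not reproduced here; this NumPy sketch is an illustrative assumption:

```python
import numpy as np

def structure_loss(pet, ct, alpha=10.0):
    """Edge-aware smoothness: penalize PET gradients where the CT is flat,
    and down-weight the penalty across CT edges."""
    gy_p, gx_p = np.gradient(np.asarray(pet, dtype=float))
    gy_c, gx_c = np.gradient(np.asarray(ct, dtype=float))
    w_y = np.exp(-alpha * np.abs(gy_c))   # weight ~1 where CT is flat, ~0 at edges
    w_x = np.exp(-alpha * np.abs(gx_c))
    return float((np.abs(gy_p) * w_y + np.abs(gx_p) * w_x).mean())

# A PET edge is cheap where the CT has the same edge, expensive elsewhere.
ct = np.zeros((16, 16)); ct[:, 8:] = 1.0
pet_aligned = 2.0 * ct                                       # edge matches CT
pet_shifted = np.zeros((16, 16)); pet_shifted[:, 4:] = 1.0   # edge where CT is flat
```

Added to a fidelity loss during training, a term of this shape pushes the network's outputs toward edges that coincide with CT anatomy.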
Affiliation(s)
- Dong Wang: School of Mathematics/S.T. Yau Center of Southeast University, Southeast University, 210096, People's Republic of China; Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
- Chong Jiang: Department of Nuclear Medicine, West China Hospital of Sichuan University, Sichuan University, Chengdu, 610041, People's Republic of China
- Jian He: Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
- Yue Teng: Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
- Hourong Qin: Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China
- Jijun Liu: School of Mathematics/S.T. Yau Center of Southeast University, Southeast University, 210096, People's Republic of China; Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
- Xiaoping Yang: Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China
9.
Galve P, Rodriguez-Vila B, Herraiz J, García-Vázquez V, Malpica N, Udias J, Torrado-Carvajal A. Recent advances in combined positron emission tomography and magnetic resonance imaging. J Instrum 2024;19:C01001. [DOI: 10.1088/1748-0221/19/01/c01001]
Abstract
Hybrid imaging modalities combine two or more medical imaging techniques, offering exciting new possibilities to image the structure, function and biochemistry of the human body in far greater detail than was previously possible, and thereby to improve patient diagnosis. In this context, simultaneous positron emission tomography and magnetic resonance (PET/MR) imaging offers highly complementary information, but it also poses hardware and software compatibility challenges. The PET signal may interfere with the MR magnetic field and vice versa, imposing several constraints on the PET instrumentation in PET/MR systems. Additionally, anatomical maps are needed to properly apply attenuation and scatter corrections to the reconstructed PET images, as well as motion estimates to minimize the effects of movement throughout the acquisition. In this review, we summarize the instrumentation implemented in modern PET scanners to overcome these limitations, describing the historical development of hybrid PET/MR scanners. We pay special attention to the methods used in PET to achieve attenuation, scatter and motion correction when it is combined with MR, and to how both imaging modalities may be combined in PET image reconstruction algorithms.
10.
Nandhini Abirami R, Durai Raj Vincent PM, Srinivasan K, Manic KS, Chang CY. Multimodal medical image fusion of positron emission tomography and magnetic resonance imaging using generative adversarial networks. Behav Neurol 2022;2022:6878783. [PMID: 35464043] [PMCID: PMC9023223] [DOI: 10.1155/2022/6878783]
Abstract
Multimodal medical image fusion combines images from the same or different modalities to improve the visual content of the image for further operations such as image segmentation. Biomedical research and medical image analysis increasingly demand image fusion to support higher-level analysis, as it assists medical practitioners in visualizing internal organs and tissues. Fusion of brain images lets practitioners simultaneously visualize hard structures such as the skull and soft structures such as tissue. Brain tumor segmentation can be performed more accurately on the fused image: the area of the tumor can be located using information from both the positron emission tomography (PET) and magnetic resonance (MR) images combined in a single image, which increases diagnostic accuracy and reduces the time needed to diagnose and locate the tumor. The functional information of the brain is available in PET, while the anatomy of the brain tissue is available in the MR image, so both spatial characteristics and functional information can be obtained from a single image using a robust multimodal fusion model. The proposed approach uses a generative adversarial network (GAN) to fuse PET and MR images into a single image, and the results can be used for further medical analysis to locate tumors and plan surgical procedures. The performance of the GAN-based model is evaluated using two metrics, the structural similarity index and mutual information; the proposed approach achieved a structural similarity index of 0.8551 and a mutual information of 2.8059.
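The two reported metrics are standard; mutual information, for instance, can be estimated from a joint histogram of the fused image and a source image. A minimal sketch (the bin count and the random test images are illustrative, not the paper's evaluation setup):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of the mutual information (in nats) between two
    equally sized images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()                    # joint distribution estimate
    px = p.sum(axis=1, keepdims=True)        # marginal of a
    py = p.sum(axis=0, keepdims=True)        # marginal of b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))      # stand-in "fused" image
noise = rng.normal(size=(64, 64))    # unrelated image for comparison
```

An image shares maximal information with itself and almost none with independent noise, which is why a higher score against the source images indicates better fusion.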
Affiliation(s)
- R. Nandhini Abirami: School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
- P. M. Durai Raj Vincent: School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Kathiravan Srinivasan: School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, Tamil Nadu, India
- K. Suresh Manic: Department of Electrical and Communication Engineering, National University of Science and Technology, Muscat, Oman
- Chuan-Yu Chang: Department of Computer Science and Information Engineering, National Yunlin University of Science and Technology, Yunlin 64002, Taiwan; Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
11.
Pain CD, Egan GF, Chen Z. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement. Eur J Nucl Med Mol Imaging 2022;49:3098-3118. [PMID: 35312031] [PMCID: PMC9250483] [DOI: 10.1007/s00259-022-05746-4]
Abstract
Image processing plays a crucial role in maximising the diagnostic quality of positron emission tomography (PET) images. Recently, deep learning methods developed across many fields have shown tremendous potential when applied to medical image enhancement, resulting in a rich and rapidly advancing literature on this subject. This review encapsulates methods for integrating deep learning into PET image reconstruction and post-processing for low-dose imaging and resolution enhancement. A brief introduction to conventional image processing techniques in PET is presented first. We then review methods that integrate deep learning into the image reconstruction framework, either as deep learning-based regularisation or as a fully data-driven mapping from measured signal to images. Deep learning-based post-processing methods for low-dose imaging, temporal resolution enhancement and spatial resolution enhancement are also reviewed. Finally, the challenges associated with applying deep learning to enhance PET images in the clinical setting are discussed, and future research directions to address these challenges are presented.
Affiliation(s)
- Cameron Dennis Pain: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
- Gary F Egan: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia
- Zhaolin Chen: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Department of Data Science and AI, Monash University, Melbourne, Australia
12.
Wang S, Celebi ME, Zhang YD, Yu X, Lu S, Yao X, Zhou Q, Miguel MG, Tian Y, Gorriz JM, Tyukin I. Advances in data preprocessing for biomedical data fusion: an overview of the methods, challenges, and prospects. Information Fusion 2021;76:376-421. [DOI: 10.1016/j.inffus.2021.07.001]
13.
Jamadar SD, Zhong S, Carey A, McIntyre R, Ward PGD, Fornito A, Premaratne M, Jon Shah N, O'Brien K, Stäb D, Chen Z, Egan GF. Task-evoked simultaneous FDG-PET and fMRI data for measurement of neural metabolism in the human visual cortex. Sci Data 2021;8:267. [PMID: 34654823] [PMCID: PMC8520012] [DOI: 10.1038/s41597-021-01042-2]
Abstract
Understanding how the living human brain functions requires sophisticated in vivo neuroimaging technologies to characterise the complexity of neuroanatomy, neural function, and brain metabolism. Fluorodeoxyglucose positron emission tomography (FDG-PET) studies of human brain function have historically been limited in their capacity to measure dynamic neural activity. Simultaneous [18F]-FDG-PET and functional magnetic resonance imaging (fMRI) with FDG infusion protocols enable examination of dynamic changes in cerebral glucose metabolism simultaneously with dynamic changes in blood oxygenation. The Monash vis-fPET-fMRI dataset is a simultaneously acquired FDG-fPET/BOLD-fMRI dataset from n = 10 healthy adults (18-49 yrs) who viewed a flickering-checkerboard task. The dataset contains both raw (unprocessed) images and source data organized according to the BIDS specification. The source data include PET listmode, normalization, sinogram and physiology data. Here, the technical feasibility of using open-source frameworks to reconstruct the PET listmode data is demonstrated. The dataset has significant re-use value for the development of new processing pipelines and signal optimisation methods, and for formulating new hypotheses concerning the relationship between neuronal glucose uptake and cerebral haemodynamics.
Affiliation(s)
- Sharna D Jamadar: Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Clayton, Australia; Turner Institute for Brain and Mental Health, Monash University, Clayton, VIC, Australia
- Shenjun Zhong: Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; National Imaging Facility, Clayton, Australia
- Alexandra Carey: Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Department of Medical Imaging, Monash Health, Clayton, VIC, Australia
- Richard McIntyre: Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Department of Medical Imaging, Monash Health, Clayton, VIC, Australia
- Phillip G D Ward: Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Clayton, Australia; Turner Institute for Brain and Mental Health, Monash University, Clayton, VIC, Australia
- Alex Fornito: Australian Research Council Centre of Excellence for Integrative Brain Function, Clayton, Australia; Turner Institute for Brain and Mental Health, Monash University, Clayton, VIC, Australia
- Malin Premaratne: Department of Electrical and Computer Systems Engineering, Monash University, Clayton, VIC, Australia
- N Jon Shah: Institute of Neuroscience and Medicine - 4, Forschungszentrum Jülich, Jülich, Germany
- Kieran O'Brien: MR Research Collaborations, Siemens Healthcare Pty Ltd, Clayton, Australia
- Daniel Stäb: MR Research Collaborations, Siemens Healthcare Pty Ltd, Clayton, Australia
- Zhaolin Chen: Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Monash Data Futures Institute, Monash University, Clayton, Australia
- Gary F Egan: Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Clayton, Australia; Turner Institute for Brain and Mental Health, Monash University, Clayton, VIC, Australia
14
Onishi Y, Hashimoto F, Ote K, Ohba H, Ota R, Yoshikawa E, Ouchi Y. Anatomical-guided attention enhances unsupervised PET image denoising performance. Med Image Anal 2021; 74:102226. [PMID: 34563861] [DOI: 10.1016/j.media.2021.102226]
Abstract
Although supervised convolutional neural networks (CNNs) often outperform conventional alternatives for denoising positron emission tomography (PET) images, they require many low- and high-quality reference PET image pairs. Herein, we propose an unsupervised 3D PET image denoising method based on an anatomical information-guided attention mechanism. The proposed magnetic resonance-guided deep decoder (MR-GDD) utilizes the spatial details and semantic features of the MR guidance image more effectively by introducing encoder-decoder and deep decoder subnetworks. Moreover, the specific shapes and patterns of the guidance image do not affect the denoised PET image, because the guidance image is input to the network through an attention gate. In a Monte Carlo simulation of [18F]fluoro-2-deoxy-D-glucose (FDG), the proposed method achieved the highest peak signal-to-noise ratio and structural similarity (27.92 ± 0.44 dB/0.886 ± 0.007), as compared with Gaussian filtering (26.68 ± 0.10 dB/0.807 ± 0.004), image-guided filtering (27.40 ± 0.11 dB/0.849 ± 0.003), deep image prior (DIP) (24.22 ± 0.43 dB/0.737 ± 0.017), and MR-DIP (27.65 ± 0.42 dB/0.879 ± 0.007). Furthermore, we experimentally visualized the behavior of the optimization process, which is often unknown in unsupervised CNN-based restoration problems. For preclinical (using [18F]FDG and [11C]raclopride) and clinical (using [18F]florbetapir) studies, the proposed method demonstrates state-of-the-art denoising performance while retaining spatial resolution and quantitative accuracy, despite using a common network architecture for various noisy PET images with 1/10th of the full counts. These results suggest that the proposed MR-GDD can considerably reduce PET scan times and tracer doses without adversely affecting patients.
Affiliation(s)
- Yuya Onishi, Fumio Hashimoto, Kibo Ote, Hiroyuki Ohba, Ryosuke Ota, Etsuji Yoshikawa: Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Yasuomi Ouchi: Department of Biofunctional Imaging, Preeminent Medical Photonics Education & Research Center, Hamamatsu University School of Medicine, 1-20-1 Handayama, Higashi-ku, Hamamatsu 431-3192, Japan
15
Gong K, Kim K, Cui J, Wu D, Li Q. The Evolution of Image Reconstruction in PET: From Filtered Back-Projection to Artificial Intelligence. PET Clin 2021; 16:533-542. [PMID: 34537129] [DOI: 10.1016/j.cpet.2021.06.004]
Abstract
PET can provide functional images revealing physiologic processes in vivo. Although PET has many applications, several limitations compromise its precision: the absorption of photons in the body causes signal attenuation; the dead time of system components leads to count losses; scattered and random events received by the detector introduce additional noise; the characteristics of the detector limit the spatial resolution; and scan-time limits (eg, in dynamic scans) and dose concerns lower the signal-to-noise ratio. Early PET reconstruction methods were analytical approaches based on an idealized mathematical model.
Affiliation(s)
- Kuang Gong, Kyungsang Kim, Jianan Cui, Dufan Wu, Quanzheng Li: Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
16
Sudarshan VP, Upadhyay U, Egan GF, Chen Z, Awate SP. Towards lower-dose PET using physics-based uncertainty-aware multimodal learning with robustness to out-of-distribution data. Med Image Anal 2021; 73:102187. [PMID: 34348196] [DOI: 10.1016/j.media.2021.102187]
Abstract
Radiation exposure in positron emission tomography (PET) imaging limits its usage in studies of radiation-sensitive populations, e.g., pregnant women, children, and adults who require longitudinal imaging. Reducing the PET radiotracer dose or acquisition time reduces photon counts, which can deteriorate image quality. Recent deep-neural-network (DNN) based methods for image-to-image translation enable the mapping of low-quality PET images (acquired using a substantially reduced dose), coupled with the associated magnetic resonance (MR) images, to high-quality PET images. However, such DNN methods focus on applications involving test data that closely match the statistical characteristics of the training data, and give little attention to evaluating the performance of these DNNs on new out-of-distribution (OOD) acquisitions. We propose a novel DNN formulation that models (i) the underlying sinogram-based physics of the PET imaging system and (ii) the uncertainty in the DNN output through the per-voxel heteroscedasticity of the residuals between the predicted and the high-quality reference images. Our sinogram-based uncertainty-aware DNN framework, namely suDNN, estimates a standard-dose PET image using multimodal input in the form of (i) a low-dose/low-count PET image and (ii) the corresponding multi-contrast MRI images, leading to improved robustness of suDNN to OOD acquisitions. Results on in vivo simultaneous PET-MRI and various forms of OOD data in PET-MRI show the benefits of suDNN over the current state of the art, quantitatively and qualitatively.
Affiliation(s)
- Viswanath P Sudarshan: Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India; IITB-Monash Research Academy, IIT Bombay, Mumbai, India
- Uddeshya Upadhyay: Computer Science and Engineering (CSE) Department, IIT Bombay, Mumbai, India
- Gary F Egan: Monash Biomedical Imaging (MBI), Monash University, Melbourne, Australia
- Zhaolin Chen: Monash Biomedical Imaging (MBI), Monash University, Melbourne, Australia
- Suyash P Awate: Computer Science and Engineering (CSE) Department, IIT Bombay, Mumbai, India
17
Arridge SR, Ehrhardt MJ, Thielemans K. (An overview of) Synergistic reconstruction for multimodality/multichannel imaging methods. Philos Trans A Math Phys Eng Sci 2021; 379:20200205. [PMID: 33966461] [DOI: 10.1098/rsta.2020.0205]
Abstract
Imaging is omnipresent in modern society, with imaging devices based on a zoo of physical principles, probing a specimen across different wavelengths, energies and time. Recent years have seen a change in the imaging landscape, with more and more imaging devices combining modalities that were previously used separately. Motivated by these hardware developments, an ever-increasing set of mathematical ideas is appearing regarding how data from different imaging modalities or channels can be synergistically combined in the image reconstruction process, exploiting structural and/or functional correlations between the multiple images. Here we review these developments, give pointers to important challenges and provide an outlook as to how the field may develop in the forthcoming years. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.
Affiliation(s)
- Simon R Arridge: Department of Computer Science, University College London, London, UK
- Matthias J Ehrhardt: Department of Mathematical Sciences, University of Bath, Bath, UK; Institute for Mathematical Innovation, University of Bath, Bath, UK
- Kris Thielemans: Institute of Nuclear Medicine, University College London, London, UK
18
Cheng Z, Wen J, Huang G, Yan J. Applications of artificial intelligence in nuclear medicine image generation. Quant Imaging Med Surg 2021; 11:2792-2822. [PMID: 34079744] [PMCID: PMC8107336] [DOI: 10.21037/qims-20-1078]
Abstract
Recently, the application of artificial intelligence (AI) in medical imaging (including nuclear medicine imaging) has developed rapidly. Most AI applications in nuclear medicine imaging have focused on diagnosis, treatment monitoring, and correlation analyses with pathology or specific gene mutations. AI can also be used for image generation to shorten image acquisition time, reduce the injected tracer dose, and enhance image quality. This work provides an overview of the application of AI in image generation for single-photon emission computed tomography (SPECT) and positron emission tomography (PET), either with or without anatomical information [CT or magnetic resonance imaging (MRI)]. The review focuses on four aspects: imaging physics, image reconstruction, image postprocessing, and internal dosimetry. AI applications in generating attenuation maps, estimating scatter events, boosting image quality, and predicting internal dose maps are summarized and discussed.
Affiliation(s)
- Zhibiao Cheng, Junhai Wen: Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
- Gang Huang, Jianhua Yan: Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
19
Incorporation of anatomical MRI knowledge for enhanced mapping of brain metabolism using functional PET. Neuroimage 2021; 233:117928. [PMID: 33716154] [DOI: 10.1016/j.neuroimage.2021.117928]
Abstract
Functional positron emission tomography (fPET) imaging using continuous infusion of [18F]-fluorodeoxyglucose (FDG) is a novel neuroimaging technique to track dynamic glucose utilization in the brain. In comparison to conventional static or dynamic bolus PET, fPET maintains a sustained supply of glucose in the blood plasma, which improves the sensitivity to measure dynamic glucose changes in the brain and enables mapping of dynamic brain activity in task-based and resting-state fPET studies. However, there is a trade-off between temporal resolution and spatial noise due to the low concentration of FDG and the limited sensitivity of multi-ring PET scanners. Images from fPET studies suffer from partial volume errors and residual scatter noise that may bias the cerebral metabolic functional maps. Gaussian smoothing filters used to denoise the fPET images are suboptimal, as they introduce additional partial volume errors. In this work, a post-processing framework based on a magnetic resonance (MR) Bowsher-like prior was used to improve the spatial and temporal signal-to-noise characteristics of the fPET images. The performance of the MR-guided method was compared with conventional denoising methods using both simulated and in vivo task fPET datasets. The results demonstrate that the MR-guided fPET framework denoises the fPET images and improves the partial volume correction, consequently enhancing the sensitivity to identify brain activation and improving the anatomical accuracy for mapping changes of brain metabolism in response to a visual stimulation task. The framework extends the use of functional PET to investigate the dynamics of brain metabolic responses for faster presentation of brain activation tasks, and for applications in low-dose PET imaging.
20
Bhattarai A, Egan GF, Talman P, Chua P, Chen Z. Magnetic Resonance Iron Imaging in Amyotrophic Lateral Sclerosis. J Magn Reson Imaging 2021; 55:1283-1300. [PMID: 33586315] [DOI: 10.1002/jmri.27530]
Abstract
Amyotrophic lateral sclerosis (ALS) results in progressive impairment of upper and lower motor neurons. Increasing evidence from both in vivo and ex vivo studies suggests that iron accumulation in the motor cortex is a neuropathological hallmark of ALS. An in vivo neuroimaging marker of iron dysregulation in ALS would be useful in disease diagnosis and prognosis. Magnetic resonance imaging (MRI), with its unique capability to generate a variety of soft tissue contrasts, provides opportunities to image iron distribution in the human brain with millimeter to sub-millimeter anatomical resolution. Conventionally, MRI T1-weighted, T2-weighted, and T2*-weighted images have been used to investigate iron dysregulation in the brain in vivo. Susceptibility weighted imaging has enhanced contrast for paramagnetic materials, which provides superior sensitivity to iron in vivo. Recently, the development of quantitative susceptibility mapping (QSM) has realized the possibility of using quantitative assessments of magnetic susceptibility in brain tissues as a surrogate measurement of in vivo brain iron. In this review, we provide an overview of MRI techniques that have been used to investigate iron dysregulation in ALS in vivo. The potential uses, strengths, and limitations of these techniques in clinical trials, disease diagnosis, and prognosis are presented and discussed. We recommend further longitudinal studies with appropriate cohort characterization to validate the efficacy of these techniques. We conclude that quantitative iron assessment using recent advances in MRI, including QSM, holds great potential to be a sensitive diagnostic and prognostic marker in ALS. The use of multimodal neuroimaging markers in combination with iron imaging may also offer improved sensitivity in ALS diagnosis and prognosis, and could make a major contribution to clinical care and treatment trials. Level of Evidence: 2. Technical Efficacy: Stage 3.
Affiliation(s)
- Anjan Bhattarai: Department of Psychiatry, School of Clinical Sciences at Monash Health, Monash University, Melbourne, Victoria, Australia; Monash Biomedical Imaging, Monash University, Melbourne, Victoria, Australia
- Gary F Egan: Monash Biomedical Imaging, Monash University, Melbourne, Victoria, Australia
- Paul Talman: Department of Neuroscience, Barwon Health, Geelong, Victoria, Australia
- Phyllis Chua: Department of Psychiatry, School of Clinical Sciences at Monash Health, Monash University, Melbourne, Victoria, Australia; Statewide Progressive Neurological Services, Calvary Health Care Bethlehem, Melbourne, Victoria, Australia
- Zhaolin Chen: Monash Biomedical Imaging, Monash University, Melbourne, Victoria, Australia
21
Estimation of simultaneous BOLD and dynamic FDG metabolic brain activations using a multimodality concatenated ICA (mcICA) method. Neuroimage 2020; 226:117603. [PMID: 33271271] [DOI: 10.1016/j.neuroimage.2020.117603]
Abstract
Simultaneous magnetic resonance and positron emission tomography provides an opportunity to measure brain haemodynamics and metabolism in a single scan session, and to identify brain activations from multimodal measurements in response to external stimulation. However, there are few analysis methods available for jointly analysing the simultaneously acquired blood-oxygen-level-dependent (BOLD) functional MRI (fMRI) and [18F]-fluorodeoxyglucose functional PET (fPET) datasets. In this work, we propose a new multimodality concatenated ICA (mcICA) method to identify joint fMRI-fPET brain activations in response to a visual stimulation task. The mcICA method produces a fused map from the multimodal datasets with equal contributions of information from both modalities, as measured by entropy. We validated the method in silico and applied it to an in vivo visual stimulation experiment. The mcICA method estimated the activated brain regions in the visual cortex modulated by both BOLD and FDG signals. mcICA provides a fully data-driven approach to analyse cerebral haemodynamic response and glucose uptake signals arising from exogenously induced neuronal activity.
22
Jamadar SD, Ward PGD, Close TG, Fornito A, Premaratne M, O'Brien K, Stäb D, Chen Z, Shah NJ, Egan GF. Simultaneous BOLD-fMRI and constant infusion FDG-PET data of the resting human brain. Sci Data 2020; 7:363. [PMID: 33087725] [PMCID: PMC7578808] [DOI: 10.1038/s41597-020-00699-5]
Abstract
Simultaneous [18F]-fluorodeoxyglucose positron emission tomography and functional magnetic resonance imaging (FDG-PET/fMRI) provides the capability to image two sources of energetic dynamics in the brain - cerebral glucose uptake and the cerebrovascular haemodynamic response. Resting-state fMRI connectivity has been enormously useful for characterising interactions between distributed brain regions in humans. Metabolic connectivity has recently emerged as a complementary measure to investigate brain network dynamics. Functional PET (fPET) is a new approach for measuring FDG uptake with high temporal resolution and has recently shown promise for assessing the dynamics of neural metabolism. Simultaneous fMRI/fPET is a relatively new hybrid imaging modality, with only a few biomedical imaging research facilities able to acquire FDG-PET and BOLD-fMRI data simultaneously. We present data for n = 27 healthy young adults (18-20 yrs) who underwent a 95-min simultaneous fMRI/fPET scan while resting with their eyes open. This dataset provides significant re-use value for understanding the neural dynamics of glucose metabolism and the haemodynamic response, the synchrony and interaction between these measures, and for developing new single- and multi-modality image preparation and analysis procedures.
Affiliation(s)
- Sharna D Jamadar: Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Melbourne, Australia; Turner Institute for Brain and Mental Health, Monash University, Melbourne, VIC, Australia
- Phillip G D Ward: Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Melbourne, Australia
- Thomas G Close: Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Australian National Imaging Facility, Brisbane, QLD, Australia
- Alex Fornito: Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Turner Institute for Brain and Mental Health, Monash University, Melbourne, VIC, Australia
- Malin Premaratne: Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, VIC, Australia
- Kieran O'Brien: Siemens Healthineers, Siemens Healthcare Pty Ltd, Bayswater, VIC, 3153, Australia
- Daniel Stäb: Siemens Healthineers, Siemens Healthcare Pty Ltd, Bayswater, VIC, 3153, Australia
- Zhaolin Chen: Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, VIC, Australia
- N Jon Shah: Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Institute of Neuroscience and Medicine, Forschungszentrum Jülich, 52425 Jülich, Germany
- Gary F Egan: Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Melbourne, Australia; Turner Institute for Brain and Mental Health, Monash University, Melbourne, VIC, Australia
23
Li S, Jamadar SD, Ward PG, Premaratne M, Egan GF, Chen Z. Analysis of continuous infusion functional PET (fPET) in the human brain. Neuroimage 2020; 213:116720. [DOI: 10.1016/j.neuroimage.2020.116720]