1
Marin T, Belov V, Chemli Y, Ouyang J, Najmaoui Y, Fakhri GE, Duvvuri S, Iredale P, Guehl NJ, Normandin MD, Petibon Y. PET Mapping of Receptor Occupancy Using Joint Direct Parametric Reconstruction. IEEE Trans Biomed Eng 2025; 72:1057-1066. PMID: 39446540; PMCID: PMC11875991; DOI: 10.1109/tbme.2024.3486191.
Abstract
Receptor occupancy (RO) studies using PET neuroimaging play a critical role in the development of drugs targeting the central nervous system (CNS). The conventional approach to estimating drug receptor occupancy consists of estimating the change in binding potential between two PET scans (baseline and post-drug injection). This estimation is typically performed separately for each scan by first reconstructing the dynamic PET data and then fitting a kinetic model to time-activity curves. This approach fails to properly model the noise in PET measurements, resulting in poor RO estimates, especially in low receptor density regions. OBJECTIVE: In this work, we evaluate a novel joint direct parametric reconstruction framework to directly estimate distributions of RO and other kinetic parameters in the brain from a pair of baseline and post-drug injection dynamic PET scans. METHODS: The proposed method combines regularization of the RO maps with alternating optimization to enable estimation of occupancy even in low-binding regions. RESULTS: Simulation results demonstrate the quantitative improvement of this method over conventional approaches in terms of accuracy and precision of occupancy estimates. The proposed method is also evaluated in preclinical in-vivo experiments using 11C-MK-6884 and a muscarinic acetylcholine receptor 4 positive allosteric modulator drug, showing improved estimation of receptor occupancy compared to traditional estimators. CONCLUSION: The proposed joint direct estimation framework improves RO estimation compared to conventional methods, especially in intermediate- to low-binding regions. SIGNIFICANCE: This work could facilitate the evaluation of new drug candidates targeting the CNS.
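For readers unfamiliar with the conventional two-scan occupancy calculation this abstract contrasts against, a minimal Python sketch follows. The names and numbers are illustrative only (not code or data from the paper); it simply shows RO = 1 - BPND(post)/BPND(baseline) and why noise on BPND is amplified in low-binding regions.

```python
import numpy as np

def receptor_occupancy(bp_baseline, bp_postdrug):
    """Conventional two-scan estimate: RO = 1 - BPND(post-drug) / BPND(baseline)."""
    return 1.0 - np.asarray(bp_postdrug) / np.asarray(bp_baseline)

# Toy illustration of why low-binding regions are problematic: the same BPND noise
# level yields far noisier occupancy estimates when BPND itself is small.
rng = np.random.default_rng(0)
true_ro, sigma = 0.4, 0.05
for bp_base in (2.0, 0.3):                       # high- vs low-binding region
    bp_post = bp_base * (1.0 - true_ro)
    ro = receptor_occupancy(bp_base + rng.normal(0, sigma, 10000),
                            bp_post + rng.normal(0, sigma, 10000))
    print(f"BPND={bp_base}: RO mean={ro.mean():.2f}, SD={ro.std():.2f}")
```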
2
Artesani A, Providência L, van Sluis J, Tsoumpas C. Beyond stillness: the importance of tackling patient's motion for reliable parametric imaging. Eur J Nucl Med Mol Imaging 2024; 51:1210-1212. PMID: 38216780; DOI: 10.1007/s00259-024-06592-2.
Affiliation(s)
- Alessia Artesani: Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20072 Pieve Emanuele, Italy; Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, Netherlands
- Laura Providência: Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, Netherlands
- Joyce van Sluis: Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, Netherlands
- Charalampos Tsoumpas: Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, Netherlands
3
Guo X, Zhou B, Chen X, Chen MK, Liu C, Dvornek NC. MCP-Net: Introducing Patlak Loss Optimization to Whole-body Dynamic PET Inter-frame Motion Correction. IEEE Trans Med Imaging 2023. PMID: 37368811; PMCID: PMC10751388; DOI: 10.1109/tmi.2023.3290003.
Abstract
In whole-body dynamic positron emission tomography (PET), inter-frame subject motion causes spatial misalignment and affects parametric imaging. Many of the current deep learning inter-frame motion correction techniques focus solely on the anatomy-based registration problem, neglecting the tracer kinetics that contains functional information. To directly reduce the Patlak fitting error for 18F-FDG and further improve model performance, we propose an interframe motion correction framework with Patlak loss optimization integrated into the neural network (MCP-Net). The MCP-Net consists of a multiple-frame motion estimation block, an image-warping block, and an analytical Patlak block that estimates Patlak fitting using motion-corrected frames and the input function. A novel Patlak loss penalty component utilizing mean squared percentage fitting error is added to the loss function to reinforce the motion correction. The parametric images were generated using standard Patlak analysis following motion correction. Our framework enhanced the spatial alignment in both dynamic frames and parametric images and lowered normalized fitting error when compared to both conventional and deep learning benchmarks. MCP-Net also achieved the lowest motion prediction error and showed the best generalization capability. The potential of enhancing network performance and improving the quantitative accuracy of dynamic PET by directly utilizing tracer kinetics is suggested.
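As a rough orientation to the quantity MCP-Net penalizes, the sketch below implements a voxel-wise linear Patlak fit and a mean squared percentage fitting error. The start time t_star, the trapezoidal integration, and the function names are my own simplified assumptions, not the paper's exact loss or implementation.

```python
import numpy as np

def patlak_fit(ct, cp, t_mid, t_star=20.0):
    """Linear Patlak fit for one voxel TAC `ct` given plasma input `cp` (same time grid).
    Returns (Ki, Vb) from  ct/cp = Ki * (integral of cp)/cp + Vb  over frames after t_star."""
    int_cp = np.concatenate(([0.0], np.cumsum(0.5 * np.diff(t_mid) * (cp[1:] + cp[:-1]))))
    late = t_mid >= t_star
    x, y = int_cp[late] / cp[late], ct[late] / cp[late]
    ki, vb = np.polyfit(x, y, 1)
    return ki, vb

def patlak_mspe(ct, cp, t_mid, t_star=20.0):
    """Mean squared percentage error between measured and Patlak-predicted activity:
    the kind of fitting error a Patlak loss term would drive down after motion correction."""
    ki, vb = patlak_fit(ct, cp, t_mid, t_star)
    int_cp = np.concatenate(([0.0], np.cumsum(0.5 * np.diff(t_mid) * (cp[1:] + cp[:-1]))))
    pred = ki * int_cp + vb * cp
    late = t_mid >= t_star
    return np.mean(((ct[late] - pred[late]) / np.maximum(ct[late], 1e-6)) ** 2)
```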
4
Guo X, Wu J, Chen MK, Liu Q, Onofrey JA, Pucar D, Pang Y, Pigg D, Casey ME, Dvornek NC, Liu C. Inter-pass motion correction for whole-body dynamic PET and parametric imaging. IEEE Trans Radiat Plasma Med Sci 2023; 7:344-353. PMID: 37842204; PMCID: PMC10569406; DOI: 10.1109/trpms.2022.3227576.
Abstract
Whole-body dynamic FDG-PET imaging using a continuous-bed-motion (CBM) multi-pass acquisition protocol is a promising approach to metabolism measurement. However, inter-pass misalignment originating from body movement can degrade parametric quantification. We aimed to apply a non-rigid registration method for inter-pass motion correction in whole-body dynamic PET. Twenty-seven subjects underwent a 90-min whole-body FDG CBM PET scan on a Biograph mCT (Siemens Healthineers), acquiring 9 over-the-heart single-bed passes and subsequently 19 CBM passes (frames). Inter-pass motion correction was performed using non-rigid image registration with multi-resolution, B-spline free-form deformations. The parametric images were then generated by Patlak analysis. Overlaid Patlak slope (Ki) and y-intercept (Vb) images were visualized to qualitatively evaluate the motion impact and correction effect. The normalized weighted mean squared Patlak fitting errors (NFE) were compared in the whole body, head, and hypermetabolic regions of interest (ROIs). ROI statistics were collected from the Ki images and malignancy discrimination capacity was estimated by the area under the receiver operating characteristic curve (AUC). After inter-pass motion correction, the spatial misalignment between Ki and Vb images was successfully reduced. Voxel-wise normalized fitting error maps showed global error reduction after motion correction. The NFE in the whole body (p = 0.0013), head (p = 0.0021), and ROIs (p = 0.0377) decreased significantly. The visual quality of each hypermetabolic ROI in the Ki images was enhanced, while average absolute percentage changes of 3.59% and 3.67% were observed in mean and maximum Ki values, respectively, across all evaluated ROIs. The estimated mean Ki values changed substantially with motion correction (p = 0.0021). The AUC of both mean and maximum Ki increased after motion correction, suggesting the potential of enhancing oncological discrimination capacity through inter-pass motion correction.
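To make the registration step concrete, here is a hedged sketch of a multi-resolution B-spline free-form deformation registration between two PET passes using SimpleITK. The abstract does not state which software or parameters were used, so the toolkit choice, grid spacing, metric, and optimizer settings below are all illustrative assumptions.

```python
import SimpleITK as sitk

def bspline_register(fixed, moving, grid_spacing_mm=50.0):
    """Sketch of multi-resolution B-spline free-form registration of one PET pass
    (`moving`) onto a reference pass (`fixed`); both are SimpleITK images.
    Parameter values are illustrative, not those of the paper."""
    mesh = [max(1, int(sz * sp / grid_spacing_mm))
            for sz, sp in zip(fixed.GetSize(), fixed.GetSpacing())]
    tx = sitk.BSplineTransformInitializer(fixed, mesh)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()
    reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(tx, inPlace=False)
    reg.SetShrinkFactorsPerLevel([4, 2, 1])          # coarse-to-fine pyramid
    reg.SetSmoothingSigmasPerLevel([2.0, 1.0, 0.0])
    final_tx = reg.Execute(fixed, moving)
    return sitk.Resample(moving, fixed, final_tx, sitk.sitkLinear, 0.0)
```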
Affiliation(s)
- Xueqi Guo: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Jing Wu: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA; Center for Advanced Quantum Studies and Department of Physics, Beijing Normal University, Beijing, China
- Ming-Kai Chen: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Qiong Liu: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- John A Onofrey: Department of Biomedical Engineering, Department of Radiology and Biomedical Imaging, and Department of Urology, Yale University, New Haven, CT 06511, USA
- Darko Pucar: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Yulei Pang: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA; Southern Connecticut State University, New Haven, CT 06515, USA
- David Pigg: Siemens Medical Solutions USA, Inc., Knoxville, TN 37932, USA
- Michael E Casey: Siemens Medical Solutions USA, Inc., Knoxville, TN 37932, USA
- Nicha C Dvornek: Department of Biomedical Engineering and Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Chi Liu: Department of Biomedical Engineering and Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
5
Lamare F, Bousse A, Thielemans K, Liu C, Merlin T, Fayad H, Visvikis D. PET respiratory motion correction: quo vadis? Phys Med Biol 2021; 67. PMID: 34915465; DOI: 10.1088/1361-6560/ac43fc.
Abstract
Positron emission tomography (PET) respiratory motion correction has been a subject of great interest for the last twenty years, prompted mainly by the development of multimodality imaging devices such as PET/computed tomography (CT) and PET/magnetic resonance imaging (MRI). PET respiratory motion correction involves a number of steps, including acquisition synchronization, motion estimation and, finally, motion correction. The synchronization step relies on external device systems or on data-driven approaches, which have been gaining ground over the last few years. Patient-specific or generic motion models using the respiratory-synchronized datasets can subsequently be derived and used for correction, either in image space or within the image reconstruction process. Similar overall approaches can be considered, and have been proposed, for both PET/CT and PET/MRI devices. Variations in the case of PET/MRI include the use of MRI-specific sequences for the registration of respiratory motion information. This review provides comprehensive coverage of these areas of development in PET respiratory motion correction for the different multimodality imaging devices, in terms of synchronization, estimation and subsequent motion correction. Finally, a section on perspectives, including the potential clinical usage of these approaches, is included.
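To illustrate the data-driven synchronization idea mentioned above, here is a toy amplitude-gating sketch that derives a respiratory surrogate from the axial centre of mass of very short time frames and bins frames into gates. It is a generic illustration under my own assumptions, not any specific method covered by the review.

```python
import numpy as np

def respiratory_gates(frames, n_gates=8):
    """Toy data-driven gating: derive a surrogate respiratory signal from the axial
    centre of mass of short time frames, then bin frames by signal amplitude.
    frames: array of shape (n_frames, nz, ny, nx) holding count/activity images."""
    nz = frames.shape[1]
    slice_counts = frames.sum(axis=(2, 3))                      # (n_frames, nz)
    com = (slice_counts * np.arange(nz)).sum(axis=1) / np.maximum(
        slice_counts.sum(axis=1), 1e-9)                          # axial centre of mass
    edges = np.quantile(com, np.linspace(0, 1, n_gates + 1))     # amplitude bin edges
    gate = np.clip(np.digitize(com, edges[1:-1]), 0, n_gates - 1)
    return com, gate                                             # surrogate signal, gate per frame
```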
Affiliation(s)
- Frederic Lamare: Nuclear Medicine Department, University Hospital Centre Bordeaux, Hospital Group South, Bordeaux, Nouvelle-Aquitaine 33604, France
- Alexandre Bousse: LaTIM, INSERM UMR1101, Université de Bretagne Occidentale, Brest, Bretagne 29285, France
- Kris Thielemans: Institute of Nuclear Medicine, University College London, UCL Hospital, Tower 5, 235 Euston Road, London NW1 2BU, United Kingdom
- Chi Liu: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, PO Box 208048, 801 Howard Avenue, New Haven, CT 06520-8042, United States
- Thibaut Merlin: LaTIM, INSERM UMR1101, Université de Bretagne Occidentale, Brest, Bretagne 29285, France
- Hadi Fayad: Weill Cornell Medicine - Qatar, Doha, Qatar
- Dimitris Visvikis: LaTIM, INSERM UMR1101, Université de Bretagne Occidentale, Brest, Bretagne 29285, France
6
Besson FL, Fernandez B, Faure S, Mercier O, Seferian A, Mussot S, Levy A, Parent F, Bulifon S, Jais X, Montani D, Mitilian D, Fadel E, Planchard D, Ghigna-Bellinzoni MR, Comtat C, Lebon V, Durand E. Fully Integrated Quantitative Multiparametric Analysis of Non-Small Cell Lung Cancer at 3-T PET/MRI: Toward One-Stop-Shop Tumor Biological Characterization at the Supervoxel Level. Clin Nucl Med 2021; 46:e440-e447. PMID: 34374682; DOI: 10.1097/rlu.0000000000003680.
Abstract
INTRODUCTION: The aim of this study was to assess the feasibility of a fully integrated multiparametric imaging framework for characterizing non-small cell lung cancer (NSCLC) at 3-T PET/MRI. PATIENTS AND METHODS: An 18F-FDG PET/MRI multiparametric imaging framework was developed and prospectively applied to 11 biopsy-proven NSCLC patients. For each tumor, 12 parametric maps were generated, including PET full kinetic modeling, apparent diffusion coefficient, T1/T2 relaxation times, and DCE full kinetic modeling. Gaussian mixture model-based clustering was applied at the whole-dataset level to define supervoxels with similar multidimensional PET/MRI behaviors. Taking the multidimensional voxel behaviors as input and the supervoxel class as output, a machine learning procedure was then trained and validated voxelwise to reveal the dominant PET/MRI characteristics of these supervoxels at the whole-dataset and individual-tumor levels. RESULTS: Gaussian mixture model-based clustering applied at the whole-dataset level (17,316 voxels) found 3 main multidimensional behaviors underpinned by the 12 PET/MRI quantitative parameters. Four dominant PET/MRI parameters of clinical relevance (PET: k2, k3; DCE: ve, vp) predicted the overall supervoxel behavior with 97% accuracy (SD, 0.7; 10-fold cross-validation). At the individual tumor level, these dimensionality-reduced supervoxel maps showed a mean discrepancy of 16.7% compared with the original ones. CONCLUSIONS: One-stop-shop PET/MRI multiparametric quantitative analysis of NSCLC is clinically feasible. Both PET and MRI parameters are useful for characterizing tumor behavior at the supervoxel level. In the era of precision medicine, the full capabilities of PET/MRI would give further insight into the characterization of NSCLC behavior, opening new avenues toward image-based personalized medicine in this field.
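The clustering-then-classification pipeline described above can be sketched in a few lines of scikit-learn. The data below are synthetic, the column indices standing in for k2, k3, ve and vp are arbitrary, and the random forest is a generic stand-in because the abstract does not name the classifier used.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per voxel, one column per parametric map
# (e.g. K1, k2, k3, ADC, T1, T2, Ktrans, ve, vp, ...). Values are synthetic here.
rng = np.random.default_rng(0)
X = rng.normal(size=(17316, 12))

# Cluster voxels into a small number of multidimensional "behaviours" (supervoxel classes).
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(X)
labels = gmm.predict(X)

# Ask how well a reduced parameter subset predicts the supervoxel class
# (columns [1, 2, 7, 8] are arbitrary stand-ins for k2, k3, ve, vp).
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X[:, [1, 2, 7, 8]], labels, cv=10)
print(f"10-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```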
Affiliation(s)
- Sylvain Faure: Laboratoire de Mathématiques d'Orsay, CNRS, Université Paris-Saclay, Orsay
- Olaf Mercier: Department of Thoracic and Vascular Surgery and Heart-Lung Transplantation, Marie Lannelongue Hospital
- Sacha Mussot: Department of Thoracic and Vascular Surgery and Heart-Lung Transplantation, Marie Lannelongue Hospital
- Delphine Mitilian: Department of Thoracic and Vascular Surgery and Heart-Lung Transplantation, Marie Lannelongue Hospital
- Elie Fadel: Department of Thoracic and Vascular Surgery and Heart-Lung Transplantation, Marie Lannelongue Hospital
- David Planchard: Oncology, Institut d'Oncologie Thoracique, Gustave Roussy, Université Paris Saclay, Villejuif
7
Xie N, Gong K, Guo N, Qin Z, Wu Z, Liu H, Li Q. Rapid high-quality PET Patlak parametric image generation based on direct reconstruction and temporal nonlocal neural network. Neuroimage 2021; 240:118380. PMID: 34252526; DOI: 10.1016/j.neuroimage.2021.118380.
Abstract
Parametric imaging based on dynamic positron emission tomography (PET) has wide applications in neurology. Compared to indirect methods, direct reconstruction methods, which reconstruct parametric images directly from the raw PET data, achieve superior image quality thanks to better noise modeling and the richer information extracted from the raw data. For low-dose scenarios, the advantages of direct methods are even more pronounced. However, the wide adoption of direct reconstruction is impeded by its excessive computational demand and the limited accessibility of raw data. In addition, motion modeling within dynamic PET image reconstruction raises further computational challenges for direct reconstruction methods. In this work, we focused on the 18F-FDG Patlak model and proposed a data-driven approach that estimates motion-corrected, full-dose direct Patlak images from the dynamic PET reconstruction series, based on a novel temporal non-local convolutional neural network. During network training, direct reconstruction with motion correction based on full-dose dynamic PET sinograms was performed to obtain the training labels, while the reconstructed full-dose/low-dose dynamic PET images were supplied as the network input. In addition, a temporal non-local block operating on the dynamic PET images was proposed to better recover structural information and reduce image noise. During testing, the proposed network directly outputs high-quality Patlak parametric images from the full-dose/low-dose dynamic PET images in seconds. Experiments based on 15 full-dose and 15 low-dose 18F-FDG brain datasets were conducted and analyzed to validate the feasibility of the proposed framework. Results show that the proposed framework generates better image quality than the reference methods.
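For intuition about what a "temporal non-local" operation does, the NumPy sketch below updates every time frame as a similarity-weighted combination of all frames. It is a generic embedded-Gaussian non-local block written under my own assumptions; the paper's block uses learned embeddings inside a trained network, so details will differ.

```python
import numpy as np

def temporal_nonlocal(x):
    """Toy temporal non-local operation on a dynamic series of feature maps.
    x has shape (T, C, H, W): each frame is replaced by a softmax-weighted
    combination of all frames, weighted by frame-to-frame similarity."""
    T, C, H, W = x.shape
    feats = x.reshape(T, -1)                          # (T, C*H*W): one vector per frame
    sim = feats @ feats.T / np.sqrt(feats.shape[1])   # (T, T) pairwise similarity
    w = np.exp(sim - sim.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                 # softmax over the time axis
    out = (w @ feats).reshape(T, C, H, W)
    return x + out                                    # residual connection
```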
Affiliation(s)
- Nuobei Xie: College of Optical Science and Engineering, Zhejiang University, 38 Zheda Road, Building 3, Hangzhou 310027, China; Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, 55 Fruit St, Boston, MA 02114, United States
- Kuang Gong: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, 55 Fruit St, Boston, MA 02114, United States
- Ning Guo: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, 55 Fruit St, Boston, MA 02114, United States
- Zhixing Qin: Department of Nuclear Medicine, First Hospital of Shanxi Medical University, Taiyuan, Shanxi, China
- Zhifang Wu: Department of Nuclear Medicine, First Hospital of Shanxi Medical University, Taiyuan, Shanxi, China
- Huafeng Liu: College of Optical Science and Engineering, Zhejiang University, 38 Zheda Road, Building 3, Hangzhou 310027, China
- Quanzheng Li: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, 55 Fruit St, Boston, MA 02114, United States
8
Kyme AZ, Fulton RR. Motion estimation and correction in SPECT, PET and CT. Phys Med Biol 2021; 66. PMID: 34102630; DOI: 10.1088/1361-6560/ac093b.
Abstract
Patient motion impacts single photon emission computed tomography (SPECT), positron emission tomography (PET) and X-ray computed tomography (CT) by giving rise to projection data inconsistencies that can manifest as reconstruction artifacts, thereby degrading image quality and compromising accurate image interpretation and quantification. Methods to estimate and correct for patient motion in SPECT, PET and CT have attracted considerable research effort over several decades. The aims of this effort have been two-fold: to estimate relevant motion fields characterizing the various forms of voluntary and involuntary motion; and to apply these motion fields within a modified reconstruction framework to obtain motion-corrected images. The aims of this review are to outline the motion problem in medical imaging and to critically review published methods for estimating and correcting for the relevant motion fields in clinical and preclinical SPECT, PET and CT. Despite many similarities in how motion is handled between these modalities, utility and applications vary based on differences in temporal and spatial resolution. Technical feasibility has been demonstrated in each modality for both rigid and non-rigid motion, but clinical feasibility remains an important target. There is considerable scope for further developments in motion estimation and correction, and particularly in data-driven methods that will aid clinical utility. State-of-the-art machine learning methods may have a unique role to play in this context.
Affiliation(s)
- Andre Z Kyme: School of Biomedical Engineering, The University of Sydney, Sydney, New South Wales, Australia
- Roger R Fulton: Sydney School of Health Sciences, The University of Sydney, Sydney, New South Wales, Australia
9
Espinós-Morató H, Cascales-Picó D, Vergara M, Hernández-Martínez Á, Benlloch Baviera JM, Rodríguez-Álvarez MJ. Simulation Study of a Frame-Based Motion Correction Algorithm for Positron Emission Imaging. Sensors (Basel) 2021; 21:2608. PMID: 33917742; PMCID: PMC8068167; DOI: 10.3390/s21082608.
Abstract
Positron emission tomography (PET) is a functional non-invasive imaging modality that uses radioactive substances (radiotracers) to measure changes in metabolic processes. Advances in scanner technology and data acquisition in the last decade have led to the development of more sophisticated PET devices with good spatial resolution (1-3 mm full width at half maximum (FWHM)). However, involuntary motion of the patient inside the scanner leads to image degradation and, potentially, misdiagnosis. The adverse effect of motion on the reconstructed image increases as the spatial resolution of current scanners continues to improve. To correct this effect, motion correction techniques are becoming increasingly popular and widely studied. This work presents a simulation study of image motion correction using a frame-based algorithm. The method cuts the data acquired from the scanner into frames, taking into account the size of the object under study, which allows working with low statistical information without losing image quality. The frames are then registered using a multi-level spatio-temporal registration. To validate the results, several performance tests are applied to a set of simulated moving phantoms. The results show that the method minimizes intra-frame motion, improves the signal intensity over the background compared with other methods in the literature, produces excellent similarity with the ground-truth (static) image, and is able to establish a limit on the patient-injected dose when some prior knowledge of the lesion is available.
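The frame-based idea of registering short frames back to a reference can be illustrated with a much simpler, translation-only scheme based on FFT phase correlation. This sketch is a stand-in under my own assumptions and is not the multi-level spatio-temporal registration of the paper.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def phase_correlation_shift(ref, mov):
    """Estimate the integer translation that aligns 2D frame `mov` to `ref`
    using FFT phase correlation."""
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    r = np.fft.ifft2(R / np.maximum(np.abs(R), 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy
    dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
    return dy, dx

def align_frames(frames):
    """Register every frame to the first one and return the motion-corrected
    stack plus the per-frame shifts (translation-only, for illustration)."""
    ref = frames[0]
    out, shifts = [ref], [(0, 0)]
    for f in frames[1:]:
        dy, dx = phase_correlation_shift(ref, f)
        out.append(nd_shift(f, (dy, dx), order=1, mode="nearest"))
        shifts.append((dy, dx))
    return np.stack(out), shifts
```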
Affiliation(s)
- Héctor Espinós-Morató: Instituto de Instrumentación para Imagen Molecular (i3M), Centro Mixto CSIC—Universitat Politècnica de València, 46022 Valencia, Spain
- David Cascales-Picó: Instituto de Instrumentación para Imagen Molecular (i3M), Centro Mixto CSIC—Universitat Politècnica de València, 46022 Valencia, Spain
- Marina Vergara: Instituto de Instrumentación para Imagen Molecular (i3M), Centro Mixto CSIC—Universitat Politècnica de València, 46022 Valencia, Spain; Department of Imaging and Pathology, Division of Nuclear Medicine, KU Leuven, 3000 Leuven, Belgium
- Ángel Hernández-Martínez: Instituto de Instrumentación para Imagen Molecular (i3M), Centro Mixto CSIC—Universitat Politècnica de València, 46022 Valencia, Spain
- José María Benlloch Baviera: Instituto de Instrumentación para Imagen Molecular (i3M), Centro Mixto CSIC—Universitat Politècnica de València, 46022 Valencia, Spain
- María José Rodríguez-Álvarez: Instituto de Instrumentación para Imagen Molecular (i3M), Centro Mixto CSIC—Universitat Politècnica de València, 46022 Valencia, Spain
10
Hu J, Panin V, Smith AM, Spottiswoode B, Shah V, von Gall CA, Baker M, Howe W, Kehren F, Casey M, Bendriem B. Design and Implementation of Automated Clinical Whole Body Parametric PET With Continuous Bed Motion. IEEE Trans Radiat Plasma Med Sci 2020. DOI: 10.1109/trpms.2020.2994316.
11
Petibon Y, Alpert NM, Ouyang J, Pizzagalli DA, Cusin C, Fava M, El Fakhri G, Normandin MD. PET imaging of neurotransmission using direct parametric reconstruction. Neuroimage 2020; 221:117154. PMID: 32679252; PMCID: PMC7800040; DOI: 10.1016/j.neuroimage.2020.117154.
Abstract
Receptor ligand-based dynamic positron emission tomography (PET) permits the measurement of neurotransmitter release in the human brain. For single-scan paradigms, the conventional method of estimating changes in neurotransmitter levels relies on fitting a pharmacokinetic model to activity concentration histories extracted after PET image reconstruction. However, due to the statistical fluctuations of activity concentration data at the voxel scale, parametric images computed using this approach often exhibit low signal-to-noise ratio, impeding characterization of neurotransmitter release. Numerous studies have shown that direct parametric reconstruction (DPR) approaches, which combine image reconstruction and kinetic analysis in a unified framework, can improve the signal-to-noise ratio of parametric mapping. However, there is little experience with DPR in imaging of neurotransmission, and the performance of the approach in this application had not previously been evaluated in humans. In this report, we present and evaluate a DPR methodology that computes 3-D distributions of ligand transport, binding potential (BPND) and neurotransmitter release magnitude (γ) from a dynamic sequence of PET sinograms. The technique employs the linear simplified reference region model (LSRRM) of Alpert et al. (2003), an extension of the simplified reference region model that incorporates time-varying binding parameters due to radioligand displacement by neurotransmitter release. Estimation of parametric images is performed by gradient-based optimization of a Poisson log-likelihood function incorporating LSRRM kinetics and accounting for the effects of head movement, attenuation, detector sensitivity, and random and scattered coincidences. A 11C-raclopride simulation study showed that the proposed approach substantially reduces the bias and variance of voxel-wise γ estimates compared to standard methods. Moreover, simulations showed that, with the proposed DPR estimator, detection of release could be made more reliable and/or conducted using a smaller sample size. Likewise, images of BPND computed using DPR had substantially improved bias and variance properties. Application of the method in human subjects was demonstrated using 11C-raclopride dynamic scans and a reward task, confirming the improved quality of the estimated parametric images using the proposed approach.
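As background for the LSRRM kinetics mentioned above, here is a sketch of the conventional image-domain version of the model fitted by ordinary least squares to a single time-activity curve. The activation response shape, its decay constant, and all names are assumptions for illustration; the paper instead embeds these kinetics in a Poisson log-likelihood over sinogram data with corrections for motion, attenuation, sensitivity and randoms/scatter.

```python
import numpy as np

def lsrrm_fit(ct, cr, t, t_act, tau=0.03):
    """Least-squares fit of an LSRRM-style operational equation to a tissue TAC `ct`,
    given a reference-region TAC `cr` at mid-frame times `t` (minutes):
        ct(t) ~ R1*cr(t) + k2*int(cr) - k2a*int(ct) - gamma*int(ct(u)*h(u)),
    with h(u) = exp(-tau*(u - t_act)) for u >= t_act and 0 before activation."""
    def cumint(y):  # cumulative trapezoidal integral on the grid t
        return np.concatenate(([0.0], np.cumsum(0.5 * np.diff(t) * (y[1:] + y[:-1]))))
    h = np.where(t >= t_act, np.exp(-tau * (t - t_act)), 0.0)
    A = np.column_stack([cr, cumint(cr), -cumint(ct), -cumint(ct * h)])
    params, *_ = np.linalg.lstsq(A, ct, rcond=None)
    r1, k2, k2a, gamma = params
    bp_nd = k2 / k2a - 1.0          # baseline binding potential under reference-region assumptions
    return r1, k2, k2a, gamma, bp_nd
```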
Affiliation(s)
- Yoann Petibon: Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Nathaniel M Alpert: Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Jinsong Ouyang: Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Diego A Pizzagalli: Center for Depression, Anxiety & Stress Research, McLean Hospital and Harvard Medical School, Belmont, MA, USA
- Cristina Cusin: Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Maurizio Fava: Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Georges El Fakhri: Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Marc D Normandin: Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
12
Gallezot JD, Lu Y, Naganawa M, Carson RE. Parametric Imaging With PET and SPECT. IEEE Trans Radiat Plasma Med Sci 2020. DOI: 10.1109/trpms.2019.2908633.
13
Zhu Y, Zhu X. MRI-Driven PET Image Optimization for Neurological Applications. Front Neurosci 2019; 13:782. PMID: 31417346; PMCID: PMC6684790; DOI: 10.3389/fnins.2019.00782.
Abstract
Positron emission tomography (PET) and magnetic resonance imaging (MRI) are established imaging modalities for the study of neurological disorders such as epilepsy, dementia and psychiatric disorders. Since the two modalities differ in imaging principle and physical performance, each technique has its own advantages and disadvantages over the other. To exploit their mutually complementary information, there is a need for the fusion of PET and MRI. This combined dual modality (either sequential or simultaneous) can provide excellent soft-tissue contrast of brain tissue, flexible acquisition parameters, and minimized radiation exposure. The most unique advantage of PET/MRI lies in MRI-based mitigation of the inherent limitations of PET, such as motion artifacts, the partial volume effect (PVE) and invasive procedures in quantitative analysis. Head motion during scanning significantly deteriorates the effective resolution of PET images, especially for lengthy dynamic scans. A hybrid PET/MRI device can provide motion correction (MC) for PET data through MRI information acquired simultaneously. Regarding the PVE associated with limited spatial resolution, the processing and reconstruction of PET data can be further optimized using MRI acquired either sequentially or simultaneously. Quantitative analysis of dynamic PET data mainly relies on an invasive arterial blood sampling procedure to obtain the arterial input function (AIF). An image-derived input function (IDIF), which avoids arterial cannulation, can serve as a potential alternative estimate of the AIF. Compared with using PET data alone, combining anatomical or functional information from MRI has been demonstrated to improve the accuracy of the IDIF approach. Yet, due to the interference and inherent disparities between the two modalities, these MRI-based methods for optimizing PET images still face many technical challenges. This review discusses the most recent progress, current challenges and future directions of MRI-driven PET data optimization for neurological applications, with either sequential or simultaneous acquisition.
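To make the IDIF concept concrete, here is a deliberately minimal sketch: average the dynamic PET activity inside an MRI-derived vascular mask for every frame. Real IDIF methods additionally require partial-volume and spill-over corrections, which are omitted; all names are illustrative.

```python
import numpy as np

def image_derived_input_function(dynamic_pet, vessel_mask, frame_mid_times):
    """Toy image-derived input function (IDIF): mean activity inside an MRI-derived
    vascular mask (e.g. a carotid segmentation resampled to PET space) per frame.
    dynamic_pet: (T, Z, Y, X) activity [Bq/mL]; vessel_mask: (Z, Y, X) boolean."""
    idif = np.array([frame[vessel_mask].mean() for frame in dynamic_pet])
    return frame_mid_times, idif
```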
Affiliation(s)
- Yuankai Zhu: Department of Nuclear Medicine, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xiaohua Zhu: Department of Nuclear Medicine, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
14
Chen Z, Sforazzini F, Baran J, Close T, Shah NJ, Egan GF. MR-PET head motion correction based on co-registration of multicontrast MR images. Hum Brain Mapp 2019; 42:4081-4091. PMID: 30604898; DOI: 10.1002/hbm.24497.
Abstract
Head motion is a major source of image artefacts in neuroimaging studies and can degrade the quantitative accuracy of reconstructed PET images. Simultaneous magnetic resonance-positron emission tomography (MR-PET) makes it possible to estimate head motion from high-resolution MR images and then correct motion artefacts in PET images. In this article, we introduce a fully automated PET motion correction method, MR-guided MAF, based on the co-registration of multicontrast MR images. The performance of the MR-guided MAF method was evaluated using MR-PET data acquired from a cohort of ten healthy participants who received a slow infusion of fluorodeoxyglucose ([18F]FDG). Compared with conventional methods, MR-guided PET image reconstruction reduces artefacts introduced by head motion and improves the sharpness and quantitative accuracy of PET images acquired on simultaneous MR-PET scanners. The fully automated motion estimation method has been implemented as a publicly available web service.
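The multiple-acquisition-frame idea behind MR-guided correction can be sketched as: given rigid head-motion transforms estimated from the co-registered MR images, resample each reconstructed PET frame into a common reference space and combine. The sketch below assumes the transforms are already known (their estimation is the paper's contribution and is not shown); the function and argument names are hypothetical.

```python
import numpy as np
from scipy.ndimage import affine_transform

def realign_pet_frames(pet_frames, ref_to_frame):
    """Toy MAF-style correction: resample each PET frame into reference space and average.
    pet_frames: list/array of 3D frames; ref_to_frame[i]: 4x4 matrix mapping reference-space
    voxel coordinates into frame i's voxel coordinates (identity = no motion)."""
    aligned = []
    for frame, T in zip(pet_frames, ref_to_frame):
        # scipy convention: output[o] = input[matrix @ o + offset]
        aligned.append(affine_transform(frame, T[:3, :3], offset=T[:3, 3], order=1))
    return np.mean(aligned, axis=0)
```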
Affiliation(s)
- Zhaolin Chen: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
- Jakub Baran: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Department of Biophysics, Faculty of Mathematics and Natural Sciences, University of Rzeszów, Rzeszów, Poland
- Thomas Close: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Australian National Imaging Facility, St Lucia, Australia
- Nadim Jon Shah: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Institute of Neuroscience and Medicine - 4, Forschungszentrum Jülich GmbH, Jülich, Germany
- Gary F Egan: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University, Clayton, Australia; Monash Institute of Cognitive and Clinical Neuroscience, Monash University, Melbourne, Australia
15
Hutton BF, Erlandsson K, Thielemans K. Advances in clinical molecular imaging instrumentation. Clin Transl Imaging 2018. DOI: 10.1007/s40336-018-0264-0.
16
Markiewicz PJ, Ehrhardt MJ, Erlandsson K, Noonan PJ, Barnes A, Schott JM, Atkinson D, Arridge SR, Hutton BF, Ourselin S. NiftyPET: a High-throughput Software Platform for High Quantitative Accuracy and Precision PET Imaging and Analysis. Neuroinformatics 2018; 16:95-115. PMID: 29280050; PMCID: PMC5797201; DOI: 10.1007/s12021-017-9352-y.
Abstract
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
Affiliation(s)
- Pawel J Markiewicz: Translational Imaging Group, CMIC, Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Matthias J Ehrhardt: Department for Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK
- Kjell Erlandsson: Institute of Nuclear Medicine, University College London, London, UK
- Philip J Noonan: Translational Imaging Group, CMIC, Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Anna Barnes: Institute of Nuclear Medicine, University College London, London, UK
- David Atkinson: Centre for Medical Imaging, University College London, London, UK
- Simon R Arridge: Centre for Medical Image Computing (CMIC), University College London, London, UK
- Brian F Hutton: Institute of Nuclear Medicine, University College London, London, UK
- Sebastien Ourselin: Translational Imaging Group, CMIC, Department of Medical Physics and Biomedical Engineering, University College London, London, UK
17
Germino M, Gallezot JD, Yan J, Carson RE. Direct reconstruction of parametric images for brain PET with event-by-event motion correction: evaluation in two tracers across count levels. Phys Med Biol 2017; 62:5344-5364. PMID: 28504644; PMCID: PMC5783541; DOI: 10.1088/1361-6560/aa731f.
Abstract
Parametric images for dynamic positron emission tomography (PET) are typically generated by an indirect method, i.e. reconstructing a time series of emission images and then fitting a kinetic model to each voxel time-activity curve. Alternatively, 'direct reconstruction' incorporates the kinetic model into the reconstruction algorithm itself, directly producing parametric images from projection data. Direct reconstruction has been shown to achieve parametric images with lower standard error than the indirect method. Here, we present direct reconstruction for brain PET using event-by-event motion correction of list-mode data, applied to two tracers. Event-by-event motion correction was implemented for direct reconstruction in the Parametric Motion-compensation OSEM List-mode Algorithm for Resolution-recovery reconstruction. The direct implementation was tested on simulated and human datasets with the tracers [11C]AFM (serotonin transporter) and [11C]UCB-J (synaptic density), which follow the 1-tissue compartment model. Rigid head motion was tracked with the Vicra system. Parametric images of K1 and distribution volume (VT = K1/k2) were compared to those generated by the indirect method by regional coefficient of variation (CoV). Performance across count levels was assessed using sub-sampled datasets. For simulated and real datasets at high counts, the two methods estimated K1 and VT with comparable accuracy. At lower count levels, the direct method was substantially more robust to outliers than the indirect method. Compared to the indirect method, direct reconstruction reduced regional K1 CoV by 35-48% (simulated dataset), 39-43% ([11C]AFM dataset) and 30-36% ([11C]UCB-J dataset) across count levels (averaged over regions at matched iteration); VT CoV was reduced by 51-58%, 54-60% and 30-46%, respectively. Motion correction played an important role in the dataset with larger motion: correction increased regional VT by 51% on average in the [11C]UCB-J dataset. Direct reconstruction of dynamic brain PET with event-by-event motion correction is achievable and dramatically more robust to noise in VT images than the indirect method.
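For reference, the 1-tissue compartment model and the VT = K1/k2 relationship used above can be written out in a few lines. This is the standard indirect-style fit on a single TAC with a synthetic input function; the parameter values, grid and names are illustrative and the paper's direct, sinogram-domain estimator is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_tissue_model(t, k1, k2, cp):
    """One-tissue compartment model: C_T(t) = K1 * exp(-k2*t) convolved with Cp(t),
    evaluated by discrete convolution on a uniform time grid t (minutes)."""
    dt = t[1] - t[0]
    return k1 * np.convolve(cp, np.exp(-k2 * t), mode="full")[: len(t)] * dt

def fit_vt(t, ct, cp):
    """Indirect-style ROI/voxel fit: estimate K1 and k2 from a measured TAC and a
    plasma input function, then report the distribution volume VT = K1/k2."""
    f = lambda tt, k1, k2: one_tissue_model(tt, k1, k2, cp)
    (k1, k2), _ = curve_fit(f, t, ct, p0=(0.1, 0.05), bounds=(1e-4, 5.0))
    return k1, k2, k1 / k2

# Example on a uniform grid with a synthetic bolus input.
t = np.arange(0, 90, 0.5)
cp = 50 * t * np.exp(-t / 2.0)
ct = one_tissue_model(t, k1=0.3, k2=0.1, cp=cp)
print(fit_vt(t, ct, cp))   # approximately (0.3, 0.1, 3.0)
```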
Affiliation(s)
- Mary Germino: Department of Biomedical Engineering, Yale University, New Haven, CT, United States of America