1
Hopson JB, Ellis S, Flaus A, McGinnity CJ, Neji R, Reader AJ, Hammers A. Clinical and Deep-Learned Evaluation of MR-Guided Self-Supervised PET Reconstruction. IEEE Trans Radiat Plasma Med Sci 2025;9:337-346. [PMID: 40008384] [PMCID: PMC7617360] [DOI: 10.1109/trpms.2024.3496779]
Abstract
Reduced dose Positron Emission Tomography (PET) lowers the radiation dose to patients and reduces costs. Lower count data, however, degrades reconstructed image quality. Advanced reconstruction methods help mitigate image quality losses, but it is important to assess the resulting images from a clinical perspective. Two experienced clinicians assessed four PET reconstruction algorithms for [18F]FDG brain data, compared to a clinical standard reference (Maximum-Likelihood Expectation-Maximization (MLEM)), based on seven clinical image quality metrics: global quality rating, pattern recognition, diagnostic confidence (all on a scale of 0-4), sharpness, caudate-putamen separation, noise, and contrast (on a scale of 0-2). The reconstruction methods assessed were guided and unguided versions of self-supervised maximum a posteriori EM (MAPEM), where the guided version used the patient's MR image to control the smoothness penalty. For 3 of the 11 patient datasets reconstructed, post-smoothed versions of the MAPEM reconstructions were also considered, where the smoothing used the point-spread function from the resolution modelling. Statistically significant improvements were observed in sharpness, caudate-putamen separation, and contrast for self-supervised MR-guided MAPEM compared to MLEM. For example, MLEM scored between 1 and 1.1 out of 2 for sharpness, caudate-putamen separation, and contrast, whereas self-supervised MR-guided MAPEM scored between 1.5 and 1.75. In addition to the clinical evaluation, pre-trained Convolutional Neural Networks (CNNs) were used to assess the image quality of a further 62 images. The CNNs demonstrated trends similar to the clinicians', showing their potential as automated standalone observers. Both the clinical and CNN assessments suggest that, when using only 5% of the standard injected dose, self-supervised MR-guided MAPEM reconstruction matches the 100%-dose MLEM case for overall performance, making the images far more clinically useful than standard MLEM.
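For orientation, the two reconstruction families compared here follow standard update equations. Below is a minimal Python sketch contrasting an MLEM update with a One-Step-Late MAP-EM update using a weighted quadratic smoothness penalty; the toy 1D system model, the neighbourhood structure, and the uniform weights are illustrative assumptions rather than the paper's implementation (in MR guidance, the weights would be derived from the patient's MR image).

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((60, 30))                   # toy system matrix (bins x voxels)
x_true = np.ones(30); x_true[10:20] = 4.0  # simple hot-region phantom
y = rng.poisson(A @ x_true)                # noisy measured counts
sens = A.T @ np.ones(len(y))               # sensitivity image, A^T 1

def mlem_step(x):
    # Classic MLEM multiplicative update.
    return (x / sens) * (A.T @ (y / np.maximum(A @ x, 1e-12)))

def osl_mapem_step(x, w, beta=0.05):
    # Green's One-Step-Late MAP-EM with a weighted quadratic smoothness
    # penalty. In MR-guided MAPEM, w would be small across MR edges and
    # large in flat regions, so smoothing respects anatomical boundaries.
    # (np.roll gives circular boundaries; fine for a sketch.)
    grad_penalty = w * (2 * x - np.roll(x, 1) - np.roll(x, -1))
    return (x / (sens + beta * grad_penalty)) * (A.T @ (y / np.maximum(A @ x, 1e-12)))

x = np.ones(30)
w = np.ones(30)  # uniform weights here; MR guidance would modulate these
for _ in range(100):
    x = osl_mapem_step(x, w)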
Affiliation(s)
- Sam Ellis
- Department of Biomedical Engineering, King's College London
- Anthime Flaus
- King's College London & Guy's and St Thomas' PET Centre, King's College London
- Colm J McGinnity
- King's College London & Guy's and St Thomas' PET Centre, King's College London
- Radhouene Neji
- Department of Biomedical Engineering, King's College London
- Alexander Hammers
- King's College London & Guy's and St Thomas' PET Centre, King's College London
2
Pan B, Marsden PK, Reader AJ. Kinetic model-informed deep learning for multiplexed PET image separation. EJNMMI Phys 2024;11:56. [PMID: 38951271] [PMCID: PMC11555001] [DOI: 10.1186/s40658-024-00660-0]
Abstract
BACKGROUND Multiplexed positron emission tomography (mPET) imaging can measure physiological and pathological information from different tracers simultaneously in a single scan. Separating the multiplexed PET signals within a single scan is challenging because each tracer gives rise to indistinguishable 511 keV photon pairs, leaving no unique energy information to differentiate the source of each photon pair. METHODS Recently, many applications of deep learning for mPET image separation have concentrated on purely data-driven methods, e.g., training a neural network to separate mPET images into single-tracer dynamic/static images. These methods use over-parameterized networks with only a very weak inductive prior. In this work, we improve the inductive prior of the deep network by incorporating a general kinetic model based on spectral analysis. The model is incorporated, along with deep networks, into an unrolled image-space version of an iterative fully 4D PET reconstruction algorithm. RESULTS The performance of the proposed method was evaluated on a simulated brain image dataset for dual-tracer [18F]FDG+[11C]MET PET image separation. The results demonstrate that the proposed method can achieve separation performance comparable to that obtained with single-tracer imaging. In addition, the proposed method outperformed the model-based separation methods (the conventional voxel-wise multi-tracer compartment modeling method (v-MTCM) and the image-space dual-tracer version of the fully 4D PET image reconstruction algorithm (IS-F4D)), as well as a purely data-driven separation using a convolutional encoder-decoder (CED), with fewer training examples. CONCLUSIONS This work proposes a kinetic model-informed unrolled deep learning method for mPET image separation. In simulation studies, the method outperformed both the conventional v-MTCM method and a purely data-driven CED with less training data.
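The general kinetic model from spectral analysis that provides this inductive prior can be written in its conventional form; the notation below is the standard spectral-analysis formulation, not necessarily the paper's exact parameterization. The tissue time-activity curve is a non-negative sum of the input function convolved with exponential basis functions:

C_T(t) \;=\; \sum_{j=1}^{N} \phi_j \int_0^{t} C_p(\tau)\, e^{-\theta_j (t-\tau)}\, \mathrm{d}\tau, \qquad \phi_j \ge 0,\quad \theta_j \ge 0.

For staggered-injection dual-tracer data, the measured curve would then be modelled as the sum of two such single-tracer terms, e.g. C_{dual}(t) = C_T^{FDG}(t) + C_T^{MET}(t - \Delta t), with \Delta t the injection delay (an assumption about the acquisition protocol, stated here purely for illustration).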
Affiliation(s)
- Bolin Pan
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.
- Paul K Marsden
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
3
Sun X, Nong M, Meng F, Sun X, Jiang L, Li Z, Zhang P. Architecting the metabolic reprogramming survival risk framework in LUAD through single-cell landscape analysis: three-stage ensemble learning with genetic algorithm optimization. J Transl Med 2024;22:353. [PMID: 38622716] [PMCID: PMC11017668] [DOI: 10.1186/s12967-024-05138-2]
Abstract
Recent studies have increasingly revealed the connection between metabolic reprogramming and tumor progression. However, the specific impact of metabolic reprogramming on inter-patient heterogeneity and prognosis in lung adenocarcinoma (LUAD) still requires further exploration. Here, we introduced a cellular hierarchy framework based on a malignant and metabolic gene set, named malignant & metabolism reprogramming (MMR), to reanalyze 178,739 single-cell reference profiles. Furthermore, we proposed a three-stage ensemble learning pipeline, aided by a genetic algorithm (GA), for survival prediction across 9 LUAD cohorts (n = 2066). Throughout the pipeline for developing the three-stage MMR (3S-MMR) score, two training sets were used to avoid over-fitting, a gene-pairing method was used to remove batch effects, and the GA was harnessed to pinpoint the optimal combination of base learners. The novel 3S-MMR score reflects various aspects of LUAD biology, provides new insights into precision medicine for patients, and may serve as a generalizable predictor of prognosis and immunotherapy response. To facilitate clinical adoption of the 3S-MMR score, we developed an easy-to-use web tool for risk scoring as well as therapy stratification in LUAD patients. In summary, we have proposed and validated an ensemble learning model pipeline within the framework of metabolic reprogramming, offering potential insights for LUAD treatment and an effective approach for developing prognostic models for other diseases.
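As a concrete illustration of GA-optimized base-learner selection, here is a hypothetical Python sketch: a bitmask chromosome selects which base learners enter a soft-voting ensemble, and cross-validated accuracy stands in as the fitness. The paper optimizes a survival objective over the LUAD cohorts; the learners, GA settings, and scoring below are illustrative assumptions only.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=40, random_state=0)

learners = [("lr", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("gb", GradientBoostingClassifier(random_state=0)),
            ("svc", SVC(probability=True, random_state=0)),
            ("nb", GaussianNB())]

def fitness(mask):
    # Score the ensemble built from the base learners the mask selects.
    chosen = [learners[i] for i in range(len(mask)) if mask[i]]
    if not chosen:
        return 0.0
    ensemble = VotingClassifier(chosen, voting="soft")
    return cross_val_score(ensemble, X, y, cv=3).mean()

# Simple GA: elitism, tournament selection, uniform crossover, bit-flip mutation.
pop = rng.integers(0, 2, size=(12, len(learners)))
for gen in range(10):
    scores = np.array([fitness(m) for m in pop])
    next_pop = [pop[scores.argmax()].copy()]           # keep the best (elitism)
    while len(next_pop) < len(pop):
        i, j = rng.choice(len(pop), 2, replace=False)  # tournament of two
        a = pop[i] if scores[i] >= scores[j] else pop[j]
        i, j = rng.choice(len(pop), 2, replace=False)
        b = pop[i] if scores[i] >= scores[j] else pop[j]
        child = np.where(rng.random(len(learners)) < 0.5, a, b)  # crossover
        flip = rng.random(len(learners)) < 0.1                   # mutation
        next_pop.append(np.where(flip, 1 - child, child))
    pop = np.array(next_pop)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected learners:", [name for (name, _), keep in zip(learners, best) if keep])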
Affiliation(s)
- Xinti Sun
- Department of Cardiothoracic Surgery, Tianjin Medical University General Hospital, Tianjin, China
- Minyu Nong
- School of Clinical Medicine, Youjiang Medical University for Nationalities, Baise, Guangxi, China
- Fei Meng
- Department of Cardiothoracic Surgery, Tianjin Medical University General Hospital, Tianjin, China
- Xiaojuan Sun
- Department of Oncology, Qingdao University Affiliated Hospital, Qingdao, Shandong, China
- Lihe Jiang
- School of Clinical Medicine, Youjiang Medical University for Nationalities, Baise, Guangxi, China
- Zihao Li
- Department of Cardiothoracic Surgery, Tianjin Medical University General Hospital, Tianjin, China
- Peng Zhang
- Department of Cardiothoracic Surgery, Tianjin Medical University General Hospital, Tianjin, China.
4
Hopson JB, Neji R, Dunn JT, McGinnity CJ, Flaus A, Reader AJ, Hammers A. Pre-training via Transfer Learning and Pretext Learning a Convolutional Neural Network for Automated Assessments of Clinical PET Image Quality. IEEE Trans Radiat Plasma Med Sci 2023;7:372-381. [PMID: 37051163] [PMCID: PMC7614424] [DOI: 10.1109/trpms.2022.3231702]
Abstract
Positron emission tomography (PET) using a fraction of the usual injected dose would reduce the amount of radioligand needed, as well as the radiation dose to patients and staff, but would compromise reconstructed image quality. For performing the same clinical tasks with such images, a clinical (rather than purely numerical) image quality assessment is essential. This process can be automated with convolutional neural networks (CNNs). However, the scarcity of clinical quality readings is a challenge. We hypothesise that exploiting easily available quantitative information in pretext learning tasks, or using established pre-trained networks, could improve CNN performance for predicting clinical assessments from limited data. CNNs were pre-trained to predict injected dose from image patches extracted from eight real patient datasets, reconstructed using between 0.5% and 100% of the available data. Transfer learning with seven different patients was then used to predict three clinically scored quality metrics, each ranging from 0 to 3: global quality rating, pattern recognition, and diagnostic confidence. This was compared to pre-training via a VGG16 network at varying pre-training levels. Pre-training improved test performance for this task: the mean absolute error of 0.53 (compared with 0.87 without pre-training) was within clinical scoring uncertainty. Future work may include using the CNN to assess the performance of novel reconstruction methods.
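A hypothetical PyTorch sketch of the pretext-then-transfer idea: pre-train a small CNN to regress injected dose from image patches, then swap the head and fine-tune on scarce clinician quality scores. The architecture, patch sizes, and stand-in tensors are illustrative assumptions, not the paper's exact setup.

import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 1)  # pretext head: predict dose fraction

    def forward(self, x):
        return self.head(self.features(x))

model = PatchCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Pretext stage: dose-fraction labels are cheap to obtain for every patch.
patches = torch.randn(64, 1, 32, 32)  # stand-in PET image patches
dose = torch.rand(64, 1)              # known dose fractions (0-1)
loss = loss_fn(model(patches), dose)
opt.zero_grad(); loss.backward(); opt.step()

# Transfer stage: freeze the features, retrain a new head on the scarce
# clinician scores (0-3 scale).
for p in model.features.parameters():
    p.requires_grad = False
model.head = nn.Linear(32, 1)         # fresh head for the quality score
opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
scores = 3 * torch.rand(64, 1)        # stand-in clinician ratings
loss = loss_fn(model(patches), scores)
opt.zero_grad(); loss.backward(); opt.step()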
Affiliation(s)
- Joel T Dunn
- King's College London & Guy's and St Thomas' PET Centre, King's College London
- Colm J McGinnity
- King's College London & Guy's and St Thomas' PET Centre, King's College London
- Anthime Flaus
- King's College London & Guy's and St Thomas' PET Centre, King's College London
- Alexander Hammers
- King's College London & Guy's and St Thomas' PET Centre, King's College London
5
Hua S, Liu Q, Yin G, Guan X, Jiang N, Zhang Y. Research on 3D medical image surface reconstruction based on data mining and machine learning. Int J Intell Syst 2021. [DOI: 10.1002/int.22735]
Affiliation(s)
- Shanshan Hua
- School of Information Engineering, East China Jiaotong University, Nanchang, China
- Qi Liu
- School of Information Engineering, East China Jiaotong University, Nanchang, China
- Guanxiang Yin
- School of Information Engineering, East China Jiaotong University, Nanchang, China
- Xiaohui Guan
- The National Engineering Research Center for Bioengineering Drugs and the Technologies, Nanchang University, Nanchang, China
- Nan Jiang
- School of Information Engineering, East China Jiaotong University, Nanchang, China
- Yuejin Zhang
- School of Information Engineering, East China Jiaotong University, Nanchang, China
6
Abstract
The ability to map and estimate the activity of radiological source distributions in unknown three-dimensional environments has applications in the prevention and response to radiological accidents or threats as well as the enforcement and verification of international nuclear non-proliferation agreements. Such a capability requires well-characterized detector response functions, accurate time-dependent detector position and orientation data, a digitized representation of the surrounding 3D environment, and appropriate image reconstruction and uncertainty quantification methods. We have previously demonstrated 3D mapping of gamma-ray emitters with free-moving detector systems on a relative intensity scale using a technique called Scene Data Fusion (SDF). Here we characterize the detector response of a multi-element gamma-ray imaging system using experimentally benchmarked Monte Carlo simulations and perform 3D mapping on an absolute intensity scale. We present experimental reconstruction results from hand-carried and airborne measurements with point-like and distributed sources in known configurations, demonstrating quantitative SDF in complex 3D environments.
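In generic terms (the notation here is assumed for illustration, not drawn from the paper), quantitative reconstruction of this kind poses Poisson maximum likelihood over a voxelised scene: with y_i the counts in measurement i, r_{ij} the pose- and time-dependent detector response to voxel j (characterized by benchmarked Monte Carlo), w_j the activity in voxel j, and b_i background,

\lambda_i = \sum_j r_{ij}\, w_j + b_i, \qquad \hat{w} = \arg\max_{w \ge 0} \sum_i \big( y_i \log \lambda_i - \lambda_i \big),

which the usual ML-EM update solves: w_j^{(k+1)} = \frac{w_j^{(k)}}{\sum_i r_{ij}} \sum_i r_{ij}\, \frac{y_i}{\lambda_i^{(k)}}. Because the r_{ij} are on an absolute efficiency scale, the recovered w_j carry absolute activity units rather than relative intensities.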
7
Munoz C, Ellis S, Nekolla SG, Kunze KP, Vitadello T, Neji R, Botnar RM, Schnabel JA, Reader AJ, Prieto C. MR-guided motion-corrected PET image reconstruction for cardiac PET-MR. J Nucl Med 2021;62. [PMID: 34049978] [PMCID: PMC8612202] [DOI: 10.2967/jnumed.120.254235]
Abstract
Simultaneous PET-MR imaging has shown potential for the comprehensive assessment of myocardial health from a single examination. Furthermore, MR-derived respiratory motion information has been shown to improve PET image quality when incorporated into the PET image reconstruction. Separately, MR-based anatomically guided PET image reconstruction has been shown to perform effective denoising, but this has so far been demonstrated mainly in brain imaging. To date, the combined benefits of motion compensation and anatomical guidance have not been demonstrated for myocardial PET-MR imaging. This work addresses this by proposing a single cardiac PET-MR image reconstruction framework which fully utilises MR-derived information to allow both motion compensation and anatomical guidance within the reconstruction. Methods: Fifteen patients underwent an 18F-FDG cardiac PET-MR scan with a previously introduced acquisition framework. The MR data processing and image reconstruction pipeline produces respiratory motion fields and a high-resolution respiratory motion-corrected MR image with good tissue contrast. This MR-derived information was then included in a respiratory motion-corrected, cardiac-gated, anatomically guided image reconstruction of the simultaneously acquired PET data. Reconstructions were evaluated by measuring myocardial contrast and noise, and compared to images from several intermediate methods that use the components of the proposed framework separately. Results: Including respiratory motion correction, cardiac gating, and anatomical guidance significantly increased contrast. In particular, myocardium-to-blood pool contrast increased by 143% on average (p<0.0001) compared to conventional uncorrected, non-guided PET images. Furthermore, anatomical guidance significantly reduced image noise, by 16.1% (p<0.0001), compared to non-guided image reconstruction. Conclusion: The proposed framework for MR-derived motion compensation and anatomical guidance of cardiac PET data significantly improved image quality compared to alternative reconstruction methods. Each component of the reconstruction pipeline was shown to have a positive impact on the final image quality. These improvements have the potential to improve clinical interpretability and diagnosis based on cardiac PET-MR images.
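In outline (notation assumed for illustration; the paper's exact formulation may differ), a motion-corrected, anatomically guided reconstruction of this kind estimates a single reference image x from all respiratory gates g by composing the PET system model A with MR-derived warps W_g, and regularizing with an MR-guided penalty R_{MR}:

\hat{x} = \arg\max_{x \ge 0} \sum_g \sum_i \Big( y_{g,i} \log \big[ A W_g x + b_g \big]_i - \big[ A W_g x + b_g \big]_i \Big) - \beta\, R_{\mathrm{MR}}(x),

so no counts are discarded: every gate contributes to the same motion-free image, while R_{MR} (e.g. a penalty weighted by similarity in the motion-corrected MR image) performs the anatomically guided denoising.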
Affiliation(s)
- Camila Munoz
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Sam Ellis
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Stephan G. Nekolla
- Nuklearmedizinische Klinik und Poliklinik, Technical University of Munich, Munich, Germany
- Karl P. Kunze
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- MR Research Collaborations, Siemens Healthcare, Frimley, United Kingdom
- Teresa Vitadello
- Department of Internal Medicine I, University Hospital Rechts der Isar, School of Medicine, Technical University of Munich, Munich, Germany
- Radhouene Neji
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- MR Research Collaborations, Siemens Healthcare, Frimley, United Kingdom
- Rene M. Botnar
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Escuela de Ingeniería, Pontificia Universidad Católica de Chile, Santiago, Chile
- Julia A. Schnabel
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Andrew J. Reader
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Claudia Prieto
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Escuela de Ingeniería, Pontificia Universidad Católica de Chile, Santiago, Chile
8
Reader AJ, Corda G, Mehranian A, Costa-Luis CD, Ellis S, Schnabel JA. Deep Learning for PET Image Reconstruction. IEEE Trans Radiat Plasma Med Sci 2021. [DOI: 10.1109/trpms.2020.3014786]