1. Jiang Y, Wang Q, Feng J, Yin G, Han P, Ruan Q, Zhang J. A novel Al18F-labelled NOTA-modified ubiquicidin 29-41 derivative as a bacterial infection PET imaging agent. Eur J Med Chem 2025;289:117482. PMID: 40058182. DOI: 10.1016/j.ejmech.2025.117482.
Abstract
The antimicrobial peptide ubiquicidin 29-41 (UBI 29-41, TGRAKRRMQYNRR) is a promising targeting vector for detecting bacterial infection. A novel UBI 29-41 derivative modified with 1,4,7-triazacyclononane-1,4,7-triacetic acid (NOTA) on the amino side chain of lysine was synthesized and radiolabelled with Al18F, yielding [18F]AlF-NOTA-UBI 29-41. The novel PET tracer maintained good in vitro stability in saline at room temperature and in mouse serum at 37 °C. In vitro bacterial binding experiments indicated that the tracer bound specifically to Staphylococcus aureus. In biodistribution studies, a significant difference in the uptake of [18F]AlF-NOTA-UBI 29-41 between infected muscle and inflamed muscle was observed. A PET imaging study in mouse models of bacterial infection and sterile inflammation showed clear accumulation at the infection site, suggesting that the complex is a potential PET tracer for distinguishing bacterial infection from sterile inflammation.
Affiliation(s)
- Yuhao Jiang, Qianna Wang, Guangxing Yin, Peiwen Han, Junbo Zhang: Key Laboratory of Radiopharmaceuticals of Ministry of Education, NMPA Key Laboratory for Research and Evaluation of Radiopharmaceuticals (National Medical Product Administration), College of Chemistry, Beijing Normal University, Beijing, 100875, China
- Junhong Feng: Key Laboratory of Radiopharmaceuticals of Ministry of Education, NMPA Key Laboratory for Research and Evaluation of Radiopharmaceuticals (National Medical Product Administration), College of Chemistry, Beijing Normal University, Beijing, 100875, China; Department of Nuclear Technology and Application, China Institute of Atomic Energy, Beijing, 102413, China
- Qing Ruan: Key Laboratory of Radiopharmaceuticals of Ministry of Education, NMPA Key Laboratory for Research and Evaluation of Radiopharmaceuticals (National Medical Product Administration), College of Chemistry, Beijing Normal University, Beijing, 100875, China; Key Laboratory of Beam Technology of the Ministry of Education, School of Physics and Astronomy, Beijing Normal University, Beijing, 100875, China
2. Zhang C, Gao X, Zheng X, Xie J, Feng G, Bao Y, Gu P, He C, Wang R, Tian J. A fully automated, expert-perceptive image quality assessment system for whole-body [18F]FDG PET/CT. EJNMMI Res 2025;15:42. PMID: 40249445. PMCID: PMC12008089. DOI: 10.1186/s13550-025-01238-2.
Abstract
BACKGROUND The quality of clinical PET/CT images is critical for both accurate diagnosis and image-based research. However, current image quality assessment (IQA) methods predominantly rely on handcrafted features and region-specific analyses, thereby limiting automation in whole-body and multicenter evaluations. This study aims to develop an expert-perceptive deep learning-based IQA system for [18F]FDG PET/CT to tackle the lack of automated, interpretable assessments of clinical whole-body PET/CT image quality. METHODS This retrospective multicenter study included clinical whole-body [18F]FDG PET/CT scans from 718 patients. Automated identification and localization algorithms were applied to select predefined pairs of PET and CT slices from whole-body images. Fifteen experienced experts, trained to conduct blinded slice-level subjective assessments, provided average visual scores as reference standards. Using the MANIQA framework, the developed IQA model integrates the Vision Transformer, Transposed Attention, and Scale Swin Transformer Blocks to categorize PET and CT images into five quality classes. The model's correlation, consistency, and accuracy with expert evaluations on both PET and CT test sets were statistically analysed to assess the system's IQA performance. Additionally, the model's ability to distinguish high-quality images was evaluated using receiver operating characteristic (ROC) curves. RESULTS The IQA model demonstrated high accuracy in predicting image quality categories and showed strong concordance with expert evaluations of PET/CT image quality. In predicting slice-level image quality across all body regions, the model achieved an average accuracy of 0.832 for PET and 0.902 for CT. The model's scores showed substantial agreement with expert assessments, achieving average Spearman coefficients (ρ) of 0.891 for PET and 0.624 for CT, while the average Intraclass Correlation Coefficient (ICC) reached 0.953 for PET and 0.92 for CT. The PET IQA model demonstrated strong discriminative performance, achieving an area under the curve (AUC) of ≥ 0.88 for both the thoracic and abdominal regions. CONCLUSIONS This fully automated IQA system provides a robust and comprehensive framework for the objective evaluation of clinical image quality. Furthermore, it demonstrates significant potential as an impartial, expert-level tool for standardised multicenter clinical IQA.
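The agreement statistics reported in this abstract (accuracy against expert quality classes, Spearman ρ, ICC, and ROC AUC) can be computed with standard tools. Below is a minimal, illustrative Python sketch, not the authors' code: the scores are hypothetical, ICC(2,1) is implemented from the Shrout-Fleiss two-way random-effects formulas, and "high quality" is assumed here to mean an expert mean score of 4 or above.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import accuracy_score, roc_auc_score

def icc_2_1(ratings: np.ndarray) -> float:
    """Shrout-Fleiss ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings has shape (n_slices, n_raters); here the two 'raters' are model vs. expert mean."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    ss_err = ((ratings - grand) ** 2).sum() - (n - 1) * ms_rows - (k - 1) * ms_cols
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical per-slice scores on a 1-5 quality scale.
expert_mean = np.array([4.6, 3.1, 2.4, 4.9, 1.8, 3.7])   # reference standard
model_score = np.array([4.4, 3.3, 2.9, 4.8, 2.1, 3.5])   # continuous model output
model_class = np.rint(model_score).astype(int)            # 5-class prediction

rho, _ = spearmanr(expert_mean, model_score)
icc = icc_2_1(np.column_stack([expert_mean, model_score]))
acc = accuracy_score(np.rint(expert_mean).astype(int), model_class)
auc = roc_auc_score((expert_mean >= 4).astype(int), model_score)  # assumed "high quality" cut-off
print(f"Spearman rho={rho:.3f}, ICC(2,1)={icc:.3f}, accuracy={acc:.3f}, AUC={auc:.3f}")
```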
Affiliation(s)
- Cong Zhang: Medical School of Chinese PLA, Beijing, China; Department of Nuclear Medicine, The First Medical Center of Chinese PLA General Hospital, Beijing, China
- Xin Gao, Gang Feng: Shanghai Universal Medical Imaging Diagnostic Center, Shanghai, China
- Xuebin Zheng, Jun Xie, Yunchao Bao, Pengchen Gu, Chuan He: Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China
- Ruimin Wang, Jiahe Tian: Medical School of Chinese PLA, Beijing, China
3. Qiao W, Wang T, Yi H, Li X, Lv Y, Xi C, Wu R, Wang Y, Yu Y, Xing Y, Zhao J. Impact of a deep progressive reconstruction algorithm on low-dose or fast-scan PET image quality and Deauville score in patients with lymphoma. EJNMMI Phys 2025;12:33. PMID: 40169444. PMCID: PMC11961783. DOI: 10.1186/s40658-025-00739-2.
Abstract
BACKGROUND A deep progressive learning method for PET image reconstruction, named deep progressive reconstruction (DPR), was developed and presented in previous work. A previous study showed that DPR with one-third of the acquisition duration can maintain image quality comparable to OSEM at the standard dose (3.7 MBq/kg), and subsequent studies showed that the administered activity of 18F-FDG can be reduced by up to two-thirds in a real-world deployment with DPR. The aim of this study was to assess the impact of DPR on the Deauville score (DS) and the clinical interpretation of PET/CT in patients with lymphoma. METHODS A total of 87 lymphoma patients (age, 45.1 ± 14.9 years) who underwent 18F-FDG PET imaging for interim or post-treatment follow-up from November 2020 to February 2024 were prospectively enrolled. The patients were randomly assigned to two groups: a 1/3 standard dose group and a standard dose group. Forty-four patients were injected with 1/3 of the standard dose (1.23 MBq/kg), scanned for 6 min per bed position, and reconstructed with ordered-subsets expectation maximization (OSEM) at 6 min per bed (OSEM_6 min_1/3), OSEM_2 min_1/3, and DPR_2 min_1/3. Forty-three patients were scanned according to the standard protocol (3.7 MBq/kg) and reconstructed with OSEM at 2 min per bed (OSEM_2 min_full), OSEM_40 s_full, and DPR_40 s_full. A conventional 5-point Likert scale analysis was performed, and the DS for lymphoma was determined in each group. The Wilcoxon signed-rank test was used to compare the mean liver SUVmax and mediastinal blood pool (MBP) SUVmax in each group; Likert scores and DS were also compared using the Wilcoxon signed-rank test. RESULTS Patients in the OSEM_6 min_1/3 and DPR_2 min_1/3 groups showed good image quality, with Likert scores of 5 (5, 5) and 5 (4, 5), as did those in the OSEM_2 min_full and DPR_40 s_full groups. No significant difference was found between OSEM_6 min_1/3 and DPR_2 min_1/3 in liver SUVmax and MBP SUVmax (P = 0.452 and 0.430), nor between OSEM_2 min_full and DPR_40 s_full (P = 0.105 and 0.638). No significant difference was found between OSEM_6 min_1/3 and DPR_2 min_1/3 in lesion SUVmax (P = 0.080), whereas lesion SUVmax differed significantly between OSEM_2 min_full and DPR_40 s_full (P = 0.027). The DS was fully concordant (100%) between OSEM_6 min_1/3 and DPR_2 min_1/3, and between OSEM_2 min_full and DPR_40 s_full. CONCLUSIONS DPR reconstruction is feasible for reducing the PET injected dose or scanning time while preserving image quality and the DS in patients with lymphoma undergoing interim or post-treatment follow-up.
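The statistical comparison described here (paired Wilcoxon signed-rank tests on SUVmax and concordance of Deauville scores between reconstructions) can be reproduced with SciPy. The sketch below uses hypothetical paired values, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired per-patient liver SUVmax for two reconstructions of the same scans.
suvmax_osem_6min = np.array([2.9, 3.1, 2.7, 3.4, 3.0, 2.8, 3.2, 2.6])
suvmax_dpr_2min  = np.array([3.0, 3.0, 2.8, 3.3, 3.1, 2.8, 3.1, 2.7])

stat, p = wilcoxon(suvmax_osem_6min, suvmax_dpr_2min)
print(f"Wilcoxon signed-rank: statistic={stat:.1f}, p={p:.3f}")

# Deauville score (1-5) concordance between the two reconstructions.
ds_osem = np.array([2, 3, 5, 1, 4, 2, 3, 5])
ds_dpr  = np.array([2, 3, 5, 1, 4, 2, 3, 5])
concordance = np.mean(ds_osem == ds_dpr) * 100
print(f"Deauville score concordance: {concordance:.0f}%")
```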
Affiliation(s)
- Wenli Qiao, Taisong Wang, Yan Xing, Jinhua Zhao: Department of Nuclear Medicine, Shanghai General Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, China
- Hongyuan Yi, Yang Lv, Chen Xi, Runze Wu, Ying Wang: United Imaging Healthcare Group Co., Ltd, Shanghai, China
- Xuebing Li: Jincheng People's Hospital, Shanxi, China
- Ye Yu: Hospital Affairs Office, Shanghai General Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, China
4. Sun Q, Liu Z, Ding T, Shi C, Hou N, Sun C. Machine Learning-Based Objective Evaluation Model of CTPA Image Quality: A Multi-Center Study. Int J Gen Med 2025;18:997-1005. PMID: 40026813. PMCID: PMC11869754. DOI: 10.2147/ijgm.s510784.
Abstract
Purpose This study aimed to develop a machine learning-based model for the objective assessment of CT pulmonary angiography (CTPA) image quality. Patients and Methods A retrospective analysis was conducted using data from 99 patients who underwent CTPA between March 2022 and January 2023, together with two public datasets, FUMPE (21 cases) and CAD-PE (30 cases), for a total of 150 multicenter cases. The dataset was randomly split into a training set (105 cases) and a testing set (45 cases) at a 7:3 ratio. CT values and their standard deviations (SD) were measured in 11 regions of interest, and two radiologists independently and blindly scored the images in random order; the mean of their subjective scores, the mean opinion score (MOS) for image quality, served as the model's target output. Feature selection was performed using the Lasso algorithm and the Pearson correlation coefficient, and a random forest regression model was constructed. Model performance was evaluated using the mean squared error (MSE), coefficient of determination (R²), Pearson linear correlation coefficient (PLCC), Spearman rank correlation coefficient (SRCC), and Kendall rank correlation coefficient (KRCC). Results After feature selection, three key features were retained: the main pulmonary artery CT value, the ascending aorta CT value, and the difference in noise between the left and right main pulmonary arteries. On the testing set, the constructed random forest regression model achieved MSE, R², PLCC, SRCC, and KRCC values of 0.2001, 0.6695, 0.8682, 0.8694, and 0.7363, respectively. Conclusion This study developed an interpretable machine learning-based model for the objective assessment of CTPA image quality. The model offers effective support for improving the efficiency and precision of image quality control. However, the limited sample size may affect the model's generalizability, so further research with larger datasets is needed.
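The modelling pipeline summarised above (Lasso-based feature selection feeding a random forest regressor, evaluated with MSE, R², PLCC, SRCC, and KRCC) maps directly onto scikit-learn and SciPy. The sketch below is illustrative only: the feature matrix and target MOS are synthetic, and the hyperparameters are placeholders rather than the study's settings.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 22))                # e.g. CT values and SDs from 11 ROIs
y = 3 + 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.3, size=150)  # surrogate MOS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5, random_state=0)),   # Lasso-driven feature selection
    RandomForestRegressor(n_estimators=300, random_state=0),
)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

mse = mean_squared_error(y_te, pred)
r2 = r2_score(y_te, pred)
plcc, _ = pearsonr(y_te, pred)
srcc, _ = spearmanr(y_te, pred)
krcc, _ = kendalltau(y_te, pred)
print(f"MSE={mse:.3f} R2={r2:.3f} PLCC={plcc:.3f} SRCC={srcc:.3f} KRCC={krcc:.3f}")
```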
Affiliation(s)
- Qihang Sun, Zhongxiao Liu, Tao Ding, Cunjie Sun: Department of Medical Imaging, the Affiliated Hospital of Xuzhou Medical University, Xuzhou, People's Republic of China
- Changzhou Shi, Nailong Hou: School of Medical Imaging, Xuzhou Medical University, Xuzhou, People's Republic of China
5. Scalia IG, Pathangey G, Abdelnabi M, Ibrahim OH, Abdelfattah FE, Pietri MP, Ibrahim R, Farina JM, Banerjee I, Tamarappoo BK, Arsanjani R, Ayoub C. Applications of Artificial Intelligence for the Prediction and Diagnosis of Cancer Therapy-Related Cardiac Dysfunction in Oncology Patients. Cancers (Basel) 2025;17:605. PMID: 40002200. PMCID: PMC11852369. DOI: 10.3390/cancers17040605.
Abstract
Cardiovascular diseases and cancer are the leading causes of morbidity and mortality in modern society. Expanding cancer therapies that have improved prognosis may also be associated with cardiotoxicity, and longer survivorship is accompanied by an increasing prevalence of cardiovascular disease. As such, the field of cardio-oncology has been rapidly expanding, with the aim of identifying cardiotoxicity and cardiac disease early in patients who are receiving treatment for cancer or are in survivorship. Artificial intelligence is revolutionizing modern medicine with its ability to identify cardiac disease early. This article comprehensively reviews applications of artificial intelligence to electrocardiograms, echocardiography, cardiac magnetic resonance imaging, and nuclear imaging for predicting cardiac toxicity in the setting of cancer therapies, with a view to reducing early complications and cardiac side effects of treatments such as chemotherapy, radiation therapy, and immunotherapy.
Affiliation(s)
- Isabel G. Scalia, Girish Pathangey, Mahmoud Abdelnabi, Omar H. Ibrahim, Fatmaelzahraa E. Abdelfattah, Milagros Pereyra Pietri, Ramzi Ibrahim, Juan M. Farina, Balaji K. Tamarappoo, Reza Arsanjani, Chadi Ayoub: Department of Cardiovascular Diseases, Mayo Clinic, Phoenix, AZ 85054, USA
- Imon Banerjee: Department of Radiology, Mayo Clinic, Phoenix, AZ 85054, USA
6. Sun F, Zhang L, Tong Z. Application progress of artificial intelligence in tumor diagnosis and treatment. Front Artif Intell 2025;7:1487207. PMID: 39845097. PMCID: PMC11753238. DOI: 10.3389/frai.2024.1487207.
Abstract
The rapid advancement of artificial intelligence (AI) has introduced transformative opportunities in oncology, enhancing the precision and efficiency of tumor diagnosis and treatment. This review examines recent advancements in AI applications across tumor imaging diagnostics, pathological analysis, and treatment optimization, with a particular focus on breast cancer, lung cancer, and liver cancer. By synthesizing findings from peer-reviewed studies published over the past decade, this paper analyzes the role of AI in enhancing diagnostic accuracy, streamlining therapeutic decision-making, and personalizing treatment strategies. Additionally, this paper addresses challenges related to AI integration into clinical workflows and regulatory compliance. As AI continues to evolve, its applications in oncology promise further improvements in patient outcomes, though additional research is needed to address its limitations and ensure ethical and effective deployment.
Affiliation(s)
- Fan Sun, Li Zhang, Zhongsheng Tong: National Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China; Tianjin's Clinical Research Center for Cancer, Tianjin, China; Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Tianjin, China; Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
7. Zhang Q, Huang Z, Jin Y, Li W, Zheng H, Liang D, Hu Z. Total-Body PET/CT: A Role of Artificial Intelligence? Semin Nucl Med 2025;55:124-136. PMID: 39368911. DOI: 10.1053/j.semnuclmed.2024.09.002.
Abstract
The purpose of this paper is to provide an overview of the cutting-edge applications of artificial intelligence (AI) technology in total-body positron emission tomography/computed tomography (PET/CT) scanning technology and its profound impact on the field of medical imaging. The introduction of total-body PET/CT scanners marked a major breakthrough in medical imaging, as their superior sensitivity and ultralong axial fields of view allowed for high-quality PET images of the entire body to be obtained in a single scan, greatly enhancing the efficiency and accuracy of diagnoses. However, this advancement is accompanied by the challenges of increasing data volumes and data complexity levels, which pose severe challenges for traditional image processing and analysis methods. Given the excellent ability of AI technology to process massive and high-dimensional data, the combination of AI technology and ultrasensitive PET/CT can be considered a complementary match, opening a new path for rapidly improving the efficiency of the PET-based medical diagnosis process. Recently, AI technology has demonstrated extraordinary potential in several key areas related to total-body PET/CT, including radiation dose reductions, dynamic parametric imaging refinements, quantitative analysis accuracy improvements, and significant image quality enhancements. The accelerated adoption of AI in clinical practice is of particular interest and is directly driven by the rapid progress made by AI technologies in terms of interpretability; i.e., the decision-making processes of algorithms and models have become more transparent and understandable. In the future, we believe that AI technology will fundamentally reshape the use of PET/CT, not only playing a more critical role in clinical diagnoses but also facilitating the customization and implementation of personalized healthcare solutions, providing patients with safer, more accurate, and more efficient healthcare experiences.
Affiliation(s)
- Qiyang Zhang, Zhenxing Huang, Yuxi Jin, Wenbo Li: The Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hairong Zheng, Dong Liang, Zhanli Hu: The Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
8. Tang C, Eisenmenger LB, Rivera-Rivera L, Huo E, Junn JC, Kuner AD, Oechtering TH, Peret A, Starekova J, Johnson KM. Incorporating Radiologist Knowledge Into MRI Quality Metrics for Machine Learning Using Rank-Based Ratings. J Magn Reson Imaging 2024. PMID: 39690114. DOI: 10.1002/jmri.29672.
Abstract
BACKGROUND Deep learning (DL) often requires an image quality metric; however, widely used metrics are not designed for medical images. PURPOSE To develop an MRI-specific image quality metric using radiologists' image rankings and DL models. STUDY TYPE Retrospective. POPULATION A total of 19,344 rankings on 2916 unique image pairs from the NYU fastMRI Initiative neuro database were used to train the neural network-based image quality metrics, with an 80%/20% training/validation split and fivefold cross-validation. FIELD STRENGTH/SEQUENCE 1.5 T and 3 T T1, T1 postcontrast, T2, and FLuid Attenuated Inversion Recovery (FLAIR). ASSESSMENT Synthetically corrupted image pairs were ranked by radiologists (N = 7), with a subset also scoring images on a Likert scale (N = 2). DL models were trained to match the rankings using two architectures (EfficientNet and IQ-Net), with and without reference image subtraction, and compared to ranking based on mean squared error (MSE) and structural similarity (SSIM). The image quality DL models were then evaluated as alternatives to MSE and SSIM as optimization targets for DL denoising and reconstruction. STATISTICAL TESTS Radiologists' agreement was assessed by a percentage metric and quadratic weighted Cohen's kappa. Ranking accuracies were compared using repeated-measures analysis of variance. Reconstruction models trained with the IQ-Net score, MSE, and SSIM were compared by paired t test. P < 0.05 was considered significant. RESULTS Compared to direct Likert scoring, ranking produced a higher level of agreement between radiologists (70.4% vs. 25%). Image ranking remained subjective, with a high level of intraobserver agreement (94.9% ± 2.4%) and lower interobserver agreement (61.47% ± 5.51%). IQ-Net and EfficientNet accurately predicted rankings with a reference image (75.2% ± 1.3% and 79.2% ± 1.7%). However, EfficientNet produced images with artifacts and high MSE when used in denoising tasks, whereas IQ-Net-optimized networks performed well for both denoising and reconstruction tasks. DATA CONCLUSION Image quality networks can be trained from image rankings and used to optimize DL tasks. LEVEL OF EVIDENCE 3. TECHNICAL EFFICACY Stage 1.
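Training a quality network from pairwise radiologist rankings, as described here, is typically done with a margin-based ranking loss. The following PyTorch sketch is a schematic stand-in, not the paper's IQ-Net or EfficientNet implementation; the tiny network, margin, and batch are hypothetical.

```python
import torch
import torch.nn as nn

class TinyIQANet(nn.Module):
    """Toy stand-in for a quality network: maps an image to a scalar quality score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(8, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = TinyIQANet()
criterion = nn.MarginRankingLoss(margin=0.1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One toy batch: img_a was ranked better than img_b by the radiologists (target +1).
img_a = torch.randn(4, 1, 64, 64)
img_b = torch.randn(4, 1, 64, 64)
target = torch.ones(4)  # +1 means score(img_a) should exceed score(img_b)

optimizer.zero_grad()
loss = criterion(model(img_a), model(img_b), target)
loss.backward()
optimizer.step()
```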
Affiliation(s)
- Chenwei Tang: Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Laura B Eisenmenger, Anthony D Kuner, Anthony Peret, Jitka Starekova: Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Leonardo Rivera-Rivera: Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA; Department of Medicine, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Eugene Huo: Department of Radiology, University of California, San Francisco, California, USA
- Jacqueline C Junn: Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Thekla H Oechtering: Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA; Department of Radiology and Nuclear Medicine, Universität zu Lübeck, Lübeck, Germany
- Kevin M Johnson: Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA; Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
9. Amini M, Salimi Y, Hajianfar G, Mainta I, Hervier E, Sanaat A, Rahmim A, Shiri I, Zaidi H. Fully Automated Region-Specific Human-Perceptive-Equivalent Image Quality Assessment: Application to 18F-FDG PET Scans. Clin Nucl Med 2024;49:1079-1090. PMID: 39466652. DOI: 10.1097/rlu.0000000000005526.
Abstract
INTRODUCTION We propose a fully automated framework to conduct region-wise image quality assessment (IQA) on whole-body 18F-FDG PET scans. This framework (1) can be valuable in daily clinical image acquisition to instantly recognize low-quality scans for potential rescanning and/or image reconstruction, and (2) can make a significant impact on dataset collection for the development of artificial intelligence-driven 18F-FDG PET analysis models by rejecting low-quality images and those presenting with artifacts, toward building clean datasets. PATIENTS AND METHODS Two experienced nuclear medicine physicians separately evaluated the quality of 174 18F-FDG PET images from 87 patients, for each body region, on a 5-point Likert scale. The body regions included (1) the head and neck, including the brain; (2) the chest; (3) the chest-abdomen interval (diaphragmatic region); (4) the abdomen; and (5) the pelvis. Intrareader and interreader reproducibility of the quality scores were calculated using 39 randomly selected scans from the dataset. For binary classification, images were dichotomized into low quality versus high quality for physician quality scores ≤3 versus >3, respectively. Taking 18F-FDG PET/CT scans as input, the proposed fully automated framework applies two deep learning (DL) models to the CT images to perform region identification and whole-body contour extraction (excluding the extremities), then classifies PET regions as low or high quality. For classification, two mainstream artificial intelligence-driven approaches were investigated: machine learning (ML) from radiomic features and DL. All models were trained and evaluated on the scores attributed by each physician and on the average of the scores. Radiomics-ML and DL models were evaluated on the same test dataset in terms of the area under the curve (AUC), accuracy, sensitivity, and specificity, and compared using the DeLong test, with P values <0.05 regarded as statistically significant. RESULTS In the head and neck, chest, chest-abdomen interval, abdomen, and pelvis regions, the best models achieved AUC, accuracy, sensitivity, and specificity of [0.97, 0.95, 0.96, and 0.95], [0.85, 0.82, 0.87, and 0.76], [0.83, 0.76, 0.68, and 0.80], [0.73, 0.72, 0.64, and 0.77], and [0.72, 0.68, 0.70, and 0.67], respectively. In all regions, the models showed the highest performance when developed on the quality scores with higher intrareader reproducibility. Comparison of DL and radiomics-ML models did not show any statistically significant differences, though DL models showed overall improved trends. CONCLUSIONS We developed a fully automated, human-perceptive-equivalent model to conduct region-wise IQA of 18F-FDG PET images. Our analysis emphasizes the necessity of developing separate models for different body regions and of performing data annotation based on multiple experts' consensus in IQA studies.
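The binary (low- versus high-quality) evaluation described above reduces to standard classification metrics once the Likert scores are dichotomised at 3. A small illustrative sketch follows (hypothetical scores and probabilities; the DeLong comparison between models is omitted).

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical region-level data: expert Likert scores and model probabilities of "low quality".
likert = np.array([5, 4, 2, 3, 5, 1, 4, 2, 3, 5])
prob_low = np.array([0.08, 0.21, 0.83, 0.69, 0.12, 0.91, 0.25, 0.77, 0.58, 0.10])

y_true = (likert <= 3).astype(int)            # 1 = low quality (score <= 3), 0 = high quality
y_pred = (prob_low >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
auc = roc_auc_score(y_true, prob_low)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} acc={accuracy:.2f} AUC={auc:.2f}")
```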
Affiliation(s)
- Mehdi Amini, Yazdan Salimi, Ghasem Hajianfar, Ismini Mainta, Elsa Hervier, Amirhossein Sanaat, Isaac Shiri: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
10. Sun C, Salimi Y, Angeliki N, Boudabbous S, Zaidi H. An efficient dual-domain deep learning network for sparse-view CT reconstruction. Comput Methods Programs Biomed 2024;256:108376. PMID: 39173481. DOI: 10.1016/j.cmpb.2024.108376.
Abstract
BACKGROUND AND OBJECTIVE We developed an efficient deep learning-based dual-domain reconstruction method for sparse-view CT with few trainable parameters and comparable running time. We aimed to investigate the model's capability and clinical value by performing objective and subjective quality assessments using clinical CT projection data acquired on commercial scanners. METHODS We designed two lightweight networks, Sino-Net and Img-Net, to restore the projection and image signal from the DD-Net reconstructed images in the projection and image domains, respectively. The proposed network has few trainable parameters and comparable running time among dual-domain reconstruction networks and is easy to train end-to-end. We prospectively collected clinical thoraco-abdominal CT projection data acquired on a Siemens Biograph 128 Edge CT scanner to train and validate the proposed network. We then quantitatively evaluated the CT Hounsfield unit (HU) values in 21 organs and anatomic structures, such as the liver, aorta, and ribcage, analyzed the noise properties, and compared the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the reconstructed images. In addition, two radiologists conducted a subjective qualitative evaluation of the confidence and conspicuity of anatomic structures and the overall image quality using a 1-5 Likert scoring system. RESULTS Objective and subjective evaluations showed that the proposed algorithm achieves competitive results in eliminating noise and artifacts, restoring fine structural details, and recovering the edges and contours of anatomic structures using 384 views (a 1/6 sampling rate). The proposed method exhibited good computational cost performance on clinical projection data. CONCLUSION This work presents an efficient dual-domain learning network for sparse-view CT reconstruction from raw projection data acquired on a commercial scanner. The study also provides insights for designing an organ-based image quality assessment pipeline for sparse-view reconstruction tasks, potentially benefiting organ-specific dose reduction through sparse-view imaging.
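For reference, the objective SNR and CNR measures mentioned above are commonly computed from ROI statistics as in the sketch below. Definitions vary between studies (for example, which ROI supplies the noise term), so this is an assumption-laden illustration rather than the paper's exact formulas.

```python
import numpy as np

def snr(roi: np.ndarray) -> float:
    """Signal-to-noise ratio of an ROI: mean HU divided by its standard deviation."""
    return roi.mean() / roi.std()

def cnr(roi: np.ndarray, background: np.ndarray) -> float:
    """Contrast-to-noise ratio between an ROI and a background ROI (background SD as noise)."""
    return abs(roi.mean() - background.mean()) / background.std()

rng = np.random.default_rng(1)
liver = rng.normal(loc=60.0, scale=12.0, size=500)    # hypothetical liver HU samples
fat = rng.normal(loc=-100.0, scale=10.0, size=500)    # hypothetical background HU samples
print(f"SNR(liver)={snr(liver):.2f}, CNR(liver vs. fat)={cnr(liver, fat):.2f}")
```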
Affiliation(s)
- Chang Sun: Beijing University of Posts and Telecommunications, School of Information and Communication Engineering, 100876 Beijing, China; Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, CH-1211 Geneva, Switzerland
- Yazdan Salimi: Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, CH-1211 Geneva, Switzerland
- Neroladaki Angeliki, Sana Boudabbous: Geneva University Hospital, Division of Radiology, CH-1211 Geneva, Switzerland
- Habib Zaidi: Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, CH-1211 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark; University Research and Innovation Center, Óbuda University, Budapest, Hungary
11. Zanoni L, Fortunati E, Cuzzani G, Malizia C, Lodi F, Cabitza VS, Brusa I, Emiliani S, Assenza M, Antonacci F, Giunchi F, Degiovanni A, Ferrari M, Natali F, Galasso T, Bandelli GP, Civollani S, Candoli P, D'Errico A, Solli P, Fanti S, Nanni C. [68Ga]Ga-FAPI-46 PET/CT for Staging Suspected/Confirmed Lung Cancer: Results on the Surgical Cohort Within a Monocentric Prospective Trial. Pharmaceuticals (Basel) 2024;17:1468. PMID: 39598380. PMCID: PMC11597145. DOI: 10.3390/ph17111468.
Abstract
BACKGROUND/OBJECTIVES To evaluate the T- and N-staging diagnostic performance of [68Ga]Ga-FAPI-46 PET/CT (FAPI) in a surgical cohort with suspected/confirmed lung cancer. METHODS Patients were enrolled in a prospective monocentric trial (EudraCT: 2021-006570-23) to perform FAPI in addition to the conventional staging flow chart (including [18F]F-FDG PET/CT, FDG). For the current purpose, only surgical patients were included. PET semiquantitative parameters were measured for T and N: SUVmax and target-to-background ratios (using mediastinal blood pool, MBP; liver, L; and pulmonary parenchyma, P). Visual and semiquantitative T&N PET/CT performances were analysed per patient and per region for both tracers, with surgical histopathology as the standard of truth. RESULTS Sixty-three FAPI scans were performed in the 64 patients enrolled (26 May 2022-30 November 2023). A total of 50/63 patients underwent surgery and were included. Agreement (%) with histopathological T&N staging (AJCC 8th edition) was slightly in favour of FAPI (T, 66% vs. 58%; N, 78% vs. 70%), increasing when T and N were dichotomised (T, 92% vs. 80%; N, 78% vs. 72%). Using visual criteria, per-patient performance for T (n = 50) was higher with FAPI than with FDG. For N per patient (n = 46), sensitivity and NPV were slightly lower with FAPI. Among the 59 T regions surgically examined, malignancy was excluded in 6/59 (10%). FAPI showed (vs. FDG): sensitivity 85% (vs. 72%), specificity 67% (vs. 50%), PPV 96% (vs. 93%), NPV 33% (vs. 17%), accuracy 83% (vs. 69%). Among the 217 N stations surgically assessed (746 lymph nodes removed overall), only 15/217 (7%) were malignant; FAPI showed (vs. FDG): sensitivity 53% (vs. 60%), PPV 53% (vs. 26%), NPV 97% (vs. 97%), and significantly higher specificity (97% vs. 88%, p = 0.001) and accuracy (94% vs. 86%, p = 0.018). Semiquantitative PET parameters performed similarly, better for N (p < 0.001) than for T, slightly (although not significantly) in favour of FAPI over FDG. CONCLUSIONS In a surgical cohort with suspected/confirmed lung cancer, preoperative T&N staging performance was slightly better with FAPI than with FDG (except for suboptimal N sensitivity), and significantly better only for N (region-based) specificity and accuracy on visual assessment. The trial's conventional follow-up is still ongoing; future analyses are pending, including non-surgical findings and the theoretical impact on patient management.
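The per-region diagnostic metrics quoted above follow from a 2x2 confusion table. The sketch below shows the standard formulas with illustrative nodal-station counts chosen to be roughly consistent with the reported FAPI percentages; they are not the trial's raw data.

```python
def diagnostic_performance(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts only: 15 malignant of 217 nodal stations, split so the
# resulting percentages approximate the FAPI figures reported in the abstract.
fapi = diagnostic_performance(tp=8, fp=7, tn=195, fn=7)
print({k: round(v, 2) for k, v in fapi.items()})
```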
Affiliation(s)
- Lucia Zanoni, Emilia Fortunati, Claudio Malizia, Filippo Lodi, Veronica Serena Cabitza, Irene Brusa, Stefano Emiliani, Marta Assenza, Cristina Nanni: Nuclear Medicine, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Giulia Cuzzani: Nuclear Medicine, Alma Mater Studiorum University of Bologna, 40138 Bologna, Italy
- Filippo Antonacci, Piergiorgio Solli: Division of Thoracic Surgery, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Francesca Giunchi, Alessio Degiovanni, Antonietta D'Errico: Pathology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Marco Ferrari, Filippo Natali, Thomas Galasso, Gian Piero Bandelli, Piero Candoli: Interventional Pulmonology Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Simona Civollani: Department of Medical Physics, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Stefano Fanti: Nuclear Medicine, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy; Nuclear Medicine, Alma Mater Studiorum University of Bologna, 40138 Bologna, Italy
12. Salimi Y, Mansouri Z, Amini M, Mainta I, Zaidi H. Explainable AI for automated respiratory misalignment detection in PET/CT imaging. Phys Med Biol 2024;69:215036. PMID: 39419113. DOI: 10.1088/1361-6560/ad8857.
Abstract
Purpose. Positron emission tomography (PET) image quality can be affected by artifacts emanating from PET, computed tomography (CT), or misalignment between PET and CT images. Automated detection of misalignment artifacts can be helpful both in data curation and in facilitating the clinical workflow. This study aimed to develop an explainable machine learning approach to detect misalignment artifacts in PET/CT imaging. Approach. This study included 1216 PET/CT images. All images were visually reviewed and those with respiratory misalignment artifact (RMA) were identified. Using previously trained models, four organs (the lungs, liver, spleen, and heart) were delineated on the PET and CT images separately. Data were randomly split into cross-validation (80%) and test (20%) sets; the two segmentations performed on PET and CT images were then compared, and the comparison metrics were used as predictors for a random forest framework in a 10-fold scheme on the cross-validation data. The trained models were tested on the 20% test set. Performance was calculated in terms of specificity, sensitivity, F1-score, and area under the curve (AUC). Main results. Sensitivity, specificity, and AUC of 0.82, 0.85, and 0.91 were achieved in the ten-fold data split. F1-score, sensitivity, specificity, and AUC of 84.5 vs. 82.3, 83.9 vs. 83.8, 87.7 vs. 83.5, and 93.2 vs. 90.1 were achieved for the cross-validation vs. test set, respectively. The liver and lung were the most important organs selected after feature selection. Significance. We developed an automated pipeline that segments four organs from PET and CT images separately and uses the agreement between these segmentations to decide on the presence of a misalignment artifact. This methodology may follow the same logic as a reader detecting misalignment by comparing the contours of organs on PET and CT images. The proposed method can be used to clean large datasets or be integrated into a clinical scanner to flag artifactual cases.
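The pipeline described here compares PET-derived and CT-derived organ segmentations and feeds the comparison metrics to a random forest classifier. The sketch below illustrates that idea with a Dice overlap measure and synthetic features/labels; it is not the authors' implementation, and all names and values are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary organ masks (e.g. PET-derived vs. CT-derived)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

# Toy example of one overlap feature.
mask_pet = np.zeros((8, 8), dtype=bool); mask_pet[2:6, 2:6] = True
mask_ct = np.zeros((8, 8), dtype=bool); mask_ct[3:7, 2:6] = True
print(f"example Dice: {dice(mask_pet, mask_ct):.2f}")

# Hypothetical per-study feature table: one overlap feature per organ
# (lungs, liver, spleen, heart); label 1 = respiratory misalignment artifact present.
rng = np.random.default_rng(3)
n = 200
features = rng.uniform(0.4, 1.0, size=(n, 4))
labels = (features[:, :2].mean(axis=1) < 0.7).astype(int)   # surrogate rule for the toy data

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc_scores = cross_val_score(clf, features, labels, cv=10, scoring="roc_auc")
print(f"10-fold AUC: {auc_scores.mean():.3f} +/- {auc_scores.std():.3f}")
```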
Affiliation(s)
- Yazdan Salimi, Zahra Mansouri, Mehdi Amini, Ismini Mainta: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark; University Research and Innovation Center, Óbuda University, Budapest, Hungary
13. Yang T, Liu D, Zhang Z, Sa R, Guan F. Predicting T-Cell Lymphoma in Children From 18F-FDG PET-CT Imaging With Multiple Machine Learning Models. J Imaging Inform Med 2024;37:952-964. PMID: 38321311. PMCID: PMC11169166. DOI: 10.1007/s10278-024-01007-y.
Abstract
This study examined the feasibility of using radiomics models derived from 18F-FDG PET/CT imaging to screen for T-cell lymphoma in children with lymphoma. All patients had undergone 18F-FDG PET/CT scans. Lesions were extracted from PET/CT and randomly divided into training and validation sets. Two types of models were constructed: an SUV/CT-based model built from features extracted from standardized uptake value (SUV)-associated parameters and CT images, and a PET/CT-based model built from features derived from PET and CT images. Logistic regression (LR), linear support vector machine, support vector machine with the radial basis function kernel, neural networks, and adaptive boosting were used as classifiers in each model. In the training sets, 77 patients and 247 lesions were selected for building the models. In the validation sets, the PET/CT-based model performed better than the SUV/CT-based model in predicting T-cell lymphoma. At the patient level, LR showed the highest accuracy (0.779 [0.697, 0.860]), the highest area under the receiver operating characteristic curve (AUC; 0.863 [0.762, 0.963]), and preferable goodness-of-fit in the PET/CT-based model. At the lesion level, LR also showed the best performance, with an accuracy of 0.838 [0.741, 0.936], an AUC of 0.907 [0.839, 0.976], and preferable goodness-of-fit in the PET/CT-based model. 18F-FDG PET/CT-based radiomics models with different machine learning classifiers were able to screen for T-cell lymphoma in children with high accuracy, high AUC, and preferable goodness-of-fit, providing incremental value over SUV-associated features alone.
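Benchmarking several classifiers on radiomic features with cross-validated AUC, as done in this study, can be sketched with scikit-learn as follows; the feature matrix is synthetic and the classifier settings are placeholders rather than the study's tuned models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Surrogate radiomics matrix: rows = lesions, columns = PET/CT features.
X, y = make_classification(n_samples=247, n_features=40, n_informative=8, random_state=0)

classifiers = {
    "LR": LogisticRegression(max_iter=1000),
    "linear SVM": SVC(kernel="linear"),
    "RBF SVM": SVC(kernel="rbf"),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}

for name, clf in classifiers.items():
    model = make_pipeline(StandardScaler(), clf)
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:>14}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```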
Affiliation(s)
- Taiyu Yang, Zexu Zhang, Ri Sa, Feng Guan: Department of Nuclear Medicine, The First Hospital of Jilin University, 1# Xinmin St, Changchun, 130021, China
- Danyan Liu: Department of Radiology, The First Hospital of Jilin University, 1# Xinmin St, Changchun, 130021, China
14. Wu Y, Sun T, Ng YL, Liu J, Zhu X, Cheng Z, Xu B, Meng N, Zhou Y, Wang M. Clinical Implementation of Total-Body PET in China. J Nucl Med 2024;65:64S-71S. PMID: 38719242. DOI: 10.2967/jnumed.123.266977.
Abstract
Total-body (TB) PET/CT is a groundbreaking tool that has brought about a revolution in both clinical application and scientific research. The transformative impact of TB PET/CT in the realms of clinical practice and scientific exploration has been steadily unfolding since its introduction in 2018, with implications for its implementation within the health care landscape of China. TB PET/CT's exceptional sensitivity enables the acquisition of high-quality images in significantly reduced time frames. Clinical applications have underscored its effectiveness across various scenarios, emphasizing the capacity to personalize dosage, scan duration, and image quality to optimize patient outcomes. TB PET/CT's ability to perform dynamic scans with high temporal and spatial resolution and to perform parametric imaging facilitates the exploration of radiotracer biodistribution and kinetic parameters throughout the body. The comprehensive TB coverage offers opportunities to study interconnections among organs, enhancing our understanding of human physiology and pathology. These insights have the potential to benefit applications requiring holistic TB assessments. The standard topics outlined in The Journal of Nuclear Medicine were used to categorize the reviewed articles into three sections: current clinical applications, scan protocol design, and advanced topics. This article also examines the bottlenecks that impede the full use of TB PET in China, together with suggested solutions.
Affiliation(s)
- Yaping Wu, Nan Meng, Meiyun Wang: Department of Medical Imaging, Henan Provincial People's Hospital, Zhengzhou, China; People's Hospital of Zhengzhou University, Zhengzhou, China; Institute for Integrated Medical Science and Engineering, Henan Academy of Sciences, Zhengzhou, China
- Tao Sun: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yee Ling Ng, Yun Zhou: Central Research Institute, United Imaging Healthcare Group Co., Ltd., Shanghai, China
- Jianjun Liu: Department of Nuclear Medicine, RenJi Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Xiaohua Zhu: Department of Nuclear Medicine, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zhaoping Cheng: Department of Nuclear Medicine, First Affiliated Hospital of Shandong First Medical University and Shandong Provincial Qianfoshan Hospital, Jinan, China
- Baixuan Xu: Department of Nuclear Medicine, Chinese PLA General Hospital, Beijing, China