1
Zhan Y, Hao Y, Wang X, Guo D. Advances of artificial intelligence in clinical application and scientific research of neuro-oncology: Current knowledge and future perspectives. Crit Rev Oncol Hematol 2025; 209:104682. [PMID: 40032186 DOI: 10.1016/j.critrevonc.2025.104682]
Abstract
Brain tumors are abnormal growths arising within brain tissue, comprising both primary neoplasms and metastatic lesions. Timely detection, precise staging, suitable treatment, and standardized management are of significant clinical importance for improving the survival of brain tumor patients. Artificial intelligence (AI), a discipline within computer science, is leveraging its robust capacity for information identification and integration to revolutionize traditional paradigms of oncology care, offering substantial potential for precision medicine. This article provides an overview of the current applications of AI in brain tumors, encompassing the primary AI technologies, their working mechanisms and workflows, the contributions of AI to brain tumor diagnosis and treatment, and the role of AI in brain tumor research, particularly in drug innovation and in revealing the tumor microenvironment. Finally, the paper addresses the existing challenges, potential solutions, and future application prospects. This review aims to enhance understanding of the application of AI in brain tumors and provide valuable insights for forthcoming clinical applications and scientific inquiries.
Affiliation(s)
- Yankun Zhan
- First People's Hospital of Linping District; Linping Campus, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 311100, China
- Yanying Hao
- First People's Hospital of Linping District; Linping Campus, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 311100, China
- Xiang Wang
- First People's Hospital of Linping District; Linping Campus, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 311100, China
- Duancheng Guo
- Cancer Institute, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
2
Hosseini SA, Shiri I, Ghaffarian P, Hajianfar G, Avval AH, Seyfi M, Servaes S, Rosa-Neto P, Zaidi H, Ay MR. The effect of harmonization on the variability of PET radiomic features extracted using various segmentation methods. Ann Nucl Med 2024; 38:493-507. [PMID: 38575814 PMCID: PMC11217131 DOI: 10.1007/s12149-024-01923-7]
Abstract
PURPOSE This study aimed to examine the robustness of positron emission tomography (PET) radiomic features extracted via different segmentation methods before and after ComBat harmonization in patients with non-small cell lung cancer (NSCLC). METHODS We included 120 patients (positive recurrence = 46 and negative recurrence = 74) referred for PET scanning as a routine part of their care. All patients had biopsy-proven NSCLC. Nine segmentation methods were applied to each image, including manual delineation, K-means (KM), watershed, fuzzy C-means, region growing, local active contour (LAC), and iterative thresholding (IT) with 40, 45, and 50% thresholds. Diverse image discretizations, both without a filter and with different wavelet decompositions, were applied to the PET images. Overall, 6741 radiomic features were extracted from each image (749 radiomic features from each segmented area). Non-parametric empirical Bayes (NPEB) ComBat harmonization was used to harmonize the features. A linear support vector classifier (LinearSVC) with L1 regularization was used for feature selection, and a support vector machine (SVM) classifier with fivefold nested cross-validation (StratifiedKFold, n_splits = 5) was used to predict recurrence in NSCLC patients and assess the impact of ComBat harmonization on the outcome. RESULTS Of the 749 extracted radiomic features, 206 (27%) and 389 (51%) showed excellent reliability (ICC ≥ 0.90) against segmentation method variation before and after NPEB ComBat harmonization, respectively. Overall, 39 features demonstrated poor reliability, which declined to 10 after ComBat harmonization. The feature set based on a 64 fixed bin width (without any filter) and wavelet (LLL) decomposition achieved the best robustness against the diverse segmentation techniques before and after ComBat harmonization. The first-order and GLRLM feature families showed the largest number of robust features before harmonization, and the first-order and NGTDM families after harmonization. In terms of predicting recurrence in NSCLC, the findings indicate that ComBat harmonization can significantly enhance machine learning outcomes, particularly improving the accuracy of watershed segmentation, which initially had fewer reliable features than manual contouring. Following ComBat harmonization, sensitivity and specificity increased substantially in the majority of cases. CONCLUSION Radiomic features are vulnerable to the choice of segmentation method. ComBat harmonization may be considered a solution to overcome the poor reliability of radiomic features.
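The feature-selection and classification pipeline described above maps naturally onto scikit-learn. The sketch below is illustrative only: it assumes a feature matrix that has already been harmonized (e.g., ComBat-corrected), and the data, hyperparameters, and scoring are placeholders rather than the authors' exact configuration.

```python
# Illustrative sketch (not the authors' exact pipeline): L1-regularized LinearSVC
# for feature selection followed by an SVM classifier, evaluated with stratified
# five-fold cross-validation. X is assumed to be an already-harmonized
# (e.g., ComBat-corrected) radiomic feature matrix; the data here are random placeholders.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 749))        # 120 patients x 749 radiomic features (placeholder)
y = np.r_[np.ones(46), np.zeros(74)]   # 46 recurrence-positive, 74 recurrence-negative

pipeline = make_pipeline(
    StandardScaler(),
    # The L1 penalty drives uninformative feature weights to zero; SelectFromModel keeps the rest
    SelectFromModel(LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000)),
    SVC(kernel="rbf"),
)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(pipeline, X, y, cv=cv, scoring="accuracy")
print(f"Five-fold accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```

Placing feature selection inside the pipeline ensures it is refit on each training fold, which is what makes the cross-validation effectively nested.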
Affiliation(s)
- Seyyed Ali Hosseini
- Translational Neuroimaging Laboratory, The McGill University Research Centre for Studies in Aging, Douglas Hospital, McGill University, Montréal, QC, Canada
- Department of Neurology and Neurosurgery, Faculty of Medicine, McGill University, Montréal, QC, Canada
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Pardis Ghaffarian
- Chronic Respiratory Diseases Research Center, National Research Institute of Tuberculosis and Lung Diseases (NRITLD), Shahid Beheshti University of Medical Sciences, Tehran, Iran
- PET/CT and Cyclotron Center, Masih Daneshvari Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Milad Seyfi
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
- Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Stijn Servaes
- Translational Neuroimaging Laboratory, The McGill University Research Centre for Studies in Aging, Douglas Hospital, McGill University, Montréal, QC, Canada
- Department of Neurology and Neurosurgery, Faculty of Medicine, McGill University, Montréal, QC, Canada
- Pedro Rosa-Neto
- Translational Neuroimaging Laboratory, The McGill University Research Centre for Studies in Aging, Douglas Hospital, McGill University, Montréal, QC, Canada
- Department of Neurology and Neurosurgery, Faculty of Medicine, McGill University, Montréal, QC, Canada
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, 500, Odense, Denmark
- University Research and Innovation Center, Óbuda University, Budapest, Hungary
- Mohammad Reza Ay
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
- Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
3
Yang Z, Hu Z, Ji H, Lafata K, Vaios E, Floyd S, Yin FF, Wang C. A neural ordinary differential equation model for visualizing deep neural network behaviors in multi-parametric MRI-based glioma segmentation. Med Phys 2023; 50:4825-4838. [PMID: 36840621 PMCID: PMC10440249 DOI: 10.1002/mp.16286]
Abstract
PURPOSE To develop a neural ordinary differential equation (ODE) model for visualizing deep neural network behavior during multi-parametric MRI-based glioma segmentation as a method to enhance deep learning explainability. METHODS By hypothesizing that deep feature extraction can be modeled as a spatiotemporally continuous process, we implemented a novel deep learning model, Neural ODE, in which deep feature extraction is governed by an ODE parameterized by a neural network. The dynamics of (1) MR images after interaction with the deep neural network and (2) segmentation formation can thus be visualized by solving the ODE. An accumulative contribution curve (ACC) was designed to quantitatively evaluate each MR image's utilization by the deep neural network toward the final segmentation results. The proposed Neural ODE model was demonstrated on 369 glioma patients with a four-modality multi-parametric MRI protocol: T1, contrast-enhanced T1 (T1-Ce), T2, and FLAIR. Three Neural ODE models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The key MRI modalities with significant utilization by the deep neural networks were identified based on ACC analysis. Segmentation results using only the key MRI modalities were compared to those using all four MRI modalities in terms of Dice coefficient, accuracy, sensitivity, and specificity. RESULTS All Neural ODE models successfully illustrated image dynamics as expected. ACC analysis identified T1-Ce as the only key modality for ET and TC segmentation, while both FLAIR and T2 were key modalities for WT segmentation. Compared to the U-Net results using all four MRI modalities, the Dice coefficients for ET (0.784→0.775), TC (0.760→0.758), and WT (0.841→0.837) using only the key modalities showed minimal, non-significant differences. Accuracy, sensitivity, and specificity results followed the same pattern. CONCLUSION The Neural ODE model offers a new tool for optimizing deep learning model inputs with enhanced explainability. The presented methodology can be generalized to other medical imaging deep learning applications.
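As a rough intuition for the approach, the continuous feature dynamics dh/dt = f(h, t) can be integrated with any ODE solver, and the intermediate states h(t) are exactly what gets visualized. The sketch below uses a plain fixed-step Euler solver and a toy convolutional field in PyTorch; the architecture, step count, and image size are illustrative assumptions, not the published model.

```python
# Minimal sketch of the neural-ODE idea: feature extraction is modeled as
# dh/dt = f(h, t) and integrated with a fixed-step Euler solver so that the
# intermediate states h(t) can be inspected and visualized. The network, step
# count, and image size below are placeholders, not the paper's configuration.
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Convolutional field f(h, t) governing the feature dynamics."""
    def __init__(self, channels: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.Tanh(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, t: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        return self.net(h)

def integrate(func: ODEFunc, h0: torch.Tensor, n_steps: int = 10):
    """Euler integration from t=0 to t=1; returns all intermediate states."""
    h, dt, states = h0, 1.0 / n_steps, [h0]
    for i in range(n_steps):
        t = torch.tensor(i * dt)
        h = h + dt * func(t, h)   # h_{k+1} = h_k + dt * f(h_k, t_k)
        states.append(h)
    return states

# Four MRI channels (e.g., T1, T1-Ce, T2, FLAIR) stacked as the model input
x = torch.randn(1, 4, 128, 128)
trajectory = integrate(ODEFunc(channels=4), x)
print(len(trajectory), trajectory[-1].shape)   # 11 states, torch.Size([1, 4, 128, 128])
```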
Affiliation(s)
- Zhenyu Yang
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China
- Zongsheng Hu
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China
- Hangjie Ji
- Department of Mathematics, North Carolina State University, Raleigh, North Carolina, USA
- Kyle Lafata
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Department of Radiology, Duke University, Durham, North Carolina, USA
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA
- Eugene Vaios
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Scott Floyd
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Fang-Fang Yin
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China
- Chunhao Wang
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
4
Zukotynski K, Black SE, Kuo PH, Bhan A, Adamo S, Scott CJM, Lam B, Masellis M, Kumar S, Fischer CE, Tartaglia MC, Lang AE, Tang-Wai DF, Freedman M, Vasdev N, Gaudet V. Exploratory Assessment of K-means Clustering to Classify 18F-Flutemetamol Brain PET as Positive or Negative. Clin Nucl Med 2021; 46:616-620. [PMID: 33883495 DOI: 10.1097/rlu.0000000000003668]
Abstract
RATIONALE We evaluated K-means clustering to classify amyloid brain PETs as positive or negative. PATIENTS AND METHODS Sixty-six participants (31 men, 35 women; age range, 52-81 years) were recruited through a multicenter observational study: 19 cognitively normal, 25 mild cognitive impairment, and 22 dementia (11 Alzheimer disease, 3 subcortical vascular cognitive impairment, and 8 Parkinson-Lewy Body spectrum disorder). As part of the neurocognitive and imaging evaluation, each participant had an 18F-flutemetamol (Vizamyl, GE Healthcare) brain PET. All studies were processed using Cortex ID software (General Electric Company, Boston, MA) to calculate SUV ratios in 19 regions of interest and clinically interpreted by 2 dual-certified radiologists/nuclear medicine physicians, using MIM software (MIM Software Inc, Cleveland, OH), blinded to the quantitative analysis, with final interpretation based on consensus. K-means clustering was retrospectively used to classify the studies from the quantitative data. RESULTS Based on clinical interpretation, 46 brain PETs were negative and 20 were positive for amyloid deposition. Of 19 cognitively normal participants, 1 (5%) had a positive 18F-flutemetamol brain PET. Of 25 participants with mild cognitive impairment, 9 (36%) had a positive 18F-flutemetamol brain PET. Of 22 participants with dementia, 10 (45%) had a positive 18F-flutemetamol brain PET; 7 of 11 participants with Alzheimer disease (64%), 1 of 3 participants with vascular cognitive impairment (33%), and 2 of 8 participants with Parkinson-Lewy Body spectrum disorder (25%) had a positive 18F-flutemetamol brain PET. Using clinical interpretation as the criterion standard, K-means clustering (K = 2) gave sensitivity of 95%, specificity of 98%, and accuracy of 97%. CONCLUSIONS K-means clustering may be a powerful algorithm for classifying amyloid brain PET.
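As an illustration of the clustering step, a two-cluster K-means can be run directly on the regional SUV ratio matrix, with the higher-uptake centroid interpreted as the amyloid-positive group. The sketch below uses scikit-learn and random placeholder SUVR values; the rule for labeling clusters is an assumption for demonstration, not the study's exact procedure.

```python
# Illustrative two-cluster K-means on regional SUV ratios: the cluster whose
# centroid shows higher mean uptake is treated as "amyloid-positive".
# The SUVR matrix below is random placeholder data, not study data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
suvr = rng.normal(loc=1.1, scale=0.2, size=(66, 19))   # 66 participants x 19 ROIs

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(suvr)
positive_cluster = km.cluster_centers_.mean(axis=1).argmax()   # higher-uptake centroid
predicted_positive = km.labels_ == positive_cluster
print(f"Classified positive: {predicted_positive.sum()} of {len(suvr)}")
```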
Affiliation(s)
- Phillip H Kuo
- Departments of Medical Imaging, Medicine, and Biomedical Engineering, University of Arizona Cancer Center, University of Arizona, Tucson, AZ
- Aparna Bhan
- LC Campbell Cognitive Neurology Research Unit, Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, University of Toronto
- Sabrina Adamo
- LC Campbell Cognitive Neurology Research Unit, Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, University of Toronto
- Christopher J M Scott
- LC Campbell Cognitive Neurology Research Unit, Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, University of Toronto
- Corinne E Fischer
- Keenan Research Centre for Biomedical Science, St Michael's Hospital, University of Toronto
- Vincent Gaudet
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada
5
Wu Y, Zhao Z, Wu W, Lin Y, Wang M. Automatic glioma segmentation based on adaptive superpixel. BMC Med Imaging 2019; 19:73. [PMID: 31443642 PMCID: PMC6708204 DOI: 10.1186/s12880-019-0369-6]
Abstract
Background Automatic glioma segmentation is of great significance for clinical practice. This study aims to propose an automatic superpixel-based method for glioma segmentation from T2-weighted magnetic resonance imaging. Methods The proposed method includes three main steps. First, we propose an adaptive superpixel generation algorithm based on the zero-parameter version of simple linear iterative clustering (ASLIC0). By automatically selecting the optimal number of superpixels, this algorithm produces a superpixel image with fewer superpixels and a better fit to the boundary of the region of interest (ROI). Second, we compose a training set by calculating statistical, texture, curvature, and fractal features for each superpixel. Third, a support vector machine (SVM) classifier is trained on the features from the second step. Results Experimental results on the Multimodal Brain Tumor Image Segmentation Benchmark 2017 (BraTS2017) show that the proposed method has good segmentation performance. The average Dice, Hausdorff distance, sensitivity, and specificity for the segmented tumor against the ground truth are 0.8492, 3.4697 pixels, 81.47%, and 99.64%, respectively. The proposed method is stable on both high- and low-grade glioma samples, and comparative experiments show superior performance. Conclusions The method provides a close match to expert delineation across all grades of glioma, offering a fast and reproducible approach to glioma segmentation.
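A minimal sketch of the superpixel-then-classify idea follows, assuming a recent scikit-image (slic with slic_zero=True and channel_axis=None for a 2D grayscale slice) and scikit-learn. The per-superpixel statistics stand in for the paper's statistical, texture, curvature, and fractal features, and the slice and labels are placeholders.

```python
# Rough sketch, under stated assumptions: SLIC0 superpixels on a 2D T2-weighted
# slice, simple per-superpixel intensity statistics, and an SVM that scores each
# superpixel as tumor vs. non-tumor. Data and features are illustrative only.
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

rng = np.random.default_rng(0)
slice_t2 = rng.random((128, 128))            # placeholder T2-weighted slice

# slic_zero=True enables the SLIC0 variant, which adapts compactness per superpixel
labels = slic(slice_t2, n_segments=200, slic_zero=True, channel_axis=None)

def superpixel_features(img: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Mean, std, min, max intensity per superpixel (a stand-in for the paper's
    statistical/texture/curvature/fractal features)."""
    feats = []
    for lab in np.unique(labels):
        vals = img[labels == lab]
        feats.append([vals.mean(), vals.std(), vals.min(), vals.max()])
    return np.asarray(feats)

X = superpixel_features(slice_t2, labels)
y = rng.integers(0, 2, size=len(X))          # placeholder tumor/non-tumor labels
clf = SVC(kernel="rbf").fit(X, y)
print("Predicted tumor superpixels:", int(clf.predict(X).sum()))
```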
Affiliation(s)
- Yaping Wu
- School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, 710049, Shaanxi, China
- Zhe Zhao
- Collaborative Innovation Center for Internet Healthcare & School of Software and Applied Technology, Zhengzhou University, Zhengzhou, 450052, Henan, China
- Weiguo Wu
- School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, 710049, Shaanxi, China
- Yusong Lin
- Collaborative Innovation Center for Internet Healthcare & School of Software and Applied Technology, Zhengzhou University, Zhengzhou, 450052, Henan, China
- Meiyun Wang
- Department of Radiology, Henan Provincial People's Hospital, Zhengzhou, 450003, Henan, China