1
Al-Rahbi A, Al-Mahrouqi O, Al-Saadi T. Uses of artificial intelligence in glioma: A systematic review. Medicine International 2024; 4:40. [PMID: 38827949] [PMCID: PMC11140312] [DOI: 10.3892/mi.2024.164]
Abstract
Glioma is the most prevalent type of primary brain tumor in adults. The use of artificial intelligence (AI) in glioma is increasing and has exhibited promising results. The present study performed a systematic review of the applications of AI in glioma with regard to diagnosis, grading, and the prediction of genotype, progression and treatment response. The aim was to demonstrate the main directions of recent applications of AI within the field of glioma and to highlight emerging challenges in integrating AI into clinical practice. A search of four databases (Scopus, PubMed, Wiley and Google Scholar) yielded a total of 42 articles specifically using AI in glioma and glioblastoma. The articles were retrieved and reviewed, and the data were summarized and analyzed. The majority of the articles were from the USA (n=18), followed by China (n=11). The number of articles increased by year, reaching a maximum in 2022. The majority of the articles studied glioma as opposed to glioblastoma. In terms of grading, most articles covered both low-grade glioma (LGG) and high-grade glioma (HGG) (n=23), followed by HGG/glioblastoma (n=13); three articles examined LGG only, and two did not specify the grade. The largest sample size among the included studies was 897 samples, reported by a single article. Despite the limitations and challenges facing AI, its use in glioma has increased in recent years with promising results, with applications ranging from diagnosis, grading and prognosis prediction to treatment and postoperative care.
Affiliation(s)
- Adham Al-Rahbi
- College of Medicine and Health Sciences, Sultan Qaboos University, Muscat 123, Sultanate of Oman
- Omar Al-Mahrouqi
- College of Medicine and Health Sciences, Sultan Qaboos University, Muscat 123, Sultanate of Oman
- Tariq Al-Saadi
- Department of Neurosurgery, Khoula Hospital, Muscat 123, Sultanate of Oman
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, Faculty of Medicine, McGill University, Montreal, QC H3A 2B4, Canada
2
Kolasa K, Admassu B, Hołownia-Voloskova M, Kędzior KJ, Poirrier JE, Perni S. Systematic reviews of machine learning in healthcare: a literature review. Expert Rev Pharmacoecon Outcomes Res 2024; 24:63-115. [PMID: 37955147] [DOI: 10.1080/14737167.2023.2279107]
Abstract
INTRODUCTION: The increasing availability of data and computing power has made machine learning (ML) a viable approach to faster, more efficient healthcare delivery. METHODS: A systematic literature review (SLR) of published SLRs evaluating ML applications in healthcare settings, published between 1 January 2010 and 27 March 2023, was conducted. RESULTS: In total, 220 SLRs covering 10,462 ML algorithms were reviewed. The main applications of AI in medicine related to clinical prediction and disease prognosis in oncology and neurology, with the use of imaging data. Accuracy, specificity, and sensitivity were provided in 56%, 28%, and 25% of SLRs, respectively. Internal and external validation were reported in 53% and less than 1% of cases, respectively. The most common modeling approach was neural networks (2,454 ML algorithms), followed by support vector machines and random forests/decision trees (1,578 and 1,522 ML algorithms, respectively). EXPERT OPINION: The review indicated considerable reporting gaps regarding ML performance and both internal and external validation. Greater accessibility to healthcare data for developers could ensure faster adoption of ML algorithms into clinical practice.
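For readers tallying which SLRs reported which metrics, the three performance measures counted above all derive from a binary confusion matrix. A minimal sketch (function name and counts are illustrative, not from any reviewed study):

```python
def binary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy, sensitivity (recall), and specificity from confusion-matrix counts."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
    }

# Example: 80 true positives, 10 false positives, 20 false negatives, 90 true negatives
metrics = binary_metrics(tp=80, fp=10, fn=20, tn=90)
```

Because sensitivity and specificity condition on different denominators, a model can score well on one while performing poorly on the other, which is why reporting only accuracy (as many of the reviewed SLRs do) is incomplete.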
Affiliation(s)
- Katarzyna Kolasa
- Division of Health Economics and Healthcare Management, Kozminski University, Warsaw, Poland
- Bisrat Admassu
- Division of Health Economics and Healthcare Management, Kozminski University, Warsaw, Poland
3
Sun W, Song C, Tang C, Pan C, Xue P, Fan J, Qiao Y. Performance of deep learning algorithms to distinguish high-grade glioma from low-grade glioma: A systematic review and meta-analysis. iScience 2023; 26:106815. [PMID: 37250800] [PMCID: PMC10209541] [DOI: 10.1016/j.isci.2023.106815]
Abstract
This study aims to evaluate deep learning (DL) performance in differentiating low- and high-grade glioma. Online databases were searched for studies published from 1 January 2015 until 16 August 2022. A random-effects model was used for synthesis, based on pooled sensitivity (SE), specificity (SP), and area under the curve (AUC). Heterogeneity was estimated using the Higgins inconsistency index (I²). Thirty-three studies were ultimately included in the meta-analysis. The overall pooled SE and SP were 94% and 93%, with an AUC of 0.98, although heterogeneity in this field was considerable. This evidence-based study shows that DL achieves high accuracy in glioma grading. Subgroup analysis revealed several limitations in the field: 1) diagnostic trials require a standard method of data merging for AI; 2) small sample sizes; 3) poor-quality image preprocessing; 4) non-standard algorithm development; 5) non-standard data reporting; 6) differing definitions of HGG and LGG; and 7) poor extrapolation.
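The Higgins I² reported above expresses the share of between-study variability in excess of chance, computed from Cochran's Q. A minimal sketch under the usual inverse-variance weighting (the study estimates and variances below are made up for illustration):

```python
def higgins_i2(estimates, variances):
    """Return Cochran's Q and Higgins' I2 (as a percentage, floored at 0)."""
    weights = [1.0 / v for v in variances]          # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Illustrative per-study sensitivities and their variances (a logit scale
# would be more usual in practice)
q, i2 = higgins_i2([0.94, 0.90, 0.97, 0.88], [0.001, 0.002, 0.001, 0.003])
```

Identical study estimates give Q = 0 and I² = 0; the larger Q exceeds its degrees of freedom, the greater the fraction of variability attributed to true heterogeneity rather than sampling error.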
Affiliation(s)
- Wanyi Sun
- Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Cheng Song
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Chao Tang
- Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, China
- Chenghao Pan
- Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Peng Xue
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jinhu Fan
- Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Youlin Qiao
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
4
Aboian M, Bousabarah K, Kazarian E, Zeevi T, Holler W, Merkaj S, Cassinelli Petersen G, Bahar R, Subramanian H, Sunku P, Schrickel E, Bhawnani J, Zawalich M, Mahajan A, Malhotra A, Payabvash S, Tocino I, Lin M, Westerhoff M. Clinical implementation of artificial intelligence in neuroradiology with development of a novel workflow-efficient picture archiving and communication system-based automated brain tumor segmentation and radiomic feature extraction. Front Neurosci 2022; 16:860208. [PMID: 36312024] [PMCID: PMC9606757] [DOI: 10.3389/fnins.2022.860208]
Abstract
Purpose: Personalized interpretation of medical images is critical for optimal patient care, but the tools currently available to physicians for quantitative, real-time analysis of patients' medical images are significantly limited. In this work, we describe a novel platform within PACS for volumetric analysis of images, enabling the development of large, expert-annotated datasets, which are critically needed for clinically meaningful AI algorithms, in parallel with the radiologist's reading. Specifically, we implemented a deep learning-based algorithm for automated brain tumor segmentation and radiomics extraction, and embedded it into PACS to accelerate a supervised, end-to-end workflow for image annotation and radiomic feature extraction. Materials and methods: An algorithm was trained to segment whole primary brain tumors on FLAIR images from the multi-institutional BraTS 2021 glioma dataset. The algorithm was validated on an internal dataset from Yale New Haven Health (YNHH) and compared, by Dice similarity coefficient (DSC), with radiologists' manual segmentations. The UNETR deep-learning model was embedded into the Visage 7 (Visage Imaging, Inc., San Diego, CA, United States) diagnostic workstation, where the automatically segmented brain tumor could be manually modified. PyRadiomics (Harvard Medical School, Boston, MA) was natively embedded into Visage 7 for feature extraction from the brain tumor segmentations. Results: UNETR brain tumor segmentation took 4 s on average, and the median DSC was 86%, similar to the published literature but lower than in the RSNA-ASNR-MICCAI BraTS 2021 challenge. Extraction of 106 radiomic features within PACS took 5.8 ± 0.01 s on average. The extracted radiomic features did not vary with the time of extraction or with whether they were extracted within or outside PACS. Segmentation and feature extraction can be performed before the radiologist opens the study; on opening the study in PACS, the radiologist can then verify the segmentation and thus annotate the study. Conclusion: Integration of image-processing algorithms for tumor auto-segmentation and feature extraction into PACS allows curation of large datasets of annotated medical images and can accelerate the translation of research into personalized medicine applications in the clinic. The ability to revise the AI segmentations with familiar clinical tools, with the segmentation and radiomic feature extraction tools natively embedded on the diagnostic workstation, accelerates the generation of ground-truth data.
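The Dice similarity coefficient used above to validate the auto-segmentations measures voxel-wise overlap between two binary masks, DSC = 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch (the arrays are toy stand-ins for FLAIR tumor masks, not data from the study):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A n B| / (|A| + |B|) for binary masks; 1.0 means perfect overlap."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 1-D "masks": 3 overlapping voxels out of 4 and 5 labeled voxels
pred_mask = np.array([1, 1, 1, 1, 0, 0, 0, 0])
truth_mask = np.array([0, 1, 1, 1, 1, 1, 0, 0])
dsc = dice_coefficient(pred_mask, truth_mask)
```

Unlike simple voxel accuracy, DSC ignores the (typically vast) true-negative background, which is why it is the standard overlap metric for tumor segmentation benchmarks such as BraTS.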
Affiliation(s)
- Mariam Aboian
- Department of Radiology and Biomedical Imaging, Brain Tumor Research Group, Yale School of Medicine, Yale University, New Haven, CT, United States
- Correspondence: Mariam Aboian
- Eve Kazarian
- Department of Radiology and Biomedical Imaging, Brain Tumor Research Group, Yale School of Medicine, Yale University, New Haven, CT, United States
- Tal Zeevi
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, CT, United States
- Sara Merkaj
- Department of Radiology and Biomedical Imaging, Brain Tumor Research Group, Yale School of Medicine, Yale University, New Haven, CT, United States
- Gabriel Cassinelli Petersen
- Department of Radiology and Biomedical Imaging, Brain Tumor Research Group, Yale School of Medicine, Yale University, New Haven, CT, United States
- Ryan Bahar
- Department of Radiology and Biomedical Imaging, Brain Tumor Research Group, Yale School of Medicine, Yale University, New Haven, CT, United States
- Harry Subramanian
- Department of Radiology and Biomedical Imaging, Brain Tumor Research Group, Yale School of Medicine, Yale University, New Haven, CT, United States
- Pranay Sunku
- Department of Radiology and Biomedical Imaging, Brain Tumor Research Group, Yale School of Medicine, Yale University, New Haven, CT, United States
- Elizabeth Schrickel
- Department of Radiology and Biomedical Imaging, Brain Tumor Research Group, Yale School of Medicine, Yale University, New Haven, CT, United States
- Jitendra Bhawnani
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, CT, United States
- Mathew Zawalich
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, CT, United States
- Amit Mahajan
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, CT, United States
- Ajay Malhotra
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, CT, United States
- Sam Payabvash
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, CT, United States
- Irena Tocino
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, CT, United States
- MingDe Lin
- Department of Radiology, Yale University and Visage Imaging, New Haven, CT, United States