1. Wu R, Zhang Y, Huang P, Xie Y, Wang J, Wang S, Lin Q, Bai Y, Feng S, Cai N, Lu X. Prediction of Reactivation After Antivascular Endothelial Growth Factor Monotherapy for Retinopathy of Prematurity: Multimodal Machine Learning Model Study. J Med Internet Res 2025; 27:e60367. PMID: 40267476; PMCID: PMC12063557; DOI: 10.2196/60367.
Abstract
BACKGROUND: Retinopathy of prematurity (ROP) is the leading preventable cause of childhood blindness. A timely intravitreal injection of antivascular endothelial growth factor (anti-VEGF) is required to prevent retinal detachment with consequent vision impairment and loss. However, anti-VEGF has been reported to be associated with ROP reactivation, so an accurate prediction of reactivation after treatment is urgently needed.
OBJECTIVE: To develop and validate prediction models for reactivation after anti-VEGF intravitreal injection in infants with ROP using multimodal machine learning algorithms.
METHODS: Infants with ROP undergoing anti-VEGF treatment were recruited from 3 hospitals, and conventional machine learning, deep learning, and fusion models were constructed. Areas under the curve (AUCs), accuracy, sensitivity, and specificity were used to assess the performance of the prediction models.
RESULTS: A total of 239 cases treated with anti-VEGF were recruited, including 90 (37.66%) reactivation and 149 (62.34%) nonreactivation cases. The AUCs for the conventional machine learning model were 0.806 and 0.805 in the internal validation and test groups, respectively. The average AUC, sensitivity, and specificity of the deep learning model in the test group were 0.787, 0.800, and 0.570, respectively. The AUC, sensitivity, and specificity of the fusion model in the test group were 0.822, 0.800, and 0.686, respectively.
CONCLUSIONS: We constructed 3 prediction models for ROP reactivation, of which the fusion model achieved the best performance. Using this prediction model, treatment strategies for infants with ROP and post-treatment screening plans can be optimized.
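The abstract reports only performance figures, not implementation details, so the following is a minimal late-fusion sketch of the general idea: image-derived embeddings (e.g., from a pretrained CNN applied to fundus photographs) are concatenated with clinical variables and fed to a single classifier evaluated by AUC. All feature dimensions, variable names, and data below are invented placeholders, not the authors' pipeline.

```python
# Minimal late-fusion sketch (not the authors' code): concatenate image-derived
# embeddings with clinical variables, fit one classifier, and report a test AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 239 infants, a 128-d fundus-image embedding per case
# (e.g., from a pretrained CNN) plus a handful of clinical features.
n_cases = 239
img_embed = rng.normal(size=(n_cases, 128))      # image branch output
clinical = rng.normal(size=(n_cases, 6))         # gestational age, birth weight, etc.
reactivated = rng.integers(0, 2, size=n_cases)   # 1 = ROP reactivation

# Late fusion: standardize each modality, then concatenate the feature blocks.
fused = np.hstack([StandardScaler().fit_transform(img_embed),
                   StandardScaler().fit_transform(clinical)])

X_tr, X_te, y_tr, y_te = train_test_split(fused, reactivated, test_size=0.3,
                                          random_state=0, stratify=reactivated)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```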
Affiliation(s)
- Rong Wu: Department of Ophthalmology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Yu Zhang: Department of Ophthalmology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Peijie Huang: School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Yiying Xie: School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Jianxun Wang: Department of Pediatric Ophthalmology, Guangzhou Children's Hospital and Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Shuangyong Wang: Department of Ophthalmology, Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Qiuxia Lin: Department of Ophthalmology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Yichen Bai: Department of Ophthalmology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Songfu Feng: Department of Ophthalmology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Nian Cai: School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Xiaohe Lu: Department of Ophthalmology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
2. Despotovic V, Kim SY, Hau AC, Kakoichankava A, Klamminger GG, Borgmann FBK, Frauenknecht KB, Mittelbronn M, Nazarov PV. Glioma subtype classification from histopathological images using in-domain and out-of-domain transfer learning: An experimental study. Heliyon 2024; 10:e27515. PMID: 38562501; PMCID: PMC10982966; DOI: 10.1016/j.heliyon.2024.e27515.
Abstract
We provide in this paper a comprehensive comparison of transfer learning strategies and deep learning architectures for computer-aided classification of adult-type diffuse gliomas. We evaluate the generalizability of out-of-domain ImageNet representations for a target domain of histopathological images and study the impact of in-domain adaptation, using self-supervised and multi-task learning approaches to pretrain the models on medium-to-large-scale datasets of histopathological images. We furthermore propose a semi-supervised learning approach in which the fine-tuned models are used to predict labels for unannotated regions of whole-slide images (WSIs). The models are then retrained on the ground-truth labels together with the weak labels determined in the previous step, yielding superior performance compared with standard in-domain transfer learning (balanced accuracy of 96.91% and F1-score of 97.07%) while minimizing the pathologist's annotation effort. Finally, we provide a WSI-level visualization tool that generates heatmaps highlighting tumor areas, giving pathologists insight into the most informative parts of the WSI.
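As an illustration of the two ingredients described above, out-of-domain ImageNet initialization followed by a semi-supervised pseudo-labelling pass, here is a hedged PyTorch/torchvision sketch. The directory layout, class count, and confidence threshold are assumptions made for the example, not settings from the paper.

```python
# Sketch only: ImageNet (out-of-domain) transfer learning plus pseudo-labelling
# of unannotated tiles. Paths, the 3-class head, and the 0.95 threshold are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.models import resnet50, ResNet50_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
weights = ResNet50_Weights.IMAGENET1K_V1           # out-of-domain representation
model = resnet50(weights=weights)
model.fc = nn.Linear(model.fc.in_features, 3)      # e.g., 3 diffuse-glioma subtypes
model = model.to(device)

tf = weights.transforms()                          # preprocessing matched to the weights
labelled = DataLoader(datasets.ImageFolder("tiles/labelled", tf),
                      batch_size=32, shuffle=True)
# Unannotated tiles kept under a single dummy class folder so ImageFolder accepts them.
unlabelled = DataLoader(datasets.ImageFolder("tiles/unlabelled", tf), batch_size=32)

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_one_epoch(loader):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# 1) Standard transfer learning: fine-tune the ImageNet-initialised model on annotated tiles.
train_one_epoch(labelled)

# 2) Semi-supervised step: pseudo-label confident unannotated tiles, then retrain
#    on ground-truth plus weak labels (retraining loop omitted for brevity).
model.eval()
pseudo_labelled = []
with torch.no_grad():
    for x, _ in unlabelled:
        probs = model(x.to(device)).softmax(dim=1)
        conf, pred = probs.max(dim=1)
        keep = (conf > 0.95).cpu()                 # confidence threshold (assumed)
        pseudo_labelled.extend(zip(x[keep], pred.cpu()[keep]))
```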
Affiliation(s)
- Vladimir Despotovic: Bioinformatics Platform, Department of Medical Informatics, Luxembourg Institute of Health, Strassen, Luxembourg
- Sang-Yoon Kim: Bioinformatics Platform, Department of Medical Informatics, Luxembourg Institute of Health, Strassen, Luxembourg
- Ann-Christin Hau: Dr. Senckenberg Institute of Neurooncology, University Hospital Frankfurt, Frankfurt am Main, Germany; Edinger Institute, Institute of Neurology, Goethe University, Frankfurt am Main, Germany; Frankfurt Cancer Institute, Goethe University, Frankfurt am Main, Germany; University Cancer Center Frankfurt, Frankfurt am Main, Germany; University Hospital, Goethe University, Frankfurt am Main, Germany; Laboratoire national de santé, National Center of Pathology, Dudelange, Luxembourg
- Aliaksandra Kakoichankava: Multi-Omics Data Science group, Department of Cancer Research, Luxembourg Institute of Health, Strassen, Luxembourg
- Gilbert Georg Klamminger: Luxembourg Centre of Neuropathology, Dudelange, Luxembourg; Klinik für Frauenheilkunde, Geburtshilfe und Reproduktionsmedizin, Saarland University, Homburg, Germany
- Felix Bruno Kleine Borgmann: Luxembourg Centre of Neuropathology, Dudelange, Luxembourg; Department of Cancer Research, Luxembourg Institute of Health, Strassen, Luxembourg; Hôpitaux Robert Schuman, Kirchberg, Luxembourg
- Katrin B.M. Frauenknecht: Laboratoire national de santé, National Center of Pathology, Dudelange, Luxembourg; Luxembourg Centre of Neuropathology, Dudelange, Luxembourg
- Michel Mittelbronn: Laboratoire national de santé, National Center of Pathology, Dudelange, Luxembourg; Luxembourg Centre of Neuropathology, Dudelange, Luxembourg; Department of Cancer Research, Luxembourg Institute of Health, Strassen, Luxembourg; Luxembourg Centre for Systems Biomedicine, University of Luxembourg, Belval, Luxembourg; Department of Life Sciences and Medicine, University of Luxembourg, Esch-sur-Alzette, Luxembourg; Faculty of Science, Technology and Medicine, University of Luxembourg, Esch-sur-Alzette, Luxembourg
- Petr V. Nazarov: Bioinformatics Platform, Department of Medical Informatics, Luxembourg Institute of Health, Strassen, Luxembourg; Multi-Omics Data Science group, Department of Cancer Research, Luxembourg Institute of Health, Strassen, Luxembourg
3. Pitarch C, Ungan G, Julià-Sapé M, Vellido A. Advances in the Use of Deep Learning for the Analysis of Magnetic Resonance Image in Neuro-Oncology. Cancers (Basel) 2024; 16:300. PMID: 38254790; PMCID: PMC10814384; DOI: 10.3390/cancers16020300.
Abstract
Machine Learning is entering a phase of maturity, but its medical applications still lag behind in terms of practical use. The field of oncological radiology (and neuro-oncology in particular) is at the forefront of these developments, now boosted by the success of Deep Learning methods for the analysis of medical images. This paper reviews in detail some of the most recent advances in the use of Deep Learning in this field, ranging from the broader topic of Machine-Learning-based analytical pipelines to specific applications of Deep Learning in neuro-oncology, including its use in the groundbreaking field of ultra-low-field magnetic resonance imaging.
Affiliation(s)
- Carla Pitarch: Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain; Eurecat, Digital Health Unit, Technology Centre of Catalonia, 08005 Barcelona, Spain
- Gulnur Ungan: Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain; Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Margarida Julià-Sapé: Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain; Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Alfredo Vellido: Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain; Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
4. Herr J, Stoyanova R, Mellon EA. Convolutional Neural Networks for Glioma Segmentation and Prognosis: A Systematic Review. Crit Rev Oncog 2024; 29:33-65. PMID: 38683153; DOI: 10.1615/critrevoncog.2023050852.
Abstract
Deep learning (DL) is poised to redefine the way medical images are processed and analyzed. Convolutional neural networks (CNNs), a specific type of DL architecture, are exceptional for high-throughput processing, allowing for the effective extraction of relevant diagnostic patterns from large volumes of complex visual data. This technology has garnered substantial interest in the field of neuro-oncology as a promising tool to enhance medical imaging throughput and analysis. A multitude of methods harnessing MRI-based CNNs have been proposed for brain tumor segmentation, classification, and prognosis prediction. They are often applied to gliomas, the most common primary brain cancer, to classify subtypes with the goal of guiding therapy decisions. Additionally, the difficulty of repeating brain biopsies to evaluate treatment response, in the setting of often confusing imaging findings, provides a unique niche in which CNNs can help distinguish treatment response in gliomas. For example, glioblastoma, the most aggressive type of brain cancer, can grow because of poor treatment response, can appear to grow acutely because of treatment-related inflammation as the tumor dies (pseudo-progression), or can falsely appear to be regrowing after treatment as a result of brain damage from radiation (radiation necrosis). CNNs are being applied to resolve this diagnostic dilemma. This review provides a detailed synthesis of recent DL methods and applications for intratumor segmentation, glioma classification, and prognosis prediction. Furthermore, this review discusses the future direction of MRI-based CNNs in the field of neuro-oncology and challenges in model interpretability, data availability, and computational efficiency.
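Because the review centers on CNN-based tumor segmentation, a toy encoder-decoder sketch with a soft Dice loss may help make the setup concrete; the architecture, slice size, and data below are invented for illustration and do not correspond to any model covered in the review.

```python
# Toy encoder-decoder sketch for binary tumor segmentation on 2D MRI slices,
# trained with a soft Dice loss; illustrative only.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(                       # downsample by 2
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.dec = nn.Sequential(                       # upsample back to input size
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1))                        # 1-channel tumor logit map

    def forward(self, x):
        return self.dec(self.enc(x))

def soft_dice_loss(logits, target, eps=1e-6):
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(2, 3))
    union = prob.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return 1 - ((2 * inter + eps) / (union + eps)).mean()

# One illustrative optimisation step on random stand-in data (240x240 slices).
model = TinySegNet()
x = torch.randn(2, 1, 240, 240)                         # e.g., FLAIR slices
y = (torch.rand(2, 1, 240, 240) > 0.9).float()          # fake tumor masks
loss = soft_dice_loss(model(x), y)
loss.backward()
print("Dice loss:", loss.item())
```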
Affiliation(s)
- Radka Stoyanova: Department of Radiation Oncology, University of Miami Miller School of Medicine, Sylvester Comprehensive Cancer Center, Miami, FL 33136, USA
- Eric Albert Mellon: Department of Radiation Oncology, University of Miami Miller School of Medicine, Sylvester Comprehensive Cancer Center, Miami, FL 33136, USA
5. Kim GJ, Lee T, Ahn S, Uh Y, Kim SH. Efficient diagnosis of IDH-mutant gliomas: 1p/19qNET assesses 1p/19q codeletion status using weakly-supervised learning. NPJ Precis Oncol 2023; 7:94. PMID: 37717080; PMCID: PMC10505231; DOI: 10.1038/s41698-023-00450-4.
Abstract
Accurate identification of molecular alterations in gliomas is crucial for their diagnosis and treatment. Although fluorescence in situ hybridization (FISH) allows for the observation of diverse and heterogeneous alterations, it is inherently time-consuming and challenging due to the limitations of the molecular method. Here, we report the development of 1p/19qNET, an advanced deep-learning network designed to predict fold change values of the 1p and 19q chromosomes and classify isocitrate dehydrogenase (IDH)-mutant gliomas from whole-slide images. We trained 1p/19qNET on next-generation sequencing data from a discovery set (DS) of 288 patients and utilized a weakly-supervised approach with slide-level labels to reduce bias and workload. We then performed validation on an independent validation set (IVS) comprising 385 samples from The Cancer Genome Atlas, a comprehensive cancer genomics resource. 1p/19qNET outperformed traditional FISH, achieving R2 values of 0.589 and 0.547 for the 1p and 19q arms, respectively. As an IDH-mutant glioma classifier, 1p/19qNET attained AUCs of 0.930 and 0.837 in the DS and IVS, respectively. The weakly-supervised nature of 1p/19qNET provides explainable heatmaps for the results. This study demonstrates the successful use of deep learning for precise determination of 1p/19q codeletion status and classification of IDH-mutant gliomas as astrocytoma or oligodendroglioma. 1p/19qNET offers results comparable to FISH and provides informative spatial information. This approach has broader applications in tumor classification.
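The core mechanism, regressing slide-level targets from many tiles with only slide-level supervision, can be illustrated with a minimal attention-based multiple-instance learning sketch. This is not 1p/19qNET itself; the feature dimensions, attention design, and data are stand-ins chosen for the example, and the per-tile attention weights stand in for the explainable heatmap.

```python
# Minimal attention-based MIL sketch: regress slide-level 1p/19q fold-change
# values from a bag of tile embeddings using only slide-level labels.
import torch
import torch.nn as nn

class AttentionMILRegressor(nn.Module):
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.head = nn.Linear(feat_dim, 2)              # outputs: 1p and 19q fold change

    def forward(self, tiles):                           # tiles: (n_tiles, feat_dim)
        weights = torch.softmax(self.attn(tiles), dim=0)    # per-tile attention
        slide_vec = (weights * tiles).sum(dim=0)            # weighted pooling to slide level
        return self.head(slide_vec), weights.squeeze(-1)    # weights double as a heatmap

# One slide with 1,000 tile embeddings (e.g., from a pretrained encoder).
tiles = torch.randn(1000, 512)
target = torch.tensor([0.9, 0.55])                      # fake slide-level fold changes
model = AttentionMILRegressor()
pred, attn = model(tiles)
loss = nn.functional.mse_loss(pred, target)
loss.backward()
print(pred.detach(), "most attended tile:", attn.argmax().item())
```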
Affiliation(s)
- Gi Jeong Kim: Department of Pathology, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea; Department of Medicine, Yonsei University Graduate School, Seoul, Republic of Korea
- Tonghyun Lee: Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
- Sangjeong Ahn: Department of Pathology, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Republic of Korea
- Youngjung Uh: Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
- Se Hoon Kim: Department of Pathology, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
6. Sun W, Song C, Tang C, Pan C, Xue P, Fan J, Qiao Y. Performance of deep learning algorithms to distinguish high-grade glioma from low-grade glioma: A systematic review and meta-analysis. iScience 2023; 26:106815. PMID: 37250800; PMCID: PMC10209541; DOI: 10.1016/j.isci.2023.106815.
Abstract
This study aims to evaluate deep learning (DL) performance in differentiating low- and high-grade glioma. Online databases were searched for studies published from 1 January 2015 to 16 August 2022. A random-effects model was used for synthesis, based on pooled sensitivity (SE), specificity (SP), and area under the curve (AUC). Heterogeneity was estimated using the Higgins inconsistency index (I2). Thirty-three studies were ultimately included in the meta-analysis. The overall pooled SE and SP were 94% and 93%, with an AUC of 0.98. Heterogeneity in this field was considerable. Our evidence-based study shows that DL achieves high accuracy in glioma grading. Subgroup analysis reveals several limitations in this field: 1) diagnostic trials require a standard method for merging data for AI; 2) small sample sizes; 3) poor-quality image preprocessing; 4) nonstandard algorithm development; 5) nonstandard data reporting; 6) differing definitions of HGG and LGG; and 7) poor extrapolation.
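For readers unfamiliar with the pooling step, the following is a small sketch of DerSimonian-Laird random-effects pooling of per-study sensitivities on the logit scale, together with the Higgins I2 statistic; the study counts are fabricated for illustration and are not data from this meta-analysis.

```python
# DerSimonian-Laird random-effects pooling of sensitivities on the logit scale,
# with Cochran's Q and the Higgins I^2 heterogeneity statistic (toy data).
import numpy as np

# (true positives, false negatives) per study -- hypothetical values
tp = np.array([45, 80, 30, 60])
fn = np.array([5, 10, 6, 4])

sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn                       # variance of the logit of a proportion

# Fixed-effect weights and Cochran's Q
w = 1 / var
q = np.sum(w * (logit - np.sum(w * logit) / np.sum(w)) ** 2)
df = len(tp) - 1

# Between-study variance (tau^2) and I^2
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)
i2 = max(0.0, (q - df) / q) * 100

# Random-effects weights and pooled sensitivity (back-transformed)
w_re = 1 / (var + tau2)
pooled_logit = np.sum(w_re * logit) / np.sum(w_re)
pooled_sens = 1 / (1 + np.exp(-pooled_logit))
print(f"pooled sensitivity {pooled_sens:.3f}, I^2 {i2:.1f}%")
```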
Affiliation(s)
- Wanyi Sun: Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Cheng Song: School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Chao Tang: Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, China
- Chenghao Pan: Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Peng Xue: School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jinhu Fan: Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Youlin Qiao: School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
7. Steyaert S, Pizurica M, Nagaraj D, Khandelwal P, Hernandez-Boussard T, Gentles AJ, Gevaert O. Multimodal data fusion for cancer biomarker discovery with deep learning. Nat Mach Intell 2023; 5:351-362. PMID: 37693852; PMCID: PMC10484010; DOI: 10.1038/s42256-023-00633-5.
Abstract
Technological advances now make it possible to study a patient from multiple angles with high-dimensional, high-throughput, multi-scale biomedical data. In oncology, massive amounts of data are being generated, ranging from molecular and histopathology data to radiology and clinical records. The introduction of deep learning has significantly advanced the analysis of biomedical data. However, most approaches focus on single data modalities, leading to slow progress in methods for integrating complementary data types. Development of effective multimodal fusion approaches is becoming increasingly important, as a single modality may not be consistent or sufficient to capture the heterogeneity of complex diseases, tailor medical care, and improve personalised medicine. Many initiatives now focus on integrating these disparate modalities to unravel the biological processes involved in multifactorial diseases such as cancer. However, many obstacles remain, including a lack of usable data as well as methods for clinical validation and interpretation. Here, we cover these current challenges and reflect on opportunities through deep learning to tackle data sparsity and scarcity, multimodal interpretability, and standardisation of datasets.
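Since the perspective contrasts fusion strategies, a hedged sketch of one common option, intermediate fusion with modality-specific encoders and a shared prediction head, may be useful; the modalities, dimensions, and data below are invented for illustration and are not drawn from the paper.

```python
# Intermediate-fusion sketch: each modality gets its own small encoder and the
# learned representations are concatenated before a shared head (stand-in data).
import torch
import torch.nn as nn

class IntermediateFusion(nn.Module):
    def __init__(self, omics_dim=2000, img_dim=512, clin_dim=12, latent=64):
        super().__init__()
        self.omics_enc = nn.Sequential(nn.Linear(omics_dim, latent), nn.ReLU())
        self.img_enc = nn.Sequential(nn.Linear(img_dim, latent), nn.ReLU())
        self.clin_enc = nn.Sequential(nn.Linear(clin_dim, latent), nn.ReLU())
        self.head = nn.Linear(3 * latent, 1)            # e.g., risk score or biomarker status

    def forward(self, omics, img, clin):
        z = torch.cat([self.omics_enc(omics), self.img_enc(img),
                       self.clin_enc(clin)], dim=1)     # fused joint representation
        return self.head(z)

model = IntermediateFusion()
omics = torch.randn(8, 2000)    # expression profile
img = torch.randn(8, 512)       # histopathology / radiology embedding
clin = torch.randn(8, 12)       # clinical record features
out = model(omics, img, clin)
print(out.shape)                # (8, 1)
```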
Affiliation(s)
- Sandra Steyaert: Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University
- Marija Pizurica: Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University
- Tina Hernandez-Boussard: Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University; Department of Biomedical Data Science, Stanford University
- Andrew J Gentles: Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University; Department of Biomedical Data Science, Stanford University
- Olivier Gevaert: Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University; Department of Biomedical Data Science, Stanford University