1. Zhao Y, Li L, Yu X, Han K, Duan J, Liang D, Chai N, Li ZC. SurvGraph: A hybrid-graph attention network for survival prediction using whole slide pathological images in gastric cancer. Neural Netw 2025;189:107607. PMID: 40375420; DOI: 10.1016/j.neunet.2025.107607.
Abstract
Whole slide pathological images have shown significant potential for patient prognostication. Graph representation learning provides a robust framework for in-depth analysis of whole-slide images to construct predictive models. In this study, we introduce SurvGraph, an innovative graph-based deep learning network designed for gastric cancer survival prediction using whole slide pathological images. SurvGraph employs a hybrid graph construction approach that integrates multiple feature types, including color, texture, and deep learning features extracted from the pathological images to build node representations. SurvGraph utilizes a multi-head attention graph network, which performs survival prediction based on the graph structure. We evaluate the SurvGraph model on a large dataset of 708 gastric cancer patients from three independent cohorts for overall survival prediction. To assess the impact of various feature sets, we examine their performance when used individually and in combination. With five-fold cross-validation, our results demonstrate that the SurvGraph model achieves an average concordance index (C-index) of 0.706 with a standard deviation (SD) of 0.019. The proposed SurvGraph model has also attained a C-index of 0.708 (SD = 0.040) in the external testing set. In addition to baseline comparisons, we conducted a comprehensive benchmarking study comparing SurvGraph against established graph neural network architectures and multiple instance learning-based deep learning frameworks. The results indicate that the SurvGraph model outperforms the compared prediction models, suggesting its potential as a valuable tool for enhancing gastric cancer prognosis estimation.
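The concordance index reported in this entry can be reproduced for any set of per-patient risk scores; a minimal sketch using the lifelines package on synthetic data is shown below (the risk scores are placeholders standing in for, not taken from, SurvGraph's outputs).

```python
# Minimal sketch of concordance-index (C-index) evaluation for a survival model.
# Synthetic data only; not the authors' code.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 200
survival_months = rng.exponential(scale=24.0, size=n)      # observed follow-up times
event_observed = rng.integers(0, 2, size=n)                 # 1 = death, 0 = censored
risk_score = -survival_months + rng.normal(0, 10, size=n)   # higher risk ~ shorter survival

# lifelines expects a predicted score that increases with survival time,
# so the negative of the risk score is passed.
c_index = concordance_index(survival_months, -risk_score, event_observed)
print(f"C-index: {c_index:.3f}")
```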
Affiliation(s)
- Yuanshen Zhao: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, PR China
- Longsong Li: Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, PR China
- Xi Yu: Department of Gastroenterology, Longgang District Central Hospital of Shenzhen, Shenzhen, PR China
- Ke Han: Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, PR China
- Jingxian Duan: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, PR China; Pazhou Lab (Huangpu), Guangdong, PR China
- Dong Liang: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, PR China; The Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, State Key Laboratory of Biomedical Imaging Science and System, Shenzhen, PR China
- Ningli Chai: Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, PR China; Pazhou Lab (Huangpu), Guangdong, PR China
- Zhi-Cheng Li: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, PR China; The Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, State Key Laboratory of Biomedical Imaging Science and System, Shenzhen, PR China; University of Chinese Academy of Sciences, Beijing, PR China
2. Tian C, Xi Y, Ma Y, Chen C, Wu C, Ru K, Li W, Zhao M. Harnessing Deep Learning for Accurate Pathological Assessment of Brain Tumor Cell Types. Journal of Imaging Informatics in Medicine 2025;38:1098-1111. PMID: 39150595; PMCID: PMC11950525; DOI: 10.1007/s10278-024-01107-9.
Abstract
Primary diffuse central nervous system large B-cell lymphoma (CNS-pDLBCL) and high-grade glioma (HGG) often present similarly, clinically and on imaging, making differentiation challenging. This similarity can complicate pathologists' diagnostic efforts, yet accurately distinguishing between these conditions is crucial for guiding treatment decisions. This study leverages a deep learning model to classify brain tumor pathology images, addressing the common issue of limited medical imaging data. Instead of training a convolutional neural network (CNN) from scratch, we employ a pre-trained network for extracting deep features, which are then used by a support vector machine (SVM) for classification. Our evaluation shows that the Resnet50 (TL + SVM) model achieves a 97.4% accuracy, based on tenfold cross-validation on the test set. These results highlight the synergy between deep learning and traditional diagnostics, potentially setting a new standard for accuracy and efficiency in the pathological diagnosis of brain tumors.
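The transfer-learning pipeline described above (a frozen, pre-trained CNN used as a feature extractor feeding a classical SVM) follows a standard recipe; a hedged sketch using Keras and scikit-learn is shown below, with placeholder images, labels, and hyperparameters that are not the study's settings.

```python
# Sketch: pre-trained ResNet50 as a frozen feature extractor, SVM as the classifier.
# The arrays `images` and `labels` are random placeholders for illustration only.
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

images = np.random.rand(40, 224, 224, 3) * 255.0    # placeholder image batch
labels = np.random.randint(0, 2, size=40)            # placeholder class labels

# ImageNet-pretrained backbone without the classification head; global average
# pooling turns each image into a 2048-dimensional feature vector.
backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")
features = backbone.predict(preprocess_input(images), verbose=0)

# SVM on the deep features, evaluated with k-fold cross-validation.
svm = SVC(kernel="rbf", C=1.0)
scores = cross_val_score(svm, features, labels, cv=5)
print("Mean CV accuracy:", scores.mean())
```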
Affiliation(s)
- Chongxuan Tian: School of Control Science and Engineering, Shandong University, Jinan, Shandong, 250061, China
- Yue Xi: Shandong Provincial Hospital affiliated to Shandong First Medical University, Jinan, Shandong, China
- Yuting Ma: Shandong Provincial Hospital affiliated to Shandong First Medical University, Jinan, Shandong, China
- Cai Chen: Shandong Institute of Advanced Technology, Chinese Academy of Sciences, Jinan, Shandong, China
- Cong Wu: Shandong Provincial Hospital affiliated to Shandong First Medical University, Jinan, Shandong, China
- Kun Ru: Department of Pathology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong, China
- Wei Li: School of Control Science and Engineering, Shandong University, Jinan, Shandong, 250061, China
- Miaoqing Zhao: Department of Pathology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong, China
3. Vong CK, Wang A, Dragunow M, Park TIH, Shim V. Brain tumour histopathology through the lens of deep learning: A systematic review. Comput Biol Med 2025;186:109642. PMID: 39787663; DOI: 10.1016/j.compbiomed.2024.109642.
Abstract
PROBLEM Machine learning (ML)/deep learning (DL) techniques have been evolving to address increasingly complex diseases, but they have been used relatively little in Glioblastoma (GBM) histopathological studies, which could benefit greatly given the disease's complex pathogenesis. AIM To conduct a systematic review investigating how ML/DL techniques have influenced the progression of brain tumour histopathological research, particularly in GBM. METHODS 54 eligible studies were collected from the PubMed and ScienceDirect databases. From each study we extracted the types of brain tumour/s used, the types of -omics data combined with histopathological data, the origins of the data, the types of ML/DL and their training and evaluation methodologies, and the ML/DL task each study set out to perform, in order to identify trends in GBM-related ML/DL-based research. RESULTS Only 8 of the eligible GBM-related studies utilised ML/DL methodologies to gain deeper insights into GBM pathogenesis by contextualising histological data with -omics data; notably, these studies have been published more recently. The most popular ML/DL models used in GBM-related research are the SVM classifier and ResNet-based CNN architectures. Still, a considerable number of studies failed to state their training and evaluation methodologies clearly. CONCLUSION There is a growing trend towards using ML/DL approaches to uncover relationships between biological and histopathological data and bring new insights into GBM, thus pushing GBM research forward. Much work remains to be done to report ML/DL methodologies properly, to showcase the models' robustness and generalizability, and to ensure the models are reproducible.
Affiliation(s)
- Chun Kiet Vong: Auckland Bioengineering Institute, The University of Auckland, New Zealand; Centre for Brain Research, The University of Auckland, New Zealand
- Alan Wang: Auckland Bioengineering Institute, The University of Auckland, New Zealand; Centre for Brain Research, The University of Auckland, New Zealand; Faculty of Medical and Health Sciences, The University of Auckland, New Zealand
- Mike Dragunow: Centre for Brain Research, The University of Auckland, New Zealand; Department of Pharmacology, The Faculty of Medical and Health Sciences, The University of Auckland, New Zealand
- Thomas I-H Park: Centre for Brain Research, The University of Auckland, New Zealand; Department of Pharmacology, The Faculty of Medical and Health Sciences, The University of Auckland, New Zealand
- Vickie Shim: Auckland Bioengineering Institute, The University of Auckland, New Zealand
4. Amjad U, Raza A, Fahad M, Farid D, Akhunzada A, Abubakar M, Beenish H. Context aware machine learning techniques for brain tumor classification and detection - A review. Heliyon 2025;11:e41835. PMID: 39906822; PMCID: PMC11791217; DOI: 10.1016/j.heliyon.2025.e41835.
Abstract
Background Machine learning has tremendous potential in acute medical care, particularly in the field of precise medical diagnosis, prediction, and classification of brain tumors. Malignant gliomas, due to their aggressive growth and dismal prognosis, stand out among various brain tumor types. Recent advancements in understanding the genetic abnormalities that underlie these tumors have shed light on their histo-pathological and biological characteristics, which supports better classification and prognosis. Objectives This review aims to predict gene alterations and establish structured correlations among various tumor types, extending the prediction of genetic mutations and structures using the latest machine learning techniques. Specifically, it focuses on multi-modalities of Magnetic Resonance Imaging (MRI) and histopathology, utilizing Convolutional Neural Networks (CNN) for image processing and analysis. Methods The review encompasses the most recent developments in MRI and histology image processing methods across multiple tumor classes, including glioma, meningioma, pituitary, oligodendroglioma, and astrocytoma. It identifies challenges in tumor classification, segmentation, datasets, and modalities, employing various neural network architectures. A competitive analysis assesses the performance of CNNs. Furthermore, it applies K-means clustering to predict genetic structure, gene clusters, and molecular alterations of various types and grades of tumors, e.g., glioma, meningioma, pituitary, oligodendroglioma, and astrocytoma. Results CNN and KNN structures, with their ability to extract salient features from image-based information, prove effective in tumor classification and segmentation, surmounting challenges in image analysis. Competitive analysis reveals that CNNs outperform other algorithms on publicly available datasets, suggesting their potential for precise tumor diagnosis and treatment planning. Conclusion Machine learning, especially through CNN and SVM algorithms, demonstrates significant potential in the accurate diagnosis and classification of brain tumors based on imaging and histo-pathological data. Further advancements in this area hold promise for improving the accuracy and efficiency of intra-operative tumor diagnosis and treatment.
Affiliation(s)
- Usman Amjad: NED University of Engineering and Technology, Karachi, Pakistan
- Asif Raza: Sir Syed University of Engineering and Technology, Karachi, Pakistan
- Muhammad Fahad: Karachi Institute of Economics and Technology, Karachi, Pakistan
- Adnan Akhunzada: College of Computing and IT, University of Doha for Science and Technology, Qatar
- Muhammad Abubakar: Muhammad Nawaz Shareef University of Engineering and Technology, Multan, Pakistan
- Hira Beenish: Karachi Institute of Economics and Technology, Karachi, Pakistan
5. Hassan MH, Reiter E, Razzaq M. Automatic ovarian follicle detection using object detection models. Sci Rep 2024;14:31856. PMID: 39738599; PMCID: PMC11685387; DOI: 10.1038/s41598-024-82904-8.
Abstract
Ovaries are of paramount importance in reproduction as they produce female gametes through a complex developmental process known as folliculogenesis. In the prospect of better understanding the mechanisms of folliculogenesis and of developing novel pharmacological approaches to control it, it is important to accurately and quantitatively assess the later stages of ovarian folliculogenesis (i.e. the formation of antral follicles and corpora lutea). Manual counting from histological sections is commonly employed to determine the number of these follicular structures; however, it is a laborious and error-prone task. In this work, we show the benefits of deep learning models for counting antral follicles and corpora lutea in ovarian histology sections. Here, we use various backbone architectures to build two one-stage object detection models, i.e. YOLO and RetinaNet. We employ transfer learning, early stopping, and data augmentation approaches to improve the generalizability of the object detectors. Furthermore, we use a sampling strategy to mitigate the foreground-foreground class imbalance and focal loss to reduce the imbalance between the foreground and background classes. Our models were trained and validated using a dataset containing only 1000 images. With RetinaNet, we achieved a mean average precision of 83% on the testing dataset, compared with 75% for YOLO. Our results demonstrate that deep learning methods are useful to speed up the follicle counting process and improve accuracy by correcting manual counting errors.
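The focal loss mentioned above is a standard remedy for foreground-background imbalance in one-stage detectors; a short PyTorch sketch of a binary focal loss is given below, using the commonly quoted default alpha and gamma values rather than the paper's exact configuration.

```python
# Sketch of a binary focal loss (Lin et al., 2017) that down-weights easy
# background examples; parameter values are common defaults, not the paper's.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """logits, targets: tensors of identical shape; targets in {0, 1}."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)            # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

logits = torch.randn(8)
targets = torch.randint(0, 2, (8,)).float()
print(focal_loss(logits, targets))
```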
Affiliation(s)
- Maya Haj Hassan: INRAE, CNRS, Université de Tours, PRC, Nouzilly, 37380, France
- Eric Reiter: INRAE, CNRS, Université de Tours, PRC, Nouzilly, 37380, France; Université Paris-Saclay, Inria, Inria Saclay-Île-de-France, Palaiseau, 91120, France
- Misbah Razzaq: INRAE, CNRS, Université de Tours, PRC, Nouzilly, 37380, France
6. Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024;15:100357. PMID: 38420608; PMCID: PMC10900832; DOI: 10.1016/j.jpi.2023.100357.
Abstract
Computational Pathology (CPath) is an interdisciplinary science that augments developments of computational approaches to analyze and model medical histopathology images. The main objective for CPath is to develop infrastructure and workflows of digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer that are mainly addressed by CPath tools. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific works being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the directions and trends being pursued in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced from problem design all the way to application and implementation viewpoints. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community to locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we overview CPath developments as a cycle of stages that must be cohesively linked together to address the challenges associated with such a multidisciplinary science. We overview this cycle from the different perspectives of data-centric, model-centric, and application-centric problems. We finally sketch remaining challenges and provide directions for future technical developments and clinical integration of CPath. For updated information on this survey review paper and access to the original model cards repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S. Hosseini: Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh: Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou: Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin: Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji: Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi: Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen: University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras: Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N. Plataniotis: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
7. Ancheta K, Le Calvez S, Williams J. The digital revolution in veterinary pathology. J Comp Pathol 2024;214:19-31. PMID: 39241697; DOI: 10.1016/j.jcpa.2024.08.001.
Abstract
For the past two centuries, the use of traditional light microscopy to examine tissues to make diagnoses has remained relatively unchanged. While the fundamental concept of tissue slide analysis has stayed the same, our interaction with the microscope is undergoing significant changes. Digital pathology (DP) has gained momentum in veterinary science and is on the verge of becoming a vital tool in diagnostics, research and education. Many diagnostic laboratories have incorporated DP as a critical part of their workflows. Innovations in DP and whole slide image technology have made telediagnosis (the process of transmitting digital clinical data using telecommunication networks for distant diagnosis) more accessible, leading to improved patient care through streamlining of workflows and greater accessibility of second opinions. The integration of machine learning and artificial intelligence and human-in-the-loop protocols for DP workflows will further the development of computer-aided diagnosis and prognostic tools. Despite its present weaknesses, DP will progressively aid veterinary clinicians and pathologists in delivering more accurate and reliable diagnoses. Consistent incorporation of DP frontline advancements into routine veterinary diagnostic pipelines will assist in improving current tools and help prepare pathologists for the progression of digitalization in the field.
Affiliation(s)
- Kenneth Ancheta: The Royal Veterinary College, Hawkshead Campus, Hawkshead Lane, Hatfield, Hertfordshire AL9 7TA, UK
- Sophie Le Calvez: IDEXX Laboratories Ltd, Grange House, Sandbeck Way, Wetherby, Yorkshire LS22 7DN, UK
- Jonathan Williams: The Royal Veterinary College, Hawkshead Campus, Hawkshead Lane, Hatfield, Hertfordshire AL9 7TA, UK
8. Parvaiz A, Nasir ES, Fraz MM. From Pixels to Prognosis: A Survey on AI-Driven Cancer Patient Survival Prediction Using Digital Histology Images. Journal of Imaging Informatics in Medicine 2024;37:1728-1751. PMID: 38429563; PMCID: PMC11300721; DOI: 10.1007/s10278-024-01049-2.
Abstract
Survival analysis is an integral part of medical statistics that is extensively utilized to establish prognostic indices for mortality or disease recurrence, assess treatment efficacy, and tailor effective treatment plans. The identification of prognostic biomarkers capable of predicting patient survival is a primary objective in the field of cancer research. With the recent integration of digital histology images into routine clinical practice, a plethora of Artificial Intelligence (AI)-based methods for digital pathology has emerged in scholarly literature, facilitating patient survival prediction. These methods have demonstrated remarkable proficiency in analyzing and interpreting whole slide images, yielding results comparable to those of expert pathologists. The complexity of AI-driven techniques is magnified by the distinctive characteristics of digital histology images, including their gigapixel size and diverse tissue appearances. Consequently, advanced patch-based methods are employed to effectively extract features that correlate with patient survival. These computational methods significantly enhance survival prediction accuracy and augment prognostic capabilities in cancer patients. The review discusses the methodologies employed in the literature, their performance metrics, ongoing challenges, and potential solutions for future advancements. This paper explains survival analysis and feature extraction methods for analyzing cancer patients. It also compiles essential acronyms related to cancer precision medicine. Furthermore, it is noteworthy that this is the inaugural review paper in the field. The target audience for this interdisciplinary review comprises AI practitioners, medical statisticians, and progressive oncologists who are enthusiastic about translating AI-driven solutions into clinical practice. We expect this comprehensive review article to guide future research directions in the field of cancer research.
Affiliation(s)
- Arshi Parvaiz: National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Esha Sadia Nasir: National University of Sciences and Technology (NUST), Islamabad, Pakistan
9. Elazab N, Gab Allah W, Elmogy M. Computer-aided diagnosis system for grading brain tumor using histopathology images based on color and texture features. BMC Med Imaging 2024;24:177. PMID: 39030508; PMCID: PMC11264763; DOI: 10.1186/s12880-024-01355-9.
Abstract
BACKGROUND Cancer pathology shows disease development and associated molecular features. It provides extensive phenotypic information that is cancer-predictive and has potential implications for planning treatment. Based on the exceptional performance of computational approaches in the field of digital pathology, the use of rich phenotypic information in digital pathology images has enabled us to distinguish low-grade gliomas (LGG) from high-grade gliomas (HGG). Because the differences between the textures are so slight, utilizing just one feature or a small number of features produces poor categorization results. METHODS In this work, multiple feature extraction methods that can extract distinct features from the texture of histopathology image data are used to compare the classification outcomes. The successful feature extraction algorithms GLCM, LBP, multi-LBGLCM, GLRLM, color moment features, and RSHD have been chosen in this paper. LBP and GLCM algorithms are combined to create LBGLCM. The LBGLCM feature extraction approach is extended in this study to multiple scales using an image pyramid, which is defined by sampling the image both in space and scale. The preprocessing stage is first used to enhance the contrast of the images and remove noise and illumination effects. The feature extraction stage is then carried out to extract several important features (texture and color) from histopathology images. Third, the feature fusion and reduction step is put into practice to decrease the number of features that are processed, reducing the computation time of the suggested system. The classification stage is applied at the end to categorize the various brain cancer grades. We performed our analysis on the 821 whole-slide pathology images from glioma patients in the Cancer Genome Atlas (TCGA) dataset. Two types of brain cancer are included in the dataset: GBM and LGG (grades II and III). 506 GBM images and 315 LGG images are included in our analysis, guaranteeing representation of various tumor grades and histopathological features. RESULTS The fusion of textural and color characteristics was validated in the glioma patients using the 10-fold cross-validation technique, with an accuracy of 95.8%, sensitivity of 96.4%, DSC of 96.7%, and specificity of 97.1%. The combination of the color and texture characteristics produced significantly better accuracy, which supported their synergistic significance in the predictive model. The results indicate that textural characteristics can provide an objective, accurate, and comprehensive glioma prediction when paired with conventional imagery. CONCLUSION The results outperform current approaches for distinguishing LGG from HGG and provide competitive performance in classifying four categories of glioma in the literature. The proposed model can help stratify patients in clinical studies, choose patients for targeted therapy, and customize specific treatment schedules.
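The GLCM and LBP descriptors at the core of this pipeline can be computed with scikit-image; the sketch below illustrates the idea on a synthetic grayscale patch and is not the authors' implementation (distances, angles, and LBP parameters are illustrative choices).

```python
# Sketch: GLCM and LBP texture descriptors of the kind fused in the paper,
# computed with scikit-image on a synthetic grayscale patch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

patch = (np.random.rand(128, 128) * 255).astype(np.uint8)   # placeholder histology patch

# Gray-level co-occurrence matrix and Haralick-style properties.
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2], levels=256,
                    symmetric=True, normed=True)
glcm_feats = [graycoprops(glcm, prop).mean()
              for prop in ("contrast", "homogeneity", "energy", "correlation")]

# Uniform local binary pattern histogram (P=8 neighbours, radius 1 -> 10 bins).
lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)

feature_vector = np.concatenate([glcm_feats, lbp_hist])
print(feature_vector.shape)   # 14 texture features ready for fusion/classification
```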
Affiliation(s)
- Naira Elazab: Information Technology Department, Faculty of Computers and Information, Mansoura University, 35516, Mansoura, Egypt
- Wael Gab Allah: Information Technology Department, Faculty of Computers and Information, Mansoura University, 35516, Mansoura, Egypt
- Mohammed Elmogy: Information Technology Department, Faculty of Computers and Information, Mansoura University, 35516, Mansoura, Egypt
10. Wu Z, Yang Y, Chen M, Zha Y. Matrix metalloproteinase 9 expression and glioblastoma survival prediction using machine learning on digital pathological images. Sci Rep 2024;14:15065. PMID: 38956384; PMCID: PMC11220146; DOI: 10.1038/s41598-024-66105-x.
Abstract
This study aimed to apply pathomics to predict Matrix metalloproteinase 9 (MMP9) expression in glioblastoma (GBM) and investigate the underlying molecular mechanisms associated with pathomics. Here, we included 127 GBM patients, 78 of whom were randomly allocated to the training and test cohorts for pathomics modeling. The prognostic significance of MMP9 was assessed using Kaplan-Meier and Cox regression analyses. PyRadiomics was used to extract the features of H&E-stained whole slide images. Feature selection was performed using the maximum relevance and minimum redundancy (mRMR) and recursive feature elimination (RFE) algorithms. Prediction models were created using support vector machines (SVM) and logistic regression (LR). The performance was assessed using ROC analysis, calibration curve assessment, and decision curve analysis. MMP9 expression was elevated in patients with GBM. This was an independent prognostic factor for GBM. Six features were selected for the pathomics model. The area under the curves (AUCs) of the training and test subsets were 0.828 and 0.808, respectively, for the SVM model and 0.778 and 0.754, respectively, for the LR model. The C-index and calibration plots exhibited effective estimation abilities. The pathomics score calculated using the SVM model was highly correlated with overall survival time. These findings indicate that MMP9 plays a crucial role in GBM development and prognosis. Our pathomics model demonstrated high efficacy for predicting MMP9 expression levels and prognosis of patients with GBM.
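The feature-selection and classification stage described here (mRMR/RFE followed by SVM and logistic regression, evaluated by ROC AUC) can be approximated with scikit-learn as below; synthetic features stand in for the PyRadiomics output, and plain RFE is used in place of the paper's mRMR + RFE combination.

```python
# Sketch of the feature-selection / classification stage with ROC-AUC evaluation.
# Synthetic data only; hyperparameters are illustrative, not the study's settings.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=78, n_features=100, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Keep six features, mirroring the six selected in the study (ordinary RFE here).
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=6).fit(X_train, y_train)
X_train_sel, X_test_sel = selector.transform(X_train), selector.transform(X_test)

for name, model in [("SVM", SVC(probability=True)), ("LR", LogisticRegression(max_iter=1000))]:
    model.fit(X_train_sel, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test_sel)[:, 1])
    print(name, "test AUC:", round(auc, 3))
```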
Affiliation(s)
- Zijun Wu: Department of Radiology, Renmin Hospital of Wuhan University, Wuhan, 430000, China
- Yuan Yang: Department of Radiology, Renmin Hospital of Wuhan University, Wuhan, 430000, China
- Maojuan Chen: Department of Radiology, Renmin Hospital of Wuhan University, Wuhan, 430000, China
- Yunfei Zha: Department of Radiology, Renmin Hospital of Wuhan University, Wuhan, 430000, China
11. Islam J, Turgeon M, Sladek R, Bhatnagar S. Case-Base Neural Network: Survival analysis with time-varying, higher-order interactions. Machine Learning with Applications 2024;16:100535. PMID: 39802089; PMCID: PMC11720922; DOI: 10.1016/j.mlwa.2024.100535.
Abstract
In the context of survival analysis, data-driven neural network-based methods have been developed to model complex covariate effects. While these methods may provide better predictive performance than regression-based approaches, not all can model time-varying interactions and complex baseline hazards. To address this, we propose Case-Base Neural Networks (CBNNs) as a new approach that combines the case-base sampling framework with flexible neural network architectures. Using a novel sampling scheme and data augmentation to naturally account for censoring, we construct a feed-forward neural network that includes time as an input. CBNNs predict the probability of an event occurring at a given moment to estimate the full hazard function. We compare the performance of CBNNs to regression and neural network-based survival methods in a simulation and three case studies using two time-dependent metrics. First, we examine performance on a simulation involving a complex baseline hazard and time-varying interactions to assess all methods, with CBNN outperforming competitors. Then, we apply all methods to three real data applications, with CBNNs outperforming the competing models in two studies and showing similar performance in the third. Our results highlight the benefit of combining case-base sampling with deep learning to provide a simple and flexible framework for data-driven modeling of single event survival outcomes that estimates time-varying effects and a complex baseline hazard by design. An R package is available at https://github.com/Jesse-Islam/cbnn.
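A rough illustration of the central idea, a feed-forward network that receives follow-up time as an additional input and outputs the probability of an event at that moment, is sketched below in PyTorch. The actual case-base sampling, censoring handling, and offset terms of the published method are omitted, so this is not the authors' implementation (their R package is linked above).

```python
# Illustrative sketch only: a feed-forward network with time as an extra input,
# outputting P(event at this moment). Case-base sampling itself is omitted.
import torch
import torch.nn as nn

class CaseBaseNet(nn.Module):
    def __init__(self, n_covariates, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_covariates + 1, hidden),   # +1 for the time input
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, covariates, time):
        x = torch.cat([covariates, time.unsqueeze(-1)], dim=-1)
        return torch.sigmoid(self.net(x)).squeeze(-1)

model = CaseBaseNet(n_covariates=5)
covariates = torch.randn(16, 5)
time = torch.rand(16) * 10.0
print(model(covariates, time).shape)   # torch.Size([16])
```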
Affiliation(s)
- Jesse Islam: McGill University Department of Quantitative Life Sciences, 805 rue Sherbrooke O, Montréal, H3A 0B9, Quebec, Canada
- Maxime Turgeon: University of Manitoba Department of Statistics, 50 Sifton Rd, Winnipeg, R3T2N2, Manitoba, Canada
- Robert Sladek: McGill University Department of Quantitative Life Sciences, 805 rue Sherbrooke O, Montréal, H3A 0B9, Quebec, Canada; McGill University Department of Human Genetics, 805 rue Sherbrooke O, Montréal, H3A 0B9, Quebec, Canada
- Sahir Bhatnagar: McGill University Department of Biostatistics, 805 rue Sherbrooke O, Montréal, H3A 0B9, Quebec, Canada
12. Elazab N, Gab-Allah WA, Elmogy M. A multi-class brain tumor grading system based on histopathological images using a hybrid YOLO and RESNET networks. Sci Rep 2024;14:4584. PMID: 38403597; PMCID: PMC10894864; DOI: 10.1038/s41598-024-54864-6.
Abstract
Gliomas are primary brain tumors caused by glial cells. These cancers' classification and grading are crucial for prognosis and treatment planning. Deep learning (DL) can potentially improve the digital pathology investigation of brain tumors. In this paper, we developed a technique for visualizing a predictive tumor grading model on histopathology images to help guide doctors by emphasizing characteristics and heterogeneity in forecasts. The proposed technique is a hybrid model based on YOLOv5 and ResNet50. The function of YOLOv5 is to localize and classify the tumor in large histopathological whole slide images (WSIs). The suggested technique incorporates ResNet into the feature extraction of the YOLOv5 framework, and the detection results show that our hybrid network is effective for identifying brain tumors from histopathological images. Next, we estimate the glioma grades using the extreme gradient boosting classifier. The high-dimensional characteristics and nonlinear interactions present in histopathology images are well handled by this classifier. DL techniques have been used in previous computer-aided diagnosis systems for brain tumor diagnosis. However, by combining the YOLOv5 and ResNet50 architectures into a hybrid model specifically designed for accurate tumor localization and predictive grading within histopathological WSIs, our study presents a new approach that advances the field. By utilizing the advantages of both models, this creative integration goes beyond traditional techniques to produce improved tumor localization accuracy and thorough feature extraction. Additionally, our method ensures stable training dynamics and strong model performance by integrating ResNet50 into the YOLOv5 framework, addressing concerns about gradient explosion. The proposed technique is tested using The Cancer Genome Atlas dataset. During the experiments, our model outperforms other standard methods on the same dataset. Our results indicate that the proposed hybrid model substantially impacts tumor subtype discrimination between low-grade glioma (LGG) II and LGG III. With an accuracy of 97.2%, precision of 97.8%, sensitivity of 98.6%, and a Dice similarity coefficient of 97%, the proposed model performs well in classifying the four grades. These results outperform current approaches for distinguishing LGG from high-grade glioma and provide competitive performance in classifying the four categories of glioma reported in the literature.
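The final grading step (an extreme gradient boosting classifier over features extracted from detected tumor regions) can be sketched with xgboost and scikit-learn as follows; the random features and labels are placeholders, not output from the YOLOv5 + ResNet50 stage.

```python
# Sketch of the grading stage: gradient boosting on per-region feature vectors.
# Placeholder data and hyperparameters; not the authors' configuration.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

features = np.random.rand(400, 2048)          # placeholder deep features per WSI region
grades = np.random.randint(0, 4, size=400)    # four glioma grade classes

X_tr, X_te, y_tr, y_te = train_test_split(features, grades, test_size=0.25, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("Accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```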
Affiliation(s)
- Naira Elazab: Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
- Wael A Gab-Allah: Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
- Mohammed Elmogy: Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
13. Zadeh Shirazi A, Tofighi M, Gharavi A, Gomez GA. The Application of Artificial Intelligence to Cancer Research: A Comprehensive Guide. Technol Cancer Res Treat 2024;23:15330338241250324. PMID: 38775067; PMCID: PMC11113055; DOI: 10.1177/15330338241250324.
Abstract
Advancements in AI have notably changed cancer research, improving patient care by enhancing detection, survival prediction, and treatment efficacy. This review covers the role of Machine Learning, Soft Computing, and Deep Learning in oncology, explaining key concepts and algorithms (like SVM, Naïve Bayes, and CNN) in a clear, accessible manner. It aims to make AI advancements understandable to a broad audience, focusing on their application in diagnosing, classifying, and predicting various cancer types, thereby underlining AI's potential to better patient outcomes. Moreover, we present a tabular summary of the most significant advances from the literature, offering a time-saving resource for readers to grasp each study's main contributions. The remarkable benefits of AI-powered algorithms in cancer care underscore their potential for advancing cancer research and clinical practice. This review is a valuable resource for researchers and clinicians interested in the transformative implications of AI in cancer care.
Affiliation(s)
- Amin Zadeh Shirazi: Centre for Cancer Biology, SA Pathology and the University of South Australia, Adelaide, SA, Australia
- Morteza Tofighi: Department of Electrical Engineering, Faculty of Engineering, Bu-Ali Sina University, Hamedan, Iran
- Alireza Gharavi: Department of Computer Science, Azad University, Mashhad Branch, Mashhad, Iran
- Guillermo A. Gomez: Centre for Cancer Biology, SA Pathology and the University of South Australia, Adelaide, SA, Australia
14. Sauter D, Lodde G, Nensa F, Schadendorf D, Livingstone E, Kukuk M. Deep learning in computational dermatopathology of melanoma: A technical systematic literature review. Comput Biol Med 2023;163:107083. PMID: 37315382; DOI: 10.1016/j.compbiomed.2023.107083.
Abstract
Deep learning (DL) has become one of the major approaches in computational dermatopathology, evidenced by a significant increase in this topic in the current literature. We aim to provide a structured and comprehensive overview of peer-reviewed publications on DL applied to dermatopathology focused on melanoma. In comparison to well-published DL methods on non-medical images (e.g., classification on ImageNet), this field of application comprises a specific set of challenges, such as staining artifacts, large gigapixel images, and various magnification levels. Thus, we are particularly interested in the pathology-specific technical state-of-the-art. We also aim to summarize the best performances achieved thus far with respect to accuracy, along with an overview of self-reported limitations. Accordingly, we conducted a systematic literature review of peer-reviewed journal and conference articles published between 2012 and 2022 in the databases ACM Digital Library, Embase, IEEE Xplore, PubMed, and Scopus, expanded by forward and backward searches to identify 495 potentially eligible studies. After screening for relevance and quality, a total of 54 studies were included. We qualitatively summarized and analyzed these studies from technical, problem-oriented, and task-oriented perspectives. Our findings suggest that the technical aspects of DL for histopathology in melanoma can be further improved. The DL methodology was adopted later in this field, and still lacks the wider adoption of DL methods already shown to be effective for other applications. We also discuss upcoming trends toward ImageNet-based feature extraction and larger models. While DL has achieved human-competitive accuracy in routine pathological tasks, its performance on advanced tasks is still inferior to wet-lab testing (for example). Finally, we discuss the challenges impeding the translation of DL methods to clinical practice and provide insight into future research directions.
Affiliation(s)
- Daniel Sauter: Department of Computer Science, Fachhochschule Dortmund, 44227 Dortmund, Germany
- Georg Lodde: Department of Dermatology, University Hospital Essen, 45147 Essen, Germany
- Felix Nensa: Institute for AI in Medicine (IKIM), University Hospital Essen, 45131 Essen, Germany; Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, 45147 Essen, Germany
- Dirk Schadendorf: Department of Dermatology, University Hospital Essen, 45147 Essen, Germany
- Markus Kukuk: Department of Computer Science, Fachhochschule Dortmund, 44227 Dortmund, Germany
15. Hörst F, Ting S, Liffers ST, Pomykala KL, Steiger K, Albertsmeier M, Angele MK, Lorenzen S, Quante M, Weichert W, Egger J, Siveke JT, Kleesiek J. Histology-Based Prediction of Therapy Response to Neoadjuvant Chemotherapy for Esophageal and Esophagogastric Junction Adenocarcinomas Using Deep Learning. JCO Clin Cancer Inform 2023;7:e2300038. PMID: 37527475; DOI: 10.1200/cci.23.00038.
Abstract
PURPOSE Quantifying treatment response to gastroesophageal junction (GEJ) adenocarcinomas is crucial to provide an optimal therapeutic strategy. Routinely taken tissue samples provide an opportunity to enhance existing positron emission tomography-computed tomography (PET/CT)-based therapy response evaluation. Our objective was to investigate if deep learning (DL) algorithms are capable of predicting the therapy response of patients with GEJ adenocarcinoma to neoadjuvant chemotherapy on the basis of histologic tissue samples. METHODS This diagnostic study recruited 67 patients with I-III GEJ adenocarcinoma from the multicentric nonrandomized MEMORI trial including three German university hospitals TUM (University Hospital Rechts der Isar, Munich), LMU (Hospital of the Ludwig-Maximilians-University, Munich), and UME (University Hospital Essen, Essen). All patients underwent baseline PET/CT scans and esophageal biopsy before and 14-21 days after treatment initiation. Treatment response was defined as a ≥35% decrease in SUVmax from baseline. Several DL algorithms were developed to predict PET/CT-based responders and nonresponders to neoadjuvant chemotherapy using digitized histopathologic whole slide images (WSIs). RESULTS The resulting models were trained on TUM (n = 25 pretherapy, n = 47 on-therapy) patients and evaluated on our internal validation cohort from LMU and UME (n = 17 pretherapy, n = 15 on-therapy). Compared with multiple architectures, the best pretherapy network achieves an area under the receiver operating characteristic curve (AUROC) of 0.81 (95% CI, 0.61 to 1.00), an area under the precision-recall curve (AUPRC) of 0.82 (95% CI, 0.61 to 1.00), a balanced accuracy of 0.78 (95% CI, 0.60 to 0.94), and a Matthews correlation coefficient (MCC) of 0.55 (95% CI, 0.18 to 0.88). The best on-therapy network achieves an AUROC of 0.84 (95% CI, 0.64 to 1.00), an AUPRC of 0.82 (95% CI, 0.56 to 1.00), a balanced accuracy of 0.80 (95% CI, 0.65 to 1.00), and a MCC of 0.71 (95% CI, 0.38 to 1.00). CONCLUSION Our results show that DL algorithms can predict treatment response to neoadjuvant chemotherapy using WSI with high accuracy even before therapy initiation, suggesting the presence of predictive morphologic tissue biomarkers.
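The metrics reported in this entry (AUROC, AUPRC, balanced accuracy, MCC) are all available in scikit-learn; the sketch below shows how they would be computed for a binary responder/non-responder prediction, using placeholder probabilities rather than the study's model outputs.

```python
# Sketch: computing AUROC, AUPRC, balanced accuracy, and MCC for a binary
# responder/non-responder prediction. Values below are placeholders.
import numpy as np
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             balanced_accuracy_score, matthews_corrcoef)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])      # 1 = responder
y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3, 0.55, 0.35])
y_pred = (y_prob >= 0.5).astype(int)

print("AUROC:            ", roc_auc_score(y_true, y_prob))
print("AUPRC:            ", average_precision_score(y_true, y_prob))
print("Balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
print("MCC:              ", matthews_corrcoef(y_true, y_pred))
```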
Affiliation(s)
- Fabian Hörst: Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany
- Saskia Ting: Institute of Pathology, University Hospital Essen (AöR), University of Duisburg-Essen, Essen, Germany; current address: Institute of Pathology Nordhessen, Kassel, Germany
- Sven-Thorsten Liffers: Bridge Institute of Experimental Tumor Therapy, West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany; Division of Solid Tumor Translational Oncology, German Cancer Consortium (DKTK, Partner site Essen) and German Cancer Research Center (DKFZ), Heidelberg, Germany
- Kelsey L Pomykala: Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Katja Steiger: Institute of Pathology, Technical University of Munich (TUM), Munich, Germany
- Markus Albertsmeier: Department of General, Visceral and Transplantation Surgery, LMU University Hospital, Ludwig-Maximilians-Universität (LMU) Munich, Munich, Germany
- Martin K Angele: Department of General, Visceral and Transplantation Surgery, LMU University Hospital, Ludwig-Maximilians-Universität (LMU) Munich, Munich, Germany
- Sylvie Lorenzen: Clinic for Internal Medicine III, University Hospital rechts der Isar, Technical University of Munich (TUM), Munich, Germany
- Michael Quante: Clinic for Internal Medicine II, Gastrointestinal Oncology, University Medical Center of Freiburg, Freiburg, Germany; Department of Internal Medicine II, University Hospital rechts der Isar, Technical University of Munich (TUM), Munich, Germany
- Wilko Weichert: Institute of Pathology, Technical University of Munich (TUM), Munich, Germany; German Cancer Consortium (DKTK), Heidelberg, Germany; German Cancer Research Center (DKFZ), Heidelberg, Germany
- Jan Egger: Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany
- Jens T Siveke: Bridge Institute of Experimental Tumor Therapy, West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany; Division of Solid Tumor Translational Oncology, German Cancer Consortium (DKTK, Partner site Essen) and German Cancer Research Center (DKFZ), Heidelberg, Germany; West German Cancer Center, Department of Medical Oncology, University Hospital Essen (AöR), Essen, Germany; Medical Faculty, University Duisburg-Essen, Essen, Germany
- Jens Kleesiek: Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany; German Cancer Consortium (DKTK, Partner site Essen), Heidelberg, Germany
16. Lee M. Recent Advancements in Deep Learning Using Whole Slide Imaging for Cancer Prognosis. Bioengineering (Basel) 2023;10:897. PMID: 37627783; PMCID: PMC10451210; DOI: 10.3390/bioengineering10080897.
Abstract
This review furnishes an exhaustive analysis of the latest advancements in deep learning techniques applied to whole slide images (WSIs) in the context of cancer prognosis, focusing specifically on publications from 2019 through 2023. The swiftly maturing field of deep learning, in combination with the burgeoning availability of WSIs, manifests significant potential in revolutionizing the predictive modeling of cancer prognosis. In light of the swift evolution and profound complexity of the field, it is essential to systematically review contemporary methodologies and critically appraise their ramifications. This review elucidates the prevailing landscape of this intersection, cataloging major developments, evaluating their strengths and weaknesses, and providing discerning insights into prospective directions. This paper aims to present a comprehensive overview of the field that can serve as a critical resource for researchers and clinicians, ultimately enhancing the quality of cancer care outcomes. This review's findings accentuate the need for ongoing scrutiny of recent studies in this rapidly progressing field to discern patterns, understand breakthroughs, and navigate future research trajectories.
Affiliation(s)
- Minhyeok Lee: School of Electrical and Electronics Engineering, Chung-Ang University, Seoul 06974, Republic of Korea
17. Sozer A, Borcek AO, Sagiroglu S, Poshtkouh A, Demirtas Z, Karaaslan MM, Kuzucu P, Celtikci E. The first case of glioma detected by an artificial intelligence algorithm running on real-time data in neurosurgery: illustrative case. Journal of Neurosurgery: Case Lessons 2023;5:CASE22536. PMID: 37158388; PMCID: PMC10550689; DOI: 10.3171/case22536.
Abstract
BACKGROUND The aim of this paper is to report one of the significant applications of artificial intelligence (AI) and how it affects everyday clinical practice in neurosurgery. The authors present a case in which a patient was diagnosed via an AI algorithm during ongoing magnetic resonance imaging (MRI). According to this algorithm, the corresponding physicians were immediately warned, and the patient received prompt appropriate treatment. OBSERVATIONS A 46-year-old female presenting with nonspecific headache was admitted to undergo MRI. Scanning revealed an intraparenchymal mass that was detected by an AI algorithm running on real-time patient data while the patient was still in the MRI scanner. The day after MRI, a stereotactic biopsy was performed. The pathology report confirmed an isocitrate dehydrogenase wild-type diffuse glioma. The patient was referred to the oncology department for evaluation and immediate treatment. LESSONS This is the first report of a glioma diagnosed by an AI algorithm and a subsequent prompt operation in the literature-the first of many and an example of how AI will enhance clinical practice.
Affiliation(s)
- Alp Ozgun Borcek: Department of Neurosurgery, Division of Pediatric Neurosurgery, Gazi University Faculty of Medicine
- Seref Sagiroglu: Department of Computer Engineering, Gazi University Faculty of Engineering, Ankara, Türkiye
18. Shinde RK, Alam MS, Hossain MB, Md Imtiaz S, Kim J, Padwal AA, Kim N. Squeeze-MNet: Precise Skin Cancer Detection Model for Low Computing IoT Devices Using Transfer Learning. Cancers (Basel) 2022;15:12. PMID: 36612010; PMCID: PMC9817940; DOI: 10.3390/cancers15010012.
Abstract
Cancer remains a deadly disease. We developed a lightweight, accurate, general-purpose deep learning algorithm for skin cancer classification. Squeeze-MNet combines a Squeeze algorithm for digital hair removal during preprocessing and a MobileNet deep learning model with predefined weights. The Squeeze algorithm extracts important image features from the image, and the black-hat filter operation removes noise. The MobileNet model (with a dense neural network) was developed using the International Skin Imaging Collaboration (ISIC) dataset to fine-tune the model. The proposed model is lightweight; the prototype was tested on a Raspberry Pi 4 Internet of Things device with a Neo pixel 8-bit LED ring; a medical doctor validated the device. The average precision (AP) for benign and malignant diagnoses was 99.76% and 98.02%, respectively. Using our approach, the required dataset size decreased by 66%. The hair removal algorithm increased the accuracy of skin cancer detection to 99.36% with the ISIC dataset. The area under the receiver operating curve was 98.9%.
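Digital hair removal of the kind described (a black-hat morphological filter followed by mask-based clean-up) is a common preprocessing recipe; the OpenCV sketch below illustrates it on a synthetic image, with the kernel size, threshold, and inpainting step chosen as typical defaults rather than the Squeeze algorithm's exact parameters.

```python
# Sketch of black-hat based digital hair removal on a synthetic dermoscopy-like image.
# Parameters are common defaults, not the paper's settings.
import cv2
import numpy as np

image = np.full((256, 256, 3), 200, dtype=np.uint8)          # bright "skin" background
for x in range(20, 256, 40):
    cv2.line(image, (x, 0), (x - 60, 255), (40, 30, 30), 2)   # dark hair-like strokes

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Black-hat morphology highlights thin dark structures (hairs) on a bright background.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)

# Threshold the hair mask and inpaint the original image over it.
_, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
clean = cv2.inpaint(image, hair_mask, 3, cv2.INPAINT_TELEA)
print("Hair pixels removed:", int((hair_mask > 0).sum()))
```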
Affiliation(s)
- Rupali Kiran Shinde: Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Md. Biddut Hossain: Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Shariar Md Imtiaz: Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- JoonHyun Kim: Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Nam Kim: Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
19. Chen X, Peng Y, Guo Y, Sun J, Li D, Cui J. MLRD-Net: 3D multiscale local cross-channel residual denoising network for MRI-based brain tumor segmentation. Med Biol Eng Comput 2022;60:3377-3395. DOI: 10.1007/s11517-022-02673-2.
20
|
Hou J, Jia X, Xie Y, Qin W. Integrative Histology-Genomic Analysis Predicts Hepatocellular Carcinoma Prognosis Using Deep Learning. Genes (Basel) 2022; 13:genes13101770. [PMID: 36292654 PMCID: PMC9601633 DOI: 10.3390/genes13101770] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Revised: 09/25/2022] [Accepted: 09/28/2022] [Indexed: 11/04/2022] Open
Abstract
Cancer prognosis analysis is of essential interest in clinical practice. To explore the prognostic power of computational histopathology and genomics, this paper constructs a multi-modality prognostic model for survival prediction. We collected 346 patients diagnosed with hepatocellular carcinoma (HCC) from The Cancer Genome Atlas (TCGA); each patient had 1-3 whole slide images (WSIs) and an mRNA expression file. WSIs were processed by a multi-instance deep learning model to obtain patient-level survival risk scores; mRNA expression data were processed by weighted gene co-expression network analysis (WGCNA), and the top hub genes of each module were extracted as risk factors. Information from the two modalities was integrated by a Cox proportional hazards model to predict patient outcomes. The overall survival predictions of the multi-modality model (concordance index (C-index): 0.746, 95% confidence interval (CI): ±0.077) outperformed those based on the histopathology risk score or hub genes alone. Furthermore, in the prediction of 1-year and 3-year survival, the area under the curve of the model reached 0.816 and 0.810, respectively. In conclusion, this paper provides an effective workflow for multi-modality prognosis of HCC; the integration of histopathology and genomic information has the potential to assist clinical prognosis management.
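A minimal sketch of the late-fusion step described above, assuming a table already holds the WSI-derived risk score and hub-gene expression values per patient: the features are combined in a Cox proportional hazards model and scored with the concordance index. The file name and column names are placeholder assumptions, not the authors' data schema.

```python
# Hedged sketch: combine a histopathology risk score with hub-gene features in a
# Cox proportional hazards model and report the C-index. Placeholder file/columns.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Expected columns: survival_months, event (1 = death), wsi_risk_score, hub-gene columns.
df = pd.read_csv("hcc_multimodal_features.csv")

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_months", event_col="event")

# Higher partial hazard means higher risk, so negate it for the concordance index.
risk = cph.predict_partial_hazard(df)
cindex = concordance_index(df["survival_months"], -risk, df["event"])
print(f"C-index: {cindex:.3f}")
```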
Collapse
Affiliation(s)
- Jiaxin Hou
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
| | - Xiaoqi Jia
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China
| | - Yaoqin Xie
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Wenjian Qin
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Correspondence:
| |
Collapse
|
21
|
di Noia C, Grist JT, Riemer F, Lyasheva M, Fabozzi M, Castelli M, Lodi R, Tonon C, Rundo L, Zaccagna F. Predicting Survival in Patients with Brain Tumors: Current State-of-the-Art of AI Methods Applied to MRI. Diagnostics (Basel) 2022; 12:diagnostics12092125. [PMID: 36140526 PMCID: PMC9497964 DOI: 10.3390/diagnostics12092125] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2022] [Revised: 08/05/2022] [Accepted: 08/17/2022] [Indexed: 11/24/2022] Open
Abstract
Given growing clinical needs, in recent years Artificial Intelligence (AI) techniques have increasingly been used to define the best approaches for survival assessment and prediction in patients with brain tumors. Advances in computational resources and the collection of (mainly public) databases have promoted this rapid development. This narrative review of the current state of the art aimed to survey current applications of AI in predicting survival in patients with brain tumors, with a focus on Magnetic Resonance Imaging (MRI). An extensive search was performed on PubMed and Google Scholar using a Boolean query based on MeSH terms, restricted to the period between 2012 and 2022. Fifty studies were selected, mainly based on Machine Learning (ML), Deep Learning (DL), radiomics-based methods, and methods that exploit traditional imaging techniques for survival assessment. We focused on two distinct tasks related to survival assessment: the first is the classification of subjects into survival classes (short- and long-term, or in some cases short-, mid- and long-term) to stratify patients into distinct groups; the second is the quantification, in days or months, of the individual survival interval. Our survey showed excellent state-of-the-art methods for the first task, with accuracy up to ∼98%. The second task appears to be the more challenging, but state-of-the-art techniques showed promising results, albeit with limitations, with a C-index of up to ∼0.91. In conclusion, the available computational methods perform differently according to the specific task, and the choice of the best one to use is not clear-cut and depends on many factors. Unequivocally, the use of features derived from quantitative imaging has been shown to be advantageous for AI applications, including survival prediction. This evidence from the literature motivates further research in the field of AI-powered methods for survival prediction in patients with brain tumors, in particular using the wealth of information provided by quantitative MRI techniques.
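To make the two evaluation settings concrete, the toy sketch below scores a survival-class prediction with accuracy and a survival-time prediction with the concordance index; the arrays are illustrative only and do not come from the reviewed studies.

```python
# Hedged sketch of the two evaluation settings: accuracy for survival-class
# classification, C-index for predicted survival times. Toy data only.
import numpy as np
from lifelines.utils import concordance_index

# Task 1: short- vs long-term survival, scored with accuracy.
true_class = np.array([0, 1, 1, 0, 1])      # 0 = short-term, 1 = long-term
pred_class = np.array([0, 1, 0, 0, 1])
accuracy = (true_class == pred_class).mean()

# Task 2: predicted survival in months, scored with the C-index
# (probability that the predicted ordering matches the observed ordering).
observed_months = np.array([6.0, 24.0, 14.0, 9.0, 30.0])
predicted_months = np.array([8.0, 20.0, 12.0, 7.0, 28.0])
event_observed = np.array([1, 1, 0, 1, 1])   # 0 = censored
cindex = concordance_index(observed_months, predicted_months, event_observed)

print(f"accuracy = {accuracy:.2f}, C-index = {cindex:.2f}")
```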
Collapse
Affiliation(s)
- Christian di Noia
- Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum—University of Bologna, 40125 Bologna, Italy
| | - James T. Grist
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK
- Department of Radiology, Oxford University Hospitals NHS Foundation Trust, Oxford OX3 9DU, UK
- Oxford Centre for Clinical Magnetic Research Imaging, University of Oxford, Oxford OX3 9DU, UK
- Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham B15 2SY, UK
| | - Frank Riemer
- Mohn Medical Imaging and Visualization Centre (MMIV), Department of Radiology, Haukeland University Hospital, N-5021 Bergen, Norway
| | - Maria Lyasheva
- Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, John Radcliffe Hospital, Oxford OX3 9DU, UK
| | - Miriana Fabozzi
- Centro Medico Polispecialistico (CMO), 80058 Torre Annunziata, Italy
| | - Mauro Castelli
- NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal
| | - Raffaele Lodi
- Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum—University of Bologna, 40125 Bologna, Italy
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, 40139 Bologna, Italy
| | - Caterina Tonon
- Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum—University of Bologna, 40125 Bologna, Italy
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, 40139 Bologna, Italy
| | - Leonardo Rundo
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, Italy
| | - Fulvio Zaccagna
- Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum—University of Bologna, 40125 Bologna, Italy
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, 40139 Bologna, Italy
- Correspondence: Tel.: +39-0514969951
| |
Collapse
|
22
|
Sabitha P, Meeragandhi G. A dual stage AlexNet-HHO-DrpXLM archetype for an effective feature extraction, classification and prediction of liver cancer based on histopathology images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103833] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
23
|
Bankhead P. Developing image analysis methods for digital pathology. J Pathol 2022; 257:391-402. [PMID: 35481680 PMCID: PMC9324951 DOI: 10.1002/path.5921] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 04/22/2022] [Accepted: 04/25/2022] [Indexed: 12/04/2022]
Abstract
The potential to use quantitative image analysis and artificial intelligence is one of the driving forces behind digital pathology. However, despite novel image analysis methods for pathology being described across many publications, few become widely adopted and many are not applied in more than a single study. The explanation is often straightforward: software implementing the method is simply not available, or is too complex, incomplete, or dataset‐dependent for others to use. The result is a disconnect between what seems already possible in digital pathology based upon the literature, and what actually is possible for anyone wishing to apply it using currently available software. This review begins by introducing the main approaches and techniques involved in analysing pathology images. I then examine the practical challenges inherent in taking algorithms beyond proof‐of‐concept, from both a user and developer perspective. I describe the need for a collaborative and multidisciplinary approach to developing and validating meaningful new algorithms, and argue that openness, implementation, and usability deserve more attention among digital pathology researchers. The review ends with a discussion about how digital pathology could benefit from interacting with and learning from the wider bioimage analysis community, particularly with regard to sharing data, software, and ideas. © 2022 The Author. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
Collapse
Affiliation(s)
- Peter Bankhead
- Edinburgh Pathology, Institute of Genetics and Cancer, University of Edinburgh, Edinburgh, UK
- Centre for Genomic & Experimental Medicine, Institute of Genetics and Cancer, University of Edinburgh, Edinburgh, UK
- Cancer Research UK Edinburgh Centre, Institute of Genetics and Cancer, University of Edinburgh, Edinburgh, UK
| |
Collapse
|
24
|
Huang J, Shlobin NA, DeCuypere M, Lam SK. Deep Learning for Outcome Prediction in Neurosurgery: A Systematic Review of Design, Reporting, and Reproducibility. Neurosurgery 2022; 90:16-38. [PMID: 34982868 DOI: 10.1227/neu.0000000000001736] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Accepted: 08/18/2021] [Indexed: 02/06/2023] Open
Abstract
Deep learning (DL) is a powerful machine learning technique that has increasingly been used to predict surgical outcomes. However, the large quantity of data required and the lack of model interpretability represent substantial barriers to the validity and reproducibility of DL models. The objective of this study was to systematically review the characteristics of DL studies involving neurosurgical outcome prediction and to assess their bias and reporting quality. A literature search using the PubMed, Scopus, and Embase databases identified 1949 records, of which 35 studies were included. Of these, 32 (91%) developed and validated a DL model while 3 (9%) validated a pre-existing model. The most commonly represented subspecialty areas were oncology (16 of 35, 46%), spine (8 of 35, 23%), and vascular (6 of 35, 17%). Risk of bias was low in 18 studies (51%), unclear in 5 (14%), and high in 12 (34%), most commonly because of data quality deficiencies. Adherence to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) reporting standards was low, with a median of 12 TRIPOD items (39%) not reported per study. Model transparency was severely limited because code was provided in only 3 studies (9%) and final models in 2 (6%). With the exception of public databases, no study data sets were readily available. No studies described DL models as ready for clinical use. The use of DL for neurosurgical outcome prediction remains nascent. Lack of appropriate data sets poses a major concern for bias. Although studies have demonstrated promising results, greater transparency in model development and reporting is needed to facilitate reproducibility and validation.
Collapse
Affiliation(s)
- Jonathan Huang
- Ann and Robert H. Lurie Children's Hospital, Division of Pediatric Neurosurgery, Department of Neurological Surgery, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
Collapse
|
25
|
Tran KA, Kondrashova O, Bradley A, Williams ED, Pearson JV, Waddell N. Deep learning in cancer diagnosis, prognosis and treatment selection. Genome Med 2021; 13:152. [PMID: 34579788 PMCID: PMC8477474 DOI: 10.1186/s13073-021-00968-x] [Citation(s) in RCA: 363] [Impact Index Per Article: 90.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2020] [Accepted: 09/12/2021] [Indexed: 12/13/2022] Open
Abstract
Deep learning is a subdiscipline of artificial intelligence that uses a machine learning technique called artificial neural networks to extract patterns and make predictions from large data sets. The increasing adoption of deep learning across healthcare domains together with the availability of highly characterised cancer datasets has accelerated research into the utility of deep learning in the analysis of the complex biology of cancer. While early results are promising, this is a rapidly evolving field with new knowledge emerging in both cancer biology and deep learning. In this review, we provide an overview of emerging deep learning techniques and how they are being applied to oncology. We focus on the deep learning applications for omics data types, including genomic, methylation and transcriptomic data, as well as histopathology-based genomic inference, and provide perspectives on how the different data types can be integrated to develop decision support tools. We provide specific examples of how deep learning may be applied in cancer diagnosis, prognosis and treatment management. We also assess the current limitations and challenges for the application of deep learning in precision oncology, including the lack of phenotypically rich data and the need for more explainable deep learning models. Finally, we conclude with a discussion of how current obstacles can be overcome to enable future clinical utilisation of deep learning.
Collapse
Affiliation(s)
- Khoa A. Tran
- Department of Genetics and Computational Biology, QIMR Berghofer Medical Research Institute, Brisbane, 4006 Australia
- School of Biomedical Sciences, Faculty of Health, Queensland University of Technology (QUT), Brisbane, 4059 Australia
| | - Olga Kondrashova
- Department of Genetics and Computational Biology, QIMR Berghofer Medical Research Institute, Brisbane, 4006 Australia
| | - Andrew Bradley
- Faculty of Engineering, Queensland University of Technology (QUT), Brisbane, 4000 Australia
| | - Elizabeth D. Williams
- School of Biomedical Sciences, Faculty of Health, Queensland University of Technology (QUT), Brisbane, 4059 Australia
- Australian Prostate Cancer Research Centre - Queensland (APCRC-Q) and Queensland Bladder Cancer Initiative (QBCI), Brisbane, 4102 Australia
| | - John V. Pearson
- Department of Genetics and Computational Biology, QIMR Berghofer Medical Research Institute, Brisbane, 4006 Australia
| | - Nicola Waddell
- Department of Genetics and Computational Biology, QIMR Berghofer Medical Research Institute, Brisbane, 4006 Australia
| |
Collapse
|
26
|
Zadeh Shirazi A, McDonnell MD, Fornaciari E, Bagherian NS, Scheer KG, Samuel MS, Yaghoobi M, Ormsby RJ, Poonnoose S, Tumes DJ, Gomez GA. A deep convolutional neural network for segmentation of whole-slide pathology images identifies novel tumour cell-perivascular niche interactions that are associated with poor survival in glioblastoma. Br J Cancer 2021; 125:337-350. [PMID: 33927352 PMCID: PMC8329064 DOI: 10.1038/s41416-021-01394-x] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Revised: 03/16/2021] [Accepted: 04/08/2021] [Indexed: 02/01/2023] Open
Abstract
BACKGROUND Glioblastoma is the most aggressive type of brain cancer, with high levels of intra- and inter-tumour heterogeneity that contribute to its rapid growth and invasion within the brain. However, a spatial characterisation of gene signatures and of the cell types expressing them in different tumour locations is still lacking. METHODS We used a deep convolutional neural network (DCNN) as a semantic segmentation model to segment seven different tumour regions, including leading edge (LE), infiltrating tumour (IT), cellular tumour (CT), cellular tumour microvascular proliferation (CTmvp), cellular tumour pseudopalisading region around necrosis (CTpan), cellular tumour perinecrotic zones (CTpnz) and cellular tumour necrosis (CTne), in digitised glioblastoma histopathological slides from The Cancer Genome Atlas (TCGA). Correlation analysis between segmentation results from tumour images and matched RNA expression data was performed to identify genetic signatures specific to different tumour regions. RESULTS We found that spatially resolved gene signatures were strongly correlated with survival in patients with defined genetic mutations. Further in silico cell ontology analysis, along with single-cell RNA sequencing data from resected glioblastoma tissue samples, showed that these tumour regions had different gene signatures whose expression was driven by different cell types in the regional tumour microenvironment. Our results further pointed to a key role for interactions between microglia/pericytes/monocytes and tumour cells in the IT and CTmvp regions, which may contribute to poor patient survival. CONCLUSIONS This work identified key histopathological features that correlate with patient survival and detected spatially associated genetic signatures that contribute to tumour-stroma interactions and should be investigated as new targets in glioblastoma. The source code and datasets used are available on GitHub: https://github.com/amin20/GBM_WSSM.
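A simplified sketch of the downstream correlation step described in the methods, assuming per-patient fractions of the segmented regions and matched RNA expression are available as tables; the file names and cut-offs are placeholder assumptions rather than the published pipeline (the authors' actual code is in the linked GitHub repository).

```python
# Hedged sketch: correlate per-slide fractions of DCNN-segmented tumour regions
# (e.g. LE, IT, CT, CTmvp, CTpan, CTpnz, CTne) with matched gene expression to
# flag region-associated genes. File names and thresholds are placeholders.
import pandas as pd
from scipy.stats import spearmanr

# region_fractions: rows = patients, columns = fraction of each segmented region.
# expression: rows = patients, columns = genes (matched RNA expression).
region_fractions = pd.read_csv("region_fractions.csv", index_col=0)
expression = pd.read_csv("rna_expression.csv", index_col=0)

hits = []
for region in region_fractions.columns:
    for gene in expression.columns:
        rho, p = spearmanr(region_fractions[region], expression[gene])
        if p < 0.05 and abs(rho) > 0.4:          # illustrative cut-offs
            hits.append((region, gene, rho, p))

signatures = pd.DataFrame(hits, columns=["region", "gene", "rho", "p_value"])
print(signatures.sort_values("p_value").head())
```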
Collapse
Affiliation(s)
- Amin Zadeh Shirazi
- Centre for Cancer Biology, SA Pathology and University of South Australia, Adelaide, SA, Australia
- Computational Learning Systems Laboratory, UniSA STEM, University of South Australia, Mawson Lakes, SA, Australia
| | - Mark D McDonnell
- Computational Learning Systems Laboratory, UniSA STEM, University of South Australia, Mawson Lakes, SA, Australia
| | - Eric Fornaciari
- Department of Mathematics of Computation, University of California, Los Angeles (UCLA), CA, USA
| | | | - Kaitlin G Scheer
- Centre for Cancer Biology, SA Pathology and University of South Australia, Adelaide, SA, Australia
| | - Michael S Samuel
- Centre for Cancer Biology, SA Pathology and University of South Australia, Adelaide, SA, Australia
- Adelaide Medical School, University of Adelaide, Adelaide, SA, Australia
| | - Mahdi Yaghoobi
- Electrical and Computer Engineering Department, Department of Artificial Intelligence, Islamic Azad University, Mashhad Branch, Mashhad, Iran
| | - Rebecca J Ormsby
- Flinders Health and Medical Research Institute, College of Medicine & Public Health, Flinders University, Adelaide, SA, Australia
| | - Santosh Poonnoose
- Flinders Health and Medical Research Institute, College of Medicine & Public Health, Flinders University, Adelaide, SA, Australia
- Department of Neurosurgery, Flinders Medical Centre, Bedford Park, SA, Australia
| | - Damon J Tumes
- Centre for Cancer Biology, SA Pathology and University of South Australia, Adelaide, SA, Australia
| | - Guillermo A Gomez
- Centre for Cancer Biology, SA Pathology and University of South Australia, Adelaide, SA, Australia.
| |
Collapse
|
27
|
Li H, Zhao Q, Zhang Y, Sai K, Xu L, Mou Y, Xie Y, Ren J, Jiang X. Image-driven classification of functioning and nonfunctioning pituitary adenoma by deep convolutional neural networks. Comput Struct Biotechnol J 2021; 19:3077-3086. [PMID: 34136106 PMCID: PMC8178077 DOI: 10.1016/j.csbj.2021.05.023] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2020] [Revised: 05/05/2021] [Accepted: 05/13/2021] [Indexed: 11/28/2022] Open
Abstract
The secreting function of pituitary adenomas (PAs) plays a critical role in determining treatment strategies. However, Magnetic Resonance Imaging (MRI) analysis of pituitary adenomas is labor intensive and highly variable among radiologists. In this work, by applying convolutional neural networks (CNNs), we built a segmentation and classification model to help distinguish functioning pituitary adenomas from non-functioning subtypes using 3D MRI images from 185 patients with PAs (two centers). Specifically, the classification model adopts the concept of transfer learning and uses the pre-trained segmentation model to extract deep features from conventional MRI images. As a result, both the segmentation and classification models obtained high performance in two internal validation datasets and an external testing dataset (segmentation model: Dice score = 0.8188, 0.8091 and 0.8093, respectively; classification model: AUROC = 0.8063, 0.7881 and 0.8478, respectively). In addition, the classification model incorporates an attention mechanism for better model interpretability. Taken together, this work provides the first deep learning-based tumor region segmentation and classification models for PAs, enabling early diagnosis and subtyping of PAs from MRI images.
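For reference, the Dice score reported for the segmentation model can be computed as twice the overlap between the predicted and reference masks divided by their total volume; the sketch below is a generic implementation, not the authors' code.

```python
# Hedged sketch of the Dice score used to evaluate segmentation masks.
# The smoothing constant and toy volumes are illustrative choices.
import numpy as np

def dice_score(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2 * |A & B| / (|A| + |B|) for binary 3D masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Toy check on 3D volumes shaped like an MRI segmentation output.
pred = np.zeros((8, 64, 64), dtype=bool); pred[2:6, 10:40, 10:40] = True
true = np.zeros((8, 64, 64), dtype=bool); true[3:6, 12:40, 12:40] = True
print(f"Dice = {dice_score(pred, true):.4f}")
```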
Collapse
Affiliation(s)
- Hongyu Li
- State Key Laboratory of Oncology in South China, Cancer Center, Collaborative Innovation Center for Cancer Medicine, School of Life Science, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- School of Data and Computer Science, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
| | - Qi Zhao
- State Key Laboratory of Oncology in South China, Cancer Center, Collaborative Innovation Center for Cancer Medicine, School of Life Science, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
| | - Yihua Zhang
- The Department of Neurosurgery, Daping Hospital, Army Medical University, Chongqing 400042, China
| | - Ke Sai
- Department of Neurosurgery/Neuro-oncology, Sun Yat-sen University Cancer Center. State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
| | - Lunshan Xu
- The Department of Neurosurgery, Daping Hospital, Army Medical University, Chongqing 400042, China
| | - Yonggao Mou
- Department of Neurosurgery/Neuro-oncology, Sun Yat-sen University Cancer Center. State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
| | - Yubin Xie
- State Key Laboratory of Oncology in South China, Cancer Center, Collaborative Innovation Center for Cancer Medicine, School of Life Science, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
| | - Jian Ren
- State Key Laboratory of Oncology in South China, Cancer Center, Collaborative Innovation Center for Cancer Medicine, School of Life Science, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
| | - Xiaobing Jiang
- State Key Laboratory of Oncology in South China, Cancer Center, Collaborative Innovation Center for Cancer Medicine, School of Life Science, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Department of Neurosurgery/Neuro-oncology, Sun Yat-sen University Cancer Center. State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
- Jiangmen Central Hospital, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
| |
Collapse
|
28
|
Banegas-Luna AJ, Peña-García J, Iftene A, Guadagni F, Ferroni P, Scarpato N, Zanzotto FM, Bueno-Crespo A, Pérez-Sánchez H. Towards the Interpretability of Machine Learning Predictions for Medical Applications Targeting Personalised Therapies: A Cancer Case Survey. Int J Mol Sci 2021; 22:4394. [PMID: 33922356 PMCID: PMC8122817 DOI: 10.3390/ijms22094394] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Revised: 04/16/2021] [Accepted: 04/20/2021] [Indexed: 12/18/2022] Open
Abstract
Artificial Intelligence is providing astonishing results, with medicine being one of its favourite playgrounds. Machine Learning and, in particular, Deep Neural Networks are behind this revolution. Among the most challenging targets of interest in medicine are cancer diagnosis and therapies, but to start this revolution, software tools need to be adapted to cover the new requirements. In this sense, learning tools are becoming a commodity, but to assist doctors on a daily basis it is essential to fully understand how models can be interpreted. In this survey, we analyse current machine learning models and other in-silico tools as applied to medicine, specifically to cancer research, and we discuss their interpretability, performance and the input data they are fed with. Artificial neural networks (ANN), logistic regression (LR) and support vector machines (SVM) have been observed to be the preferred models. In addition, convolutional neural networks (CNNs), supported by the rapid development of graphics processing units (GPUs) and high-performance computing (HPC) infrastructures, are gaining importance when image processing is feasible. However, the interpretability of machine learning predictions, which would allow doctors to understand and trust them and to gain useful insights for clinical practice, is still rarely considered; improving it is necessary to enhance doctors' predictive capacity and to achieve individualised therapies in the near future.
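As one example of the model-agnostic interpretability checks the survey argues are under-used, the sketch below computes permutation importance for a fitted logistic regression; the synthetic data merely stand in for a tabular cancer dataset.

```python
# Hedged sketch: permutation importance on a fitted classifier as a simple,
# model-agnostic interpretability check. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
result = permutation_importance(clf, X_test, y_test, n_repeats=20, random_state=0)

# Rank features by how much shuffling each one degrades test accuracy.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: importance = {result.importances_mean[i]:.3f} "
          f"± {result.importances_std[i]:.3f}")
```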
Collapse
Affiliation(s)
- Antonio Jesús Banegas-Luna
- Structural Bioinformatics and High-Performance Computing Research Group (BIO-HPC), Universidad Católica de Murcia (UCAM), 30107 Murcia, Spain; (J.P.-G.); (A.B.-C.)
| | - Jorge Peña-García
- Structural Bioinformatics and High-Performance Computing Research Group (BIO-HPC), Universidad Católica de Murcia (UCAM), 30107 Murcia, Spain; (J.P.-G.); (A.B.-C.)
| | - Adrian Iftene
- Faculty of Computer Science, Universitatea Alexandru Ioan Cuza (UAIC), 700505 Iași, Romania;
| | - Fiorella Guadagni
- Interinstitutional Multidisciplinary Biobank (BioBIM), IRCCS San Raffaele Roma, 00166 Rome, Italy; (F.G.); (P.F.)
- Department of Human Sciences and Promotion of the Quality of Life, San Raffaele Roma Open University, 00166 Rome, Italy;
| | - Patrizia Ferroni
- Interinstitutional Multidisciplinary Biobank (BioBIM), IRCCS San Raffaele Roma, 00166 Rome, Italy; (F.G.); (P.F.)
- Department of Human Sciences and Promotion of the Quality of Life, San Raffaele Roma Open University, 00166 Rome, Italy;
| | - Noemi Scarpato
- Department of Human Sciences and Promotion of the Quality of Life, San Raffaele Roma Open University, 00166 Rome, Italy;
| | - Fabio Massimo Zanzotto
- Dipartimento di Ingegneria dell’Impresa “Mario Lucertini”, University of Rome Tor Vergata, 00133 Rome, Italy;
| | - Andrés Bueno-Crespo
- Structural Bioinformatics and High-Performance Computing Research Group (BIO-HPC), Universidad Católica de Murcia (UCAM), 30107 Murcia, Spain; (J.P.-G.); (A.B.-C.)
| | - Horacio Pérez-Sánchez
- Structural Bioinformatics and High-Performance Computing Research Group (BIO-HPC), Universidad Católica de Murcia (UCAM), 30107 Murcia, Spain; (J.P.-G.); (A.B.-C.)
| |
Collapse
|
29
|
Echle A, Rindtorff NT, Brinker TJ, Luedde T, Pearson AT, Kather JN. Deep learning in cancer pathology: a new generation of clinical biomarkers. Br J Cancer 2021; 124:686-696. [PMID: 33204028 PMCID: PMC7884739 DOI: 10.1038/s41416-020-01122-x] [Citation(s) in RCA: 295] [Impact Index Per Article: 73.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2020] [Revised: 09/06/2020] [Accepted: 09/30/2020] [Indexed: 12/14/2022] Open
Abstract
Clinical workflows in oncology rely on predictive and prognostic molecular biomarkers. However, the growing number of these complex biomarkers tends to increase the cost and time for decision-making in routine daily oncology practice; furthermore, biomarkers often require tumour tissue on top of routine diagnostic material. Nevertheless, routinely available tumour tissue contains an abundance of clinically relevant information that is currently not fully exploited. Advances in deep learning (DL), an artificial intelligence (AI) technology, have enabled the extraction of previously hidden information directly from routine histology images of cancer, providing potentially clinically useful information. Here, we outline emerging concepts of how DL can extract biomarkers directly from histology images and summarise studies of basic and advanced image analysis for cancer histology. Basic image analysis tasks include detection, grading and subtyping of tumour tissue in histology images; they are aimed at automating pathology workflows and consequently do not immediately translate into clinical decisions. Exceeding such basic approaches, DL has also been used for advanced image analysis tasks, which have the potential of directly affecting clinical decision-making processes. These advanced approaches include inference of molecular features, prediction of survival and end-to-end prediction of therapy response. Predictions made by such DL systems could simplify and enrich clinical decision-making, but require rigorous external validation in clinical settings.
Collapse
Affiliation(s)
- Amelie Echle
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
| | | | - Titus Josef Brinker
- National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Tom Luedde
- Department of Gastroenterology, Hepatology and Infectious Diseases, University Hospital Duesseldorf, Düsseldorf, Germany
| | - Alexander Thomas Pearson
- Section of Hematology/Oncology, Department of Medicine, The University of Chicago, Chicago, IL, USA
| | - Jakob Nikolas Kather
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany.
- German Cancer Research Center (DKFZ), Heidelberg, Germany.
| |
Collapse
|
30
|
Zadeh Shirazi A, Fornaciari E, McDonnell MD, Yaghoobi M, Cevallos Y, Tello-Oquendo L, Inca D, Gomez GA. The Application of Deep Convolutional Neural Networks to Brain Cancer Images: A Survey. J Pers Med 2020; 10:E224. [PMID: 33198332 PMCID: PMC7711876 DOI: 10.3390/jpm10040224] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 11/10/2020] [Accepted: 11/10/2020] [Indexed: 12/15/2022] Open
Abstract
In recent years, improved deep learning techniques have been applied to biomedical image processing for the classification and segmentation of different tumors based on magnetic resonance imaging (MRI) and histopathological (H&E) imaging. Deep Convolutional Neural Network (DCNN) architectures include tens to hundreds of processing layers that can extract multiple levels of features from image-based data, which would otherwise be very difficult and time-consuming for experts to recognize and extract when classifying tumors into different types or segmenting tumor images. This article summarizes the latest studies of deep learning techniques applied to three different kinds of brain cancer medical images (histology, magnetic resonance, and computed tomography) and highlights current challenges for the broader applicability of DCNNs in personalized brain cancer care, focusing on two main applications: classification and segmentation of brain cancer tumor images.
Collapse
Affiliation(s)
- Amin Zadeh Shirazi
- Centre for Cancer Biology, SA Pathology and the University of South Australia, Adelaide, SA 5000, Australia;
- Computational Learning Systems Laboratory, UniSA STEM, University of South Australia, Mawson Lakes, SA 5095, Australia;
| | - Eric Fornaciari
- Department of Mathematics of Computation, University of California, Los Angeles (UCLA), Los Angeles, CA 90095, USA;
| | - Mark D. McDonnell
- Computational Learning Systems Laboratory, UniSA STEM, University of South Australia, Mawson Lakes, SA 5095, Australia;
| | - Mahdi Yaghoobi
- Electrical and Computer Engineering Department, Islamic Azad University, Mashhad Branch, Mashhad 917794-8564, Iran;
| | - Yesenia Cevallos
- College of Engineering, Universidad Nacional de Chimborazo, Riobamba 060150, Ecuador; (Y.C.); (L.T.-O.); (D.I.)
| | - Luis Tello-Oquendo
- College of Engineering, Universidad Nacional de Chimborazo, Riobamba 060150, Ecuador; (Y.C.); (L.T.-O.); (D.I.)
| | - Deysi Inca
- College of Engineering, Universidad Nacional de Chimborazo, Riobamba 060150, Ecuador; (Y.C.); (L.T.-O.); (D.I.)
| | - Guillermo A. Gomez
- Centre for Cancer Biology, SA Pathology and the University of South Australia, Adelaide, SA 5000, Australia;
| |
Collapse
|