1. De Carvalho T, Kader R, Brandao P, Lovat LB, Mountney P, Stoyanov D. NICE polyp feature classification for colonoscopy screening. Int J Comput Assist Radiol Surg 2025;20:1015-1024. [PMID: 40075052] [DOI: 10.1007/s11548-025-03338-9]
Abstract
PURPOSE Colorectal cancer is one of the most prevalent cancers worldwide, highlighting the critical need for early and accurate diagnosis to reduce patient risks. Inaccurate diagnoses not only compromise patient outcomes but also lead to increased costs and additional time burdens for clinicians. Enhancing diagnostic accuracy is essential, and this study focuses on improving the accuracy of polyp classification using the NICE classification, which evaluates three key features: colour, vessels, and surface pattern. METHODS A multiclass classifier was developed and trained to independently classify each of the three features in the NICE classification. The approach prioritizes clinically relevant features rather than relying on handcrafted or obscure deep learning features, ensuring transparency and reliability for clinical use. The classifier was trained on internal datasets and tested on both internal and public datasets. RESULTS The classifier successfully classified the three polyp features, achieving an accuracy of over 92% on internal datasets and exceeding 88% on a public dataset. The high classification accuracy demonstrates the system's effectiveness in identifying the key features from the NICE classification. CONCLUSION This study underscores the potential of using an independent classification approach for NICE features to enhance clinical decision-making in colorectal cancer diagnosis. The method shows promise in improving diagnostic accuracy, which could lead to better patient outcomes and more efficient clinical workflows.
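As a minimal sketch of how three independently classified NICE features (colour, vessels, surface pattern) might be combined into a single polyp type: the abstract does not specify a fusion rule, so the majority vote and the per-feature label encoding below are assumptions, not the authors' method.

```python
# Hypothetical combiner (not the paper's implementation): each feature
# classifier is assumed to emit the NICE type (1, 2, or 3) that the
# feature's appearance suggests; the overall call is a majority vote.
from collections import Counter

def combine_nice_features(colour: int, vessels: int, surface: int) -> int:
    """Return the NICE type (1-3) favoured by most feature classifiers.

    Ties are broken toward the higher (more concerning) type so that
    ambiguous polyps are not under-called.
    """
    votes = Counter([colour, vessels, surface])
    best_count = votes.most_common(1)[0][1]
    tied = [label for label, count in votes.items() if count == best_count]
    return max(tied)
```

For example, if the colour and vessel classifiers suggest type 2 while surface pattern suggests type 1, the combiner returns type 2.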
Affiliation(s)
- Thomas De Carvalho
- Odin Vision, London, UK
- Department of Computer Science, UCL Hawkes Institute, University College London, London, UK
- Rawen Kader
- Division of Surgery and Interventional Science, University College London, London, UK
- Gastrointestinal Services, University College London Hospital, London, UK
- Laurence B Lovat
- Division of Surgery and Interventional Science, University College London, London, UK
- Gastrointestinal Services, University College London Hospital, London, UK
- Danail Stoyanov
- Department of Computer Science, UCL Hawkes Institute, University College London, London, UK
2. Frascarelli C, Venetis K, Marra A, Mane E, Ivanova M, Cursano G, Porta FM, Concardi A, Ceol AGM, Farina A, Criscitiello C, Curigliano G, Guerini-Rocco E, Fusco N. Deep learning algorithm on H&E whole slide images to characterize TP53 alterations frequency and spatial distribution in breast cancer. Comput Struct Biotechnol J 2024;23:4252-4259. [PMID: 39678362] [PMCID: PMC11638532] [DOI: 10.1016/j.csbj.2024.11.037]
Abstract
The tumor suppressor TP53 is frequently mutated in hormone receptor-negative, HER2-positive breast cancer (BC), contributing to tumor aggressiveness. Traditional ancillary methods like immunohistochemistry (IHC) to assess TP53 functionality face pre- and post-analytical challenges. This proof-of-concept study employed a deep learning (DL) algorithm to predict TP53 mutational status from H&E-stained whole slide images (WSIs) of BC tissue. Using a pre-trained convolutional neural network, the model identified tumor areas and predicted TP53 mutations with a Dice coefficient score of 0.82. Predictions were validated through IHC and next-generation sequencing (NGS), confirming TP53 aberrant expression in 92% of the tumor area, closely matching IHC findings (90%). The DL model exhibited high accuracy in tissue quantification and TP53 status prediction, outperforming traditional methods in terms of precision and efficiency. DL-based approaches offer significant promise for enhancing biomarker testing and precision oncology by reducing intra- and inter-observer variability, but further validation is required to optimize their integration into real-world clinical workflows. This study underscores the potential of DL algorithms to predict key genetic alterations, such as TP53 mutations, in BC. DL-based histopathological analysis represents a valuable tool for improving patient management and tailoring treatment approaches based on molecular biomarker status.
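The Dice coefficient reported above can be sketched as follows; this is an illustrative reimplementation on flat binary masks, not the study's code.

```python
# Dice = 2|P ∩ T| / (|P| + |T|) for a predicted mask P and reference
# mask T, here given as equal-length sequences of 0/1 pixel labels.

def dice_coefficient(pred, truth):
    assert len(pred) == len(truth)
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:          # both masks empty: define as perfect overlap
        return 1.0
    return 2.0 * intersection / total
```

A prediction that recovers 2 of 3 annotated positive pixels while adding 1 false positive scores 2·2/(3+3) ≈ 0.67.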
Affiliation(s)
- Chiara Frascarelli
- Division of Pathology, European Institute of Oncology IRCCS, Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Antonio Marra
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology IRCCS, Milan, Italy
- Eltjona Mane
- Division of Pathology, European Institute of Oncology IRCCS, Milan, Italy
- Mariia Ivanova
- Division of Pathology, European Institute of Oncology IRCCS, Milan, Italy
- Giulia Cursano
- Division of Pathology, European Institute of Oncology IRCCS, Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Alberto Concardi
- Division of Pathology, European Institute of Oncology IRCCS, Milan, Italy
- Arnaud Gerard Michel Ceol
- Department of Information and Communications Technology, European Institute of Oncology IRCCS, Milan, Italy
- Annarosa Farina
- Department of Information and Communications Technology, European Institute of Oncology IRCCS, Milan, Italy
- Carmen Criscitiello
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology IRCCS, Milan, Italy
- Giuseppe Curigliano
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology IRCCS, Milan, Italy
- Elena Guerini-Rocco
- Division of Pathology, European Institute of Oncology IRCCS, Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Nicola Fusco
- Division of Pathology, European Institute of Oncology IRCCS, Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
3. Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024;15:100357. [PMID: 38420608] [PMCID: PMC10900832] [DOI: 10.1016/j.jpi.2023.100357]
Abstract
Computational Pathology (CPath) is an interdisciplinary science that applies computational approaches to analyzing and modeling medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows for digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question about the directions and trends being pursued in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges of such a multidisciplinary science. We examine this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical developments and clinical integration of CPath. For updated information on this survey and access to the original model-card repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S. Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N. Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
4. Pozzi M, Noei S, Robbi E, Cima L, Moroni M, Munari E, Torresani E, Jurman G. Generating and evaluating synthetic data in digital pathology through diffusion models. Sci Rep 2024;14:28435. [PMID: 39557989] [PMCID: PMC11574254] [DOI: 10.1038/s41598-024-79602-w]
Abstract
Synthetic data is becoming a valuable tool for computational pathologists, aiding in tasks like data augmentation and addressing data scarcity and privacy. However, its use necessitates careful planning and evaluation to prevent the creation of clinically irrelevant artifacts. This manuscript introduces a comprehensive pipeline for generating and evaluating synthetic pathology data using a diffusion model. The pipeline features a multifaceted evaluation strategy with an integrated explainability procedure, addressing two key aspects of synthetic data use in the medical domain. The evaluation of the generated data employs an ensemble-like approach. The first step includes assessing the similarity between real and synthetic data using established metrics. The second step involves evaluating the usability of the generated images in deep learning models accompanied by explainable AI methods. The final step entails verifying their histopathological realism through questionnaires answered by professional pathologists. We show that each of these evaluation steps is necessary, as they provide complementary information on the generated data's quality. The pipeline is demonstrated on the public GTEx dataset of 650 Whole Slide Images (WSIs), including five different tissues. An equal number of tiles from each tissue are generated, and their reliability is assessed using the proposed evaluation pipeline, yielding promising results. In summary, the proposed workflow offers a comprehensive solution for generative AI in digital pathology, potentially aiding the community in its transition towards digitalization and data-driven modeling.
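The first evaluation step, scoring similarity between real and synthetic data, could look like the sketch below. The abstract does not name the metrics actually used, so histogram intersection over 8-bit grayscale pixel values stands in here as an assumed, deliberately simple choice (1.0 means identical intensity distributions).

```python
# Illustrative real-vs-synthetic similarity check (assumed metric, not
# the paper's): compare normalised intensity histograms of two images
# given as flat lists of 8-bit pixel values.

def histogram(pixels, bins=16):
    """Normalised histogram of 8-bit pixel values."""
    counts = [0] * bins
    for v in pixels:
        counts[min(v * bins // 256, bins - 1)] += 1
    n = len(pixels)
    return [c / n for c in counts]

def histogram_intersection(real_pixels, synth_pixels, bins=16):
    """Sum of bin-wise minima; 1.0 = identical distributions, 0.0 = disjoint."""
    h_real = histogram(real_pixels, bins)
    h_synth = histogram(synth_pixels, bins)
    return sum(min(a, b) for a, b in zip(h_real, h_synth))
```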
Affiliation(s)
- Matteo Pozzi
- Data Science for Health Unit, Fondazione Bruno Kessler, Via Sommarive 18, Povo, Trento, 38123, Italy
- Department for Computational and Integrative Biology, Università degli Studi di Trento, Via Sommarive 9, Povo, Trento, 38123, Italy
- Shahryar Noei
- Data Science for Health Unit, Fondazione Bruno Kessler, Via Sommarive 18, Povo, Trento, 38123, Italy
- Erich Robbi
- Data Science for Health Unit, Fondazione Bruno Kessler, Via Sommarive 18, Povo, Trento, 38123, Italy
- Department of Information Engineering and Computer Science, Università degli Studi di Trento, Via Sommarive 9, Povo, Trento, 38123, Italy
- Luca Cima
- Department of Diagnostic and Public Health, Section of Pathology, University and Hospital Trust of Verona, Verona, Italy
- Monica Moroni
- Data Science for Health Unit, Fondazione Bruno Kessler, Via Sommarive 18, Povo, Trento, 38123, Italy
- Enrico Munari
- Department of Diagnostic and Public Health, Section of Pathology, University and Hospital Trust of Verona, Verona, Italy
- Evelin Torresani
- Pathology Unit, Department of Laboratory Medicine, Santa Chiara Hospital, APSS, Trento, Italy
- Giuseppe Jurman
- Data Science for Health Unit, Fondazione Bruno Kessler, Via Sommarive 18, Povo, Trento, 38123, Italy
5. Vanitha K, Mahesh TR, Sathea Sree S, Guluwadi S. Deep learning ensemble approach with explainable AI for lung and colon cancer classification using advanced hyperparameter tuning. BMC Med Inform Decis Mak 2024;24:222. [PMID: 39112991] [PMCID: PMC11304580] [DOI: 10.1186/s12911-024-02628-7]
Abstract
Lung and colon cancers are leading contributors to cancer-related fatalities globally, distinguished by unique histopathological traits discernible through medical imaging, and their effective classification is critical for accurate diagnosis and treatment. Recognizing the limitations of existing diagnostic methods, which often suffer from overfitting and poor generalizability, our research introduces a novel deep learning framework that synergistically combines the Xception and MobileNet architectures. This ensemble model aims to enhance feature extraction, improve model robustness, and reduce overfitting. Our methodology involves training the hybrid model on a comprehensive dataset of histopathological images, followed by validation against a balanced test set. The results demonstrate an impressive classification accuracy of 99.44%, with perfect precision and recall in identifying certain cancerous and non-cancerous tissues, marking a significant improvement over traditional approaches. The practical implications of these findings are profound. By integrating Gradient-weighted Class Activation Mapping (Grad-CAM), the model offers enhanced interpretability, allowing clinicians to visualize the diagnostic reasoning process. This transparency is vital for clinical acceptance and enables more personalized, accurate treatment planning. Our study not only pushes the boundaries of medical imaging technology but also sets the stage for future research aimed at expanding these techniques to other types of cancer diagnostics.
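The abstract does not detail how the Xception and MobileNet outputs are fused; a standard ensembling choice is to average the two models' softmax probabilities, sketched here under that assumption for a single image's raw logits.

```python
# Probability-level ensembling sketch (assumed fusion rule, not
# necessarily the paper's): average per-class softmax probabilities
# from two models and take the argmax class.
import math

def softmax(logits):
    m = max(logits)                      # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ensemble_predict(logits_a, logits_b):
    """Return (argmax class, averaged class probabilities)."""
    pa, pb = softmax(logits_a), softmax(logits_b)
    avg = [(a + b) / 2 for a, b in zip(pa, pb)]
    return max(range(len(avg)), key=avg.__getitem__), avg
```

Averaging probabilities (rather than logits) keeps each model's contribution on the same scale regardless of how confident its raw scores are.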
Affiliation(s)
- K Vanitha
- Department of Computer Science and Engineering, Faculty of Engineering, Karpagam Academy of Higher Education (Deemed to Be University), Coimbatore, India
- Mahesh T R
- Department of Computer Science and Engineering, JAIN (Deemed-to-Be University), Bengaluru, 562112, India
- S Sathea Sree
- Department of Computer Science and Engineering, School of Engineering, Vels Institute of Science, Technology & Advanced Studies (VISTAS), Chennai, India
- Suresh Guluwadi
- Adama Science and Technology University, Adama, 302120, Ethiopia
6. Xu J, Jiang W, Wu J, Zhang W, Zhu Z, Xin J, Zheng N, Wang B. Hepatic and portal vein segmentation with dual-stream deep neural network. Med Phys 2024;51:5441-5456. [PMID: 38648676] [DOI: 10.1002/mp.17090]
Abstract
BACKGROUND Liver lesions mainly occur inside the liver parenchyma, where they are difficult to locate and have complicated relationships with essential vessels. Thus, preoperative planning is crucial for the resection of liver lesions, and accurate segmentation of the hepatic and portal veins (PVs) on computed tomography (CT) images is of great importance for it. However, manually labeling vessel masks is laborious and time-consuming, and the labeling results of different clinicians are prone to inconsistencies. Hence, developing an automatic segmentation algorithm for hepatic veins and PVs on CT images has attracted the attention of researchers. Unfortunately, existing deep learning based automatic segmentation methods are prone to misclassifying peripheral vessels into the wrong categories. PURPOSE This study aims to provide a fully automatic and robust semantic segmentation algorithm for hepatic and portal veins, guiding subsequent preoperative planning. In addition, to address the deficiency of public datasets for hepatic and PV segmentation, we revise the annotations of the Medical Segmentation Decathlon (MSD) hepatic vessel segmentation dataset and add the masks of the hepatic veins (HVs) and PVs. METHODS We propose a structure with a dual-stream encoder combining convolution and Transformer blocks, named the Dual-stream Hepatic Portal Vein segmentation Network, to extract local features and long-distance spatial information, thereby capturing the anatomical information of the hepatic and portal veins and avoiding misclassification of adjacent peripheral vessels. In addition, a multi-scale feature fusion block based on dilated convolution is proposed to extract multi-scale features over expanded perception fields, and a multi-level fusing attention module is introduced for efficient context information extraction. A paired t-test is conducted to evaluate the significance of the Dice differences between the proposed method and the comparison methods.
RESULTS Two datasets are constructed from the original MSD dataset. For each dataset, 50 cases are randomly selected for model evaluation in a 5-fold cross-validation scheme. The results show that our method outperforms the state-of-the-art convolutional neural network-based and Transformer-based methods. Specifically, on the first dataset, our model reaches 0.815, 0.830, and 0.807 in overall Dice, precision, and sensitivity. The Dice scores of the hepatic and portal veins are 0.835 and 0.796, which also exceed the numeric results of the comparison methods. Almost all the p-values of paired t-tests between the proposed approach and the comparison approaches are smaller than 0.05. On the second dataset, the proposed algorithm achieves 0.749, 0.762, 0.726, 0.835, and 0.796 for overall Dice, precision, sensitivity, Dice for HV, and Dice for PV, among which the first four numeric results exceed those of the comparison methods. CONCLUSIONS The proposed method is effective in solving the problem of misclassifying interlaced peripheral veins in the HV and PV segmentation task and outperforms the comparison methods on the relabeled dataset.
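The overlap metrics reported above (Dice, precision, sensitivity) can be computed from paired binary masks as in this illustrative sketch; it is not the paper's evaluation code.

```python
# Per-class segmentation metrics from a predicted binary mask and a
# reference binary mask, both given as flat 0/1 sequences.

def segmentation_metrics(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))      # false positives
    fn = sum(t and not p for p, t in zip(pred, truth))      # false negatives
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    return dice, precision, sensitivity
```

In a multi-class setting like HV vs. PV, these are computed once per vessel class (treating that class as foreground) and then averaged for the overall figures.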
Affiliation(s)
- Jichen Xu
- National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Wei Jiang
- Research Center of Artificial Intelligence of Shangluo, Shangluo University, Shangluo, China
- Jiayi Wu
- National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Wei Zhang
- Beijing Jingzhen Medical Technology Ltd., Beijing, China
- Xi'an Zhizhenzhineng Technology Ltd., Xi'an, China
- School of Telecommunications Engineering, Xidian University, Xi'an, China
- Zhenyu Zhu
- Hepatobiliary Surgery Center, The Fifth Medical Center of PLA General Hospital, Beijing, China
- Jingmin Xin
- National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Nanning Zheng
- National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Bo Wang
- Beijing Jingzhen Medical Technology Ltd., Beijing, China
- Xi'an Zhizhenzhineng Technology Ltd., Xi'an, China
- Institute of Medical Equipment Science and Engineering, Huazhong University of Science and Technology, Wuhan, China
7. Parvaiz A, Nasir ES, Fraz MM. From Pixels to Prognosis: A Survey on AI-Driven Cancer Patient Survival Prediction Using Digital Histology Images. J Imaging Inform Med 2024;37:1728-1751. [PMID: 38429563] [PMCID: PMC11300721] [DOI: 10.1007/s10278-024-01049-2]
Abstract
Survival analysis is an integral part of medical statistics that is extensively utilized to establish prognostic indices for mortality or disease recurrence, assess treatment efficacy, and tailor effective treatment plans. The identification of prognostic biomarkers capable of predicting patient survival is a primary objective in the field of cancer research. With the recent integration of digital histology images into routine clinical practice, a plethora of Artificial Intelligence (AI)-based methods for digital pathology has emerged in scholarly literature, facilitating patient survival prediction. These methods have demonstrated remarkable proficiency in analyzing and interpreting whole slide images, yielding results comparable to those of expert pathologists. The complexity of AI-driven techniques is magnified by the distinctive characteristics of digital histology images, including their gigapixel size and diverse tissue appearances. Consequently, advanced patch-based methods are employed to effectively extract features that correlate with patient survival. These computational methods significantly enhance survival prediction accuracy and augment prognostic capabilities in cancer patients. The review discusses the methodologies employed in the literature, their performance metrics, ongoing challenges, and potential solutions for future advancements. This paper explains survival analysis and feature extraction methods for analyzing cancer patients. It also compiles essential acronyms related to cancer precision medicine. Furthermore, it is noteworthy that this is the inaugural review paper in the field. The target audience for this interdisciplinary review comprises AI practitioners, medical statisticians, and progressive oncologists who are enthusiastic about translating AI-driven solutions into clinical practice. We expect this comprehensive review article to guide future research directions in the field of cancer research.
Affiliation(s)
- Arshi Parvaiz
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Esha Sadia Nasir
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
8. Paul S, Yener B, Lund AW. C2P-GCN: Cell-to-Patch Graph Convolutional Network for Colorectal Cancer Grading. Annu Int Conf IEEE Eng Med Biol Soc 2024;2024:1-4. [PMID: 40039760] [DOI: 10.1109/embc53108.2024.10782435]
Abstract
Graph-based learning approaches, due to their ability to encode tissue/organ structure information, are increasingly favored for grading colorectal cancer histology images. Recent graph-based techniques involve dividing whole slide images (WSIs) into smaller or medium-sized patches, then building graphs on each patch for direct use in training. This method, however, fails to capture the tissue structure information present in an entire WSI and relies on training with a significantly large dataset of image patches. In this paper, we propose a novel cell-to-patch graph convolutional network (C2P-GCN), a two-stage graph-formation-based approach. In the first stage, it forms a patch-level graph based on the cell organization in each patch of a WSI. In the second stage, it forms an image-level graph based on a similarity measure between patches of a WSI, considering each patch as a node of the graph. This graph representation is then fed into a multi-layer GCN-based classification network. Our approach, through its dual-phase graph construction, effectively gathers local structural details from individual patches and establishes meaningful connections among all patches across a WSI. As C2P-GCN integrates the structural data of an entire WSI into a single graph, it allows our model to work with significantly less training data than the latest models for colorectal cancer. Experimental validation of C2P-GCN on two distinct colorectal cancer datasets demonstrates the effectiveness of our method.
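A minimal sketch of the second-stage, image-level graph described above, assuming cosine similarity between per-patch feature vectors and a hypothetical threshold; the paper's actual similarity measure and cutoff are not specified in the abstract.

```python
# Image-level graph sketch: each patch of a WSI is a node, and two
# patches are connected when their feature vectors are similar enough.
# Cosine similarity and the 0.9 threshold are illustrative assumptions.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def patch_graph(patch_features, threshold=0.9):
    """Return the edge list (i, j), i < j, over patches whose cosine
    similarity meets the threshold."""
    n = len(patch_features)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if cosine(patch_features[i], patch_features[j]) >= threshold]
```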
9. Lotter W, Hassett MJ, Schultz N, Kehl KL, Van Allen EM, Cerami E. Artificial Intelligence in Oncology: Current Landscape, Challenges, and Future Directions. Cancer Discov 2024;14:711-726. [PMID: 38597966] [PMCID: PMC11131133] [DOI: 10.1158/2159-8290.cd-23-1199]
Abstract
Artificial intelligence (AI) in oncology is advancing beyond algorithm development to integration into clinical practice. This review describes the current state of the field, with a specific focus on clinical integration. AI applications are structured according to cancer type and clinical domain, focusing on the four most common cancers and tasks of detection, diagnosis, and treatment. These applications encompass various data modalities, including imaging, genomics, and medical records. We conclude with a summary of existing challenges, evolving solutions, and potential future directions for the field. SIGNIFICANCE AI is increasingly being applied to all aspects of oncology, where several applications are maturing beyond research and development to direct clinical integration. This review summarizes the current state of the field through the lens of clinical translation along the clinical care continuum. Emerging areas are also highlighted, along with common challenges, evolving solutions, and potential future directions for the field.
Affiliation(s)
- William Lotter
- Department of Data Science, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Michael J. Hassett
- Harvard Medical School, Boston, MA, USA
- Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Nikolaus Schultz
- Marie-Josée and Henry R. Kravis Center for Molecular Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Kenneth L. Kehl
- Harvard Medical School, Boston, MA, USA
- Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Eliezer M. Van Allen
- Harvard Medical School, Boston, MA, USA
- Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Cancer Program, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Ethan Cerami
- Department of Data Science, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
| |
Collapse
|
10
|
Zamanitajeddin N, Jahanifar M, Bilal M, Eastwood M, Rajpoot N. Social network analysis of cell networks improves deep learning for prediction of molecular pathways and key mutations in colorectal cancer. Med Image Anal 2024; 93:103071. [PMID: 38199068 DOI: 10.1016/j.media.2023.103071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/14/2023] [Revised: 11/14/2023] [Accepted: 12/19/2023] [Indexed: 01/12/2024]
Abstract
Colorectal cancer (CRC) is a major global health concern, and identifying the molecular pathways, genetic subtypes, and mutations associated with CRC is crucial for precision medicine. However, traditional measurement techniques such as gene sequencing are costly and time-consuming, while most deep learning methods proposed for this task lack interpretability. This study offers a new approach to enhance state-of-the-art deep learning methods for molecular pathway and key mutation prediction by incorporating cell network information. We build cell graphs with nuclei as nodes and nuclei connections as edges of the network and leverage Social Network Analysis (SNA) measures to extract abstract, perceivable, and interpretable features that explicitly describe the cell network characteristics in an image. Our approach does not rely on precise nuclei segmentation or feature extraction, is computationally efficient, and is easily scalable. In this study, we utilize the TCGA-CRC-DX dataset, comprising 499 patients and 502 diagnostic slides from primary colorectal tumours, sourced from 36 distinct medical centres in the United States. By incorporating the SNA features alongside deep features in two multiple instance learning frameworks, we demonstrate improved performance on chromosomal instability (CIN), hypermutated tumour (HM), TP53 gene, BRAF gene, and microsatellite instability (MSI) status prediction tasks (on average, 2.4%-4% in AUROC and 7%-8.8% in AUPRC). Additionally, our method achieves outstanding performance on MSI prediction in an external PAIP dataset (99% AUROC and 98% AUPRC), demonstrating its generalizability. Our findings highlight the discriminative power of SNA features, show how they can benefit deep learning models' performance, and provide insights into the correlation of cell network profiles with molecular pathways and key mutations.
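The cell-network idea above can be sketched with a toy example. The sketch below is a simplification, not the paper's pipeline: it connects nuclei centroids that lie within a fixed pixel radius (the paper builds graphs from segmented nuclei and uses a richer set of SNA measures) and computes two classic SNA descriptors, mean degree and mean clustering coefficient, as image-level features. The `radius` value and the feature set are illustrative assumptions.

```python
from itertools import combinations
from math import dist

def cell_graph(centroids, radius=30.0):
    """Adjacency sets: connect nuclei whose centroids lie within `radius` px."""
    adj = {i: set() for i in range(len(centroids))}
    for i, j in combinations(range(len(centroids)), 2):
        if dist(centroids[i], centroids[j]) <= radius:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def clustering_coefficient(adj, i):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    nbrs = list(adj[i])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2.0 * links / (k * (k - 1))

def sna_features(centroids, radius=30.0):
    """Image-level SNA descriptors: mean degree and mean clustering coefficient."""
    adj = cell_graph(centroids, radius)
    n = max(len(adj), 1)
    return {
        "mean_degree": sum(len(v) for v in adj.values()) / n,
        "mean_clustering": sum(clustering_coefficient(adj, i) for i in adj) / n,
    }
```

A dense cluster of nuclei raises both descriptors, while isolated nuclei pull them down, which is exactly the kind of gathering-pattern signal the abstract describes feeding into the MIL frameworks alongside deep features.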
Affiliation(s)
- Neda Zamanitajeddin
  - Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Mostafa Jahanifar
  - Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Mohsin Bilal
  - Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Mark Eastwood
  - Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Nasir Rajpoot
  - Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK; Histofy Ltd., Birmingham, UK

11
He Y, Duan L, Dong G, Chen F, Li W. Computational pathology-based weakly supervised prediction model for MGMT promoter methylation status in glioblastoma. Front Neurol 2024; 15:1345687. [PMID: 38385046 PMCID: PMC10880091 DOI: 10.3389/fneur.2024.1345687] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/01/2023] [Accepted: 01/19/2024] [Indexed: 02/23/2024] Open
Abstract
INTRODUCTION The methylation status of O6-methylguanine-DNA methyltransferase (MGMT) is closely related to the treatment and prognosis of glioblastoma. However, detecting the methylation status of the MGMT promoter remains challenging. Hematoxylin and eosin (H&E)-stained histopathological slides have long been the gold standard for tumor diagnosis. METHODS In this study, based on the TCGA database and H&E-stained whole slide images (WSIs) from Beijing Tiantan Hospital, we constructed a weakly supervised model for predicting MGMT promoter methylation status in glioblastoma using two Transformer-based architectures. RESULTS The accuracy scores of this model on the TCGA dataset and on our independent dataset were 0.79 (AUC = 0.86) and 0.76 (AUC = 0.83), respectively. CONCLUSION The model effectively predicts MGMT promoter methylation status in glioblastoma and exhibits some degree of generalization. Our study also shows that adding an automatic patch-screening module to the computational pathology framework for glioma significantly improves model performance.
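The weak supervision in studies like this one rests on aggregating patch-level scores into one slide-level prediction using only slide labels. The authors use Transformer models; as a minimal stand-in, the sketch below aggregates hypothetical patch probabilities with top-k mean pooling. The `k` and `threshold` values are illustrative assumptions, not the paper's method.

```python
def slide_score(patch_probs, k=3):
    """Aggregate patch-level probabilities into one slide-level score by
    averaging the k most suspicious patches (top-k mean pooling)."""
    top = sorted(patch_probs, reverse=True)[:k]
    return sum(top) / len(top)

def slide_label(patch_probs, k=3, threshold=0.5):
    """Weakly supervised call: only the slide-level label is needed for
    training and evaluation; no patch-level annotations are required."""
    return int(slide_score(patch_probs, k) >= threshold)
```

The appeal of this formulation is that a pathologist's slide-level diagnosis (here, the methylation status) supervises the whole model, which is why such pipelines scale to WSIs without exhaustive pixel annotation.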
Affiliation(s)
- Yongqi He
  - Department of Neuro-Oncology, Cancer Center, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Ling Duan
  - Department of Neuro-Oncology, Cancer Center, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Gehong Dong
  - Department of Pathology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Feng Chen
  - Department of Neuro-Oncology, Cancer Center, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Wenbin Li
  - Department of Neuro-Oncology, Cancer Center, Beijing Tiantan Hospital, Capital Medical University, Beijing, China

12
Graham S, Vu QD, Jahanifar M, Weigert M, Schmidt U, Zhang W, Zhang J, Yang S, Xiang J, Wang X, Rumberger JL, Baumann E, Hirsch P, Liu L, Hong C, Aviles-Rivero AI, Jain A, Ahn H, Hong Y, Azzuni H, Xu M, Yaqub M, Blache MC, Piégu B, Vernay B, Scherr T, Böhland M, Löffler K, Li J, Ying W, Wang C, Snead D, Raza SEA, Minhas F, Rajpoot NM. CoNIC Challenge: Pushing the frontiers of nuclear detection, segmentation, classification and counting. Med Image Anal 2024; 92:103047. [PMID: 38157647 DOI: 10.1016/j.media.2023.103047] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Received: 05/15/2023] [Revised: 09/19/2023] [Accepted: 11/29/2023] [Indexed: 01/03/2024]
Abstract
Nuclear detection, segmentation and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state-of-the-art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.
Affiliation(s)
- Simon Graham
  - Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; Histofy Ltd, Birmingham, United Kingdom
- Quoc Dang Vu
  - Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; Histofy Ltd, Birmingham, United Kingdom
- Mostafa Jahanifar
  - Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
- Martin Weigert
  - Institute of Bioengineering, School of Life Sciences, EPFL, Lausanne, Switzerland
- Wenhua Zhang
  - Department of Computer Science, The University of Hong Kong, Hong Kong
- Sen Yang
  - College of Biomedical Engineering, Sichuan University, Chengdu, China
- Jinxi Xiang
  - Department of Precision Instruments, Tsinghua University, Beijing, China
- Xiyue Wang
  - College of Computer Science, Sichuan University, Chengdu, China
- Josef Lorenz Rumberger
  - Max-Delbrueck-Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany; Humboldt University of Berlin, Faculty of Mathematics and Natural Sciences, Berlin, Germany; Charité University Medicine, Berlin, Germany
- Peter Hirsch
  - Max-Delbrueck-Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany; Humboldt University of Berlin, Faculty of Mathematics and Natural Sciences, Berlin, Germany
- Lihao Liu
  - Department of Applied Mathematics and Theoretical Physics, University of Cambridge, United Kingdom
- Chenyang Hong
  - Department of Computer Science and Engineering, Chinese University of Hong Kong, Hong Kong
- Angelica I Aviles-Rivero
  - Department of Applied Mathematics and Theoretical Physics, University of Cambridge, United Kingdom
- Ayushi Jain
  - Softsensor.ai, Bridgewater, NJ, United States of America; PRR.ai, TX, United States of America
- Heeyoung Ahn
  - R&D Center, Arontier Co. Ltd, Seoul, Republic of Korea
- Yiyu Hong
  - R&D Center, Arontier Co. Ltd, Seoul, Republic of Korea
- Hussam Azzuni
  - Computer Vision Department, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Min Xu
  - Computer Vision Department, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Mohammad Yaqub
  - Computer Vision Department, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Benoît Piégu
  - CNRS, IFCE, INRAE, Université de Tours, PRC, 3780, Nouzilly, France
- Bertrand Vernay
  - Institut de Génétique et de Biologie Moléculaire et Cellulaire, Illkirch, France; Centre National de la Recherche Scientifique, UMR7104, Illkirch, France; Institut National de la Santé et de la Recherche Médicale, INSERM, U1258, Illkirch, France; Université de Strasbourg, Strasbourg, France
- Tim Scherr
  - Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Moritz Böhland
  - Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Katharina Löffler
  - Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Jiachen Li
  - School of Software Engineering, South China University of Technology, Guangzhou, China
- Weiqin Ying
  - School of Software Engineering, South China University of Technology, Guangzhou, China
- Chixin Wang
  - School of Software Engineering, South China University of Technology, Guangzhou, China
- David Snead
  - Histofy Ltd, Birmingham, United Kingdom; Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, United Kingdom; Division of Biomedical Sciences, Warwick Medical School, University of Warwick, Coventry, United Kingdom
- Shan E Ahmed Raza
  - Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
- Fayyaz Minhas
  - Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
- Nasir M Rajpoot
  - Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; Histofy Ltd, Birmingham, United Kingdom; Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, United Kingdom

13
Leo M, Carcagnì P, Signore L, Corcione F, Benincasa G, Laukkanen MO, Distante C. Convolutional Neural Networks in the Diagnosis of Colon Adenocarcinoma. AI 2024; 5:324-341. [DOI: 10.3390/ai5010016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 01/04/2025] Open
Abstract
Colorectal cancer is one of the most lethal cancers because of late diagnosis and challenges in the selection of therapy options. The histopathological diagnosis of colon adenocarcinoma is hindered by poor reproducibility and a lack of standard examination protocols required for appropriate treatment decisions. In the current study, using state-of-the-art approaches on benchmark datasets, we analyzed different architectures and ensembling strategies to develop the most efficient network combinations to improve binary and ternary classification. We propose an innovative two-stage pipeline approach to diagnose colon adenocarcinoma grading from histological images in a similar manner to a pathologist. The glandular regions were first segmented by a transformer architecture with subsequent classification using a convolutional neural network (CNN) ensemble, which markedly improved the learning efficiency and shortened the learning time. Moreover, we prepared and published a dataset for clinical validation of the developed artificial neural network, which suggested the discovery of novel histological phenotypic alterations in adenocarcinoma sections that could have prognostic value. Therefore, AI could markedly improve the reproducibility, efficiency, and accuracy of colon cancer diagnosis, which are required for precision medicine to personalize the treatment of cancer patients.
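Ensembling CNN classifiers as described above is commonly done by soft voting: averaging the per-class probabilities of the member networks and taking the argmax. The pure-Python sketch below is a minimal stand-in for the paper's ensembling (the paper compares several strategies; uniform weighting here is an illustrative assumption).

```python
def soft_vote(prob_lists, weights=None):
    """Soft-voting ensemble: weighted average of per-class probabilities
    from several classifiers, then argmax per sample.
    prob_lists: one list of per-sample class-probability rows per model."""
    n_models = len(prob_lists)
    weights = list(weights) if weights else [1.0] * n_models
    total = sum(weights)
    weights = [w / total for w in weights]
    preds, averaged = [], []
    for rows in zip(*prob_lists):          # the same sample across all models
        n_classes = len(rows[0])
        avg = [sum(w * row[c] for w, row in zip(weights, rows))
               for c in range(n_classes)]
        averaged.append(avg)
        preds.append(max(range(n_classes), key=avg.__getitem__))
    return preds, averaged
```

Averaging probabilities rather than hard votes lets a confident model outvote several uncertain ones, which is usually why soft voting is preferred for ternary grading tasks like the one described.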
Affiliation(s)
- Marco Leo
  - Institute of Applied Sciences and Intelligent Systems (ISASI), National Research Council (CNR) of Italy, 73100 Lecce, Italy
- Pierluigi Carcagnì
  - Institute of Applied Sciences and Intelligent Systems (ISASI), National Research Council (CNR) of Italy, 73100 Lecce, Italy
- Luca Signore
  - Dipartimento di Ingegneria per L’Innovazione, Università del Salento, 73100 Lecce, Italy
- Mikko O. Laukkanen
  - Department of Translational Medical Sciences, University of Naples Federico II, 80131 Naples, Italy
- Cosimo Distante
  - Institute of Applied Sciences and Intelligent Systems (ISASI), National Research Council (CNR) of Italy, 73100 Lecce, Italy
  - Dipartimento di Ingegneria per L’Innovazione, Università del Salento, 73100 Lecce, Italy

14
Roetzer-Pejrimovsky T, Nenning KH, Kiesel B, Klughammer J, Rajchl M, Baumann B, Langs G, Woehrer A. Deep learning links localized digital pathology phenotypes with transcriptional subtype and patient outcome in glioblastoma. Gigascience 2024; 13:giae057. [PMID: 39185700 PMCID: PMC11345537 DOI: 10.1093/gigascience/giae057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/20/2023] [Revised: 05/13/2024] [Accepted: 07/20/2024] [Indexed: 08/27/2024] Open
Abstract
BACKGROUND Deep learning has revolutionized medical image analysis in cancer pathology, where it has had substantial clinical impact by supporting the diagnosis and prognostic rating of cancer. Glioblastoma, the most common and fatal brain cancer, is among the first brain cancers for which digital resources became available. At the histologic level, glioblastoma is characterized by abundant phenotypic variability that is poorly linked with patient prognosis. At the transcriptional level, three molecular subtypes are distinguished, with mesenchymal-subtype tumors associated with increased immune cell infiltration and worse outcome. RESULTS We address genotype-phenotype correlations by applying an Xception convolutional neural network to a discovery set of 276 digital hematoxylin and eosin (H&E) slides with molecular subtype annotation and an independent The Cancer Genome Atlas-based validation cohort of 178 cases. Using this approach, we achieve high accuracy in H&E-based mapping of molecular subtypes (area under the curve for classical, mesenchymal, and proneural = 0.84, 0.81, and 0.71, respectively; P < 0.001) and regions associated with worse outcome (univariable survival model P < 0.001, multivariable P = 0.01). The latter were characterized by higher tumor cell density (P < 0.001), phenotypic variability of tumor cells (P < 0.001), and decreased T-cell infiltration (P = 0.017). CONCLUSIONS We modify a well-known convolutional neural network architecture for glioblastoma digital slides to accurately map the spatial distribution of transcriptional subtypes and regions predictive of worse outcome, thereby showcasing the relevance of artificial intelligence-enabled image mining in brain cancer.
Affiliation(s)
- Thomas Roetzer-Pejrimovsky
  - Division of Neuropathology and Neurochemistry, Department of Neurology, Medical University of Vienna, 1090 Vienna, Austria
  - Comprehensive Center for Clinical Neurosciences and Mental Health, Medical University of Vienna, 1090 Vienna, Austria
- Karl-Heinz Nenning
  - Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY 10962, USA
  - Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, 1090 Vienna, Austria
- Barbara Kiesel
  - Department of Neurosurgery, Medical University of Vienna, 1090 Vienna, Austria
- Johanna Klughammer
  - Gene Center and Department of Biochemistry, Ludwig-Maximilians-Universität München, 80539 Munich, Germany
- Martin Rajchl
  - Department of Computing and Medicine, Imperial College London, London SW7 2AZ, UK
- Bernhard Baumann
  - Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, 1090 Vienna, Austria
- Georg Langs
  - Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, 1090 Vienna, Austria
- Adelheid Woehrer
  - Division of Neuropathology and Neurochemistry, Department of Neurology, Medical University of Vienna, 1090 Vienna, Austria
  - Comprehensive Center for Clinical Neurosciences and Mental Health, Medical University of Vienna, 1090 Vienna, Austria
  - Department of Pathology, Neuropathology and Molecular Pathology, Medical University of Innsbruck, 6020 Innsbruck, Austria

15
Xu Z, Lim S, Lu Y, Jung SW. Reversed domain adaptation for nuclei segmentation-based pathological image classification. Comput Biol Med 2024; 168:107726. [PMID: 37984206 DOI: 10.1016/j.compbiomed.2023.107726] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/28/2023] [Revised: 11/01/2023] [Accepted: 11/15/2023] [Indexed: 11/22/2023]
Abstract
Despite the fact that digital pathology has provided a new paradigm for modern medicine, the insufficiency of annotations for training remains a significant challenge. Due to the weak generalization abilities of deep-learning models, their performance is notably constrained in domains without sufficient annotations. Our research aims to enhance the model's generalization ability through domain adaptation, increasing the prediction ability for the target domain data while only using the source domain labels for training. To further enhance classification performance, we introduce nuclei segmentation to provide the classifier with more diagnostically valuable nuclei information. In contrast to the general domain adaptation that generates source-like results in the target domain, we propose a reversed domain adaptation strategy that generates target-like results in the source domain, enabling the classification model to be more robust to inaccurate segmentation results. The proposed reversed unsupervised domain adaptation can effectively reduce the disparities in nuclei segmentation between the source and target domains without any target domain labels, leading to improved image classification performance in the target domain. The whole framework is designed in a unified manner so that the segmentation and classification modules can be trained jointly. Extensive experiments demonstrate that the proposed method significantly improves the classification performance in the target domain and outperforms existing general domain adaptation methods.
Affiliation(s)
- Zhixin Xu
  - Department of Electrical Engineering, Korea University, Seoul, Republic of Korea
- Seohoon Lim
  - Department of Electrical Engineering, Korea University, Seoul, Republic of Korea
- Yucheng Lu
  - Education and Research Center for Socialware IT, Korea University, Seoul, Republic of Korea
- Seung-Won Jung
  - Department of Electrical Engineering, Korea University, Seoul, Republic of Korea

16
Wu D, Ni J, Fan W, Jiang Q, Wang L, Sun L, Cai Z. Opportunities and challenges of computer aided diagnosis in new millennium: A bibliometric analysis from 2000 to 2023. Medicine (Baltimore) 2023; 102:e36703. [PMID: 38134105 PMCID: PMC10735127 DOI: 10.1097/md.0000000000036703] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/30/2023] [Accepted: 11/27/2023] [Indexed: 12/24/2023] Open
Abstract
BACKGROUND Since the turn of the millennium, computer-aided diagnosis (CAD) has been rapidly developing as an emerging technology worldwide. Nevertheless, bibliometric studies in this area have not yet been reported. This study aimed to explore the hotspots and frontiers of CAD research from 2000 to 2023, which may provide a reference for researchers in this field. METHODS In this paper, we use bibliometrics to analyze CAD-related literature in the Web of Science database between 2000 and 2023. The scientometric software tools VOSviewer and CiteSpace were used to visually analyze the countries, institutions, authors, journals, references and keywords involved in the literature. Keyword burst analysis was used to further explore the current state and development trends of CAD research. RESULTS A total of 13,970 publications were included in this study, with a noticeably rising annual publication trend. China and the United States are major contributors, with the United States occupying the dominant position in CAD research. American research institutions, led by the University of Chicago, are pioneers of CAD. Acharya UR, Zheng B and Chan HP are the most prolific authors. IEEE Transactions on Medical Imaging focuses on CAD and publishes the most articles. New computer technologies related to CAD are at the forefront of attention. Currently, CAD is used extensively in breast, pulmonary and brain diseases. CONCLUSION Expanding the spectrum of CAD-related diseases is a possible future research trend. How to overcome the lack of large-sample datasets and how to establish a universally accepted standard for evaluating CAD system performance are urgent issues for CAD development and validation. In conclusion, this paper provides valuable information on the current state of CAD research and future developments.
Affiliation(s)
- Di Wu
  - Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
  - Department of Proctology, Bishan Hospital of Traditional Chinese Medicine, Chongqing, China
  - Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Jiachun Ni
  - Department of Coloproctology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Wenbin Fan
  - Department of Proctology, Bishan Hospital of Traditional Chinese Medicine, Chongqing, China
  - Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Qiong Jiang
  - Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Ling Wang
  - Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Li Sun
  - Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Zengjin Cai
  - Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China

17
Cen M, Li X, Guo B, Jonnagaddala J, Zhang H, Xu XS. A Novel and Efficient Digital Pathology Classifier for Predicting Cancer Biomarkers Using Sequencer Architecture. Am J Pathol 2023; 193:2122-2132. [PMID: 37775043 DOI: 10.1016/j.ajpath.2023.09.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/21/2023] [Revised: 08/16/2023] [Accepted: 09/01/2023] [Indexed: 10/01/2023]
Abstract
In digital pathology tasks, transformers have achieved state-of-the-art results, surpassing convolutional neural networks (CNNs). However, transformers are usually complex and resource intensive. This study developed a novel and efficient digital pathology classifier called DPSeq to predict cancer biomarkers through fine-tuning a sequencer architecture integrating horizontal and vertical bidirectional long short-term memory networks. Using hematoxylin and eosin-stained histopathologic images of colorectal cancer from two international data sets (The Cancer Genome Atlas and Molecular and Cellular Oncology), the predictive performance of DPSeq was evaluated in a series of experiments. DPSeq demonstrated exceptional performance for predicting key biomarkers in colorectal cancer (microsatellite instability status, hypermutation, CpG island methylator phenotype status, BRAF mutation, TP53 mutation, and chromosomal instability), outperforming most published state-of-the-art classifiers in a within-cohort internal validation and a cross-cohort external validation. In addition, under the same experimental conditions using the same set of training and testing data sets, DPSeq surpassed four CNNs (ResNet18, ResNet50, MobileNetV2, and EfficientNet) and two transformer (Vision Transformer and Swin Transformer) models, achieving the highest area under the receiver operating characteristic curve and area under the precision-recall curve values in predicting microsatellite instability status, BRAF mutation, and CpG island methylator phenotype status. Furthermore, DPSeq required less time for both training and prediction because of its simple architecture. Therefore, DPSeq appears to be the preferred choice over transformer and CNN models for predicting cancer biomarkers.
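The sequencer idea, replacing self-attention with horizontal and vertical bidirectional recurrences over the patch grid, can be illustrated with a toy scalar version. The sketch below shows only the scan pattern, not DPSeq itself: a pluggable `cell` function stands in for an LSTM cell, and patch features are single numbers rather than embedding vectors.

```python
def bidirectional_scan(seq, cell, h0=0.0):
    """Run a recurrent cell(h, x) -> h over `seq` forward and backward,
    returning per-position (forward_state, backward_state) pairs."""
    fwd, h = [], h0
    for x in seq:
        h = cell(h, x)
        fwd.append(h)
    bwd, h = [], h0
    for x in reversed(seq):
        h = cell(h, x)
        bwd.append(h)
    bwd.reverse()
    return list(zip(fwd, bwd))

def sequencer_block(grid, cell):
    """Sequencer-style mixing of a 2-D patch grid: scan every row
    (horizontal pass) and every column (vertical pass); each position
    ends up with (horizontal context, vertical context)."""
    n_rows, n_cols = len(grid), len(grid[0])
    horiz = [bidirectional_scan(row, cell) for row in grid]
    cols = [[grid[r][c] for r in range(n_rows)] for c in range(n_cols)]
    vert = [bidirectional_scan(col, cell) for col in cols]
    return [[(horiz[r][c], vert[c][r]) for c in range(n_cols)]
            for r in range(n_rows)]
```

The per-position cost grows linearly with the grid side length rather than quadratically with the number of patches, which is one intuition for why such architectures can train and predict faster than transformer blocks.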
Affiliation(s)
- Min Cen
  - School of Data Science, University of Science and Technology of China, Hefei, China
- Xingyu Li
  - Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, China
- Bangwei Guo
  - School of Data Science, University of Science and Technology of China, Hefei, China
- Jitendra Jonnagaddala
  - School of Population Health, University of New South Wales, Sydney, New South Wales, Australia
- Hong Zhang
  - Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, China
- Xu Steven Xu
  - Clinical Pharmacology and Quantitative Science, Genmab Inc., Princeton, New Jersey

18
Li Y, Shen Y, Zhang J, Song S, Li Z, Ke J, Shen D. A Hierarchical Graph V-Net With Semi-Supervised Pre-Training for Histological Image Based Breast Cancer Classification. IEEE Trans Med Imaging 2023; 42:3907-3918. [PMID: 37725717 DOI: 10.1109/tmi.2023.3317132] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 09/21/2023]
Abstract
Numerous patch-based methods have recently been proposed for histological image based breast cancer classification. However, their performance can be highly affected by ignoring spatial contextual information in the whole slide image (WSI). To address this issue, we propose a novel hierarchical Graph V-Net by integrating 1) patch-level pre-training and 2) context-based fine-tuning, with a hierarchical graph network. Specifically, a semi-supervised framework based on knowledge distillation is first developed to pre-train a patch encoder for extracting disease-relevant features. Then, a hierarchical Graph V-Net is designed to construct a hierarchical graph representation from neighboring/similar individual patches for coarse-to-fine classification, where each graph node (corresponding to one patch) carries the extracted disease-relevant features and its training target is the average label of all pixels in the corresponding patch. To evaluate the performance of our proposed hierarchical Graph V-Net, we collected a dataset of 560 WSIs: 30 labeled WSIs from the BACH dataset (with our further refinement), plus 30 labeled and 500 unlabeled WSIs from Yunnan Cancer Hospital. The 500 unlabeled WSIs are employed for patch-level pre-training to improve feature representation, while the 60 labeled WSIs are used to train and test our proposed hierarchical Graph V-Net. Both comparative assessment and ablation studies demonstrate the superiority of our proposed hierarchical Graph V-Net over state-of-the-art methods in classifying breast cancer from WSIs. The source code and our annotations for the BACH dataset have been released at https://github.com/lyhkevin/Graph-V-Net.
19
Lee J, Han C, Kim K, Park GH, Kwak JT. CaMeL-Net: Centroid-aware metric learning for efficient multi-class cancer classification in pathology images. Comput Methods Programs Biomed 2023; 241:107749. [PMID: 37579551 DOI: 10.1016/j.cmpb.2023.107749] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Received: 05/09/2023] [Revised: 07/25/2023] [Accepted: 08/05/2023] [Indexed: 08/16/2023]
Abstract
BACKGROUND AND OBJECTIVE Cancer grading in pathology image analysis is a major task due to its importance in patient care, treatment, and management. The recent developments in artificial neural networks for computational pathology have demonstrated great potential to improve the accuracy and quality of cancer diagnosis. These improvements are generally ascribable to the advance in the architecture of the networks, often leading to increase in the computation and resources. In this work, we propose an efficient convolutional neural network that is designed to conduct multi-class cancer classification in an accurate and robust manner via metric learning. METHODS We propose a centroid-aware metric learning network for an improved cancer grading in pathology images. The proposed network utilizes centroids of different classes within the feature embedding space to optimize the relative distances between pathology images, which manifest the innate similarities/dissimilarities between them. For improved optimization, we introduce a new loss function and a training strategy that are tailored to the proposed network and metric learning. RESULTS We evaluated the proposed approach on multiple datasets of colorectal and gastric cancers. For the colorectal cancer, two different datasets were employed that were collected from different acquisition settings. the proposed method achieved an accuracy, F1-score, quadratic weighted kappa of 88.7%, 0.849, and 0.946 for the first dataset and 83.3%, 0.764, and 0.907 for the second dataset, respectively. For the gastric cancer, the proposed method obtained an accuracy of 85.9%, F1-score of 0.793, and quadratic weighted kappa of 0.939. We also found that the proposed method outperforms other competing models and is computationally efficient. CONCLUSIONS The experimental results demonstrate that the prediction results by the proposed network are both accurate and reliable. 
The proposed network not only outperformed other related methods in cancer classification but also achieved superior computational efficiency during training and inference. Future work will entail further development of the proposed method and its application to other problems and domains.
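The centroid idea above can be illustrated with a minimal numeric sketch (this is not the authors' CaMeL-Net implementation; the function names and the hinge-style push term are illustrative assumptions): class centroids are computed in the embedding space, and a sample's loss pulls it toward its own class centroid while pushing it away from the others.

```python
import math

def centroids(embeddings, labels):
    """Mean embedding per class (the class centroids)."""
    sums, counts = {}, {}
    for vec, y in zip(embeddings, labels):
        acc = sums.setdefault(y, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def dist(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def centroid_loss(vec, label, cents, margin=1.0):
    """Pull toward the own-class centroid; hinge-push away from the others."""
    pull = dist(vec, cents[label])
    push = sum(max(0.0, margin - dist(vec, c))
               for y, c in cents.items() if y != label)
    return pull + push
```

In the paper the centroids and embeddings are learned jointly by a CNN; here they are simply computed from fixed vectors to show the geometry of the objective.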
Affiliation(s)
- Jaeung Lee, School of Electrical Engineering, Korea University, Seoul, Republic of Korea
- Chiwon Han, Department of Computer Science and Engineering, Sejong University, Seoul, Republic of Korea
- Kyungeun Kim, Department of Pathology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Gi-Ho Park, Department of Computer Science and Engineering, Sejong University, Seoul, Republic of Korea
- Jin Tae Kwak, School of Electrical Engineering, Korea University, Seoul, Republic of Korea
20
Wang H, Huang G, Zhao Z, Cheng L, Juncker-Jensen A, Nagy ML, Lu X, Zhang X, Chen DZ. CCF-GNN: A Unified Model Aggregating Appearance, Microenvironment, and Topology for Pathology Image Classification. IEEE Trans Med Imaging 2023; 42:3179-3193. [PMID: 37027573] [DOI: 10.1109/tmi.2023.3249343]
Abstract
Pathology images contain rich information on cell appearance, microenvironment, and topology features for cancer analysis and diagnosis. Among these features, topology is becoming increasingly important in analysis for cancer immunotherapy. By analyzing geometric and hierarchically structured cell distribution topology, oncologists can identify densely packed, cancer-relevant cell communities (CCs) for making decisions. Compared to commonly used pixel-level Convolutional Neural Network (CNN) features and cell-instance-level Graph Neural Network (GNN) features, CC topology features are at a higher level of granularity and geometry. However, topological features have not been well exploited by recent deep learning (DL) methods for pathology image classification, due to the lack of effective topological descriptors for cell distribution and gathering patterns. In this paper, inspired by clinical practice, we analyze and classify pathology images by comprehensively learning cell appearance, microenvironment, and topology in a fine-to-coarse manner. To describe and exploit topology, we design the Cell Community Forest (CCF), a novel graph that represents the hierarchical formation process of big-sparse CCs from small-dense CCs. Using CCF as a new geometric topological descriptor of tumor cells in pathology images, we propose CCF-GNN, a GNN model that successively aggregates heterogeneous features (e.g., appearance, microenvironment) from the cell-instance level through the cell-community level to the image level for pathology image classification. Extensive cross-validation experiments show that our method significantly outperforms alternative methods on H&E-stained and immunofluorescence images for disease grading tasks with multiple cancer types. Our proposed CCF-GNN establishes a new topological data analysis (TDA)-based method, which facilitates integrating multi-level heterogeneous features of point clouds (e.g., for cells) into a unified DL framework.
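The fine-to-coarse aggregation described above can be sketched very roughly as follows (mean pooling stands in for the paper's learned GNN aggregation, and the grouping of cells into communities is given rather than discovered; all names are illustrative):

```python
def mean_pool(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def aggregate_image(cell_features, communities):
    """Cell-level -> community-level -> image-level aggregation.

    communities: list of lists of cell indices (the fine-to-coarse grouping).
    """
    community_feats = [mean_pool([cell_features[i] for i in comm])
                       for comm in communities]
    return mean_pool(community_feats)
```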
21
Gunesli GN, Bilal M, Raza SEA, Rajpoot NM. A Federated Learning Approach to Tumor Detection in Colon Histology Images. J Med Syst 2023; 47:99. [PMID: 37715855] [DOI: 10.1007/s10916-023-01994-5]
Abstract
Federated learning (FL), a relatively new area of research in medical image analysis, enables collaborative learning of a federated deep learning model without sharing the data of participating clients. In this paper, we propose FedDropoutAvg, a new federated learning approach for tumor detection in images of colon tissue slides. The proposed method leverages the power of dropout, a commonly employed scheme to avoid overfitting in neural networks, in both the client selection and federated averaging processes. We examine FedDropoutAvg against other FL benchmark algorithms for two different image classification tasks using a publicly available multi-site histopathology image dataset. We train and test the proposed model on a large dataset consisting of 1.2 million image tiles from 21 different sites. For testing the generalization of all models, we select held-out test sets from sites that were not used during training. We show that the proposed approach outperforms other FL methods and reduces the performance gap (to less than 3% in terms of AUC on independent test sites) between FL and a central deep learning model that requires all data to be shared for centralized training, demonstrating the potential of the proposed FedDropoutAvg model to be more generalizable than other state-of-the-art federated models. To the best of our knowledge, ours is the first study to effectively utilize the dropout strategy in a federated setting for tumor detection in histology images.
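Dropout-style federated averaging can be sketched minimally as below (the exact FedDropoutAvg procedure, including its client-selection step, is in the paper; here each client's contribution to each parameter is independently dropped before averaging, and all names are illustrative):

```python
import random

def fed_dropout_avg(client_weights, drop_rate=0.2, rng=None):
    """Average the clients' parameter vectors, randomly dropping each
    client's contribution on a per-parameter basis (dropout-style sketch)."""
    rng = rng or random.Random(0)
    n_params = len(client_weights[0])
    avg = []
    for j in range(n_params):
        kept = [w[j] for w in client_weights if rng.random() >= drop_rate]
        if not kept:                       # ensure at least one contribution
            kept = [w[j] for w in client_weights]
        avg.append(sum(kept) / len(kept))
    return avg
```

With `drop_rate=0.0` this reduces to plain federated averaging (FedAvg), which makes the dropout modification easy to see by comparison.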
Affiliation(s)
- Gozde N Gunesli, The Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Mohsin Bilal, The Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Shan E Ahmed Raza, The Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Nasir M Rajpoot, The Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
22
Gabralla LA, Hussien AM, AlMohimeed A, Saleh H, Alsekait DM, El-Sappagh S, Ali AA, Refaat Hassan M. Automated Diagnosis for Colon Cancer Diseases Using Stacking Transformer Models and Explainable Artificial Intelligence. Diagnostics (Basel) 2023; 13:2939. [PMID: 37761306] [PMCID: PMC10529133] [DOI: 10.3390/diagnostics13182939]
Abstract
Colon cancer was the third most common cancer type worldwide in 2020, with almost two million cases diagnosed. As a result, providing new, highly accurate techniques for detecting colon cancer leads to early and successful treatment of this disease. This paper proposes a heterogeneous stacking deep learning model to predict colon cancer. Stacking deep learning integrates pretrained convolutional neural network (CNN) models with a metalearner to enhance colon cancer prediction performance. The proposed model is compared with VGG16, InceptionV3, ResNet50, and DenseNet121 using different evaluation metrics. Furthermore, the proposed models are evaluated on the LC25000 and WCE binary and multiclass colon cancer image datasets. The results show that the stacking models recorded the highest performance on the two datasets. For the LC25000 dataset, the stacked model recorded the highest accuracy, recall, precision, and F1 score (100%). For the WCE colon image dataset, the stacked model recorded the highest accuracy, recall, precision, and F1 score (98%). Stacking-SVM achieved the highest performance compared with the existing models (VGG16, InceptionV3, ResNet50, and DenseNet121) because it combines the outputs of multiple single models and trains and evaluates a metalearner on those outputs to produce better predictions than any single model. The black-box deep learning models are explained using explainable AI (XAI).
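The stacking principle, base models' outputs feeding a metalearner, can be sketched as follows (a toy stand-in, not the paper's CNN-plus-SVM pipeline; `majority_meta` is an illustrative metalearner that simply sums per-class probabilities instead of being trained):

```python
def stack_predict(base_probs, meta):
    """base_probs: one probability vector per base model for a single sample.
    The stacked feature is the concatenation; `meta` maps it to a class."""
    features = [p for probs in base_probs for p in probs]
    return meta(features)

def majority_meta(features, n_classes=2):
    """Trivial stand-in metalearner: sum each class's probabilities across
    the base models and pick the class with the largest total."""
    scores = [sum(features[i::n_classes]) for i in range(n_classes)]
    return max(range(n_classes), key=scores.__getitem__)
```

In a real stacking pipeline `meta` would be a trained model (e.g. an SVM) fitted on held-out base-model outputs; the sketch only shows the data flow.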
Affiliation(s)
- Lubna Abdelkareim Gabralla, Department of Computer Science and Information Technology, Applied College, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Ali Mohamed Hussien, Department of Computer Science, Faculty of Science, Aswan University, Aswan 81528, Egypt
- Abdulaziz AlMohimeed, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 13318, Saudi Arabia
- Hager Saleh, Faculty of Computers and Artificial Intelligence, South Valley University, Hurghada 84511, Egypt
- Deema Mohammed Alsekait, Department of Computer Science and Information Technology, Applied College, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Shaker El-Sappagh, Faculty of Computer Science and Engineering, Galala University, Suez 34511, Egypt; Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Banha 13518, Egypt
- Abdelmgeid A. Ali, Faculty of Computers and Information, Minia University, Minia 61519, Egypt
- Moatamad Refaat Hassan, Department of Computer Science, Faculty of Science, Aswan University, Aswan 81528, Egypt
23
Shao Z, Dai L, Jonnagaddala J, Chen Y, Wang Y, Fang Z, Zhang Y. Generalizability of Self-Supervised Training Models for Digital Pathology: A Multicountry Comparison in Colorectal Cancer. JCO Clin Cancer Inform 2023; 7:e2200178. [PMID: 37703507] [DOI: 10.1200/cci.22.00178]
Abstract
PURPOSE In this multicountry study, we aim to explore the effectiveness of self-supervised learning (SSL) in colorectal cancer (CRC)-related predictive tasks using a large amount of unlabeled digital pathology imaging data. METHODS We adopted SimSiam to conduct self-supervised pretraining on two large whole-slide image CRC data sets from the United States and Australia. The SSL-pretrained encoder is then used in several predictive tasks, including supervised predictive tasks (tissue classification, microsatellite instability v microsatellite stability classification) and weakly supervised predictive tasks (polyp type classification and adenoma grading, and 5-year survival prediction). Performance on the tasks was compared between models using SSL pretraining and those using ImageNet pretraining, and performance for one-country pretraining was compared with two-country pretraining. RESULTS We demonstrate that SSL pretraining outperforms ImageNet pretraining in predictive tasks: SSL pretraining outperforms ImageNet pretraining by 3.01% of F1 score on average over supervised predictive tasks and 1.53% of AUC on average over weakly supervised predictive tasks. Furthermore, two-country SSL pretraining has shown more stable performance than single-country pretraining: two-country pretraining outperforms at least one of the single-country pretrainings by 1.93% of F1 on average over supervised predictive tasks and 1.36% of AUC on average over weakly supervised predictive tasks. CONCLUSION We find that using unlabeled image data for SSL pretraining in CRC-related tasks is more effective than using ImageNet pretraining. Furthermore, SSL pretraining using data from multiple countries achieves more stable performance and better generalization than single-country pretraining.
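SimSiam's training signal is a symmetrized negative cosine similarity between two augmented views of the same image, which can be sketched numerically (a value-only sketch: gradients are omitted, and the stop-gradient that SimSiam applies to the projections z has no effect on the loss value itself):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def simsiam_loss(p1, z2, p2, z1):
    """Symmetrized negative cosine similarity between each view's predictor
    output (p) and the other view's (stop-gradient) projection (z)."""
    return -0.5 * cosine(p1, z2) - 0.5 * cosine(p2, z1)
```

When both views map to the same direction the loss reaches its minimum of -1, which is what the self-supervised objective drives the encoder toward.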
Affiliation(s)
- Zhuchen Shao, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Liuxi Dai, School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China
- Yang Chen, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Yifeng Wang, School of Science, Harbin Institute of Technology, Shenzhen, China
- Zijie Fang, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Yongbing Zhang, School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China
24
Meng X, Zou T. Clinical applications of graph neural networks in computational histopathology: A review. Comput Biol Med 2023; 164:107201. [PMID: 37517325] [DOI: 10.1016/j.compbiomed.2023.107201]
Abstract
Pathological examination is the optimal approach for diagnosing cancer, and advances in digital imaging technologies have spurred the emergence of computational histopathology. The objective of computational histopathology is to assist in clinical tasks through image processing and analysis techniques. In the early stages, histopathology images were analyzed by extracting mathematical features, but the performance of these models was unsatisfactory. With the development of artificial intelligence (AI) technologies, traditional machine learning methods were applied in this field. Although the performance of the models improved, there were issues such as poor model generalization and tedious manual feature extraction. Subsequently, the introduction of deep learning techniques effectively addressed these problems. However, models based on traditional convolutional architectures could not adequately capture the contextual information and deep biological features in histopathology images. Graphs, owing to their special structure, are highly suitable for feature extraction from tissue histopathology images and have achieved promising performance in numerous studies. In this article, we review existing graph-based methods in computational histopathology and propose a novel and more comprehensive graph construction approach. Additionally, we categorize the methods and techniques in computational histopathology according to different learning paradigms. We summarize the common clinical applications of graph-based methods in computational histopathology. Furthermore, we discuss the core concepts in this field and highlight the current challenges and future research directions.
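The core operation the graph-based methods above share, neighbourhood aggregation over a cell or tissue graph, can be sketched in one round (mean aggregation only; real GNNs interleave learned weight matrices and nonlinearities between rounds):

```python
def message_pass(features, adjacency):
    """One round of graph message passing: each node's new feature is the
    mean of its own feature and its neighbours' features.

    adjacency: list of neighbour-index lists, one per node.
    """
    new_feats = []
    for i, feat in enumerate(features):
        neigh = [features[j] for j in adjacency[i]] + [feat]
        n = len(neigh)
        new_feats.append([sum(v[k] for v in neigh) / n
                          for k in range(len(feat))])
    return new_feats
```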
Affiliation(s)
- Xiangyan Meng, Xi'an Technological University, Xi'an, Shaanxi, 710021, China
- Tonghui Zou, Xi'an Technological University, Xi'an, Shaanxi, 710021, China
25
Bashir RMS, Shephard AJ, Mahmood H, Azarmehr N, Raza SEA, Khurram SA, Rajpoot NM. A digital score of peri-epithelial lymphocytic activity predicts malignant transformation in oral epithelial dysplasia. J Pathol 2023; 260:431-442. [PMID: 37294162] [PMCID: PMC10952946] [DOI: 10.1002/path.6094]
Abstract
Oral squamous cell carcinoma (OSCC) is amongst the most common cancers, with more than 377,000 new cases worldwide each year. OSCC prognosis remains poor, related to cancer presentation at a late stage, indicating the need for early detection to improve patient prognosis. OSCC is often preceded by a premalignant state known as oral epithelial dysplasia (OED), which is diagnosed and graded using subjective histological criteria leading to variability and prognostic unreliability. In this work, we propose a deep learning approach for the development of prognostic models for malignant transformation and their association with clinical outcomes in histology whole slide images (WSIs) of OED tissue sections. We train a weakly supervised method on OED cases (n = 137) with malignant transformation (n = 50) and mean malignant transformation time of 6.51 years (±5.35 SD). Stratified five-fold cross-validation achieved an average area under the receiver-operator characteristic curve (AUROC) of 0.78 for predicting malignant transformation in OED. Hotspot analysis revealed various features of nuclei in the epithelium and peri-epithelial tissue to be significant prognostic factors for malignant transformation, including the count of peri-epithelial lymphocytes (PELs) (p < 0.05), epithelial layer nuclei count (NC) (p < 0.05), and basal layer NC (p < 0.05). Progression-free survival (PFS) using the epithelial layer NC (p < 0.05, C-index = 0.73), basal layer NC (p < 0.05, C-index = 0.70), and PELs count (p < 0.05, C-index = 0.73) all showed association of these features with a high risk of malignant transformation in our univariate analysis. Our work shows the application of deep learning for the prognostication and prediction of PFS of OED for the first time and offers potential to aid patient management. Further evaluation and testing on multi-centre data is required for validation and translation to clinical practice. © 2023 The Authors. 
The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
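The concordance index (C-index) reported above measures how often a higher predicted risk corresponds to an earlier event; a minimal version for right-censored data can be sketched as (a generic textbook formulation, not the authors' code):

```python
def concordance_index(times, events, scores):
    """Fraction of comparable pairs where the higher risk score belongs to
    the subject with the shorter observed time.

    events: 1 = event observed, 0 = censored. A pair (i, j) is comparable
    only if subject i had an observed event before subject j's time.
    """
    conc = ties = total = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                total += 1
                if scores[i] > scores[j]:
                    conc += 1
                elif scores[i] == scores[j]:
                    ties += 1
    return (conc + 0.5 * ties) / total
```

A C-index of 0.5 corresponds to random risk ordering and 1.0 to perfect ordering, so values such as the 0.70-0.73 reported above indicate moderately strong prognostic ranking.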
Affiliation(s)
- Adam J Shephard, Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Hanya Mahmood, Academic Unit of Oral & Maxillofacial Surgery, School of Clinical Dentistry, University of Sheffield, Sheffield, UK; Unit of Oral & Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Neda Azarmehr, Unit of Oral & Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Shan E Ahmed Raza, Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Syed Ali Khurram, Academic Unit of Oral & Maxillofacial Surgery, School of Clinical Dentistry, University of Sheffield, Sheffield, UK; Unit of Oral & Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Nasir M Rajpoot, Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
26
Zhou J, Foroughi Pour A, Deirawan H, Daaboul F, Aung TN, Beydoun R, Ahmed FS, Chuang JH. Integrative deep learning analysis improves colon adenocarcinoma patient stratification at risk for mortality. EBioMedicine 2023; 94:104726. [PMID: 37499603] [PMCID: PMC10388166] [DOI: 10.1016/j.ebiom.2023.104726]
Abstract
BACKGROUND Colorectal cancer is the fourth most commonly diagnosed cancer and the second leading cause of cancer death. Many clinical variables, pathological features, and genomic signatures are associated with patient risk, but reliable patient stratification in the clinic remains a challenging task. Here we assess how image, clinical, and genomic features can be combined to predict risk. METHODS We developed and evaluated integrative deep learning models combining formalin-fixed, paraffin-embedded (FFPE) whole slide images (WSIs), clinical variables, and mutation signatures to stratify colon adenocarcinoma (COAD) patients based on their risk of mortality. Our models were trained using a dataset of 108 patients from The Cancer Genome Atlas (TCGA), and were externally validated on a newly generated dataset from Wayne State University (WSU) of 123 COAD patients and on rectal adenocarcinoma (READ) patients in TCGA (N = 52). FINDINGS We first observe that deep learning models trained on FFPE WSIs of TCGA-COAD separate high-risk (OS < 3 years, N = 38) and low-risk (OS > 5 years, N = 25) patients (AUC = 0.81 ± 0.08, 5 year survival p < 0.0001, 5 year relative risk = 1.83 ± 0.04), though such models are less effective at predicting overall survival (OS) for moderate-risk (3 years < OS < 5 years, N = 45) patients (5 year survival p-value = 0.5, 5 year relative risk = 1.05 ± 0.09). We find that our integrative models combining WSIs, clinical variables, and mutation signatures can improve patient stratification for moderate-risk patients (5 year survival p < 0.0001, 5 year relative risk = 1.87 ± 0.07). Our integrative model combining image and clinical variables is also effective on an independent pathology dataset (WSU-COAD, N = 123) generated by our team (5 year survival p < 0.0001, 5 year relative risk = 1.52 ± 0.08), and on the TCGA-READ data (5 year survival p < 0.0001, 5 year relative risk = 1.18 ± 0.17).
Our multicenter integrative image and clinical model trained on combined TCGA-COAD and WSU-COAD is effective in predicting risk on TCGA-READ (5 year survival p < 0.0001, 5 year relative risk = 1.82 ± 0.13) data. Pathologist review of image-based heatmaps suggests that nuclear size pleomorphism, intense cellularity, and abnormal structures are associated with high-risk, while low-risk regions have more regular and small cells. Quantitative analysis shows high cellularity, high ratios of tumor cells, large tumor nuclei, and low immune infiltration are indicators of high-risk tiles. INTERPRETATION The improved stratification of colorectal cancer patients from our computational methods can be beneficial for treatment plans and enrollment of patients in clinical trials. FUNDING This study was supported by the National Cancer Institutes (Grant No. R01CA230031 and P30CA034196). The funders had no roles in study design, data collection and analysis or preparation of the manuscript.
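The 5-year relative risk figures quoted above compare event rates between predicted risk groups; in its simplest form the risk ratio is (a generic two-group sketch, not the paper's estimator, which also reports uncertainty bounds):

```python
def relative_risk(events_high, n_high, events_low, n_low):
    """Ratio of the event rate in the predicted high-risk group to that in
    the predicted low-risk group; values > 1 mean the model's high-risk
    calls carry elevated observed risk."""
    return (events_high / n_high) / (events_low / n_low)
```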
Affiliation(s)
- Jie Zhou, The Jackson Laboratory for Genomic Medicine, Farmington, CT, USA; Department of Genetics and Genome Sciences, UCONN Health, Farmington, CT, USA
- Hany Deirawan, Department of Pathology, Wayne State University, Detroit, MI, USA; Department of Dermatology, Wayne State University, Detroit, MI, USA
- Fayez Daaboul, Department of Pathology, Wayne State University, Detroit, MI, USA
- Thazin Nwe Aung, Department of Pathology, Yale University, New Haven, CT, USA
- Rafic Beydoun, Department of Pathology, Wayne State University, Detroit, MI, USA
- Jeffrey H Chuang, The Jackson Laboratory for Genomic Medicine, Farmington, CT, USA; Department of Genetics and Genome Sciences, UCONN Health, Farmington, CT, USA
27
Asif A, Rajpoot K, Graham S, Snead D, Minhas F, Rajpoot N. Unleashing the potential of AI for pathology: challenges and recommendations. J Pathol 2023; 260:564-577. [PMID: 37550878] [PMCID: PMC10952719] [DOI: 10.1002/path.6168]
Abstract
Computational pathology is currently witnessing a surge in the development of AI techniques, offering promise for achieving breakthroughs and significantly impacting the practices of pathology and oncology. These AI methods bring with them the potential to revolutionize diagnostic pipelines as well as treatment planning and overall patient care. Numerous peer-reviewed studies reporting remarkable performance across diverse tasks serve as a testimony to the potential of AI in the field. However, widespread adoption of these methods in clinical and pre-clinical settings still remains a challenge. In this review article, we present a detailed analysis of the major obstacles encountered during the development of effective models and their deployment in practice. We aim to provide readers with an overview of the latest developments, assist them with insights into identifying some specific challenges that may require resolution, and suggest recommendations and potential future research directions. © 2023 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
Affiliation(s)
- Amina Asif, Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Kashif Rajpoot, School of Computer Science, University of Birmingham, Birmingham, UK
- Simon Graham, Histofy Ltd, Birmingham Business Park, Birmingham, UK
- David Snead, Histofy Ltd, Birmingham Business Park, Birmingham, UK; Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Coventry, UK
- Fayyaz Minhas, Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK; Cancer Research Centre, University of Warwick, Coventry, UK
- Nasir Rajpoot, Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK; Histofy Ltd, Birmingham Business Park, Birmingham, UK; Cancer Research Centre, University of Warwick, Coventry, UK; The Alan Turing Institute, London, UK
28
Xu J, Xin J, Shi P, Wu J, Cao Z, Feng X, Zheng N. Lymphoma Recognition in Histology Image of Gastric Mucosal Biopsy with Prototype Learning. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083432] [DOI: 10.1109/embc40787.2023.10340697]
Abstract
Lymphomas are a group of malignant tumors that develop from lymphocytes and may occur in many organs. Therefore, accurately distinguishing lymphoma from solid tumors is of great clinical significance. Due to the strong ability of graph structures to capture the topology of the cellular micro-environment, graph convolutional networks (GCNs) have been widely used in pathological image processing. Nevertheless, the softmax classification layer of graph convolutional models cannot drive the learned representations to be compact enough to distinguish some types of lymphomas and solid tumors with strong morphological analogies on H&E-stained images. To alleviate this problem, a prototype learning based model is proposed, namely the graph convolutional prototype network (GCPNet). Specifically, the method follows a patch-to-slide architecture: it first performs patch-level classification and then obtains image-level results by fusing the patch-level predictions. The classification model combines a graph convolutional feature extractor with a prototype-based classification layer to build more robust feature representations for classification. For model training, a dynamic prototype loss is proposed to give the model different optimization priorities at different stages of training. In addition, a prototype reassignment operation is designed to prevent the model from getting stuck in local minima during optimization. Experiments were conducted on a dataset of 183 whole-slide images (WSIs) of gastric mucosa biopsies. The proposed method achieved superior performance to existing methods. Clinical relevance: This work proposes a new deep learning framework tailored to lymphoma recognition in pathological images of gastric mucosal biopsies, differentiating lymphoma, adenocarcinoma, and inflammation.
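Prototype-based classification replaces the softmax layer's decision with a nearest-prototype rule, and the patch-to-slide step fuses patch predictions into an image-level result. Both can be sketched minimally (illustrative only; GCPNet learns its prototypes and features jointly, and its fusion rule may differ from a plain majority vote):

```python
import math

def prototype_classify(feature, prototypes):
    """Nearest-prototype rule: assign the class whose prototype is closest
    to the feature vector in the embedding space."""
    def d(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda cls: d(feature, prototypes[cls]))

def slide_label(patch_labels):
    """Patch-to-slide fusion: majority vote over patch-level predictions."""
    return max(set(patch_labels), key=patch_labels.count)
```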
29
Bilal M, Jewsbury R, Wang R, AlGhamdi HM, Asif A, Eastwood M, Rajpoot N. An aggregation of aggregation methods in computational pathology. Med Image Anal 2023; 88:102885. [PMID: 37423055] [DOI: 10.1016/j.media.2023.102885]
Abstract
Image analysis and machine learning algorithms operating on multi-gigapixel whole-slide images (WSIs) often process a large number of tiles (sub-images) and require aggregating predictions from the tiles in order to predict WSI-level labels. In this paper, we present a review of existing literature on various types of aggregation methods with a view to help guide future research in the area of computational pathology (CPath). We propose a general CPath workflow with three pathways that consider multiple levels and types of data and the nature of computation to analyse WSIs for predictive modelling. We categorize aggregation methods according to the context and representation of the data, features of computational modules and CPath use cases. We compare and contrast different methods based on the principle of multiple instance learning, perhaps the most commonly used aggregation method, covering a wide range of CPath literature. To provide a fair comparison, we consider a specific WSI-level prediction task and compare various aggregation methods for that task. Finally, we conclude with a list of objectives and desirable attributes of aggregation methods in general, pros and cons of the various approaches, some recommendations and possible future directions.
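Three common tile-to-WSI aggregation schemes of the kind surveyed above (mean pooling, MIL-style max pooling, and a softmax-attention-weighted mean) can be sketched as follows (a simplified illustration: in attention-based MIL the attention weights are learned from tile features, not taken from the scores themselves):

```python
import math

def aggregate(tile_scores, method="mean"):
    """Aggregate per-tile positive-class scores into a WSI-level score."""
    if method == "mean":
        return sum(tile_scores) / len(tile_scores)
    if method == "max":                       # classic MIL max pooling
        return max(tile_scores)
    if method == "attention":                 # softmax-weighted mean
        exps = [math.exp(s) for s in tile_scores]
        z = sum(exps)
        return sum((w / z) * s for w, s in zip(exps, tile_scores))
    raise ValueError(method)
```

Max pooling reflects the MIL assumption that one positive tile makes the slide positive, while mean pooling assumes label evidence is diffuse; attention interpolates between the two.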
Affiliation(s)
- Mohsin Bilal, Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK; School of Computing, National University of Computer and Emerging Sciences, Islamabad, Pakistan
- Robert Jewsbury, Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Ruoyu Wang, Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Hammam M AlGhamdi, Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Amina Asif, Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Mark Eastwood, Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Nasir Rajpoot, Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK; The Alan Turing Institute, UK; Department of Pathology, University Hospitals Coventry and Warwickshire, UK
30
Carcagnì P, Leo M, Signore L, Distante C. An Investigation about Modern Deep Learning Strategies for Colon Carcinoma Grading. Sensors (Basel) 2023; 23:4556. [PMID: 37177764] [PMCID: PMC10181531] [DOI: 10.3390/s23094556]
Abstract
Developing computer-aided approaches for cancer diagnosis and grading is currently in increasing demand: such approaches could overcome intra- and inter-observer inconsistency, speed up the screening process, increase early diagnosis, and improve the accuracy and consistency of treatment-planning processes. Colorectal cancer (CRC) is the third most common cancer worldwide and the second most common in women. Grading CRC is a key task in planning appropriate treatments and estimating the response to them. Unfortunately, it has not yet been fully demonstrated how the most advanced models and methodologies of machine learning can impact this crucial task. This paper systematically investigates the use of advanced deep models (convolutional neural networks and transformer architectures) to improve colon carcinoma detection and grading from histological images. To the best of our knowledge, this is the first attempt at using transformer architectures and ensemble strategies to exploit deep learning paradigms for automatic colon cancer diagnosis. Results on the largest publicly available dataset demonstrated a substantial improvement with respect to the leading state-of-the-art methods. In particular, by exploiting a transformer architecture, it was possible to observe a 3% increase in accuracy in the detection task (two-class problem) and up to a 4% improvement in the grading task (three-class problem) by also integrating an ensemble strategy.
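The ensemble strategy mentioned above, in its simplest form, averages the class scores of several models before taking the argmax (a sketch only; the paper's exact combination rule may differ):

```python
def ensemble_predict(model_logits):
    """Average each model's class scores and return the winning class index.

    model_logits: one score vector per model, all over the same classes.
    """
    n_models = len(model_logits)
    n_classes = len(model_logits[0])
    avg = [sum(m[c] for m in model_logits) / n_models
           for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)
```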
Affiliation(s)
- Pierluigi Carcagnì
- Institute of Applied Sciences and Intelligent Systems (ISASI), National Research Council (CNR), Via Monteroni snc University Campus, 73100 Lecce, Italy
- Marco Leo
- Institute of Applied Sciences and Intelligent Systems (ISASI), National Research Council (CNR), Via Monteroni snc University Campus, 73100 Lecce, Italy
- Luca Signore
- Dipartimento di Ingegneria per L'Innovazione, Università del Salento, Via Monteorni snc University Campus, 73100 Lecce, Italy
- Cosimo Distante
- Institute of Applied Sciences and Intelligent Systems (ISASI), National Research Council (CNR), Via Monteroni snc University Campus, 73100 Lecce, Italy
- Dipartimento di Ingegneria per L'Innovazione, Università del Salento, Via Monteorni snc University Campus, 73100 Lecce, Italy
31
Lan J, Chen M, Wang J, Du M, Wu Z, Zhang H, Xue Y, Wang T, Chen L, Xu C, Han Z, Hu Z, Zhou Y, Zhou X, Tong T, Chen G. Using less annotation workload to establish a pathological auxiliary diagnosis system for gastric cancer. Cell Rep Med 2023; 4:101004. [PMID: 37044091 PMCID: PMC10140598 DOI: 10.1016/j.xcrm.2023.101004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2022] [Revised: 10/20/2022] [Accepted: 03/17/2023] [Indexed: 04/14/2023]
Abstract
Pathological diagnosis of gastric cancer requires pathologists to have extensive clinical experience. To help pathologists improve diagnostic accuracy and efficiency, we collected 1,514 cases of H&E-stained stomach specimens with complete diagnostic information to establish a pathological auxiliary diagnosis system based on deep learning. At the slide level, our system achieves a specificity of 0.8878 while maintaining a high sensitivity close to 1.0 on 269 biopsy specimens (147 malignancies) and 163 surgical specimens (80 malignancies). The classification accuracy of our system is 0.9034 at the slide level for 352 biopsy specimens (201 malignancies) from 50 medical centers. With the help of our system, the pathologists' average false-negative and false-positive rates on 100 biopsy specimens (50 malignancies) are reduced to 1/5 and 1/2 of the original rates, respectively. At the same time, the average uncertainty rate and the average diagnosis time are reduced by approximately 22% and 20%, respectively.
Affiliation(s)
- Junlin Lan
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Musheng Chen
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
- Jianchao Wang
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
- Min Du
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Zhida Wu
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
- Hejun Zhang
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
- Yuyang Xue
- School of Engineering, University of Edinburgh, Edinburgh EH8 9JU, UK
- Tao Wang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Lifan Chen
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
- Chaohui Xu
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Zixin Han
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Ziwei Hu
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Yuanbo Zhou
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Xiaogen Zhou
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Tong Tong
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China; Imperial Vision Technology, Fuzhou, Fujian 350100, China.
- Gang Chen
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China.
32
Javed S, Mahmood A, Qaiser T, Werghi N. Knowledge Distillation in Histology Landscape by Multi-Layer Features Supervision. IEEE J Biomed Health Inform 2023; 27:2037-2046. [PMID: 37021915 DOI: 10.1109/jbhi.2023.3237749] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
Automatic tissue classification is a fundamental task in computational pathology for profiling tumor micro-environments. Deep learning has advanced tissue classification performance, but at the cost of significant computational power. Shallow networks have also been trained end-to-end using direct supervision; however, their performance degrades because they fail to capture robust tissue heterogeneity. Knowledge distillation has recently been employed to improve the performance of shallow networks, used as student networks, by providing additional supervision from deep neural networks, used as teacher networks. In the current work, we propose a novel knowledge distillation algorithm to improve the performance of shallow networks for tissue phenotyping in histology images. For this purpose, we propose multi-layer feature distillation, in which a single layer in the student network receives supervision from multiple teacher layers. In the proposed algorithm, the feature maps of two layers are matched in size by a learnable multi-layer perceptron, and the distance between the feature maps of the two layers is minimized during training of the student network. The overall objective function is the sum of the losses over multiple layer combinations, weighted by a learnable attention-based parameter. The proposed algorithm is named Knowledge Distillation for Tissue Phenotyping (KDTP). Experiments are performed on five publicly available histology image classification datasets using several teacher-student network combinations within the KDTP algorithm. Our results demonstrate a significant performance increase in the student networks under the proposed KDTP algorithm compared to direct supervision-based training methods.
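The multi-layer supervision described in this abstract can be sketched in a few lines. The sketch below is a minimal, hypothetical NumPy illustration of a KDTP-style objective: random matrices stand in for the learnable MLPs that match feature-map sizes, the layer dimensions are invented, and the attention weights are initialized uniformly rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# One student layer (dim 64, hypothetical) supervised by three
# teacher layers of differing dimensions (128, 256, 512).
student_feat = rng.standard_normal(64)
teacher_feats = [rng.standard_normal(d) for d in (128, 256, 512)]

# Random matrices stand in for the learnable MLPs that match the
# student feature size to each teacher layer's size.
projections = [rng.standard_normal((d, 64)) * 0.1 for d in (128, 256, 512)]

# Learnable attention logits weighting each layer combination
# (initialized to zero here, i.e. uniform weights after softmax).
attn_logits = np.zeros(len(teacher_feats))
attn = np.exp(attn_logits) / np.exp(attn_logits).sum()

# Distillation loss: attention-weighted sum of mean squared
# distances between projected student features and teacher features.
per_layer = [np.mean((P @ student_feat - t) ** 2)
             for P, t in zip(projections, teacher_feats)]
kd_loss = float(np.dot(attn, per_layer))
print(kd_loss > 0)
```

In training, the gradient of `kd_loss` would flow into both the student network and the projection MLPs, while the attention logits learn which teacher layers are most informative.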
33
Wang H, Xian M, Vakanski A, Shareef B. SIAN: STYLE-GUIDED INSTANCE-ADAPTIVE NORMALIZATION FOR MULTI-ORGAN HISTOPATHOLOGY IMAGE SYNTHESIS. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2023; 2023:10.1109/isbi53787.2023.10230507. [PMID: 38572450 PMCID: PMC10989245 DOI: 10.1109/isbi53787.2023.10230507] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/05/2024]
Abstract
Existing deep neural networks for histopathology image synthesis cannot generate image styles that align with different organs, nor produce accurate boundaries for clustered nuclei. To address these issues, we propose a style-guided instance-adaptive normalization (SIAN) approach to synthesize realistic color distributions and textures for histopathology images from different organs. SIAN contains four phases: semantization, stylization, instantiation, and modulation. The first two phases synthesize image semantics and styles by using semantic maps and learned image style vectors. The instantiation module integrates geometrical and topological information and generates accurate nuclei boundaries. We validate the proposed approach on a multiple-organ dataset. Extensive experimental results demonstrate that the proposed method generates more realistic histopathology images than four state-of-the-art approaches for five organs. By incorporating synthetic images from the proposed approach into model training, an instance segmentation network can achieve state-of-the-art performance.
Affiliation(s)
- Haotian Wang
- Department of Computer Science, University of Idaho, USA
- Min Xian
- Department of Computer Science, University of Idaho, USA
- Bryar Shareef
- Department of Computer Science, University of Idaho, USA
34
Su L, Wang Z, Shi Y, Li A, Wang M. Local augmentation based consistency learning for semi-supervised pathology image classification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 232:107446. [PMID: 36871546 DOI: 10.1016/j.cmpb.2023.107446] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Revised: 02/17/2023] [Accepted: 02/23/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND AND OBJECTIVE Labeling pathology images is often costly and time-consuming, which is quite detrimental for supervised pathology image classification, which relies heavily on sufficient labeled training data. Exploring semi-supervised methods based on image augmentation and consistency regularization may effectively alleviate this problem. Nevertheless, traditional image-based augmentation (e.g., flipping) produces only a single enhancement per image, whereas combining multiple image sources may mix unimportant image regions, resulting in poor performance. In addition, the regularization losses used in these augmentation approaches typically enforce consistency of image-level predictions, and simply require each pair of predictions on augmented images to be bilaterally consistent, which may force pathology image features with better predictions to be wrongly aligned towards features with worse predictions. METHODS To tackle these problems, we propose a novel semi-supervised method called Semi-LAC for pathology image classification. Specifically, we first present a local augmentation technique that randomly applies a different augmentation to each local pathology patch, which boosts the diversity of pathology images and avoids mixing in unimportant regions from other images. Moreover, we propose a directional consistency loss that enforces consistency of both features and prediction results, improving the network's ability to obtain robust representations and accurate predictions. RESULTS The proposed method is evaluated on the Bioimaging2015 and BACH datasets, and extensive experiments show the superior performance of our Semi-LAC compared with state-of-the-art methods for pathology image classification.
CONCLUSIONS We conclude that the Semi-LAC method can effectively reduce the cost of annotating pathology images and, through the local augmentation technique and directional consistency loss, enhances the ability of classification networks to represent pathology images.
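The local augmentation idea, applying an independently chosen augmentation to each local patch rather than one global transform, can be illustrated with a minimal sketch. This is a generic illustration, not the paper's implementation; the patch size, the set of augmentations, and the grid layout are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_augment(img, patch=4):
    """Apply an independently chosen augmentation to each local patch."""
    out = img.copy()
    augs = [
        lambda p: p,           # identity
        lambda p: p[::-1, :],  # vertical flip
        lambda p: p[:, ::-1],  # horizontal flip
        lambda p: np.rot90(p), # 90-degree rotation
    ]
    h, w = img.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            f = augs[rng.integers(len(augs))]
            out[i:i + patch, j:j + patch] = f(img[i:i + patch, j:j + patch])
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
aug = local_augment(img)
# Patch-wise flips/rotations permute pixels but keep every value,
# so no content from other images is mixed in.
print(aug.shape)
```

Each patch receives its own transform, so two passes over the same image yield diverse views whose patch contents still come entirely from that image.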
Affiliation(s)
- Lei Su
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Zhi Wang
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Yi Shi
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Ao Li
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Minghui Wang
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China.
35
Liang M, Chen Q, Li B, Wang L, Wang Y, Zhang Y, Wang R, Jiang X, Zhang C. Interpretable classification of pathology whole-slide images using attention based context-aware graph convolutional neural network. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 229:107268. [PMID: 36495811 DOI: 10.1016/j.cmpb.2022.107268] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Revised: 11/23/2022] [Accepted: 11/23/2022] [Indexed: 06/17/2023]
Abstract
BACKGROUND AND OBJECTIVE Whole slide image (WSI) classification and lesion localization within giga-pixel slides are challenging tasks in computational pathology that require context-aware representations of histological features to adequately infer the nidus. Existing weakly supervised learning methods mainly treat different locations in the slide as independent regions and, under the i.i.d. assumption, cannot learn potential nonlinear interactions between instances; as a result, the model cannot effectively exploit context-aware information to predict WSI labels and locate the region of interest (ROI). METHODS Here, we propose an interpretable classification model named bidirectional Attention-based Multiple Instance Learning Graph Convolutional Network (ABMIL-GCN), which hierarchically aggregates context-aware instance features into a global representation in a topological fashion to predict slide labels and localize regions of lymph node metastasis in WSIs. RESULTS We verified the superiority of this method on the Camelyon16 dataset; the results show that the average predicted ACC and AUC of the proposed model after flooding optimization reach 90.89% and 0.9149, respectively. The average accuracy and AUC are improved by more than 7% and 4%, respectively, compared with existing state-of-the-art algorithms. CONCLUSIONS The results demonstrate that the context-aware GCN outperforms existing weakly supervised learning methods by introducing spatial correlations between neighboring image patches, which also addresses the 'accuracy-interpretability trade-off' problem. The framework provides a novel paradigm for the clinical application of computer-aided diagnosis and intelligent systems.
Affiliation(s)
- Meiyan Liang
- School of Physics and Electronic Engineering, Shanxi University, Taiyuan 030006, China.
- Qinghui Chen
- School of Physics and Electronic Engineering, Shanxi University, Taiyuan 030006, China
- Bo Li
- Department of Rehabilitation Treatment, Shanxi Rongjun Hospital, Taiyuan 030000, China
- Lin Wang
- Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Third Hospital of Shanxi Medical University, Taiyuan, 030032, China; Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Ying Wang
- School of Physics and Electronic Engineering, Shanxi University, Taiyuan 030006, China
- Yu Zhang
- School of Physics and Electronic Engineering, Shanxi University, Taiyuan 030006, China
- Ru Wang
- School of Physics and Electronic Engineering, Shanxi University, Taiyuan 030006, China
- Xing Jiang
- School of Physics and Electronic Engineering, Shanxi University, Taiyuan 030006, China
- Cunlin Zhang
- Beijing Key Laboratory for Terahertz Spectroscopy and Imaging, Key Laboratory of Terahertz, Optoelectronics, Ministry of Education, Capital Normal University, Beijing 100048, China
36
Karagoz MA, Akay B, Basturk A, Karaboga D, Nalbantoglu OU. An unsupervised transfer learning model based on convolutional auto encoder for non-alcoholic steatohepatitis activity scoring and fibrosis staging of liver histopathological images. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08252-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/30/2023]
37
Dogar GM, Shahzad M, Fraz MM. Attention augmented distance regression and classification network for nuclei instance segmentation and type classification in histology images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104199] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
38
Gao Z, Hong B, Li Y, Zhang X, Wu J, Wang C, Zhang X, Gong T, Zheng Y, Meng D, Li C. A semi-supervised multi-task learning framework for cancer classification with weak annotation in whole-slide images. Med Image Anal 2023; 83:102652. [PMID: 36327654 DOI: 10.1016/j.media.2022.102652] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2021] [Revised: 09/15/2022] [Accepted: 10/08/2022] [Indexed: 11/06/2022]
Abstract
Cancer region detection (CRD) and subtyping are two fundamental tasks in digital pathology image analysis. The development of data-driven models for CRD and subtyping on whole-slide images (WSIs) would mitigate the burden on pathologists and improve their diagnostic accuracy. However, existing models face two major limitations. First, they typically require large-scale datasets with precise annotations, which contradicts the original intention of reducing labor effort. Second, for the subtyping task, non-cancerous regions are treated the same as cancerous regions within a WSI, which confuses a subtyping model during training. To tackle the latter limitation, previous research proposed performing CRD first to rule out the non-cancerous regions, then training a subtyping model on the remaining cancerous patches. However, training the tasks separately ignores their interaction and propagates errors from the CRD task to the subtyping task. To address these issues and concurrently improve performance on both the CRD and subtyping tasks, we propose a semi-supervised multi-task learning (MTL) framework for cancer classification. Our framework consists of a backbone feature extractor, two task-specific classifiers, and a weight control mechanism. The backbone feature extractor is shared by the two task-specific classifiers, so that the interaction of the CRD and subtyping tasks can be captured. The weight control mechanism preserves the sequential relationship of the two tasks and guarantees error back-propagation from the subtyping task to the CRD task under the MTL framework. We train the overall framework in a semi-supervised setting, where the datasets involve only small quantities of annotations produced by our minimal point-based (min-point) annotation strategy. Extensive experiments on four large datasets with different cancer types demonstrate the effectiveness of the proposed framework in both accuracy and generalization.
Affiliation(s)
- Zeyu Gao
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Bangyang Hong
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Yang Li
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Xianli Zhang
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Jialun Wu
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Chunbao Wang
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Department of Pathology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an 710061, China
- Xiangrong Zhang
- School of Artificial Intelligence, Xidian University, Xi'an 710071, China
- Tieliang Gong
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Yefeng Zheng
- Tencent Jarvis Lab, Shenzhen, Guangdong 518075, China
- Deyu Meng
- School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China
- Chen Li
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China.
39
Graham S, Vu QD, Jahanifar M, Raza SEA, Minhas F, Snead D, Rajpoot N. One model is all you need: Multi-task learning enables simultaneous histology image segmentation and classification. Med Image Anal 2023; 83:102685. [PMID: 36410209 DOI: 10.1016/j.media.2022.102685] [Citation(s) in RCA: 33] [Impact Index Per Article: 16.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Revised: 10/20/2022] [Accepted: 11/03/2022] [Indexed: 11/13/2022]
Abstract
The recent surge in performance for image analysis of digitised pathology slides can largely be attributed to the advances in deep learning. Deep models can be used to initially localise various structures in the tissue and hence facilitate the extraction of interpretable features for biomarker discovery. However, these models are typically trained for a single task and therefore scale poorly as we wish to adapt the model for an increasing number of different tasks. Also, supervised deep learning models are very data hungry and therefore rely on large amounts of training data to perform well. In this paper, we present a multi-task learning approach for segmentation and classification of nuclei, glands, lumina and different tissue regions that leverages data from multiple independent data sources. While ensuring that our tasks are aligned by the same tissue type and resolution, we enable meaningful simultaneous prediction with a single network. As a result of feature sharing, we also show that the learned representation can be used to improve the performance of additional tasks via transfer learning, including nuclear classification and signet ring cell detection. As part of this work, we train our developed Cerberus model on a huge amount of data, consisting of over 600 thousand objects for segmentation and 440 thousand patches for classification. We use our approach to process 599 colorectal whole-slide images from TCGA, where we localise 377 million, 900 thousand and 2.1 million nuclei, glands and lumina respectively. We make this resource available to remove a major barrier in the development of explainable models for computational pathology.
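The core pattern behind a model like Cerberus, one shared encoder feeding several task-specific heads so that feature learning is shared across tasks, can be sketched abstractly. The dimensions, task names, and plain linear layers below are hypothetical stand-ins for illustration, not the actual architecture:

```python
import numpy as np

rng = np.random.default_rng(3)

# Shared backbone: a single random linear map + ReLU stands in for
# the encoder; input/feature dimensions are invented.
W_shared = rng.standard_normal((64, 32)) * 0.1

# Task-specific heads (class counts are invented for illustration).
heads = {
    "nuclei": rng.standard_normal((32, 6)) * 0.1,
    "gland": rng.standard_normal((32, 2)) * 0.1,
    "lumen": rng.standard_normal((32, 2)) * 0.1,
}

def forward(x):
    z = np.maximum(x @ W_shared, 0.0)  # shared representation
    return {task: z @ Wh for task, Wh in heads.items()}

x = rng.standard_normal(64)
outs = forward(x)
print({task: v.shape for task, v in outs.items()})
```

Because every head reads the same representation `z`, gradients from all tasks shape one backbone, which is also why such a representation transfers to additional downstream tasks.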
Affiliation(s)
- Simon Graham
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom; Histofy Ltd, United Kingdom.
- Quoc Dang Vu
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom
- Mostafa Jahanifar
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom
- Shan E Ahmed Raza
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom
- Fayyaz Minhas
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom
- David Snead
- Histofy Ltd, United Kingdom; Department of Pathology, University Hospitals Coventry & Warwickshire, United Kingdom
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom; Histofy Ltd, United Kingdom; Department of Pathology, University Hospitals Coventry & Warwickshire, United Kingdom
40
Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10372-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
41
Tharwat M, Sakr NA, El-Sappagh S, Soliman H, Kwak KS, Elmogy M. Colon Cancer Diagnosis Based on Machine Learning and Deep Learning: Modalities and Analysis Techniques. SENSORS (BASEL, SWITZERLAND) 2022; 22:9250. [PMID: 36501951 PMCID: PMC9739266 DOI: 10.3390/s22239250] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Accepted: 11/24/2022] [Indexed: 06/17/2023]
Abstract
The treatment and diagnosis of colon cancer are considered social and economic challenges due to its high mortality rates. Every year, around the world, almost half a million people contract cancer, including colon cancer. Determining the grade of colon cancer mainly depends on analyzing the structure of the glands within the tissue region, which has led to various screening tests that can be utilized to investigate polyp images and colorectal cancer. This article presents a comprehensive survey on the diagnosis of colon cancer. It covers many aspects related to colon cancer, such as its symptoms and grades, the available imaging modalities (particularly the histopathology images used for analysis), and common diagnosis systems. Furthermore, the most widely used datasets and performance evaluation metrics are discussed. We provide a comprehensive review of the current studies on colon cancer, classified into deep-learning (DL) and machine-learning (ML) techniques, and identify their main strengths and limitations. These techniques support identifying cancer at early stages, leading to earlier treatment and a lower mortality rate than when the disease is found only after symptoms develop. In addition, these methods can help prevent colorectal cancer from progressing through the removal of pre-malignant polyps, which can be achieved using screening tests that make the disease easier to diagnose. Finally, the existing challenges and future research directions that open the way for future work in this field are presented.
Affiliation(s)
- Mai Tharwat
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Nehal A. Sakr
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Shaker El-Sappagh
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Benha 13512, Egypt
- Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
- Hassan Soliman
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Kyung-Sup Kwak
- Department of Information and Communication Engineering, Inha University, Incheon 22212, Republic of Korea
- Mohammed Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
42
Zhou P, Cao Y, Li M, Ma Y, Chen C, Gan X, Wu J, Lv X, Chen C. HCCANet: histopathological image grading of colorectal cancer using CNN based on multichannel fusion attention mechanism. Sci Rep 2022; 12:15103. [PMID: 36068309 PMCID: PMC9448811 DOI: 10.1038/s41598-022-18879-1] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Accepted: 08/22/2022] [Indexed: 12/17/2022] Open
Abstract
Histopathological image analysis is the gold standard for pathologists to grade colorectal cancers of different differentiation types. However, diagnosis by pathologists is highly subjective and prone to misdiagnosis. In this study, we constructed a new attention mechanism named MCCBAM, based on channel and spatial attention mechanisms, and developed a computer-aided diagnosis (CAD) method based on a CNN and MCCBAM, called HCCANet. The study included 630 histopathology images denoised with Gaussian filtering, and gradient-weighted class activation mapping (Grad-CAM) was used to visualize HCCANet's regions of interest to improve its interpretability. The experimental results show that the proposed HCCANet model outperforms four advanced deep learning models (ResNet50, MobileNetV2, Xception, and DenseNet121) and four classical machine learning techniques (KNN, NB, RF, and SVM), achieving classification accuracies of 90.2%, 85%, and 86.7% for colorectal cancers with high, medium, and low differentiation levels, respectively, with an overall accuracy of 87.3% and an average AUC of 0.9. In addition, the MCCBAM constructed in this study outperforms several commonly used attention mechanisms (SAM, SENet, SKNet, Non_Local, CBAM, and BAM) on the same backbone network. In conclusion, the HCCANet model proposed in this study is feasible for postoperative adjuvant diagnosis and grading of colorectal cancer.
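The channel-plus-spatial gating that MCCBAM builds on (in the spirit of CBAM) can be illustrated with a small NumPy sketch; the pooling descriptors and the sequential channel-then-spatial ordering below are generic assumptions, not the exact MCCBAM design:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x):
    # x: (C, H, W). Pool over spatial dims, gate each channel.
    desc = x.mean(axis=(1, 2)) + x.max(axis=(1, 2))
    return sigmoid(desc)[:, None, None] * x

def spatial_attention(x):
    # Pool over channels, gate each spatial location.
    desc = x.mean(axis=0) + x.max(axis=0)
    return sigmoid(desc)[None, :, :] * x

feat = rng.standard_normal((8, 16, 16))
out = spatial_attention(channel_attention(feat))
print(out.shape)
```

Each gate lies in (0, 1), so the two stages rescale the feature map, emphasizing informative channels first and then informative spatial locations, without changing its shape.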
Affiliation(s)
- Panyun Zhou
- College of Software, Xinjiang University, Urumqi, 830046, China
- Yanzhen Cao
- The Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, 830011, China
- Min Li
- College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China; Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, 830046, China
| | - Yuhua Ma
- Department of Oncology, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, 200120, China.,Karamay Central Hospital of Xinjiang Karamay, Karamay, Xinjiang Uygur Autonomous Region, Department of Pathology, Karamay, 834000, China
| | - Chen Chen
- College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China.,Xinjiang Cloud Computing Application Laboratory, Karamay, 834099, China
| | - Xiaojing Gan
- The Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, 830011, China
| | - Jianying Wu
- College of Physics and Electronic Engineering, Xinjiang Normal University, Urumqi, 830054, China
| | - Xiaoyi Lv
- College of Software, Xinjiang University, Urumqi, 830046, China. .,College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China. .,Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, 830046, China. .,Xinjiang Cloud Computing Application Laboratory, Karamay, 834099, China. .,Key Laboratory of Software Engineering Technology, Xinjiang University, Urumqi, 830046, China.
| | - Cheng Chen
- College of Software, Xinjiang University, Urumqi, 830046, China.
| |
Collapse
43
Wu H, Pang KKY, Pang GKH, Au-Yeung RKH. A soft-computing based approach to overlapped cells analysis in histopathology images with genetic algorithm. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109279]
44
Shmatko A, Ghaffari Laleh N, Gerstung M, Kather JN. Artificial intelligence in histopathology: enhancing cancer research and clinical oncology. Nat Cancer 2022; 3:1026-1038. [PMID: 36138135] [DOI: 10.1038/s43018-022-00436-4]
Abstract
Artificial intelligence (AI) methods have multiplied our capabilities to extract quantitative information from digital histopathology images. AI is expected to reduce workload for human experts, improve the objectivity and consistency of pathology reports, and have a clinical impact by extracting hidden information from routinely available data. Here, we describe how AI can be used to predict cancer outcome, treatment response, genetic alterations and gene expression from digitized histopathology slides. We summarize the underlying technologies and emerging approaches, noting limitations, including the need for data sharing and standards. Finally, we discuss the broader implications of AI in cancer research and oncology.
Affiliation(s)
- Artem Shmatko: Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany; European Molecular Biology Laboratory, European Bioinformatics Institute, Cambridge, UK
- Moritz Gerstung: Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany; European Molecular Biology Laboratory, European Bioinformatics Institute, Cambridge, UK
- Jakob Nikolas Kather: Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany; Medical Oncology, National Center for Tumor Diseases, University Hospital Heidelberg, Heidelberg, Germany; Pathology and Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK; Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
45
Lai Q, Vong CM, Wong PK, Wang ST, Yan T, Choi IC, Yu HH. Multi-scale Multi-instance Multi-feature Joint Learning Broad Network (M3JLBN) for gastric intestinal metaplasia subtype classification. Knowl Based Syst 2022; 249:108960. [DOI: 10.1016/j.knosys.2022.108960]
46
Yan R, Yang Z, Li J, Zheng C, Zhang F. Divide-and-Attention Network for HE-Stained Pathological Image Classification. Biology (Basel) 2022; 11:982. [PMID: 36101363] [PMCID: PMC9311575] [DOI: 10.3390/biology11070982]
Abstract
Since pathological images have distinct characteristics that differ from natural images, the direct application of a general convolutional neural network cannot achieve good classification performance, especially for fine-grained classification problems such as pathological image grading. Inspired by the clinical experience that decomposing a pathological image into different components is beneficial for diagnosis, in this paper we propose a Divide-and-Attention Network (DANet) for Hematoxylin-and-Eosin (HE)-stained pathological image classification. The DANet uses a deep learning method to decompose a pathological image into nuclei and non-nuclei parts. With such decomposed images, the DANet first performs feature learning independently in each branch, and then focuses on the most important feature representation through a branch-selection attention module. In this way, the DANet can learn representative features with respect to different tissue structures and adaptively focus on the most important ones, thereby improving classification performance. In addition, we introduce deep canonical correlation analysis (DCCA) constraints in the feature fusion process of the different branches. The DCCA constraints play the role of branch-fusion attention, maximizing the correlation between branches and ensuring that the fused branches emphasize specific tissue structures. Experimental results on three datasets demonstrate the superiority of the DANet, with an average classification accuracy of 92.5% on breast cancer classification, 95.33% on colorectal cancer grading, and 91.6% on breast cancer grading.
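The quantity a DCCA-style fusion constraint drives up is the correlation between paired branch features. As a minimal illustration (plain Pearson correlation on two vectors, not the paper's deep CCA over learned projections):

```python
import math

def pearson_corr(x, y):
    """Pearson correlation between two feature vectors. A DCCA-style
    fusion constraint maximizes this kind of correlation between the
    (projected) features of two network branches, so the fused
    representation emphasizes structure both branches agree on."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Perfectly aligned branch features give a correlation of 1, anti-aligned features give -1; DCCA additionally learns the projections under which this correlation is maximized.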
Affiliation(s)
- Rui Yan: High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China; University of Chinese Academy of Sciences, Beijing 101408, China
- Zhidong Yang: High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China
- Jintao Li: High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China
- Chunhou Zheng: School of Artificial Intelligence, Anhui University, Hefei 230093, China
- Fa Zhang: High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China
47
A Novel Hybrid Convolutional Neural Network Approach for the Stomach Intestinal Early Detection Cancer Subtype Classification. Comput Intell Neurosci 2022; 2022:7325064. [PMID: 35785096] [PMCID: PMC9249475] [DOI: 10.1155/2022/7325064]
Abstract
Many types of cancer can have fatal effects on the human body. In general, cancer is the abnormal growth of cells in a part of the body and is named accordingly: skin cancer, breast cancer, uterine cancer, intestinal cancer, stomach cancer, and so on. Whatever the type, cancerous cells cause problems in the body ranging from minor symptoms to death. Cancer cells share common features, and our work exploits these common features for processing. Cancer has a significant death rate; however, it is frequently curable with simple surgery if detected in its early stages, so a quick and correct diagnosis can be extremely beneficial to both doctors and patients. In several medical domains, the performance of the latest deep-learning-based models is comparable to, or even exceeds, that of human specialists. We propose a novel methodology based on a convolutional neural network that may be used for the detection of almost all types of cancer. We collected datasets of different common cancer types from different sources and used 90% of the sample data for training; this was then reduced by 10%, and an 80% image set was used to validate the model. For testing, we then fed a sample dataset to the model and obtained the results. When compared with existing work, the results clearly show that the proposed model outperforms previous models.
48
Park Y, Kim M, Ashraf M, Ko YS, Yi MY. MixPatch: A New Method for Training Histopathology Image Classifiers. Diagnostics (Basel) 2022; 12:1493. [PMID: 35741303] [PMCID: PMC9221905] [DOI: 10.3390/diagnostics12061493]
Abstract
CNN-based image processing has been actively applied to histopathological analysis to detect and classify cancerous tumors automatically. However, CNN-based classifiers generally predict labels with overconfidence, which is a serious problem in the medical domain. The objective of this study is to propose a new training method, called MixPatch, designed to improve a CNN-based classifier by specifically addressing the prediction-uncertainty problem, and to examine its effectiveness in improving diagnosis performance in the context of histopathological image analysis. MixPatch generates and uses a new sub-training dataset, consisting of mixed patches and their predefined ground-truth labels, for every mini-batch. Mixed patches are generated from a small set of clean patches confirmed by pathologists, while their ground-truth labels are defined using a proportion-based soft labeling method. Our results, obtained using a large histopathological image dataset, show that the proposed method performs better and alleviates overconfidence more effectively than any other method examined in the study. More specifically, our model achieved 97.06% accuracy, an increase of 1.6% to 12.18%, and an expected calibration error of 0.76%, a decrease of 0.6% to 6.3%, relative to the other models. By specifically considering the mixed-region variation characteristics of histopathology images, MixPatch augments extant mixed-image methods for medical image analysis, in which prediction uncertainty is a crucial issue. The proposed method provides a new way to systematically alleviate the overconfidence problem of CNN-based classifiers and improve their prediction accuracy, contributing toward more calibrated and reliable histopathology image analysis.
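The proportion-based soft labeling step can be made concrete with a small sketch; the function and argument names here are illustrative, not taken from the paper's code:

```python
from collections import Counter

def proportion_soft_label(subpatch_labels, classes):
    """Soft label for a mixed patch: the fraction of its clean
    sub-patches belonging to each class (MixPatch-style proportion
    labeling), used as the training target instead of a one-hot label
    so the classifier learns calibrated, less overconfident outputs."""
    counts = Counter(subpatch_labels)
    total = len(subpatch_labels)
    return [counts.get(c, 0) / total for c in classes]

# A 2x2 mixed patch built from three tumor and one benign sub-patch:
# proportion_soft_label(["tumor", "tumor", "tumor", "benign"], ["benign", "tumor"])
# -> [0.25, 0.75]
```

Training against such fractional targets penalizes a classifier that assigns probability 1.0 to a patch that is only 75% tumor, which is how the method attacks the overconfidence problem.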
Affiliation(s)
- Youngjin Park: Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Mujin Kim: Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Murtaza Ashraf: Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Young Sin Ko: Pathology Center, Seegene Medical Foundation, Seoul 04805, Korea
- Mun Yong Yi (corresponding author): Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
49
Qaiser T, Lee CY, Vandenberghe M, Yeh J, Gavrielides MA, Hipp J, Scott M, Reischl J. Usability of deep learning and H&E images predict disease outcome-emerging tool to optimize clinical trials. NPJ Precis Oncol 2022; 6:37. [PMID: 35705792] [PMCID: PMC9200764] [DOI: 10.1038/s41698-022-00275-7]
Abstract
Understanding the factors that impact prognosis for cancer patients has high clinical relevance for treatment decisions and monitoring of disease outcome. Advances in artificial intelligence (AI) and digital pathology offer an exciting opportunity to capitalize on whole slide images (WSIs) of hematoxylin and eosin (H&E) stained tumor tissue for objective prognosis and prediction of response to targeted therapies. AI models often require hand-delineated annotations for effective training, which may not be readily available for larger datasets. In this study, we investigated whether AI models can be trained without region-level annotations, solely on patient-level survival data. We present a weakly supervised survival convolutional neural network (WSS-CNN) equipped with a visual attention mechanism for predicting overall survival. The inclusion of visual attention provides insights into regions of the tumor microenvironment with pathological interpretation, which may improve our understanding of the disease pathomechanism. We performed this analysis on two independent, multi-center patient datasets of lung carcinoma (publicly available data) and bladder urothelial carcinoma. We performed univariable and multivariable analyses and show that WSS-CNN features are prognostic of overall survival in both tumor indications. The presented results highlight the significance of computational pathology algorithms for predicting prognosis from H&E stained images alone and underpin the use of computational methods to improve the efficiency of clinical trial studies.
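The attention-based aggregation common to weakly supervised WSI models can be sketched as a softmax-weighted pooling of patch features into one slide-level vector. This is a generic illustration of the mechanism; in the actual WSS-CNN the per-patch scores are produced by a learned attention sub-network:

```python
import math

def attention_pool(patch_feats, scores):
    """Aggregate per-patch feature vectors into a single slide-level
    vector using softmax attention weights, so training needs only a
    patient-level label (e.g. survival). `scores` stand in for the
    output of a learned attention sub-network, one score per patch."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]  # softmax over patches
    dim = len(patch_feats[0])
    return [sum(w * f[d] for w, f in zip(weights, patch_feats)) for d in range(dim)]
```

With equal scores this reduces to mean pooling; as one patch's score dominates, the slide-level vector approaches that patch's features, and the learned weights double as the attention map used to highlight prognostic regions.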
Affiliation(s)
- Talha Qaiser: Precision Medicine and Biosamples, Oncology R&D, AstraZeneca, Cambridge, UK
- Joe Yeh: AetherAI, Taipei City, Taiwan
- Jason Hipp: Early Oncology, Oncology R&D, AstraZeneca, Cambridge, UK
- Marietta Scott: Precision Medicine and Biosamples, Oncology R&D, AstraZeneca, Cambridge, UK
- Joachim Reischl: Precision Medicine and Biosamples, Oncology R&D, AstraZeneca, Cambridge, UK
50
Laleh NG, Muti HS, Loeffler CML, Echle A, Saldanha OL, Mahmood F, Lu MY, Trautwein C, Langer R, Dislich B, Buelow RD, Grabsch HI, Brenner H, Chang-Claude J, Alwers E, Brinker TJ, Khader F, Truhn D, Gaisa NT, Boor P, Hoffmeister M, Schulz V, Kather JN. Benchmarking weakly-supervised deep learning pipelines for whole slide classification in computational pathology. Med Image Anal 2022; 79:102474. [DOI: 10.1016/j.media.2022.102474]
|