1
Wang YL, Gao S, Xiao Q, Li C, Grzegorzek M, Zhang YY, Li XH, Kang Y, Liu FH, Huang DH, Gong TT, Wu QJ. Role of artificial intelligence in digital pathology for gynecological cancers. Comput Struct Biotechnol J 2024; 24:205-212. PMID: 38510535; PMCID: PMC10951449; DOI: 10.1016/j.csbj.2024.03.007.
Abstract
The diagnosis of cancer is typically based on histopathological sections or biopsies on glass slides. Artificial intelligence (AI) approaches have greatly enhanced our ability to extract quantitative information from digital histopathology images amid the rapid growth of oncology data. Gynecological cancers are major diseases affecting women's health worldwide. They are characterized by high mortality and poor prognosis, underscoring the critical importance of early detection, treatment, and identification of prognostic factors. This review highlights the various clinical applications of AI in gynecological cancers using digitized histopathology slides. In particular, deep learning models have shown promise in accurate diagnosis, classification of histopathological subtypes, and prediction of treatment response and prognosis. Furthermore, integration with transcriptomics, proteomics, and other multi-omics techniques can provide valuable insights into the molecular features of diseases. Despite the considerable potential of AI, substantial challenges remain: further improvements in data acquisition and model optimization are required, and broader clinical applications, such as biomarker discovery, remain to be explored.
Affiliation(s)
- Ya-Li Wang
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Department of Information Center, The Fourth Affiliated Hospital of China Medical University, Shenyang, China
- Song Gao
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Qian Xiao
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany
- Ying-Ying Zhang
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Liaoning Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Xiao-Han Li
- Department of Pathology, Shengjing Hospital of China Medical University, Shenyang, China
- Ye Kang
- Department of Pathology, Shengjing Hospital of China Medical University, Shenyang, China
- Fang-Hua Liu
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Liaoning Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Dong-Hui Huang
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Liaoning Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Ting-Ting Gong
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Qi-Jun Wu
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Liaoning Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- NHC Key Laboratory of Advanced Reproductive Medicine and Fertility (China Medical University), National Health Commission, Shenyang, China
2
Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. PMID: 38420608; PMCID: PMC10900832; DOI: 10.1016/j.jpi.2023.100357.
Abstract
Computational Pathology (CPath) is an interdisciplinary science that applies computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive computer-aided diagnosis (CAD) system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question about the direction and trends of CPath. In this article, we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, in order to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath development as a cycle of stages that must be cohesively linked to address the challenges of such a multidisciplinary science. We overview this cycle from the perspectives of data-centric, model-centric, and application-centric problems. Finally, we sketch the remaining challenges and provide directions for future technical development and clinical integration of CPath. For updated information on this survey review and access to the original model-card repository, please refer to GitHub; an updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
3
Wang H, Jin Q, Li S, Liu S, Wang M, Song Z. A comprehensive survey on deep active learning in medical image analysis. Med Image Anal 2024; 95:103201. PMID: 38776841; DOI: 10.1016/j.media.2024.103201.
Abstract
Deep learning has achieved widespread success in medical image analysis, leading to an increasing demand for large-scale expert-annotated medical image datasets. Yet the high cost of annotating medical images severely hampers the development of deep learning in this field. To reduce annotation costs, active learning aims to select the most informative samples for annotation and to train high-performance models with as few labeled samples as possible. In this survey, we review the core methods of active learning, including the evaluation of informativeness and the sampling strategy. For the first time, we provide a detailed summary of the integration of active learning with other label-efficient techniques, such as semi-supervised and self-supervised learning. We also summarize active learning works specifically tailored to medical image analysis. Additionally, we conduct a thorough experimental comparison of the performance of different active learning methods in medical image analysis. Finally, we offer our perspectives on the future trends and challenges of active learning and its applications in medical image analysis. An accompanying paper list and code for the comparative analysis are available at https://github.com/LightersWang/Awesome-Active-Learning-for-Medical-Image-Analysis.
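As a concrete illustration of the informativeness-plus-sampling loop this survey reviews, here is a minimal least-confidence query strategy. The probability array is a hypothetical toy pool, not data or code from the paper:

```python
import numpy as np

def uncertainty_sampling(probs, budget):
    """Least-confidence active learning: query the samples whose predicted
    class probability is lowest (i.e. where the model is least certain)."""
    confidence = probs.max(axis=1)          # confidence of the predicted class
    return np.argsort(confidence)[:budget]  # indices of the most uncertain samples

# Hypothetical unlabeled pool: class probabilities from the current model.
probs = np.array([
    [0.95, 0.05],  # confident prediction
    [0.55, 0.45],  # uncertain
    [0.80, 0.20],
    [0.51, 0.49],  # most uncertain
])
query = uncertainty_sampling(probs, budget=2)  # -> indices [3, 1]
```

In a full loop, the queried samples would be sent for expert annotation, added to the labeled set, and the model retrained; margin- and entropy-based variants simply replace the confidence score.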
Affiliation(s)
- Haoran Wang
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai 200032, China; Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, China
- Qiuye Jin
- Computational Bioscience Research Center (CBRC), King Abdullah University of Science and Technology (KAUST), Thuwal 23955, Saudi Arabia
- Shiman Li
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai 200032, China; Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, China
- Siyu Liu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai 200032, China; Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, China
- Manning Wang
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai 200032, China; Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, China
- Zhijian Song
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai 200032, China; Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, China
4
Lecuelle J, Truntzer C, Basile D, Laghi L, Greco L, Ilie A, Rageot D, Emile JF, Bibeau F, Taïeb J, Derangere V, Lepage C, Ghiringhelli F. Machine learning evaluation of immune infiltrate through digital tumour score allows prediction of survival outcome in a pooled analysis of three international stage III colon cancer cohorts. EBioMedicine 2024; 105:105207. PMID: 38880067; DOI: 10.1016/j.ebiom.2024.105207.
Abstract
BACKGROUND T-cell immune infiltrates are robust prognostic variables in localised colon cancer. Evaluation of prognosis using artificial intelligence is an emerging field. We evaluated whether machine learning analysis improved prediction of patient outcome in comparison with analysis of T-cell infiltrate alone or in association with clinical variables. METHODS We used data from two phase III clinical trials (Prodige-13 and PETACC08) and one retrospective Italian cohort (HARMONY). Cohorts were split into training (N = 692), internal validation (N = 297) and external validation (N = 672) sets. Tumour slides were stained with an anti-CD3 monoclonal antibody. The CD3 Machine Learning (CD3ML) score was computed using graphical parameters within the tumour tiles obtained from CD3 slides. CD3 infiltrates in the tumour core and invasive margin were automatically detected. Associations of CD3 infiltrates and CD3ML with 5-year Disease-Free Survival (DFS) were examined using univariate and multivariable Cox regression survival models. FINDINGS CD3 density in both the invasive margin and the tumour core was significantly associated with DFS in the different sets. Similarly, the CD3ML score was significantly associated with DFS in all sets. CD3 assessment did not provide added value on top of CD3ML assessment (Likelihood Ratio Test (LRT), p = 0.13). In contrast, CD3ML improved prediction of DFS when combined with clinical risk stage (LRT, p = 0.001). Stratified by clinical risk score (High or Low), patients with a low CD3ML score had better DFS. INTERPRETATION In all tested sets, machine learning analysis of tumour cells improved prediction of prognosis compared to clinical parameters. Adding tumour-infiltrating lymphocyte assessment did not improve prognostic determination. FUNDING This research received no external funding.
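The association between a risk score such as CD3ML and disease-free survival is commonly summarized, alongside Cox regression, by Harrell's concordance index. A minimal pure-Python sketch on invented toy data (not the study's code, cohorts, or pipeline):

```python
def concordance_index(times, events, risk):
    """Harrell's C-index: the fraction of comparable patient pairs in which
    the patient with the higher risk score has the earlier observed event."""
    concordant = ties = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is comparable if i had an event before j's time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

times = [5, 10, 15, 20]      # toy follow-up times (e.g. months)
events = [1, 1, 0, 1]        # 1 = event observed, 0 = censored
risk = [0.9, 0.7, 0.2, 0.1]  # hypothetical CD3ML-like risk scores
cindex = concordance_index(times, events, risk)  # -> 1.0 (perfect ranking)
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking; likelihood-ratio tests such as those in the abstract instead compare nested Cox models directly.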
Affiliation(s)
- Julie Lecuelle
- Centre de Recherche INSERM LNC-UMR1231, Dijon, France; Cancer Biology Transfer Platform, Centre Georges-François Leclerc, Dijon, France
- Caroline Truntzer
- Centre de Recherche INSERM LNC-UMR1231, Dijon, France; Cancer Biology Transfer Platform, Centre Georges-François Leclerc, Dijon, France; Genetic and Immunology Medical Institute, Dijon, France
- Debora Basile
- Department of Medical Oncology, San Giovanni di Dio Hospital, Crotone, Italy
- Luigi Laghi
- Department of Medicine and Surgery, University of Parma, Parma, Italy; Molecular Gastroenterology Laboratory, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Luana Greco
- Molecular Gastroenterology Laboratory, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Alis Ilie
- Centre de Recherche INSERM LNC-UMR1231, Dijon, France; Cancer Biology Transfer Platform, Centre Georges-François Leclerc, Dijon, France
- David Rageot
- Centre de Recherche INSERM LNC-UMR1231, Dijon, France; Cancer Biology Transfer Platform, Centre Georges-François Leclerc, Dijon, France
- Jean-François Emile
- Paris-Saclay University, Versailles SQY University (UVSQ), EA4340-BECCOH, Assistance Publique-Hôpitaux de Paris (AP-HP), Ambroise Paré Hospital, Smart Imaging, Service de Pathologie, Boulogne, France
- Fréderic Bibeau
- Service d'Anatomie et Cytologie Pathologiques, CHU Côte de Nacre, Normandie Université, Caen, France; Department of Pathology, Besançon University Hospital, Besançon, France
- Julien Taïeb
- Institut du Cancer Paris Cancer Research for Personalized Medicine, Assistance Publique-Hôpitaux de Paris (AP-HP), Hôpital Européen Georges Pompidou, Paris, France; Centre de Recherche des Cordeliers, Institut National de la Santé et de la Recherche Médicale (INSERM), Centre National de la Recherche Scientifique, Sorbonne Université, Université Sorbonne Paris Cité, Université de Paris, Paris, France; Department of Gastroenterology and Digestive Oncology, Georges Pompidou European Hospital, AP-HP Centre, Université Paris Cité, Paris, France
- Valentin Derangere
- Centre de Recherche INSERM LNC-UMR1231, Dijon, France; Cancer Biology Transfer Platform, Centre Georges-François Leclerc, Dijon, France; Genetic and Immunology Medical Institute, Dijon, France; University of Burgundy Franche-Comté, Dijon, France
- Come Lepage
- Centre de Recherche INSERM LNC-UMR1231, Dijon, France; University of Burgundy Franche-Comté, Dijon, France; Fédération Francophone de Cancérologie Digestive, Centre de Randomisation Gestion Analyse, EPICAD LNC 1231, Dijon, France; Service d'Hépato-gastroentérologie et Oncologie digestive, CHU de Dijon, France
- François Ghiringhelli
- Centre de Recherche INSERM LNC-UMR1231, Dijon, France; Cancer Biology Transfer Platform, Centre Georges-François Leclerc, Dijon, France; Genetic and Immunology Medical Institute, Dijon, France; University of Burgundy Franche-Comté, Dijon, France; Department of Medical Oncology, Centre Georges-François Leclerc, Dijon, France
5
Guo X, Zhang Y, Peng L, Wang Y, He CW, Li K, Hao K, Li K, Wang Z, Huang H, Miao X. Collagen synthase P4HA3 as a novel biomarker for colorectal cancer correlates with prognosis and immune infiltration. Heliyon 2024; 10:e31695. PMID: 38832271; PMCID: PMC11145334; DOI: 10.1016/j.heliyon.2024.e31695.
Abstract
Objective In this study, we aimed to determine whether prolyl 4-hydroxylase subunit alpha-3 (P4HA3) could be used as a biomarker for the diagnosis of colorectal cancer (CRC) as well as for determining prognosis. Methods We used The Cancer Genome Atlas (TCGA) database to analyze P4HA3 expression in CRC and further investigated the association between P4HA3 and clinicopathological parameters, immune infiltration, and prognosis in patients with CRC. Enrichment analysis was conducted to investigate the potential biological role of P4HA3 in CRC. To verify the results of the TCGA analysis, we performed immunohistochemical staining of 180 clinical CRC tissue samples to examine the relationship of P4HA3 expression with lymphocyte infiltration and immune checkpoint expression. Results P4HA3 expression was significantly higher in CRC tissues and was associated with a higher degree of malignancy and poorer prognosis in CRC. Enrichment analysis indicated that P4HA3 may be associated with the epithelial-mesenchymal transition process and the immune response. Immunohistochemical staining showed that high P4HA3 expression was associated with high infiltration levels of CD8+ and Foxp3+ tumor-infiltrating lymphocytes (TILs) and high PD-1/PD-L1 expression. Lastly, patients with CRC co-expressing P4HA3 and PD-1 had a significantly worse prognosis. Conclusion High expression of P4HA3 is associated with adverse clinical features and immune cell infiltration in CRC, and it has the potential to serve as a biomarker for predicting CRC prognosis.
Affiliation(s)
- Xiaohuan Guo
- School of Laboratory Medicine and Life Sciences, Wenzhou Medical University, Wenzhou, 325035, China
- Yu Zhang
- Department of Gastroenterology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital), Hangzhou Medical College, Hangzhou, 310014, China
- Lina Peng
- School of Laboratory Medicine and Life Sciences, Wenzhou Medical University, Wenzhou, 325035, China
- Yaling Wang
- School of Laboratory Medicine and Life Sciences, Wenzhou Medical University, Wenzhou, 325035, China
- Cheng-Wen He
- Laboratory Medicine Center, Department of Transfusion Medicine, Zhejiang Provincial People's Hospital (Affiliated People's Hospital), Hangzhou Medical College, Hangzhou, Zhejiang, 310014, China
- Kaixuan Li
- Laboratory Medicine Center, Department of Transfusion Medicine, Zhejiang Provincial People's Hospital (Affiliated People's Hospital), Hangzhou Medical College, Hangzhou, Zhejiang, 310014, China
- Ke Hao
- Laboratory Medicine Center, Department of Transfusion Medicine, Zhejiang Provincial People's Hospital (Affiliated People's Hospital), Hangzhou Medical College, Hangzhou, Zhejiang, 310014, China
- Kaiqiang Li
- Laboratory Medicine Center, Department of Transfusion Medicine, Zhejiang Provincial People's Hospital (Affiliated People's Hospital), Hangzhou Medical College, Hangzhou, Zhejiang, 310014, China
- Zhen Wang
- Laboratory Medicine Center, Department of Transfusion Medicine, Zhejiang Provincial People's Hospital (Affiliated People's Hospital), Hangzhou Medical College, Hangzhou, Zhejiang, 310014, China
- Haishan Huang
- School of Laboratory Medicine and Life Sciences, Wenzhou Medical University, Wenzhou, 325035, China
- Xiaolin Miao
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
6
Claudio Quiros A, Coudray N, Yeaton A, Yang X, Liu B, Le H, Chiriboga L, Karimkhan A, Narula N, Moore DA, Park CY, Pass H, Moreira AL, Le Quesne J, Tsirigos A, Yuan K. Mapping the landscape of histomorphological cancer phenotypes using self-supervised learning on unannotated pathology slides. Nat Commun 2024; 15:4596. PMID: 38862472; DOI: 10.1038/s41467-024-48666-7.
Abstract
Cancer diagnosis and management depend upon the extraction of complex information from microscopy images by pathologists, which requires time-consuming expert interpretation prone to human bias. Supervised deep learning approaches have proven powerful, but are inherently limited by the cost and quality of annotations used for training. Therefore, we present Histomorphological Phenotype Learning, a self-supervised methodology requiring no labels and operating via the automatic discovery of discriminatory features in image tiles. Tiles are grouped into morphologically similar clusters which constitute an atlas of histomorphological phenotypes (HP-Atlas), revealing trajectories from benign to malignant tissue via inflammatory and reactive phenotypes. These clusters have distinct features which can be identified using orthogonal methods, linking histologic, molecular and clinical phenotypes. Applied to lung cancer, we show that they align closely with patient survival, with histopathologically recognised tumor types and growth patterns, and with transcriptomic measures of immunophenotype. These properties are maintained in a multi-cancer study.
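The tile-grouping step described above — clustering image-tile embeddings into morphologically similar groups — can be illustrated with plain Lloyd's k-means on toy two-dimensional "embeddings". This is only a sketch: the paper's actual pipeline uses self-supervised features and a more involved clustering procedure.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's k-means, standing in for clustering tile embeddings
    into candidate histomorphological phenotype groups."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # random initial centroids
    for _ in range(iters):
        # Assign each embedding to its nearest centroid (squared Euclidean).
        dists = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(1)
        # Move each centroid to the mean of its assigned embeddings.
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(0)
    return labels

# Two well-separated toy "tile embedding" clusters.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = kmeans(X, k=2)
```

In the paper's framework, each resulting cluster would then be inspected for its histologic, molecular, and clinical correlates; here the clusters simply recover the two toy groups.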
Affiliation(s)
- Adalberto Claudio Quiros
- School of Computing Science, University of Glasgow, Glasgow, Scotland, UK
- School of Cancer Sciences, University of Glasgow, Glasgow, Scotland, UK
- Nicolas Coudray
- Applied Bioinformatics Laboratories, NYU Grossman School of Medicine, New York, NY, USA
- Department of Cell Biology, NYU Grossman School of Medicine, New York, NY, USA
- Department of Medicine, Division of Precision Medicine, NYU Grossman School of Medicine, New York, NY, USA
- Anna Yeaton
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
- Xinyu Yang
- School of Computing Science, University of Glasgow, Glasgow, Scotland, UK
- Bojing Liu
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Solna, Sweden
- Hortense Le
- Department of Medicine, Division of Precision Medicine, NYU Grossman School of Medicine, New York, NY, USA
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
- Luis Chiriboga
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
- Afreen Karimkhan
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
- Navneet Narula
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
- David A Moore
- Department of Cellular Pathology, University College London Hospital, London, UK
- Cancer Research UK Lung Cancer Centre of Excellence, University College London Cancer Institute, London, UK
- Christopher Y Park
- Department of Medicine, Division of Precision Medicine, NYU Grossman School of Medicine, New York, NY, USA
- Harvey Pass
- Department of Cardiothoracic Surgery, NYU Grossman School of Medicine, New York, NY, USA
- Andre L Moreira
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
- John Le Quesne
- School of Cancer Sciences, University of Glasgow, Glasgow, Scotland, UK
- Cancer Research UK Scotland Institute, Glasgow, Scotland, UK
- Queen Elizabeth University Hospital, Greater Glasgow and Clyde NHS Trust, Glasgow, Scotland, UK
- Aristotelis Tsirigos
- Applied Bioinformatics Laboratories, NYU Grossman School of Medicine, New York, NY, USA
- Department of Medicine, Division of Precision Medicine, NYU Grossman School of Medicine, New York, NY, USA
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
- Ke Yuan
- School of Computing Science, University of Glasgow, Glasgow, Scotland, UK
- School of Cancer Sciences, University of Glasgow, Glasgow, Scotland, UK
- Cancer Research UK Scotland Institute, Glasgow, Scotland, UK
7
Hilgers L, Ghaffari Laleh N, West NP, Westwood A, Hewitt KJ, Quirke P, Grabsch HI, Carrero ZI, Matthaei E, Loeffler CML, Brinker TJ, Yuan T, Brenner H, Brobeil A, Hoffmeister M, Kather JN. Automated curation of large-scale cancer histopathology image datasets using deep learning. Histopathology 2024; 84:1139-1153. PMID: 38409878; DOI: 10.1111/his.15159.
Abstract
BACKGROUND Artificial intelligence (AI) has numerous applications in pathology, supporting diagnosis and prognostication in cancer. However, most AI models are trained on highly selected data, typically one tissue slide per patient. In reality, especially for large surgical resection specimens, dozens of slides can be available for each patient. Manually sorting and labelling whole-slide images (WSIs) is a very time-consuming process, hindering the direct application of AI to the collected tissue samples from large cohorts. In this study we addressed this issue by developing a deep-learning (DL)-based method for automatic curation of large pathology datasets with several slides per patient. METHODS We collected multiple large multicentric datasets of colorectal cancer histopathological slides from the United Kingdom (FOXTROT, N = 21,384 slides; CR07, N = 7985 slides) and Germany (DACHS, N = 3606 slides). These datasets contained multiple types of tissue slides, including bowel resection specimens, endoscopic biopsies, lymph node resections, immunohistochemistry-stained slides, and tissue microarrays. We developed, trained, and tested a deep convolutional neural network model to predict the type of slide from the slide overview (thumbnail) image. The primary statistical endpoint was the macro-averaged area under the receiver operating characteristic curve (AUROC) for detection of the type of slide. RESULTS In the primary dataset (FOXTROT), the algorithm achieved a high classification performance, with an AUROC of 0.995 (95% confidence interval [CI]: 0.994-0.996), and was able to accurately predict the type of slide from the thumbnail image alone. In the two external test cohorts (CR07, DACHS), AUROCs of 0.982 (95% CI: 0.979-0.985) and 0.875 (95% CI: 0.864-0.887) were observed, indicating the generalizability of the trained model on unseen datasets. With a confidence threshold of 0.95, the model reached an accuracy of 94.6% (7331 classified cases) in CR07 and 85.1% (2752 classified cases) in the DACHS cohort. CONCLUSION Our findings show that the low-resolution thumbnail image is sufficient to accurately classify the type of slide in digital pathology. This can support researchers in making the vast resource of existing pathology archives accessible to modern AI models with only minimal manual annotations.
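The study's primary endpoint, a macro-averaged AUROC for a multi-class slide-type classifier, is computed per class one-vs-rest and then averaged. A self-contained numpy sketch on invented toy predictions (no tie handling; illustrative only, not the authors' evaluation code):

```python
import numpy as np

def auroc(y_true, scores):
    """Binary AUROC via the rank (Mann-Whitney U) formulation, assuming no ties."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def macro_auroc(y_true, probs):
    """One-vs-rest AUROC averaged over classes (unweighted macro average)."""
    return float(np.mean([auroc((y_true == c).astype(int), probs[:, c])
                          for c in range(probs.shape[1])]))

# Toy two-class example: a classifier that ranks every case correctly.
y = np.array([0, 0, 1, 1])
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]])
score = macro_auroc(y, probs)  # -> 1.0
```

Thresholding the maximum class probability (0.95 in the paper) then trades coverage for accuracy: cases below the threshold are deferred to manual review rather than auto-labelled.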
Affiliation(s)
- Lars Hilgers
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Narmin Ghaffari Laleh
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Nicholas P West
- Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Alice Westwood
- Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Katherine J Hewitt
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Philip Quirke
- Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Heike I Grabsch
- Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Department of Pathology, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Zunamys I Carrero
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Emylou Matthaei
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Chiara M L Loeffler
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Titus J Brinker
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tanwei Yuan
- Division of Clinical Epidemiology and Aging Research, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Hermann Brenner
- Division of Clinical Epidemiology and Aging Research, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Division of Preventive Oncology, German Cancer Research Center (DKFZ) and National Center for Tumor Diseases (NCT), Heidelberg, Germany
- German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Alexander Brobeil
- Institute of Pathology, University Hospital Heidelberg, Heidelberg, Germany
- Tissue Bank, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
- Michael Hoffmeister
- Division of Clinical Epidemiology and Aging Research, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Jakob Nikolas Kather
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
| |
Collapse
|
8
|
Duan L, He Y, Guo W, Du Y, Yin S, Yang S, Dong G, Li W, Chen F. Machine learning-based pathomics signature of histology slides as a novel prognostic indicator in primary central nervous system lymphoma. J Neurooncol 2024; 168:283-298. [PMID: 38557926 PMCID: PMC11147825 DOI: 10.1007/s11060-024-04665-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2024] [Accepted: 03/26/2024] [Indexed: 04/04/2024]
Abstract
PURPOSE To develop and validate a pathomics signature for predicting the outcomes of primary central nervous system lymphoma (PCNSL). METHODS In this study, 132 whole-slide images (WSIs) of 114 patients with PCNSL were enrolled. Quantitative features of hematoxylin and eosin (H&E)-stained slides were extracted using CellProfiler. A pathomics signature was established and validated. Cox regression analysis, receiver operating characteristic (ROC) curves, calibration curves, decision curve analysis (DCA), and net reclassification improvement (NRI) were performed to assess its significance and performance. RESULTS In total, 802 features were extracted using a fully automated pipeline. Six machine-learning classifiers demonstrated high accuracy in distinguishing malignant neoplasms. The pathomics signature remained a significant predictor of overall survival (OS) and progression-free survival (PFS) in the training cohort (OS: HR 7.423, p < 0.001; PFS: HR 2.143, p = 0.022) and the independent validation cohort (OS: HR 4.204, p = 0.017; PFS: HR 3.243, p = 0.005). A significantly lower response rate to initial treatment was found in the high Path-score group (19/35, 54.29%) compared with the low Path-score group (16/70, 22.86%; p < 0.001). The DCA and NRI analyses confirmed that the nomogram showed incremental performance compared with existing models. The ROC curve demonstrated a relatively sensitive and specific profile for the nomogram (1-, 2-, and 3-year AUC = 0.862, 0.932, and 0.927, respectively). CONCLUSION As a novel, non-invasive, and convenient approach, the newly developed pathomics signature is a powerful predictor of OS and PFS in PCNSL and might be a potential predictive indicator of therapeutic response.
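At its core, a pathomics signature of this kind is a weighted combination of slide-level features followed by a risk-group split. A minimal sketch, with purely illustrative coefficients and feature values rather than the published signature:

```python
from statistics import median

def path_score(features, weights, intercept=0.0):
    """Linear pathomics signature: weighted sum of slide-level features."""
    return intercept + sum(w * f for w, f in zip(weights, features))

def stratify(scores):
    """Median split of signature scores into low/high Path-score groups."""
    cut = median(scores)
    return ["high" if s > cut else "low" for s in scores]

weights = [0.8, -0.5, 0.3]  # illustrative coefficients, not the fitted model
cohort = [[1.2, 0.1, 2.0], [0.2, 1.5, 0.1], [2.5, 0.0, 1.0], [0.1, 2.0, 0.3]]
scores = [path_score(f, weights) for f in cohort]
groups = stratify(scores)
```

The resulting groups are then compared for OS and PFS, e.g. with Cox regression, as in the study.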
Collapse
Affiliation(s)
- Ling Duan
- Department of Neuro-Oncology, Cancer Center, Beijing Tiantan Hospital, Capital Medical University, No.119 West Nansihuan Road, Beijing, 100070, China
| | - Yongqi He
- Department of Neuro-Oncology, Cancer Center, Beijing Tiantan Hospital, Capital Medical University, No.119 West Nansihuan Road, Beijing, 100070, China
| | - Wenhui Guo
- Department of Neuro-Oncology, Cancer Center, Beijing Tiantan Hospital, Capital Medical University, No.119 West Nansihuan Road, Beijing, 100070, China
| | - Yanru Du
- Department of Pathology, Beijing Tiantan Hospital, Capital Medical University, No.119 West Nansihuan Road, Beijing, 100070, China
| | - Shuo Yin
- Department of Neuro-Oncology, Cancer Center, Beijing Tiantan Hospital, Capital Medical University, No.119 West Nansihuan Road, Beijing, 100070, China
| | - Shoubo Yang
- Department of Neuro-Oncology, Cancer Center, Beijing Tiantan Hospital, Capital Medical University, No.119 West Nansihuan Road, Beijing, 100070, China
| | - Gehong Dong
- Department of Pathology, Beijing Tiantan Hospital, Capital Medical University, No.119 West Nansihuan Road, Beijing, 100070, China.
| | - Wenbin Li
- Department of Neuro-Oncology, Cancer Center, Beijing Tiantan Hospital, Capital Medical University, No.119 West Nansihuan Road, Beijing, 100070, China.
| | - Feng Chen
- Department of Neuro-Oncology, Cancer Center, Beijing Tiantan Hospital, Capital Medical University, No.119 West Nansihuan Road, Beijing, 100070, China.
| |
Collapse
|
9
|
Halder A, Gharami S, Sadhu P, Singh PK, Woźniak M, Ijaz MF. Implementing vision transformer for classifying 2D biomedical images. Sci Rep 2024; 14:12567. [PMID: 38821977 PMCID: PMC11143185 DOI: 10.1038/s41598-024-63094-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2024] [Accepted: 05/24/2024] [Indexed: 06/02/2024] Open
Abstract
In recent years, the rapid growth of medical imaging data has led to the development of machine learning algorithms for a wide range of healthcare applications. The MedMNISTv2 dataset, a comprehensive benchmark for 2D biomedical image classification, encompasses diverse medical imaging modalities such as fundus camera, breast ultrasound, colon pathology, and blood cell microscopy. Highly accurate classification on these datasets is crucial for identifying diseases and determining the course of treatment. This research paper presents a comprehensive analysis of four subsets within the MedMNISTv2 dataset: BloodMNIST, BreastMNIST, PathMNIST and RetinaMNIST. These subsets span diverse data modalities and sample sizes, and were selected to analyze the model's efficiency across modalities. The study assesses the Vision Transformer model's ability to capture the intricate patterns and features crucial for medical image classification and thereby substantially surpass the benchmark metrics. The methodology includes pre-processing the input images, followed by training the ViT-base-patch16-224 model on the mentioned datasets. The performance of the model is assessed using key metrics and by comparing the classification accuracies achieved with the benchmark accuracies. With the assistance of ViT, the new benchmarks achieved for BloodMNIST, BreastMNIST, PathMNIST and RetinaMNIST are 97.90%, 90.38%, 94.62% and 57%, respectively. The study highlights the promise of Vision Transformer models in medical image analysis, paving the way for their adoption and further exploration in healthcare applications, aiming to enhance diagnostic accuracy and assist medical professionals in clinical decision-making.
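The ViT-base-patch16-224 model named above begins by cutting each 224 x 224 image into non-overlapping 16 x 16 patches that are flattened into token vectors before linear projection. A minimal sketch of that patching step on nested Python lists:

```python
def extract_patches(image, patch=16):
    """Split an H x W x C image (nested lists) into flattened,
    non-overlapping patch vectors, as in ViT-base-patch16."""
    h = len(image)
    patches = []
    for i in range(0, h, patch):
        for j in range(0, len(image[0]), patch):
            # Flatten one patch: rows -> pixels -> channels.
            vec = [ch
                   for row in image[i:i + patch]
                   for px in row[j:j + patch]
                   for ch in px]
            patches.append(vec)
    return patches

# A 224 x 224 RGB input yields (224/16)^2 = 196 tokens of 16*16*3 = 768 values,
# the token count and pre-projection patch dimension of ViT-base-patch16-224.
img = [[[0, 0, 0] for _ in range(224)] for _ in range(224)]
tokens = extract_patches(img)
```

In the real model these 768-value vectors are linearly projected, given positional embeddings, and fed to the transformer encoder.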
Collapse
Affiliation(s)
- Arindam Halder
- Department of Information Technology, Jadavpur University, Jadavpur University Salt Lake Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Kolkata, West Bengal, 700106, India
| | - Sanghita Gharami
- Department of Information Technology, Jadavpur University, Jadavpur University Salt Lake Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Kolkata, West Bengal, 700106, India
| | - Priyangshu Sadhu
- Department of Information Technology, Jadavpur University, Jadavpur University Salt Lake Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Kolkata, West Bengal, 700106, India
| | - Pawan Kumar Singh
- Department of Information Technology, Jadavpur University, Jadavpur University Salt Lake Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Kolkata, West Bengal, 700106, India
- Metharath University, 99, Moo 10, Bang Toei, Sam Khok, 12160, Pathum Thani, Thailand
| | - Marcin Woźniak
- Faculty of Applied Mathematics, Silesian University of Technology, Kaszubska 23, 44-100, Gliwice, Poland.
| | - Muhammad Fazal Ijaz
- School of IT and Engineering, Melbourne Institute of Technology, Melbourne, 3000, Australia.
| |
Collapse
|
10
|
Baheti B, Innani S, Nasrallah M, Bakas S. Prognostic stratification of glioblastoma patients by unsupervised clustering of morphology patterns on whole slide images furthering our disease understanding. Front Neurosci 2024; 18:1304191. [PMID: 38831756 PMCID: PMC11146603 DOI: 10.3389/fnins.2024.1304191] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Accepted: 04/25/2024] [Indexed: 06/05/2024] Open
Abstract
Introduction Glioblastoma (GBM) is a highly aggressive malignant tumor of the central nervous system that displays varying molecular and morphological profiles, leading to challenging prognostic assessments. Stratifying GBM patients according to overall survival (OS) from H&E-stained whole slide images (WSI) using advanced computational methods is challenging but has direct clinical implications. Methods This work focuses on GBM (IDH-wildtype, CNS WHO Gr.4) cases, identified from the TCGA-GBM and TCGA-LGG collections after considering the 2021 WHO classification criteria. The proposed approach starts with patch extraction in each WSI, followed by comprehensive patch-level curation to discard artifactual content, e.g., glass reflections, pen markings, dust on the slide, and tissue tearing. Each patch is then computationally described as a feature vector defined by a pre-trained VGG16 convolutional neural network. Principal component analysis provides a feature representation of reduced dimensionality, further facilitating identification of distinct groups of morphology patterns via unsupervised k-means clustering. Results The optimal number of clusters, according to cluster reproducibility and separability, is automatically determined based on the Rand index and silhouette coefficient, respectively. Our proposed approach achieved prognostic stratification accuracy of 83.33% on a multi-institutional independent unseen hold-out test set, with sensitivity and specificity of 83.33%. Discussion We hypothesize that the quantification of these clusters of morphology patterns reflects the tumor's spatial heterogeneity and yields prognostically relevant information for distinguishing between short and long survivors using a decision tree classifier. The interpretability analysis of the obtained results can contribute to furthering and quantifying our understanding of GBM and potentially improving our diagnostic and prognostic predictions.
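The unsupervised grouping of patch features can be illustrated with a plain k-means implementation; the 2-D points below stand in for the PCA-reduced VGG16 feature vectors, and the first-k seeding is a simplification of the real pipeline:

```python
def kmeans(points, k, iters=20):
    """Plain k-means on 2-D feature vectors; the first k points seed the centroids."""
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        labels = [min(range(k),
                      key=lambda c: (p[0] - centroids[c][0]) ** 2
                                    + (p[1] - centroids[c][1]) ** 2)
                  for p in points]
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(x) / len(members) for x in zip(*members)]
    return labels

# Two well-separated groups of patch features end up in distinct clusters.
pts = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
labels = kmeans(pts, 2)
```

In the paper, the cluster count k is then chosen by reproducibility (Rand index) and separability (silhouette coefficient), and per-patient cluster proportions feed the decision tree classifier.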
Collapse
Affiliation(s)
- Bhakti Baheti
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, United States
- Center for Artificial Intelligence and Data Science for Integrated Diagnostics (AI2D) and Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, United States
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
| | - Shubham Innani
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, United States
- Center for Artificial Intelligence and Data Science for Integrated Diagnostics (AI2D) and Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, United States
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
| | - MacLean Nasrallah
- Center for Artificial Intelligence and Data Science for Integrated Diagnostics (AI2D) and Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, United States
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
| | - Spyridon Bakas
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, United States
- Center for Artificial Intelligence and Data Science for Integrated Diagnostics (AI2D) and Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, United States
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Department of Computer Science, Luddy School of Informatics, Computing, and Engineering, Indiana University, Indianapolis, IN, United States
| |
Collapse
|
11
|
Ahn B, Moon D, Kim HS, Lee C, Cho NH, Choi HK, Kim D, Lee JY, Nam EJ, Won D, An HJ, Kwon SY, Shin SJ, Jung HR, Kwon D, Park H, Kim M, Cha YJ, Park H, Lee Y, Noh S, Lee YM, Choi SE, Kim JM, Sung SH, Park E. Histopathologic image-based deep learning classifier for predicting platinum-based treatment responses in high-grade serous ovarian cancer. Nat Commun 2024; 15:4253. [PMID: 38762636 PMCID: PMC11102549 DOI: 10.1038/s41467-024-48667-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2023] [Accepted: 05/09/2024] [Indexed: 05/20/2024] Open
Abstract
Platinum-based chemotherapy is the cornerstone treatment for female high-grade serous ovarian carcinoma (HGSOC), but choosing an appropriate treatment for patients hinges on their responsiveness to it. Currently, no available biomarkers can promptly predict responses to platinum-based treatment. Therefore, we developed the Pathologic Risk Classifier for HGSOC (PathoRiCH), a histopathologic image-based classifier. PathoRiCH was trained on an in-house cohort (n = 394) and validated on two independent external cohorts (n = 284 and n = 136). The PathoRiCH-predicted favorable and poor response groups show significantly different platinum-free intervals in all three cohorts. Combining PathoRiCH with molecular biomarkers provides an even more powerful tool for the risk stratification of patients. The decisions of PathoRiCH are explained through visualization and a transcriptomic analysis, which bolster the reliability of our model's decisions. PathoRiCH exhibits better predictive performance than current molecular biomarkers. PathoRiCH will provide a solid foundation for developing an innovative tool to transform the current diagnostic pipeline for HGSOC.
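Platinum-free intervals of the predicted response groups are typically compared with survival curves. A minimal Kaplan-Meier sketch on hypothetical follow-up data (assuming no tied event times, which the full estimator would also handle):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of, e.g., the platinum-free interval.
    events: 1 = progression observed, 0 = censored. Assumes no tied times."""
    at_risk = len(times)
    surv, curve = 1.0, []
    for t, e in sorted(zip(times, events)):
        if e:
            surv *= (at_risk - 1) / at_risk  # step down at each observed event
            curve.append((t, surv))
        at_risk -= 1                          # censored cases leave the risk set
    return curve

# Hypothetical follow-up times in months; the patient at month 6 is censored.
curve = kaplan_meier([2, 4, 6, 8], [1, 1, 0, 1])
```

Comparing the curves of the PathoRiCH-predicted favorable and poor response groups (e.g., with a log-rank test) is what "significantly different platinum-free intervals" refers to.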
Collapse
Affiliation(s)
- Byungsoo Ahn
- Department of Pathology, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Damin Moon
- Artificial Intelligence Research Center, JLK Inc., Seoul, South Korea
| | - Hyun-Soo Kim
- Department of Pathology and Translational Genomics, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea
| | - Chung Lee
- Department of Pathology, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Nam Hoon Cho
- Department of Pathology, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Heung-Kook Choi
- Artificial Intelligence Research Center, JLK Inc., Seoul, South Korea
| | - Dongmin Kim
- Artificial Intelligence Research Center, JLK Inc., Seoul, South Korea
| | - Jung-Yun Lee
- Department of Obstetrics and Gynecology, Institute of Women's Life Medical Science, Yonsei University College of Medicine, Seoul, South Korea
| | - Eun Ji Nam
- Department of Obstetrics and Gynecology, Institute of Women's Life Medical Science, Yonsei University College of Medicine, Seoul, South Korea
| | - Dongju Won
- Department of Laboratory Medicine, Yonsei University College of Medicine, Seoul, South Korea
| | - Hee Jung An
- Department of Pathology, CHA Bundang Medical Center, CHA University School of Medicine, Seongnam, South Korea
| | - Sun Young Kwon
- Department of Pathology, Keimyung University School of Medicine, Daegu, South Korea
| | - Su-Jin Shin
- Department of Pathology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Hye Ra Jung
- Department of Pathology, Keimyung University School of Medicine, Daegu, South Korea
| | - Dohee Kwon
- Department of Pathology, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Heejung Park
- Department of Pathology, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Milim Kim
- Department of Pathology, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Yoon Jin Cha
- Department of Pathology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Institute of Breast Cancer Precision Medicine, Yonsei University College of Medicine, Seoul, South Korea
| | - Hyunjin Park
- Department of Pathology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Yangkyu Lee
- Department of Pathology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Songmi Noh
- Department of Diagnostic Pathology, Gangnam CHA Medical Center, CHA University College of Medicine, Seoul, South Korea
| | - Yong-Moon Lee
- Department of Pathology, Dankook University School of Medicine, Cheonan, South Korea
| | - Sung-Eun Choi
- Department of Pathology, CHA Bundang Medical Center, CHA University School of Medicine, Seongnam, South Korea
| | - Ji Min Kim
- Department of Pathology, Ewha Womans University, Seoul, South Korea
| | - Sun Hee Sung
- Department of Pathology, Ewha Womans University, Seoul, South Korea
| | - Eunhyang Park
- Department of Pathology, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea.
| |
Collapse
|
12
|
Uddin AH, Chen YL, Akter MR, Ku CS, Yang J, Por LY. Colon and lung cancer classification from multi-modal images using resilient and efficient neural network architectures. Heliyon 2024; 10:e30625. [PMID: 38742084 PMCID: PMC11089372 DOI: 10.1016/j.heliyon.2024.e30625] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2024] [Revised: 04/02/2024] [Accepted: 04/30/2024] [Indexed: 05/16/2024] Open
Abstract
Automatic classification of colon and lung cancer images is crucial for early detection and accurate diagnostics. However, there is room for improvement to enhance accuracy, ensuring better diagnostic precision. This study introduces two novel dense architectures (D1 and D2) and emphasizes their effectiveness in classifying colon and lung cancer from diverse images. It also highlights their resilience, efficiency, and superior performance across multiple datasets. These architectures were tested on various types of datasets, including NCT-CRC-HE-100K (set of 100,000 non-overlapping image patches from hematoxylin and eosin (H&E) stained histological images of human colorectal cancer (CRC) and normal tissue), CRC-VAL-HE-7K (set of 7180 image patches from N = 50 patients with colorectal adenocarcinoma, no overlap with patients in NCT-CRC-HE-100K), LC25000 (Lung and Colon Cancer Histopathological Image), and IQ-OTHNCCD (Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases), showcasing their effectiveness in classifying colon and lung cancers from histopathological and Computed Tomography (CT) scan images. This underscores the multi-modal image classification capability of the proposed models. Moreover, the study addresses imbalanced datasets, particularly in CRC-VAL-HE-7K and IQ-OTHNCCD, with a specific focus on model resilience and robustness. To assess overall performance, the study conducted experiments in different scenarios. The D1 model achieved an impressive 99.80 % accuracy on the NCT-CRC-HE-100K dataset, with a Jaccard Index (J) of 0.8371, a Matthew's Correlation Coefficient (MCC) of 0.9073, a Cohen's Kappa (Kp) of 0.9057, and a Critical Success Index (CSI) of 0.8213. When subjected to 10-fold cross-validation on LC25000, the D1 model averaged (avg) 99.96 % accuracy (avg J, MCC, Kp, and CSI of 0.9993, 0.9987, 0.9853, and 0.9990), surpassing recent reported performances. 
Furthermore, the ensemble of D1 and D2 reached 93 % accuracy (J, MCC, Kp, and CSI of 0.7556, 0.8839, 0.8796, and 0.7140) on the IQ-OTHNCCD dataset, exceeding recent benchmarks and aligning with other reported results. Efficiency evaluations were conducted in various scenarios. For instance, training on only 10 % of LC25000 resulted in high accuracy rates of 99.19 % (J, MCC, Kp, and CSI of 0.9840, 0.9898, 0.9898, and 0.9837) (D1) and 99.30 % (J, MCC, Kp, and CSI of 0.9863, 0.9913, 0.9913, and 0.9861) (D2). On NCT-CRC-HE-100K, D2 achieved an impressive 99.53 % accuracy (J, MCC, Kp, and CSI of 0.9906, 0.9946, 0.9946, and 0.9906) when trained on only 30 % of the dataset and tested on the remaining 70 %. When tested on CRC-VAL-HE-7K, D1 and D2 achieved 95 % accuracy (J, MCC, Kp, and CSI of 0.8845, 0.9455, 0.9452, and 0.8745) and 96 % accuracy (J, MCC, Kp, and CSI of 0.8926, 0.9504, 0.9503, and 0.8798), respectively, outperforming previously reported results and aligning closely with others. Lastly, training D2 on just 10 % of NCT-CRC-HE-100K and testing on CRC-VAL-HE-7K significantly outperformed the InceptionV3, Xception, and DenseNet201 benchmarks, achieving an accuracy rate of 82.98 % (J, MCC, Kp, and CSI of 0.7227, 0.8095, 0.8081, and 0.6671). In addition, using explainable AI algorithms such as Grad-CAM, Grad-CAM++, Score-CAM, and Faster Score-CAM, along with their emphasized versions, we visualized the features from the last layer of DenseNet201 for histopathological as well as CT-scan image samples. The proposed dense models, with their multi-modality, robustness, and efficiency in cancer image classification, hold the promise of significant advancements in medical diagnostics. They have the potential to revolutionize early cancer detection and improve healthcare accessibility worldwide.
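The J, MCC, Kp, and CSI values reported above all derive from the confusion matrix; for a binary task the CSI coincides with the Jaccard index. A sketch with an illustrative 2 x 2 matrix (not the paper's counts):

```python
import math

def binary_metrics(tp, fp, fn, tn):
    """Accuracy, Jaccard index (= CSI for binary tasks), Matthews correlation
    coefficient, and Cohen's kappa from a 2x2 confusion matrix."""
    n = tp + fp + fn + tn
    acc = (tp + tn) / n
    jaccard = tp / (tp + fp + fn)  # identical to the Critical Success Index
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_chance = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
    kappa = (acc - p_chance) / (1 - p_chance)
    return acc, jaccard, mcc, kappa

acc, j, mcc, kappa = binary_metrics(tp=40, fp=10, fn=10, tn=40)
```

Multi-class variants (as needed for LC25000's five classes) are usually obtained by averaging per-class values or using the generalized MCC/kappa formulas.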
Collapse
Affiliation(s)
- A. Hasib Uddin
- Department of Computer Science and Engineering, Khwaja Yunus Ali University, Enayetpur, Chouhali, Sirajganj, 6751, Bangladesh
| | - Yen-Lin Chen
- Department of Computer Science and Information Engineering, National Taipei University of Technology, Taipei, 106344, Taiwan
| | - Miss Rokeya Akter
- Department of Computer Science and Engineering, Khwaja Yunus Ali University, Enayetpur, Chouhali, Sirajganj, 6751, Bangladesh
| | - Chin Soon Ku
- Department of Computer Science, Universiti Tunku Abdul Rahman, Kampar, 31900, Malaysia
| | - Jing Yang
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603, Kuala Lumpur, Malaysia
| | - Lip Yee Por
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603, Kuala Lumpur, Malaysia
| |
Collapse
|
13
|
Gao M, Jiang H, Hu Y, Ren Q, Xie Z, Liu J. Suppressing label noise in medical image classification using mixup attention and self-supervised learning. Phys Med Biol 2024; 69:105026. [PMID: 38636495 DOI: 10.1088/1361-6560/ad4083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2023] [Accepted: 04/18/2024] [Indexed: 04/20/2024]
Abstract
Deep neural networks (DNNs) have been widely applied in medical image classification and achieve remarkable classification performance. These achievements heavily depend on large-scale, accurately annotated training data. However, label noise is inevitably introduced during medical image annotation, as the labeling process relies heavily on the expertise and experience of annotators. Meanwhile, DNNs suffer from overfitting noisy labels, which degrades model performance. Therefore, in this work, we devise a noise-robust training approach to mitigate the adverse effects of noisy labels in medical image classification. Specifically, we incorporate contrastive learning and intra-group mixup attention strategies into vanilla supervised learning. Contrastive learning for the feature extractor helps to enhance the visual representation of DNNs. The intra-group mixup attention module constructs groups and assigns self-attention weights to group-wise samples, and subsequently interpolates massive noise-suppressed samples through a weighted mixup operation. We conduct comparative experiments on both synthetic and real-world noisy medical datasets under various noise levels. Rigorous experiments validate that our noise-robust method with contrastive learning and mixup attention can effectively handle label noise and is superior to state-of-the-art methods. An ablation study also shows that both components contribute to boosting model performance. The proposed method demonstrates its capability of curbing label noise and has clear potential for real-world clinical applications.
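The weighted mixup operation at the heart of the intra-group module is a convex combination of two samples and their labels. In the paper the mixing weight comes from the self-attention scores; here it is a fixed lambda for illustration:

```python
def mixup(x1, y1, x2, y2, lam):
    """Convex combination of two feature vectors and their one-hot labels.
    lam in [0, 1]; in the paper it is derived from group-wise self-attention."""
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

# Mixing a clean-looking sample heavily (lam = 0.7) with a suspect one
# softens the possibly-wrong label instead of trusting it outright.
x, y = mixup([1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], 0.7)
```

The interpolated pairs act as noise-suppressed training samples: a mislabeled example contributes only a fraction of its (possibly wrong) label.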
Collapse
Affiliation(s)
- Mengdi Gao
- College of Chemistry and Life Science, Beijing University of Technology, Beijing, People's Republic of China
- Beijing International Science and Technology Cooperation Base for Intelligent Physiological Measurement and Clinical Transformation, Beijing, People's Republic of China
| | - Hongyang Jiang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, People's Republic of China
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen 518055, People's Republic of China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, People's Republic of China
| | - Yan Hu
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, People's Republic of China
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen 518055, People's Republic of China
| | - Qiushi Ren
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, People's Republic of China
| | - Zhaoheng Xie
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, People's Republic of China
| | - Jiang Liu
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, People's Republic of China
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen 518055, People's Republic of China
| |
Collapse
|
14
|
Liu Y, Chen W, Ruan R, Zhang Z, Wang Z, Guan T, Lin Q, Tang W, Deng J, Wang Z, Li G. Deep learning based digital pathology for predicting treatment response to first-line PD-1 blockade in advanced gastric cancer. J Transl Med 2024; 22:438. [PMID: 38720336 PMCID: PMC11077733 DOI: 10.1186/s12967-024-05262-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2024] [Accepted: 04/29/2024] [Indexed: 05/12/2024] Open
Abstract
BACKGROUND Advanced unresectable gastric cancer (GC) patients were previously treated with chemotherapy alone as the first-line therapy. However, with the Food and Drug Administration's (FDA) 2022 approval of a programmed cell death protein 1 (PD-1) inhibitor combined with chemotherapy as the first-line treatment for advanced unresectable GC, patients have benefited significantly. However, the substantial costs and potential adverse effects necessitate precise patient selection. In recent years, the advent of deep learning (DL) has revolutionized the medical field, particularly in predicting tumor treatment responses. Our study utilizes DL to analyze pathological images, aiming to predict the response to first-line PD-1 combined chemotherapy for advanced-stage GC. METHODS In this multicenter retrospective analysis, hematoxylin and eosin (H&E)-stained slides were collected from advanced GC patients across four medical centers. Treatment response was evaluated according to iRECIST 1.1 criteria after first-line PD-1 immunotherapy combined with chemotherapy. Three DL models were employed in an ensemble approach to create the immune checkpoint inhibitors Response Score (ICIsRS) as a novel histopathological biomarker derived from whole-slide images (WSIs). RESULTS Analyzing 148,181 patches from 313 WSIs of 264 advanced GC patients, the ensemble model exhibited superior predictive accuracy, leading to the creation of ICIsNet. The model demonstrated robust performance across four testing datasets, achieving AUC values of 0.92, 0.95, 0.96, and 1, respectively. The boxplot constructed from the ICIsRS reveals statistically significant disparities between the favorable-response and poor-response groups (all p-values ≤ 0.001).
CONCLUSION ICIsRS, a DL-derived biomarker from WSIs, effectively predicts the response of advanced GC patients to PD-1 combined chemotherapy, offering a novel approach to personalized treatment planning and enabling more individualized, potentially more effective treatment strategies based on each patient's predicted response.
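The reported AUC values can be computed rank-wise (the Mann-Whitney formulation) without fitting an ROC curve. A sketch on hypothetical responder and non-responder scores such as an ICIsRS-like biomarker would produce:

```python
def auroc(scores_pos, scores_neg):
    """Probability that a randomly chosen responder scores above a randomly
    chosen non-responder (Mann-Whitney formulation of the AUC); ties count half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos
               for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Perfect separation gives AUC = 1.0 (as in one of the testing datasets);
# an uninformative score gives 0.5.
perfect = auroc([0.9, 0.8, 0.7], [0.2, 0.3])
chance = auroc([0.5, 0.5], [0.5, 0.5])
```

This pairwise-ranking view explains why an AUC of 1 means every favorable-response patient was scored above every poor-response patient.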
Collapse
Affiliation(s)
- Yifan Liu
- Department of Gastrointestinal Surgery, First Affiliated Hospital of Sun Yat-sen University, Zhongshan 2nd Street, No. 58, Guangzhou, 510080, Guangdong, China
- Wei Chen
- Guangdong Provincial Key Laboratory of Digestive Cancer Research, Digestive Diseases Center, Scientific Research Center, The Seventh Affiliated Hospital of Sun Yat-sen University, Shenzhen, Guangdong, China
- Ruiwen Ruan
- Department of Oncology, First Affiliated Hospital of Nanchang University, Nanchang, 330006, Jiangxi Province, China
- Zhimei Zhang
- Department of Pathology, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Zhixiong Wang
- Department of Gastrointestinal Surgery, First Affiliated Hospital of Sun Yat-sen University, Zhongshan 2nd Street, No. 58, Guangzhou, 510080, Guangdong, China
- Tianpei Guan
- Department of Gastrointestinal Surgery, Affiliated Cancer Hospital & Institute of Guangzhou Medical University, Guangzhou, Guangdong, China
- Qi Lin
- Department of Gastrointestinal Surgery, First Affiliated Hospital of Sun Yat-sen University, Zhongshan 2nd Street, No. 58, Guangzhou, 510080, Guangdong, China
- Wei Tang
- Department of Gastrointestinal Surgery, First Affiliated Hospital of Sun Yat-sen University, Zhongshan 2nd Street, No. 58, Guangzhou, 510080, Guangdong, China
- Jun Deng
- Department of Oncology, First Affiliated Hospital of Nanchang University, Nanchang, 330006, Jiangxi Province, China
- Zhao Wang
- Department of Gastrointestinal Surgery, First Affiliated Hospital of Sun Yat-sen University, Zhongshan 2nd Street, No. 58, Guangzhou, 510080, Guangdong, China
- Guanghua Li
- Department of Gastrointestinal Surgery, First Affiliated Hospital of Sun Yat-sen University, Zhongshan 2nd Street, No. 58, Guangzhou, 510080, Guangdong, China
15
Han G, Guo W, Zhang H, Jin J, Gan X, Zhao X. Sample self-selection using dual teacher networks for pathological image classification with noisy labels. Comput Biol Med 2024; 174:108489. [PMID: 38640633 DOI: 10.1016/j.compbiomed.2024.108489] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2023] [Revised: 04/02/2024] [Accepted: 04/15/2024] [Indexed: 04/21/2024]
Abstract
Deep neural networks (DNNs) enable advanced image processing but depend on large quantities of high-quality labeled data. Noisy labels significantly degrade DNN performance. In the medical field, where model accuracy is crucial and labels for pathological images are scarce and expensive to obtain, the need to handle noisy data is even more urgent. Deep networks exhibit a memorization effect: they tend to fit clean labels first. Early stopping is therefore highly effective for learning with noisy labels. Previous research has often concentrated on developing robust loss functions or imposing training constraints to mitigate the impact of noisy labels; however, such approaches frequently result in underfitting. We propose using knowledge distillation to slow the learning process of the target network rather than preventing late-stage training from being affected by noisy labels. In this paper, we introduce a sample self-selection strategy based on early stopping to filter out most of the noisy data. Additionally, we employ distillation training with dual teacher networks to ensure steady learning of the student network. Experimental results show that our method outperforms current state-of-the-art methods for handling noisy labels on both synthetic and real-world noisy datasets. In particular, on the real-world pathological image dataset Chaoyang, the highest classification accuracy increased by 2.39%. Our method leverages the model's predictions over the training history to select cleaner subsets of the data and retrains on them, significantly mitigating the impact of noisy labels on model performance.
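As a rough illustration of the early-stopping-based sample self-selection idea (not the authors' exact procedure; the keep ratio and epoch window below are invented for the sketch), one can rank samples by their average loss during early training, when the network has mostly fit clean labels:

```python
import numpy as np

def select_clean_samples(loss_history, keep_ratio=0.5, early_epochs=2):
    """Keep the samples with the lowest average loss over the early
    epochs; these are the most likely to carry clean labels."""
    early_mean = loss_history[:early_epochs].mean(axis=0)
    n_keep = int(len(early_mean) * keep_ratio)
    return np.argsort(early_mean)[:n_keep]

# Toy loss history (epochs x samples); samples 1 and 3 stay high-loss
# early on, suggesting their labels are noisy.
history = np.array([[0.1, 0.9, 0.2, 0.8],
                    [0.1, 0.8, 0.1, 0.9]])
clean_idx = select_clean_samples(history)
```

In the paper's full pipeline the retained subset would then be used for distillation with the dual teacher networks; the sketch covers only the selection step.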
Affiliation(s)
- Gang Han
- School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China; School of Electronic and Information Engineering, Taizhou University, Taizhou 318000, China
- Wenping Guo
- School of Electronic and Information Engineering, Taizhou University, Taizhou 318000, China
- Haibo Zhang
- School of Electronic and Information Engineering, Taizhou University, Taizhou 318000, China
- Jie Jin
- School of Electronic and Information Engineering, Taizhou University, Taizhou 318000, China
- Xingli Gan
- School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China
- Xiaoming Zhao
- School of Electronic and Information Engineering, Taizhou University, Taizhou 318000, China
16
Kazemi A, Rasouli-Saravani A, Gharib M, Albuquerque T, Eslami S, Schüffler PJ. A systematic review of machine learning-based tumor-infiltrating lymphocytes analysis in colorectal cancer: Overview of techniques, performance metrics, and clinical outcomes. Comput Biol Med 2024; 173:108306. [PMID: 38554659 DOI: 10.1016/j.compbiomed.2024.108306] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2023] [Revised: 03/04/2024] [Accepted: 03/12/2024] [Indexed: 04/02/2024]
Abstract
The incidence of colorectal cancer (CRC), one of the deadliest cancers worldwide, is increasing. Tumor microenvironment (TME) features such as tumor-infiltrating lymphocytes (TILs) can have a crucial impact on diagnosis and treatment decisions for patients with CRC. Although clinical studies have shown that TILs reflect an improved host immune response and a better prognosis, inter-observer agreement for quantifying TILs is imperfect. Incorporating machine learning (ML)-based applications into clinical routine may improve diagnostic reliability. Recently, ML has shown potential for advancing routine clinical procedures. We aim to systematically review ML-based TILs analysis in CRC histological images. Deep learning (DL) and non-DL techniques can aid pathologists in identifying TILs, and automated TIL scores are associated with patient outcomes. However, a large multi-institutional CRC dataset with a diverse and multi-ethnic population is necessary to generalize ML methods.
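Automated TIL quantification pipelines of the kind reviewed here typically classify tissue patches and then derive a density score; a toy version (the patch labels and density definition are illustrative, not taken from any specific study in the review) could be:

```python
def til_density(patch_labels):
    """Fraction of non-background patches classified as lymphocyte-rich:
    a crude slide-level TIL score."""
    tissue = [p for p in patch_labels if p != "background"]
    if not tissue:
        return 0.0
    return sum(p == "lymphocyte" for p in tissue) / len(tissue)

density = til_density(["tumor", "lymphocyte", "background", "lymphocyte"])
```

Real pipelines differ mainly in how patches are classified (DL vs. hand-crafted features) and in whether density is computed over the whole slide or restricted to the invasive margin.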
Affiliation(s)
- Azar Kazemi
- Department of Medical Informatics, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran; Institute of General and Surgical Pathology, Technical University of Munich, Munich, Germany.
- Ashkan Rasouli-Saravani
- Student Research Committee, Department of Immunology, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Masoumeh Gharib
- Department of Pathology, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Saeid Eslami
- Department of Medical Informatics, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran; Pharmaceutical Sciences Research Center, Institute of Pharmaceutical Technology, Mashhad University of Medical Sciences, Mashhad, Iran; Department of Medical Informatics, University of Amsterdam, Amsterdam, the Netherlands
- Peter J Schüffler
- Institute of General and Surgical Pathology, Technical University of Munich, Munich, Germany; TUM School of Computation, Information and Technology, Technical University of Munich, Munich, Germany; Munich Center for Machine Learning, Munich, Germany; Munich Data Science Institute, Munich, Germany
17
Vray G, Tomar D, Bozorgtabar B, Thiran JP. Distill-SODA: Distilling Self-Supervised Vision Transformer for Source-Free Open-Set Domain Adaptation in Computational Pathology. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:2021-2032. [PMID: 38236667 DOI: 10.1109/tmi.2024.3355645] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2024]
Abstract
Developing computational pathology models is essential for reducing manual tissue typing from whole slide images, transferring knowledge from the source domain to an unlabeled, shifted target domain, and identifying unseen categories. We propose a practical setting by addressing the above-mentioned challenges in one fell swoop, i.e., source-free open-set domain adaptation. Our methodology focuses on adapting a pre-trained source model to an unlabeled target dataset and encompasses both closed-set and open-set classes. Beyond addressing the semantic shift of unknown classes, our framework also deals with a covariate shift, which manifests as variations in color appearance between source and target tissue samples. Our method hinges on distilling knowledge from a self-supervised vision transformer (ViT), drawing guidance from either robustly pre-trained transformer models or histopathology datasets, including those from the target domain. In pursuit of this, we introduce a novel style-based adversarial data augmentation, serving as hard positives for self-training a ViT, resulting in highly contextualized embeddings. Following this, we cluster semantically akin target images, with the source model offering weak pseudo-labels, albeit with uncertain confidence. To enhance this process, we present the closed-set affinity score (CSAS), aiming to correct the confidence levels of these pseudo-labels and to calculate weighted class prototypes within the contextualized embedding space. Our approach establishes itself as state-of-the-art across three public histopathological datasets for colorectal cancer assessment. Notably, our self-training method seamlessly integrates with open-set detection methods, resulting in enhanced performance in both closed-set and open-set recognition tasks.
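The abstract does not define the closed-set affinity score (CSAS) precisely; one plausible reading (assumed here: cosine affinity of each target embedding to its nearest closed-set class prototype, used to down-weight uncertain pseudo-labels, with low affinity flagging likely open-set samples) can be sketched as:

```python
import numpy as np

def closed_set_affinity(embeddings, prototypes):
    """Cosine similarity of each target embedding to the closest
    closed-set class prototype; low values suggest unknown classes."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return (e @ p.T).max(axis=1)

# Two target embeddings vs. a single known-class prototype: the first
# aligns with the prototype, the second is orthogonal to it.
scores = closed_set_affinity(np.array([[1.0, 0.0], [0.0, 1.0]]),
                             np.array([[1.0, 0.0]]))
```

In the paper the prototypes are additionally confidence-weighted; the plain nearest-prototype similarity above is only the simplest variant of that idea.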
18
Benzekry S, Mastri M, Nicolò C, Ebos JML. Machine-learning and mechanistic modeling of metastatic breast cancer after neoadjuvant treatment. PLoS Comput Biol 2024; 20:e1012088. [PMID: 38701089 PMCID: PMC11095706 DOI: 10.1371/journal.pcbi.1012088] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2023] [Revised: 05/15/2024] [Accepted: 04/18/2024] [Indexed: 05/05/2024] Open
Abstract
Clinical trials involving systemic neoadjuvant treatments in breast cancer aim to shrink tumors before surgery while simultaneously allowing for controlled evaluation of biomarkers, toxicity, and suppression of distant (occult) metastatic disease. Yet neoadjuvant clinical trials are rarely preceded by preclinical testing involving neoadjuvant treatment, surgery, and post-surgery monitoring of the disease. Here we used a mouse model of spontaneous metastasis occurring after surgical removal of orthotopically implanted primary tumors to develop a predictive mathematical model of neoadjuvant treatment response to sunitinib, a receptor tyrosine kinase inhibitor (RTKI). Treatment outcomes were used to validate a novel mathematical kinetics-pharmacodynamics model predictive of perioperative disease progression. Longitudinal measurements of presurgical primary tumor size and postsurgical metastatic burden were compiled using 128 mice receiving variable neoadjuvant treatment doses and schedules (released publicly at https://zenodo.org/records/10607753). A non-linear mixed-effects modeling approach quantified inter-animal variabilities in metastatic dynamics and survival, and machine-learning algorithms were applied to investigate the significance of several biomarkers at resection as predictors of individual kinetics. Biomarkers included circulating tumor- and immune-based cells (circulating tumor cells and myeloid-derived suppressor cells) as well as immunohistochemical tumor proteins (CD31 and Ki67). Our computational simulations show that neoadjuvant RTKI treatment inhibits primary tumor growth but has little efficacy in preventing (micro)-metastatic disease progression after surgery and treatment cessation. 
Machine-learning algorithms, including support vector machines, random forests, and artificial neural networks, confirmed the lack of a definitive biomarker, underscoring the value of preclinical modeling studies for identifying potential failures that should be avoided clinically.
Affiliation(s)
- Sebastien Benzekry
- Computational Pharmacology and Clinical Oncology (COMPO), Inria Sophia Antipolis–Méditerranée, Cancer Research Center of Marseille, Inserm UMR1068, CNRS UMR7258, Aix Marseille University UM105, Marseille, France
- Michalis Mastri
- Department of Cancer Genetics and Genomics, Roswell Park Comprehensive Cancer Center, Buffalo, New York, United States of America
- Chiara Nicolò
- InSilicoTrials Technologies S.P.A, Riva Grumula, Trieste, Italy
- John M. L. Ebos
- Department of Cancer Genetics and Genomics, Roswell Park Comprehensive Cancer Center, Buffalo, New York, United States of America
- Department of Medicine, Roswell Park Comprehensive Cancer Center, Buffalo, New York, United States of America
19
Lotter W, Hassett MJ, Schultz N, Kehl KL, Van Allen EM, Cerami E. Artificial Intelligence in Oncology: Current Landscape, Challenges, and Future Directions. Cancer Discov 2024; 14:711-726. [PMID: 38597966 PMCID: PMC11131133 DOI: 10.1158/2159-8290.cd-23-1199] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2023] [Revised: 01/29/2024] [Accepted: 02/28/2024] [Indexed: 04/11/2024]
Abstract
Artificial intelligence (AI) in oncology is advancing beyond algorithm development to integration into clinical practice. This review describes the current state of the field, with a specific focus on clinical integration. AI applications are structured according to cancer type and clinical domain, focusing on the four most common cancers and tasks of detection, diagnosis, and treatment. These applications encompass various data modalities, including imaging, genomics, and medical records. We conclude with a summary of existing challenges, evolving solutions, and potential future directions for the field. SIGNIFICANCE AI is increasingly being applied to all aspects of oncology, where several applications are maturing beyond research and development to direct clinical integration. This review summarizes the current state of the field through the lens of clinical translation along the clinical care continuum. Emerging areas are also highlighted, along with common challenges, evolving solutions, and potential future directions for the field.
Affiliation(s)
- William Lotter
- Department of Data Science, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Pathology, Brigham and Women’s Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Michael J. Hassett
- Harvard Medical School, Boston, MA, USA
- Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Nikolaus Schultz
- Marie-Josée and Henry R. Kravis Center for Molecular Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Kenneth L. Kehl
- Harvard Medical School, Boston, MA, USA
- Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Eliezer M. Van Allen
- Harvard Medical School, Boston, MA, USA
- Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Cancer Program, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Ethan Cerami
- Department of Data Science, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
20
Hagi T, Nakamura T, Yuasa H, Uchida K, Asanuma K, Sudo A, Wakabayashi T, Morita K. Prediction of prognosis using artificial intelligence-based histopathological image analysis in patients with soft tissue sarcomas. Cancer Med 2024; 13:e7252. [PMID: 38800990 PMCID: PMC11129162 DOI: 10.1002/cam4.7252] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2023] [Revised: 04/01/2024] [Accepted: 04/28/2024] [Indexed: 05/29/2024] Open
Abstract
BACKGROUND Prompt and accurate histopathological diagnosis of soft tissue sarcomas (STSs) remains challenging. In addition, advances in artificial intelligence (AI), together with the digitization of pathology slides, may help meet the demand for predicting the behavior of STSs. In this article, we explored the application of deep learning for predicting prognosis from histopathological images in patients with STS. METHODS Our retrospective study included a total of 35 histopathological slides from patients with STS. We trained Inception v3, a convolutional neural network, for survival estimation. The F1 score and the area under the receiver operating characteristic curve (AUC), obtained from 4-fold cross-validation, served as the main outcome measures. RESULTS The cohort included 35 patients with a mean age of 64 years and a mean follow-up of 34 months (range, 2-66 months). Our deep learning method achieved an AUC of 0.974 and an accuracy of 91.9% in predicting overall survival. For the prediction of metastasis-free survival, the accuracy was 84.2% with an AUC of 0.852. CONCLUSION AI may help pathologists provide accurate prognosis prediction, which could substantially improve the clinical management of patients with STS.
Affiliation(s)
- Tomohito Hagi
- Department of Orthopedic Surgery, Mie University Graduate School of Medicine, Tsu, Japan
- Tomoki Nakamura
- Department of Orthopedic Surgery, Mie University Graduate School of Medicine, Tsu, Japan
- Hiroto Yuasa
- Department of Oncologic Pathology, Mie University Graduate School of Medicine, Tsu, Japan
- Katsunori Uchida
- Department of Oncologic Pathology, Mie University Graduate School of Medicine, Tsu, Japan
- Kunihiro Asanuma
- Department of Orthopedic Surgery, Mie University Graduate School of Medicine, Tsu, Japan
- Akihiro Sudo
- Department of Orthopedic Surgery, Mie University Graduate School of Medicine, Tsu, Japan
- Tetsushi Wakabayashi
- Department of Information Engineering, Mie University Graduate School of Engineering, Tsu, Japan
- Kento Morita
- Department of Information Engineering, Mie University Graduate School of Engineering, Tsu, Japan
21
Zhou X, Lu Y, Wu Y, Yu Y, Liu Y, Wang C, Zhao Z, Wang C, Gao Z, Li Z, Zhao Y, Cao W. Construction and validation of a deep learning prognostic model based on digital pathology images of stage III colorectal cancer. EUROPEAN JOURNAL OF SURGICAL ONCOLOGY 2024; 50:108369. [PMID: 38703632 DOI: 10.1016/j.ejso.2024.108369] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2023] [Revised: 03/09/2024] [Accepted: 04/23/2024] [Indexed: 05/06/2024]
Abstract
BACKGROUND TNM staging is the main reference standard for prognostic prediction in colorectal cancer (CRC), but prognosis remains highly heterogeneous among patients of the same stage. This study aimed to classify the tumor microenvironment of patients with stage III CRC and to quantify the classified tumor tissue with deep learning, exploring the prognostic value of the resulting tumor risk signature (TRS). METHODS A tissue classification model was developed to identify nine tissue types (adipose, background, debris, lymphocytes, mucus, smooth muscle, normal mucosa, stroma, and tumor) in whole-slide images (WSIs) of stage III CRC patients. This model was used to extract tumor tissue from WSIs of 265 stage III CRC patients from The Cancer Genome Atlas and 70 stage III CRC patients from the Sixth Affiliated Hospital of Sun Yat-sen University. We used three different deep learning models for tumor feature extraction and applied a Cox model to establish the TRS. Survival analysis was conducted to explore the prognostic performance of the TRS. RESULTS The tissue classification model achieved 94.4% accuracy in identifying the nine tissue types. The TRS achieved a Harrell's concordance index of 0.736, 0.716, and 0.711 in the internal training, internal validation, and external validation sets, respectively. Survival analysis showed that the TRS had significant predictive ability (hazard ratio: 3.632, p = 0.03). CONCLUSION The TRS is an independent and significant prognostic factor for progression-free survival (PFS) in stage III CRC patients and contributes to risk stratification of patients across clinical stages.
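Harrell's concordance index, the metric used to evaluate the TRS above, can be computed naively as the fraction of comparable patient pairs whose predicted risks are ordered consistently with their observed survival times (a minimal O(n²) sketch, ignoring tied event times):

```python
def harrell_c_index(risk, time, event):
    """Naive Harrell's C for right-censored data: pair (i, j) is
    comparable when subject i has an observed event before time[j]."""
    concordant, comparable = 0.0, 0
    for i in range(len(risk)):
        for j in range(len(risk)):
            if event[i] and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0   # higher risk, earlier event
                elif risk[i] == risk[j]:
                    concordant += 0.5   # ties in risk get half credit
    return concordant / comparable

# Perfectly ordered toy data: the higher-risk patient fails first.
c = harrell_c_index(risk=[2.0, 1.0], time=[1.0, 2.0], event=[1, 1])
```

Production survival libraries use an equivalent but faster pairwise formulation; the quadratic loop is kept here for readability.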
Affiliation(s)
- Xuezhi Zhou
- College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China
- Yizhan Lu
- College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China
- Yue Wu
- Department of Radiology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Guangdong Research Institute of Gastroenterology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China; Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Yi Yu
- College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China
- Yong Liu
- College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China
- Chang Wang
- College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China
- Zongya Zhao
- College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China
- Chong Wang
- College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China
- Zhixian Gao
- College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China
- Zhenxin Li
- College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China
- Yandong Zhao
- Department of Pathology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Guangdong Research Institute of Gastroenterology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China; Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Wuteng Cao
- Department of Radiology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Guangdong Research Institute of Gastroenterology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China; Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
22
Peng J, Xu Z, Dan H, Li J, Wang J, Luo X, Xu H, Zeng X, Chen Q. Oral epithelial dysplasia detection and grading in oral leukoplakia using deep learning. BMC Oral Health 2024; 24:434. [PMID: 38594651 PMCID: PMC11005210 DOI: 10.1186/s12903-024-04191-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2023] [Accepted: 03/27/2024] [Indexed: 04/11/2024] Open
Abstract
BACKGROUND Grading of oral epithelial dysplasia is often time-consuming for oral pathologists, and the results are poorly reproducible between observers. In this study, we aimed to establish an objective, accurate, and practical detection and grading system for oral epithelial dysplasia in whole-slide images of oral leukoplakia. METHODS Four convolutional neural networks were compared using image patches from 56 whole-slide images of oral leukoplakia labeled by pathologists as the gold standard. Feature detection models were then trained, validated, and tested with 1,000 image patches using the optimal network. Finally, a comprehensive system named E-MOD-plus was established by combining the feature detection models with a multiclass logistic model. RESULTS EfficientNet-B0 was selected as the optimal network for building the feature detection models. In the internal dataset of whole-slide images, the prediction accuracy of E-MOD-plus was 81.3% (95% confidence interval: 71.4-90.5%) and the area under the receiver operating characteristic curve was 0.793 (95% confidence interval: 0.650-0.925); in the external dataset of 229 tissue microarray images, the prediction accuracy was 86.5% (95% confidence interval: 82.4-90.0%) and the area under the curve was 0.669 (95% confidence interval: 0.496-0.843). CONCLUSIONS E-MOD-plus was objective and accurate in detecting pathological features and grading oral epithelial dysplasia, and has the potential to assist pathologists in clinical practice.
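The final grading step combines feature-detection outputs through a multiclass logistic model; a schematic version (the weights, bias, and two-grade setup below are invented for illustration, not taken from E-MOD-plus) is simply a softmax over a linear map of the slide-level feature scores:

```python
import numpy as np

def grade_from_features(feature_scores, W, b):
    """Multiclass logistic head: linear map of feature-detection
    scores followed by a numerically stable softmax."""
    logits = W @ feature_scores + b
    exp = np.exp(logits - logits.max())
    probs = exp / exp.sum()
    return int(np.argmax(probs)), probs

# Two hypothetical feature scores, two grades, identity weights.
grade, probs = grade_from_features(np.array([0.0, 5.0]),
                                   W=np.eye(2), b=np.zeros(2))
```

In practice W and b would be fitted on the training slides (e.g., by maximum likelihood) rather than set by hand.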
Affiliation(s)
- Jiakuan Peng
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Department of Stomatology, North Sichuan Medical College, Nanchong, Sichuan, China
- Ziang Xu
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Hongxia Dan
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Jing Li
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Jiongke Wang
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Xiaobo Luo
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Hao Xu
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Xin Zeng
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Qianming Chen
- Key Laboratory of Oral Biomedical Research of Zhejiang Province, Affiliated Stomatology Hospital, Zhejiang University School of Stomatology, Hangzhou, Zhejiang, China
23
Yang M, Zhang X, Qiao O, Zhang J, Li X, Ma X, Zhou S, Gao W. Effect of Cerebralcare Granule® combined with memantine on Alzheimer's disease. JOURNAL OF ETHNOPHARMACOLOGY 2024; 323:117609. [PMID: 38142875 DOI: 10.1016/j.jep.2023.117609] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/03/2023] [Revised: 10/04/2023] [Accepted: 12/14/2023] [Indexed: 12/26/2023]
Abstract
ETHNOPHARMACOLOGICAL RELEVANCE Alzheimer's disease (AD) is the most common form of dementia in elderly people. Traditional Chinese medicine (TCM) based on phytomedicines has been shown to enhance the therapeutic effects of modern medicines when taken alongside them. The N-methyl-D-aspartate receptor (NMDA) antagonist memantine (Mm) is mainly used in the clinical treatment of AD. The TCM Cerebralcare Granule® (CG) has long been an effective treatment for headaches, dizziness, and other symptoms. In this study, we employ a combination of CG and Mm to address Alzheimer's disease-like symptoms and explore their effects and underlying mechanisms. AIM OF THE STUDY The objective of our study was to observe the effects of CG combined with memantine (Mm) on learning and memory impairment in AD mice induced by D-galactose and to explore the mechanism at work. MATERIALS AND METHODS CG and Mm were combined to target multiple pathological processes involved in AD. For a thorough analysis, we performed behavioral, pathological, proteomic, and other experimental methods of detection. RESULTS The combination of CG and Mm significantly improved learning and memory in AD mice as well as brain pathology. In the serum and hippocampal tissue of AD mice, catalase (CAT), superoxide dismutase (SOD), and glutathione peroxidase (GSH-Px) activities were significantly enhanced and malondialdehyde (MDA) levels were decreased by this treatment.
In AD mice, the combination of CG and Mm (CG + Mm) significantly increased the levels of the anti-inflammatory factors IL-4 and IL-10; decreased the levels of the pro-inflammatory factors IL-6, IL-1β, and tumor necrosis factor-alpha (TNF-α); improved synaptic plasticity by restoring synaptophysin (SYP) and postsynaptic density protein-95 (PSD-95) expression in the hippocampus; enhanced Aβ phagocytosis by microglia; increased the activity of mitochondrial respiratory chain enzyme complexes I, II, III, and IV; and increased the number of functionally active NMDA receptors in the hippocampus. GO analysis of the proteomic data showed that H3BIV5, a gene positively regulating the G protein-coupled receptor signaling pathway and synaptic transmission, was up-regulated, while Q5NCT9, a gene related to trans-synaptic signaling and regulation of postsynaptic membrane potential, was down-regulated. Based on the KEGG pathway database, most proteins showed significantly enriched signal transduction pathway profiles after CG + Mm treatment. CONCLUSION The data support the idea that CG combined with Mm is more effective than Mm alone in treating D-galactose-induced AD mice. We provide a basis for the clinical use of CG with Mm.
Collapse
Affiliation(s)
- Mingjuan Yang
- School of Pharmaceutical Science and Technology, Tianjin University, Tianjin 300072, China
| | - Xinyu Zhang
- School of Pharmaceutical Science and Technology, Tianjin University, Tianjin 300072, China
| | - Ou Qiao
- School of Pharmaceutical Science and Technology, Tianjin University, Tianjin 300072, China
| | - Jun Zhang
- National Key Laboratory of Chinese Medicine Modernization, Tasly Academy, Tasly Pharmaceutical Group Co., Ltd., Tianjin 300410, China
| | - Xiaoqing Li
- National Key Laboratory of Chinese Medicine Modernization, Tasly Academy, Tasly Pharmaceutical Group Co., Ltd., Tianjin 300410, China
| | - Xiaohui Ma
- National Key Laboratory of Chinese Medicine Modernization, Tasly Academy, Tasly Pharmaceutical Group Co., Ltd., Tianjin 300410, China
| | - Shuiping Zhou
- National Key Laboratory of Chinese Medicine Modernization, Tasly Academy, Tasly Pharmaceutical Group Co., Ltd., Tianjin 300410, China.
| | - Wenyuan Gao
- School of Pharmaceutical Science and Technology, Tianjin University, Tianjin 300072, China.
| |
Collapse
|
24
|
Ochi M, Komura D, Onoyama T, Shinbo K, Endo H, Odaka H, Kakiuchi M, Katoh H, Ushiku T, Ishikawa S. Registered multi-device/staining histology image dataset for domain-agnostic machine learning models. Sci Data 2024; 11:330. [PMID: 38570515 PMCID: PMC10991301 DOI: 10.1038/s41597-024-03122-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2023] [Accepted: 03/04/2024] [Indexed: 04/05/2024] Open
Abstract
Variations in color and texture of histopathology images are caused by differences in staining conditions and imaging devices between hospitals. These biases decrease the robustness of machine learning models exposed to out-of-domain data. To address this issue, we introduce a comprehensive histopathology image dataset named PathoLogy Images of Scanners and Mobile phones (PLISM). The dataset consists of 46 human tissue types stained under 13 hematoxylin and eosin conditions and captured using 13 imaging devices. Precisely aligned image patches from different domains allow for an accurate evaluation of color and texture properties in each domain. Variation across domains in PLISM was assessed and found to be substantial, particularly between whole-slide images and smartphones. Furthermore, we assessed the improvement in domain shift using a convolutional neural network pre-trained on PLISM. PLISM is a valuable resource that facilitates the precise evaluation of domain shifts in digital pathology and makes a significant contribution towards the development of robust machine learning models that can effectively address the challenges of domain shift in histological image analysis.
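The domain gap PLISM quantifies can be illustrated with a minimal sketch: given precisely aligned patches from two devices, compare their per-channel color statistics. This is only a crude proxy for the paper's analysis; the function names and toy data below are invented for the illustration.

```python
import numpy as np

def domain_color_stats(patches):
    """Mean and std of each RGB channel over a stack of aligned patches.

    patches: array of shape (n_patches, H, W, 3), values in [0, 1].
    """
    flat = patches.reshape(-1, 3)
    return flat.mean(axis=0), flat.std(axis=0)

def color_shift(patches_a, patches_b):
    """Euclidean distance between per-channel means of two domains,
    a crude proxy for the stain/device color shift PLISM measures."""
    mean_a, _ = domain_color_stats(patches_a)
    mean_b, _ = domain_color_stats(patches_b)
    return float(np.linalg.norm(mean_a - mean_b))

# Toy example: a "pink-shifted" scanner vs. a "blue-shifted" phone camera.
rng = np.random.default_rng(0)
scanner = np.clip(rng.normal([0.8, 0.6, 0.7], 0.05, (16, 32, 32, 3)), 0, 1)
phone = np.clip(rng.normal([0.6, 0.6, 0.8], 0.05, (16, 32, 32, 3)), 0, 1)
print(color_shift(scanner, phone))  # larger value => bigger domain gap
```

In practice one would compute such statistics (or learned-feature distances) for every scanner/stain pair in the dataset to map which domains diverge most.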
Collapse
Affiliation(s)
- Mieko Ochi
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
| | - Daisuke Komura
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan.
| | - Takumi Onoyama
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
- Division of Gastroenterology and Nephrology, Department of Multidisciplinary Internal Medicine, School of Medicine, Faculty of Medicine, Tottori University, 36-1 Nishicho, Yonago, Tottori, 683-8504, Japan
| | - Koki Shinbo
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
| | - Haruya Endo
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
| | - Hiroto Odaka
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
| | - Miwako Kakiuchi
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
| | - Hiroto Katoh
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
| | - Tetsuo Ushiku
- Department of Pathology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
| | - Shumpei Ishikawa
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan.
- Division of Pathology, National Cancer Center Exploratory Oncology Research & Clinical Trial Center, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan.
| |
Collapse
|
25
|
Zamanitajeddin N, Jahanifar M, Bilal M, Eastwood M, Rajpoot N. Social network analysis of cell networks improves deep learning for prediction of molecular pathways and key mutations in colorectal cancer. Med Image Anal 2024; 93:103071. [PMID: 38199068 DOI: 10.1016/j.media.2023.103071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2023] [Revised: 11/14/2023] [Accepted: 12/19/2023] [Indexed: 01/12/2024]
Abstract
Colorectal cancer (CRC) is a primary global health concern, and identifying the molecular pathways, genetic subtypes, and mutations associated with CRC is crucial for precision medicine. However, traditional measurement techniques such as gene sequencing are costly and time-consuming, while most deep learning methods proposed for this task lack interpretability. This study offers a new approach that enhances state-of-the-art deep learning methods for molecular pathway and key mutation prediction by incorporating cell network information. We build cell graphs with nuclei as nodes and nuclei connections as edges of the network and leverage Social Network Analysis (SNA) measures to extract abstract, perceivable, and interpretable features that explicitly describe the cell network characteristics in an image. Our approach does not rely on precise nuclei segmentation or feature extraction, is computationally efficient, and is easily scalable. In this study, we utilize the TCGA-CRC-DX dataset, comprising 499 patients and 502 diagnostic slides from primary colorectal tumours, sourced from 36 distinct medical centres in the United States. By incorporating the SNA features alongside deep features in two multiple instance learning frameworks, we demonstrate improved performance on chromosomal instability (CIN), hypermutated tumour (HM), TP53 gene, BRAF gene, and microsatellite instability (MSI) status prediction tasks (average improvements of 2.4-4% in AUROC and 7-8.8% in AUPRC). Additionally, our method achieves outstanding performance on MSI prediction in an external PAIP dataset (99% AUROC and 98% AUPRC), demonstrating its generalizability. Our findings highlight the discriminative power of SNA features, show how they can benefit deep learning models' performance, and provide insights into the correlation of cell network profiles with molecular pathways and key mutations.
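The core idea, nuclei as graph nodes and SNA measures as slide features, can be sketched with standard tools. This is a generic illustration under assumed choices (radius-based edges, a small set of networkx measures), not the paper's exact graph construction or feature list.

```python
import numpy as np
import networkx as nx
from scipy.spatial import distance_matrix

def cell_graph(centroids, radius=30.0):
    """Connect nuclei whose centroids lie within `radius` pixels."""
    d = distance_matrix(centroids, centroids)
    g = nx.Graph()
    g.add_nodes_from(range(len(centroids)))
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            if d[i, j] <= radius:
                g.add_edge(i, j)
    return g

def sna_features(g):
    """Graph-level summaries of standard social-network measures."""
    degrees = [deg for _, deg in g.degree()]
    betw = list(nx.betweenness_centrality(g).values())
    return {
        "mean_degree": float(np.mean(degrees)),
        "clustering": nx.average_clustering(g),
        "mean_betweenness": float(np.mean(betw)),
        "n_components": nx.number_connected_components(g),
    }

rng = np.random.default_rng(1)
nuclei = rng.uniform(0, 200, size=(60, 2))   # mock nuclei centroids
feats = sna_features(cell_graph(nuclei))
print(feats)
```

Such interpretable graph summaries would then be concatenated with deep features inside a multiple instance learning framework, which is where the reported AUROC/AUPRC gains arise.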
Collapse
Affiliation(s)
- Neda Zamanitajeddin
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK.
| | - Mostafa Jahanifar
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
| | - Mohsin Bilal
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
| | - Mark Eastwood
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
| | - Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK; Histofy Ltd., Birmingham, UK.
| |
Collapse
|
26
|
Rigamonti A, Viatore M, Polidori R, Rahal D, Erreni M, Fumagalli MR, Zanini D, Doni A, Putignano AR, Bossi P, Voulaz E, Alloisio M, Rossi S, Zucali PA, Santoro A, Balzano V, Nisticò P, Feuerhake F, Mantovani A, Locati M, Marchesi F. Integrating AI-Powered Digital Pathology and Imaging Mass Cytometry Identifies Key Classifiers of Tumor Cells, Stroma, and Immune Cells in Non-Small Cell Lung Cancer. Cancer Res 2024; 84:1165-1177. [PMID: 38315789 PMCID: PMC10982643 DOI: 10.1158/0008-5472.can-23-1698] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2023] [Revised: 11/13/2023] [Accepted: 02/01/2024] [Indexed: 02/07/2024]
Abstract
Artificial intelligence (AI)-powered approaches are increasingly being used as histopathologic tools to extract subvisual features and improve diagnostic workflows. On the other hand, hi-plex approaches are widely adopted to analyze the immune ecosystem in tumor specimens. Here, we aimed to combine AI-aided histopathology and imaging mass cytometry (IMC) to analyze the ecosystem of non-small cell lung cancer (NSCLC). An AI-based approach was used on hematoxylin and eosin (H&E) sections from 158 NSCLC specimens to accurately identify tumor cells, both adenocarcinoma and squamous carcinoma cells, and to generate a classifier of tumor cell spatial clustering. Consecutive tissue sections were stained with metal-labeled antibodies and processed through the IMC workflow, allowing quantitative detection of 24 markers related to tumor cells, tissue architecture, CD45+ myeloid and lymphoid cells, and immune activation. IMC identified 11 macrophage clusters that mainly localized in the stroma, except for S100A8+ cells, which infiltrated tumor nests. T cells were preferentially localized in peritumor areas or in tumor nests, the latter being associated with better prognosis, and they were more abundant in highly clustered tumors. Integrated tumor and immune classifiers were validated as prognostic on whole slides. In conclusion, integration of AI-powered H&E analysis and multiparametric IMC allows investigation of spatial patterns and reveals tissue features with clinical relevance. SIGNIFICANCE Leveraging artificial intelligence-powered H&E analysis integrated with hi-plex imaging mass cytometry provides insights into the tumor ecosystem and can translate tumor features into classifiers to predict prognosis, genotype, and therapy response.
Collapse
Affiliation(s)
- Alessandra Rigamonti
- Department of Immunology and Inflammation, IRCCS Humanitas Research Hospital; Rozzano (Milan), Italy
- Department of Medical Biotechnology and Translational Medicine, University of Milan; Milan, Italy
| | - Marika Viatore
- Department of Immunology and Inflammation, IRCCS Humanitas Research Hospital; Rozzano (Milan), Italy
- Department of Medical Biotechnology and Translational Medicine, University of Milan; Milan, Italy
| | - Rebecca Polidori
- Department of Immunology and Inflammation, IRCCS Humanitas Research Hospital; Rozzano (Milan), Italy
- Department of Medical Biotechnology and Translational Medicine, University of Milan; Milan, Italy
| | - Daoud Rahal
- Department of Pathology, IRCCS Humanitas Research Hospital; Rozzano (Milan), Italy
| | - Marco Erreni
- Unit of Advanced Optical Microscopy, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Department of Biomedical Science, Humanitas University, Pieve Emanuele, Milan, Italy
| | - Maria Rita Fumagalli
- Unit of Advanced Optical Microscopy, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
| | - Damiano Zanini
- Unit of Advanced Optical Microscopy, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
| | - Andrea Doni
- Unit of Advanced Optical Microscopy, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
| | - Anna Rita Putignano
- Department of Immunology and Inflammation, IRCCS Humanitas Research Hospital; Rozzano (Milan), Italy
| | - Paola Bossi
- Department of Pathology, IRCCS Humanitas Research Hospital; Rozzano (Milan), Italy
| | - Emanuele Voulaz
- Department of Biomedical Science, Humanitas University, Pieve Emanuele, Milan, Italy
- Division of Thoracic Surgery, IRCCS Humanitas Research Hospital, Rozzano (Milan), Italy
| | - Marco Alloisio
- Division of Thoracic Surgery, IRCCS Humanitas Research Hospital, Rozzano (Milan), Italy
| | - Sabrina Rossi
- Medical Oncology and Hematology Unit, IRCCS Humanitas Research Hospital, Rozzano (Milan), Italy
| | - Paolo Andrea Zucali
- Department of Biomedical Science, Humanitas University, Pieve Emanuele, Milan, Italy
- Medical Oncology and Hematology Unit, IRCCS Humanitas Research Hospital, Rozzano (Milan), Italy
| | - Armando Santoro
- Department of Biomedical Science, Humanitas University, Pieve Emanuele, Milan, Italy
- Medical Oncology and Hematology Unit, IRCCS Humanitas Research Hospital, Rozzano (Milan), Italy
| | - Vittoria Balzano
- Immunology and Immunotherapy Unit, IRCCS Regina Elena National Cancer Institute, Rome, Italy
| | - Paola Nisticò
- Immunology and Immunotherapy Unit, IRCCS Regina Elena National Cancer Institute, Rome, Italy
| | | | - Alberto Mantovani
- Department of Immunology and Inflammation, IRCCS Humanitas Research Hospital; Rozzano (Milan), Italy
- Department of Biomedical Science, Humanitas University, Pieve Emanuele, Milan, Italy
- The William Harvey Research Institute, Queen Mary University of London, London, United Kingdom
| | - Massimo Locati
- Department of Immunology and Inflammation, IRCCS Humanitas Research Hospital; Rozzano (Milan), Italy
- Department of Medical Biotechnology and Translational Medicine, University of Milan; Milan, Italy
| | - Federica Marchesi
- Department of Immunology and Inflammation, IRCCS Humanitas Research Hospital; Rozzano (Milan), Italy
- Department of Medical Biotechnology and Translational Medicine, University of Milan; Milan, Italy
| |
Collapse
|
27
|
Sajithkumar A, Thomas J, Saji AM, Ali F, E K HH, Adampulan HAG, Sarathchand S. Artificial Intelligence in pathology: current applications, limitations, and future directions. Ir J Med Sci 2024; 193:1117-1121. [PMID: 37542634 DOI: 10.1007/s11845-023-03479-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Accepted: 07/26/2023] [Indexed: 08/07/2023]
Abstract
PURPOSE Given AI's recent success in computer vision applications, the majority of pathologists anticipate that it will be able to assist them with a variety of digital pathology activities. Massive improvements in deep learning have enabled a synergy between artificial intelligence (AI) and digital pathology, enabling image-based diagnosis. AI-based solutions are being developed to eliminate errors and save pathologists time. AIMS In this paper, we discuss the components underlying the use of AI in pathology, its use in the medical profession, the obstacles and constraints that it encounters, and the future possibilities of AI in the medical field. CONCLUSIONS Based on these factors, we elaborate upon the use of AI in medical pathology and provide recommendations for its successful implementation in this field.
Collapse
Affiliation(s)
- Akhil Sajithkumar
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India.
| | - Jubin Thomas
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
| | - Ajish Meprathumalil Saji
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
| | - Fousiya Ali
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
| | - Haneena Hasin E K
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
| | - Hannan Abdul Gafoor Adampulan
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
| | - Swathy Sarathchand
- Sree Narayana Institute of Medical Sciences, Chalakka - Kuthiathode Rd, North Kuthiathode, Kunnukara, Kerala, 683594, India
| |
Collapse
|
28
|
Abhishek K, Brown CJ, Hamarneh G. Multi-sample ζ-mixup: richer, more realistic synthetic samples from a p-series interpolant. JOURNAL OF BIG DATA 2024; 11:43. [PMID: 38528850 PMCID: PMC10960781 DOI: 10.1186/s40537-024-00898-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/20/2023] [Accepted: 02/28/2024] [Indexed: 03/27/2024]
Abstract
Modern deep learning training procedures rely on model regularization techniques such as data augmentation methods, which generate training samples that increase the diversity of data and richness of label information. A popular recent method, mixup, uses convex combinations of pairs of original samples to generate new samples. However, as we show in our experiments, mixup can produce undesirable synthetic samples, where the data is sampled off the manifold and can contain incorrect labels. We propose ζ-mixup, a generalization of mixup with provably and demonstrably desirable properties that allows convex combinations of T ≥ 2 samples, leading to more realistic and diverse outputs that incorporate information from T original samples by using a p-series interpolant. We show that, compared to mixup, ζ-mixup better preserves the intrinsic dimensionality of the original datasets, which is a desirable property for training generalizable models. Furthermore, we show that our implementation of ζ-mixup is faster than mixup, and extensive evaluation on controlled synthetic and 26 diverse real-world natural and medical image classification datasets shows that ζ-mixup outperforms mixup, CutMix, and traditional data augmentation techniques. The code will be released at https://github.com/kakumarabhishek/zeta-mixup.
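The p-series idea can be sketched in a few lines: weight the i-th sample by i^(-γ) so one sample dominates each synthetic point, keeping it near the data manifold. This is a simplified illustration of the concept (the γ value, normalization, and permutation scheme here are assumptions, not the paper's exact formulation; see the linked repository for the authors' implementation).

```python
import numpy as np

def zeta_mixup_weights(T, gamma=2.0):
    """p-series weights w_i ∝ i^(-gamma), i = 1..T, normalized to sum to 1.

    gamma > 1 keeps the first (dominant) sample's weight well above the
    rest, so synthetic points stay close to the data manifold.
    """
    w = np.arange(1, T + 1, dtype=float) ** -gamma
    return w / w.sum()

def zeta_mixup(x, y, gamma=2.0, rng=None):
    """Combine T samples into T synthetic ones: each output mixes all
    inputs, with a randomly chosen sample taking the dominant weight."""
    rng = rng if rng is not None else np.random.default_rng()
    T = len(x)
    w = zeta_mixup_weights(T, gamma)
    xs, ys = [], []
    for _ in range(T):
        order = rng.permutation(T)           # who gets the dominant weight
        xs.append(np.tensordot(w, x[order], axes=1))
        ys.append(w @ y[order])
    return np.stack(xs), np.stack(ys)

x = np.random.default_rng(0).normal(size=(4, 8))   # 4 samples, 8 features
y = np.eye(2)[[0, 0, 1, 1]]                        # one-hot labels
x_mix, y_mix = zeta_mixup(x, y, gamma=2.0)
```

With T = 2 and the first weight near 1, this degenerates toward mixup with a small interpolation coefficient; larger T folds information from many originals into each synthetic sample.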
Collapse
Affiliation(s)
- Kumar Abhishek
- School of Computing Science, Simon Fraser University, 8888 University Drive, Burnaby, V5A 1S6 Canada
| | - Colin J Brown
- Engineering, Hinge Health, 455 Market Street, Suite 700, San Francisco, 94105 USA
| | - Ghassan Hamarneh
- School of Computing Science, Simon Fraser University, 8888 University Drive, Burnaby, V5A 1S6 Canada
| |
Collapse
|
29
|
Sharkas M, Attallah O. Color-CADx: a deep learning approach for colorectal cancer classification through triple convolutional neural networks and discrete cosine transform. Sci Rep 2024; 14:6914. [PMID: 38519513 PMCID: PMC10959971 DOI: 10.1038/s41598-024-56820-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2022] [Accepted: 03/11/2024] [Indexed: 03/25/2024] Open
Abstract
Colorectal cancer (CRC) exhibits a significant death rate that consistently impacts human lives worldwide. Histopathological examination is the standard method for CRC diagnosis. However, it is complicated, time-consuming, and subjective. Computer-aided diagnostic (CAD) systems using digital pathology can help pathologists diagnose CRC faster and more accurately than manual histopathology examination. Deep learning algorithms, especially convolutional neural networks (CNNs), are advocated for the diagnosis of CRC. Nevertheless, most previous CAD systems obtained features from a single CNN, and these features are of very high dimension. Also, they relied on spatial information only to achieve classification. In this paper, a CAD system called "Color-CADx" is proposed for CRC recognition. Different CNNs, namely ResNet50, DenseNet201, and AlexNet, are used for end-to-end classification at different training-testing ratios. Moreover, features are extracted from these CNNs and reduced using the discrete cosine transform (DCT). DCT is also utilized to acquire a spectral representation, which is then used to further select a reduced set of deep features. Furthermore, the DCT coefficients obtained in the previous step are concatenated, and the analysis of variance (ANOVA) feature selection approach is applied to choose significant features. Finally, machine learning classifiers are employed for CRC classification. Two publicly available datasets were investigated: the NCT-CRC-HE-100K dataset and the Kather_texture_2016_image_tiles dataset. The highest achieved accuracy reached 99.3% for the NCT-CRC-HE-100K dataset and 96.8% for the Kather_texture_2016_image_tiles dataset. DCT and ANOVA successfully lowered feature dimensionality, thus reducing complexity. Color-CADx has demonstrated efficacy in terms of accuracy, surpassing the most recent advancements.
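The DCT-then-ANOVA pipeline can be sketched generically: compress deep features into leading DCT coefficients, keep the most class-discriminative ones via an F-test, and train a classical classifier. The CNN features are replaced by synthetic data here for self-containment, and the coefficient counts are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.fft import dct
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
deep_feats = rng.normal(size=(200, 512))     # stand-in for CNN features
labels = rng.integers(0, 2, size=200)
deep_feats[labels == 1, :16] += 1.0          # make some dims informative

# 1) DCT: energy compacts into the leading coefficients, so keeping
#    only the first k acts as spectral dimensionality reduction.
spectral = dct(deep_feats, axis=1, norm="ortho")[:, :64]

# 2) ANOVA F-test keeps the class-discriminative coefficients.
selector = SelectKBest(f_classif, k=32)
X = selector.fit_transform(spectral, labels)

# 3) Classical ML classifier on the reduced features.
Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=0)
clf = SVC().fit(Xtr, ytr)
print(clf.score(Xte, yte))
```

The same scheme extends to concatenating DCT coefficients from several CNNs (as Color-CADx does with ResNet50, DenseNet201, and AlexNet) before the ANOVA step.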
Collapse
Affiliation(s)
- Maha Sharkas
- Electronics and Communications Engineering Department, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Egypt
| | - Omneya Attallah
- Electronics and Communications Engineering Department, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Egypt.
- Wearables, Biosensing, and Biosignal Processing Laboratory, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt.
| |
Collapse
|
30
|
Liu B, Polack M, Coudray N, Quiros AC, Sakellaropoulos T, Crobach ASLP, van Krieken JHJM, Yuan K, Tollenaar RAEM, Mesker WE, Tsirigos A. Self-Supervised Learning Reveals Clinically Relevant Histomorphological Patterns for Therapeutic Strategies in Colon Cancer. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.02.26.582106. [PMID: 38496571 PMCID: PMC10942268 DOI: 10.1101/2024.02.26.582106] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/19/2024]
Abstract
Self-supervised learning (SSL) automates the extraction and interpretation of histopathology features on unannotated hematoxylin-and-eosin-stained whole-slide images (WSIs). We trained an SSL Barlow Twins encoder on 435 TCGA colon adenocarcinoma WSIs to extract features from small image patches. Leiden community detection then grouped tiles into histomorphological phenotype clusters (HPCs). HPC reproducibility and predictive ability for overall survival were confirmed in an independent clinical trial cohort (N=1213 WSIs). This unbiased atlas resulted in 47 HPCs displaying unique and shared clinically significant histomorphological traits, highlighting tissue type, quantity, and architecture, especially in the context of tumor stroma. Through in-depth analysis of these HPCs, including immune landscape and gene set enrichment analyses and associations with clinical outcomes, we shed light on the factors influencing survival and responses to treatments such as standard adjuvant chemotherapy and experimental therapies. Further exploration of HPCs may unveil new insights and aid decision-making and personalized treatments for colon cancer patients.
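The clustering step can be sketched as community detection on a k-nearest-neighbor graph of tile embeddings. Random blobs stand in for Barlow Twins features here, and networkx's greedy modularity maximization is used as a widely available stand-in for the Leiden algorithm (which typically requires igraph/leidenalg); the result is illustrative only.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
# Stand-in for SSL tile embeddings: three well-separated groups.
emb = np.vstack([rng.normal(c, 0.3, size=(40, 16))
                 for c in (0.0, 3.0, 6.0)])

# KNN graph over the embeddings, as in Leiden-style pipelines.
nn = NearestNeighbors(n_neighbors=11).fit(emb)
_, idx = nn.kneighbors(emb)
g = nx.Graph()
g.add_nodes_from(range(len(emb)))
for i, neighbors in enumerate(idx):
    for j in neighbors[1:]:          # skip the self-neighbor
        g.add_edge(i, int(j))

# Modularity-based communities play the role of HPCs here.
communities = greedy_modularity_communities(g)
hpc_labels = np.empty(len(emb), dtype=int)
for cluster_id, members in enumerate(communities):
    hpc_labels[list(members)] = cluster_id
print(len(communities))
```

Each resulting community would then be profiled (morphology, immune landscape, outcome association) to decide whether it constitutes a clinically meaningful HPC.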
Collapse
|
31
|
Arslan S, Schmidt J, Bass C, Mehrotra D, Geraldes A, Singhal S, Hense J, Li X, Raharja-Liu P, Maiques O, Kather JN, Pandya P. A systematic pan-cancer study on deep learning-based prediction of multi-omic biomarkers from routine pathology images. COMMUNICATIONS MEDICINE 2024; 4:48. [PMID: 38491101 PMCID: PMC10942985 DOI: 10.1038/s43856-024-00471-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Accepted: 02/29/2024] [Indexed: 03/18/2024] Open
Abstract
BACKGROUND The objective of this comprehensive pan-cancer study is to evaluate the potential of deep learning (DL) for molecular profiling of multi-omic biomarkers directly from hematoxylin and eosin (H&E)-stained whole slide images. METHODS A total of 12,093 DL models predicting 4031 multi-omic biomarkers across 32 cancer types were trained and validated. The study included a broad range of genetic, transcriptomic, and proteomic biomarkers, as well as established prognostic markers, molecular subtypes, and clinical outcomes. RESULTS Here we show that 50% of the models achieve an area under the curve (AUC) of 0.644 or higher. The observed AUC for 25% of the models is at least 0.719 and exceeds 0.834 for the top 5%. Molecular profiling with image-based histomorphological features is generally considered feasible for most of the investigated biomarkers and across different cancer types. The performance appears to be independent of tumor purity, sample size, and class ratio (prevalence), suggesting a degree of inherent predictability in histomorphology. CONCLUSIONS The results demonstrate that DL holds promise to predict a wide range of biomarkers across the omics spectrum using only H&E-stained histological slides of solid tumors. This paves the way for accelerating diagnosis and developing more precise treatments for cancer patients.
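The quantile phrasing above ("50% of models achieve 0.644 or higher", "25% at least 0.719", "top 5% exceed 0.834") simply reads off percentiles of the AUC distribution over all trained models. A minimal sketch, using synthetic AUCs rather than the study's actual values:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative stand-in for per-model validation AUCs (12,093 models).
aucs = np.clip(rng.normal(0.65, 0.1, size=12_093), 0.5, 1.0)

# "50% of models reach X or higher" is the 50th percentile (median),
# "25% reach Y" is the 75th, and "top 5% exceed Z" is the 95th.
median, q75, q95 = np.percentile(aucs, [50, 75, 95])
print(f"50% >= {median:.3f}, 25% >= {q75:.3f}, top 5% > {q95:.3f}")
```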
Collapse
Affiliation(s)
| | | | | | - Debapriya Mehrotra
- Panakeia Technologies, London, UK
- Department of Pathology, Barking, Havering and Redbridge University NHS Trust, Romford, UK
| | | | - Shikha Singhal
- Panakeia Technologies, London, UK
- Department of Pathology, The Royal Wolverhampton NHS Trust, Wolverhampton, UK
| | | | - Xiusi Li
- Panakeia Technologies, London, UK
| | | | - Oscar Maiques
- Cytoskeleton and Cancer Metastasis Group, Breast Cancer Now Toby Robins Breast Cancer Research Centre, The Institute of Cancer Research, London, UK
- Cancer Biomarkers & Biotherapeutics, Barts Cancer Institute, Queen Mary University of London, John Vane Science Building, London, UK
| | - Jakob Nikolas Kather
- Medical Oncology, National Center for Tumor Diseases, University Hospital Heidelberg, Heidelberg, Germany
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
| | | |
Collapse
|
32
|
Chen RJ, Ding T, Lu MY, Williamson DFK, Jaume G, Song AH, Chen B, Zhang A, Shao D, Shaban M, Williams M, Oldenburg L, Weishaupt LL, Wang JJ, Vaidya A, Le LP, Gerber G, Sahai S, Williams W, Mahmood F. Towards a general-purpose foundation model for computational pathology. Nat Med 2024; 30:850-862. [PMID: 38504018 DOI: 10.1038/s41591-024-02857-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Accepted: 02/05/2024] [Indexed: 03/21/2024]
Abstract
Quantitative evaluation of tissue images is crucial for computational pathology (CPath) tasks, requiring the objective characterization of histopathological entities from whole-slide images (WSIs). The high resolution of WSIs and the variability of morphological features present significant challenges, complicating the large-scale annotation of data for high-performance applications. To address this challenge, current efforts have proposed the use of pretrained image encoders through transfer learning from natural image datasets or self-supervised learning on publicly available histopathology datasets, but have not been extensively developed and evaluated across diverse tissue types at scale. We introduce UNI, a general-purpose self-supervised model for pathology, pretrained using more than 100 million images from over 100,000 diagnostic H&E-stained WSIs (>77 TB of data) across 20 major tissue types. The model was evaluated on 34 representative CPath tasks of varying diagnostic difficulty. In addition to outperforming previous state-of-the-art models, we demonstrate new modeling capabilities in CPath such as resolution-agnostic tissue classification, slide classification using few-shot class prototypes, and disease subtyping generalization in classifying up to 108 cancer types in the OncoTree classification system. UNI advances unsupervised representation learning at scale in CPath in terms of both pretraining data and downstream evaluation, enabling data-efficient artificial intelligence models that can generalize and transfer to a wide range of diagnostically challenging tasks and clinical workflows in anatomic pathology.
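"Slide classification using few-shot class prototypes", as mentioned above, commonly means averaging the foundation model's embeddings of a few labeled examples per class and assigning new slides to the nearest prototype. A minimal sketch of that idea, with random vectors standing in for UNI embeddings (not the paper's evaluation code):

```python
import numpy as np

def fit_prototypes(embeddings, labels):
    """Class prototype = mean embedding of the few labeled examples."""
    classes = np.unique(labels)
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def predict(embeddings, classes, protos):
    """Assign each embedding to its nearest prototype (Euclidean)."""
    d = np.linalg.norm(embeddings[:, None, :] - protos[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

rng = np.random.default_rng(0)
# Stand-ins for foundation-model embeddings of two slide classes,
# 4 labeled "support" examples per class and 20 query slides.
support = np.vstack([rng.normal(0, 1, (4, 32)), rng.normal(3, 1, (4, 32))])
support_y = np.array([0] * 4 + [1] * 4)
query = np.vstack([rng.normal(0, 1, (10, 32)), rng.normal(3, 1, (10, 32))])

classes, protos = fit_prototypes(support, support_y)
pred = predict(query, classes, protos)
```

The appeal of this scheme is that no classifier is trained at all: if the pretrained encoder separates classes well, a handful of labeled slides per class suffices.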
Collapse
Affiliation(s)
- Richard J Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
| | - Tong Ding
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
| | - Ming Y Lu
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
- Drew F K Williamson
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Guillaume Jaume
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Andrew H Song
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Bowen Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Andrew Zhang
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
- Daniel Shao
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
- Muhammad Shaban
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Mane Williams
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Lukas Oldenburg
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Luca L Weishaupt
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
- Judy J Wang
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Anurag Vaidya
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
- Long Phi Le
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
- Georg Gerber
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Sharifa Sahai
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Systems Biology, Harvard University, Cambridge, MA, USA
- Walt Williams
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Faisal Mahmood
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA
33
Lu MY, Chen B, Williamson DFK, Chen RJ, Liang I, Ding T, Jaume G, Odintsov I, Le LP, Gerber G, Parwani AV, Zhang A, Mahmood F. A visual-language foundation model for computational pathology. Nat Med 2024; 30:863-874. [PMID: 38504017] [DOI: 10.1038/s41591-024-02856-4]
Abstract
The accelerated adoption of digital pathology and advances in deep learning have enabled the development of robust models for various pathology tasks across a diverse array of diseases and patient cohorts. However, model training is often difficult due to label scarcity in the medical domain, and a model's usage is limited by the specific task and disease for which it is trained. Additionally, most models in histopathology leverage only image data, a stark contrast to how humans teach each other and reason about histopathologic entities. We introduce CONtrastive learning from Captions for Histopathology (CONCH), a visual-language foundation model developed using diverse sources of histopathology images, biomedical text and, notably, over 1.17 million image-caption pairs through task-agnostic pretraining. Evaluated on a suite of 14 diverse benchmarks, CONCH can be transferred to a wide range of downstream tasks involving histopathology images and/or text, achieving state-of-the-art performance on histology image classification, segmentation, captioning, and text-to-image and image-to-text retrieval. CONCH represents a substantial leap over concurrent visual-language pretrained systems for histopathology, with the potential to directly facilitate a wide array of machine learning-based workflows requiring minimal or no further supervised fine-tuning.
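CONCH's task-agnostic pretraining aligns each histology image with its caption under a contrastive objective. As a rough illustration of how such visual-language alignment works (this is not the paper's implementation; the embedding sizes, batch size, and temperature are arbitrary), the symmetric InfoNCE loss commonly used for this purpose can be sketched in numpy:

```python
import numpy as np

def l2_normalize(x):
    # Unit-normalize each row so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # Symmetric InfoNCE: each image's own caption (same row index) is the
    # positive; every other caption in the batch is a negative, and vice versa.
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    logits = img @ txt.T / temperature  # (N, N) cosine-similarity matrix
    n = logits.shape[0]

    def xent(lg):
        # Cross-entropy of each row against its diagonal (matching pair).
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    return 0.5 * (xent(logits) + xent(logits.T))

# Correctly paired embeddings yield a lower loss than mismatched ones.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 32))
aligned_loss = contrastive_loss(emb, emb)
shuffled_loss = contrastive_loss(emb, emb[::-1])
```

Matched image-caption pairs sit on the diagonal of the similarity matrix; training pushes diagonal similarities above all off-diagonal ones in both directions.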
Affiliation(s)
- Ming Y Lu
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
- Bowen Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Drew F K Williamson
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Richard J Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Ivy Liang
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Tong Ding
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Guillaume Jaume
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Igor Odintsov
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Long Phi Le
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Georg Gerber
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Anil V Parwani
- Department of Pathology, Wexner Medical Center, Ohio State University, Columbus, OH, USA
- Andrew Zhang
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
- Faisal Mahmood
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA
34
Reddy S, Shaheed A, Seo Y, Patel R. Development of an Artificial Intelligence Model for the Classification of Gastric Carcinoma Stages Using Pathology Slides. Cureus 2024; 16:e56740. [PMID: 38650818] [PMCID: PMC11033212] [DOI: 10.7759/cureus.56740]
Abstract
This study showcases a novel AI-driven approach to accurately differentiate between stage one and stage two gastric carcinoma based on pathology slide analysis. Gastric carcinoma, a significant contributor to cancer-related mortality globally, necessitates precise staging for optimal treatment planning and patient management. Leveraging a comprehensive dataset of 3540 high-resolution pathology images sourced from Kaggle.com, comprising an equal distribution of stage one and stage two tumors, the developed AI model demonstrates remarkable performance in tumor staging. Through the application of state-of-the-art deep learning techniques on Google's Colaboratory platform, the model achieves outstanding accuracy and precision rates of 100%, accompanied by notable sensitivity (97.09%), specificity (100%), and F1-score (98.31%). Additionally, the model exhibits an impressive area under the receiver operating characteristic curve (AUC) of 0.999, indicating superior discriminatory power and robustness. By providing clinicians with an efficient and reliable tool for gastric carcinoma staging, this AI-driven approach has the potential to significantly enhance diagnostic accuracy, inform treatment decisions, and ultimately improve patient outcomes in the management of gastric carcinoma. This research contributes to the ongoing advancement of cancer diagnosis and underscores the transformative potential of artificial intelligence in clinical practice.
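The reported figures (accuracy, sensitivity, specificity, precision, F1) all derive from the binary confusion matrix of stage-one versus stage-two predictions. A minimal sketch of those definitions, using purely illustrative counts rather than the study's data:

```python
def binary_metrics(tp, fp, tn, fn):
    # Standard diagnostic metrics derived from a binary confusion matrix,
    # treating "stage two" as the positive class.
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # recall: positive cases caught
    specificity = tn / (tn + fp)      # negative cases correctly cleared
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision, "f1": f1}

# Illustrative counts only (not the study's raw data): a model that misses
# three positive slides but raises no false alarms.
metrics = binary_metrics(tp=100, fp=0, tn=100, fn=3)
```

With zero false positives, precision and specificity are perfect while sensitivity drops slightly, mirroring the pattern of the reported metrics.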
Affiliation(s)
- Shreya Reddy
- Biomedical Sciences, Creighton University, Omaha, USA
- Avneet Shaheed
- Pathology, University of Illinois at Chicago, Chicago, USA
- Yui Seo
- Medicine, California Northstate University College of Medicine, Elk Grove, USA
- Rakesh Patel
- Internal Medicine, East Tennessee State University Quillen College of Medicine, Johnson City, USA
35
Göndöcs D, Dörfler V. AI in medical diagnosis: AI prediction & human judgment. Artif Intell Med 2024; 149:102769. [PMID: 38462271] [DOI: 10.1016/j.artmed.2024.102769]
Abstract
AI has long been regarded as a panacea for decision-making and many other aspects of knowledge work; as something that will help humans overcome their shortcomings. We believe that AI can be a useful asset to support decision-makers, but not that it should replace decision-makers. Decision-making uses algorithmic analysis, but it is not solely algorithmic analysis; it also involves other factors, many of which are very human, such as creativity, intuition, emotions, feelings, and value judgments. We conducted semi-structured, open-ended research interviews with 17 dermatologists to understand what they expect an AI application to deliver in medical diagnosis. We found four aggregate dimensions along which the thinking of dermatologists can be described: the ways in which our participants chose to interact with AI, responsibility, 'explainability', and the new way of thinking (mindset) needed for working with AI. We believe that our findings will help physicians who might consider using AI in their diagnosis to understand how to use AI beneficially. They will also be useful for AI vendors in improving their understanding of how medics want to use AI in diagnosis. Further research will be needed to examine whether our findings have relevance in the wider medical field and beyond.
Affiliation(s)
- Viktor Dörfler
- University of Strathclyde Business School, United Kingdom
36
Parvaiz A, Nasir ES, Fraz MM. From Pixels to Prognosis: A Survey on AI-Driven Cancer Patient Survival Prediction Using Digital Histology Images. J Imaging Inform Med 2024. [PMID: 38429563] [DOI: 10.1007/s10278-024-01049-2]
Abstract
Survival analysis is an integral part of medical statistics that is extensively utilized to establish prognostic indices for mortality or disease recurrence, assess treatment efficacy, and tailor effective treatment plans. The identification of prognostic biomarkers capable of predicting patient survival is a primary objective in the field of cancer research. With the recent integration of digital histology images into routine clinical practice, a plethora of Artificial Intelligence (AI)-based methods for digital pathology has emerged in the scholarly literature, facilitating patient survival prediction. These methods have demonstrated remarkable proficiency in analyzing and interpreting whole slide images, yielding results comparable to those of expert pathologists. The complexity of AI-driven techniques is magnified by the distinctive characteristics of digital histology images, including their gigapixel size and diverse tissue appearances. Consequently, advanced patch-based methods are employed to effectively extract features that correlate with patient survival. These computational methods significantly enhance survival prediction accuracy and augment prognostic capabilities in cancer patients. The review discusses the methodologies employed in the literature, their performance metrics, ongoing challenges, and potential solutions for future advancements. This paper explains survival analysis and feature extraction methods for analyzing cancer patient data. It also compiles essential acronyms related to cancer precision medicine. Furthermore, it is noteworthy that this is the inaugural review paper in the field. The target audience for this interdisciplinary review comprises AI practitioners, medical statisticians, and progressive oncologists who are enthusiastic about translating AI-driven solutions into clinical practice. We expect this comprehensive review article to guide future research directions in the field of cancer research.
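Survival models of the kind this survey covers are typically evaluated with Harrell's concordance index, which measures how often a higher predicted risk corresponds to an earlier observed event while respecting censoring. A minimal sketch with toy data (not drawn from any study reviewed):

```python
def concordance_index(times, events, risk_scores):
    # Harrell's C-index: the fraction of comparable patient pairs whose
    # predicted risk ordering agrees with their observed survival ordering.
    # times: follow-up times; events: 1 = event observed, 0 = censored;
    # risk_scores: higher score = predicted shorter survival.
    concordant, ties, comparable = 0.0, 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if patient i had an observed event
            # before patient j's follow-up ended.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

# A risk score that exactly reverses survival time is perfectly concordant.
times = [2.0, 5.0, 7.0, 9.0]
events = [1, 1, 0, 1]   # patient at t=7.0 is censored
cidx = concordance_index(times, events, [0.9, 0.6, 0.4, 0.1])
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking; censored patients contribute only as the later member of a pair.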
Affiliation(s)
- Arshi Parvaiz
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Esha Sadia Nasir
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
37
Kumari S, Singh P. Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives. Comput Biol Med 2024; 170:107912. [PMID: 38219643] [DOI: 10.1016/j.compbiomed.2023.107912]
Abstract
Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold true in practice. To address this issue, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey.
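As one concrete example of the feature-alignment family mentioned above, many UDA methods minimize the maximum mean discrepancy (MMD) between source-domain and target-domain feature batches alongside the supervised source loss. A numpy sketch of the biased MMD² estimator with an RBF kernel (the kernel bandwidth and data here are illustrative, not from any surveyed method):

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    # Pairwise RBF (Gaussian) kernel matrix between the rows of a and b.
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def mmd2(source, target, gamma=0.5):
    # Biased estimator of squared maximum mean discrepancy. Feature-alignment
    # UDA minimizes this quantity (computed on network features) so that the
    # source and target feature distributions match.
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st

rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, size=(64, 2))      # "labeled domain" features
same = rng.normal(0.0, 1.0, size=(64, 2))     # same distribution as src
shifted = rng.normal(3.0, 1.0, size=(64, 2))  # domain-shifted features
```

Identically distributed batches give a small MMD², while a domain shift (here, a mean shift) produces a large one, which is exactly the signal a feature-alignment method drives toward zero.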
Affiliation(s)
- Suruchi Kumari
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India
- Pravendra Singh
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India
38
Elazab N, Gab-Allah WA, Elmogy M. A multi-class brain tumor grading system based on histopathological images using a hybrid YOLO and RESNET networks. Sci Rep 2024; 14:4584. [PMID: 38403597] [PMCID: PMC10894864] [DOI: 10.1038/s41598-024-54864-6]
Abstract
Gliomas are primary brain tumors caused by glial cells. These cancers' classification and grading are crucial for prognosis and treatment planning. Deep learning (DL) can potentially improve the digital pathology investigation of brain tumors. In this paper, we developed a technique for visualizing a predictive tumor grading model on histopathology images to help guide doctors by emphasizing characteristics and heterogeneity in forecasts. The proposed technique is a hybrid model based on YOLOv5 and ResNet50. The function of YOLOv5 is to localize and classify the tumor in large histopathological whole slide images (WSIs). The suggested technique incorporates ResNet into the feature extraction of the YOLOv5 framework, and the detection results show that our hybrid network is effective for identifying brain tumors from histopathological images. Next, we estimate the glioma grades using the extreme gradient boosting classifier. The high-dimensional characteristics and nonlinear interactions present in histopathology images are well-handled by this classifier. DL techniques have been used in previous computer-aided diagnosis systems for brain tumor diagnosis. However, by combining the YOLOv5 and ResNet50 architectures into a hybrid model specifically designed for accurate tumor localization and predictive grading within histopathological WSIs, our study presents a new approach that advances the field. By utilizing the advantages of both models, this creative integration goes beyond traditional techniques to produce improved tumor localization accuracy and thorough feature extraction. Additionally, our method ensures stable training dynamics and strong model performance by integrating ResNet50 into the YOLOv5 framework, addressing concerns about gradient explosion. The proposed technique is tested using The Cancer Genome Atlas (TCGA) dataset. During the experiments, our model outperforms other standard methods on the same dataset. Our results indicate that the proposed hybrid model substantially improves tumor subtype discrimination between low-grade glioma (LGG) II and LGG III. With 97.2% accuracy, 97.8% precision, 98.6% sensitivity, and a Dice similarity coefficient of 97%, the proposed model performs well in classifying four grades. These results outperform current approaches for distinguishing LGG from high-grade glioma and provide competitive performance in classifying the four glioma grades reported in the literature.
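Detection pipelines like the YOLOv5 stage described above conventionally rely on non-maximum suppression (NMS) to merge overlapping candidate boxes into a single tumor localization. A minimal greedy NMS sketch (the boxes and threshold are illustrative, not the paper's settings):

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    # Greedy NMS: keep the highest-scoring box, discard remaining boxes that
    # overlap it above the IoU threshold, then repeat with what is left.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Two near-duplicate detections collapse to one; a distant box survives.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
kept = nms(boxes, scores=[0.9, 0.8, 0.7])
```

The surviving boxes (indices into the input list) are then what a downstream grading classifier, such as the gradient-boosting stage described above, would consume.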
Affiliation(s)
- Naira Elazab
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
- Wael A Gab-Allah
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
- Mohammed Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
39
Tian M, Yao Z, Zhou Y, Gan Q, Wang L, Lu H, Wang S, Zhou P, Dai Z, Zhang S, Sun Y, Tang Z, Yu J, Wang X. DeepRisk network: an AI-based tool for digital pathology signature and treatment responsiveness of gastric cancer using whole-slide images. J Transl Med 2024; 22:182. [PMID: 38373959] [PMCID: PMC10877826] [DOI: 10.1186/s12967-023-04838-5]
Abstract
BACKGROUND Digital histopathology provides valuable information for clinical decision-making. We hypothesized that a deep risk network (DeepRisk) based on a digital pathology signature (DPS) derived from whole-slide images could improve the prognostic value of the tumor, node, and metastasis (TNM) staging system and identify chemotherapeutic benefits for gastric cancer (GC). METHODS DeepRisk is a multi-scale, attention-based learning model developed on 1120 GCs in the Zhongshan dataset and validated with two external datasets. We then assessed its association with prognosis and treatment response. Multi-omics analysis and multiplex immunohistochemistry were conducted to evaluate the potential pathogenesis and spatial immune contexture underlying the DPS. RESULTS Multivariate analysis indicated that the DPS was an independent prognosticator with a better C-index (0.84 for overall survival and 0.71 for disease-free survival). Patients with low DPS after neoadjuvant chemotherapy responded favorably to treatment. Spatial analysis indicated that exhausted immune clusters and increased infiltration of CD11b+CD11c+ immune cells were present at the invasive margin of the high-DPS group. Multi-omics data from The Cancer Genome Atlas-Stomach adenocarcinoma (TCGA-STAD) hint at the relevance of the DPS to myeloid-derived suppressor cell infiltration and immune suppression. CONCLUSION The DeepRisk network is a reliable tool that enhances the prognostic value of TNM staging and aids in precise treatment, providing insights into the underlying pathogenic mechanisms.
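The paper describes DeepRisk only as a multi-scale, attention-based model, so its exact architecture is not reproduced here; the core step such attention-based whole-slide models share is pooling a bag of patch features into one slide-level representation via learned attention, in the style of gated-attention multiple-instance learning. A sketch with random stand-in weights (the feature dimensions and weight matrices are illustrative, not learned parameters):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(patch_feats, w_v, w_u, w_a):
    # Gated attention pooling over a bag of patch features: a small network
    # scores every patch, the scores are softmax-normalized into attention
    # weights, and the slide-level representation is the weighted sum.
    gate = np.tanh(patch_feats @ w_v) * (1.0 / (1.0 + np.exp(-(patch_feats @ w_u))))
    attn = softmax(gate @ w_a)       # one weight per patch, summing to 1
    slide_repr = attn @ patch_feats  # attention-weighted sum of patches
    return slide_repr, attn

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))  # 100 patches, 16-d features per patch
w_v = rng.normal(size=(16, 8))      # random stand-ins for learned weights
w_u = rng.normal(size=(16, 8))
w_a = rng.normal(size=(8,))
slide_repr, attn = attention_pool(feats, w_v, w_u, w_a)
```

The attention weights double as an interpretability map: high-weight patches indicate which tissue regions drive the slide-level risk score.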
Affiliation(s)
- Mengxin Tian
- Department of Gastrointestinal Surgery, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China
- Gastric Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Zhao Yao
- Biomedical Engineering Center, School of Information Science and Technology, Fudan University, Shanghai, 200433, China
- The Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, China
- Yufu Zhou
- Department of Immunology and Pathogenic Biology, School of Basic Medical Sciences, Shanghai University of Traditional Chinese Medicine, Shanghai, People's Republic of China
- Qiangjun Gan
- Department of Gastrointestinal Surgery, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China
- Gastric Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Leihao Wang
- Department of Gastrointestinal Surgery, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China
- Gastric Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Hongwei Lu
- Biomedical Engineering Center, School of Information Science and Technology, Fudan University, Shanghai, 200433, China
- The Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, China
- Siyuan Wang
- Department of Gastrointestinal Surgery, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China
- Gastric Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Peng Zhou
- Department of Gastrointestinal Surgery, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China
- Gastric Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Zhiqiang Dai
- Department of General Surgery, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, China
- Xiamen Clinical Research Center for Cancer Therapy, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, China
- Sijia Zhang
- Department of Gastrointestinal Surgery, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China
- Gastric Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Yihong Sun
- Department of Gastrointestinal Surgery, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China
- Gastric Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Zhaoqing Tang
- Department of Gastrointestinal Surgery, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China
- Gastric Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Department of General Surgery, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, China
- Jinhua Yu
- Biomedical Engineering Center, School of Information Science and Technology, Fudan University, Shanghai, 200433, China
- The Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, China
- Xuefei Wang
- Department of Gastrointestinal Surgery, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China
- Gastric Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Department of General Surgery, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, China
- Xiamen Clinical Research Center for Cancer Therapy, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, China
40
Weber A, Enderle-Ammour K, Kurowski K, Metzger MC, Poxleitner P, Werner M, Rothweiler R, Beck J, Straehle J, Schmelzeisen R, Steybe D, Bronsert P. AI-Based Detection of Oral Squamous Cell Carcinoma with Raman Histology. Cancers (Basel) 2024; 16:689. [PMID: 38398080] [PMCID: PMC10886627] [DOI: 10.3390/cancers16040689]
Abstract
Stimulated Raman Histology (SRH) employs the stimulated Raman scattering (SRS) of photons at biomolecules in tissue samples to generate histological images. Subsequent pathological analysis allows for an intraoperative evaluation without the need for sectioning and staining. The objective of this study was to investigate a deep learning-based classification of oral squamous cell carcinoma (OSCC) and the sub-classification of non-malignant tissue types, as well as to compare the performance of the classifier between SRS and SRH images. Raman shifts were measured at wavenumbers k₁ = 2845 cm⁻¹ and k₂ = 2930 cm⁻¹. SRS images were transformed into SRH images resembling traditional H&E-stained frozen sections. The annotation of 6 tissue types was performed on images obtained from 80 tissue samples from eight OSCC patients. A VGG19-based convolutional neural network was then trained on 64 SRS images (and corresponding SRH images) and tested on 16. A balanced accuracy of 0.90 (0.87 for SRH images) and F1-scores of 0.91 (0.91 for SRH) for stroma, 0.98 (0.96 for SRH) for adipose tissue, 0.90 (0.87 for SRH) for squamous epithelium, 0.92 (0.76 for SRH) for muscle, 0.87 (0.90 for SRH) for glandular tissue, and 0.88 (0.87 for SRH) for tumor were achieved. The results of this study demonstrate the suitability of deep learning for the intraoperative identification of tissue types directly on SRS and SRH images.
Affiliation(s)
- Andreas Weber
- Institute for Surgical Pathology, Medical Center, University of Freiburg, 79106 Freiburg, Germany
- Faculty of Biology, University of Freiburg, 79104 Freiburg, Germany
- Kathrin Enderle-Ammour
- Institute for Surgical Pathology, Medical Center, University of Freiburg, 79106 Freiburg, Germany
- Konrad Kurowski
- Institute for Surgical Pathology, Medical Center, University of Freiburg, 79106 Freiburg, Germany
- Tumorbank Comprehensive Cancer Center Freiburg, Medical Center, University of Freiburg, 79106 Freiburg, Germany
- Core Facility for Histopathology and Digital Pathology, Medical Center, University of Freiburg, 79106 Freiburg, Germany
- Marc C. Metzger
- Department of Oral and Maxillofacial Surgery, Medical Center, University of Freiburg, 79106 Freiburg, Germany
- Philipp Poxleitner
- Department of Oral and Maxillofacial Surgery, Medical Center, University of Freiburg, 79106 Freiburg, Germany
- Center for Advanced Surgical Tissue Analysis (CAST), University of Freiburg, 79106 Freiburg, Germany
- Department of Oral and Maxillofacial Surgery and Facial Plastic Surgery, University Hospital, LMU Munich, 80337 Munich, Germany
- Martin Werner
- Institute for Surgical Pathology, Medical Center, University of Freiburg, 79106 Freiburg, Germany
- Tumorbank Comprehensive Cancer Center Freiburg, Medical Center, University of Freiburg, 79106 Freiburg, Germany
- René Rothweiler
- Department of Oral and Maxillofacial Surgery, Medical Center, University of Freiburg, 79106 Freiburg, Germany
- Jürgen Beck
- Center for Advanced Surgical Tissue Analysis (CAST), University of Freiburg, 79106 Freiburg, Germany
- Department of Neurosurgery, Medical Center, University of Freiburg, 79106 Freiburg, Germany
- Jakob Straehle
- Center for Advanced Surgical Tissue Analysis (CAST), University of Freiburg, 79106 Freiburg, Germany
- Department of Neurosurgery, Medical Center, University of Freiburg, 79106 Freiburg, Germany
- Rainer Schmelzeisen
- Department of Oral and Maxillofacial Surgery, Medical Center, University of Freiburg, 79106 Freiburg, Germany
- Center for Advanced Surgical Tissue Analysis (CAST), University of Freiburg, 79106 Freiburg, Germany
- David Steybe
- Department of Oral and Maxillofacial Surgery, Medical Center, University of Freiburg, 79106 Freiburg, Germany
- Center for Advanced Surgical Tissue Analysis (CAST), University of Freiburg, 79106 Freiburg, Germany
- Department of Oral and Maxillofacial Surgery and Facial Plastic Surgery, University Hospital, LMU Munich, 80337 Munich, Germany
- Peter Bronsert
- Institute for Surgical Pathology, Medical Center, University of Freiburg, 79106 Freiburg, Germany
- Tumorbank Comprehensive Cancer Center Freiburg, Medical Center, University of Freiburg, 79106 Freiburg, Germany
- Core Facility for Histopathology and Digital Pathology, Medical Center, University of Freiburg, 79106 Freiburg, Germany
| |
|
41
|
Wang R, Qiu Y, Wang T, Wang M, Jin S, Cong F, Zhang Y, Xu H. MIHIC: a multiplex IHC histopathological image classification dataset for lung cancer immune microenvironment quantification. Front Immunol 2024; 15:1334348. [PMID: 38370413] [PMCID: PMC10869447] [DOI: 10.3389/fimmu.2024.1334348]
Abstract
Background Immunohistochemistry (IHC) is a widely used laboratory technique for cancer diagnosis that selectively binds specific antibodies to target proteins in tissue samples and then makes the bound proteins visible through chemical staining. Deep learning approaches have the potential to quantify the tumor immune micro-environment (TIME) in digitized IHC histological slides. However, publicly available IHC datasets explicitly collected for in-depth TIME analysis are lacking.
Method In this paper, a Multiplex IHC Histopathological Image Classification (MIHIC) dataset is created based on manual annotations by pathologists and made publicly available for exploring deep learning models that quantify variables associated with the TIME in lung cancer. The MIHIC dataset comprises a total of 309,698 multiplex IHC-stained histological image patches encompassing seven distinct tissue types: Alveoli, Immune cells, Necrosis, Stroma, Tumor, Other, and Background. Using the MIHIC dataset, we conduct a series of experiments with both convolutional neural networks (CNNs) and transformer models to benchmark IHC-stained histological image classification. We then quantify lung cancer immune microenvironment variables with the top-performing model on tissue microarray (TMA) cores and use these variables to predict patients' survival outcomes.
Result Experiments show that transformer models tend to perform slightly better than CNN models in histological image classification, with the best models of both types reaching an accuracy of 0.811 on the MIHIC testing set. The automatically quantified TIME variables, which reflect the proportions of immune cells over stroma and of tumor over tissue core, show prognostic value for overall survival of lung cancer patients.
Conclusion To the best of our knowledge, MIHIC is the first publicly available lung cancer IHC histopathological dataset that includes images with 12 different IHC stains, meticulously annotated by multiple pathologists across 7 distinct categories. This dataset holds significant potential for researchers to explore novel techniques for quantifying the TIME and advancing our understanding of the interactions between the immune system and tumors.
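The TIME variables this study derives (immune cells over stroma, tumor over tissue core) reduce to counting patch-level predictions per tissue class. A minimal sketch under that reading; the `time_variables` helper and the patch counts are hypothetical:

```python
from collections import Counter

def time_variables(patch_labels):
    """Quantify two TIME variables from patch-level tissue predictions:
    immune cells relative to stroma, and tumor relative to the whole
    tissue core (all patches except Background)."""
    counts = Counter(patch_labels)
    tissue = sum(n for label, n in counts.items() if label != "Background")
    stroma = counts["Stroma"]
    immune_over_stroma = counts["Immune cells"] / stroma if stroma else 0.0
    tumor_over_tissue = counts["Tumor"] / tissue if tissue else 0.0
    return immune_over_stroma, tumor_over_tissue

# Hypothetical predictions for one tissue microarray core.
labels = (["Tumor"] * 40 + ["Stroma"] * 30 + ["Immune cells"] * 15 +
          ["Alveoli"] * 10 + ["Necrosis"] * 5 + ["Background"] * 20)
ios, tot = time_variables(labels)
print(ios, tot)  # → 0.5 0.4
```

Such core-level ratios can then feed a standard survival analysis, as the study does for overall survival.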
Affiliation(s)
- Ranran Wang
- Affiliated Cancer Hospital, Dalian University of Technology, Dalian, China
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, China
| | - Yusong Qiu
- Department of Pathology, Liaoning Cancer Hospital and Institute, Shenyang, China
| | - Tong Wang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, China
| | - Mingkang Wang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, China
| | - Shan Jin
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, China
| | - Fengyu Cong
- Affiliated Cancer Hospital, Dalian University of Technology, Dalian, China
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, China
- Key Laboratory of Integrated Circuit and Biomedical Electronic System, Dalian University of Technology, Dalian, Liaoning, China
- Faculty of Information Technology, University of Jyvaskyla, Jyvaskyla, Finland
| | - Yong Zhang
- Department of Pathology, Liaoning Cancer Hospital and Institute, Shenyang, China
| | - Hongming Xu
- Affiliated Cancer Hospital, Dalian University of Technology, Dalian, China
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, China
- Key Laboratory of Integrated Circuit and Biomedical Electronic System, Dalian University of Technology, Dalian, Liaoning, China
| |
|
42
|
Mahbub T, Obeid A, Javed S, Dias J, Hassan T, Werghi N. Center-Focused Affinity Loss for Class Imbalance Histology Image Classification. IEEE J Biomed Health Inform 2024; 28:952-963. [PMID: 37999960] [DOI: 10.1109/jbhi.2023.3336372]
Abstract
Early-stage cancer diagnosis potentially improves the chances of survival for many cancer patients worldwide. Manual examination of Whole Slide Images (WSIs) for tumor-microenvironment analysis is a time-consuming task. To overcome this limitation, the combination of deep learning with computational pathology has been proposed to assist pathologists in efficiently assessing cancerous spread. Nevertheless, existing deep learning methods are ill-equipped to handle fine-grained histopathology datasets. This is because these models are constrained by the conventional softmax loss function, which does not encourage them to learn distinct representational embeddings for similarly textured WSIs with imbalanced data distributions. To address this problem, we propose a novel center-focused affinity loss (CFAL) function that 1) constructs uniformly distributed class prototypes in the feature space, 2) penalizes difficult samples, 3) minimizes intra-class variations, and 4) places greater emphasis on learning minority-class features. We evaluated the performance of the proposed CFAL loss function on two publicly available breast and colon cancer datasets with varying levels of class imbalance. The proposed CFAL function shows better discrimination ability than popular loss functions such as ArcFace, CosFace, and Focal loss. Moreover, it outperforms several SOTA methods for histology image classification across both datasets.
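The abstract's intuition, pulling embeddings toward per-class prototypes while classifying by distance to those prototypes, can be sketched as a simplified center-based loss. This is not the paper's exact CFAL formulation (no difficulty weighting or minority-class emphasis); `center_affinity_loss` and all values below are illustrative:

```python
import numpy as np

def center_affinity_loss(embeddings, labels, centers, lam=0.5):
    """Simplified center-based loss sketch: classify each embedding by its
    negative squared distance to fixed class centers (softmax cross-entropy),
    plus a penalty pulling embeddings toward their own class center
    (intra-class compactness)."""
    d2 = ((embeddings[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (N, C)
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    compact = d2[np.arange(len(labels)), labels].mean()
    return ce + lam * compact

# Toy check: correctly assigned labels yield a smaller loss than swapped ones.
centers = np.array([[0.0, 0.0], [4.0, 4.0]])
emb = np.array([[0.1, -0.1], [3.9, 4.2]])
good = center_affinity_loss(emb, np.array([0, 1]), centers)
bad = center_affinity_loss(emb, np.array([1, 0]), centers)
print(good < bad)  # → True
```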
|
43
|
Wu X, Li W, Tu H. Big data and artificial intelligence in cancer research. Trends Cancer 2024; 10:147-160. [PMID: 37977902] [DOI: 10.1016/j.trecan.2023.10.006]
Abstract
The field of oncology has witnessed an extraordinary surge in the application of big data and artificial intelligence (AI). AI development has made multiscale and multimodal data fusion and analysis possible. A new era of extracting information from complex big data is rapidly evolving. However, challenges related to efficient data curation, in-depth analysis, and utilization remain. We provide a comprehensive overview of the current state of the art in big data and computational analysis, highlighting key applications, challenges, and future opportunities in cancer research. By sketching the current landscape, we seek to foster a deeper understanding and advancement of big data utilization in oncology and to call for interdisciplinary collaborations, ultimately contributing to improved patient outcomes and a more profound understanding of cancer.
Affiliation(s)
- Xifeng Wu
- Department of Big Data in Health Science, School of Public Health, Center of Clinical Big Data and Analytics of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China; National Institute for Data Science in Health and Medicine, Zhejiang University, Hangzhou, Zhejiang, China.
| | - Wenyuan Li
- Department of Big Data in Health Science, School of Public Health, Center of Clinical Big Data and Analytics of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China; The Key Laboratory of Intelligent Preventive Medicine of Zhejiang Province, Hangzhou, Zhejiang, China
| | - Huakang Tu
- Department of Big Data in Health Science, School of Public Health, Center of Clinical Big Data and Analytics of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China; Cancer Center, Zhejiang University, Hangzhou, Zhejiang, China
| |
|
44
|
Luo X, Qu L, Guo Q, Song Z, Wang M. Negative Instance Guided Self-Distillation Framework for Whole Slide Image Analysis. IEEE J Biomed Health Inform 2024; 28:964-975. [PMID: 37494153] [DOI: 10.1109/jbhi.2023.3298798]
Abstract
Histopathology image classification is an important clinical task, and current deep learning-based whole-slide image (WSI) classification methods typically cut WSIs into small patches and cast the problem as multi-instance learning. The mainstream approach is to train a bag-level classifier, but its performance on both slide classification and positive patch localization is limited because instance-level information is not fully exploited. In this article, we propose a negative-instance-guided self-distillation framework to directly train an instance-level classifier end-to-end. Instead of depending only on the self-supervised training of the teacher and student classifiers in a typical self-distillation framework, we feed true negative instances into the student classifier to guide it to better distinguish positive from negative instances. In addition, we propose a prediction bank that constrains the distribution of pseudo instance labels generated by the teacher classifier, preventing self-distillation from degenerating into classifying all instances as negative. We conduct extensive experiments and analysis on three publicly available pathological datasets, CAMELYON16, PANDA, and TCGA, as well as an in-house pathological dataset for cervical cancer lymph node metastasis prediction. The results show that our method outperforms existing methods by a large margin. Code will be publicly available.
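The prediction-bank idea, preventing the teacher's pseudo-labels from collapsing to all-negative, can be sketched as a guard that enforces a minimum fraction of positive instances. This is a simplified stand-in for the paper's mechanism, not its actual implementation; `pseudo_labels_with_bank` and the scores are assumptions:

```python
import numpy as np

def pseudo_labels_with_bank(teacher_scores, min_pos_frac=0.05):
    """Threshold the teacher's instance scores at 0.5 to get pseudo-labels,
    but if fewer than `min_pos_frac` of instances come out positive, force
    the top-scoring instances positive instead -- a degeneration guard in
    the spirit of a prediction bank constraining the label distribution."""
    labels = (teacher_scores >= 0.5).astype(int)
    need = int(np.ceil(min_pos_frac * len(teacher_scores)))
    if labels.sum() < need:
        top = np.argsort(teacher_scores)[-need:]
        labels[:] = 0
        labels[top] = 1
    return labels

scores = np.array([0.1, 0.2, 0.05, 0.3, 0.45])  # teacher never crosses 0.5
print(pseudo_labels_with_bank(scores, min_pos_frac=0.2))  # → [0 0 0 0 1]
```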
|
45
|
Graham S, Vu QD, Jahanifar M, Weigert M, Schmidt U, Zhang W, Zhang J, Yang S, Xiang J, Wang X, Rumberger JL, Baumann E, Hirsch P, Liu L, Hong C, Aviles-Rivero AI, Jain A, Ahn H, Hong Y, Azzuni H, Xu M, Yaqub M, Blache MC, Piégu B, Vernay B, Scherr T, Böhland M, Löffler K, Li J, Ying W, Wang C, Snead D, Raza SEA, Minhas F, Rajpoot NM. CoNIC Challenge: Pushing the frontiers of nuclear detection, segmentation, classification and counting. Med Image Anal 2024; 92:103047. [PMID: 38157647] [DOI: 10.1016/j.media.2023.103047]
Abstract
Nuclear detection, segmentation and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state of the art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.
Affiliation(s)
- Simon Graham
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; Histofy Ltd, Birmingham, United Kingdom.
| | - Quoc Dang Vu
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; Histofy Ltd, Birmingham, United Kingdom
| | - Mostafa Jahanifar
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
| | - Martin Weigert
- Institute of Bioengineering, School of Life Sciences, EPFL, Lausanne, Switzerland
| | | | - Wenhua Zhang
- The Department of Computer Science, The University of Hong Kong, Hong Kong
| | | | - Sen Yang
- College of Biomedical Engineering, Sichuan University, Chengdu, China
| | - Jinxi Xiang
- Department of Precision Instruments, Tsinghua University, Beijing, China
| | - Xiyue Wang
- College of Computer Science, Sichuan University, Chengdu, China
| | - Josef Lorenz Rumberger
- Max-Delbrueck-Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany; Humboldt University of Berlin, Faculty of Mathematics and Natural Sciences, Berlin, Germany; Charité University Medicine, Berlin, Germany
| | | | - Peter Hirsch
- Max-Delbrueck-Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany; Humboldt University of Berlin, Faculty of Mathematics and Natural Sciences, Berlin, Germany
| | - Lihao Liu
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, United Kingdom
| | - Chenyang Hong
- Department of Computer Science and Engineering, Chinese University of Hong Kong, Hong Kong
| | - Angelica I Aviles-Rivero
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, United Kingdom
| | - Ayushi Jain
- Softsensor.ai, Bridgewater, NJ, United States of America; PRR.ai, TX, United States of America
| | - Heeyoung Ahn
- Department of R&D Center, Arontier Co. Ltd, Seoul, Republic of Korea
| | - Yiyu Hong
- Department of R&D Center, Arontier Co. Ltd, Seoul, Republic of Korea
| | - Hussam Azzuni
- Computer Vision Department, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
| | - Min Xu
- Computer Vision Department, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
| | - Mohammad Yaqub
- Computer Vision Department, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
| | | | - Benoît Piégu
- CNRS, IFCE, INRAE, Université de Tours, PRC, 3780, Nouzilly, France
| | - Bertrand Vernay
- Institut de Génétique et de Biologie Moléculaire et Cellulaire, Illkirch, France; Centre National de la Recherche Scientifique, UMR7104, Illkirch, France; Institut National de la Santé et de la Recherche Médicale, INSERM, U1258, Illkirch, France; Université de Strasbourg, Strasbourg, France
| | - Tim Scherr
- Institute for Automation and Applied Informatics Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
| | - Moritz Böhland
- Institute for Automation and Applied Informatics Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
| | - Katharina Löffler
- Institute for Automation and Applied Informatics Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
| | - Jiachen Li
- School of software engineering, South China University of Technology, Guangzhou, China
| | - Weiqin Ying
- School of software engineering, South China University of Technology, Guangzhou, China
| | - Chixin Wang
- School of software engineering, South China University of Technology, Guangzhou, China
| | - David Snead
- Histofy Ltd, Birmingham, United Kingdom; Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, United Kingdom; Division of Biomedical Sciences, Warwick Medical School, University of Warwick, Coventry, United Kingdom
| | - Shan E Ahmed Raza
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
| | - Fayyaz Minhas
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
| | - Nasir M Rajpoot
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; Histofy Ltd, Birmingham, United Kingdom; Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, United Kingdom
| |
|
46
|
Haq I, Mazhar T, Asif RN, Ghadi YY, Ullah N, Khan MA, Al-Rasheed A. YOLO and residual network for colorectal cancer cell detection and counting. Heliyon 2024; 10:e24403. [PMID: 38304780] [PMCID: PMC10831604] [DOI: 10.1016/j.heliyon.2024.e24403]
Abstract
The HT-29 cell line, derived from human colon cancer, is valuable for biological and cancer research applications. Early detection is crucial for improving the chances of survival, and researchers are introducing new techniques for accurate cancer diagnosis. This study introduces an efficient deep learning-based method for detecting and counting colorectal cancer cells (HT-29). The colorectal cancer cell line was procured from a company. The cancer cells were then cultured, and a transwell experiment was conducted in the lab to collect a dataset of colorectal cancer cell images via fluorescence microscopy. Of the 566 images, 80% were allocated to the training set and the remaining 20% to the testing set. HT-29 cell detection and counting is performed by integrating the YOLOv2 detector with ResNet-50 and ResNet-18 backbones. The accuracy achieved with ResNet-18 is 98.70% and with ResNet-50 is 96.66%. The study achieves its primary objective of detecting and quantifying congested and overlapping colorectal cancer cells within the images. This work constitutes a significant advance in overlapping cancer cell detection and counting, opening new avenues for research and clinical applications. Researchers can extend the study by exploring variations in ResNet and YOLO architectures to optimize detection performance, and further investigation into real-time deployment strategies would enhance the practical applicability of these models.
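Counting congested, overlapping cells from detector output typically hinges on confidence filtering plus non-maximum suppression, so that overlapping detections of one cell are counted once. A generic numpy sketch of that counting step (not the paper's YOLOv2 pipeline; all boxes, scores, and thresholds are hypothetical):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def count_cells(boxes, scores, conf_thr=0.5, iou_thr=0.5):
    """Count detected cells: drop low-confidence boxes, then apply greedy
    non-maximum suppression so heavily overlapping detections count once."""
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= conf_thr]
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in kept):
            kept.append(i)
    return len(kept)

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [30, 30, 40, 40]], float)
scores = np.array([0.9, 0.8, 0.7])
print(count_cells(boxes, scores))  # → 2 (the two overlapping boxes merge)
```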
Affiliation(s)
- Inayatul Haq
- School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, 450001, China
| | - Tehseen Mazhar
- Department of Computer Science, Virtual University of Pakistan, Lahore, 55150, Pakistan
| | - Rizwana Naz Asif
- School of Computer Science, National College of Business Administration and Economics, Lahore, 54000, Pakistan
| | - Yazeed Yasin Ghadi
- Department of Computer Science and Software Engineering, Al Ain University, Abu Dhabi, 12555, United Arab Emirates
| | - Najib Ullah
- Faculty of Pharmacy and Health Sciences, Department of Pharmacy, University of Balochistan, Quetta, 08770, Pakistan
| | - Muhammad Amir Khan
- School of Computing Sciences, College of Computing, Informatics and Mathematics, Universiti Teknologi MARA, 40450, Shah Alam, Selangor, Malaysia
| | - Amal Al-Rasheed
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
| |
|
47
|
Doğan RS, Yılmaz B. Histopathology image classification: highlighting the gap between manual analysis and AI automation. Front Oncol 2024; 13:1325271. [PMID: 38298445] [PMCID: PMC10827850] [DOI: 10.3389/fonc.2023.1325271]
Abstract
The field of histopathological image analysis has evolved significantly with the advent of digital pathology, leading to the development of automated models capable of classifying tissues and structures within diverse pathological images. Artificial intelligence algorithms, such as convolutional neural networks, have shown remarkable capabilities in pathology image analysis tasks, including tumor identification, metastasis detection, and patient prognosis assessment. However, traditional manual analysis methods have generally shown low accuracy in diagnosing colorectal cancer from histopathological images. This study investigates AI-based classification of histopathological images and compares it with manual analysis based on the histogram of oriented gradients (HOG) method. The study develops an AI-based architecture for image classification that aims to achieve high performance with low complexity through specific parameters and layers. We investigate the complicated task of histopathological image classification, focusing on categorizing nine distinct tissue types. Our research used open-source, multi-centered image datasets comprising 100,000 non-overlapping images from 86 patients for training and 7,180 non-overlapping images from 50 patients for testing. The study compares two distinct approaches, training artificial intelligence-based algorithms and manual machine learning models, to automate tissue classification. This research comprises two primary classification tasks: binary classification, distinguishing between normal and tumor tissues, and multi-class classification encompassing nine tissue types: adipose, background, debris, stroma, lymphocytes, mucus, smooth muscle, normal colon mucosa, and tumor. Our findings show that artificial intelligence-based systems can achieve 0.91 and 0.97 accuracy in binary and multi-class classification, respectively.
In comparison, histogram of oriented gradients features with a Random Forest classifier achieved accuracy rates of 0.75 and 0.44 in binary and multi-class classification, respectively. Our artificial intelligence-based methods are generalizable, allowing them to be integrated into histopathology diagnostic procedures to improve diagnostic accuracy and efficiency. The CNN model outperforms existing machine learning techniques, demonstrating its potential to improve the precision and effectiveness of histopathology image analysis. This research emphasizes the importance of maintaining data consistency and applying normalization methods during data preparation. It particularly highlights the potential of artificial intelligence to assess histopathological images.
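The manual baseline in this study rests on histogram-of-oriented-gradients features. The core step, binning gradient orientations weighted by gradient magnitude, can be sketched as follows; this is a toy version without HOG's cell/block structure and normalization, and `orientation_histogram` is illustrative:

```python
import numpy as np

def orientation_histogram(img, bins=9):
    """Core of HOG-style features: compute image gradients, then bin
    gradient orientations over [0, 180) degrees, weighting each pixel's
    vote by its gradient magnitude. Real HOG adds cells, blocks, and
    block normalization on top of this."""
    gy, gx = np.gradient(img.astype(float))          # row and column gradients
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 180.0), weights=mag)
    total = hist.sum()
    return hist / total if total else hist

# A vertical step edge produces horizontal gradients (orientation ~ 0 deg).
img = np.zeros((8, 8))
img[:, 4:] = 1.0
h = orientation_histogram(img)
print(np.argmax(h))  # → 0
```

In the study's pipeline, feature vectors like this would then be fed to a Random Forest classifier.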
Affiliation(s)
- Refika Sultan Doğan
- Department of Bioengineering, Abdullah Gül University, Kayseri, Türkiye
- Biomedical Instrumentation and Signal Analysis Laboratory, Abdullah Gül University, Kayseri, Türkiye
| | - Bülent Yılmaz
- Biomedical Instrumentation and Signal Analysis Laboratory, Abdullah Gül University, Kayseri, Türkiye
- Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, Türkiye
- Department of Electrical Engineering, Gulf University for Science and Technology, Mishref, Kuwait
| |
|
48
|
Li Z, Yan C, Zhang X, Gharibi G, Yin Z, Jiang X, Malin BA. Split Learning for Distributed Collaborative Training of Deep Learning Models in Health Informatics. AMIA Annu Symp Proc 2024; 2023:1047-1056. [PMID: 38222326] [PMCID: PMC10785879]
Abstract
Deep learning continues to rapidly evolve and is now demonstrating remarkable potential for numerous medical prediction tasks. However, realizing deep learning models that generalize across healthcare organizations is challenging. This is due, in part, to the inherent siloed nature of these organizations and patient privacy requirements. To address this problem, we illustrate how split learning can enable collaborative training of deep learning models across disparate and privately maintained health datasets, while keeping the original records and model parameters private. We introduce a new privacy-preserving distributed learning framework that offers a higher level of privacy compared to conventional federated learning. We use several biomedical imaging and electronic health record (EHR) datasets to show that deep learning models trained via split learning can achieve highly similar performance to their centralized and federated counterparts while greatly improving computational efficiency and reducing privacy risks.
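The split learning setup described here can be sketched as a forward pass divided at a cut layer: the client computes activations on its private records, and only those cut-layer activations ("smashed data") cross to the server, never the raw data or the server's parameters. A toy numpy sketch under assumed layer shapes (not the paper's framework):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-segment split of a small network: the client holds the
# layers up to the cut, the server holds the remaining prediction head.
W_client = rng.normal(size=(16, 8))
W_server = rng.normal(size=(8, 2))

def client_forward(x):
    """Client-side segment: local feature extraction with a ReLU layer."""
    return np.maximum(x @ W_client, 0.0)

def server_forward(smashed):
    """Server-side segment: completes the forward pass on smashed data."""
    return smashed @ W_server

x = rng.normal(size=(4, 16))        # private records never leave the client
smashed = client_forward(x)         # only these activations are transmitted
logits = server_forward(smashed)
print(smashed.shape, logits.shape)  # → (4, 8) (4, 2)
```

Training would backpropagate gradients across the same cut in reverse, so each party updates only its own segment.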
Affiliation(s)
| | - Chao Yan
- Vanderbilt University Medical Center, Nashville, TN
| | | | | | - Zhijun Yin
- Vanderbilt University, Nashville, TN
- Vanderbilt University Medical Center, Nashville, TN
| | | | - Bradley A Malin
- Vanderbilt University, Nashville, TN
- Vanderbilt University Medical Center, Nashville, TN
| |
|
49
|
Kockwelp J, Thiele S, Bartsch J, Haalck L, Gromoll J, Schlatt S, Exeler R, Bleckmann A, Lenz G, Wolf S, Steffen B, Berdel WE, Schliemann C, Risse B, Angenendt L. Deep learning predicts therapy-relevant genetics in acute myeloid leukemia from Pappenheim-stained bone marrow smears. Blood Adv 2024; 8:70-79. [PMID: 37967385] [PMCID: PMC10787267] [DOI: 10.1182/bloodadvances.2023011076]
Abstract
The detection of genetic aberrations is crucial for early therapy decisions in acute myeloid leukemia (AML) and recommended for all patients. Because genetic testing is expensive and time consuming, a need remains for cost-effective, fast, and broadly accessible tests to predict these aberrations in this aggressive malignancy. Here, we developed a novel, fully automated, end-to-end deep learning pipeline to predict genetic aberrations directly from single-cell images from scans of conventionally stained bone marrow smears as early as the day of diagnosis. We used this pipeline to compile a multiterabyte data set of >2,000,000 single-cell images from diagnostic samples of 408 patients with AML. These images were then used to train convolutional neural networks for the prediction of various therapy-relevant genetic alterations. Moreover, we created a temporal test cohort data set of >444,000 single-cell images from a further 71 patients with AML. We show that the models from our pipeline can significantly predict these subgroups, with high areas under the receiver operating characteristic curve. Potential genotype-phenotype links were visualized with 2 different strategies. Our pipeline holds the potential to be used as a fast and inexpensive automated tool to screen patients with AML for therapy-relevant genetic aberrations directly from routine, conventionally stained bone marrow smears on the day of diagnosis. It also creates a foundation for developing similar approaches for other bone marrow disorders in the future.
Affiliation(s)
- Jacqueline Kockwelp
- Institute for Geoinformatics, University of Münster, Münster, Germany
- Institute for Computer Science, University of Münster, Münster, Germany
- Centre of Reproductive Medicine and Andrology, Institute of Reproductive and Regenerative Biology, Münster, Germany
| | - Sebastian Thiele
- Institute for Geoinformatics, University of Münster, Münster, Germany
- Institute for Computer Science, University of Münster, Münster, Germany
| | - Jannis Bartsch
- Department of Medicine A, University Hospital Münster, Münster, Germany
| | - Lars Haalck
- Institute for Geoinformatics, University of Münster, Münster, Germany
- Institute for Computer Science, University of Münster, Münster, Germany
| | - Jörg Gromoll
- Centre of Reproductive Medicine and Andrology, Institute of Reproductive and Regenerative Biology, Münster, Germany
| | - Stefan Schlatt
- Centre of Reproductive Medicine and Andrology, Institute of Reproductive and Regenerative Biology, Münster, Germany
| | - Rita Exeler
- Institute of Human Genetics, University Hospital Münster, Münster, Germany
| | - Annalen Bleckmann
- Department of Medicine A, University Hospital Münster, Münster, Germany
| | - Georg Lenz
- Department of Medicine A, University Hospital Münster, Münster, Germany
| | - Sebastian Wolf
- Department of Medicine II, University Hospital Frankfurt, Frankfurt, Germany
| | - Björn Steffen
- Department of Medicine II, University Hospital Frankfurt, Frankfurt, Germany
| | | | | | - Benjamin Risse
- Institute for Geoinformatics, University of Münster, Münster, Germany
- Institute for Computer Science, University of Münster, Münster, Germany
| | - Linus Angenendt
- Department of Medicine A, University Hospital Münster, Münster, Germany
- Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland
| |
|
50
|
Yang J, Huang J, Han D, Ma X. Artificial Intelligence Applications in the Treatment of Colorectal Cancer: A Narrative Review. Clin Med Insights Oncol 2024; 18:11795549231220320. [PMID: 38187459] [PMCID: PMC10771756] [DOI: 10.1177/11795549231220320]
Abstract
Colorectal cancer is the third most prevalent cancer worldwide, and its treatment has been a demanding clinical problem. Beyond traditional surgical therapy and chemotherapy, newly revealed molecular mechanisms diversify therapeutic approaches for colorectal cancer. However, the selection of personalized treatment among multiple treatment options has become another challenge in the era of precision medicine. Artificial intelligence has recently been increasingly investigated in the treatment of colorectal cancer. This narrative review mainly discusses the applications of artificial intelligence in the treatment of colorectal cancer patients. A comprehensive literature search was conducted in MEDLINE, EMBASE, and Web of Science to identify relevant papers, resulting in 49 articles being included. The results showed that, based on different categories of data, artificial intelligence can predict treatment outcomes and essential guidance information of traditional and novel therapies, thus enabling individualized treatment strategy selection for colorectal cancer patients. Some frequently implemented machine learning algorithms and deep learning frameworks have also been employed for long-term prognosis prediction in patients with colorectal cancer. Overall, artificial intelligence shows encouraging results in treatment strategy selection and prognosis evaluation for colorectal cancer patients.
Affiliation(s)
- Jiaqing Yang
- Department of Biotherapy, West China Hospital and State Key Laboratory of Biotherapy, Sichuan University, Chengdu, China
- West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
| | - Jing Huang
- Department of Ultrasound, West China Hospital, Sichuan University, Chengdu, China
| | - Deqian Han
- Department of Oncology, West China School of Public Health and West China Fourth Hospital, Sichuan University, Chengdu, China
| | - Xuelei Ma
- Department of Biotherapy, West China Hospital and State Key Laboratory of Biotherapy, Sichuan University, Chengdu, China
| |
|