1
Makhlouf Y, Singh VK, Craig S, McArdle A, French D, Loughrey MB, Oliver N, Acevedo JB, O’Reilly P, James JA, Maxwell P, Salto-Tellez M. True-T - Improving T-cell response quantification with holistic artificial intelligence based prediction in immunohistochemistry images. Comput Struct Biotechnol J 2024; 23:174-185. PMID: 38146436; PMCID: PMC10749253; DOI: 10.1016/j.csbj.2023.11.048.
Abstract
The immune response associated with oncogenesis and potential oncological therapeutic interventions has dominated the field of cancer research over the last decade. T-cell lymphocytes in the tumor microenvironment are a crucial aspect of cancer's adaptive immunity, and the quantification of T-cells in specific cancer types has been suggested as a potential diagnostic aid. However, this is currently not part of routine diagnostics. To address this challenge, we present a new method called True-T, which employs artificial intelligence-based techniques to quantify T-cells in colorectal cancer (CRC) using immunohistochemistry (IHC) images. True-T analyses the chromogenic tissue hybridization signal of three widely recognized T-cell markers (CD3, CD4, and CD8). Our method employs a pipeline consisting of three stages: T-cell segmentation, density estimation from the segmented mask, and prediction of individual five-year survival rates. In the first stage, we utilize the U-Net method, where a pre-trained ResNet-34 is employed as an encoder to extract clinically relevant T-cell features. The segmentation model is trained and evaluated individually, demonstrating its generalization in detecting the CD3, CD4, and CD8 biomarkers in IHC images. In the second stage, the density of T-cells is estimated using the predicted mask, which serves as a crucial indicator for patient survival statistics in the third stage. This approach was developed and tested in 1041 patients from four reference diagnostic institutions, ensuring broad applicability. The clinical effectiveness of True-T is demonstrated in stages II-IV CRC by offering valuable prognostic information that surpasses previous quantitative gold standards, opening possibilities for potential clinical applications. Finally, to evaluate the robustness and broader applicability of our approach without additional training, we assessed the universal accuracy of the CD3 component of the True-T algorithm across 13 distinct solid tumors.
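The abstract's second stage (density estimation from the predicted segmentation mask) reduces, in its simplest form, to counting stained pixels per unit tissue area. The sketch below illustrates only that computation and is not the authors' published code; the function name, the plain-list mask representation, and the deliberately coarse toy pixel size are illustrative assumptions.

```python
def tcell_density(mask, microns_per_pixel):
    """Toy density estimate from a binary segmentation mask of an IHC tile.

    mask: 2-D list of 0/1 predictions (1 = pixel segmented as a T-cell marker).
    microns_per_pixel: physical pixel size of the scanned slide.
    Returns stained pixels per mm^2 of tile area - a crude proxy; the
    published pipeline works at the cell rather than the pixel level.
    """
    positive_px = sum(sum(row) for row in mask)
    total_px = sum(len(row) for row in mask)
    # Tile area in mm^2: each pixel covers (microns_per_pixel / 1000)^2 mm^2.
    area_mm2 = total_px * (microns_per_pixel / 1000.0) ** 2
    return positive_px / area_mm2

# Deliberately coarse toy numbers so the arithmetic is easy to check:
# 16 pixels at 500 um/px -> 4 mm^2 of "tissue", 4 positive pixels.
demo = tcell_density([[1, 1, 0, 0],
                      [1, 1, 0, 0],
                      [0, 0, 0, 0],
                      [0, 0, 0, 0]], 500.0)
```

In practice the mask would come from the U-Net prediction and the pixel size from the scanner metadata; the density value would then feed the survival model of the third stage.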
Affiliation(s)
- Yasmine Makhlouf
- Precision Medicine Centre of Excellence, Health Sciences Building, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Vivek Kumar Singh
- Precision Medicine Centre of Excellence, Health Sciences Building, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Stephanie Craig
- Precision Medicine Centre of Excellence, Health Sciences Building, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Aoife McArdle
- Precision Medicine Centre of Excellence, Health Sciences Building, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Dominique French
- Precision Medicine Centre of Excellence, Health Sciences Building, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Maurice B. Loughrey
- Precision Medicine Centre of Excellence, Health Sciences Building, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Cellular Pathology, Belfast Health and Social Care Trust, Belfast City Hospital, Lisburn Road, Belfast BT9 7AB, UK
- Nicola Oliver
- Precision Medicine Centre of Excellence, Health Sciences Building, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Juvenal Baena Acevedo
- Precision Medicine Centre of Excellence, Health Sciences Building, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Jacqueline A. James
- Precision Medicine Centre of Excellence, Health Sciences Building, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Regional Molecular Diagnostic Service, Belfast Health and Social Care Trust, Belfast BT9 7AE, UK
- Perry Maxwell
- Precision Medicine Centre of Excellence, Health Sciences Building, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Manuel Salto-Tellez
- Precision Medicine Centre of Excellence, Health Sciences Building, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Sonrai Analytics, Belfast BT9 7AE, UK
- Regional Molecular Diagnostic Service, Belfast Health and Social Care Trust, Belfast BT9 7AE, UK
- Integrated Pathology Unit, Institute of Cancer Research and Royal Marsden Hospital, London SW7 3RP, UK
2
Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. PMID: 38420608; PMCID: PMC10900832; DOI: 10.1016/j.jpi.2023.100357.
Abstract
Computational Pathology (CPath) is an interdisciplinary science that augments the development of computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer that are mainly addressed by CPath tools. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific works being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation viewpoints. We have catalogued each paper into a model card by examining the key works and challenges faced, in order to lay out the current landscape in CPath. We hope this helps the community to locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked to address the challenges associated with such a multidisciplinary science. We overview this cycle from the different perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical developments and clinical integration of CPath. For updated information on this survey and access to the original model-cards repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
3
Jensen MP, Qiang Z, Khan DZ, Stoyanov D, Baldeweg SE, Jaunmuktane Z, Brandner S, Marcus HJ. Artificial intelligence in histopathological image analysis of central nervous system tumours: A systematic review. Neuropathol Appl Neurobiol 2024; 50:e12981. PMID: 38738494; DOI: 10.1111/nan.12981.
Abstract
The convergence of digital pathology and artificial intelligence could assist histopathology image analysis by providing tools for rapid, automated morphological analysis. This systematic review explores the use of artificial intelligence for histopathological image analysis of digitised central nervous system (CNS) tumour slides. Comprehensive searches were conducted across EMBASE, Medline and the Cochrane Library up to June 2023 using relevant keywords. Sixty-eight suitable studies were identified and qualitatively analysed. The risk of bias was evaluated using the Prediction model Risk of Bias Assessment Tool (PROBAST) criteria. All the studies were retrospective and preclinical. Gliomas were the most frequently analysed tumour type. The majority of studies used convolutional neural networks or support vector machines, and the most common goal of the model was for tumour classification and/or grading from haematoxylin and eosin-stained slides. The majority of studies were conducted when legacy World Health Organisation (WHO) classifications were in place, which at the time relied predominantly on histological (morphological) features but have since been superseded by molecular advances. Overall, there was a high risk of bias in all studies analysed. Persistent issues included inadequate transparency in reporting the number of patients and/or images within the model development and testing cohorts, absence of external validation, and insufficient recognition of batch effects in multi-institutional datasets. Based on these findings, we outline practical recommendations for future work including a framework for clinical implementation, in particular, better informing the artificial intelligence community of the needs of the neuropathologist.
Affiliation(s)
- Melanie P Jensen
- Pathology Department, Charing Cross Hospital, Imperial College Healthcare NHS Trust, London, UK
- Briscoe Lab, The Francis Crick Institute, London, UK
- Zekai Qiang
- School of Medicine and Population Health, University of Sheffield Medical School, Sheffield, UK
- Danyal Z Khan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, UK
- Department of Computer Science, University College London, London, UK
- Danail Stoyanov
- Department of Computer Science, University College London, London, UK
- Stephanie E Baldeweg
- Department of Diabetes and Endocrinology, University College London Hospitals, London, UK
- Centre for Obesity and Metabolism, Department of Experimental and Translational Medicine, Division of Medicine, University College London, London, UK
- Zane Jaunmuktane
- Division of Neuropathology, National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, UK
- Department of Neurodegenerative Disease, University College London Queen Square Institute of Neurology, London, UK
- Department of Clinical and Movement Neurosciences, University College London Queen Square Institute of Neurology, London, UK
- Sebastian Brandner
- Division of Neuropathology, National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, UK
- Department of Neurodegenerative Disease, University College London Queen Square Institute of Neurology, London, UK
- Hani J Marcus
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, UK
- Department of Computer Science, University College London, London, UK
4
Yang P, Qiu H, Yang X, Wang L, Wang X. SAGL: A self-attention-based graph learning framework for predicting survival of colorectal cancer patients. Comput Methods Programs Biomed 2024; 249:108159. PMID: 38583291; DOI: 10.1016/j.cmpb.2024.108159.
Abstract
BACKGROUND AND OBJECTIVE Colorectal cancer (CRC) is one of the most commonly diagnosed cancers worldwide. Accurate survival prediction for CRC patients plays a significant role in the formulation of treatment strategies. Recently, machine learning and deep learning approaches have been increasingly applied to cancer survival prediction. However, most existing methods inadequately represent and leverage the dependencies among features and fail to sufficiently mine and utilize the comorbidity patterns of CRC. To address these issues, we propose a self-attention-based graph learning (SAGL) framework to improve postoperative cancer-specific survival prediction for CRC patients. METHODS We present a novel method for constructing a dependency graph (DG) that reflects two types of dependencies: comorbidity-comorbidity dependencies, and dependencies between features related to patient characteristics and cancer treatments. This graph is subsequently refined by a disease comorbidity network, which offers a holistic view of the comorbidity patterns of CRC. A DG-guided self-attention mechanism is proposed to unearth novel dependencies beyond what the DG offers, thus augmenting CRC survival prediction. Finally, each patient is represented, and these representations are used for survival prediction. RESULTS The experimental results show that SAGL outperforms state-of-the-art methods on a real-world dataset, with the area under the receiver operating characteristic curve for 3- and 5-year survival prediction reaching 0.849±0.002 and 0.895±0.005, respectively. In addition, comparisons with different graph neural network-based variants demonstrate the advantages of our DG-guided self-attention graph learning framework. CONCLUSIONS Our study reveals the potential of DG-guided self-attention in optimizing feature graph learning, which can improve the performance of CRC survival prediction.
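The abstract does not reproduce the SAGL code; as a rough sketch of the central mechanism described (self-attention whose score matrix is restricted by the dependency graph DG), one could write a single-head, dependency-masked attention step as below. The function name, the single-head simplification, and the list-based representation are illustrative assumptions, not the authors' implementation.

```python
import math

def dg_masked_attention(x, adj):
    """Single-head self-attention over feature nodes, where the dependency
    graph (a 0/1 adjacency matrix, assumed to include self-loops) masks
    which pairs of nodes may attend to each other.

    x: list of n feature vectors (each a list of floats).
    adj: n x n 0/1 dependency graph (adj[i][j] = 1 means j may inform i).
    Returns the attended feature vectors (queries/keys/values left as the
    raw features for brevity; a real model would learn projections).
    """
    n, d = len(x), len(x[0])
    scale = math.sqrt(d)
    out = []
    for i in range(n):
        # Scaled dot-product scores, restricted to graph neighbours only.
        scores = []
        for j in range(n):
            if adj[i][j]:
                dot = sum(x[i][k] * x[j][k] for k in range(d))
                scores.append((j, dot / scale))
        # Softmax over the permitted positions (numerically stabilized).
        m = max(s for _, s in scores)
        exps = [(j, math.exp(s - m)) for j, s in scores]
        z = sum(e for _, e in exps)
        row = [0.0] * d
        for j, e in exps:
            w = e / z
            for k in range(d):
                row[k] += w * x[j][k]
        out.append(row)
    return out

# With an identity dependency graph, each node may only attend to itself,
# so the features pass through unchanged.
demo = dg_masked_attention([[1.0, 2.0], [3.0, 4.0]], [[1, 0], [0, 1]])
```

The "novel dependencies beyond what the DG offers" in the paper would correspond to learned attention weights over pairs the graph permits, refined during training.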
Affiliation(s)
- Ping Yang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, PR China
- Hang Qiu
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, PR China
- Big Data Research Center, University of Electronic Science and Technology of China, Chengdu, 611731, PR China
- Xulin Yang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, PR China
- Liya Wang
- Big Data Research Center, University of Electronic Science and Technology of China, Chengdu, 611731, PR China
- Xiaodong Wang
- Department of Gastrointestinal Surgery, West China Hospital, Sichuan University, Chengdu, 610041, PR China
5
Chen J, Chen R, Chen L, Zhang L, Wang W, Zeng X. Kidney medicine meets computer vision: a bibliometric analysis. Int Urol Nephrol 2024. PMID: 38814370; DOI: 10.1007/s11255-024-04082-w.
Abstract
BACKGROUND AND OBJECTIVE Rapid advances in computer vision (CV) have the potential to facilitate the examination, diagnosis, and treatment of diseases of the kidney. This bibliometric study aims to explore the research landscape and evolving research focus of the application of CV in kidney medicine research. METHODS The Web of Science Core Collection was utilized to identify publications related to the research or applications of CV technology in the field of kidney medicine from January 1, 1900, to December 31, 2022. We analyzed emerging research trends, highly influential publications and journals, prolific researchers, countries/regions, research institutions, co-authorship networks, and co-occurrence networks. Bibliographic information was analyzed and visualized using Python, Matplotlib, Seaborn, HistCite, and VOSviewer. RESULTS There was an increasing trend in the number of publications on CV-based kidney medicine research. These publications mainly focused on medical image processing, surgical procedures, medical image analysis/diagnosis, and the application and innovation of CV technology in medical imaging. The United States is currently the leading country in terms of the number of published articles and international collaborations, followed by China. Deep learning-based segmentation and machine learning-based texture analysis are the most commonly used techniques in this field. Regarding research hotspot trends, CV algorithms are shifting toward artificial intelligence, and research objects are expanding to encompass a wider range of kidney-related objects, with the data dimensions used in research transitioning from 2D to 3D while simultaneously incorporating more diverse data modalities. CONCLUSION The present study provides a scientometric overview of current progress in the research and application of CV technology in kidney medicine. Through bibliometric analysis and network visualization, we elucidate emerging trends, key sources, leading institutions, and popular topics. Our findings are expected to provide valuable insights for future research on the use of CV in kidney medicine.
Affiliation(s)
- Junren Chen
- Department of Nephrology and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, 610041, Sichuan, China
- School of Computer Science, Sichuan University, Chengdu, 610065, Sichuan, China
- Med-X Center for Informatics, Sichuan University, Chengdu, 610041, Sichuan, China
- Rui Chen
- Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Liangyin Chen
- School of Computer Science, Sichuan University, Chengdu, 610065, Sichuan, China
- Lei Zhang
- School of Computer Science, Sichuan University, Chengdu, 610065, Sichuan, China
- Wei Wang
- School of Automation, Chengdu University of Information Technology, Chengdu, 610225, Sichuan, China
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, Sichuan, China
- Xiaoxi Zeng
- Department of Nephrology and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, 610041, Sichuan, China
- Med-X Center for Informatics, Sichuan University, Chengdu, 610041, Sichuan, China
6
Huang YJ, Chen CH, Yang HC. AI-enhanced integration of genetic and medical imaging data for risk assessment of Type 2 diabetes. Nat Commun 2024; 15:4230. PMID: 38762475; PMCID: PMC11102564; DOI: 10.1038/s41467-024-48618-1.
Abstract
Type 2 diabetes (T2D) presents a formidable global health challenge, highlighted by its escalating prevalence, underscoring the critical need for precision health strategies and early detection initiatives. Leveraging artificial intelligence, particularly eXtreme Gradient Boosting (XGBoost), we devise robust risk assessment models for T2D. Drawing upon comprehensive genetic and medical imaging datasets from 68,911 individuals in the Taiwan Biobank, our models integrate Polygenic Risk Scores (PRS), Multi-image Risk Scores (MRS), and demographic variables, such as age, sex, and T2D family history. Here, we show that our model achieves an Area Under the Receiver Operating Characteristic Curve (AUC) of 0.94, effectively identifying high-risk T2D subgroups. A streamlined model featuring eight key variables also maintains a high AUC of 0.939. This high accuracy for T2D risk assessment promises to catalyze early detection and preventive strategies. Moreover, we introduce an accessible online risk assessment tool for T2D, facilitating broader applicability and dissemination of our findings.
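The Polygenic Risk Score that such a model consumes is, in its textbook form, a weighted sum of risk-allele dosages. The sketch below illustrates only that input feature and how it might sit in a feature row alongside the image score and demographics; it is not the authors' pipeline, and every name and number is illustrative.

```python
def polygenic_risk_score(dosages, effect_sizes):
    """Textbook PRS: per-variant risk-allele dosage (0, 1 or 2 copies)
    weighted by the variant's effect size (e.g. a GWAS log odds ratio),
    summed over variants."""
    return sum(g * b for g, b in zip(dosages, effect_sizes))

# Three toy variants: dosages 0/1/2 with effect sizes 0.1/0.2/0.3.
demo = polygenic_risk_score([0, 1, 2], [0.1, 0.2, 0.3])

# In a model of the kind described, the PRS would be one column of a
# feature row fed to the gradient-boosted classifier, e.g. (illustrative):
features = [demo, 0.42, 55, 1, 1]  # [PRS, MRS, age, sex, family_history]
```

Real pipelines standardize the PRS against a reference population and handle missing genotypes; those steps are omitted here for brevity.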
Affiliation(s)
- Yi-Jia Huang
- Institute of Public Health, National Yang-Ming Chiao-Tung University, Taipei, Taiwan
- Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
- Chun-Houh Chen
- Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
- Hsin-Chou Yang
- Institute of Public Health, National Yang-Ming Chiao-Tung University, Taipei, Taiwan
- Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
- Biomedical Translation Research Center, Academia Sinica, Taipei, Taiwan
- Department of Statistics, National Cheng Kung University, Tainan, Taiwan
7
Hiremath A, Corredor G, Li L, Leo P, Magi-Galluzzi C, Elliott R, Purysko A, Shiradkar R, Madabhushi A. An integrated radiology-pathology machine learning classifier for outcome prediction following radical prostatectomy: Preliminary findings. Heliyon 2024; 10:e29602. PMID: 38665576; PMCID: PMC11044050; DOI: 10.1016/j.heliyon.2024.e29602.
Abstract
Objectives To evaluate the added benefit of integrating features from pre-treatment MRI (radiomics) and digitized post-surgical pathology slides (pathomics) in prostate cancer (PCa) patients for prognosticating outcomes post radical prostatectomy (RP), including a) rising prostate-specific antigen (PSA), and b) extraprostatic extension (EPE). Methods Multi-institutional data (N = 58) of PCa patients who underwent pre-treatment 3-T MRI prior to RP were included in this retrospective study. Radiomic and pathomic features were extracted from PCa regions on MRI and RP specimens delineated by expert clinicians. On the training set (D1, N = 44), Cox proportional-hazards models MR, MP and MRaP were trained using radiomics, pathomics, and their combination, respectively, to prognosticate rising PSA (PSA > 0.03 ng/mL). Top features from MRaP were used to train a model to predict EPE on D1 and test it on an external dataset (D2, N = 14). The C-index and Kaplan-Meier curves were used for survival analysis, and the area under the ROC curve (AUC) was used for EPE. MRaP was compared with the existing post-treatment risk calculator, CAPRA (MC). Results Patients had a median follow-up of 34 months. MRaP (c-index = 0.685 ± 0.05) significantly outperformed MR (c-index = 0.646 ± 0.05), MP (c-index = 0.631 ± 0.06) and MC (c-index = 0.601 ± 0.071) (p < 0.0001). Cross-validated Kaplan-Meier curves showed significant separation among risk groups for rising PSA for MRaP (p < 0.005, Hazard Ratio (HR) = 11.36) as compared to MR (p = 0.64, HR = 1.33), MP (p = 0.19, HR = 2.82) and MC (p = 0.10, HR = 3.05). The integrated radio-pathomic model MRaP (AUC = 0.80) outperformed MR (AUC = 0.57) and MP (AUC = 0.76) in predicting EPE on the external data (D2). Conclusions Results from this preliminary study suggest that a combination of radiomic and pathomic features can better predict post-surgical outcomes (rising PSA and EPE) than either of them individually, as well as the extant prognostic nomogram (CAPRA).
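The survival comparisons in this study rest on the concordance index (c-index). Harrell's c-index can be computed directly from follow-up times, event indicators, and model risk scores; the sketch below is a generic textbook implementation for illustration, not code from the paper (the O(n^2) pair loop is fine at this cohort size).

```python
def concordance_index(times, events, risks):
    """Harrell's C-index for right-censored survival data.

    times: observed follow-up times.
    events: 1 if the event (e.g. rising PSA) was observed, 0 if censored.
    risks: model risk scores (higher score = predicted worse prognosis).
    A pair (i, j) is comparable when the subject with the earlier time had
    an observed event; it is concordant when that subject also has the
    higher risk score. Ties in risk count as half-concordant.
    """
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly anti-ordered risks (earliest event = highest risk) give 1.0.
demo = concordance_index([1, 2, 3, 4], [1, 1, 1, 1], [4.0, 3.0, 2.0, 1.0])
```

A c-index of 0.5 corresponds to random ranking, which is the baseline against which values such as the reported 0.685 are judged.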
Affiliation(s)
| | - Germán Corredor
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
| | - Lin Li
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | - Patrick Leo
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | | | - Robin Elliott
- Department of Pathology, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
| | - Andrei Purysko
- Department of Radiology and Nuclear Medicine, Cleveland Clinic, Cleveland, OH, USA
| | - Rakesh Shiradkar
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
| | - Anant Madabhushi
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Atlanta Veterans Administration Medical Center, Atlanta, GA, USA
| |
8
Wu H, Peng L, Du D, Xu H, Lin G, Zhou Z, Lu L, Lv W. BAF-Net: bidirectional attention-aware fluid pyramid feature integrated multimodal fusion network for diagnosis and prognosis. Phys Med Biol 2024; 69:105007. PMID: 38593831; DOI: 10.1088/1361-6560/ad3cb2.
Abstract
Objective. To go beyond the deficiencies of the three conventional multimodal fusion strategies (i.e. input-, feature- and output-level fusion), we propose a bidirectional attention-aware fluid pyramid feature integrated fusion network (BAF-Net) with cross-modal interactions for multimodal medical image diagnosis and prognosis. Approach. BAF-Net is composed of two identical branches to preserve the unimodal features and one bidirectional attention-aware distillation stream to progressively assimilate cross-modal complements and to learn supplementary features in both bottom-up and top-down processes. Fluid pyramid connections were adopted to integrate the hierarchical features at different levels of the network, and channel-wise attention modules were exploited to mitigate cross-modal cross-level incompatibility. Furthermore, depth-wise separable convolution was introduced to fuse the cross-modal cross-level features to alleviate the increase in parameters to a great extent. The generalization abilities of BAF-Net were evaluated on two clinical tasks: (1) an in-house PET-CT dataset with 174 patients for differentiation between lung cancer and pulmonary tuberculosis; (2) a public multicenter PET-CT head and neck cancer dataset with 800 patients from nine centers for overall survival prediction. Main results. On the LC-PTB dataset, improved performance was found for BAF-Net (AUC = 0.7342) compared with the input-level fusion model (AUC = 0.6825; p < 0.05), feature-level fusion model (AUC = 0.6968; p = 0.0547) and output-level fusion model (AUC = 0.7011; p < 0.05). On the H&N cancer dataset, BAF-Net (C-index = 0.7241) outperformed the input-, feature- and output-level fusion models, with 2.95%, 3.77% and 1.52% increments of C-index (p = 0.3336, 0.0479 and 0.2911, respectively). The ablation experiments demonstrated the effectiveness of all the designed modules with regard to all the evaluated metrics on both datasets. Significance. Extensive experiments on two datasets demonstrated the better performance and robustness of BAF-Net than three conventional fusion strategies and PET or CT unimodal networks in terms of diagnosis and prognosis.
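The channel-wise attention modules mentioned in the abstract follow the general squeeze-and-excitation pattern: pool each channel to a scalar, gate through a small two-layer network, and rescale the channels. The dependency-free sketch below shows that general pattern only; it is not the BAF-Net code, and the tiny gate, the flat-list channel representation, and all names are assumptions.

```python
import math

def channel_attention(feats, w1, w2):
    """Squeeze-and-excitation-style channel attention.

    feats: list of C channels, each a flat list of activations.
    w1: hidden x C weight matrix of the squeeze layer (ReLU).
    w2: C x hidden weight matrix of the excitation layer (sigmoid).
    Returns the channels rescaled by their learned gate weights.
    """
    squeeze = [sum(ch) / len(ch) for ch in feats]             # global avg pool
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeeze)))  # ReLU
              for row in w1]
    gate = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
            for row in w2]                                     # sigmoid gate
    return [[g * v for v in ch] for g, ch in zip(gate, feats)]

# With all-zero weights the gate is sigmoid(0) = 0.5 for every channel,
# so each channel is simply halved.
demo = channel_attention([[2.0, 4.0]], [[0.0]], [[0.0]])
```

In a fusion network such gates would sit at each pyramid level, letting the model down-weight channels from one modality that conflict with the other.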
Affiliation(s)
- Huiqin Wu
- Department of Medical Imaging, Guangdong Second Provincial General Hospital, Guangzhou, Guangdong, 518037, People's Republic of China
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Lihong Peng
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Dongyang Du
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Hui Xu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guoyu Lin
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Zidong Zhou
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Lijun Lu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Pazhou Lab, Guangzhou, Guangdong, 510330, People's Republic of China
- Wenbing Lv
- School of Information and Yunnan Key Laboratory of Intelligent Systems and Computing, Yunnan University, Kunming, Yunnan, 650504, People's Republic of China
Collapse
|
9
|
Qiu L, Zhao L, Zhao W, Zhao J. Dual-space disentangled-multimodal network (DDM-net) for glioma diagnosis and prognosis with incomplete pathology and genomic data. Phys Med Biol 2024; 69:085028. [PMID: 38595094 DOI: 10.1088/1361-6560/ad37ec] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2023] [Accepted: 03/26/2024] [Indexed: 04/11/2024]
Abstract
Objective. Effective fusion of histology slides and molecular profiles from genomic data has shown great potential in the diagnosis and prognosis of gliomas. However, it remains challenging to explicitly utilize the consistent-complementary information among different modalities and create comprehensive representations of patients. Additionally, existing research mainly focuses on complete multi-modality data and usually fails to construct robust models for incomplete samples. Approach. In this paper, we propose a dual-space disentangled-multimodal network (DDM-net) for glioma diagnosis and prognosis. DDM-net disentangles the latent features generated by two separate variational autoencoders (VAEs) into common and specific components through a dual-space disentangled approach, facilitating the construction of comprehensive representations of patients. More importantly, DDM-net imputes the unavailable modality in the latent feature space, making it robust to incomplete samples. Main results. We evaluated our approach on the TCGA-GBMLGG dataset for glioma grading and survival analysis tasks. Experimental results demonstrate that the proposed method achieves superior performance compared to state-of-the-art methods, with a competitive AUC of 0.952 and a C-index of 0.768. Significance. The proposed model may help the clinical understanding of gliomas and can serve as an effective fusion model with multimodal data. Additionally, it is capable of handling incomplete samples, making it less constrained by clinical limitations.
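Several of the survival results in this listing (here and in the BAF-Net record above) are reported as a C-index. A minimal sketch of Harrell's concordance index on hypothetical data, not the authors' code, shows what the metric measures:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: the fraction of comparable patient pairs in
    which the higher-risk patient experiences the event earlier.
    times: survival or censoring times; events: 1 = event observed,
    0 = censored; risk_scores: model output (higher = worse prognosis).
    Ties in risk score count as half-concordant."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if patient i had an observed
            # event strictly before patient j's (event or censoring) time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical cohort: times in months; higher risk = shorter survival.
times = [5, 10, 15, 20]
events = [1, 1, 0, 1]
risks = [0.9, 0.7, 0.2, 0.4]
print(concordance_index(times, events, risks))  # 1.0: perfectly concordant
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so values such as 0.768 indicate a substantially better-than-chance ordering of patient risk.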
Affiliation(s)
- Lu Qiu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
- Lu Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
- Wangyuan Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
10
Pan L, Peng Y, Li Y, Wang X, Liu W, Xu L, Liang Q, Peng S. SELECTOR: Heterogeneous graph network with convolutional masked autoencoder for multimodal robust prediction of cancer survival. Comput Biol Med 2024; 172:108301. [PMID: 38492453 DOI: 10.1016/j.compbiomed.2024.108301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2023] [Revised: 02/03/2024] [Accepted: 03/12/2024] [Indexed: 03/18/2024]
Abstract
Accurately predicting the survival rate of cancer patients is crucial for aiding clinicians in planning appropriate treatment, reducing cancer-related medical expenses, and significantly enhancing patients' quality of life. Multimodal prediction of cancer patient survival offers a more comprehensive and precise approach. However, existing methods still grapple with challenges related to missing multimodal data and information interaction within modalities. This paper introduces SELECTOR, a heterogeneous graph-aware network based on convolutional mask encoders for robust multimodal prediction of cancer patient survival. SELECTOR comprises feature edge reconstruction, convolutional mask encoder, feature cross-fusion, and multimodal survival prediction modules. Initially, we construct a multimodal heterogeneous graph and employ the meta-path method for feature edge reconstruction, ensuring comprehensive incorporation of feature information from graph edges and effective embedding of nodes. To mitigate the impact of missing features within the modality on prediction accuracy, we devised a convolutional masked autoencoder (CMAE) to process the heterogeneous graph post-feature reconstruction. Subsequently, the feature cross-fusion module facilitates communication between modalities, ensuring that output features encompass all features of the modality and relevant information from other modalities. Extensive experiments and analysis on six cancer datasets from TCGA demonstrate that our method significantly outperforms state-of-the-art methods in both modality-missing and intra-modality information-confirmed cases. Our codes are made available at https://github.com/panliangrui/Selector.
Affiliation(s)
- Liangrui Pan
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410083, Hunan, China.
- Yijun Peng
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410083, Hunan, China.
- Yan Li
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410083, Hunan, China.
- Xiang Wang
- Department of Thoracic Surgery, The Second Xiangya Hospital, Central South University, Changsha, 410011, Hunan, China.
- Wenjuan Liu
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410083, Hunan, China.
- Liwen Xu
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410083, Hunan, China.
- Qingchun Liang
- Department of Pathology, The Second Xiangya Hospital, Central South University, Changsha, 410011, Hunan, China.
- Shaoliang Peng
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410083, Hunan, China.
11
D'Souza NS, Wang H, Giovannini A, Foncubierta-Rodriguez A, Beck KL, Boyko O, Syeda-Mahmood TF. Fusing modalities by multiplexed graph neural networks for outcome prediction from medical data and beyond. Med Image Anal 2024; 93:103064. [PMID: 38219500 DOI: 10.1016/j.media.2023.103064] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2023] [Revised: 09/09/2023] [Accepted: 12/11/2023] [Indexed: 01/16/2024]
Abstract
With the emergence of multimodal electronic health records, the evidence for diseases, events, or findings may be present across multiple modalities ranging from clinical to imaging and genomic data. Developing effective patient-tailored therapeutic guidance and outcome prediction will require fusing evidence across these modalities. Developing general-purpose frameworks capable of modeling fine-grained and multi-faceted complex interactions, both within and across modalities, is an important open problem in multimodal fusion. Generalized multimodal fusion is extremely challenging because evidence for outcomes may not be uniform across all modalities, not all modality features may be relevant, or not all modalities may be present for all patients, which can make simple methods of early, late, or intermediate fusion inadequate. In this paper, we present a novel approach that uses the machinery of multiplexed graphs for fusion. This allows modalities to be represented through their targeted encodings. We then explicitly model the relationships between these encodings via multiplexed graphs derived from salient features in a combined latent space, and derive a new graph neural network for multiplex graphs for task-informed reasoning. We compare our framework against several state-of-the-art approaches for multi-graph reasoning and multimodal fusion. First, as a sanity check on the neural network design, we evaluate the multiplexed GNN on two popular benchmark datasets, namely the AIFB and MUTAG datasets, against several state-of-the-art multi-relational GNNs for reasoning. Second, we evaluate our multiplexed framework against several state-of-the-art multimodal fusion frameworks on two large clinical datasets for two separate applications: the NIH-TB portals dataset for treatment outcome prediction in Tuberculosis, and the ABIDE dataset for Autism Spectrum Disorder classification.
Through rigorous experimental evaluation, we demonstrate that the multiplexed GNN provides robust performance improvements in all of these diverse applications.
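The multiplex-graph idea, each modality contributing its own edge set over a shared node set, can be sketched with a toy propagation step (a generic illustration with made-up adjacency and a plain average standing in for the learned cross-layer combination; it is not the authors' GNN):

```python
def multiplex_propagate(layers, features):
    """One round of message passing on a multiplex graph. Each layer
    (modality) has its own adjacency over the same node set; node
    features are averaged over within-layer neighbours, then the
    per-layer results are averaged across layers.
    layers: {layer_name: list of neighbour lists, one per node}
    features: list of scalar node features."""
    n = len(features)
    per_layer = []
    for adj in layers.values():
        out = []
        for node in range(n):
            neigh = adj[node]
            if neigh:
                out.append(sum(features[j] for j in neigh) / len(neigh))
            else:
                out.append(features[node])  # isolated in this layer: keep feature
        per_layer.append(out)
    # Combine layers (a real multiplexed GNN would learn this combination).
    return [sum(layer[i] for layer in per_layer) / len(per_layer)
            for i in range(n)]

# Two hypothetical modalities over three patients:
# imaging links patients 0-1, clinical links patients 1-2.
layers = {
    "imaging": [[1], [0], []],
    "clinical": [[], [2], [1]],
}
print(multiplex_propagate(layers, [1.0, 2.0, 3.0]))  # [1.5, 2.0, 2.5]
```

The point of the multiplex structure is that the two edge sets are kept distinct during within-layer propagation and only combined afterwards, rather than being merged into a single graph up front.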
Affiliation(s)
- Orest Boyko
- Department of Radiology, VA Southern Nevada Healthcare System, NV, USA
12
Vanea C, Džigurski J, Rukins V, Dodi O, Siigur S, Salumäe L, Meir K, Parks WT, Hochner-Celnikier D, Fraser A, Hochner H, Laisk T, Ernst LM, Lindgren CM, Nellåker C. Mapping cell-to-tissue graphs across human placenta histology whole slide images using deep learning with HAPPY. Nat Commun 2024; 15:2710. [PMID: 38548713 PMCID: PMC10978962 DOI: 10.1038/s41467-024-46986-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Accepted: 03/15/2024] [Indexed: 04/01/2024] Open
Abstract
Accurate placenta pathology assessment is essential for managing maternal and newborn health, but the placenta's heterogeneity and temporal variability pose challenges for histology analysis. To address this issue, we developed the 'Histology Analysis Pipeline.PY' (HAPPY), a deep learning hierarchical method for quantifying the variability of cells and micro-anatomical tissue structures across placenta histology whole slide images. HAPPY differs from patch-based features or segmentation approaches by following an interpretable biological hierarchy, representing cells and cellular communities within tissues at a single-cell resolution across whole slide images. We present a set of quantitative metrics from healthy term placentas as a baseline for future assessments of placenta health and we show how these metrics deviate in placentas with clinically significant placental infarction. HAPPY's cell and tissue predictions closely replicate those from independent clinical experts and placental biology literature.
Affiliation(s)
- Claudia Vanea
- Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK.
- Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK.
- Omri Dodi
- Faculty of Medicine, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Siim Siigur
- Department of Pathology, Tartu University Hospital, Tartu, Estonia
- Liis Salumäe
- Department of Pathology, Tartu University Hospital, Tartu, Estonia
- Karen Meir
- Department of Pathology, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- W Tony Parks
- Department of Laboratory Medicine & Pathobiology, University of Toronto, Toronto, Canada
- Abigail Fraser
- Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
- MRC Integrative Epidemiology Unit at the University of Bristol, Bristol, UK
- Hagit Hochner
- Braun School of Public Health, Hebrew University of Jerusalem, Jerusalem, Israel
- Triin Laisk
- Institute of Genomics, University of Tartu, Tartu, Estonia
- Linda M Ernst
- Department of Pathology and Laboratory Medicine, NorthShore University HealthSystem, Chicago, USA
- Department of Pathology, University of Chicago Pritzker School of Medicine, Chicago, USA
- Cecilia M Lindgren
- Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK
- Centre for Human Genetics, Nuffield Department, University of Oxford, Oxford, UK
- Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Nuffield Department of Population Health, University of Oxford, Oxford, UK
- Christoffer Nellåker
- Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK.
- Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK.
13
Carrillo-Perez F, Pizurica M, Zheng Y, Nandi TN, Madduri R, Shen J, Gevaert O. Generation of synthetic whole-slide image tiles of tumours from RNA-sequencing data via cascaded diffusion models. Nat Biomed Eng 2024:10.1038/s41551-024-01193-8. [PMID: 38514775 DOI: 10.1038/s41551-024-01193-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Accepted: 02/29/2024] [Indexed: 03/23/2024]
Abstract
Training machine-learning models with synthetically generated data can alleviate the problem of data scarcity when acquiring diverse and sufficiently large datasets is costly and challenging. Here we show that cascaded diffusion models can be used to synthesize realistic whole-slide image tiles from latent representations of RNA-sequencing data from human tumours. Alterations in gene expression affected the composition of cell types in the generated synthetic image tiles, which accurately preserved the distribution of cell types and maintained the cell fraction observed in bulk RNA-sequencing data, as we show for lung adenocarcinoma, kidney renal papillary cell carcinoma, cervical squamous cell carcinoma, colon adenocarcinoma and glioblastoma. Machine-learning models pretrained with the generated synthetic data performed better than models trained from scratch. Synthetic data may accelerate the development of machine-learning models in scarce-data settings and allow for the imputation of missing data modalities.
Affiliation(s)
- Francisco Carrillo-Perez
- Stanford Center for Biomedical Informatics Research (BMIR), Stanford University, School of Medicine, Stanford, CA, USA
- Marija Pizurica
- Stanford Center for Biomedical Informatics Research (BMIR), Stanford University, School of Medicine, Stanford, CA, USA
- Internet technology and Data science Lab (IDLab), Ghent University, Ghent, Belgium
- Yuanning Zheng
- Stanford Center for Biomedical Informatics Research (BMIR), Stanford University, School of Medicine, Stanford, CA, USA
- Tarak Nath Nandi
- Data Science and Learning Division, Argonne National Laboratory, Lemont, IL, USA
- Ravi Madduri
- Data Science and Learning Division, Argonne National Laboratory, Lemont, IL, USA
- Jeanne Shen
- Department of Pathology, Stanford University, School of Medicine, Palo Alto, CA, USA
- Olivier Gevaert
- Stanford Center for Biomedical Informatics Research (BMIR), Stanford University, School of Medicine, Stanford, CA, USA.
- Department of Biomedical Data Science, Stanford University, School of Medicine, Stanford, CA, USA.
14
Aalam SW, Ahanger AB, Masoodi TA, Bhat AA, Akil ASAS, Khan MA, Assad A, Macha MA, Bhat MR. Deep learning-based identification of esophageal cancer subtypes through analysis of high-resolution histopathology images. Front Mol Biosci 2024; 11:1346242. [PMID: 38567100 PMCID: PMC10985197 DOI: 10.3389/fmolb.2024.1346242] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2023] [Accepted: 02/23/2024] [Indexed: 04/04/2024] Open
Abstract
Esophageal cancer (EC) remains a significant health challenge globally, with increasing incidence and high mortality rates. Despite advances in treatment, there remains a need for improved diagnostic methods and understanding of disease progression. This study addresses the significant challenges in the automatic classification of EC, particularly in distinguishing its primary subtypes, adenocarcinoma and squamous cell carcinoma, using histopathology images. Traditional histopathological diagnosis, while being the gold standard, is subject to subjectivity and human error and imposes a substantial burden on pathologists. In response to these challenges, this study proposes a binary classification system for detecting EC subtypes. The system leverages deep learning techniques and tissue-level labels for enhanced accuracy. We utilized 59 high-resolution histopathological images from The Cancer Genome Atlas (TCGA) Esophageal Carcinoma dataset (TCGA-ESCA). These images were preprocessed, segmented into patches, and analyzed using a pre-trained ResNet101 model for feature extraction. For classification, we employed five machine learning classifiers, namely Support Vector Classifier (SVC), Logistic Regression (LR), Decision Tree (DT), AdaBoost (AD), and Random Forest (RF), as well as a Feed-Forward Neural Network (FFNN). The classifiers were evaluated based on their prediction accuracy on the test dataset, yielding results of 0.88 (SVC and LR), 0.64 (DT and AD), 0.82 (RF), and 0.94 (FFNN). Notably, the FFNN classifier achieved the highest Area Under the Curve (AUC) score of 0.92, indicating its superior performance, followed closely by SVC and LR, with a score of 0.87. The suggested approach holds promise as a decision-support tool for pathologists, particularly in regions with limited resources and expertise.
The timely and precise detection of EC subtypes through this system can substantially enhance the likelihood of successful treatment, ultimately leading to reduced mortality rates in patients with this aggressive cancer.
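The AUC scores reported in this record are rank-based quantities; a self-contained sketch of ROC AUC via the Mann-Whitney statistic, on hypothetical patch-level scores rather than the study's actual predictions, makes the definition concrete:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a randomly chosen positive example is scored above
    a randomly chosen negative one (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical subtype scores (1 = adenocarcinoma, 0 = squamous cell).
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.3, 0.4, 0.6, 0.7]
print(round(roc_auc(labels, scores), 3))  # 0.889
```

Unlike plain accuracy, this measure is insensitive to the classification threshold, which is why the abstract reports both accuracy and AUC.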
Affiliation(s)
- Syed Wajid Aalam
- Department of Computer Science, Islamic University of Science and Technology, Awantipora, India
- Abdul Basit Ahanger
- Department of Computer Science, Islamic University of Science and Technology, Awantipora, India
- Tariq A. Masoodi
- Human Immunology Department, Research Branch, Sidra Medicine, Doha, Qatar
- Ajaz A. Bhat
- Department of Human Genetics-Precision Medicine in Diabetes, Obesity and Cancer Program, Sidra Medicine, Doha, Qatar
- Ammira S. Al-Shabeeb Akil
- Department of Human Genetics-Precision Medicine in Diabetes, Obesity and Cancer Program, Sidra Medicine, Doha, Qatar
- Assif Assad
- Department of Computer Science and Engineering, Islamic University of Science and Technology, Awantipora, India
- Muzafar A. Macha
- Watson-Crick Centre for Molecular Medicine, Islamic University of Science and Technology, Awantipora, India
- Muzafar Rasool Bhat
- Department of Computer Science, Islamic University of Science and Technology, Awantipora, India
15
Yang F, Xu Z, Wang H, Sun L, Zhai M, Zhang J. A hybrid feature selection algorithm combining information gain and grouping particle swarm optimization for cancer diagnosis. PLoS One 2024; 19:e0290332. [PMID: 38466662 PMCID: PMC10927139 DOI: 10.1371/journal.pone.0290332] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2023] [Accepted: 08/04/2023] [Indexed: 03/13/2024] Open
Abstract
BACKGROUND Cancer diagnosis based on machine learning has become a popular application direction. Support vector machine (SVM), as a classical machine learning algorithm, has been widely used in cancer diagnosis because of its advantages in high-dimensional and small-sample data. However, due to the high-dimensional feature space and high feature redundancy of gene expression data, SVM faces the problem of poor classification performance when dealing with such data. METHODS Based on this, this paper proposes a hybrid feature selection algorithm combining information gain and grouping particle swarm optimization (IG-GPSO). The algorithm first calculates the information gain values of the features and ranks them in descending order of that value. Then, the ranked features are grouped according to the information index, so that the features within a group are close and the features across groups are sparse. Finally, the grouped features are searched using grouping PSO and evaluated according to in-group and out-group criteria. RESULTS Experimental results show that the average accuracy (ACC) of the SVM on the feature subset selected by the IG-GPSO is 98.50%, which is significantly better than that of traditional feature selection algorithms. Compared with KNN, the classification performance of the feature subset selected by the IG-GPSO is still optimal. In addition, the results of multiple comparison tests show that the feature selection effect of the IG-GPSO is significantly better than that of traditional feature selection algorithms. CONCLUSION The feature subset selected by IG-GPSO not only has the best classification performance but also the smallest feature scale (FS). More importantly, the IG-GPSO significantly improves the ACC of SVM in cancer diagnosis.
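The information-gain ranking in the first step of IG-GPSO can be illustrated on toy discrete features (a generic sketch; the paper's continuous gene-expression values would first be binned or otherwise discretized):

```python
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * log2(c / n) for c in counts.values())

def information_gain(feature, labels):
    """Reduction in label entropy from splitting on a discrete feature:
    H(labels) minus the size-weighted entropy of each feature-value group."""
    n = len(labels)
    groups = {}
    for x, y in zip(feature, labels):
        groups.setdefault(x, []).append(y)
    conditional = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - conditional

# Toy example: a perfectly informative gene vs. an uninformative one.
labels = [0, 0, 1, 1]
informative = ["low", "low", "high", "high"]
uninformative = ["low", "high", "low", "high"]
print(information_gain(informative, labels))    # 1.0 bit
print(information_gain(uninformative, labels))  # 0.0 bits
```

Ranking features by this quantity in descending order, as the abstract describes, places features whose values best separate the classes at the top of the list before the grouping and PSO search steps.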
Affiliation(s)
- Fangyuan Yang
- Department of Gynecologic Oncology, The First Affiliated Hospital of Henan Polytechnic University, Jiaozuo, Henan, China
- Zhaozhao Xu
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan, China
- Hong Wang
- Department of Gynecologic Oncology, The First Affiliated Hospital of Henan Polytechnic University, Jiaozuo, Henan, China
- Lisha Sun
- Department of Gynecologic Oncology, The First Affiliated Hospital of Henan Polytechnic University, Jiaozuo, Henan, China
- Mengjiao Zhai
- Department of Gynecologic Oncology, The First Affiliated Hospital of Henan Polytechnic University, Jiaozuo, Henan, China
- Juan Zhang
- Department of Gynecologic Oncology, The First Affiliated Hospital of Henan Polytechnic University, Jiaozuo, Henan, China
16
Vollmer A, Hartmann S, Vollmer M, Shavlokhova V, Brands RC, Kübler A, Wollborn J, Hassel F, Couillard-Despres S, Lang G, Saravi B. Multimodal artificial intelligence-based pathogenomics improves survival prediction in oral squamous cell carcinoma. Sci Rep 2024; 14:5687. [PMID: 38453964 PMCID: PMC10920832 DOI: 10.1038/s41598-024-56172-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Accepted: 03/03/2024] [Indexed: 03/09/2024] Open
Abstract
In this study, we aimed to develop a novel prognostic algorithm for oral squamous cell carcinoma (OSCC) using a combination of pathogenomics and AI-based techniques. We collected comprehensive clinical, genomic, and pathology data from a cohort of OSCC patients in the TCGA dataset and used machine learning and deep learning algorithms to identify relevant features that are predictive of survival outcomes. Our analyses included 406 OSCC patients. Initial analyses involved gene expression analyses, principal component analyses, gene enrichment analyses, and feature importance analyses. These insights were foundational for subsequent model development. Furthermore, we applied five machine learning/deep learning algorithms (Random Survival Forest, Gradient Boosting Survival Analysis, Cox PH, Fast Survival SVM, and DeepSurv) for survival prediction. Our initial analyses revealed relevant gene expression variations and biological pathways, laying the groundwork for robust feature selection in model building. The results showed that the multimodal model outperformed the unimodal models across all methods, with c-index values of 0.722 for RSF, 0.633 for GBSA, 0.625 for FastSVM, 0.633 for CoxPH, and 0.515 for DeepSurv. When considering only important features, the multimodal model continued to outperform the unimodal models, with c-index values of 0.834 for RSF, 0.747 for GBSA, 0.718 for FastSVM, 0.742 for CoxPH, and 0.635 for DeepSurv. Our results demonstrate the potential of pathogenomics and AI-based techniques in improving the accuracy of prognostic prediction in OSCC, which may ultimately aid in the development of personalized treatment strategies for patients with this devastating disease.
Affiliation(s)
- Andreas Vollmer
- Department of Oral and Maxillofacial Plastic Surgery, University Hospital of Würzburg, 97070, Würzburg, Franconia, Germany.
- Stefan Hartmann
- Department of Oral and Maxillofacial Plastic Surgery, University Hospital of Würzburg, 97070, Würzburg, Franconia, Germany
- Michael Vollmer
- Department of Oral and Maxillofacial Surgery, Tuebingen University Hospital, Osianderstrasse 2-8, 72076, Tuebingen, Germany
- Veronika Shavlokhova
- Maxillofacial Surgery University Hospital Ruppin-Brandenburg, Fehrbelliner Straße 38, 16816, Neuruppin, Germany
- Roman C Brands
- Department of Oral and Maxillofacial Plastic Surgery, University Hospital of Würzburg, 97070, Würzburg, Franconia, Germany
- Alexander Kübler
- Department of Oral and Maxillofacial Plastic Surgery, University Hospital of Würzburg, 97070, Würzburg, Franconia, Germany
- Jakob Wollborn
- Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Frank Hassel
- Department of Spine Surgery, Loretto Hospital, Freiburg, Germany
- Sebastien Couillard-Despres
- Institute of Experimental Neuroregeneration, Paracelsus Medical University, 5020, Salzburg, Austria
- Austrian Cluster for Tissue Regeneration, Vienna, Austria
- Gernot Lang
- Department of Orthopedics and Trauma Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Babak Saravi
- Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Department of Spine Surgery, Loretto Hospital, Freiburg, Germany
- Institute of Experimental Neuroregeneration, Paracelsus Medical University, 5020, Salzburg, Austria
- Department of Orthopedics and Trauma Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
17
Parvaiz A, Nasir ES, Fraz MM. From Pixels to Prognosis: A Survey on AI-Driven Cancer Patient Survival Prediction Using Digital Histology Images. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024:10.1007/s10278-024-01049-2. [PMID: 38429563 DOI: 10.1007/s10278-024-01049-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/14/2023] [Revised: 11/30/2023] [Accepted: 12/20/2023] [Indexed: 03/03/2024]
Abstract
Survival analysis is an integral part of medical statistics that is extensively utilized to establish prognostic indices for mortality or disease recurrence, assess treatment efficacy, and tailor effective treatment plans. The identification of prognostic biomarkers capable of predicting patient survival is a primary objective in the field of cancer research. With the recent integration of digital histology images into routine clinical practice, a plethora of Artificial Intelligence (AI)-based methods for digital pathology has emerged in scholarly literature, facilitating patient survival prediction. These methods have demonstrated remarkable proficiency in analyzing and interpreting whole slide images, yielding results comparable to those of expert pathologists. The complexity of AI-driven techniques is magnified by the distinctive characteristics of digital histology images, including their gigapixel size and diverse tissue appearances. Consequently, advanced patch-based methods are employed to effectively extract features that correlate with patient survival. These computational methods significantly enhance survival prediction accuracy and augment prognostic capabilities in cancer patients. The review discusses the methodologies employed in the literature, their performance metrics, ongoing challenges, and potential solutions for future advancements. This paper explains survival analysis and feature extraction methods for analyzing cancer patients. It also compiles essential acronyms related to cancer precision medicine. Furthermore, it is noteworthy that this is the inaugural review paper in the field. The target audience for this interdisciplinary review comprises AI practitioners, medical statisticians, and progressive oncologists who are enthusiastic about translating AI-driven solutions into clinical practice. We expect this comprehensive review article to guide future research directions in the field of cancer research.
Affiliation(s)
- Arshi Parvaiz
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Esha Sadia Nasir
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
18
Lv Q, Liu Y, Sun Y, Wu M. Insight into deep learning for glioma IDH medical image analysis: A systematic review. Medicine (Baltimore) 2024; 103:e37150. [PMID: 38363910 PMCID: PMC10869095 DOI: 10.1097/md.0000000000037150] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/18/2023] [Accepted: 01/11/2024] [Indexed: 02/18/2024] Open
Abstract
BACKGROUND Deep learning techniques explain the enormous potential of medical image analysis, particularly in digital pathology. Concurrently, molecular markers have gained increasing significance over the past decade in the context of glioma patients, providing novel insights into diagnosis and more personalized treatment options. Deep learning combined with imaging and molecular analysis enables more accurate prognostication of patients, more accurate treatment plan proposals, and accurate biomarker (IDH) prediction for gliomas. This systematic study examines the development of deep learning techniques for IDH prediction using histopathology images, spanning the period from 2019 to 2023. METHOD The study adhered to the PRISMA reporting requirements, and databases including PubMed, Google Scholar, Google Search, and preprint repositories (such as arXiv) were systematically queried for pertinent literature spanning the period from 2019 to the 30th of 2023. Search phrases related to deep learning, digital pathology, glioma, and IDH were collaboratively utilized. RESULTS Fifteen papers meeting the inclusion criteria were included in the analysis. These criteria specifically encompassed studies utilizing deep learning for the analysis of hematoxylin and eosin images to determine the IDH status in patients with gliomas. CONCLUSIONS When predicting the status of IDH, the classifier built on digital pathological images demonstrates exceptional performance. The study's predictive effectiveness is enhanced with the utilization of the appropriate deep learning model. However, external verification is necessary to showcase their resilience and universality. Larger sample sizes and multicenter samples are necessary for more comprehensive research to evaluate performance and confirm clinical advantages.
Affiliation(s)
- Qingqing Lv
- Hunan Cancer Hospital, The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha 410008, Hunan, China
- The Key Laboratory of Carcinogenesis of the Chinese Ministry of Health, The Key Laboratory of Carcinogenesis and Cancer Invasion of the Chinese Ministry of Education, Cancer Research Institute, Central South University, Changsha, 410078, Hunan, China
- Yihao Liu
- Hunan Cancer Hospital, The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha 410008, Hunan, China
- The Key Laboratory of Carcinogenesis of the Chinese Ministry of Health, The Key Laboratory of Carcinogenesis and Cancer Invasion of the Chinese Ministry of Education, Cancer Research Institute, Central South University, Changsha, 410078, Hunan, China
- Yingnan Sun
- Hunan Cancer Hospital, The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha 410008, Hunan, China
- Minghua Wu
- Hunan Cancer Hospital, The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha 410008, Hunan, China
- The Key Laboratory of Carcinogenesis of the Chinese Ministry of Health, The Key Laboratory of Carcinogenesis and Cancer Invasion of the Chinese Ministry of Education, Cancer Research Institute, Central South University, Changsha, 410078, Hunan, China
19
Unger M, Kather JN. A systematic analysis of deep learning in genomics and histopathology for precision oncology. BMC Med Genomics 2024; 17:48. [PMID: 38317154 PMCID: PMC10845449 DOI: 10.1186/s12920-024-01796-9]
Abstract
BACKGROUND Digitized histopathological tissue slides and genomic profiling data are available for many patients with solid tumors. In the last 5 years, Deep Learning (DL) has been broadly used to extract clinically actionable information and biological knowledge from pathology slides and genomic data in cancer. In addition, a number of recent studies have introduced multimodal DL models designed to simultaneously process both images from pathology slides and genomic data as inputs. By comparing patterns from one data modality with those in another, multimodal DL models can achieve higher performance than their unimodal counterparts. However, the application of these methodologies across various tumor entities and clinical scenarios lacks consistency. METHODS Here, we present a systematic survey of the academic literature from 2010 to November 2023, aiming to quantify the application of DL for pathology, genomics, and the combined use of both data types. After filtering 3048 publications, our search identified 534 relevant articles, which were then evaluated by application type, whether basic (diagnosis, grading, subtyping) or advanced (mutation, drug response, and survival prediction), publication year, and the cancer tissue addressed. RESULTS Our analysis reveals a predominant application of DL in pathology compared to genomics, although DL incorporation is surging in both domains. Furthermore, while DL applied to pathology primarily targets the identification of histology-specific patterns in individual tissues, DL in genomics is more commonly used in a pan-cancer context. Multimodal DL, by contrast, remains a niche topic, evidenced by a limited number of publications, primarily focusing on prognosis prediction. CONCLUSION In summary, our quantitative analysis indicates that DL not only has a well-established role in histopathology but is also being successfully integrated into both genomic and multimodal applications. In addition, there is considerable potential in multimodal DL for harnessing further advanced tasks, such as predicting drug response. Nevertheless, this review also underlines the need for further research to bridge the existing gaps in these fields.
Affiliation(s)
- Michaela Unger
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
- Jakob Nikolas Kather
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany.
- Department of Medicine I, University Hospital Dresden, Dresden, Germany.
- Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK.
- Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany.
20
Feng X, Shu W, Li M, Li J, Xu J, He M. Pathogenomics for accurate diagnosis, treatment, prognosis of oncology: a cutting edge overview. J Transl Med 2024; 22:131. [PMID: 38310237 PMCID: PMC10837897 DOI: 10.1186/s12967-024-04915-3]
Abstract
The capability to gather heterogeneous data, alongside the increasing power of artificial intelligence to examine it, is leading a revolution in harnessing multimodal data in the life sciences. However, most approaches are limited to unimodal data, leaving integrated approaches across modalities relatively underdeveloped in computational pathology. Pathogenomics, which integrates advanced molecular diagnostics from genomic data, morphological information from histopathological imaging, and codified clinical data, enables the discovery of new multimodal cancer biomarkers that can propel the field of precision oncology in the coming decade. In this perspective, we offer our opinions on synthesizing complementary data modalities with emerging multimodal artificial intelligence methods in pathogenomics, including correlation between the pathological and genomic profiles of cancer and fusion of histology and genomic profiles. We also present challenges, opportunities, and avenues for future work.
Affiliation(s)
- Xiaobing Feng
- College of Electrical and Information Engineering, Hunan University, Changsha, China
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China
- Wen Shu
- College of Electrical and Information Engineering, Hunan University, Changsha, China
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China
- Mingya Li
- College of Electrical and Information Engineering, Hunan University, Changsha, China
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China
- Junyu Li
- College of Electrical and Information Engineering, Hunan University, Changsha, China
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China
- Junyao Xu
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China
- Min He
- College of Electrical and Information Engineering, Hunan University, Changsha, China.
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China.
21
Yang G, Li W, Xie W, Wang L, Yu K. An improved binary particle swarm optimization algorithm for clinical cancer biomarker identification in microarray data. Comput Methods Programs Biomed 2024; 244:107987. [PMID: 38157825 DOI: 10.1016/j.cmpb.2023.107987]
Abstract
BACKGROUND AND OBJECTIVE The limited number of samples and high-dimensional features in microarray data make selecting a small number of features for disease diagnosis a challenging problem. Traditional feature selection methods based on evolutionary algorithms struggle to find the optimal set of features in limited time when dealing with high-dimensional feature selection problems. New solutions are proposed to address these problems. METHODS In this paper, we propose a hybrid feature selection method (C-IFBPFE) for biomarker identification in microarray data, which combines clustering with improved binary particle swarm optimization and incorporates an embedded feature-elimination strategy. First, an adaptive redundant-feature judgment method based on correlation clustering is proposed for feature screening to reduce the search space in the subsequent stage. Second, we propose an improved flipping-probability-based binary particle swarm optimization (IFBPSO) that is better suited to binary optimization problems. Finally, we design a new feature elimination (FE) strategy embedded in the binary particle swarm optimization algorithm, which gradually removes poorer features during iterations to reduce the number of features and improve accuracy. RESULTS We compared C-IFBPFE with other published hybrid feature selection methods on eight public datasets and analyzed the impact of each improvement. The proposed method outperforms other state-of-the-art feature selection methods in terms of accuracy, number of features, sensitivity, and specificity. An ablation study validates the efficacy of each component; in particular, the proposed feature-elimination strategy significantly improves the performance of the algorithm. CONCLUSIONS The hybrid feature selection method proposed in this paper helps address the issue of high-dimensional microarray data with few samples. It can select a small subset of features and achieve high classification accuracy on microarray datasets. Additionally, independent validation of the selected features shows that those chosen by C-IFBPFE have strong correlations with disease phenotypes and can identify important biomarkers from data related to biomedical problems.
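The abstract above describes a flipping-probability-based binary PSO with embedded feature elimination. The exact IFBPSO update rule and elimination schedule are not reproduced here, so the following is only a minimal sketch of the classic binary PSO idea it refines: the sigmoid of each particle's velocity gives the probability that a feature bit is set, and a toy correlation-based fitness on synthetic data stands in for classifier accuracy. All names, data, and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy microarray-like data: 40 samples, 30 features, only features 0-2 informative.
X = rng.normal(size=(40, 30))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(float)

def fitness(mask):
    """Score a feature subset: mean |correlation| with the label minus a
    small per-feature penalty, so smaller accurate subsets win."""
    if mask.sum() == 0:
        return -1.0
    corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in np.flatnonzero(mask)]
    return float(np.mean(corrs)) - 0.01 * mask.sum()

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

n_particles, n_features, n_iters = 12, X.shape[1], 40
pos = rng.integers(0, 2, size=(n_particles, n_features)).astype(float)
vel = rng.normal(scale=0.1, size=pos.shape)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    # Binary PSO step: sigmoid(velocity) is the probability a feature bit is 1.
    pos = (rng.random(pos.shape) < sigmoid(vel)).astype(float)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", np.flatnonzero(gbest))
```

C-IFBPFE additionally pre-screens redundant features by correlation clustering and prunes poor features during the run; this sketch shows only the core stochastic bit-flipping search.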
Affiliation(s)
- Guicheng Yang
- College of Computer Science and Engineering, Northeastern University, Shenyang, 110000, Liaoning, China.
- Wei Li
- Key Laboratory of Intelligent Computing in Medical Image (MIIC), Northeastern University, Ministry of Education, Shenyang, 110000, Liaoning, China; National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Shenyang, 110819, Liaoning, China.
- Weidong Xie
- College of Computer Science and Engineering, Northeastern University, Shenyang, 110000, Liaoning, China.
- Linjie Wang
- College of Computer Science and Engineering, Northeastern University, Shenyang, 110000, Liaoning, China.
- Kun Yu
- College of Medicine and Bioinformation Engineering, Northeastern University, Shenyang, 110819, Liaoning, China.
22
Ratziu V, Hompesch M, Petitjean M, Serdjebi C, Iyer JS, Parwani AV, Tai D, Bugianesi E, Cusi K, Friedman SL, Lawitz E, Romero-Gómez M, Schuppan D, Loomba R, Paradis V, Behling C, Sanyal AJ. Artificial intelligence-assisted digital pathology for non-alcoholic steatohepatitis: current status and future directions. J Hepatol 2024; 80:335-351. [PMID: 37879461 DOI: 10.1016/j.jhep.2023.10.015]
Abstract
The worldwide prevalence of non-alcoholic steatohepatitis (NASH) is increasing, causing a significant medical burden, but no approved therapeutics are currently available. NASH drug development requires histological analysis of liver biopsies by expert pathologists for trial enrolment and efficacy assessment, which can be hindered by multiple issues including sample heterogeneity, inter-reader and intra-reader variability, and ordinal scoring systems. Consequently, there is a high unmet need for accurate, reproducible, quantitative, and automated methods to assist pathologists with histological analysis to improve the precision around treatment and efficacy assessment. Digital pathology (DP) workflows in combination with artificial intelligence (AI) have been established in other areas of medicine and are being actively investigated in NASH to assist pathologists in the evaluation and scoring of NASH histology. DP/AI models can be used to automatically detect, localise, quantify, and score histological parameters and have the potential to reduce the impact of scoring variability in NASH clinical trials. This narrative review provides an overview of DP/AI tools in development for NASH, highlights key regulatory considerations, and discusses how these advances may impact the future of NASH clinical management and drug development. This should be a high priority in the NASH field, particularly to improve the development of safe and effective therapeutics.
Affiliation(s)
- Vlad Ratziu
- Sorbonne Université, ICAN Institute for Cardiometabolism and Nutrition, Hospital Pitié-Salpêtrière, INSERM UMRS 1138 CRC, Paris, France.
- Anil V Parwani
- Department of Pathology, The Ohio State University, Columbus, OH, USA
- Kenneth Cusi
- Division of Endocrinology, Diabetes and Metabolism, University of Florida, Gainesville, FL, USA
- Scott L Friedman
- Division of Liver Diseases, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Eric Lawitz
- Texas Liver Institute, University of Texas Health San Antonio, San Antonio, TX, USA
- Manuel Romero-Gómez
- Hospital Universitario Virgen del Rocío, CiberEHD, Insituto de Biomedicina de Sevilla (HUVR/CSIC/US), Universidad de Sevilla, Seville, Spain
- Detlef Schuppan
- Institute of Translational Immunology and Department of Medicine, University Medical Center, Mainz, Germany; Department of Hepatology and Gastroenterology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Rohit Loomba
- NAFLD Research Center, University of California at San Diego, San Diego, CA, USA
- Valérie Paradis
- Université Paris Cité, Service d'Anatomie Pathologique, Hôpital Beaujon, Paris, France
- Arun J Sanyal
- Division of Gastroenterology, Hepatology and Nutrition, Virginia Commonwealth University, Richmond, VA, USA
23
Klauschen F, Dippel J, Keyl P, Jurmeister P, Bockmayr M, Mock A, Buchstab O, Alber M, Ruff L, Montavon G, Müller KR. Toward Explainable Artificial Intelligence for Precision Pathology. Annu Rev Pathol 2024; 19:541-570. [PMID: 37871132 DOI: 10.1146/annurev-pathmechdis-051222-113147]
Abstract
The rapid development of precision medicine in recent years has started to challenge diagnostic pathology with respect to its ability to analyze histological images and increasingly large molecular profiling data in a quantitative, integrative, and standardized way. Artificial intelligence (AI) and, more precisely, deep learning technologies have recently demonstrated the potential to facilitate complex data analysis tasks, including clinical, histological, and molecular data for disease classification; tissue biomarker quantification; and clinical outcome prediction. This review provides a general introduction to AI and describes recent developments with a focus on applications in diagnostic pathology and beyond. We explain limitations including the black-box character of conventional AI and describe solutions to make machine learning decisions more transparent with so-called explainable AI. The purpose of the review is to foster a mutual understanding of both the biomedical and the AI side. To that end, in addition to providing an overview of the relevant foundations in pathology and machine learning, we present worked-through examples for a better practical understanding of what AI can achieve and how it should be done.
Affiliation(s)
- Frederick Klauschen
- Institute of Pathology, Ludwig-Maximilians-Universität München, Munich, Germany;
- Institute of Pathology, Charité Universitätsmedizin Berlin, Berlin, Germany
- Berlin Institute for the Foundations of Learning and Data (BIFOLD), Berlin, Germany
- German Cancer Consortium, German Cancer Research Center (DKTK/DKFZ), Munich Partner Site, Munich, Germany
- Jonas Dippel
- Berlin Institute for the Foundations of Learning and Data (BIFOLD), Berlin, Germany
- Machine Learning Group, Department of Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Philipp Keyl
- Institute of Pathology, Ludwig-Maximilians-Universität München, Munich, Germany
- Philipp Jurmeister
- Institute of Pathology, Ludwig-Maximilians-Universität München, Munich, Germany
- German Cancer Consortium, German Cancer Research Center (DKTK/DKFZ), Munich Partner Site, Munich, Germany
- Michael Bockmayr
- Institute of Pathology, Charité Universitätsmedizin Berlin, Berlin, Germany
- Department of Pediatric Hematology and Oncology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Research Institute Children's Cancer Center Hamburg, Hamburg, Germany
- Andreas Mock
- Institute of Pathology, Ludwig-Maximilians-Universität München, Munich, Germany
- German Cancer Consortium, German Cancer Research Center (DKTK/DKFZ), Munich Partner Site, Munich, Germany
- Oliver Buchstab
- Institute of Pathology, Ludwig-Maximilians-Universität München, Munich, Germany
- Maximilian Alber
- Institute of Pathology, Charité Universitätsmedizin Berlin, Berlin, Germany
- Aignostics, Berlin, Germany
- Grégoire Montavon
- Berlin Institute for the Foundations of Learning and Data (BIFOLD), Berlin, Germany
- Machine Learning Group, Department of Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Department of Mathematics and Computer Science, Freie Universität Berlin, Berlin, Germany
- Klaus-Robert Müller
- Berlin Institute for the Foundations of Learning and Data (BIFOLD), Berlin, Germany
- Machine Learning Group, Department of Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Department of Artificial Intelligence, Korea University, Seoul, Korea
- Max Planck Institute for Informatics, Saarbrücken, Germany
24
Tong L, Shi W, Isgut M, Zhong Y, Lais P, Gloster L, Sun J, Swain A, Giuste F, Wang MD. Integrating Multi-Omics Data With EHR for Precision Medicine Using Advanced Artificial Intelligence. IEEE Rev Biomed Eng 2024; 17:80-97. [PMID: 37824325 DOI: 10.1109/rbme.2023.3324264]
Abstract
With the recent advancement of novel biomedical technologies such as high-throughput sequencing and wearable devices, multi-modal biomedical data ranging from multi-omics molecular data to real-time continuous bio-signals are generated at an unprecedented speed and scale every day. For the first time, these multi-modal biomedical data can bring precision medicine close to reality. However, due to their volume and complexity, making good use of these data requires major effort. Researchers and clinicians are actively developing artificial intelligence (AI) approaches for data-driven knowledge discovery and causal inference using a variety of biomedical data modalities, and these AI-based approaches have demonstrated promising results in various biomedical and healthcare applications. In this review paper, we summarize the state-of-the-art AI models for integrating multi-omics data and electronic health records (EHRs) for precision medicine. We discuss the challenges and opportunities in integrating multi-omics data with EHRs, as well as future directions. We hope this review can inspire future research and development in integrating multi-omics data with EHRs for precision medicine.
25
Ricker CA, Meli K, Van Allen EM. Historical perspective and future directions: computational science in immuno-oncology. J Immunother Cancer 2024; 12:e008306. [PMID: 38191244 PMCID: PMC10826578 DOI: 10.1136/jitc-2023-008306]
Abstract
Immuno-oncology holds promise for transforming patient care, having achieved durable clinical response rates across a variety of advanced and metastatic cancers. Despite these achievements, only a minority of patients respond to immunotherapy, underscoring the importance of elucidating the molecular mechanisms responsible for response and resistance to inform the development and selection of treatments. Breakthroughs in molecular sequencing technologies have generated an immense amount of genomic and transcriptomic sequencing data that can be mined to uncover complex tumor-immune interactions using computational tools. In this review, we discuss existing and emerging computational methods that contextualize the composition and functional state of the tumor microenvironment, infer reactivity and clonal dynamics from reconstructed immune cell receptor repertoires, and predict the antigenic landscape for immune cell recognition. We further describe the advantages of multi-omics analyses for capturing multidimensional relationships and of artificial intelligence techniques for integrating omics data with histopathological and radiological images to encapsulate patterns of treatment response and tumor-immune biology. Finally, we discuss key challenges impeding widespread use and clinical application and conclude with future perspectives. We hope that this review will serve as a guide for prospective researchers seeking to use existing tools for scientific discovery and will inspire the optimization or development of novel tools, ultimately expediting advancements in immunotherapy that improve patient survival and quality of life.
Affiliation(s)
- Cora A Ricker
- Medical Oncology, Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- Kevin Meli
- Medical Oncology, Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
26
Levy JJ, Davis MJ, Chacko RS, Davis MJ, Fu LJ, Goel T, Pamal A, Nafi I, Angirekula A, Suvarna A, Vempati R, Christensen BC, Hayden MS, Vaickus LJ, LeBoeuf MR. Intraoperative margin assessment for basal cell carcinoma with deep learning and histologic tumor mapping to surgical site. NPJ Precis Oncol 2024; 8:2. [PMID: 38172524 PMCID: PMC10764333 DOI: 10.1038/s41698-023-00477-7]
Abstract
Successful treatment of solid cancers relies on complete surgical excision of the tumor, either for definitive treatment or before adjuvant therapy. Intraoperative and postoperative radial sectioning, the most common form of margin assessment, can lead to incomplete excision and increase the risk of recurrence and repeat procedures. Mohs Micrographic Surgery is associated with complete removal of basal cell and squamous cell carcinoma through real-time margin assessment of 100% of the peripheral and deep margins. Real-time assessment in many tumor types is constrained by tissue size and complexity and by specimen processing and assessment time during general anesthesia. We developed an artificial intelligence platform that reduces tissue preprocessing and histological assessment time through automated grossing recommendations and mapping and orientation of the tumor to the surgical specimen. Using basal cell carcinoma as a model system, our results demonstrate that this approach can address surgical laboratory efficiency bottlenecks for rapid and complete intraoperative margin assessment.
Affiliation(s)
- Joshua J Levy
- Department of Pathology and Laboratory Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, 90048, USA.
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA, 90048, USA.
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA.
- Emerging Diagnostic and Investigative Technologies, Clinical Genomics and Advanced Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03756, USA.
- Department of Epidemiology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA.
- Program in Quantitative Biomedical Sciences, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA.
- Matthew J Davis
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Michael J Davis
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Lucy J Fu
- Geisel School of Medicine at Dartmouth, Hanover, NH, 03755, USA
- Tarushii Goel
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Akash Pamal
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- University of Virginia, Charlottesville, VA, 22903, USA
- Irfan Nafi
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Stanford University, Palo Alto, CA, 94305, USA
- Abhinav Angirekula
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- University of Illinois Urbana-Champaign, Champaign, IL, 61820, USA
- Anish Suvarna
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Ram Vempati
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Brock C Christensen
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Department of Molecular and Systems Biology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Department of Community and Family Medicine, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Matthew S Hayden
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Louis J Vaickus
- Emerging Diagnostic and Investigative Technologies, Clinical Genomics and Advanced Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03756, USA
- Matthew R LeBoeuf
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
27
Amgad M, Hodge JM, Elsebaie MAT, Bodelon C, Puvanesarajah S, Gutman DA, Siziopikou KP, Goldstein JA, Gaudet MM, Teras LR, Cooper LAD. A population-level digital histologic biomarker for enhanced prognosis of invasive breast cancer. Nat Med 2024; 30:85-97. [PMID: 38012314 DOI: 10.1038/s41591-023-02643-7]
Abstract
Breast cancer is a heterogeneous disease with variable survival outcomes. Pathologists grade the microscopic appearance of breast tissue using the Nottingham criteria, which are qualitative and do not account for noncancerous elements within the tumor microenvironment. Here we present the Histomic Prognostic Signature (HiPS), a comprehensive, interpretable scoring of the survival risk incurred by breast tumor microenvironment morphology. HiPS uses deep learning to accurately map cellular and tissue structures to measure epithelial, stromal, immune, and spatial interaction features. It was developed using a population-level cohort from the Cancer Prevention Study-II and validated using data from three independent cohorts, including the Prostate, Lung, Colorectal, and Ovarian Cancer trial, Cancer Prevention Study-3, and The Cancer Genome Atlas. HiPS consistently outperformed pathologists in predicting survival outcomes, independent of tumor-node-metastasis stage and pertinent variables. This was largely driven by stromal and immune features. In conclusion, HiPS is a robustly validated biomarker to support pathologists and improve patient prognosis.
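HiPS maps deep-learning-measured tissue features to a survival risk score. As an illustration of that final scoring step only (not of the HiPS model itself, whose features and training are described in the paper), the sketch below fits a Cox proportional-hazards risk score to synthetic morphology-like features by gradient ascent on the log partial likelihood; the feature meanings, data, and constants are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic cohort: 200 patients, 3 hypothetical tissue features
# (e.g., a TIL-density-like score, an atypia-like score, a stroma fraction).
n = 200
X = rng.normal(size=(n, 3))
true_beta = np.array([0.8, -0.5, 0.3])
# Survival times: higher risk implies shorter expected time (exponential model).
time = rng.exponential(scale=np.exp(-X @ true_beta))
event = rng.random(n) < 0.7          # ~70% observed events, rest censored

order = np.argsort(time)             # sort by time so risk sets are suffixes
X, time, event = X[order], time[order], event[order]

def partial_likelihood_grad(beta):
    """Gradient of the Cox log partial likelihood.
    Risk set of patient i = all patients with time >= time_i."""
    risk = np.exp(X @ beta)
    # Reverse cumulative sums give risk-set totals for time-sorted data.
    denom = np.cumsum(risk[::-1])[::-1]
    wx = np.cumsum((risk[:, None] * X)[::-1], axis=0)[::-1]
    return ((X - wx / denom[:, None]) * event[:, None]).sum(axis=0)

beta = np.zeros(3)
for _ in range(500):
    beta += 0.001 * partial_likelihood_grad(beta)   # gradient ascent

print("estimated coefficients:", beta.round(2))
```

In practice a validated library (e.g., a Cox implementation with ties handling and regularization) would be used; the point here is only how per-patient features become a single hazard-based risk score.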
Affiliation(s)
- Mohamed Amgad
- Department of Pathology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- James M Hodge
- Department of Population Science, American Cancer Society, Atlanta, GA, USA
- Maha A T Elsebaie
- Department of Medicine, John H. Stroger, Jr. Hospital of Cook County, Chicago, IL, USA
- Clara Bodelon
- Department of Population Science, American Cancer Society, Atlanta, GA, USA
- David A Gutman
- Department of Pathology, Emory University School of Medicine, Atlanta, GA, USA
- Kalliopi P Siziopikou
- Department of Pathology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Jeffery A Goldstein
- Department of Pathology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Mia M Gaudet
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, Bethesda, MD, USA
- Lauren R Teras
- Department of Population Science, American Cancer Society, Atlanta, GA, USA
- Lee A D Cooper
- Department of Pathology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA.
28
Liu H, Shi Y, Li A, Wang M. Multi-modal fusion network with intra- and inter-modality attention for prognosis prediction in breast cancer. Comput Biol Med 2024; 168:107796. [PMID: 38064843 DOI: 10.1016/j.compbiomed.2023.107796]
Abstract
Accurate breast cancer prognosis prediction can help clinicians develop appropriate treatment plans and improve quality of life for patients. Recent prognostic prediction studies suggest that fusing multi-modal data, e.g., genomic data and pathological images, plays a crucial role in improving predictive performance. Despite the promising results of existing approaches, challenges remain in effective multi-modal fusion. First, albeit a powerful fusion technique, the Kronecker product produces a high-dimensional quadratic expansion of features that may result in high computational cost and overfitting risk, limiting its performance and applicability in cancer prognosis prediction. Second, most existing methods focus on learning cross-modality relations between different modalities, ignoring modality-specific relations that are complementary to cross-modality relations and beneficial for cancer prognosis prediction. To address these challenges, in this study we propose a novel attention-based multi-modal network to accurately predict breast cancer prognosis, which efficiently models both modality-specific and cross-modality relations without bringing in high-dimensional features. Specifically, two intra-modality self-attentional modules and an inter-modality cross-attentional module, accompanied by latent space transformation of the channel affinity matrix, are developed to capture modality-specific and cross-modality relations for efficient integration of genomic data and pathological images. Moreover, we design an adaptive fusion block to take full advantage of both modality-specific and cross-modality relations. Comprehensive experiments demonstrate that our method can effectively boost the prognosis prediction performance of breast cancer and compares favorably with state-of-the-art methods.
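The modules described above can be pictured with plain scaled dot-product attention. The sketch below shows, in numpy and with made-up dimensions, the pattern of intra-modality self-attention (queries, keys, and values from one modality) versus inter-modality cross-attention (one modality querying the other), followed by a fixed-weight stand-in for the adaptive fusion block; the paper's learned projections, channel-affinity transformation, and gating weights are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention over token matrices."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores, axis=-1) @ V

d = 16
path = rng.normal(size=(10, d))   # 10 pathology-image feature tokens
gene = rng.normal(size=(6, d))    # 6 genomic feature tokens

# Intra-modality self-attention: Q, K, V from the same modality,
# capturing modality-specific relations.
path_intra = attention(path, path, path)
gene_intra = attention(gene, gene, gene)

# Inter-modality cross-attention: each modality queries the other,
# capturing cross-modality relations.
path_cross = attention(path, gene, gene)
gene_cross = attention(gene, path, path)

# Stand-in for adaptive fusion: a learned gate would weight the two views;
# here a fixed 0.5/0.5 mix, then mean pooling and concatenation.
fused = np.concatenate([
    (0.5 * path_intra + 0.5 * path_cross).mean(axis=0),
    (0.5 * gene_intra + 0.5 * gene_cross).mean(axis=0),
])
print(fused.shape)
```

The resulting fused vector (here 2d-dimensional) would feed a downstream prognosis head; in the actual model the mixing weights and all projections are learned end to end.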
Affiliation(s)
- Honglei Liu
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Yi Shi
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Ao Li
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China.
- Minghui Wang
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China.
29
Xiao X, Kong Y, Li R, Wang Z, Lu H. Transformer with convolution and graph-node co-embedding: An accurate and interpretable vision backbone for predicting gene expressions from local histopathological image. Med Image Anal 2024; 91:103040. [PMID: 38007979 DOI: 10.1016/j.media.2023.103040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Revised: 11/04/2023] [Accepted: 11/17/2023] [Indexed: 11/28/2023]
Abstract
Inferring gene expressions from histopathological images has long been a fascinating yet challenging task, primarily due to the substantial disparities between the two modalities. Existing strategies using local or global features of histological images suffer from high model complexity, GPU consumption, low interpretability, insufficient encoding of local features, and over-smoothed prediction of gene expressions among neighboring sites. In this paper, we develop TCGN (Transformer with Convolution and Graph-Node co-embedding) for gene expression estimation from H&E-stained pathological slide images. TCGN combines convolutional layers, transformer encoders, and graph neural networks, and is the first to integrate these blocks in a general and interpretable computer vision backbone. Notably, TCGN operates with just a single spot image as input for histopathological image analysis, simplifying the process while maintaining interpretability. We validate TCGN on three publicly available spatial transcriptomic datasets, where it consistently exhibits the best performance (median PCC 0.232). TCGN offers superior accuracy while keeping parameters to a minimum (just 86.241 million) and consumes minimal memory, allowing it to run smoothly even on personal computers. Moreover, TCGN can be extended to handle bulk RNA-seq data while preserving interpretability. Enhancing the accuracy of omics information prediction from pathological images not only establishes a connection between genotype and phenotype, enabling the prediction of costly-to-measure biomarkers from affordable histopathological images, but also lays the groundwork for future multi-modal data modeling. Our results confirm that TCGN is a powerful tool for inferring gene expressions from histopathological images in precision health applications.
Affiliation(s)
- Xiao Xiao
- State Key Laboratory of Microbial Metabolism, Joint International Research Laboratory of Metabolic and Developmental Sciences, Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Center for Biostatistics and Data Science, National Center for Translational Medicine, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China; Department of Biostatistics, Yale School of Public Health, Yale University, New Haven, CT, United States
- Yan Kong
- State Key Laboratory of Microbial Metabolism, Joint International Research Laboratory of Metabolic and Developmental Sciences, Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Center for Biostatistics and Data Science, National Center for Translational Medicine, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China
- Ronghan Li
- SJTU-Yale Joint Center for Biostatistics and Data Science, National Center for Translational Medicine, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China; Zhiyuan College, Shanghai Jiao Tong University, Shanghai, China
- Zuoheng Wang
- Department of Biostatistics, Yale School of Public Health, Yale University, New Haven, CT, United States
- Hui Lu
- State Key Laboratory of Microbial Metabolism, Joint International Research Laboratory of Metabolic and Developmental Sciences, Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Center for Biostatistics and Data Science, National Center for Translational Medicine, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China; NHC Key Laboratory of Medical Embryogenesis and Developmental Molecular Biology & Shanghai Key Laboratory of Embryo and Reproduction Engineering, Shanghai Engineering Research Center for Big Data in Pediatric Precision Medicine, Shanghai, China.
30
Shen H, Wu J, Shen X, Hu J, Liu J, Zhang Q, Sun Y, Chen K, Li X. An efficient context-aware approach for whole-slide image classification. iScience 2023; 26:108175. [PMID: 38047071 PMCID: PMC10690557 DOI: 10.1016/j.isci.2023.108175] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Revised: 08/29/2023] [Accepted: 10/08/2023] [Indexed: 12/05/2023] Open
Abstract
Slide-level computational pathology for gigapixel whole-slide images (WSIs) is helpful in disease diagnosis but remains challenging. We propose a context-aware approach termed WSI inspection via transformer (WIT) for slide-level classification that holistically models dependencies among patches on a WSI. WIT automatically learns a feature representation of the WSI by aggregating the features of all image patches. We evaluate the classification performance of WIT against a state-of-the-art baseline method. WIT achieved an accuracy of 82.1% (95% CI, 80.7%-83.3%) in the detection of 32 cancer types on the TCGA dataset, 0.918 (0.910-0.925) in the diagnosis of cancer on the CPTAC dataset, and 0.882 (0.870-0.890) in the diagnosis of prostate cancer from needle biopsy slides, outperforming the baseline by 31.6%, 5.4%, and 9.3%, respectively. WIT can pinpoint the WSI regions that are most influential for its decision. WIT represents a new paradigm for computational pathology, facilitating the development of digital pathology tools.
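The patch-aggregation idea this abstract describes can be sketched as attention pooling over patch embeddings (purely illustrative: WIT's actual aggregator is a transformer, and the array sizes and query here are hypothetical placeholders for learned quantities):

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
patches = rng.standard_normal((100, 256))  # 100 patch feature vectors (e.g., from an encoder)
query = rng.standard_normal(256)           # pooling query; learned in a real model

scores = patches @ query / np.sqrt(256)    # one relevance score per patch
weights = softmax(scores)                  # normalized attention weights, sum to 1
slide_embedding = weights @ patches        # weighted sum -> one slide-level vector

print(slide_embedding.shape)
```

The per-patch `weights` are also what lets such a model "pinpoint the WSI regions that are most influential for its decision": high-weight patches dominate the slide-level representation.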
Affiliation(s)
- Hongru Shen
- Tianjin Cancer Institute, Tianjin’s Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin Medical University, Tianjin, China
- Jianghua Wu
- Department of Pathology, Peking University Cancer Hospital & Institute, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Beijing, China
- Xilin Shen
- Tianjin Cancer Institute, Tianjin’s Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin Medical University, Tianjin, China
- Jiani Hu
- Tianjin Cancer Institute, Tianjin’s Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin Medical University, Tianjin, China
- Jilei Liu
- Tianjin Cancer Institute, Tianjin’s Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin Medical University, Tianjin, China
- Qiang Zhang
- Department of Maxillofacial and Otorhinolaryngology Oncology, Tianjin’s Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin Medical University, Tianjin, China
- Yan Sun
- Department of Pathology, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Immunology and Biotherapy, National Clinical Research Center for Cancer, Tianjin Cancer Institute and Hospital, Tianjin Medical University, Tianjin, China
- Kexin Chen
- Department of Epidemiology and Biostatistics, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Molecular Cancer Epidemiology of Tianjin, National Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin Medical University, Tianjin, China
- Xiangchun Li
- Tianjin Cancer Institute, Tianjin’s Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin Medical University, Tianjin, China
31
Schulz S, Jesinghaus M, Foersch S. [Multistain deep learning as a prognostic and predictive biomarker in colorectal cancer]. PATHOLOGIE (HEIDELBERG, GERMANY) 2023; 44:104-108. [PMID: 37987821 DOI: 10.1007/s00292-023-01280-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 10/30/2023] [Indexed: 11/22/2023]
Abstract
The tumor immune microenvironment (TIME) plays a crucial prognostic and predictive role in solid malignancies such as colorectal cancer (CRC). Nevertheless, scoring systems based on TIME such as the Immunoscore (IS) are rarely used in clinical practice. Among other reasons, this might be due to the additional time required for manual quantification of tumor-associated immune cells or costs associated with proprietary/commercial solutions. To address these issues, we developed a multistain deep learning model (MSDLM) and trained, validated, and tested it on immunohistochemical image data of different immune cell subtypes from over 1000 patients with CRC. Our model showed high prognostic accuracy and outperformed other clinical, molecular, and immune cell-based parameters. It might also be used for therapy response prediction in rectal cancer patients undergoing neoadjuvant therapy. Leveraging artificial intelligence interpretability/explainability methods, we ascertained that the MSDLM's predictions align with recognized antitumor immune response patterns. Consequently, the AImmunoscore (AIS) could emerge as a potential TIME-based decision-making tool for clinicians.
Affiliation(s)
- Stefan Schulz
- Institut für Pathologie, Universitätsmedizin Mainz, Langenbeckstr. 1, Mainz, Deutschland
- Moritz Jesinghaus
- Institut für Pathologie, Universitätsklinikum Marburg, Marburg, Deutschland
- Sebastian Foersch
- Institut für Pathologie, Universitätsmedizin Mainz, Langenbeckstr. 1, Mainz, Deutschland.
32
Li Y, Shen Y, Zhang J, Song S, Li Z, Ke J, Shen D. A Hierarchical Graph V-Net With Semi-Supervised Pre-Training for Histological Image Based Breast Cancer Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:3907-3918. [PMID: 37725717 DOI: 10.1109/tmi.2023.3317132] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/21/2023]
Abstract
Numerous patch-based methods have recently been proposed for histological image based breast cancer classification. However, their performance could be highly affected by ignoring spatial contextual information in the whole slide image (WSI). To address this issue, we propose a novel hierarchical Graph V-Net by integrating 1) patch-level pre-training and 2) context-based fine-tuning, with a hierarchical graph network. Specifically, a semi-supervised framework based on knowledge distillation is first developed to pre-train a patch encoder for extracting disease-relevant features. Then, a hierarchical Graph V-Net is designed to construct a hierarchical graph representation from neighboring/similar individual patches for coarse-to-fine classification, where each graph node (corresponding to one patch) is attached with extracted disease-relevant features and its target label during training is the average label of all pixels in the corresponding patch. To evaluate the performance of our proposed hierarchical Graph V-Net, we collect a large WSI dataset of 560 WSIs, with 30 labeled WSIs from the BACH dataset (through our further refinement), 30 labeled WSIs and 500 unlabeled WSIs from Yunnan Cancer Hospital. Those 500 unlabeled WSIs are employed for patch-level pre-training to improve feature representation, while 60 labeled WSIs are used to train and test our proposed hierarchical Graph V-Net. Both comparative assessment and ablation studies demonstrate the superiority of our proposed hierarchical Graph V-Net over state-of-the-art methods in classifying breast cancer from WSIs. The source code and our annotations for the BACH dataset have been released at https://github.com/lyhkevin/Graph-V-Net.
33
Toussaint PA, Leiser F, Thiebes S, Schlesner M, Brors B, Sunyaev A. Explainable artificial intelligence for omics data: a systematic mapping study. Brief Bioinform 2023; 25:bbad453. [PMID: 38113073 PMCID: PMC10729786 DOI: 10.1093/bib/bbad453] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Revised: 07/28/2023] [Accepted: 11/08/2023] [Indexed: 12/21/2023] Open
Abstract
Researchers increasingly turn to explainable artificial intelligence (XAI) to analyze omics data and gain insights into the underlying biological processes. Yet, given the interdisciplinary nature of the field, many findings have only been shared in their respective research community. An overview of XAI for omics data is needed to highlight promising approaches and help detect common issues. Toward this end, we conducted a systematic mapping study. To identify relevant literature, we queried Scopus, PubMed, Web of Science, BioRxiv, MedRxiv and arXiv. Based on keywording, we developed a coding scheme with 10 facets regarding the studies' AI methods, explainability methods and omics data. Our mapping study resulted in 405 included papers published between 2010 and 2023. The inspected papers analyze DNA-based (mostly genomic), transcriptomic, proteomic or metabolomic data by means of neural networks, tree-based methods, statistical methods and further AI methods. The preferred post-hoc explainability methods are feature relevance (n = 166) and visual explanation (n = 52), while papers using interpretable approaches often resort to the use of transparent models (n = 83) or architecture modifications (n = 72). With many research gaps still apparent for XAI for omics data, we deduced eight research directions and discuss their potential for the field. We also provide exemplary research questions for each direction. Many problems with the adoption of XAI for omics data in clinical practice are yet to be resolved. This systematic mapping study outlines extant research on the topic and provides research directions for researchers and practitioners.
Affiliation(s)
- Philipp A Toussaint
- Department of Economics and Management, Karlsruhe Institute of Technology, Karlsruhe, Germany
- HIDSS4Health – Helmholtz Information and Data Science School for Health, Karlsruhe, Heidelberg, Germany
- Florian Leiser
- Department of Economics and Management, Karlsruhe Institute of Technology, Karlsruhe, Germany
- Scott Thiebes
- Department of Economics and Management, Karlsruhe Institute of Technology, Karlsruhe, Germany
- Matthias Schlesner
- Biomedical Informatics, Data Mining and Data Analytics, Faculty of Applied Computer Science and Medical Faculty, University of Augsburg, Augsburg, Germany
- Benedikt Brors
- Division of Applied Bioinformatics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Translational Oncology, National Center for Tumor Diseases, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Ali Sunyaev
- Department of Economics and Management, Karlsruhe Institute of Technology, Karlsruhe, Germany
34
Chen B, Jin J, Liu H, Yang Z, Zhu H, Wang Y, Lin J, Wang S, Chen S. Trends and hotspots in research on medical images with deep learning: a bibliometric analysis from 2013 to 2023. Front Artif Intell 2023; 6:1289669. [PMID: 38028662 PMCID: PMC10665961 DOI: 10.3389/frai.2023.1289669] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2023] [Accepted: 10/27/2023] [Indexed: 12/01/2023] Open
Abstract
Background: With the rapid development of the internet, the improvement of computer capabilities, and the continuous advancement of algorithms, deep learning has developed rapidly in recent years and has been widely applied in many fields. Previous studies have shown that deep learning performs excellently in image processing, and deep learning-based medical image processing may help solve the difficulties faced by traditional medical image processing. This technology has attracted the attention of many scholars in the fields of computer science and medicine. This study summarizes the knowledge structure of deep learning-based medical image processing research through bibliometric analysis and explores the research hotspots and possible development trends in this field. Methods: We retrieved the Web of Science Core Collection database using the search terms "deep learning," "medical image processing," and their synonyms, and used CiteSpace for visual analysis of authors, institutions, countries, keywords, co-cited references, co-cited authors, and co-cited journals. Results: The analysis was conducted on 562 highly cited papers retrieved from the database. The trend chart of annual publication volume shows an upward trend. Pheng-Ann Heng, Hao Chen, and Klaus Hermann Maier-Hein are among the active authors in this field. The Chinese Academy of Sciences has the highest number of publications, while the institution with the highest centrality is Stanford University. The United States has the highest number of publications, followed by China. The most frequent keyword is "Deep Learning," and the highest-centrality keyword is "Algorithm." The most cited author is Kaiming He, and the author with the highest centrality is Yoshua Bengio. Conclusion: The application of deep learning in medical image processing is becoming increasingly common, and there are many active authors, institutions, and countries in this field. Current research in medical image processing mainly focuses on deep learning, convolutional neural networks, classification, diagnosis, segmentation, image, algorithm, and artificial intelligence. The research focus and trends are gradually shifting toward more complex and systematic directions, and deep learning technology will continue to play an important role.
Affiliation(s)
- Borui Chen
- First School of Clinical Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Jing Jin
- College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Haichao Liu
- College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Zhengyu Yang
- College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Haoming Zhu
- College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Yu Wang
- First School of Clinical Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Jianping Lin
- The School of Health, Fujian Medical University, Fuzhou, China
- Shizhong Wang
- The School of Health, Fujian Medical University, Fuzhou, China
- Shaoqing Chen
- College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
35
Wang Z, Yu L, Ding X, Liao X, Wang L. Shared-Specific Feature Learning With Bottleneck Fusion Transformer for Multi-Modal Whole Slide Image Analysis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:3374-3383. [PMID: 37335798 DOI: 10.1109/tmi.2023.3287256] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/21/2023]
Abstract
The fusion of multi-modal medical data is essential to assist medical experts in making treatment decisions for precision medicine. For example, combining whole slide histopathological images (WSIs) and tabular clinical data can more accurately predict the lymph node metastasis (LNM) of papillary thyroid carcinoma before surgery to avoid unnecessary lymph node resection. However, the huge-sized WSI provides much more high-dimensional information than low-dimensional tabular clinical data, making information alignment challenging in multi-modal WSI analysis tasks. This paper presents a novel transformer-guided multi-modal multi-instance learning framework to predict lymph node metastasis from both WSIs and tabular clinical data. We first propose an effective multi-instance grouping scheme, named siamese attention-based feature grouping (SAG), to group high-dimensional WSIs into representative low-dimensional feature embeddings for fusion. We then design a novel bottleneck shared-specific feature transfer module (BSFT) to explore the shared and specific features between different modalities, where a few learnable bottleneck tokens are utilized for knowledge transfer between modalities. Moreover, a modal adaptation and orthogonal projection scheme is incorporated to further encourage BSFT to learn shared and specific features from multi-modal data. Finally, the shared and specific features are dynamically aggregated via an attention mechanism for slide-level prediction. Experimental results on our collected lymph node metastasis dataset demonstrate the efficiency of our proposed components, and our framework achieves the best performance with an AUC (area under the curve) of 97.34%, outperforming state-of-the-art methods by over 1.27%.
36
Wang H, Huang G, Zhao Z, Cheng L, Juncker-Jensen A, Nagy ML, Lu X, Zhang X, Chen DZ. CCF-GNN: A Unified Model Aggregating Appearance, Microenvironment, and Topology for Pathology Image Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:3179-3193. [PMID: 37027573 DOI: 10.1109/tmi.2023.3249343] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Pathology images contain rich information on cell appearance, microenvironment, and topology features for cancer analysis and diagnosis. Among such features, topology is becoming increasingly important in analysis for cancer immunotherapy. By analyzing geometric and hierarchically structured cell distribution topology, oncologists can identify densely-packed and cancer-relevant cell communities (CCs) for making decisions. Compared to commonly-used pixel-level Convolutional Neural Network (CNN) features and cell-instance-level Graph Neural Network (GNN) features, CC topology features are at a higher level of granularity and geometry. However, topological features have not been well exploited by recent deep learning (DL) methods for pathology image classification due to the lack of effective topological descriptors for cell distribution and gathering patterns. In this paper, inspired by clinical practice, we analyze and classify pathology images by comprehensively learning cell appearance, microenvironment, and topology in a fine-to-coarse manner. To describe and exploit topology, we design the Cell Community Forest (CCF), a novel graph that represents the hierarchical formation process of big-sparse CCs from small-dense CCs. Using CCF as a new geometric topological descriptor of tumor cells in pathology images, we propose CCF-GNN, a GNN model that successively aggregates heterogeneous features (e.g., appearance, microenvironment) from the cell-instance level and cell-community level to the image level for pathology image classification. Extensive cross-validation experiments show that our method significantly outperforms alternative methods on H&E-stained and immunofluorescence images for disease grading tasks with multiple cancer types. Our proposed CCF-GNN establishes a new topological data analysis (TDA) based method, facilitating the integration of multi-level heterogeneous features of point clouds (e.g., for cells) into a unified DL framework.
37
Ariotta V, Lehtonen O, Salloum S, Micoli G, Lavikka K, Rantanen V, Hynninen J, Virtanen A, Hautaniemi S. H&E image analysis pipeline for quantifying morphological features. J Pathol Inform 2023; 14:100339. [PMID: 37915837 PMCID: PMC10616375 DOI: 10.1016/j.jpi.2023.100339] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Revised: 08/15/2023] [Accepted: 09/30/2023] [Indexed: 11/03/2023] Open
Abstract
Detecting cell types from histopathological images is essential for various digital pathology applications. However, the large number of cells in whole-slide images (WSIs) necessitates automated analysis pipelines for efficient cell type detection. Herein, we present the hematoxylin and eosin (H&E) Image Processing pipeline (HEIP) for automated analysis of scanned H&E-stained slides. HEIP is a flexible and modular open-source software that performs preprocessing, instance segmentation, and nuclei feature extraction. To evaluate the performance of HEIP, we applied it to extract cell types from ovarian high-grade serous carcinoma (HGSC) patient WSIs. HEIP showed high precision in instance segmentation, particularly for neoplastic and epithelial cells. We also show that there is a significant correlation between genomic ploidy values and morphological features, such as the major axis of the nucleus.
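The ploidy-morphology association reported here boils down to a correlation between a genomic value and a per-nucleus feature. A minimal sketch of that kind of check, on synthetic data with entirely hypothetical values (not HEIP code or its results):

```python
import numpy as np

rng = np.random.default_rng(2)
ploidy = rng.uniform(1.5, 4.5, size=200)                  # hypothetical per-sample ploidy estimates
major_axis = 5.0 + 2.0 * ploidy + rng.normal(0, 1, 200)   # hypothetical nuclear major-axis lengths,
                                                          # correlated with ploidy by construction

# Pearson correlation coefficient between the genomic and morphological feature
r = np.corrcoef(ploidy, major_axis)[0, 1]
print(round(r, 3))
```

In practice the morphological feature would come from the pipeline's nuclei feature extraction step, and significance would be assessed with a proper test rather than the raw coefficient.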
Affiliation(s)
- Valeria Ariotta
- Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
- Oskari Lehtonen
- Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
- Shams Salloum
- Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
- Department of Pathology, University of Helsinki and HUS Diagnostic Center, Helsinki University Hospital, 00029 Helsinki, Finland
- Giulia Micoli
- Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
- Kari Lavikka
- Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
- Ville Rantanen
- Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
- Johanna Hynninen
- Department of Obstetrics and Gynaecology, University of Turku and Turku University Hospital, 200521 Turku, Finland
- Anni Virtanen
- Department of Pathology, University of Helsinki and HUS Diagnostic Center, Helsinki University Hospital, 00029 Helsinki, Finland
- Sampsa Hautaniemi
- Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
38
Xie W, Fang Y, Yang G, Yu K, Li W. Transformer-Based Multi-Modal Data Fusion Method for COPD Classification and Physiological and Biochemical Indicators Identification. Biomolecules 2023; 13:1391. [PMID: 37759791 PMCID: PMC10527317 DOI: 10.3390/biom13091391] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2023] [Revised: 09/03/2023] [Accepted: 09/13/2023] [Indexed: 09/29/2023] Open
Abstract
As the number of modalities in biomedical data continues to increase, the significance of multi-modal data becomes evident in capturing complex relationships between biological processes, thereby complementing disease classification. However, the current multi-modal fusion methods for biomedical data require more effective exploitation of intra- and inter-modal interactions, and the application of powerful fusion methods to biomedical data is relatively rare. In this paper, we propose a novel multi-modal data fusion method that addresses these limitations. Our proposed method utilizes a graph neural network and a 3D convolutional network to identify intra-modal relationships. By doing so, we can extract meaningful features from each modality, preserving crucial information. To fuse information from different modalities, we employ the Low-rank Multi-modal Fusion method, which effectively integrates multiple modalities while reducing noise and redundancy. Additionally, our method incorporates the Cross-modal Transformer to automatically learn relationships between different modalities, facilitating enhanced information exchange and representation. We validate the effectiveness of our proposed method using lung CT imaging data and physiological and biochemical data obtained from patients diagnosed with Chronic Obstructive Pulmonary Disease (COPD). Our method demonstrates superior performance compared to various fusion methods and their variants in terms of disease classification accuracy.
Affiliation(s)
- Weidong Xie
- School of Computer Science and Engineering, Northeastern University, Hunnan District, Shenyang 110169, China; (W.X.); (Y.F.); (G.Y.)
- Yushan Fang
- School of Computer Science and Engineering, Northeastern University, Hunnan District, Shenyang 110169, China; (W.X.); (Y.F.); (G.Y.)
- Guicheng Yang
- School of Computer Science and Engineering, Northeastern University, Hunnan District, Shenyang 110169, China; (W.X.); (Y.F.); (G.Y.)
- Kun Yu
- College of Medicine and Bioinformation Engineering, Northeastern University, Hunnan District, Shenyang 110169, China;
- Wei Li
- School of Computer Science and Engineering, Northeastern University, Hunnan District, Shenyang 110169, China; (W.X.); (Y.F.); (G.Y.)
- Key Laboratory of Intelligent Computing in Medical Image (MIIC), Hunnan District, Shenyang 110169, China
39
Khoraminia F, Fuster S, Kanwal N, Olislagers M, Engan K, van Leenders GJLH, Stubbs AP, Akram F, Zuiverloon TCM. Artificial Intelligence in Digital Pathology for Bladder Cancer: Hype or Hope? A Systematic Review. Cancers (Basel) 2023; 15:4518. [PMID: 37760487 PMCID: PMC10526515 DOI: 10.3390/cancers15184518] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2023] [Revised: 08/30/2023] [Accepted: 09/08/2023] [Indexed: 09/29/2023] Open
Abstract
Bladder cancer (BC) diagnosis and prediction of prognosis are hindered by subjective pathological evaluation, which may cause misdiagnosis and under-/over-treatment. Computational pathology (CPATH) can identify clinical outcome predictors, offering an objective approach to improve prognosis. However, a systematic review of CPATH in BC literature is lacking. Therefore, we present a comprehensive overview of studies that used CPATH in BC, analyzing 33 out of 2285 identified studies. Most studies analyzed regions of interest to distinguish normal versus tumor tissue and identify tumor grade/stage and tissue types (e.g., urothelium, stroma, and muscle). The cell's nuclear area, shape irregularity, and roundness were the most promising markers to predict recurrence and survival based on selected regions of interest, with >80% accuracy. CPATH identified molecular subtypes by detecting features, e.g., papillary structures, hyperchromatic, and pleomorphic nuclei. Combining clinicopathological and image-derived features improved recurrence and survival prediction. However, due to the lack of outcome interpretability and independent test datasets, robustness and clinical applicability could not be ensured. The current literature demonstrates that CPATH holds the potential to improve BC diagnosis and prediction of prognosis. However, more robust, interpretable, accurate models and larger datasets-representative of clinical scenarios-are needed to address artificial intelligence's reliability, robustness, and black box challenge.
Affiliation(s)
- Farbod Khoraminia
- Department of Urology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, 3015 GD Rotterdam, The Netherlands
- Saul Fuster
- Department of Electrical Engineering and Computer Science, University of Stavanger, 4021 Stavanger, Norway
- Neel Kanwal
- Department of Electrical Engineering and Computer Science, University of Stavanger, 4021 Stavanger, Norway
- Mitchell Olislagers
- Department of Urology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, 3015 GD Rotterdam, The Netherlands
- Kjersti Engan
- Department of Electrical Engineering and Computer Science, University of Stavanger, 4021 Stavanger, Norway
- Geert J. L. H. van Leenders
- Department of Pathology and Clinical Bioinformatics, Erasmus MC Cancer Institute, University Medical Center Rotterdam, 3015 GD Rotterdam, The Netherlands
- Andrew P. Stubbs
- Department of Pathology and Clinical Bioinformatics, Erasmus MC Cancer Institute, University Medical Center Rotterdam, 3015 GD Rotterdam, The Netherlands
- Farhan Akram
- Department of Pathology and Clinical Bioinformatics, Erasmus MC Cancer Institute, University Medical Center Rotterdam, 3015 GD Rotterdam, The Netherlands
- Tahlita C. M. Zuiverloon
- Department of Urology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, 3015 GD Rotterdam, The Netherlands
|
40
|
Li Z, Jiang Y, Lu M, Li R, Xia Y. Survival Prediction via Hierarchical Multimodal Co-Attention Transformer: A Computational Histology-Radiology Solution. IEEE Trans Med Imaging 2023; 42:2678-2689. [PMID: 37030860 DOI: 10.1109/tmi.2023.3263010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
Abstract
The rapid advances in deep learning-based computational pathology and radiology have demonstrated the promise of using whole slide images (WSIs) and radiology images for survival prediction in cancer patients. However, most image-based survival prediction methods are limited to using either histology or radiology alone, leaving integrated approaches across histology and radiology relatively underdeveloped. There are two main challenges in integrating WSIs and radiology images: (1) the gigapixel nature of WSIs and (2) the vast difference in spatial scales between WSIs and radiology images. To address these challenges, we propose an interpretable, weakly supervised, multimodal learning framework, called Hierarchical Multimodal Co-Attention Transformer (HMCAT), to integrate WSIs and radiology images for survival prediction. Our approach first uses hierarchical feature extractors to capture various information, including cellular features, cellular organization, and tissue phenotypes, in WSIs. Then the hierarchical radiology-guided co-attention (HRCA) in HMCAT characterizes the multimodal interactions between hierarchical histology-based visual concepts and radiology features and learns hierarchical co-attention mappings for the two modalities. Finally, HMCAT combines their complementary information into a multimodal risk score and discovers prognostic features from the two modalities through multimodal interpretability. We apply our approach to two cancer datasets (365 WSIs with matched magnetic resonance [MR] images and 213 WSIs with matched computed tomography [CT] images). Our results demonstrate that the proposed HMCAT consistently achieves superior performance over unimodal approaches trained on either histology or radiology data alone, as well as other state-of-the-art methods.
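The co-attention mechanism this abstract describes is built from cross-modal scaled dot-product attention. A minimal single-head NumPy sketch of that building block (illustrative only; the array names and shapes are assumptions, not HMCAT's actual implementation):

```python
import numpy as np

def cross_attention(queries, keys, values):
    """One modality (e.g. radiology features as queries) attends to another
    (histology features as keys/values) via scaled dot-product attention."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # softmax numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1 over the keys
    return weights @ values, weights                # attended values, attention map
```

The returned attention map is also what makes such models inspectable: it shows which histology regions each radiology feature attended to.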
|
41
|
Matek C, Marr C, von Bergwelt-Baildon M, Spiekermann K. [Artificial Intelligence for computer-aided leukemia diagnostics]. Dtsch Med Wochenschr 2023; 148:1108-1112. [PMID: 37611575 DOI: 10.1055/a-1965-7044] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/25/2023]
Abstract
The manual examination of blood and bone marrow specimens from leukemia patients is time-consuming and limited by intra- and inter-observer variance. The development of AI algorithms for leukemia diagnostics requires high-quality sample digitization and reliable annotation of large datasets. Deep learning-based algorithms trained on such datasets attain human-level performance for some well-defined, clinically relevant questions, such as the blast character of cells. Methods such as multiple-instance learning allow diagnoses to be predicted from a collection of leukocytes but are more data-intensive. Using "explainable AI" methods can make the prediction process more transparent and allow users to verify the algorithm's predictions. Stability and robustness analyses are necessary for the routine application of these algorithms, and regulatory institutions are developing standards for this purpose. Integrated diagnostics, which link different diagnostic modalities, offer the promise of even greater accuracy but require more extensive and diverse datasets.
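The multiple-instance learning the authors mention rests on a simple aggregation assumption. A toy sketch of the standard max-pooling formulation (generic, not the cited algorithms, whose pooling may be learned):

```python
def mil_bag_prediction(instance_probs, threshold=0.5):
    """Standard multiple-instance assumption: a bag (e.g. one patient's set
    of leukocyte images) is positive if at least one instance is positive.
    Max-pooling over per-cell probabilities gives the bag-level score."""
    bag_prob = max(instance_probs)
    return bag_prob, bag_prob >= threshold
```

Only the bag label is needed for training, which is why MIL avoids per-cell annotation but, as the abstract notes, needs more data overall.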
Affiliation(s)
- Christian Matek
- Pathologisches Institut, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Institute of AI for Health, Helmholtz Zentrum München, Munich, Germany
- Carsten Marr
- Institute of AI for Health, Helmholtz Zentrum München, Munich, Germany
|
42
|
Shao W, Liu J, Zuo Y, Qi S, Hong H, Sheng J, Zhu Q, Zhang D. FAM3L: Feature-Aware Multi-Modal Metric Learning for Integrative Survival Analysis of Human Cancers. IEEE Trans Med Imaging 2023; 42:2552-2565. [PMID: 37030781 DOI: 10.1109/tmi.2023.3262024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Survival analysis estimates the survival time of an individual patient or a group of patients and is an important tool for planning cancer treatment. Recent studies have suggested that integrative analysis of histopathological images and genomic data can predict the survival of cancer patients better than any single biomarker, since different biomarkers may provide complementary information. However, for multi-modal data that may contain irrelevant or redundant features, it remains challenging to design a distance metric that can simultaneously discover significant features and measure the difference in survival time among patients. To solve this issue, we propose a Feature-Aware Multi-modal Metric Learning method (FAM3L), which not only learns the metric for distance constraints on patients' survival time but also identifies image and genomic features that are important for survival analysis. Specifically, for each data modality we first design a feature-aware metric that can be decoupled into a traditional distance metric and a diagonal weight for important feature identification. Then, to explore the complex correlations across modalities, we apply the Hilbert-Schmidt Independence Criterion (HSIC) to jointly learn the multiple metrics. Finally, based on the learned distance metrics, we apply the Cox proportional hazards model for prognosis prediction. We evaluate FAM3L on three cancer cohorts derived from The Cancer Genome Atlas (TCGA); the experimental results demonstrate that our method not only achieves superior performance for cancer prognosis but also identifies meaningful image and genomic features that correlate strongly with cancer survival.
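The final stage named here, the Cox proportional hazards model, scores each patient by a linear risk and fits that risk by maximizing the partial likelihood. A minimal NumPy sketch of the objective being minimized (generic and ignoring tied event times; not the FAM3L code):

```python
import numpy as np

def cox_neg_log_partial_likelihood(beta, X, times, events):
    """Negative log partial likelihood of the Cox proportional hazards model.
    X: (n, p) patient features; times: follow-up times; events: 1 = event observed."""
    risk = X @ beta                         # linear risk score per patient
    order = np.argsort(-times)              # descending time: risk sets grow as prefixes
    risk, ev = risk[order], events[order]
    # log sum of exp(risk) over each risk set {j : t_j >= t_i}
    log_cum = np.logaddexp.accumulate(risk)
    return -np.sum(ev * (risk - log_cum))   # sum only over observed events
```

Libraries such as lifelines add tie corrections (Breslow/Efron) and gradient-based fitting on top of exactly this quantity.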
|
43
|
Carrillo-Perez F, Pizurica M, Ozawa MG, Vogel H, West RB, Kong CS, Herrera LJ, Shen J, Gevaert O. Synthetic whole-slide image tile generation with gene expression profile-infused deep generative models. Cell Rep Methods 2023; 3:100534. [PMID: 37671024 PMCID: PMC10475789 DOI: 10.1016/j.crmeth.2023.100534] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Revised: 03/10/2023] [Accepted: 06/22/2023] [Indexed: 09/07/2023]
Abstract
In this work, we propose an approach to generate whole-slide image (WSI) tiles by using deep generative models infused with matched gene expression profiles. First, we train a variational autoencoder (VAE) that learns a latent, lower-dimensional representation of multi-tissue gene expression profiles. Then, we use this representation to infuse generative adversarial networks (GANs) that generate lung and brain cortex tissue tiles, resulting in a new model that we call RNA-GAN. Tiles generated by RNA-GAN were preferred by expert pathologists compared with tiles generated using traditional GANs, and in addition, RNA-GAN needs fewer training epochs to generate high-quality tiles. Finally, RNA-GAN was able to generalize to gene expression profiles outside of the training set, showing imputation capabilities. A web-based quiz is available for users to play a game distinguishing real and synthetic tiles: https://rna-gan.stanford.edu/, and the code for RNA-GAN is available here: https://github.com/gevaertlab/RNA-GAN.
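The VAE component described above rests on two pieces that are easy to write down: the reparameterization trick for sampling the latent code, and the KL regularizer that pulls the latent distribution toward a standard normal. A generic NumPy sketch (layer sizes, the encoder/decoder networks, and the GAN half of RNA-GAN are all omitted):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) as mu + sigma * eps, which keeps the sample
    differentiable with respect to the encoder outputs mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Per-sample KL(N(mu, sigma^2) || N(0, I)), the VAE latent regularizer.
    It is zero exactly when mu = 0 and sigma = 1."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)
```

In an RNA-GAN-style setup, the latent code produced this way from a gene expression profile would then condition the tile generator.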
Affiliation(s)
- Francisco Carrillo-Perez
- Stanford Center for Biomedical Informatics Research (BMIR), Stanford University, School of Medicine, 1265 Welch Road, Stanford, CA 94305-547, USA
- Computer Engineering, Automatics and Robotics Department, University of Granada, C. Periodista Daniel Saucedo Aranda, s/n, Granada, 18014 Granada, Spain
- Marija Pizurica
- Stanford Center for Biomedical Informatics Research (BMIR), Stanford University, School of Medicine, 1265 Welch Road, Stanford, CA 94305-547, USA
- Internet Technology and Data Science Lab (IDLab), Ghent University, Technologiepark-Zwijnaarde 126, Gent, 9052 Gent, Belgium
- Michael G. Ozawa
- Department of Pathology, Stanford University School of Medicine, 300 Pasteur Dr, Palo Alto, CA 94304, USA
- Hannes Vogel
- Department of Pathology, Stanford University School of Medicine, 300 Pasteur Dr, Palo Alto, CA 94304, USA
- Robert B. West
- Department of Pathology, Stanford University School of Medicine, 300 Pasteur Dr, Palo Alto, CA 94304, USA
- Christina S. Kong
- Department of Pathology, Stanford University School of Medicine, 300 Pasteur Dr, Palo Alto, CA 94304, USA
- Luis Javier Herrera
- Computer Engineering, Automatics and Robotics Department, University of Granada, C. Periodista Daniel Saucedo Aranda, s/n, Granada, 18014 Granada, Spain
- Jeanne Shen
- Department of Pathology, Stanford University School of Medicine, 300 Pasteur Dr, Palo Alto, CA 94304, USA
- Olivier Gevaert
- Stanford Center for Biomedical Informatics Research (BMIR), Stanford University, School of Medicine, 1265 Welch Road, Stanford, CA 94305-547, USA
- Department of Biomedical Data Science, Stanford University, School of Medicine, Medical School Office Building (MSOB), 1265 Welch Road, Stanford, CA 94305-547, USA
|
44
|
Asif A, Rajpoot K, Graham S, Snead D, Minhas F, Rajpoot N. Unleashing the potential of AI for pathology: challenges and recommendations. J Pathol 2023; 260:564-577. [PMID: 37550878 PMCID: PMC10952719 DOI: 10.1002/path.6168] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2023] [Revised: 06/21/2023] [Accepted: 06/22/2023] [Indexed: 08/09/2023]
Abstract
Computational pathology is currently witnessing a surge in the development of AI techniques, offering promise for achieving breakthroughs and significantly impacting the practices of pathology and oncology. These AI methods bring with them the potential to revolutionize diagnostic pipelines as well as treatment planning and overall patient care. Numerous peer-reviewed studies reporting remarkable performance across diverse tasks serve as testimony to the potential of AI in the field. However, widespread adoption of these methods in clinical and pre-clinical settings remains a challenge. In this review article, we present a detailed analysis of the major obstacles encountered during the development of effective models and their deployment in practice. We aim to provide readers with an overview of the latest developments, assist them with insights into identifying some specific challenges that may require resolution, and suggest recommendations and potential future research directions. © 2023 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
Affiliation(s)
- Amina Asif
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Kashif Rajpoot
- School of Computer Science, University of Birmingham, Birmingham, UK
- Simon Graham
- Histofy Ltd, Birmingham Business Park, Birmingham, UK
- David Snead
- Histofy Ltd, Birmingham Business Park, Birmingham, UK
- Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Coventry, UK
- Fayyaz Minhas
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Cancer Research Centre, University of Warwick, Coventry, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Histofy Ltd, Birmingham Business Park, Birmingham, UK
- Cancer Research Centre, University of Warwick, Coventry, UK
- The Alan Turing Institute, London, UK
|
45
|
Lee M. Recent Advancements in Deep Learning Using Whole Slide Imaging for Cancer Prognosis. Bioengineering (Basel) 2023; 10:897. [PMID: 37627783 PMCID: PMC10451210 DOI: 10.3390/bioengineering10080897] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Revised: 07/21/2023] [Accepted: 07/27/2023] [Indexed: 08/26/2023] Open
Abstract
This review furnishes an exhaustive analysis of the latest advancements in deep learning techniques applied to whole slide images (WSIs) in the context of cancer prognosis, focusing specifically on publications from 2019 through 2023. The swiftly maturing field of deep learning, in combination with the burgeoning availability of WSIs, manifests significant potential for revolutionizing the predictive modeling of cancer prognosis. In light of the swift evolution and profound complexity of the field, it is essential to systematically review contemporary methodologies and critically appraise their ramifications. This review elucidates the prevailing landscape of this intersection, cataloging major developments, evaluating their strengths and weaknesses, and providing discerning insights into prospective directions. The paper presents a comprehensive overview of the field that can serve as a critical resource for researchers and clinicians, ultimately enhancing the quality of cancer care outcomes. The review's findings accentuate the need for ongoing scrutiny of recent studies in this rapidly progressing field to discern patterns, understand breakthroughs, and navigate future research trajectories.
Affiliation(s)
- Minhyeok Lee
- School of Electrical and Electronics Engineering, Chung-Ang University, Seoul 06974, Republic of Korea
|
46
|
Walke D, Micheel D, Schallert K, Muth T, Broneske D, Saake G, Heyer R. The importance of graph databases and graph learning for clinical applications. Database (Oxford) 2023; 2023:baad045. [PMID: 37428679 PMCID: PMC10332447 DOI: 10.1093/database/baad045] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2022] [Revised: 05/26/2023] [Accepted: 06/16/2023] [Indexed: 07/12/2023]
Abstract
The increasing amount and complexity of clinical data require an appropriate way of storing and analyzing those data. Traditional approaches use a tabular structure (relational databases) for storing data and thereby complicate storing and retrieving interlinked data from the clinical domain. Graph databases provide a great solution for this by storing data in a graph as nodes (vertices) that are connected by edges (links). The underlying graph structure can be used for the subsequent data analysis (graph learning). Graph learning consists of two parts: graph representation learning and graph analytics. Graph representation learning aims to reduce high-dimensional input graphs to low-dimensional representations. Then, graph analytics uses the obtained representations for analytical tasks like visualization, classification, link prediction and clustering which can be used to solve domain-specific problems. In this survey, we review current state-of-the-art graph database management systems, graph learning algorithms and a variety of graph applications in the clinical domain. Furthermore, we provide a comprehensive use case for a clearer understanding of complex graph learning algorithms. Graphical abstract.
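As a concrete instance of the link-prediction task listed in this survey, the classic common-neighbors baseline scores each absent edge by how many neighbors its endpoints share. A toy sketch (a real clinical graph would live in a graph database and use learned embeddings rather than this heuristic):

```python
from collections import defaultdict

def common_neighbors_scores(edges):
    """Score every non-edge of an undirected graph by the number of shared
    neighbors of its endpoints -- a standard link-prediction baseline."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = sorted(adj)
    scores = {}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v not in adj[u]:                     # only score missing links
                scores[(u, v)] = len(adj[u] & adj[v])
    return scores
```

In a clinical setting, the highest-scoring pairs (e.g. patient-diagnosis or drug-target nodes) are the candidate links to investigate.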
Affiliation(s)
- Daniel Walke
- Bioprocess Engineering, Otto von Guericke University, Universitätsplatz 2, Magdeburg 39106, Germany
- Database and Software Engineering Group, Otto von Guericke University, Universitätsplatz 2, Magdeburg 39106, Germany
- Daniel Micheel
- Database and Software Engineering Group, Otto von Guericke University, Universitätsplatz 2, Magdeburg 39106, Germany
- Kay Schallert
- Multidimensional Omics Analyses Group, Leibniz-Institut für Analytische Wissenschaften—ISAS—e.V., Bunsen-Kirchhoff-Straße 11, Dortmund 44139, Germany
- Thilo Muth
- Section eScience (S.3), Federal Institute for Materials Research and Testing (BAM), Unter den Eichen 87, Berlin 12205, Germany
- David Broneske
- Infrastructure and Methods, German Center for Higher Education Research and Science Studies (DZHW), Lange Laube 12, Hannover 30159, Germany
- Gunter Saake
- Database and Software Engineering Group, Otto von Guericke University, Universitätsplatz 2, Magdeburg 39106, Germany
- Robert Heyer
- Multidimensional Omics Analyses Group, Leibniz-Institut für Analytische Wissenschaften—ISAS—e.V., Bunsen-Kirchhoff-Straße 11, Dortmund 44139, Germany
- Faculty of Technology, Bielefeld University, Universitätsstraße 25, Bielefeld 33615, Germany
|
47
|
Halder A, Bhowmick C, Dutta PK, Mahadevappa M. Identification and Analysis of Imaging-Genomic Signatures to Study Recurrence in Breast Cancers. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083388 DOI: 10.1109/embc40787.2023.10339965] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
One of the main causes of breast cancer-related death is its recurrence. In this study, we investigate the association between gene expression and pathological image features to understand breast cancer recurrence. Data for a total of 172 breast cancer patients were downloaded from the TCGA-BRCA database. The dataset contained diagnostic whole slide images and RNA-seq data from 80 recurrent and 92 disease-free breast cancer patients. We performed genomic analysis on the RNA-seq data to obtain the hub genes related to recurrent breast cancer and extracted relevant pathomic features from the histopathology images. The discriminative ability of the hub genes and pathomic features was evaluated using machine learning classifiers. We used Spearman rank correlation analysis to find statistically significant associations between gene expression and pathomic features, and identified that genes related to breast cancer progression are significantly associated (adjusted p-value < 0.05) with several pathomic features. Clinical Relevance - Histopathology is the gold standard for cancer detection and provides cellular-level information. A strong association between a pathomic feature and a gene expression will help clinicians understand the cellular and molecular mechanisms of cancer for better prognosis.
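The Spearman statistic used in this study is simply the Pearson correlation computed on ranks. A minimal pure-Python sketch for untied data (ties would need average ranks, which is omitted here; `scipy.stats.spearmanr` handles the general case):

```python
def spearman_rho(x, y):
    """Spearman rank correlation of two equal-length sequences without ties:
    rank each sequence, then compute the Pearson correlation of the ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0.0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = float(rank)
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Because it depends only on ranks, the statistic captures any monotone gene-feature relationship, not just linear ones.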
|
48
|
Hao Y, Jing XY, Sun Q. Cancer survival prediction by learning comprehensive deep feature representation for multiple types of genetic data. BMC Bioinformatics 2023; 24:267. [PMID: 37380946 DOI: 10.1186/s12859-023-05392-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Accepted: 06/19/2023] [Indexed: 06/30/2023] Open
Abstract
BACKGROUND Cancer is one of the leading causes of death around the world. Accurate prediction of survival time is significant because it can help clinicians devise appropriate therapeutic schemes. Cancer data can be characterized by varied molecular features, clinical behaviors, and morphological appearances. However, the cancer heterogeneity problem usually makes patient samples with different risks (i.e., short and long survival time) inseparable, thereby causing unsatisfactory prediction results. Clinical studies have shown that genetic data tend to contain more molecular biomarkers associated with cancer, so integrating multiple types of genetic data may be a feasible way to deal with cancer heterogeneity. Although multiple types of gene data have been used in existing work, how to learn more effective features for cancer survival prediction has not been well studied. RESULTS To this end, we propose a deep learning approach to reduce the negative impact of cancer heterogeneity and improve the cancer survival prediction effect. It represents each type of genetic data as shared and specific features, which can capture the consensus and complementary information among all types of data. We collect mRNA expression, DNA methylation, and microRNA expression data for four cancers to conduct experiments. CONCLUSIONS Experimental results demonstrate that our approach substantially outperforms established integrative methods and is effective for cancer survival prediction. AVAILABILITY AND IMPLEMENTATION https://github.com/githyr/ComprehensiveSurvival.
Affiliation(s)
- Yaru Hao
- School of Computer Science, Wuhan University, Wuhan, China
- Xiao-Yuan Jing
- School of Computer Science, Wuhan University, Wuhan, China
- School of Computer, Guangdong University of Petrochemical Technology, Maoming, China
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Qixing Sun
- School of Computer Science, Wuhan University, Wuhan, China
|
49
|
Hill CS, Pandit AS. Moving towards a unified classification of glioblastomas utilizing artificial intelligence and deep machine learning integration. Front Oncol 2023; 13:1063937. [PMID: 37427111 PMCID: PMC10327552 DOI: 10.3389/fonc.2023.1063937] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Accepted: 04/24/2023] [Indexed: 07/11/2023] Open
Abstract
Glioblastoma is a brain cancer that is nearly universally fatal. Accurate prognostication and the successful application of emerging precision medicine in glioblastoma rely upon the resolution and exactitude of classification. We discuss limitations of our current classification systems and their inability to capture the full heterogeneity of the disease. We review the various layers of data that are available to substratify glioblastoma, and we discuss how artificial intelligence and machine learning tools provide the opportunity to organize and integrate these data in a nuanced way. In doing so, there is the potential to generate clinically relevant disease sub-stratifications, which could help predict neuro-oncological patient outcomes with greater certainty. We discuss limitations of this approach and how these might be overcome. The development of a comprehensive unified classification of glioblastoma would be a major advance in the field. This will require the fusion of advances in understanding glioblastoma biology with technological innovation in data processing and organization.
Affiliation(s)
- Ciaran Scott Hill
- Institute of Neurology, University College London, London, United Kingdom
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery (NHNN), London, United Kingdom
- Anand S. Pandit
- Institute of Neurology, University College London, London, United Kingdom
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery (NHNN), London, United Kingdom
|
50
|
Lee M. Deep Learning Techniques with Genomic Data in Cancer Prognosis: A Comprehensive Review of the 2021-2023 Literature. Biology (Basel) 2023; 12:893. [PMID: 37508326 PMCID: PMC10376033 DOI: 10.3390/biology12070893] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/17/2023] [Revised: 06/16/2023] [Accepted: 06/20/2023] [Indexed: 07/30/2023]
Abstract
Deep learning has brought about a significant transformation in machine learning, leading to an array of novel methodologies and consequently broadening its influence. The application of deep learning in various sectors, especially biomedical data analysis, has initiated a period filled with noteworthy scientific developments. This trend has majorly influenced cancer prognosis, where the interpretation of genomic data for survival analysis has become a central research focus. The capacity of deep learning to decode intricate patterns embedded within high-dimensional genomic data has provoked a paradigm shift in our understanding of cancer survival. Given the swift progression in this field, there is an urgent need for a comprehensive review that focuses on the most influential studies from 2021 to 2023. This review, through its careful selection and thorough exploration of dominant trends and methodologies, strives to fulfill this need. It aims to enrich our existing grasp of the application of deep learning in cancer survival analysis, while concurrently shedding light on promising directions for future research in this vibrant and rapidly proliferating field.
Affiliation(s)
- Minhyeok Lee
- School of Electrical and Electronics Engineering, Chung-Ang University, Seoul 06974, Republic of Korea
|