1
Rahaman MM, Millar EKA, Meijering E. Generalized deep learning for histopathology image classification using supervised contrastive learning. J Adv Res 2024:S2090-1232(24)00532-0. PMID: 39551131. DOI: 10.1016/j.jare.2024.11.013.
Abstract
INTRODUCTION Cancer is a leading cause of death worldwide, necessitating effective diagnostic tools for early detection and treatment. Histopathological image analysis is crucial for cancer diagnosis but is often hindered by human error and variability. This study introduces HistopathAI, a hybrid network designed for histopathology image classification, aimed at enhancing diagnostic precision and efficiency in clinical pathology. OBJECTIVES The primary goal of this study is to demonstrate that HistopathAI, leveraging supervised contrastive learning (SCL) and hybrid deep feature fusion (HDFF), can significantly improve the accuracy of histopathological image classification, including scenarios involving imbalanced datasets. METHODS HistopathAI integrates features from EfficientNetB3 and ResNet50, using HDFF to provide a rich representation of histopathology images. The framework employs a sequential methodology, transitioning from feature learning to classifier learning, mirroring the essence of contrastive learning with the aim of producing superior feature representations. The model combines SCL for feature representation with cross-entropy (CE) loss for classification. We evaluated HistopathAI across seven publicly available datasets and one private dataset, covering various histopathology domains. RESULTS HistopathAI achieved state-of-the-art classification accuracy across all datasets, demonstrating superior performance in both binary and multiclass classification tasks. Statistical testing confirmed that HistopathAI's performance is significantly better than baseline models, ensuring robust and reliable improvements. CONCLUSION HistopathAI offers a robust tool for histopathology image classification, enhancing diagnostic accuracy and supporting the transition to digital pathology. This framework has the potential to improve cancer diagnosis and patient outcomes, paving the way for broader clinical application. 
The code is available on https://github.com/Mamunur-20/HistopathAI.
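Editorial note: the SCL objective at the heart of this entry can be sketched compactly. Below is a minimal NumPy illustration of the supervised contrastive loss (after Khosla et al.), not the HistopathAI implementation; the embeddings, labels, and temperature are toy values chosen for illustration.

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over a batch of embeddings z (n, d)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize embeddings
    sim = z @ z.T / tau                                # scaled cosine similarities
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)
    losses = []
    for i in range(n):
        pos = not_self[i] & (labels == labels[i])      # positives: same class, not self
        if not pos.any():
            continue
        # log-softmax over all other samples, averaged over the positives
        log_prob = sim[i] - np.log(np.exp(sim[i][not_self[i]]).sum())
        losses.append(-log_prob[pos].mean())
    return float(np.mean(losses))

labels = np.array([0, 0, 1, 1])
# toy embeddings where same-class samples point in the same direction ...
z_good = np.array([[1.0, 0.1], [0.9, 0.2], [-1.0, 0.1], [-0.9, -0.1]])
# ... and the same points paired so that same-class samples are far apart
z_bad = np.array([[1.0, 0.1], [-1.0, 0.1], [0.9, 0.2], [-0.9, -0.1]])
loss_good = supcon_loss(z_good, labels)
loss_bad = supcon_loss(z_bad, labels)   # higher: positives are dissimilar
```

The loss drops as same-class embeddings cluster, which is the property the paper exploits before training the cross-entropy classifier head.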
Affiliation(s)
- Md Mamunur Rahaman
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia.
- Ewan K A Millar
- Department of Anatomical Pathology, NSW Health Pathology, St. George Hospital, Sydney NSW 2217, Australia; St. George and Sutherland Clinical School, University of New South Wales, Sydney NSW 2052, Australia; Faculty of Medicine & Health Sciences, Western Sydney University, Sydney NSW 2560, Australia.
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia.
2
Liu Q, Zhang X, Jiang X, Zhang C, Li J, Zhang X, Yang J, Yu N, Zhu Y, Liu J, Xie F, Li Y, Hao Y, Feng Y, Wang Q, Gao Q, Zhang W, Zhang T, Dong T, Cui B. A Histopathologic Image Analysis for the Classification of Endocervical Adenocarcinoma Silva Patterns Depend on Weakly Supervised Deep Learning. Am J Pathol 2024; 194:735-746. PMID: 38382842. DOI: 10.1016/j.ajpath.2024.01.016.
Abstract
Twenty-five percent of cervical cancers are classified as endocervical adenocarcinomas (EACs), which comprise a highly heterogeneous group of tumors. A histopathologic risk stratification system known as the Silva pattern system was developed based on morphology. However, accurately classifying such patterns can be challenging. The study objective was to develop a deep learning pipeline (Silva3-AI) that automatically analyzes whole slide image-based histopathologic images and identifies Silva patterns with high accuracy. Initially, a total of 202 patients with EACs and histopathologic slides were obtained from Qilu Hospital of Shandong University for developing and internally testing the Silva3-AI model. Subsequently, an additional 161 patients and slides were collected from seven other medical centers for independent testing. The Silva3-AI model was developed using a vision transformer and recurrent neural network architecture, utilizing multi-magnification patches, and its performance was evaluated based on a class-specific area under the receiver-operating characteristic curve. Silva3-AI achieved a class-specific area under the receiver-operating characteristic curve of 0.947 for Silva A, 0.908 for Silva B, and 0.947 for Silva C on the independent test set. Notably, the performance of Silva3-AI was consistent with that of professional pathologists with 10 years' diagnostic experience. Furthermore, the visualization of prediction heatmaps facilitated the identification of tumor microenvironment heterogeneity, which is known to contribute to variations in Silva patterns.
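Editorial note: the class-specific AUCs quoted above are one-vs-rest ROC AUCs. As a hedged illustration (not the authors' code), such an AUC can be computed directly from scores and labels via the Mann-Whitney rank statistic; the per-slide probabilities below are invented.

```python
import numpy as np

def ovr_auc(scores, labels, cls):
    """One-vs-rest ROC AUC for class `cls` via the Mann-Whitney U statistic."""
    s = np.asarray(scores, dtype=float)
    pos = np.asarray(labels) == cls
    n_pos, n_neg = pos.sum(), (~pos).sum()
    order = np.argsort(s)
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    # tie correction: tied scores share their mean rank
    for v in np.unique(s):
        ranks[s == v] = ranks[s == v].mean()
    u = ranks[pos].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# toy per-slide probabilities for "Silva A" (class 0) against the rest
labels = np.array([0, 0, 0, 1, 2, 1])
scores = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.7])  # hypothetical P(Silva A)
auc_a = ovr_auc(scores, labels, cls=0)
```

The same routine, run once per Silva class, yields the three class-specific AUCs the abstract reports.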
Affiliation(s)
- Qingqing Liu
- Cheeloo College of Medicine, Shandong University, Jinan City, China
- Xiaofang Zhang
- Department of Pathology, School of Basic Medical Sciences and Qilu Hospital, Shandong University, Jinan City, China
- Xuji Jiang
- Cheeloo College of Medicine, Shandong University, Jinan City, China
- Chunyan Zhang
- Department of Pathology, Affiliated Hospital of Jining Medical University of Shandong, Jining City, China
- Jiamei Li
- Department of Pathology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan City, China
- Xuedong Zhang
- Department of Pathology, Liaocheng People's Hospital, Liaocheng City, China
- Jingyan Yang
- Department of Pathology, The Second Hospital of Shandong University, Jinan City, China
- Ning Yu
- Department of Pathology, Binzhou Medical University Hospital, Binzhou City, China
- Yongcun Zhu
- Department of Pathology, Weihai Municipal Hospital of Shandong University, Weihai City, China
- Jing Liu
- Department of Pathology, Jining No. 1 People's Hospital, Jining City, China
- Fengxiang Xie
- Department of Pathology, KingMed Diagnostics, Jinan City, China
- Yawen Li
- Department of Pathology, School of Basic Medical Sciences and Qilu Hospital, Shandong University, Jinan City, China
- Yiping Hao
- Cheeloo College of Medicine, Shandong University, Jinan City, China
- Yuan Feng
- Cheeloo College of Medicine, Shandong University, Jinan City, China
- Qi Wang
- Department of Obstetrics and Gynecology, Shandong Provincial Qianfoshan Hospital, Shandong University, Jinan City, China
- Qun Gao
- Department of Obstetrics and Gynecology, The Affiliated Hospital of Qingdao University, Qingdao City, China
- Wenjing Zhang
- Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan City, China
- Teng Zhang
- Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan City, China
- Taotao Dong
- Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan City, China.
- Baoxia Cui
- Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan City, China.
3
Sharkas M, Attallah O. Color-CADx: a deep learning approach for colorectal cancer classification through triple convolutional neural networks and discrete cosine transform. Sci Rep 2024; 14:6914. PMID: 38519513. PMCID: PMC10959971. DOI: 10.1038/s41598-024-56820-w.
Abstract
Colorectal cancer (CRC) exhibits a significant death rate that consistently impacts human lives worldwide. Histopathological examination is the standard method for CRC diagnosis. However, it is complicated, time-consuming, and subjective. Computer-aided diagnostic (CAD) systems using digital pathology can help pathologists diagnose CRC faster and more accurately than manual histopathology examination. Deep learning algorithms, especially convolutional neural networks (CNNs), are advocated for the diagnosis of CRC. Nevertheless, most previous CAD systems obtained features from a single CNN, and these features are of huge dimension; they also relied on spatial information only to achieve classification. In this paper, a CAD system called "Color-CADx" is proposed for CRC recognition. Different CNNs, namely ResNet50, DenseNet201, and AlexNet, are used for end-to-end classification at different training-testing ratios. Moreover, features are extracted from these CNNs and reduced using the discrete cosine transform (DCT). DCT is also utilized to acquire a spectral representation, which is then used to further select a reduced set of deep features. Furthermore, the DCT coefficients obtained in the previous step are concatenated, and the analysis of variance (ANOVA) feature selection approach is applied to choose significant features. Finally, machine learning classifiers are employed for CRC classification. Two publicly available datasets were investigated: the NCT-CRC-HE-100K dataset and the Kather_texture_2016_image_tiles dataset. The highest achieved accuracy reached 99.3% for NCT-CRC-HE-100K and 96.8% for Kather_texture_2016_image_tiles. DCT and ANOVA successfully lowered feature dimensionality, thus reducing complexity. Color-CADx has demonstrated efficacy in terms of accuracy, surpassing the most recent advancements.
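Editorial note: the two reduction steps in this abstract, DCT for spectral energy compaction followed by ANOVA F-scores for ranking features, can be sketched as below. This is an illustrative NumPy implementation under stated assumptions, not Color-CADx's pipeline; the "deep features" are synthetic random data.

```python
import numpy as np

def dct2(x):
    """Orthonormal DCT-II of a 1-D feature vector; energy compacts into low indices."""
    n = len(x)
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    basis = np.cos(np.pi * (m + 0.5) * k / n)
    out = basis @ x
    out[0] *= np.sqrt(1 / n)
    out[1:] *= np.sqrt(2 / n)
    return out

def anova_f(features, y):
    """One-way ANOVA F-score per feature column, for ranking/selection."""
    classes = np.unique(y)
    overall = features.mean(axis=0)
    ss_between = sum((y == c).sum() * (features[y == c].mean(axis=0) - overall) ** 2
                     for c in classes)
    ss_within = sum(((features[y == c] - features[y == c].mean(axis=0)) ** 2).sum(axis=0)
                    for c in classes)
    df_b, df_w = len(classes) - 1, len(y) - len(classes)
    return (ss_between / df_b) / (ss_within / df_w)

rng = np.random.default_rng(0)
# synthetic "deep features": 40 samples x 8 dims; dim 0 separates the two classes
y = np.repeat([0, 1], 20)
feats = rng.normal(size=(40, 8))
feats[:, 0] += 3 * y
compact = np.array([dct2(f)[:4] for f in feats])  # keep first 4 DCT coefficients
best = int(np.argmax(anova_f(feats, y)))          # ANOVA picks the separating dim
```

Truncating the DCT halves the feature dimension here; ANOVA then ranks what remains, mirroring the abstract's "reduce, then select" order.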
Affiliation(s)
- Maha Sharkas
- Electronics and Communications Engineering Department, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Egypt
- Omneya Attallah
- Electronics and Communications Engineering Department, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Egypt.
- Wearables, Biosensing, and Biosignal Processing Laboratory, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt.
4
Jing Y, Li C, Du T, Jiang T, Sun H, Yang J, Shi L, Gao M, Grzegorzek M, Li X. A comprehensive survey of intestine histopathological image analysis using machine vision approaches. Comput Biol Med 2023; 165:107388. PMID: 37696178. DOI: 10.1016/j.compbiomed.2023.107388.
Abstract
Colorectal cancer (CRC) is currently one of the most common and deadly cancers: it is the third most common malignancy and the fourth leading cause of cancer death worldwide, and the second most frequent cause of cancer-related deaths in the United States and other developed countries. Histopathological images contain rich phenotypic information and play an indispensable role in the diagnosis and treatment of CRC. To improve the objectivity and efficiency of intestinal histopathology image analysis, computer-aided diagnosis (CAD) methods based on machine learning (ML) are widely applied. In this investigation, we conduct a comprehensive study of recent ML-based methods for image analysis of intestinal histopathology. First, we discuss commonly used datasets alongside the medical background of intestinal histopathology. Second, we introduce both traditional ML methods commonly used in intestinal histopathology and deep learning (DL) methods. Then, we provide a comprehensive review of recent developments in ML methods for segmentation, classification, detection, and recognition, among other tasks, for histopathological images of the intestine. Finally, the existing methods are appraised and their application prospects in this field are discussed.
Affiliation(s)
- Yujie Jing
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China.
- Tianming Du
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Tao Jiang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
- Hongzan Sun
- Shengjing Hospital of China Medical University, Shenyang, China
- Jinzhu Yang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Liyu Shi
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Minghe Gao
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
- Xiaoyan Li
- Cancer Hospital of China Medical University, Liaoning Cancer Hospital, Shenyang, China.
5
Gu Q, Meroueh C, Levernier J, Kroneman T, Flotte T, Hart S. Using an anomaly detection approach for the segmentation of colorectal cancer tumors in whole slide images. J Pathol Inform 2023; 14:100336. PMID: 37811333. PMCID: PMC10550750. DOI: 10.1016/j.jpi.2023.100336.
Abstract
Colorectal cancer (CRC) is the second most commonly diagnosed cancer in the United States. Genetic testing is critical in assisting the early detection of CRC and the selection of individualized treatment plans, which have been shown to improve the survival rate of CRC patients. The tissue slide review (TSR), a tumor tissue macro-dissection procedure, is a required pre-analytical step in genetic testing. Due to the subjective nature of the process, major discrepancies in CRC diagnostics by pathologists are reported, and metrics for quality are often only qualitative. Progressive context encoder anomaly detection (P-CEAD) is an anomaly detection approach to detect tumor tissue in whole slide images (WSIs), since tumor tissue is, by its nature, an anomaly. P-CEAD-based CRC tumor segmentation achieves 71% ± 26% sensitivity, 92% ± 7% specificity, and a 63% ± 23% F1 score. The proposed approach provides an automated CRC tumor segmentation pipeline whose quality is quantitatively reproducible, in contrast with the conventional manual tumor segmentation procedure.
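Editorial note: the sensitivity, specificity, and F1 figures above come from standard confusion-matrix arithmetic on binary tumor masks. A minimal sketch (toy arrays, not the paper's data):

```python
import numpy as np

def seg_metrics(pred, truth):
    """Sensitivity, specificity, and F1 for a binary tumor mask vs. ground truth."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = (pred & truth).sum()    # tumor pixels correctly flagged
    tn = (~pred & ~truth).sum()  # normal pixels correctly passed
    fp = (pred & ~truth).sum()
    fn = (~pred & truth).sum()
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return sens, spec, f1

# flattened toy masks: first four positions are tumor in the ground truth
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0])
pred = np.array([1, 1, 1, 0, 0, 0, 0, 1])
sens, spec, f1 = seg_metrics(pred, truth)
```

The paper's ± ranges reflect these per-slide values averaged over a test set.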
Affiliation(s)
- Qiangqiang Gu
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN 55901, United States
- Chady Meroueh
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN 55901, United States
- Jacob Levernier
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN 55901, United States
- Trynda Kroneman
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN 55901, United States
- Thomas Flotte
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN 55901, United States
- Steven Hart
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN 55901, United States
- Department of Quantitative Health Science, Mayo Clinic, Rochester, MN 55901, United States
6
Zheng T, Chen W, Li S, Quan H, Zou M, Zheng S, Zhao Y, Gao X, Cui X. Learning how to detect: A deep reinforcement learning method for whole-slide melanoma histopathology images. Comput Med Imaging Graph 2023; 108:102275. PMID: 37567046. DOI: 10.1016/j.compmedimag.2023.102275.
Abstract
Cutaneous melanoma represents one of the most life-threatening malignancies. Histopathological image analysis serves as a vital tool for early melanoma detection. Deep neural network (DNN) models are frequently employed to aid pathologists in enhancing the efficiency and accuracy of diagnoses. However, due to the paucity of well-annotated, high-resolution, whole-slide histopathology image (WSI) datasets, WSIs are typically fragmented into numerous patches during the model training and testing stages. This process disregards the inherent interconnectedness among patches, potentially impeding the models' performance. Additionally, the presence of excess, non-contributing patches extends processing times and introduces substantial computational burdens. To mitigate these issues, we draw inspiration from the clinical decision-making processes of dermatopathologists to propose an innovative, weakly supervised deep reinforcement learning framework, titled Fast medical decision-making in melanoma histopathology images (FastMDP-RL). This framework expedites model inference by reducing the number of irrelevant patches identified within WSIs. FastMDP-RL integrates two DNN-based agents: the search agent (SeAgent) and the decision agent (DeAgent). The SeAgent initiates actions, steered by the image features observed in the current viewing field at various magnifications. Simultaneously, the DeAgent provides labeling probabilities for each patch. We utilize multi-instance learning (MIL) to construct a teacher-guided model (MILTG), serving a dual purpose: rewarding the SeAgent and guiding the DeAgent. Our evaluations were conducted using two melanoma datasets: the publicly accessible TCIA-CM dataset and the proprietary MELSC dataset. Our experimental findings affirm FastMDP-RL's ability to expedite inference and accurately predict WSIs, even in the absence of pixel-level annotations. 
Moreover, our research investigates the WSI-based interactive environment, encompassing the design of agents, state and reward functions, and feature extractors suitable for melanoma tissue images. This investigation offers valuable insights and references for researchers engaged in related studies. The code is available at: https://github.com/titizheng/FastMDP-RL.
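Editorial note: FastMDP-RL's agents are trained networks, and nothing below reproduces them. As a loose, hypothetical illustration of the underlying idea only, visit promising patches first and stop once the slide-level call is confident, here is a toy early-stopping inference loop; all names, scores, and thresholds are invented.

```python
import numpy as np

def fast_wsi_inference(patch_scores, patch_probs, conf=0.95):
    """Toy "search then decide" loop: visit patches in order of a cheap
    low-magnification saliency score and stop as soon as the running
    slide-level probability is confident, skipping the remaining patches."""
    order = np.argsort(patch_scores)[::-1]   # most salient patches first
    seen = []
    for idx in order:
        seen.append(patch_probs[idx])
        p = float(np.mean(seen))             # running slide-level estimate
        if p > conf or p < 1 - conf:         # confident either way: stop early
            return p > 0.5, len(seen)
    return p > 0.5, len(seen)

# 100 patches; only the 5 salient ones carry high tumor probability
scores = np.zeros(100)
scores[:5] = 1.0
probs = np.full(100, 0.1)
probs[:5] = 0.99
label, n_visited = fast_wsi_inference(scores, probs)
```

The point mirrors the abstract's motivation: a slide-level decision can often be reached after a small fraction of the patches.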
Affiliation(s)
- Tingting Zheng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Weixing Chen
- Shenzhen College of Advanced Technology, University of the Chinese Academy of Sciences, Beijing, China
- Shuqin Li
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Hao Quan
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Mingchen Zou
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Song Zheng
- National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Yue Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Xinghua Gao
- National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Xiaoyu Cui
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China.
7
Cooper M, Ji Z, Krishnan RG. Machine learning in computational histopathology: Challenges and opportunities. Genes Chromosomes Cancer 2023; 62:540-556. PMID: 37314068. DOI: 10.1002/gcc.23177.
Abstract
Digital histopathological images, high-resolution images of stained tissue samples, are a vital tool for clinicians to diagnose and stage cancers. The visual analysis of patient state based on these images is an important part of the oncology workflow. Although pathology workflows have historically been conducted in laboratories under a microscope, the increasing digitization of histopathological images has led to their analysis on computers in the clinic. The last decade has seen the emergence of machine learning, and deep learning in particular, as a powerful set of tools for the analysis of histopathological images. Machine learning models trained on large datasets of digitized histopathology slides have resulted in automated models for the prediction and stratification of patient risk. In this review, we provide context for the rise of such models in computational histopathology, highlight the clinical tasks they have found success in automating, discuss the various machine learning techniques that have been applied to this domain, and underscore open problems and opportunities.
Affiliation(s)
- Michael Cooper
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- University Health Network, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Zongliang Ji
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Rahul G Krishnan
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
8
Baidar Bakht A, Javed S, Gilani SQ, Karki H, Muneeb M, Werghi N. DeepBLS: Deep Feature-Based Broad Learning System for Tissue Phenotyping in Colorectal Cancer WSIs. J Digit Imaging 2023; 36:1653-1662. PMID: 37059892. PMCID: PMC10406762. DOI: 10.1007/s10278-023-00797-x.
Abstract
Tissue phenotyping is a fundamental step in computational pathology for the analysis of the tumor micro-environment in whole slide images (WSIs). Automatic tissue phenotyping in WSIs of colorectal cancer (CRC) assists pathologists in better cancer grading and prognostication. In this paper, we propose a novel algorithm for the identification of distinct tissue components in colon cancer histology images by blending a broad learning system with deep feature extraction. First, features are extracted from a pre-trained VGG19 network and transformed into a mapped feature space, from which enhancement nodes are generated. Utilizing both the mapped features and the enhancement nodes, the proposed algorithm classifies seven distinct tissue components: stroma, tumor, complex stroma, necrotic, normal benign, lymphocytes, and smooth muscle. To validate the proposed model, experiments are performed on two publicly available colorectal cancer histology datasets. Our approach achieves a remarkable performance boost, surpassing existing state-of-the-art methods by (1.3% AvTP, 2% F1) and (7% AvTP, 6% F1) on CRCD-1 and CRCD-2, respectively.
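Editorial note: the broad learning system (BLS) idea referenced here, random mapped feature nodes plus nonlinear enhancement nodes, with output weights solved in closed form, can be sketched on toy data. This is not the authors' pipeline (which feeds VGG19 features into the BLS); the data, node counts, and ridge parameter below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def bls_fit(X, Y, n_map=20, n_enh=40, reg=1e-2):
    """Minimal broad-learning-system fit: random mapped features, tanh
    enhancement nodes, and output weights from closed-form ridge regression."""
    Wm = rng.normal(size=(X.shape[1], n_map))
    Z = X @ Wm                                   # mapped feature nodes
    We = rng.normal(size=(n_map, n_enh))
    H = np.tanh(Z @ We)                          # enhancement nodes
    A = np.hstack([Z, H])
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return Wm, We, W

def bls_predict(X, Wm, We, W):
    Z = X @ Wm
    A = np.hstack([Z, np.tanh(Z @ We)])
    return A @ W

# toy 2-class problem standing in for tissue-component features
X = rng.normal(size=(100, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[y]                                 # one-hot targets
Wm, We, W = bls_fit(X, Y)
acc = (bls_predict(X, Wm, We, W).argmax(1) == y).mean()
```

The appeal of the closed-form solve is that no iterative training is needed; widening the network only grows the linear system.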
Affiliation(s)
- Ahsan Baidar Bakht
- Electrical and Computer Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
- Sajid Javed
- Electrical and Computer Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
- Syed Qasim Gilani
- Department of Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, 33431 USA
- Hamad Karki
- Mechanical Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
- Muhammad Muneeb
- Electrical and Computer Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
- Naoufel Werghi
- Electrical and Computer Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
9
Tong Y, Udupa JK, Chong E, Winchell N, Sun C, Zou Y, Schuster SJ, Torigian DA. Prediction of lymphoma response to CAR T cells by deep learning-based image analysis. PLoS One 2023; 18:e0282573. PMID: 37478073. PMCID: PMC10361488. DOI: 10.1371/journal.pone.0282573.
Abstract
Clinical prognostic scoring systems have limited utility for predicting treatment outcomes in lymphomas. We therefore tested the feasibility of a deep-learning (DL)-based image analysis methodology on pre-treatment diagnostic computed tomography (dCT), low-dose CT (lCT), and 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) images and rule-based reasoning to predict treatment response to chimeric antigen receptor (CAR) T-cell therapy in B-cell lymphomas. Pre-treatment images of 770 lymph node lesions from 39 adult patients with B-cell lymphomas treated with CD19-directed CAR T-cells were analyzed. Transfer learning using a pre-trained neural network model, then retrained for a specific task, was used to predict lesion-level treatment responses from separate dCT, lCT, and FDG-PET images. Patient-level response analysis was performed by applying rule-based reasoning to lesion-level prediction results. Patient-level response prediction was also compared to prediction based on the international prognostic index (IPI) for diffuse large B-cell lymphoma. The average accuracy of lesion-level response prediction based on single whole dCT slice-based input was 0.82 ± 0.05, with sensitivity 0.87 ± 0.07, specificity 0.77 ± 0.12, and AUC 0.91 ± 0.03. Patient-level response prediction from dCT, using the "Majority 60%" rule, had accuracy 0.81, sensitivity 0.75, and specificity 0.88 using 12-month post-treatment patient response as the reference standard and outperformed response prediction based on IPI risk factors (accuracy 0.54, sensitivity 0.38, and specificity 0.61 (p = 0.046)). Prediction of treatment outcome in B-cell lymphomas from pre-treatment medical images using DL-based image analysis and rule-based reasoning is feasible. This approach can potentially provide clinically useful prognostic information for decision-making in advance of initiating CAR T-cell therapy.
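Editorial note: the abstract names a "Majority 60%" rule for rolling lesion-level calls up to a patient-level call but does not spell it out; one plausible reading is sketched below. The lesion predictions are invented, and the threshold interpretation is an assumption, not the authors' specification.

```python
def patient_response(lesion_preds, threshold=0.6):
    """Rule-based aggregation: call the patient a responder when at least
    `threshold` of their lesions are predicted to respond ("Majority 60%")."""
    frac = sum(lesion_preds) / len(lesion_preds)
    return frac >= threshold

responder = patient_response([1, 1, 1, 0, 1])      # 80% of lesions respond
non_responder = patient_response([1, 0, 0, 1, 0])  # only 40% respond
```

The rule-based layer is deliberately transparent: the threshold can be tuned against the 12-month reference standard without retraining the lesion-level networks.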
Affiliation(s)
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Emeline Chong
- Lymphoma Program, Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Nicole Winchell
- Lymphoma Program, Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Changjian Sun
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Yongning Zou
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Stephen J Schuster
- Lymphoma Program, Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Lymphoma Program, Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
10
Kazemi A, Gharib M, Mohamadian Roshan N, Taraz Jamshidi S, Stögbauer F, Eslami S, Schüffler PJ. Assessment of the Tumor-Stroma Ratio and Tumor-Infiltrating Lymphocytes in Colorectal Cancer: Inter-Observer Agreement Evaluation. Diagnostics (Basel) 2023; 13:2339. PMID: 37510083. PMCID: PMC10378655. DOI: 10.3390/diagnostics13142339.
Abstract
BACKGROUND To implement a new marker in clinical practice, reliability assessment, validation, and standardization of its use are required. This study evaluated the reliability of tumor-infiltrating lymphocyte (TIL) and tumor-stroma ratio (TSR) assessment through conventional microscopy by comparing observers' estimations. METHODS Intratumoral stromal TILs, tumor-front stromal TILs, and the TSR were assessed by three pathologists using 86 CRC H&E slides. The TSR and TILs were categorized using one and four different proposed cutoff systems, respectively, and agreement was assessed using the intraclass correlation coefficient (ICC) and Cohen's kappa statistics. Pairwise evaluation of agreement was performed using the Fleiss kappa statistic and the concordance rate, and was visualized with Bland-Altman plots. To investigate the association between the biomarkers and patient data, Pearson's correlation analysis was applied. RESULTS For intratumoral stromal TILs, an ICC of 0.505 (95% CI: 0.35-0.64) was obtained, kappa values were in the range of 0.21 to 0.38, and concordance rates in the range of 0.61 to 0.72. For tumor-front TILs, the ICC was 0.52 (95% CI: 0.32-0.67), the overall kappa value ranged from 0.24 to 0.30, and the concordance rate ranged from 0.66 to 0.72. For the TSR, the ICC was 0.48 (95% CI: 0.35-0.60), the kappa value was 0.49, and the concordance rate was 0.76. We observed a significant correlation between tumor grade and the median TSR (0.29 (95% CI: 0.032-0.51), p = 0.03). CONCLUSIONS The agreement between pathologists in estimating these markers corresponds to poor-to-moderate agreement; implementing immune scores in daily practice requires further improvement in inter-observer agreement.
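Editorial note: the kappa and concordance figures above follow standard definitions. A minimal sketch of Cohen's kappa and the concordance rate for two raters; the category calls below are invented, not the study's data.

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa between two raters' categorical calls:
    observed agreement corrected for agreement expected by chance."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.unique(np.concatenate([a, b]))
    po = (a == b).mean()                                        # observed agreement
    pe = sum((a == c).mean() * (b == c).mean() for c in cats)   # chance agreement
    return (po - pe) / (1 - pe)

# toy TIL category calls (a 4-level cutoff system) from two pathologists
r1 = [0, 1, 2, 2, 3, 0, 1, 1, 2, 3]
r2 = [0, 1, 2, 1, 3, 0, 2, 1, 2, 2]
kappa = cohens_kappa(r1, r2)
concordance = float(np.mean(np.asarray(r1) == np.asarray(r2)))
```

On this toy data the raters agree on 7 of 10 slides, yet kappa is noticeably lower than the raw concordance, which is exactly why the study reports both.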
Affiliation(s)
- Azar Kazemi
- Institute of General and Surgical Pathology, Technical University of Munich, 81675 Munich, Germany
- Department of Medical Informatics, School of Medicine, Mashhad University of Medical Sciences, Mashhad 9177948564, Iran
- Masoumeh Gharib
- Department of Pathology, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad 9137913316, Iran
- Nema Mohamadian Roshan
- Department of Pathology, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad 9137913316, Iran
- Shirin Taraz Jamshidi
- Department of Pathology, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad 9137913316, Iran
- Fabian Stögbauer
- Institute of General and Surgical Pathology, Technical University of Munich, 81675 Munich, Germany
- Saeid Eslami
- Department of Medical Informatics, School of Medicine, Mashhad University of Medical Sciences, Mashhad 9177948564, Iran
- Pharmaceutical Sciences Research Center, Institute of Pharmaceutical Technology, Mashhad University of Medical Sciences, Mashhad 9177948954, Iran
- Department of Medical Informatics, University of Amsterdam, 1105 AZ Amsterdam, The Netherlands
- Peter J Schüffler
- Institute of General and Surgical Pathology, Technical University of Munich, 81675 Munich, Germany
11
Ke J, Shen Y, Lu Y, Guo Y, Shen D. Mine local homogeneous representation by interaction information clustering with unsupervised learning in histopathology images. Comput Methods Programs Biomed 2023; 235:107520. [PMID: 37031665] [DOI: 10.1016/j.cmpb.2023.107520]
Abstract
BACKGROUND AND OBJECTIVE The success of data-driven deep learning for histopathology images often depends on high-quality training sets and fine-grained annotations. However, as tumors are heterogeneous and annotations are expensive, unsupervised learning approaches are desirable to achieve full automation. METHODS In this paper, an Interaction Information Clustering (IIC) method is proposed to extract locally homogeneous features in mutually exclusive clusters. Trained in an unsupervised paradigm, the framework learns invariant information from multiple spatially adjacent regions for improved classification. Additionally, an adaptive Conditional Random Field (CRF) model is incorporated to detect spatially adjacent image patches of high morphological homogeneity in an offset-constraint-free manner. RESULTS Empirically, the proposed model achieves an improvement of 11.4% in downstream patch-level classification accuracy compared with state-of-the-art unsupervised learning approaches. CONCLUSION Furthermore, evaluated on our clinically collected histopathology whole-slide images, the proposed model shows high consistency in tissue distribution with well-trained supervised learning, which is of considerable diagnostic significance in clinical practice.
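The core quantity behind mutual-information clustering of paired adjacent patches can be illustrated as follows. This is a generic sketch of the invariant-information objective for two soft cluster assignments, not the authors' implementation (the function name and epsilon handling are ours); training maximizes this value so adjacent patches land in the same cluster:

```python
import numpy as np

def iic_mutual_information(z1, z2, eps=1e-12):
    """Mutual information between cluster assignments of paired patches.

    z1, z2: (n, k) soft cluster assignments (rows sum to 1) for pairs of
    spatially adjacent patches; an unsupervised objective maximizes this.
    """
    p = z1.T @ z2 / len(z1)            # joint distribution over cluster pairs
    p = (p + p.T) / 2.0                # symmetrize: the pairing is unordered
    p = np.clip(p, eps, None)
    p /= p.sum()
    pi = p.sum(axis=1, keepdims=True)  # marginal of the first assignment
    pj = p.sum(axis=0, keepdims=True)  # marginal of the second assignment
    return float((p * (np.log(p) - np.log(pi) - np.log(pj))).sum())
```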
Affiliation(s)
- Jing Ke
- School of Electronic Information and Electrical Engineering, Shanghai 200240, China.
- Yiqing Shen
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Yizhou Lu
- Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210, China
- Yi Guo
- School of Computer, Data and Mathematical Sciences, Western Sydney University, Penrith, NSW 2751, Australia
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200230, China; Shanghai Clinical Research and Trial Center, Shanghai 201210, China
12
Su R, He H, Sun C, Wang X, Liu X. Prediction of drug-induced hepatotoxicity based on histopathological whole slide images. Methods 2023; 212:31-38. [PMID: 36706825] [DOI: 10.1016/j.ymeth.2023.01.005]
Abstract
The liver is an important metabolic organ in the human body and is sensitive to toxic chemicals and drugs. Adverse reactions caused by drug hepatotoxicity damage the liver, and hepatotoxicity is the leading cause of withdrawal of approved drugs from the market. It is therefore of great significance to identify liver toxicity as early as possible in the drug development process. In this study, we developed a predictive model for drug hepatotoxicity based on histopathological whole slide images (WSIs), which are a by-product of drug experiments and have received little attention. To better represent the WSIs, we constructed a graph for each WSI by dividing it into small patches, taking sampled patches as nodes and using the correlation coefficients between node features as the edges of the graph structure. A WSI-level graph convolutional network (GCN) was then built to effectively extract the node information of the graph and predict the toxicity. In addition, we introduced a gated attention global context vector (gaGCV) to incorporate global context so that node features contain more comprehensive information. The results, validated on rat liver in vivo data from Open TG-GATEs, show that using WSIs to predict toxicity is feasible and effective.
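The graph construction step described above (sampled patches as nodes, feature correlations as edges) can be sketched as follows; the threshold value and function name are illustrative assumptions, not values from the paper:

```python
import numpy as np

def build_wsi_graph(patch_features, threshold=0.8):
    """Build a WSI-level graph: sampled patches are nodes, and an edge joins
    two patches whose feature vectors are strongly correlated.

    patch_features: (n_patches, d) array of per-patch feature vectors.
    Returns a symmetric {0, 1} adjacency matrix without self-loops.
    """
    corr = np.corrcoef(patch_features)            # (n, n) Pearson correlations
    adj = (np.abs(corr) >= threshold).astype(int) # keep only strong links
    np.fill_diagonal(adj, 0)                      # drop self-loops
    return adj
```

The resulting adjacency matrix is what a GCN would consume alongside the node feature matrix.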
Affiliation(s)
- Ran Su
- School of Computer Software, College of Intelligence and Computing, Tianjin University, China
- Hao He
- School of Computer Software, College of Intelligence and Computing, Tianjin University, China
- Xiaomin Wang
- National Clinical Research Center for Infectious Diseases, Shenzhen, Guangdong, China
- Xiaofeng Liu
- Key Laboratory of Breast Cancer Prevention and Therapy, Ministry of Education, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center of Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
13
Su L, Wang Z, Shi Y, Li A, Wang M. Local augmentation based consistency learning for semi-supervised pathology image classification. Comput Methods Programs Biomed 2023; 232:107446. [PMID: 36871546] [DOI: 10.1016/j.cmpb.2023.107446]
Abstract
BACKGROUND AND OBJECTIVE Labeling pathology images is often costly and time-consuming, which is detrimental for supervised pathology image classification, as it relies heavily on sufficient labeled data during training. Exploring semi-supervised methods based on image augmentation and consistency regularization may effectively alleviate this problem. Nevertheless, traditional image-based augmentation (e.g., flipping) produces only a single enhancement of an image, whereas combining multiple image sources may mix unimportant image regions, resulting in poor performance. In addition, the regularization losses used in these augmentation approaches typically enforce consistency of image-level predictions and simply require each prediction of an augmented image to be bilaterally consistent, which may force pathology image features with better predictions to be wrongly aligned towards features with worse predictions. METHODS To tackle these problems, we propose a novel semi-supervised method called Semi-LAC for pathology image classification. Specifically, we first present a local augmentation technique that randomly applies a different augmentation to each local pathology patch, which can boost the diversity of pathology images and avoid mixing in unimportant regions from other images. Moreover, we propose a directional consistency loss to enforce consistency of both features and prediction results, thus improving the ability of the network to obtain robust representations and achieve accurate predictions. RESULTS The proposed method was evaluated on the Bioimaging2015 and BACH datasets, and extensive experiments show the superior performance of our Semi-LAC compared with state-of-the-art methods for pathology image classification.
CONCLUSIONS We conclude that the Semi-LAC method effectively reduces the cost of annotating pathology images and enhances the ability of classification networks to represent pathology images through local augmentation and the directional consistency loss.
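The local augmentation idea, applying an independently chosen augmentation to each patch of a single image, can be sketched as below; the patch size and the particular set of augmentations are illustrative choices of ours, not the paper's:

```python
import numpy as np

def local_augment(image, patch=8, rng=None):
    """Apply an independently chosen augmentation to each local patch, so
    one image yields a diverse view without mixing in foreign regions.

    image: (H, W) array with H and W divisible by `patch`.
    """
    rng = np.random.default_rng(rng)
    out = image.copy()
    ops = [
        lambda p: p,            # identity
        lambda p: p[::-1, :],   # vertical flip
        lambda p: p[:, ::-1],   # horizontal flip
        lambda p: np.rot90(p),  # 90-degree rotation (patches are square)
    ]
    for i in range(0, image.shape[0], patch):
        for j in range(0, image.shape[1], patch):
            block = image[i:i + patch, j:j + patch]
            out[i:i + patch, j:j + patch] = ops[rng.integers(len(ops))](block)
    return out
```

Because each operation only rearranges pixels within its own patch, the augmented image keeps exactly the original pixel content, just locally permuted.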
Affiliation(s)
- Lei Su
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Zhi Wang
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Yi Shi
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Ao Li
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Minghui Wang
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
14
Li H, Zou L, Kowah JAH, He D, Liu Z, Ding X, Wen H, Wang L, Yuan M, Liu X. A compact review of progress and prospects of deep learning in drug discovery. J Mol Model 2023; 29:117. [PMID: 36976427] [DOI: 10.1007/s00894-023-05492-w]
Abstract
BACKGROUND Drug discovery processes, such as new drug development, drug synergy, and drug repurposing, consume significant resources every year. Computer-aided drug discovery can effectively improve the efficiency of drug discovery. Traditional computational methods such as virtual screening and molecular docking have achieved many gratifying results in drug development. However, with the rapid growth of computer science, data have changed considerably, becoming larger and higher-dimensional, and traditional computational methods can no longer be applied well. Deep learning methods are based on deep neural network architectures that handle high-dimensional data well, so they are now used in drug development. RESULTS This review summarizes the applications of deep learning methods in drug discovery, such as drug target discovery, de novo drug design, drug recommendation, drug synergy, and drug response prediction. While applying deep learning methods to drug discovery suffers from a lack of data, transfer learning is an effective solution to this problem. Furthermore, deep learning methods can extract deeper features and have higher predictive power than other machine learning methods. Deep learning methods have great potential in drug discovery and are expected to facilitate its development.
Affiliation(s)
- Huijun Li
- College of Medicine, Guangxi University, Nanning, 530004, China
- Lin Zou
- College of Medicine, Guangxi University, Nanning, 530004, China
- Dongqiong He
- College of Chemistry and Chemical Engineering, Guangxi University, Nanning, 530004, China
- Zifan Liu
- College of Medicine, Guangxi University, Nanning, 530004, China
- Xuejie Ding
- College of Medicine, Guangxi University, Nanning, 530004, China
- Hao Wen
- College of Chemistry and Chemical Engineering, Guangxi University, Nanning, 530004, China
- Lisheng Wang
- College of Medicine, Guangxi University, Nanning, 530004, China
- Mingqing Yuan
- College of Medicine, Guangxi University, Nanning, 530004, China
- Xu Liu
- College of Medicine, Guangxi University, Nanning, 530004, China
15
Sun K, Chen Y, Bai B, Gao Y, Xiao J, Yu G. Automatic Classification of Histopathology Images across Multiple Cancers Based on Heterogeneous Transfer Learning. Diagnostics (Basel) 2023; 13:1277. [PMID: 37046497] [PMCID: PMC10093253] [DOI: 10.3390/diagnostics13071277]
Abstract
Background: Current artificial intelligence (AI) in histopathology typically specializes in a single task, resulting in a heavy workload of collecting and labeling a sufficient number of images for each type of cancer. Heterogeneous transfer learning (HTL) is expected to alleviate the data bottlenecks and establish models with performance comparable to supervised learning (SL). Methods: An accurate source domain model was trained using 28,634 colorectal cancer (CRC) patches. Additionally, 1000 sentinel lymph node patches and 1008 breast patches were used to train two target domain models. The feature distribution difference between sentinel lymph node metastasis or breast cancer and CRC was reduced by heterogeneous domain adaptation, and the maximum mean discrepancy between subdomains was used for knowledge transfer to achieve accurate classification across multiple cancers. Results: HTL on 1000 sentinel lymph node patches (L-HTL-1000) outperforms SL on 1000 sentinel lymph node patches (L-SL-1-1000) (average area under the curve (AUC) and standard deviation of L-HTL-1000 vs. L-SL-1-1000: 0.949 ± 0.004 vs. 0.931 ± 0.008, p value = 0.008). There is no significant difference between L-HTL-1000 and SL on 7104 patches (L-SL-2-7104) (0.949 ± 0.004 vs. 0.948 ± 0.008, p value = 0.742). Similar results are observed for breast cancer: B-HTL-1008 vs. B-SL-1-1008: 0.962 ± 0.017 vs. 0.943 ± 0.018, p value = 0.008; B-HTL-1008 vs. B-SL-2-5232: 0.962 ± 0.017 vs. 0.951 ± 0.023, p value = 0.148. Conclusions: HTL is capable of building accurate AI models for similar cancers using a small amount of data, based on a large dataset for a certain type of cancer. HTL holds great promise for accelerating the development of AI in histopathology.
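The subdomain alignment quantity mentioned above, the maximum mean discrepancy (MMD), measures how far apart two feature distributions are. A minimal numpy sketch of the (biased) squared-MMD estimator with an RBF kernel follows; the function name and bandwidth are our own, not the paper's:

```python
import numpy as np

def rbf_mmd2(x, y, gamma=1.0):
    """Biased estimate of the squared maximum mean discrepancy between
    samples x and y under an RBF kernel; domain adaptation minimizes this
    to align source- and target-cancer feature distributions.
    """
    def k(a, b):
        # pairwise squared Euclidean distances, then the RBF kernel
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()
```

Identical samples give an MMD of zero; the farther apart the distributions, the larger the value.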
16
Garg S, Singh P. Transfer Learning Based Lightweight Ensemble Model for Imbalanced Breast Cancer Classification. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:1529-1539. [PMID: 35536810] [DOI: 10.1109/tcbb.2022.3174091]
Abstract
Automated classification of breast cancer can often save lives, as manual detection is usually time-consuming and expensive. Over the last decade, deep learning techniques have been the most widely used approach for the automatic classification of breast cancer from histopathology images. This paper performs binary and multi-class classification of breast cancer using a transfer-learning-based ensemble model. To analyze the correctness and reliability of the proposed model, we used the imbalanced IDC dataset and the imbalanced BreakHis dataset in the binary-class scenario, and the balanced BACH dataset for multi-class classification. A lightweight shallow CNN with batch normalization to accelerate convergence is aggregated with the lightweight MobileNetV2 to improve learning and adaptability. The aggregated output is fed into a multilayer perceptron to complete the final classification task. Experiments were performed on all three datasets and compared with recent works. We also fine-tuned three pre-trained models (ResNet50, InceptionV4, and MobileNetV2) and compared them with the proposed lightweight ensemble model in terms of execution time, number of parameters, model size, etc. In both evaluation phases, our model outperforms the alternatives on all three datasets.
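The fusion scheme described above, concatenating features from two branches and feeding them into a multilayer perceptron, can be sketched as follows; the feature dimensions and random weights are purely illustrative stand-ins for the trained CNN branches, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_head(features, w1, b1, w2, b2):
    """Tiny MLP classifier head applied to the fused branch features."""
    h = np.maximum(features @ w1 + b1, 0.0)           # ReLU hidden layer
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)          # softmax probabilities

# hypothetical per-branch embeddings for one image
f_shallow = rng.normal(size=(1, 64))    # from the shallow CNN branch
f_mobile = rng.normal(size=(1, 128))    # from the MobileNetV2 branch
fused = np.concatenate([f_shallow, f_mobile], axis=1)  # (1, 192) fused vector

w1, b1 = rng.normal(size=(192, 32)), np.zeros(32)
w2, b2 = rng.normal(size=(32, 2)), np.zeros(2)
probs = mlp_head(fused, w1, b1, w2, b2)  # (1, 2) class probabilities
```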
17
Baloi A, Costea C, Gutt R, Balacescu O, Turcu F, Belean B. Hexagonal-Grid-Layout Image Segmentation Using Shock Filters: Computational Complexity Case Study for Microarray Image Analysis Related to Machine Learning Approaches. Sensors (Basel) 2023; 23:2582. [PMID: 36904788] [PMCID: PMC10007319] [DOI: 10.3390/s23052582]
Abstract
Hexagonal grid layouts are advantageous in microarray technology; moreover, hexagonal grids appear in many fields, especially given the rise of new nanostructures and metamaterials, leading to a need for image analysis on such structures. This work proposes a shock-filter-based approach driven by mathematical morphology for the segmentation of image objects arranged in a hexagonal grid. The original image is decomposed into a pair of rectangular grids such that their superposition regenerates the initial image. Within each rectangular grid, the shock filters are used again to confine the foreground information of each image object to an area of interest. The proposed methodology was successfully applied to microarray spot segmentation, while its generality is underlined by the segmentation results obtained for two other types of hexagonal grid layouts. Assessing segmentation accuracy through quality measures specific to microarray images, such as the mean absolute error and the coefficient of variation, we found high correlations of the computed spot intensity features with the annotated reference values, indicating the reliability of the proposed approach. Moreover, because the shock-filter PDE formalism targets the one-dimensional luminance profile function, the computational complexity of determining the grid is minimized. The order of growth of the computational complexity of our approach is at least one order of magnitude lower than that of state-of-the-art microarray segmentation approaches, ranging from classical to machine learning ones.
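The decomposition of a hexagonally packed layout into two rectangular grids, whose superposition restores the original image, can be sketched with simple row interleaving. This is our own minimal interpretation, assuming the hexagonal offset alternates row by row:

```python
import numpy as np

def split_hex_rows(spots):
    """Decompose a hexagonally packed spot layout into two rectangular
    grids (even rows and odd rows); each half can then be segmented on a
    plain rectangular grid.

    spots: (H, W) image whose odd rows are offset by half a spot pitch.
    """
    return spots[0::2, :], spots[1::2, :]

def merge_hex_rows(even, odd):
    """Inverse of split_hex_rows: re-interleave the two rectangular grids."""
    h = even.shape[0] + odd.shape[0]
    out = np.empty((h, even.shape[1]), dtype=even.dtype)
    out[0::2], out[1::2] = even, odd
    return out
```

The round trip is lossless, matching the paper's requirement that the superposition of the two grids regenerate the initial image.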
Affiliation(s)
- Aurel Baloi
- Research Center for Integrated Analysis and Territorial Management, University of Bucharest, 4-12 Regina Elisabeta, 030018 Bucharest, Romania
- Faculty of Administration and Business, University of Bucharest, 030018 Bucharest, Romania
- Carmen Costea
- Department of Mathematics, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Robert Gutt
- Center of Advanced Research and Technologies for Alternative Energies, National Institute for Research and Development of Isotopic and Molecular Technologies, 400293 Cluj-Napoca, Romania
- Ovidiu Balacescu
- Department of Genetics, Genomics and Experimental Pathology, The Oncology Institute, Prof. Dr. Ion Chiricuta, 400015 Cluj-Napoca, Romania
- Flaviu Turcu
- Center of Advanced Research and Technologies for Alternative Energies, National Institute for Research and Development of Isotopic and Molecular Technologies, 400293 Cluj-Napoca, Romania
- Faculty of Physics, Babes-Bolyai University, 400084 Cluj-Napoca, Romania
- Bogdan Belean
- Center of Advanced Research and Technologies for Alternative Energies, National Institute for Research and Development of Isotopic and Molecular Technologies, 400293 Cluj-Napoca, Romania
18
Amin MS, Ahn H. FabNet: A Features Agglomeration-Based Convolutional Neural Network for Multiscale Breast Cancer Histopathology Images Classification. Cancers (Basel) 2023; 15:1013. [PMID: 36831359] [PMCID: PMC9954749] [DOI: 10.3390/cancers15041013]
Abstract
The definitive diagnosis of histology specimens is largely based on the pathologist's comprehensive experience; however, due to the fine-to-coarse visual appearance of such images, experts often disagree in their assessments. Sophisticated deep learning approaches can help to automate the diagnostic process and reduce the analysis duration. More efficient and accurate automated systems can also increase diagnostic impartiality by reducing differences between operators. We propose FabNet, a model that can learn the fine-to-coarse structural and textural features of multi-scale histopathological images by using an accretive network architecture that agglomerates hierarchical feature maps to achieve significant classification accuracy. We expand on a contemporary design by incorporating deep and close integration to finely combine features across layers. Our deep-layer accretive model structure combines the feature hierarchy in an iterative and hierarchical manner, yielding higher accuracy with fewer parameters. FabNet can identify malignant tumors in whole images and patches of histopathology images. We assessed the efficiency of the proposed model on standard cancer datasets, including breast cancer as well as colon cancer histopathology images. The proposed model significantly outperforms existing state-of-the-art models in accuracy, F1 score, precision, and sensitivity, with fewer parameters.
19
Kumar A, Vishwakarma A, Bajaj V. CRCCN-Net: Automated framework for classification of colorectal tissue using histopathological images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104172]
20
Prakosa SW, Leu JS, Hsieh HY, Avian C, Bai CH, Vítek S. Implementing a Compression Technique on the Progressive Contextual Excitation Network for Smart Farming Applications. Sensors (Basel) 2022; 22:9717. [PMID: 36560087] [PMCID: PMC9781053] [DOI: 10.3390/s22249717]
Abstract
The use of computer vision in smart farming is becoming a trend in constructing agricultural automation schemes. Deep learning (DL) is known for its accuracy in computer vision tasks such as object detection and image classification. The superiority of one deep learning model for smart farming applications, the Progressive Contextual Excitation Network (PCENet), was studied in our recent work on classifying cocoa bean images. However, an assessment of the computational time of the PCENet model shows that the original model runs at only 0.101 s per image, or 9.9 FPS, on the Jetson Nano as the edge platform. This research therefore demonstrates a compression technique for accelerating the PCENet model by pruning filters. In our experiment, we accelerated the model to 16.7 FPS on the Jetson Nano. Moreover, the accuracy of the compressed model is maintained at 86.1%, versus 86.8% for the original model. In addition, our approach is more accurate than ResNet18 as the state of the art, which reaches only 82.7%. Assessment on the corn leaf disease dataset indicates that the compressed model achieves an accuracy of 97.5%, versus 97.7% for the original PCENet.
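Filter pruning of the kind used to compress models like PCENet is commonly done by ranking convolutional filters by their L1 norm and keeping the strongest ones; below is a generic sketch of that ranking step (not the authors' code), with an assumed keep ratio:

```python
import numpy as np

def prune_filters(conv_weights, keep_ratio=0.5):
    """Rank convolutional filters by L1 norm and keep the strongest ones;
    dropping low-norm filters shrinks the model with little accuracy loss.

    conv_weights: (n_filters, in_ch, kh, kw) weight tensor of one layer.
    Returns (pruned_weights, kept_indices).
    """
    norms = np.abs(conv_weights).sum(axis=(1, 2, 3))  # L1 norm per filter
    n_keep = max(1, int(round(len(norms) * keep_ratio)))
    kept = np.sort(np.argsort(norms)[::-1][:n_keep])  # strongest, in order
    return conv_weights[kept], kept
```

In a full pipeline the next layer's input channels must be pruned to match `kept_indices`, and the network is then fine-tuned to recover accuracy.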
Affiliation(s)
- Setya Widyawan Prakosa
- Department of Electronic and Computer Engineering (ECE), National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Jenq-Shiou Leu
- Department of Electronic and Computer Engineering (ECE), National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- He-Yen Hsieh
- Department of Electronic and Computer Engineering (ECE), National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Cries Avian
- Department of Electronic and Computer Engineering (ECE), National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Chia-Hung Bai
- Department of Electronic and Computer Engineering (ECE), National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Stanislav Vítek
- Faculty of Electrical Engineering, Czech Technical University in Prague, Technicka 2, 16627 Prague, Czech Republic
21
Classification of breast cancer histology images using MSMV-PFENet. Sci Rep 2022; 12:17447. [PMID: 36261463] [PMCID: PMC9581896] [DOI: 10.1038/s41598-022-22358-y]
Abstract
Deep learning has been used extensively in histopathological image classification, but researchers in this field are still exploring new neural network architectures for more effective and efficient cancer diagnosis. Here, we propose the multi-scale, multi-view progressive feature encoding network (MSMV-PFENet) for effective classification. Based on the density of cell nuclei, we selected regions potentially related to carcinogenesis at multiple scales from each view. The progressive feature encoding network then extracted the global and local features from these regions. A bidirectional long short-term memory network analyzed the encoding vectors to obtain a category score, and finally the majority voting method integrated the different views to classify the histopathological images. We tested our method on the breast cancer histology dataset from the ICIAR 2018 grand challenge. The proposed MSMV-PFENet achieved accuracies of 93.0% and 94.8% at the patch and image levels, respectively. This method can potentially benefit clinical cancer diagnosis.
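The final step described above, integrating the per-view category predictions by majority voting, can be sketched as follows (tie-breaking toward the smallest class index is our assumption, not the paper's):

```python
import numpy as np

def majority_vote(view_predictions):
    """Fuse per-view class predictions into one image-level label by
    majority voting (ties resolve toward the smallest class index).

    view_predictions: iterable of integer class labels, one per view.
    """
    votes = np.bincount(np.asarray(view_predictions))  # count votes per class
    return int(votes.argmax())
```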
22
Chen H, Gomez C, Huang CM, Unberath M. Explainable medical imaging AI needs human-centered design: guidelines and evidence from a systematic review. NPJ Digit Med 2022; 5:156. [PMID: 36261476] [PMCID: PMC9581990] [DOI: 10.1038/s41746-022-00699-2]
Abstract
Transparency in Machine Learning (ML), often also referred to as interpretability or explainability, attempts to reveal the working mechanisms of complex models. From a human-centered design perspective, transparency is not a property of the ML model but an affordance, i.e., a relationship between algorithm and users. Thus, prototyping and user evaluations are critical to attaining solutions that afford transparency. Following human-centered design principles in highly specialized and high stakes domains, such as medical image analysis, is challenging due to the limited access to end users and the knowledge imbalance between those users and ML designers. To investigate the state of transparent ML in medical image analysis, we conducted a systematic review of the literature from 2012 to 2021 in PubMed, EMBASE, and Compendex databases. We identified 2508 records and 68 articles met the inclusion criteria. Current techniques in transparent ML are dominated by computational feasibility and barely consider end users, e.g. clinical stakeholders. Despite the different roles and knowledge of ML developers and end users, no study reported formative user research to inform the design and development of transparent ML models. Only a few studies validated transparency claims through empirical user evaluations. These shortcomings put contemporary research on transparent ML at risk of being incomprehensible to users, and thus, clinically irrelevant. To alleviate these shortcomings in forthcoming research, we introduce the INTRPRT guideline, a design directive for transparent ML systems in medical image analysis. The INTRPRT guideline suggests human-centered design principles, recommending formative user research as the first step to understand user needs and domain requirements. Following these guidelines increases the likelihood that the algorithms afford transparency and enable stakeholders to capitalize on the benefits of transparent ML.
Affiliation(s)
- Haomin Chen
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Catalina Gomez
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Chien-Ming Huang
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
23
Automated histological classification for digital pathology images of colonoscopy specimen via deep learning. Sci Rep 2022; 12:12804. [PMID: 35896791] [PMCID: PMC9329279] [DOI: 10.1038/s41598-022-16885-x]
Abstract
Colonoscopy is an effective tool to detect colorectal lesions and needs the support of pathological diagnosis. This study aimed to develop and validate deep learning models that automatically classify digital pathology images of colon lesions obtained from colonoscopy-related specimens. Histopathological slides of colonoscopic biopsy or resection specimens were collected and grouped into six classes by disease category: adenocarcinoma, tubular adenoma (TA), traditional serrated adenoma (TSA), sessile serrated adenoma (SSA), hyperplastic polyp (HP), and non-specific lesions. Digital photographs were taken of each pathological slide to fine-tune two pre-trained convolutional neural networks, and model performance was evaluated. A total of 1865 images from 703 patients were included, of which 10% were used as a test dataset. For six-class classification, the mean diagnostic accuracy was 97.3% (95% confidence interval [CI], 96.0–98.6%) with DenseNet-161 and 95.9% (95% CI 94.1–97.7%) with EfficientNet-B7. The per-class area under the receiver operating characteristic curve (AUC) was highest for adenocarcinoma (1.000; 95% CI 0.999–1.000) with DenseNet-161 and for TSA (1.000; 95% CI 1.000–1.000) with EfficientNet-B7. The lowest per-class AUCs were still excellent: 0.991 (95% CI 0.983–0.999) for HP with DenseNet-161 and 0.995 (95% CI 0.992–0.998) for SSA with EfficientNet-B7. The deep learning models achieved excellent performance in discriminating adenocarcinoma from non-adenocarcinoma lesions, with AUCs of 0.995 and 0.998. The pathognomonic area for each class was appropriately highlighted in the digital images by saliency maps, particularly focusing on epithelial lesions. Deep learning models might be a useful tool to aid the diagnosis of pathology slides of colonoscopy-related specimens.
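The per-class AUC reported above can be computed from score ranks via the Mann-Whitney identity: the AUC equals the probability that a randomly chosen positive case outscores a randomly chosen negative one. A minimal numpy sketch (our own function, without tie correction):

```python
import numpy as np

def auc_score(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity.

    labels: binary ground truth (1 = positive class).
    scores: classifier confidence for the positive class.
    """
    labels = np.asarray(labels, bool)
    order = np.argsort(scores)                  # ascending score order
    ranks = np.empty(len(order))
    ranks[order] = np.arange(1, len(order) + 1) # rank 1 = lowest score
    n_pos, n_neg = labels.sum(), (~labels).sum()
    # rank-sum of positives, minus its minimum possible value, normalized
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```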
24
Chronic Lymphocytic Leukemia Progression Diagnosis with Intrinsic Cellular Patterns via Unsupervised Clustering. Cancers (Basel) 2022; 14:2398. [PMID: 35626003] [PMCID: PMC9139505] [DOI: 10.3390/cancers14102398]
Abstract
Simple Summary: Distinguishing between chronic lymphocytic leukemia (CLL), accelerated CLL (aCLL), and full-blown transformation to diffuse large B-cell lymphoma (Richter transformation; RT) has significant clinical implications. Identifying cellular phenotypes via unsupervised clustering provides the most robust analytic performance in analyzing digitized pathology slides. This study serves as a proof of concept that an unsupervised machine learning scheme can enhance diagnostic accuracy.
Identifying the progression of chronic lymphocytic leukemia (CLL) to accelerated CLL (aCLL) or transformation to diffuse large B-cell lymphoma (Richter transformation; RT) has significant clinical implications, as it prompts a major change in patient management. However, the differentiation between these disease phases may be challenging in routine practice. Unsupervised learning has gained increased attention because of its substantial potential for discovering intrinsic patterns in data. Here, we demonstrate that cellular feature engineering, identifying cellular phenotypes via unsupervised clustering, provides the most robust analytic performance in analyzing digitized pathology slides (accuracy = 0.925, AUC = 0.978) when compared to alternative approaches, such as mixed features, supervised features, unsupervised/mixed/supervised feature fusion and selection, as well as patch-based convolutional neural network (CNN) feature extraction. We further validate the reproducibility and robustness of unsupervised feature extraction via stability and repeated splitting analysis, supporting its utility as a diagnostic aid in identifying CLL patients with histologic evidence of disease progression. The outcome of this study serves as proof of principle that an unsupervised machine learning scheme can enhance diagnostic accuracy for heterogeneous histology patterns that pathologists might not easily see.
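The phenotype-discovery step in pipelines of this kind is typically a clustering of per-cell feature vectors; each slide is then represented by its distribution over the discovered phenotypes. A minimal k-means sketch (Lloyd's algorithm with deterministic strided initialization; all names are illustrative, this is not the authors' code):

```python
import numpy as np

def kmeans(features, k, iters=100):
    # Lloyd's algorithm: alternate nearest-centroid assignment and
    # centroid recomputation until the centroids stop moving.
    centroids = features[:: max(1, len(features) // k)][:k].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([features[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

Each cell's cluster label acts as its phenotype; a per-slide histogram of these labels can then feed a downstream classifier for CLL vs. aCLL vs. RT.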
25
Gupta L, Klinkhammer BM, Seikrit C, Fan N, Bouteldja N, Gräbel P, Gadermayr M, Boor P, Merhof D. Large-scale extraction of interpretable features provides new insights into kidney histopathology – a proof-of-concept study. J Pathol Inform 2022;13:100097. [PMID: 36268111 PMCID: PMC9576990 DOI: 10.1016/j.jpi.2022.100097]
Abstract
Whole slide images contain a wealth of quantitative information that may not be fully explored in qualitative visual assessments. We (1) propose a novel pipeline for extracting a comprehensive set of visual features, which are detectable by a pathologist, as well as sub-visual features, which are not discernible by human experts, and (2) perform detailed analyses on renal images from mice with experimental unilateral ureteral obstruction. An important criterion for these features is that they are easy to interpret, as opposed to features obtained from neural networks. We extract and compare features from pathological and healthy control kidneys to learn how the compartments (glomerulus, Bowman's capsule, tubule, interstitium, artery, and arterial lumen) are affected by the pathology. We define feature selection methods to extract the most informative and discriminative features. We perform statistical analyses to understand the relation of the extracted features, both individually and in combination, with tissue morphology and pathology. For the presented case study in particular, we highlight features that are affected in each compartment. With this, prior biological knowledge, such as the increase in interstitial nuclei, is confirmed and presented in a quantitative way, alongside novel findings such as color and intensity changes in glomeruli and Bowman's capsule. The proposed approach is therefore an important step towards quantitative, reproducible, and rater-independent analysis in histopathology.
Affiliation(s)
- Laxmi Gupta (corresponding author): Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Claudia Seikrit: Institute of Pathology, University Hospital Aachen, RWTH Aachen University, Aachen, Germany; Division of Nephrology and Clinical Immunology, RWTH Aachen University, Aachen, Germany
- Nina Fan: Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Nassim Bouteldja: Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany; Institute of Pathology, University Hospital Aachen, RWTH Aachen University, Aachen, Germany
- Philipp Gräbel: Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Michael Gadermayr: Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany; Salzburg University of Applied Sciences, Puch/Salzburg, Austria
- Peter Boor: Institute of Pathology, University Hospital Aachen, RWTH Aachen University, Aachen, Germany
- Dorit Merhof: Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
26
Kumar N, Verma R, Chen C, Lu C, Fu P, Willis J, Madabhushi A. Computer-extracted features of nuclear morphology in hematoxylin and eosin images distinguish stage II and IV colon tumors. J Pathol 2022;257:17-28. [PMID: 35007352 PMCID: PMC9007877 DOI: 10.1002/path.5864]
Abstract
We assessed the utility of quantitative features of colon cancer nuclei, extracted from digitized hematoxylin and eosin-stained whole slide images (WSIs), to distinguish between stage II and stage IV colon cancers. Our discovery cohort comprised 100 stage II and stage IV colon cancer cases sourced from the University Hospitals Cleveland Medical Center (UHCMC). We performed initial model validation on 51 stage II and 79 stage IV cases from UHCMC, and independent validation on 143 stage II and 54 stage IV cases from The Cancer Genome Atlas Colon Adenocarcinoma (TCGA-COAD) cohort. Our approach comprised the following steps: (1) a fully convolutional deep neural network with VGG-18 architecture was trained to locate cancer on WSIs; (2) another deep-learning model, based on Mask-RCNN with ResNet-50 architecture, was used to segment all nuclei from within the identified cancer region; (3) a total of 26,641 quantitative morphometric features pertaining to nuclear shape, size, and texture were extracted from within and outside tumor nuclei; (4) a random forest classifier was trained to distinguish between stage II and stage IV colon cancers using the five most discriminatory features selected by the Wilcoxon rank-sum test. Our trained classifier using these top five features yielded an AUC of 0.81 and 0.78, respectively, on the held-out cases in the UHCMC and TCGA validation sets. For 197 TCGA-COAD cases, the Cox proportional hazards model yielded a hazard ratio of 2.20 (95% CI 1.24-3.88) with a concordance index of 0.71, using only the top five features for risk stratification of overall survival. The Kaplan-Meier estimate also showed statistically significant separation between the low-risk and high-risk patients, with a log-rank P value of 0.0097.
Finally, unsupervised clustering of the top five features revealed that stage IV colon cancers with peritoneal spread were morphologically more similar to stage II colon cancers with no long-term metastases than to stage IV colon cancers with hematogenous spread. © 2022 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
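Step (4) above, ranking features by a Wilcoxon rank-sum test before fitting the random forest, can be sketched as follows (a simplified score without tie correction or p-values; all names are illustrative, not the authors' code):

```python
import numpy as np

def rank_sum_scores(X, y):
    # For each feature column, a Wilcoxon rank-sum (Mann-Whitney) style
    # discriminability score in [0, 1]: 1 means the two classes are
    # perfectly separated by that feature, 0 means no separation.
    n1, n2 = int((y == 0).sum()), int((y == 1).sum())
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        ranks = X[:, j].argsort().argsort() + 1  # 1-based ranks (no tie handling)
        r1 = ranks[y == 0].sum()
        u = r1 - n1 * (n1 + 1) / 2               # Mann-Whitney U for class 0
        scores[j] = abs(u - n1 * n2 / 2) / (n1 * n2 / 2)
    return scores

def top_k_features(X, y, k=5):
    # Indices of the k most discriminative features, best first.
    return np.argsort(rank_sum_scores(X, y))[::-1][:k]
```

The selected columns would then be the only inputs to the downstream classifier, as in the paper's five-feature random forest.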
Affiliation(s)
- Neeraj Kumar: Department of Computing Science, University of Alberta and Alberta Machine Intelligence Institute, Edmonton, Canada
- Ruchika Verma, Chuheng Chen, Cheng Lu: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Pingfu Fu: Department of Population and Quantitative Health Sciences, Case Western Reserve University, Cleveland, OH, USA
- Joseph Willis: Department of Pathology, Case Western Reserve University, Cleveland, OH, USA; University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Anant Madabhushi: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA; Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, USA
27
Chen H, Li C, Li X, Rahaman MM, Hu W, Li Y, Liu W, Sun C, Sun H, Huang X, Grzegorzek M. IL-MCAM: An interactive learning and multi-channel attention mechanism-based weakly supervised colorectal histopathology image classification approach. Comput Biol Med 2022;143:105265. [PMID: 35123138 DOI: 10.1016/j.compbiomed.2022.105265]
Abstract
In recent years, colorectal cancer has become one of the most significant threats to human health. Deep learning methods are increasingly important for the classification of colorectal histopathology images. However, existing approaches focus more on end-to-end automatic classification by computers than on human-computer interaction. In this paper, we propose the IL-MCAM framework, based on attention mechanisms and interactive learning. The proposed IL-MCAM framework comprises two stages: automatic learning (AL) and interactivity learning (IL). In the AL stage, a multi-channel attention mechanism model containing three different attention mechanism channels and convolutional neural networks is used to extract multi-channel features for classification. In the IL stage, the proposed IL-MCAM framework continuously adds misclassified images to the training set in an interactive approach, which improves the classification ability of the MCAM model. We carried out a comparison experiment on our dataset and an extended experiment on the HE-NCT-CRC-100K dataset to verify the performance of the proposed IL-MCAM framework, achieving classification accuracies of 98.98% and 99.77%, respectively. In addition, we conducted an ablation experiment and an interchangeability experiment to verify the ability and interchangeability of the three channels. The experimental results show that the proposed IL-MCAM framework has excellent performance in colorectal histopathological image classification tasks.
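The IL stage, feeding misclassified images back into the training set and refitting, can be caricatured with a nearest-centroid classifier standing in for the MCAM model (everything below is a simplified illustration under that substitution, not the authors' implementation):

```python
import numpy as np

def fit_centroids(X, y):
    # Toy stand-in for the MCAM model: one centroid per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_centroids(model, X):
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

def interactive_learning(X_train, y_train, X_review, y_review, rounds=3):
    # Each round, images the current model misclassifies (judged against
    # expert labels on a review set) are moved into the training set and
    # the model is refit, mimicking the paper's interactive loop.
    for _ in range(rounds):
        model = fit_centroids(X_train, y_train)
        wrong = predict_centroids(model, X_review) != y_review
        if not wrong.any():
            break
        X_train = np.vstack([X_train, X_review[wrong]])
        y_train = np.concatenate([y_train, y_review[wrong]])
        X_review, y_review = X_review[~wrong], y_review[~wrong]
    return fit_centroids(X_train, y_train), X_train, y_train
```

The key property is that each round shifts the decision boundary toward the cases the previous model got wrong.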
Affiliation(s)
- Haoyuan Chen, Chen Li, Md Mamunur Rahaman, Weiming Hu, Yixin Li, Wanli Liu: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Xiaoyan Li: Department of Pathology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital and Institute, China
- Changhao Sun: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Shenyang Institute of Automation, Chinese Academy of Sciences, China
- Hongzan Sun: Department of Radiology, Shengjing Hospital of China Medical University, China
- Xinyu Huang: Institute of Medical Informatics, University of Luebeck, Germany
28
Chen RJ, Lu MY, Wang J, Williamson DFK, Rodig SJ, Lindeman NI, Mahmood F. Pathomic Fusion: An Integrated Framework for Fusing Histopathology and Genomic Features for Cancer Diagnosis and Prognosis. IEEE Trans Med Imaging 2022;41:757-770. [PMID: 32881682 DOI: 10.1109/tmi.2020.3021387]
Abstract
Cancer diagnosis, prognosis, and therapeutic response predictions are based on morphological information from histology slides and molecular profiles from genomic data. However, most deep learning-based objective outcome prediction and grading paradigms are based on histology or genomics alone and do not make use of the complementary information in an intuitive manner. In this work, we propose Pathomic Fusion, an interpretable strategy for end-to-end multimodal fusion of histology image and genomic (mutations, CNV, RNA-Seq) features for survival outcome prediction. Our approach models pairwise feature interactions across modalities by taking the Kronecker product of unimodal feature representations, and controls the expressiveness of each representation via a gating-based attention mechanism. Following supervised learning, we are able to interpret and saliently localize features across each modality, and understand how feature importance shifts when conditioning on multimodal input. We validate our approach using glioma and clear cell renal cell carcinoma datasets from The Cancer Genome Atlas (TCGA), which contain paired whole-slide image, genotype, and transcriptome data with ground-truth survival and histologic grade labels. In 15-fold cross-validation, our results demonstrate that the proposed multimodal fusion paradigm improves prognostic determinations over those based on ground-truth grading and molecular subtyping, as well as over unimodal deep networks trained on histology and genomic data alone. The proposed method establishes insight and theory on how to train deep networks on multimodal biomedical data in an intuitive manner, which will be useful for other problems in medicine that seek to combine heterogeneous data streams for understanding diseases and predicting response and resistance to treatment. Code and trained models are available at: https://github.com/mahmoodlab/PathomicFusion.
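The core fusion step, a Kronecker product of gated unimodal embeddings, can be sketched in a few lines of NumPy. Note the gates here are fixed sigmoid scalars rather than the paper's learned attention layers, and all names are illustrative:

```python
import numpy as np

def gated_kronecker_fusion(h_path, h_omic):
    # 1) Gate each modality (crude stand-in for learned gating attention).
    # 2) Append a constant 1 so purely unimodal terms survive the product.
    # 3) Kronecker product: every pairwise histology-genomics interaction
    #    becomes an explicit feature of the fused representation.
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    g_path = h_path * sigmoid(h_omic.mean())
    g_omic = h_omic * sigmoid(h_path.mean())
    a = np.append(g_path, 1.0)
    b = np.append(g_omic, 1.0)
    return np.kron(a, b)  # length (len(h_path)+1) * (len(h_omic)+1)
```

The appended constants are why the fused vector still contains each unimodal feature on its own, not only cross-modal products.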
29
Deep Learning on Histopathological Images for Colorectal Cancer Diagnosis: A Systematic Review. Diagnostics (Basel) 2022;12:837. [PMID: 35453885 PMCID: PMC9028395 DOI: 10.3390/diagnostics12040837]
Abstract
Colorectal cancer (CRC) is the second most common cancer in women and the third most common in men, with an increasing incidence. Pathology diagnosis complemented with prognostic and predictive biomarker information is the first step toward personalized treatment. The increased diagnostic load in the pathology laboratory, combined with the reported intra- and inter-observer variability in the assessment of biomarkers, has prompted the quest for reliable machine-based methods that can be incorporated into routine practice. Recently, Artificial Intelligence (AI) has made significant progress in the medical field, showing potential for clinical applications. Herein, we aim to systematically review the current research on AI in CRC image analysis. In histopathology, algorithms based on Deep Learning (DL) have the potential to assist in diagnosis, predict clinically relevant molecular phenotypes and microsatellite instability, identify histological features related to prognosis and correlated with metastasis, and assess the specific components of the tumor microenvironment.
30
Yan J, Chen H, Li X, Yao J. Deep Contrastive Learning Based Tissue Clustering for Annotation-free Histopathology Image Analysis. Comput Med Imaging Graph 2022;97:102053. [DOI: 10.1016/j.compmedimag.2022.102053]
31
Yu G, Sun K, Xu C, Shi XH, Wu C, Xie T, Meng RQ, Meng XH, Wang KS, Xiao HM, Deng HW. Accurate recognition of colorectal cancer with semi-supervised deep learning on pathological images. Nat Commun 2021;12:6311. [PMID: 34728629 PMCID: PMC8563931 DOI: 10.1038/s41467-021-26643-8]
Abstract
Machine-assisted pathological recognition has focused on supervised learning (SL), which suffers from a significant annotation bottleneck. We propose a semi-supervised learning (SSL) method based on the mean-teacher architecture, using 13,111 whole slide images of colorectal cancer from 8803 subjects across 13 independent centers. SSL (~3150 labeled, ~40,950 unlabeled; ~6300 labeled, ~37,800 unlabeled patches) performs significantly better than SL. No significant difference is found between SSL (~6300 labeled, ~37,800 unlabeled) and SL (~44,100 labeled) at patch-level diagnoses (area under the curve (AUC): 0.980 ± 0.014 vs. 0.987 ± 0.008, P value = 0.134) or patient-level diagnoses (AUC: 0.974 ± 0.013 vs. 0.980 ± 0.010, P value = 0.117), which is close to human pathologists (average AUC: 0.969). Evaluation on 15,000 lung and 294,912 lymph node images also confirms that SSL can achieve performance similar to that of SL with massive annotations. SSL dramatically reduces the annotation burden and thus has great potential for building expert-level pathological artificial intelligence platforms in practice.
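The mean-teacher architecture underlying this SSL method keeps two copies of the network: a student trained by gradient descent, and a teacher whose weights track the student as an exponential moving average, with a consistency loss pulling student predictions toward teacher predictions on unlabeled patches. The two core operations, with weights flattened to vectors for illustration (a sketch, not the authors' code):

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    # Teacher weights track the student as an exponential moving average;
    # the teacher itself receives no gradients.
    return alpha * teacher_w + (1.0 - alpha) * student_w

def consistency_loss(p_student, p_teacher):
    # Mean squared error between student and teacher class probabilities
    # for the same unlabeled patch (typically under different augmentations).
    return float(np.mean((p_student - p_teacher) ** 2))
```

During training, the total loss is the supervised loss on labeled patches plus a weighted consistency term on unlabeled ones; `ema_update` runs after every optimizer step.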
Affiliation(s)
- Gang Yu, Kai Sun, Ting Xie: Department of Biomedical Engineering, School of Basic Medical Science, Central South University, 410013, Changsha, Hunan, China
- Chao Xu: Department of Biostatistics and Epidemiology, University of Oklahoma Health Sciences Center, Oklahoma City, OK, 73104, USA
- Xing-Hua Shi: Department of Computer & Information Sciences, College of Science and Technology, Temple University, Philadelphia, PA, 19122, USA
- Chong Wu: Department of Statistics, Florida State University, Tallahassee, FL, 32306, USA
- Run-Qi Meng: Electronic Information Science and Technology, School of Physics and Electronics, Central South University, 410083, Changsha, Hunan, China
- Xiang-He Meng, Hong-Mei Xiao: Center for System Biology, Data Sciences and Reproductive Health, School of Basic Medical Science, Central South University, 410013, Changsha, Hunan, China
- Kuan-Song Wang: Department of Pathology, Xiangya Hospital, School of Basic Medical Science, Central South University, 410078, Changsha, Hunan, China
- Hong-Wen Deng: Center for System Biology, Data Sciences and Reproductive Health, School of Basic Medical Science, Central South University, 410013, Changsha, Hunan, China; Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, New Orleans, LA, 70112, USA
32
Yang R, Yan C, Lu S, Li J, Ji J, Yan R, Yuan F, Zhu Z, Yu Y. Tracking cancer lesions on surgical samples of gastric cancer by artificial intelligent algorithms. J Cancer 2021;12:6473-6483. [PMID: 34659538 PMCID: PMC8489126 DOI: 10.7150/jca.63879]
Abstract
To quickly locate cancer lesions, especially suspected metastatic lesions after gastrectomy, AI algorithms for object detection and semantic segmentation were established. A total of 509 macroscopic images from 381 patients were collected. The RFB-SSD object detection algorithm and the ResNet50-PSPNet semantic segmentation algorithm were used. Another 57 macroscopic images from 48 patients were collected for prospective verification. We used mAP as the metric for object detection. The best mAP was 95.90%, with an average of 89.89% on the test set; mAP reached 92.60% on the validation set. We used mIoU to evaluate semantic segmentation. The best mIoU was 80.97%, with an average of 79.26% on the test set. In addition, 81 of 92 (88.04%) gastric specimens were accurately predicted for cancer lesions located at the serosa by the ResNet50-PSPNet semantic segmentation model. The positive rate and accuracy of AI prediction differed by cancer invasive depth. Metastatic lymph nodes were predicted in 24 cases by the semantic segmentation model; among them, 18 cases were confirmed by pathology, for a predictive accuracy of 75.00%. Our well-trained AI algorithms effectively identified subtle features of gastric cancer in resected specimens that may be missed by the naked eye. Taken together, AI algorithms could assist clinicians in quickly locating cancer lesions and improve their work efficiency.
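The mIoU figures above average, over classes, the intersection-over-union between predicted and ground-truth label maps. A minimal implementation for integer label maps (illustrative, not the authors' code):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    # Per class c: IoU = |pred==c AND target==c| / |pred==c OR target==c|.
    # Classes absent from both prediction and ground truth are skipped
    # so they neither inflate nor deflate the mean.
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))
```

mAP for the detection branch is analogous in spirit: per-class average precision over ranked detections, averaged across classes, with IoU against ground-truth boxes deciding what counts as a true positive.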
Affiliation(s)
- Ruixin Yang, Chao Yan, Sheng Lu, Jun Li, Jun Ji, Ranlin Yan, Zhenggang Zhu, Yingyan Yu: Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, and Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiao Tong University School of Medicine, 200025, Shanghai, China
- Fei Yuan: Department of Pathology of Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, 200025, Shanghai, China
33
Su L, Liu Y, Wang M, Li A. Semi-HIC: A novel semi-supervised deep learning method for histopathological image classification. Comput Biol Med 2021;137:104788. [PMID: 34461503 DOI: 10.1016/j.compbiomed.2021.104788]
Abstract
Histopathological images provide a gold standard for cancer recognition and diagnosis. Existing approaches for histopathological image classification are supervised learning methods that demand a large amount of labeled data to obtain satisfactory performance, and they face the challenge of limited annotation due to its prohibitive time cost. To circumvent this shortage, a promising strategy is to design semi-supervised learning methods. Recently, a novel semi-supervised approach called Learning by Association (LA) was proposed, which achieves promising performance in natural image classification. However, great challenges remain in applying it to histopathological image classification because of the wide inter-class similarity and intra-class heterogeneity of histopathological images. To address these issues, we propose a novel semi-supervised deep learning method called Semi-HIC for histopathological image classification. In particular, we introduce a new semi-supervised loss function combining an association cycle consistency (ACC) loss and a maximal conditional association (MCA) loss, which can take advantage of a large number of unlabeled patches and address the problems of inter-class similarity and intra-class variation in histopathological images, thereby remarkably improving classification performance. Besides, we employ an efficient network architecture with cascaded Inception blocks (CIBs) to learn rich and discriminative embeddings from patches. Experimental results on both the Bioimaging 2015 challenge dataset and the BACH dataset demonstrate that our Semi-HIC method compares favorably with existing deep learning methods for histopathological image classification and consistently outperforms the semi-supervised LA method.
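Learning by Association, which Semi-HIC builds on, is organized around a round-trip "walk" from labeled embeddings to unlabeled embeddings and back; the association losses then push round trips to end at an example of the same class they started from. A sketch of the round-trip probabilities (illustrative, not the authors' code; the ACC and MCA terms are additional losses on top of this matrix):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def round_trip_probs(emb_labeled, emb_unlabeled):
    # P[i, j]: probability of starting at labeled embedding i, associating
    # to some unlabeled embedding (softmax over similarities), and coming
    # back to labeled embedding j. Training encourages P[i, j] mass on j
    # with the same class label as i.
    sim = emb_labeled @ emb_unlabeled.T
    p_ab = softmax(sim, axis=1)      # labeled -> unlabeled
    p_ba = softmax(sim.T, axis=1)    # unlabeled -> labeled
    return p_ab @ p_ba
```

Each row of the result is a probability distribution, so a cross-entropy against a same-class target distribution is well defined.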
Affiliation(s)
- Lei Su, Yu Liu: School of Information Science and Technology, University of Science and Technology of China, 443 Huangshan Road, Hefei, 230027, China
- Minghui Wang, Ao Li: School of Information Science and Technology, University of Science and Technology of China, 443 Huangshan Road, Hefei, 230027, China; Research Centers for Biomedical Engineering, University of Science and Technology of China, 443 Huangshan Road, Hefei, 230027, China
34
Wan Y, Yang P, Xu L, Yang J, Luo C, Wang J, Chen F, Wu Y, Lu Y, Ruan D, Niu T. Radiomics analysis combining unsupervised learning and handcrafted features: A multiple-disease study. Med Phys 2021;48:7003-7015. [PMID: 34453332 DOI: 10.1002/mp.15199]
Abstract
PURPOSE To study the synergistic benefit of incorporating both conventional handcrafted and learning-based features in disease identification across a wide range of clinical setups. METHODS AND MATERIALS In this retrospective study, we collected 170, 150, 209, and 137 patients with four disease types and their associated identification objectives: lymph node metastasis status of gastric cancer (GC), 5-year survival status of patients with high-grade osteosarcoma (HOS), early recurrence status of intrahepatic cholangiocarcinoma (ICC), and pathological grades of pancreatic neuroendocrine tumors (pNETs). Computed tomography (CT) and magnetic resonance imaging (MRI) were used to derive image features for GC/HOS/pNETs and ICC, respectively. In each study, 67 universal handcrafted features and study-specific features based on the sparse autoencoder (SAE) method were extracted and fed into the subsequent feature selection and learning model to predict the corresponding disease identification. Models using handcrafted features alone, SAE features alone, and hybrid features were optimized and their performance was compared. Prominent features were analyzed both qualitatively and quantitatively to generate study-specific and cross-study insight. In addition to direct performance-gain assessment, correlation analysis was performed to assess the complementarity between handcrafted and SAE features. RESULTS On the independent held-out test, prediction based on handcrafted, SAE, and hybrid features yielded areas under the curve of 0.761 versus 0.769 versus 0.829 for GC, 0.629 versus 0.740 versus 0.709 for HOS, 0.717 versus 0.718 versus 0.758 for ICC, and 0.739 versus 0.715 versus 0.771 for pNETs, respectively. In three of the four studies, prediction using the hybrid features yielded the best performance, demonstrating the general benefit of hybrid features. Prediction with SAE features alone performed best in the HOS study, which may be explained by the complexity of HOS prognosis and a possible slight overfit due to the higher correlation between handcrafted and SAE features. CONCLUSION This study demonstrated the general benefit of combining handcrafted and learning-based features in radiomics modeling. It also clearly illustrates the task- and data-specific dependency of the performance gain and suggests that, while the common methodology of feature combination may be applied across various studies and tasks, study-specific feature selection and model optimization are still necessary to achieve high accuracy and robustness.
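The hybrid-feature idea, concatenating handcrafted descriptors with learned (here, autoencoder-derived) features while checking that the learned ones are not redundant, can be sketched as follows. The correlation threshold and all names are illustrative; the paper's complementarity analysis is more involved than this:

```python
import numpy as np

def hybrid_features(handcrafted, learned, corr_threshold=0.9):
    # Keep a learned (e.g. SAE) column only if it is not nearly collinear
    # with any handcrafted column, then concatenate: a crude version of
    # using correlation analysis to retain complementary features.
    keep = []
    for j in range(learned.shape[1]):
        corrs = [abs(np.corrcoef(learned[:, j], handcrafted[:, i])[0, 1])
                 for i in range(handcrafted.shape[1])]
        if max(corrs) < corr_threshold:
            keep.append(j)
    return np.hstack([handcrafted, learned[:, keep]])
```

The resulting matrix would then go through the same feature selection and model fitting as either feature family alone.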
Collapse
Affiliation(s)
- Yidong Wan
- Institute of Translational Medicine, Zhejiang University, Hangzhou, China
- Pengfei Yang
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Lei Xu
- Institute of Translational Medicine, Zhejiang University, Hangzhou, China
- Jing Yang
- Institute of Translational Medicine, Zhejiang University, Hangzhou, China
- Chen Luo
- Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
- Jing Wang
- Institute of Translational Medicine, Zhejiang University, Hangzhou, China
- Department of Radiation Oncology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Feng Chen
- Department of Radiology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yan Wu
- Department of Orthopaedics, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yun Lu
- The Affiliated Hospital of Qingdao University, Qingdao, China
- Dan Ruan
- Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, California, USA
- Tianye Niu
- Nuclear & Radiological Engineering and Medical Physics Programs, Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
35
Javaid A, Shahab O, Adorno W, Fernandes P, May E, Syed S. Machine Learning Predictive Outcomes Modeling in Inflammatory Bowel Diseases. Inflamm Bowel Dis 2021; 28:819-829. [PMID: 34417815 PMCID: PMC9165557 DOI: 10.1093/ibd/izab187] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/19/2021] [Indexed: 12/14/2022]
Abstract
There is a rising interest in use of big data approaches to personalize treatment of inflammatory bowel diseases (IBDs) and to predict and prevent outcomes such as disease flares and therapeutic nonresponse. Machine learning (ML) provides an avenue to identify and quantify features across vast quantities of data to produce novel insights in disease management. In this review, we cover current approaches in ML-driven predictive outcomes modeling for IBD and relate how advances in other fields of medicine may be applied to improve future IBD predictive models. Numerous studies have incorporated clinical, laboratory, or omics data to predict significant outcomes in IBD, including hospitalizations, outpatient corticosteroid use, biologic response, and refractory disease after colectomy, among others, with considerable health care dollars saved as a result. Encouraging results in other fields of medicine support efforts to use ML image analysis-including analysis of histopathology, endoscopy, and radiology-to further advance outcome predictions in IBD. Though obstacles to clinical implementation include technical barriers, bias within data sets, and incongruence between limited data sets preventing model validation in larger cohorts, ML-predictive analytics have the potential to transform the clinical management of IBD. Future directions include the development of models that synthesize all aforementioned approaches to produce more robust predictive metrics.
Affiliation(s)
- Aamir Javaid
- Division of Pediatric Gastroenterology and Hepatology, Department of Pediatrics, University of Virginia, Charlottesville, VA, USA
- Omer Shahab
- Division of Gastroenterology and Hepatology, Department of Medicine, Virginia Commonwealth University, Richmond, VA, USA
- William Adorno
- School of Data Science, University of Virginia, Charlottesville, VA, USA
- Philip Fernandes
- Division of Pediatric Gastroenterology and Hepatology, Department of Pediatrics, University of Virginia, Charlottesville, VA, USA
- Eve May
- Division of Gastroenterology and Hepatology, Department of Pediatrics, Children’s National Hospital, Washington, DC, USA
- Sana Syed
- Division of Pediatric Gastroenterology and Hepatology, Department of Pediatrics, University of Virginia, Charlottesville, VA, USA
- School of Data Science, University of Virginia, Charlottesville, VA, USA
- Address correspondence to: Sana Syed, MD, MSCR, MSDS, Division of Pediatric Gastroenterology and Hepatology, Department of Pediatrics, University of Virginia, 409 Lane Rd, Room 2035B, Charlottesville, VA, 22908, USA
36
Transfer Learning Approach for Classification of Histopathology Whole Slide Images. SENSORS 2021; 21:s21165361. [PMID: 34450802 PMCID: PMC8401188 DOI: 10.3390/s21165361] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/25/2021] [Revised: 08/06/2021] [Accepted: 08/07/2021] [Indexed: 02/07/2023]
Abstract
The classification of whole slide images (WSIs) provides physicians with an accurate analysis of diseases and also helps them to treat patients effectively. The classification can be linked to further detailed analysis and diagnosis. Deep learning (DL) has made significant advances in the medical industry, including the use of magnetic resonance imaging (MRI) scans, computerized tomography (CT) scans, and electrocardiograms (ECGs) to detect life-threatening diseases, including heart disease, cancer, and brain tumors. However, more advancement is needed in the field of pathology, where the main hurdle slowing progress is the shortage of large labeled datasets of histopathology images for training models. The Kimia Path24 dataset was created specifically for the classification and retrieval of histopathology images. It contains 23,916 histopathology patches with 24 tissue texture classes. A transfer learning-based framework is proposed and evaluated on two well-known DL models, Inception-V3 and VGG-16. To improve the performance of Inception-V3 and VGG-16, we used their pre-trained weights and concatenated these with an image vector, which is used as input for training the same architecture. Experiments show that the proposed innovation improves the accuracy of both models. The patch-to-scan accuracy of VGG-16 is improved from 0.65 to 0.77, and for Inception-V3, it is improved from 0.74 to 0.79.
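The "patch-to-scan" accuracies quoted above aggregate many per-patch predictions into a single per-scan decision. The paper does not publish its aggregation code; a common and minimal approach is a majority vote over each scan's patches, sketched below (function and variable names are illustrative):

```python
from collections import Counter, defaultdict

def patch_to_scan_accuracy(patch_preds, patch_scan_ids, scan_labels):
    """Aggregate per-patch class predictions into per-scan predictions by
    majority vote, then score against the true per-scan labels."""
    votes = defaultdict(Counter)
    for pred, scan_id in zip(patch_preds, patch_scan_ids):
        votes[scan_id][pred] += 1
    correct = sum(
        1 for scan_id, label in scan_labels.items()
        if votes[scan_id].most_common(1)[0][0] == label
    )
    return correct / len(scan_labels)

# Toy example: two scans, three patches each.
preds    = ["tumor", "tumor", "normal", "normal", "normal", "tumor"]
scan_ids = ["s1", "s1", "s1", "s2", "s2", "s2"]
acc = patch_to_scan_accuracy(preds, scan_ids, {"s1": "tumor", "s2": "normal"})
# s1 votes tumor (2-1) and s2 votes normal (2-1), so acc is 1.0
```

Scan-level scoring of this kind is typically more forgiving than raw patch accuracy, since a minority of misclassified patches is outvoted.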
37
Wang Q, Liu W, Chen X, Wang X, Chen G, Zhu X. Quantification of scar collagen texture and prediction of scar development via second harmonic generation images and a generative adversarial network. BIOMEDICAL OPTICS EXPRESS 2021; 12:5305-5319. [PMID: 34513258 PMCID: PMC8407811 DOI: 10.1364/boe.431096] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Revised: 07/12/2021] [Accepted: 07/16/2021] [Indexed: 05/29/2023]
Abstract
The texture of human scar tissue, widely examined in medical analysis, is irregular and highly varied. Quantitative detection and analysis of scar texture, as enabled by image analysis technology, is of great significance to clinical practice. However, existing methods remain limited by various shortcomings, such as the inability to fully extract texture features. Hence, the integration of second harmonic generation (SHG) imaging and a deep learning algorithm is proposed in this study. Combined with Tamura texture features, a regression model of the scar texture can be constructed to develop a novel method of computer-aided diagnosis that can assist clinical diagnosis. Based on the wavelet packet transform (WPT) and a generative adversarial network (GAN), the model is trained with scar texture images of different ages. Generalized Boosted Regression Trees (GBRT) are also adopted to perform regression analysis. The extracted features are then used to predict the age of a scar. The experimental results obtained by our proposed model are better than those of previously published methods. The work thus contributes to a better understanding of the mechanism behind scar development and possibly to the further development of SHG for skin analysis and clinical practice.
Affiliation(s)
- Qing Wang
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
- Weiping Liu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
- Xinghong Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
- Xiumei Wang
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
- Guannan Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
- Xiaoqin Zhu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
38
A Transfer Learning Architecture Based on a Support Vector Machine for Histopathology Image Classification. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11146380] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Digital pathology has recently become an essential application in clinical practice and medical research. Due to the lack of large annotated datasets, the deep transfer learning technique is often used to classify histopathology images. A softmax classifier is commonly used to perform the classification, and a Support Vector Machine (SVM) classifier is also popular, especially for binary classification problems. Accurately determining the category of histopathology images is vital for the diagnosis of diseases. In this paper, the conventional softmax classifier and the SVM classifier-based transfer learning approach are evaluated for classifying histopathology cancer images in a binary breast cancer dataset and a multiclass lung and colon cancer dataset. To achieve better classification accuracy, a methodology that attaches an SVM classifier to the fully-connected (FC) layer of the softmax-based transfer learning model is proposed. The proposed architecture involves a first step of training the newly added FC layer on the target dataset using the softmax-based model, and a second step of training the SVM classifier on the outputs of the newly trained FC layer. Cross-validation is used to ensure unbiased evaluation of model performance. Experimental results reveal that the conventional SVM classifier-based model is the least accurate on both the binary and multiclass cancer datasets. The conventional softmax-based model shows moderate classification accuracy, while the proposed combined architecture achieves the best classification accuracy.
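The two-step idea above — first train a new FC layer under a softmax loss, then fit an SVM on that layer's activations — can be sketched without a full CNN. The following is only an illustration of the training order, not the authors' implementation: the pretrained backbone's features are replaced by a synthetic 4-dimensional feature matrix, and the SVM is fit with a simple sub-gradient loop on the hinge loss rather than a library solver.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for backbone features: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(-2.0, 1.0, size=(50, 4)),
               rng.normal(2.0, 1.0, size=(50, 4))])
y = np.array([0] * 50 + [1] * 50)

# Step 1: train a new FC layer (4 -> 2 logits) with softmax cross-entropy.
W = np.zeros((4, 2))
onehot = np.eye(2)[y]
for _ in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * X.T @ (p - onehot) / len(y)

# Step 2: fit a linear SVM (sub-gradient descent on regularized hinge loss)
# on the trained FC layer's activations.
H = X @ W                 # FC activations, shape (100, 2)
t = 2 * y - 1             # labels in {-1, +1}
w, b, lam = np.zeros(2), 0.0, 0.01
for _ in range(500):
    viol = t * (H @ w + b) < 1          # margin violators
    grad_w, grad_b = lam * w, 0.0
    if viol.any():
        grad_w -= (t[viol, None] * H[viol]).sum(axis=0) / len(t)
        grad_b -= t[viol].sum() / len(t)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

pred = (H @ w + b > 0).astype(int)
acc = (pred == y).mean()
```

The design point the paper makes is the ordering: the SVM is trained only after the FC layer has been adapted to the target dataset, so the SVM sees task-specific features rather than raw backbone outputs.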
39
Liao J, Lam HK, Jia G, Gulati S, Bernth J, Poliyivets D, Xu Y, Liu H, Hayee B. A case study on computer-aided diagnosis of nonerosive reflux disease using deep learning techniques. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.02.049] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
40
van der Laak J, Litjens G, Ciompi F. Deep learning in histopathology: the path to the clinic. Nat Med 2021; 27:775-784. [PMID: 33990804 DOI: 10.1038/s41591-021-01343-4] [Citation(s) in RCA: 361] [Impact Index Per Article: 90.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Accepted: 03/31/2021] [Indexed: 02/08/2023]
Abstract
Machine learning techniques have great potential to improve medical diagnostics, offering ways to improve accuracy, reproducibility and speed, and to ease workloads for clinicians. In the field of histopathology, deep learning algorithms have been developed that perform similarly to trained pathologists for tasks such as tumor detection and grading. However, despite these promising results, very few algorithms have reached clinical implementation, challenging the balance between hope and hype for these new techniques. This Review provides an overview of the current state of the field, as well as describing the challenges that still need to be addressed before artificial intelligence in histopathology can achieve clinical value.
Affiliation(s)
- Jeroen van der Laak
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Geert Litjens
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Francesco Ciompi
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
41
Puttagunta M, Ravi S. Medical image analysis based on deep learning approach. MULTIMEDIA TOOLS AND APPLICATIONS 2021; 80:24365-24398. [PMID: 33841033 PMCID: PMC8023554 DOI: 10.1007/s11042-021-10707-4] [Citation(s) in RCA: 55] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Revised: 11/28/2020] [Accepted: 02/10/2021] [Indexed: 05/05/2023]
Abstract
Medical imaging plays a significant role in different clinical applications, such as procedures for the early detection, monitoring, diagnosis, and treatment evaluation of various medical conditions. A grasp of the basics of the principles and implementations of artificial neural networks and deep learning is essential for understanding medical image analysis in computer vision. The Deep Learning Approach (DLA) to medical image analysis is a fast-growing research field, and DLA has been widely used in medical imaging to detect the presence or absence of disease. This paper presents the development of artificial neural networks and a comprehensive analysis of DLA for promising medical imaging applications. Most DLA implementations concentrate on X-ray images, computerized tomography, mammography images, and digital histopathology images. The paper provides a systematic review of articles on the classification, detection, and segmentation of medical images based on DLA, guiding researchers toward appropriate improvements in DLA-based medical image analysis.
Affiliation(s)
- Muralikrishna Puttagunta
- Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry, India
- S. Ravi
- Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry, India
42
Wang KS, Yu G, Xu C, Meng XH, Zhou J, Zheng C, Deng Z, Shang L, Liu R, Su S, Zhou X, Li Q, Li J, Wang J, Ma K, Qi J, Hu Z, Tang P, Deng J, Qiu X, Li BY, Shen WD, Quan RP, Yang JT, Huang LY, Xiao Y, Yang ZC, Li Z, Wang SC, Ren H, Liang C, Guo W, Li Y, Xiao H, Gu Y, Yun JP, Huang D, Song Z, Fan X, Chen L, Yan X, Li Z, Huang ZC, Huang J, Luttrell J, Zhang CY, Zhou W, Zhang K, Yi C, Wu C, Shen H, Wang YP, Xiao HM, Deng HW. Accurate diagnosis of colorectal cancer based on histopathology images using artificial intelligence. BMC Med 2021; 19:76. [PMID: 33752648 PMCID: PMC7986569 DOI: 10.1186/s12916-021-01942-5] [Citation(s) in RCA: 75] [Impact Index Per Article: 18.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/22/2020] [Accepted: 02/16/2021] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Accurate and robust pathological image analysis for colorectal cancer (CRC) diagnosis is time-consuming and knowledge-intensive, but is essential for CRC patients' treatment. The current heavy workload of pathologists in clinics/hospitals may easily lead to unconscious misdiagnosis of CRC in daily image analyses. METHODS Based on a state-of-the-art transfer-learned deep convolutional neural network in artificial intelligence (AI), we proposed a novel patch aggregation strategy for clinical CRC diagnosis using weakly labeled pathological whole-slide image (WSI) patches. This approach was trained and validated using an unprecedentedly large collection of 170,099 patches from >14,680 WSIs and >9631 subjects, covering diverse and representative clinical cases from multiple independent sources across China, the USA, and Germany. RESULTS Our AI tool agreed consistently and nearly perfectly with experienced expert pathologists (average kappa statistic 0.896), and often performed better than them, when tested in diagnosing CRC WSIs from multiple centers. The average area under the receiver operating characteristic curve (AUC) of the AI was greater than that of the pathologists (0.988 vs 0.970), the best performance among AI methods applied to CRC diagnosis. Our AI-generated heatmap highlights the image regions of cancer tissue/cells. CONCLUSIONS This generalizable AI system can handle large amounts of WSIs consistently and robustly, without the potential bias due to fatigue commonly experienced by clinical pathologists. It will drastically alleviate the heavy clinical burden of daily pathology diagnosis and improve treatment for CRC patients. The tool is also generalizable to other image-recognition-based cancer diagnoses.
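The agreement figure quoted above (0.896) is Cohen's kappa, which corrects raw agreement between two raters for the agreement expected by chance alone. A minimal implementation (the ratings below are illustrative, not data from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same cases."""
    n = len(rater_a)
    # Observed agreement: fraction of cases where the raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Illustrative: AI calls vs. a pathologist's calls on eight slides.
ai    = ["cancer", "cancer", "benign", "benign", "cancer", "benign", "cancer", "benign"]
path_ = ["cancer", "cancer", "benign", "benign", "cancer", "benign", "benign", "benign"]
kappa = cohens_kappa(ai, path_)  # 7/8 observed, 0.5 expected -> 0.75
```

Kappa of 1 means perfect agreement and 0 means chance-level agreement, which is why values near 0.9, as reported here, are read as near-perfect.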
Affiliation(s)
- K S Wang
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- G Yu
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- C Xu
- Department of Biostatistics and Epidemiology, The University of Oklahoma Health Sciences Center, Oklahoma City, OK, 73104, USA
- X H Meng
- Laboratory of Molecular and Statistical Genetics, College of Life Sciences, Hunan Normal University, Changsha, 410081, Hunan, China
- J Zhou
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- C Zheng
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- Z Deng
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- L Shang
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- R Liu
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- S Su
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- X Zhou
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- Q Li
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- J Li
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- J Wang
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- K Ma
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- J Qi
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- Z Hu
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- P Tang
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- J Deng
- Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA, 70112, USA
- X Qiu
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- B Y Li
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- W D Shen
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- R P Quan
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- J T Yang
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- L Y Huang
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- Y Xiao
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- Z C Yang
- Department of Pharmacology, Xiangya School of Pharmaceutical Sciences, Central South University, Changsha, 410078, Hunan, China
- Z Li
- School of Life Sciences, Central South University, Changsha, 410013, Hunan, China
- S C Wang
- College of Information Science and Engineering, Hunan Normal University, Changsha, 410081, Hunan, China
- H Ren
- Department of Pathology, Gongli Hospital, Second Military Medical University, Shanghai, 200135, China
- Department of Pathology, the Peace Hospital Affiliated to Changzhi Medical College, Changzhi, 046000, China
- C Liang
- Pathological Laboratory of Adicon Medical Laboratory Co., Ltd, Hangzhou, 310023, Zhejiang, China
- W Guo
- Department of Pathology, First Affiliated Hospital of Hunan Normal University, The People's Hospital of Hunan Province, Changsha, 410005, Hunan, China
- Y Li
- Department of Pathology, First Affiliated Hospital of Hunan Normal University, The People's Hospital of Hunan Province, Changsha, 410005, Hunan, China
- H Xiao
- Department of Pathology, the Third Xiangya Hospital, Central South University, Changsha, 410013, Hunan, China
- Y Gu
- Department of Pathology, the Third Xiangya Hospital, Central South University, Changsha, 410013, Hunan, China
- J P Yun
- Department of Pathology, Sun Yat-Sen University Cancer Center, Guangzhou, 510060, China
- D Huang
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Z Song
- Department of Pathology, Chinese PLA General Hospital, Beijing, 100853, China
- X Fan
- Department of Pathology, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China
- L Chen
- Department of Pathology, The First Affiliated Hospital, Air Force Medical University, Xi'an, 710032, China
- X Yan
- Institute of Pathology and Southwest Cancer Center, Southwest Hospital, Third Military Medical University, Chongqing, 400038, China
- Z Li
- Department of Pathology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, 510080, China
- Z C Huang
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- J Huang
- Department of Anatomy and Neurobiology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- J Luttrell
- School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS, 39406, USA
- C Y Zhang
- School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS, 39406, USA
- W Zhou
- College of Computing, Michigan Technological University, Houghton, MI, 49931, USA
- K Zhang
- Department of Computer Science, Bioinformatics Facility of Xavier NIH RCMI Cancer Research Center, Xavier University of Louisiana, New Orleans, LA, 70125, USA
- C Yi
- Department of Pathology, Ochsner Medical Center, New Orleans, LA, 70121, USA
- C Wu
- Department of Statistics, Florida State University, Tallahassee, FL, 32306, USA
- H Shen
- Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA, 70112, USA
- Division of Biomedical Informatics and Genomics, Deming Department of Medicine, Tulane University School of Medicine, New Orleans, LA, 70112, USA
- Y P Wang
- Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA, 70112, USA
- Department of Biomedical Engineering, Tulane University, New Orleans, LA, 70118, USA
- H M Xiao
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- H W Deng
- Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA, 70112, USA
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- Division of Biomedical Informatics and Genomics, Deming Department of Medicine, Tulane University School of Medicine, New Orleans, LA, 70112, USA
43
Wells A, Patel S, Lee JB, Motaparthi K. Artificial intelligence in dermatopathology: Diagnosis, education, and research. J Cutan Pathol 2021; 48:1061-1068. [PMID: 33421167 DOI: 10.1111/cup.13954] [Citation(s) in RCA: 44] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Revised: 12/03/2020] [Accepted: 12/29/2020] [Indexed: 01/25/2023]
Abstract
Artificial intelligence (AI) utilizes computer algorithms to carry out tasks with human-like intelligence. Convolutional neural networks, a type of deep learning AI, can classify basal cell carcinoma, seborrheic keratosis, and conventional nevi, highlighting the potential for deep learning algorithms to improve diagnostic workflow in dermatopathology of highly routine diagnoses. Additionally, convolutional neural networks can support the diagnosis of melanoma and may help predict disease outcomes. Capabilities of machine learning in dermatopathology can extend beyond clinical diagnosis to education and research. Intelligent tutoring systems can teach visual diagnoses in inflammatory dermatoses, with measurable cognitive effects on learners. Natural language interfaces can instruct dermatopathology trainees to produce diagnostic reports that capture relevant detail for diagnosis in compliance with guidelines. Furthermore, deep learning can power computation- and population-based research. However, there are many limitations of deep learning that need to be addressed before broad incorporation into clinical practice. The current potential of AI in dermatopathology is to supplement diagnosis, and dermatopathologist guidance is essential for the development of useful deep learning algorithms. Herein, the recent progress of AI in dermatopathology is reviewed with emphasis on how deep learning can influence diagnosis, education, and research.
Affiliation(s)
- Amy Wells
- Department of Dermatology, University of Florida College of Medicine, Gainesville, Florida, USA
- Shaan Patel
- Department of Dermatology, Temple University Lewis Katz School of Medicine, Philadelphia, Pennsylvania, USA
- Jason B Lee
- Department of Dermatology and Cutaneous Biology, Sidney Kimmel Medical College at Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Kiran Motaparthi
- Department of Dermatology, University of Florida College of Medicine, Gainesville, Florida, USA
44
Srinidhi CL, Ciga O, Martel AL. Deep neural network models for computational histopathology: A survey. Med Image Anal 2021; 67:101813. [PMID: 33049577 PMCID: PMC7725956 DOI: 10.1016/j.media.2020.101813] [Citation(s) in RCA: 255] [Impact Index Per Article: 63.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2019] [Revised: 05/12/2020] [Accepted: 08/09/2020] [Indexed: 12/14/2022]
Abstract
Histopathological images contain rich phenotypic information that can be used to monitor underlying mechanisms contributing to disease progression and patient survival outcomes. Recently, deep learning has become the mainstream methodological choice for analyzing and interpreting histology images. In this paper, we present a comprehensive review of state-of-the-art deep learning approaches that have been used in the context of histopathological image analysis. From the survey of over 130 papers, we review the field's progress based on the methodological aspect of different machine learning strategies such as supervised, weakly supervised, unsupervised, transfer learning and various other sub-variants of these methods. We also provide an overview of deep learning based survival models that are applicable for disease-specific prognosis tasks. Finally, we summarize several existing open datasets and highlight critical challenges and limitations with current deep learning approaches, along with possible avenues for future research.
Affiliation(s)
- Chetan L Srinidhi
- Physical Sciences, Sunnybrook Research Institute, Toronto, Canada
- Department of Medical Biophysics, University of Toronto, Canada
- Ozan Ciga
- Department of Medical Biophysics, University of Toronto, Canada
- Anne L Martel
- Physical Sciences, Sunnybrook Research Institute, Toronto, Canada
- Department of Medical Biophysics, University of Toronto, Canada
45
Debelee TG, Kebede SR, Schwenker F, Shewarega ZM. Deep Learning in Selected Cancers' Image Analysis-A Survey. J Imaging 2020; 6:121. [PMID: 34460565 PMCID: PMC8321208 DOI: 10.3390/jimaging6110121] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 10/19/2020] [Accepted: 10/26/2020] [Indexed: 02/08/2023] Open
Abstract
Deep learning algorithms have become the first-choice approach to medical image analysis, face recognition, and emotion recognition. In this survey, several deep-learning-based approaches applied to breast cancer, cervical cancer, brain tumors, and colon and lung cancers are studied and reviewed. Deep learning has been applied in almost all of the imaging modalities used for cervical and breast cancers, and to MRI for brain tumors. The review indicates that deep learning methods have achieved state-of-the-art performance in tumor detection, segmentation, feature extraction, and classification. As presented in this paper, the deep learning approaches were used in three different modes: training from scratch, transfer learning by freezing some layers of the network, and modifying the architecture to reduce the number of parameters in the network. Moreover, the application of deep learning to imaging devices for the detection of various cancers has mostly been studied by researchers affiliated with academic and medical institutes in economically developed countries, while the topic has received little attention in Africa despite the dramatic rise of cancer risk on the continent.
Affiliation(s)
- Taye Girma Debelee
- Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia; (S.R.K.); (Z.M.S.)
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, 120611 Addis Ababa, Ethiopia
- Samuel Rahimeto Kebede
- Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia; (S.R.K.); (Z.M.S.)
- Department of Electrical and Computer Engineering, Debreberhan University, 445 Debre Berhan, Ethiopia
- Friedhelm Schwenker
- Institute of Neural Information Processing, University of Ulm, 89081 Ulm, Germany
|
46
|
Li Y, Chai Y, Yin H, Chen B. A novel feature learning framework for high-dimensional data classification. Int J Mach Learn Cybern 2020. [DOI: 10.1007/s13042-020-01188-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
47
|
Pacal I, Karaboga D, Basturk A, Akay B, Nalbantoglu U. A comprehensive review of deep learning in colon cancer. Comput Biol Med 2020; 126:104003. [DOI: 10.1016/j.compbiomed.2020.104003] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2020] [Revised: 08/28/2020] [Accepted: 08/28/2020] [Indexed: 12/17/2022]
|
48
|
Objective Diagnosis for Histopathological Images Based on Machine Learning Techniques: Classical Approaches and New Trends. Mathematics 2020. [DOI: 10.3390/math8111863] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
Abstract
Histopathology refers to the examination of biopsy samples by a pathologist. Histopathology images are captured by a microscope to locate, examine, and classify many diseases, such as different cancer types, and provide a detailed view of diseased tissue and its status. These images are an essential resource for defining biological compositions and analyzing cell and tissue structures, making this imaging modality very important for diagnostic applications. The analysis of histopathology images is accordingly a prolific and relevant research area supporting disease diagnosis. In this paper, the challenges of histopathology image analysis are evaluated, and an extensive review of conventional and deep learning techniques applied in histological image analysis is presented. The review summarizes many current datasets and highlights important challenges and constraints of recent deep learning techniques, alongside possible future research avenues. Despite the progress made so far, this remains a significant area of open research because of the variety of imaging techniques and disease-specific characteristics.
|
49
|
Javed S, Mahmood A, Werghi N, Benes K, Rajpoot N. Multiplex Cellular Communities in Multi-Gigapixel Colorectal Cancer Histology Images for Tissue Phenotyping. IEEE Trans Image Process 2020; PP:9204-9219. [PMID: 32966218 DOI: 10.1109/tip.2020.3023795] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
In computational pathology, automated tissue phenotyping in cancer histology images is a fundamental tool for profiling tumor microenvironments. Current tissue phenotyping methods use features derived from image patches, which may not carry biological significance. In this work, we propose a novel multiplex cellular community-based algorithm for tissue phenotyping, integrating cell-level features within a graph-based hierarchical framework. We demonstrate that such integration offers better performance compared to prior deep learning and texture-based methods, as well as to cellular community-based methods using uniplex networks. To this end, we construct cell-level graphs using texture, alpha diversity, and multi-resolution deep features. Using these graphs, we compute cellular connectivity features, which are then employed to construct a patch-level multiplex network. Over this network, we compute multiplex cellular communities using a novel objective function, which computes a low-dimensional subspace from each cellular network and subsequently seeks a common low-dimensional subspace on the Grassmann manifold. We evaluate our proposed algorithm on three publicly available datasets for tissue phenotyping, demonstrating a significant improvement over existing state-of-the-art methods.
|
50
|
Duan W, Zhang J, Zhang L, Lin Z, Chen Y, Hao X, Wang Y, Zhang H. Evaluation of an artificial intelligent hydrocephalus diagnosis model based on transfer learning. Medicine (Baltimore) 2020; 99:e21229. [PMID: 32702895 PMCID: PMC7373556 DOI: 10.1097/md.0000000000021229] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
Abstract
To design and develop an artificial intelligence (AI) imaging diagnostic model for hydrocephalus (HYC) using a transfer learning algorithm, and to evaluate its application in the diagnosis of HYC on non-contrast-enhanced head computed tomography (CT) images. A training and validation dataset of non-contrast-enhanced head CT examinations comprised 1000 patients with HYC and 1000 normal subjects without HYC, totaling 28,500 images. Images were pre-processed and the feature variables were labeled; the feature variables were then extracted by the neural network for transfer learning. AI algorithm performance was tested on a separate dataset containing 250 HYC examinations and 250 normal examinations. A resident, an attending, and a consultant in the department of radiology were also tested on the same test set, and their results were compared with those of the AI model. The final model showed 93.6% sensitivity (95% confidence interval: 77%, 97%) and 94.4% specificity (95% confidence interval: 79%, 98%) for HYC, with an area under the receiver operating characteristic curve of 0.93. The accuracy rates of the model, resident, attending, and consultant were 94.0%, 93.4%, 95.6%, and 97.0%, respectively. AI can effectively identify the characteristics of HYC from brain CT images and automatically analyze them; in the future, AI could provide auxiliary diagnosis of imaging results and reduce the burden on junior doctors.
Affiliation(s)
- Weike Duan
- Department of Neurosurgery, the First Affiliated Hospital of Henan University of Science and Technology, Luoyang
- Jinsen Zhang
- Department of Neurosurgery, Huashan Hospital, Fudan University
- Liang Zhang
- Shanghai Nanoperception Information Technology Co. Ltd, Shanghai, P.R. China
- Zongsong Lin
- Shanghai Nanoperception Information Technology Co. Ltd, Shanghai, P.R. China
- Yuhang Chen
- Department of Neurosurgery, the First Affiliated Hospital of Henan University of Science and Technology, Luoyang
- Xiaowei Hao
- Department of Neurosurgery, the First Affiliated Hospital of Henan University of Science and Technology, Luoyang
- Yixin Wang
- Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA
- Hongri Zhang
- Department of Neurosurgery, the First Affiliated Hospital of Henan University of Science and Technology, Luoyang
|