1.
Boissin C, Wang Y, Sharma A, Weitz P, Karlsson E, Robertson S, Hartman J, Rantalainen M. Deep learning-based risk stratification of preoperative breast biopsies using digital whole slide images. Breast Cancer Res 2024;26:90. PMID: 38831336; PMCID: PMC11145850; DOI: 10.1186/s13058-024-01840-7.
Abstract
BACKGROUND Nottingham histological grade (NHG) is a well-established prognostic factor in breast cancer histopathology, but it suffers from high inter-assessor variability, and many tumours are classified as intermediate grade, NHG2. Here, we evaluate whether DeepGrade, a model previously developed for risk stratification of resected tumour specimens, can be applied to risk-stratify tumour biopsy specimens. METHODS A total of 11,955,755 tiles from 1169 whole slide images of preoperative biopsies from 896 patients diagnosed with breast cancer in Stockholm, Sweden, were included. DeepGrade, a deep convolutional neural network model, was applied to predict low- and high-risk tumours. It was evaluated against the clinically assigned grades NHG1 and NHG3 on the biopsy specimen, and also against the grades assigned to the corresponding resection specimen, using the area under the receiver operating characteristic curve (AUC). The prognostic value of the DeepGrade model in the biopsy setting was assessed using time-to-event analysis. RESULTS Based on preoperative biopsy images, the DeepGrade model predicted resected tumour cases of clinical grades NHG1 and NHG3 with an AUC of 0.908 (95% CI: 0.88; 0.93). Furthermore, of the 432 resected clinically assigned NHG2 tumours, 281 (65%) were classified as DeepGrade-low and 151 (35%) as DeepGrade-high. Using a multivariable Cox proportional hazards model, the hazard ratio between the DeepGrade low- and high-risk groups was estimated as 2.01 (95% CI: 1.06; 3.79). CONCLUSIONS DeepGrade predicted tumour grades NHG1 and NHG3 of the resection specimen using only the biopsy specimen. The results demonstrate that the DeepGrade model can provide decision support for identifying high-risk tumours from preoperative biopsies, thus improving early treatment decisions.
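The discrimination results above are reported as AUC values. As a reminder of what that metric measures, here is a minimal sketch (the scores are invented toy values, not data from the study): the AUC equals the probability that a randomly chosen positive case is ranked above a randomly chosen negative one.

```python
def auc(pos_scores, neg_scores):
    """Mann-Whitney estimate of the area under the ROC curve:
    the fraction of positive/negative pairs ranked correctly."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count half
    return wins / (len(pos_scores) * len(neg_scores))

# Toy risk scores (hypothetical): model outputs for high- vs low-grade cases.
high_risk = [0.9, 0.8, 0.4]
low_risk = [0.3, 0.5, 0.2]
print(round(auc(high_risk, low_risk), 3))  # → 0.889
```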
Affiliation(s)
- Constance Boissin
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Yinxi Wang
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Abhinav Sharma
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Philippe Weitz
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Emelie Karlsson
  - Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden
- Johan Hartman
  - Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden
  - Department of Clinical Pathology and Cancer Diagnostics, Karolinska University Hospital, Stockholm, Sweden
  - MedTechLabs, BioClinicum, Karolinska University Hospital, Stockholm, Sweden
- Mattias Rantalainen
  - Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
  - MedTechLabs, BioClinicum, Karolinska University Hospital, Stockholm, Sweden
2.
Karampuri A, Kundur S, Perugu S. Exploratory drug discovery in breast cancer patients: A multimodal deep learning approach to identify novel drug candidates targeting RTK signaling. Comput Biol Med 2024;174:108433. PMID: 38642491; DOI: 10.1016/j.compbiomed.2024.108433.
Abstract
Breast cancer, a formidable and diverse malignancy that predominantly affects women globally, is challenging to diagnose accurately because of its intricate genetic variability. Various therapies such as immunotherapy, radiotherapy, and diverse chemotherapy approaches like drug repurposing and combination therapy are widely used depending on the cancer subtype and severity of metastasis. Our study revolves around an innovative drug discovery strategy targeting potential drug candidates specific to RTK signalling, a prominently targeted receptor class in cancer. To accomplish this, we developed a rigorously validated multimodal deep neural network (MM-DNN)-based QSAR model that integrates omics datasets covering genomic and proteomic expression data together with drug responses. The results show an R2 value of 0.917 and an RMSE value of 0.312, affirming the model's strong predictive capability. Structural analogs of drug molecules specific to RTK signalling were sourced from the PubChem database, followed by careful screening to eliminate dissimilar compounds. Leveraging the MM-DNN-based QSAR model, we predicted the biological activity of these molecules and subsequently clustered them into three distinct groups. Feature importance analysis was performed. Consequently, we identified prime drug candidates tailored for each potential downstream regulatory protein within the RTK signalling pathway. This approach accelerates the early stages of drug development by filtering out inactive compounds, offering a promising avenue for combating breast cancer.
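The regression model above is summarized by R2 and RMSE. A minimal sketch of how those two reported metrics are computed, using invented toy activity values rather than the study's data:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error between observed and predicted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Toy pIC50-like activities (hypothetical, for illustration only).
y = [5.0, 6.0, 7.0]
y_hat = [5.1, 5.9, 7.2]
print(round(r_squared(y, y_hat), 2), round(rmse(y, y_hat), 3))  # → 0.97 0.141
```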
Affiliation(s)
- Anush Karampuri
  - Department of Biotechnology, National Institute of Technology, Warangal, 500604, India
- Sunitha Kundur
  - Department of Biotechnology, National Institute of Technology, Warangal, 500604, India
- Shyam Perugu
  - Department of Biotechnology, National Institute of Technology, Warangal, 500604, India
3.
Lakshmi Priya CV, Biju VG, Vinod BR, Ramachandran S. Deep learning approaches for breast cancer detection in histopathology images: A review. Cancer Biomark 2024;40:1-25. PMID: 38517775; DOI: 10.3233/cbm-230251.
Abstract
BACKGROUND Breast cancer is one of the leading causes of death in women worldwide. Histopathology analysis of breast tissue is an essential tool for diagnosing and staging breast cancer. In recent years, there has been a significant increase in research exploring the use of deep-learning approaches for breast cancer detection from histopathology images. OBJECTIVE To provide an overview of the current state-of-the-art technologies for automated breast cancer detection in histopathology images using deep learning techniques. METHODS This review focuses on the use of deep learning algorithms for the detection and classification of breast cancer from histopathology images. We provide an overview of publicly available histopathology image datasets for breast cancer detection, and we highlight the strengths and weaknesses of the deep learning architectures employed and their performance on different histopathology image datasets. Finally, we discuss the challenges of using deep learning techniques for breast cancer detection, including the need for large and diverse datasets and the interpretability of deep learning models. RESULTS Deep learning techniques have shown great promise in accurately detecting and classifying breast cancer from histopathology images. Although accuracy levels vary depending on the specific dataset, image pre-processing techniques, and deep learning architecture used, these results highlight the potential of deep learning algorithms to improve the accuracy and efficiency of breast cancer detection from histopathology images. CONCLUSION This review has presented a thorough account of the current state-of-the-art techniques for detecting breast cancer using histopathology images. The integration of machine learning and deep learning algorithms has demonstrated promising results in accurately identifying breast cancer from histopathology images. The insights gathered here can serve as a valuable reference for researchers developing diagnostic strategies based on histopathology images. Overall, the objective of this review is to spark interest among scholars in this complex field and acquaint them with cutting-edge technologies in breast cancer detection using histopathology images.
Affiliation(s)
- Lakshmi Priya C V
  - Department of Electronics and Communication Engineering, College of Engineering Trivandrum, Kerala, India
- Biju V G
  - Department of Electronics and Communication Engineering, College of Engineering Munnar, Kerala, India
- Vinod B R
  - Department of Electronics and Communication Engineering, College of Engineering Trivandrum, Kerala, India
- Sivakumar Ramachandran
  - Department of Electronics and Communication Engineering, Government Engineering College Wayanad, Kerala, India
4.
Khan S, Khan A. SkinViT: A transformer based method for Melanoma and Nonmelanoma classification. PLoS One 2023;18:e0295151. PMID: 38150449; PMCID: PMC10752524; DOI: 10.1371/journal.pone.0295151.
Abstract
Over the past few decades, skin cancer has emerged as a major global health concern, and the efficacy of treatment depends greatly on early diagnosis. The automated classification of Melanoma and Nonmelanoma is a challenging task due to the high visual similarity across different classes and the variability within each class. To the best of our knowledge, this study is the first to classify Melanoma and Nonmelanoma with Basal Cell Carcinoma (BCC) and Squamous Cell Carcinoma (SCC) grouped under the Nonmelanoma class. This research therefore focuses on the automated detection of different skin cancer types to assist dermatologists in the timely diagnosis and treatment of Melanoma and Nonmelanoma patients. Recently, artificial intelligence (AI) methods have gained popularity, with Convolutional Neural Networks (CNNs) employed to classify various skin diseases. However, CNNs are limited in their ability to capture global contextual information, which can cause important information to be missed. To address this issue, this research explores the outlook attention mechanism, inspired by the vision outlooker, which enhances important features while suppressing noisy ones. The proposed SkinViT architecture integrates an outlooker block, a transformer block, and an MLP head block to capture both fine-level and global features and thereby improve the accuracy of Melanoma and Nonmelanoma classification. SkinViT is assessed with several performance metrics: recall, precision, classification accuracy, and F1 score. We performed extensive experiments on three datasets: Dataset1, extracted from ISIC2019; Dataset2, collected from various online dermatological databases; and Dataset3, which combines both. The proposed SkinViT achieved accuracies of 0.9109 on Dataset1, 0.8911 on Dataset3, and 0.8611 on Dataset2. Moreover, SkinViT outperformed other SOTA models and achieved higher accuracy than previous work in the literature, demonstrating strong performance in the classification of Melanoma and Nonmelanoma dermoscopic images. This work is expected to inspire further research into systems for detecting skin cancer that can assist dermatologists in the timely diagnosis of Melanoma and Nonmelanoma patients.
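The evaluation metrics named above (recall, precision, accuracy, F1) all derive from the confusion matrix. A small illustrative sketch with hypothetical counts, not the paper's results:

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall (sensitivity), accuracy, and F1 from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1

# Toy confusion counts for a melanoma-vs-nonmelanoma classifier (invented).
p, r, acc, f1 = metrics(tp=80, fp=20, fn=20, tn=80)
print(p, r, acc)  # → 0.8 0.8 0.8
```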
Affiliation(s)
- Somaiya Khan
  - School of Electronics Engineering, Beijing University of Posts and Telecommunications, Beijing, China
- Ali Khan
  - School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
5.
Liu Z, Li H, Li W, Zhang F, Ouyang W, Wang S, Zhi A, Pan X. Development of an Expert-Level Right Ventricular Abnormality Detection Algorithm Based on Deep Learning. Interdiscip Sci 2023;15:653-662. PMID: 37470945; DOI: 10.1007/s12539-023-00581-z.
Abstract
PURPOSE Studies of the right ventricle (RV) remain scarce, and dedicated diagnostic algorithms still need improvement. This study explores and validates a deep learning algorithm, based on imaging and clinical data, for detecting RV abnormalities. METHODS The Automated Cardiac Diagnosis Challenge dataset includes 20 subjects with RV abnormalities (an RV cavity volume above 110 mL/m2 or an RV ejection fraction below 40%) and 20 normal subjects, all of whom underwent cardiac MRI. The subjects were separated into training and validation sets in a 7:3 ratio and modeled using a deep-learning neural network and six machine-learning algorithms. Eight MRI specialists from multiple centers independently determined whether each subject in the validation group had RV abnormalities. Model performance was evaluated based on AUC, accuracy, recall, sensitivity, and specificity. Furthermore, a preliminary assessment of patient disease risk was performed from clinical information using a nomogram. RESULTS The deep-learning neural network outperformed the other six machine-learning algorithms, with an AUC of 1 (95% confidence interval: 1-1) on both the training and validation sets. The algorithm also surpassed most of the human experts (87.5%). In addition, the nomogram model could evaluate a population with a disease risk of 0.2-0.8. CONCLUSIONS A deep-learning algorithm could effectively identify patients with RV abnormalities. This AI algorithm, developed specifically for right ventricular abnormalities, should improve their detection at all levels of care and facilitate the timely diagnosis and treatment of related diseases. In addition, this study is the first to validate the algorithm's ability to classify RV abnormalities by comparing it with human experts.
Affiliation(s)
- Zeye Liu
  - Department of Structural Heart Disease, National Center for Cardiovascular Disease, China and Fuwai Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100037, China
  - National Health Commission Key Laboratory of Cardiovascular Regeneration Medicine, Beijing, 100037, China
  - Key Laboratory of Innovative Cardiovascular Devices, Chinese Academy of Medical Sciences, Beijing, 100037, China
  - National Clinical Research Center for Cardiovascular Diseases, Fuwai Hospital, Chinese Academy of Medical Sciences, Beijing, 100037, China
- Hang Li
  - Department of Structural Heart Disease, National Center for Cardiovascular Disease, China and Fuwai Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100037, China
  - National Health Commission Key Laboratory of Cardiovascular Regeneration Medicine, Beijing, 100037, China
  - Key Laboratory of Innovative Cardiovascular Devices, Chinese Academy of Medical Sciences, Beijing, 100037, China
  - National Clinical Research Center for Cardiovascular Diseases, Fuwai Hospital, Chinese Academy of Medical Sciences, Beijing, 100037, China
- Wenchao Li
  - Pediatric Cardiac Surgery, Henan Provincial People's Hospital, Huazhong Fuwai Hospital, Zhengzhou University People's Hospital, Zhengzhou, 450000, China
- Fengwen Zhang
  - Department of Structural Heart Disease, National Center for Cardiovascular Disease, China and Fuwai Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100037, China
  - National Health Commission Key Laboratory of Cardiovascular Regeneration Medicine, Beijing, 100037, China
  - Key Laboratory of Innovative Cardiovascular Devices, Chinese Academy of Medical Sciences, Beijing, 100037, China
  - National Clinical Research Center for Cardiovascular Diseases, Fuwai Hospital, Chinese Academy of Medical Sciences, Beijing, 100037, China
- Wenbin Ouyang
  - Department of Structural Heart Disease, National Center for Cardiovascular Disease, China and Fuwai Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100037, China
  - National Health Commission Key Laboratory of Cardiovascular Regeneration Medicine, Beijing, 100037, China
  - Key Laboratory of Innovative Cardiovascular Devices, Chinese Academy of Medical Sciences, Beijing, 100037, China
  - National Clinical Research Center for Cardiovascular Diseases, Fuwai Hospital, Chinese Academy of Medical Sciences, Beijing, 100037, China
- Shouzheng Wang
  - Department of Structural Heart Disease, National Center for Cardiovascular Disease, China and Fuwai Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100037, China
  - National Health Commission Key Laboratory of Cardiovascular Regeneration Medicine, Beijing, 100037, China
  - Key Laboratory of Innovative Cardiovascular Devices, Chinese Academy of Medical Sciences, Beijing, 100037, China
  - National Clinical Research Center for Cardiovascular Diseases, Fuwai Hospital, Chinese Academy of Medical Sciences, Beijing, 100037, China
- Aihua Zhi
  - Department of Medical Imaging, Fuwai Yunnan Cardiovascular Hospital, Kunming, 650000, China
- Xiangbin Pan
  - Department of Structural Heart Disease, National Center for Cardiovascular Disease, China and Fuwai Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100037, China
  - National Health Commission Key Laboratory of Cardiovascular Regeneration Medicine, Beijing, 100037, China
  - Key Laboratory of Innovative Cardiovascular Devices, Chinese Academy of Medical Sciences, Beijing, 100037, China
  - National Clinical Research Center for Cardiovascular Diseases, Fuwai Hospital, Chinese Academy of Medical Sciences, Beijing, 100037, China
6.
Wang H, Huang G, Zhao Z, Cheng L, Juncker-Jensen A, Nagy ML, Lu X, Zhang X, Chen DZ. CCF-GNN: A Unified Model Aggregating Appearance, Microenvironment, and Topology for Pathology Image Classification. IEEE Trans Med Imaging 2023;42:3179-3193. PMID: 37027573; DOI: 10.1109/tmi.2023.3249343.
Abstract
Pathology images contain rich information on cell appearance, microenvironment, and topology for cancer analysis and diagnosis. Among these features, topology is becoming increasingly important in analyses for cancer immunotherapy. By analyzing geometric and hierarchically structured cell distribution topology, oncologists can identify densely packed, cancer-relevant cell communities (CCs) for decision-making. Compared with commonly used pixel-level Convolutional Neural Network (CNN) features and cell-instance-level Graph Neural Network (GNN) features, CC topology features sit at a higher level of granularity and geometry. However, topological features have not been well exploited by recent deep learning (DL) methods for pathology image classification, owing to the lack of effective topological descriptors for cell distribution and gathering patterns. In this paper, inspired by clinical practice, we analyze and classify pathology images by comprehensively learning cell appearance, microenvironment, and topology in a fine-to-coarse manner. To describe and exploit topology, we design the Cell Community Forest (CCF), a novel graph that represents the hierarchical formation of big-sparse CCs from small-dense CCs. Using CCF as a new geometric topological descriptor of tumor cells in pathology images, we propose CCF-GNN, a GNN model that successively aggregates heterogeneous features (e.g., appearance, microenvironment) from the cell-instance level through the cell-community level to the image level for pathology image classification. Extensive cross-validation experiments show that our method significantly outperforms alternative methods on H&E-stained and immunofluorescence images for disease grading tasks across multiple cancer types. The proposed CCF-GNN establishes a new topological data analysis (TDA) based method that facilitates integrating multi-level heterogeneous features of point clouds (e.g., of cells) into a unified DL framework.
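The Cell Community Forest is built from hierarchies of cell communities. As a rough illustration of the underlying idea (a generic sketch, not the authors' implementation), one level of such a hierarchy can be obtained by linking cells within a distance threshold; growing the radius merges small dense communities into larger sparse ones.

```python
import math

def communities(points, radius):
    """Group 2D points into communities: two cells share a community if a
    chain of pairwise distances <= radius connects them (union-find)."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, (xi, yi) in enumerate(points):
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            if math.hypot(xi - xj, yi - yj) <= radius:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Toy cell centroids: two dense clusters at a small radius, one at a large radius.
cells = [(0, 0), (0, 1), (1, 1), (10, 10), (10, 11)]
print(len(communities(cells, 2)))   # → 2
print(len(communities(cells, 15)))  # → 1
```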
7.
Alirezazadeh P, Dornaika F. Boosted Additive Angular Margin Loss for breast cancer diagnosis from histopathological images. Comput Biol Med 2023;166:107528. PMID: 37774559; DOI: 10.1016/j.compbiomed.2023.107528.
Abstract
Pathologists use biopsies and microscopic examination to accurately diagnose breast cancer, a process that is time-consuming, labor-intensive, and costly. Convolutional neural networks (CNNs) offer an efficient and highly accurate way to reduce analysis time and automate the diagnostic workflow in pathology. However, the softmax loss commonly used in existing CNNs leads to noticeable ambiguity in decision boundaries and lacks a clear constraint for minimizing within-class variance. In response, softmax losses based on an angular margin were developed. These losses were introduced in the context of face recognition, with the goal of integrating an angular margin into the softmax loss; this improves feature discrimination during CNN training by increasing the distance between different classes while reducing the variance within each class. Despite significant progress, these losses apply margin penalties to the target class only, which may not be optimal. In this paper, we introduce the Boosted Additive Angular Margin Loss (BAM) to obtain highly discriminative features for breast cancer diagnosis from histopathological images. BAM not only penalizes the angle between deep features and their target-class weights but also considers the angles between deep features and non-target-class weights. We performed extensive experiments on the publicly available BreaKHis dataset. BAM achieved remarkable accuracies of 99.79%, 99.86%, 99.96%, and 97.65% at magnification levels of 40X, 100X, 200X, and 400X, respectively. These results represent accuracy improvements of 0.13%, 0.34%, and 0.21% at 40X, 100X, and 200X magnification, respectively, over the baseline methods. Additional experiments were performed on the BACH dataset for breast cancer classification and on the widely used LFW and YTF datasets for face recognition to evaluate the generalization ability of the proposed loss function. The results show that BAM outperforms state-of-the-art methods by enlarging the decision space between classes and minimizing intra-class variance, resulting in improved discriminability.
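The angular-margin family of losses that BAM extends can be sketched as follows. This is the standard ArcFace-style additive margin, not BAM itself (whose non-target penalty is not specified in the abstract); all values and names are illustrative.

```python
import numpy as np

def arcface_logits(features, weights, target, margin=0.5, scale=30.0):
    """Additive angular margin: add `margin` to the angle between the feature
    and its target-class weight before the scaled softmax (ArcFace-style)."""
    f = features / np.linalg.norm(features)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = w @ f                                     # cosine to each class
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    logits = cos.copy()
    logits[target] = np.cos(theta[target] + margin)  # penalize target angle
    return scale * logits

def softmax_ce(logits, target):
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[target])

# Toy 2-class setup with a feature aligned to class 0's weight vector.
x = np.array([1.0, 0.0])
W = np.array([[1.0, 0.0], [0.0, 1.0]])
plain = softmax_ce(arcface_logits(x, W, target=0, margin=0.0), 0)
margined = softmax_ce(arcface_logits(x, W, target=0, margin=0.5), 0)
# The margin makes the target class harder to satisfy, so the loss rises.
assert margined > plain
```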
Affiliation(s)
- Fadi Dornaika
  - Ho Chi Minh City Open University, Ho Chi Minh City, Viet Nam
8.
Xu C, Yi K, Jiang N, Li X, Zhong M, Zhang Y. MDFF-Net: A multi-dimensional feature fusion network for breast histopathology image classification. Comput Biol Med 2023;165:107385. PMID: 37633086; DOI: 10.1016/j.compbiomed.2023.107385.
Abstract
Breast cancer is a common malignancy, and its early detection and treatment are crucial. Computer-aided diagnosis (CAD) based on deep learning has significantly advanced medical diagnostics in recent years, enhancing accuracy and efficiency. Despite its convenience, this technology has limitations: when the morphological characteristics of a patient's pathological section are subtle or complex, small lesions or cells deep within the lesion may go unrecognized, making misdiagnosis likely. We therefore propose MDFF-Net, a CNN-based multi-dimensional feature fusion network. The model consists of a one-dimensional feature extraction network, a two-dimensional feature extraction network, and a feature fusion classification network. The backbone of the two-dimensional feature extraction network stacks modules that integrate multi-scale channel-shuffling networks with channel attention modules. Furthermore, inspired by natural language processing, the model incorporates a one-dimensional feature extraction network to extract detailed information from the image, avoiding misdiagnoses caused by insufficient extraction of information such as cell morphology and degree of differentiation. Finally, the extracted one-dimensional and two-dimensional features are fused in the feature fusion network and used for the final classification. The effectiveness of MDFF-Net and of classical classification models was evaluated on the BreakHis and BACH datasets. According to the experimental results, MDFF-Net achieves an accuracy of 98.86% on the BreakHis dataset and 86.25% on the BACH dataset. Furthermore, to assess the model's effectiveness on other classification tasks, additional experiments were run on colon cancer and lung cancer datasets, achieving a classification accuracy of 100% in both cases.
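A channel attention module of the squeeze-and-excitation kind mentioned above can be sketched in a few lines. This is a generic illustration with random weights, not the authors' exact block:

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation-style channel attention: global-average-pool
    each channel, pass the vector through a small two-layer MLP, and rescale
    the channels by the resulting sigmoid gates."""
    squeeze = feature_map.mean(axis=(1, 2))          # (C,) channel descriptors
    hidden = np.maximum(0.0, w1 @ squeeze)           # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid gates in (0, 1)
    return feature_map * gates[:, None, None], gates

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 4, 4))    # C=8 channels over a 4x4 spatial grid (toy)
w1 = rng.normal(size=(2, 8))      # bottleneck: reduction ratio 4
w2 = rng.normal(size=(8, 2))
out, gates = channel_attention(x, w1, w2)
print(out.shape)  # → (8, 4, 4)
```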
Affiliation(s)
- Cheng Xu
  - School of Information Engineering, East China Jiaotong University, Nanchang, 330013, China
- Ke Yi
  - School of Information Engineering, East China Jiaotong University, Nanchang, 330013, China
- Nan Jiang
  - School of Information Engineering, East China Jiaotong University, Nanchang, 330013, China
- Xiong Li
  - School of Software, East China Jiaotong University, Nanchang, 330013, China
- Meiling Zhong
  - School of Materials Science and Engineering, East China Jiaotong University, Nanchang, 330013, China
- Yuejin Zhang
  - School of Information Engineering, East China Jiaotong University, Nanchang, 330013, China
9.
Wang J, Quan H, Wang C, Yang G. Pyramid-based self-supervised learning for histopathological image classification. Comput Biol Med 2023;165:107336. PMID: 37708715; DOI: 10.1016/j.compbiomed.2023.107336.
Abstract
Large-scale labeled datasets are crucial for the success of supervised learning in medical imaging. However, annotating histopathological images is a time-consuming and labor-intensive task that requires highly trained professionals. To address this challenge, self-supervised learning (SSL) can be used to pre-train models on large amounts of unlabeled data and transfer the learned representations to various downstream tasks. In this study, we propose a self-supervised Pyramid-based Local Wavelet Transformer (PLWT) model for effectively extracting rich image representations. The PLWT model extracts both local and global features while pre-training on a large number of unlabeled histopathology images in a self-supervised manner. A wavelet transform replaces average pooling in the downsampling of the multi-head attention, achieving a significant reduction in the information lost when transmitting image features. Additionally, we introduce a Local Squeeze-and-Excitation (Local SE) module in the feedforward network, combined with an inverted residual, to capture local image information. We evaluate PLWT's performance on three histopathological image datasets and demonstrate the impact of pre-training. Our results indicate that PLWT with self-supervised learning is highly competitive with other SSL methods, and that the transferability of visual representations generated by SSL on domain-relevant histopathological images exceeds that of a supervised baseline trained on ImageNet.
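Replacing average pooling with a wavelet transform keeps the detail bands that pooling discards. A one-level 2D Haar decomposition illustrates the idea (a generic sketch, not the PLWT implementation):

```python
import numpy as np

def haar2d(x):
    """One level of the 2D Haar transform: downsamples by 2 while keeping
    horizontal/vertical/diagonal detail bands, so less information is lost
    than with plain average pooling (which keeps only the LL band)."""
    a = x[0::2, 0::2]
    b = x[0::2, 1::2]
    c = x[1::2, 0::2]
    d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # approximation (akin to 2x average pooling)
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

x = np.array([[1.0, 2.0], [3.0, 4.0]])
ll, lh, hl, hh = haar2d(x)
print(ll[0, 0], lh[0, 0], hl[0, 0], hh[0, 0])  # → 5.0 -1.0 -2.0 0.0
```

Because all four bands are kept, the transform is invertible; average pooling alone cannot recover the input.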
Affiliation(s)
- Junjie Wang
  - Ningbo Artificial Intelligence Institute of Shanghai Jiao Tong University, Zhejiang 315000, PR China
  - Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, PR China
- Hao Quan
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110016, PR China
- Chengguang Wang
  - Ningbo Industrial Internet Institute, Zhejiang 315000, PR China
- Genke Yang
  - Ningbo Artificial Intelligence Institute of Shanghai Jiao Tong University, Zhejiang 315000, PR China
  - Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, PR China
10.
Yusoff M, Haryanto T, Suhartanto H, Mustafa WA, Zain JM, Kusmardi K. Accuracy Analysis of Deep Learning Methods in Breast Cancer Classification: A Structured Review. Diagnostics (Basel) 2023;13:683. PMID: 36832171; PMCID: PMC9955565; DOI: 10.3390/diagnostics13040683.
Abstract
Breast cancer is diagnosed using histopathological imaging, a task that is extremely time-consuming due to high image complexity and volume. Facilitating the early detection of breast cancer is nonetheless important for timely medical intervention. Deep learning (DL) has become popular in medical imaging solutions and has demonstrated various levels of performance in diagnosing cancerous images. Nonetheless, achieving high precision while minimizing overfitting remains a significant challenge for classification solutions, as does the handling of imbalanced data and incorrect labeling. Additional methods, such as pre-processing, ensembling, and normalization techniques, have been established to enhance image characteristics; these methods can influence classification solutions and be used to overcome overfitting and data-balancing issues. Hence, developing a more sophisticated DL variant could improve classification accuracy while reducing overfitting. Technological advancements in DL have fueled the growth of automated breast cancer diagnosis in recent years. The objective of this paper is to systematically review and analyze current research on the capability of DL to classify histopathological breast cancer images. Literature from the Scopus and Web of Science (WOS) indexes was reviewed, and recent approaches to histopathological breast cancer image classification in DL applications were assessed for papers published up to November 2022. The findings suggest that DL methods, especially convolutional neural networks and their hybrids, are the most cutting-edge approaches currently in use. Finding a new technique first requires surveying the landscape of existing DL approaches and their hybrid methods in order to conduct comparisons and case studies.
Affiliation(s)
- Marina Yusoff
  - Institute for Big Data Analytics and Artificial Intelligence (IBDAAI), Kompleks Al-Khawarizmi, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia
  - College of Computing, Informatics and Media, Kompleks Al-Khawarizmi, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia
  - Correspondence: (M.Y.); (W.A.M.)
- Toto Haryanto
  - Department of Computer Science, IPB University, Bogor 16680, Indonesia
- Heru Suhartanto
  - Faculty of Computer Science, Universitas Indonesia, Depok 16424, Indonesia
- Wan Azani Mustafa
  - Faculty of Electrical Engineering Technology, Universiti Malaysia Perlis, UniCITI Alam Campus, Sungai Chuchuh, Padang Besar 02100, Perlis, Malaysia
  - Correspondence: (M.Y.); (W.A.M.)
- Jasni Mohamad Zain
  - Institute for Big Data Analytics and Artificial Intelligence (IBDAAI), Kompleks Al-Khawarizmi, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia
  - College of Computing, Informatics and Media, Kompleks Al-Khawarizmi, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia
- Kusmardi Kusmardi
  - Department of Anatomical Pathology, Faculty of Medicine, Universitas Indonesia/Cipto Mangunkusumo Hospital, Jakarta 10430, Indonesia
  - Human Cancer Research Cluster, Indonesia Medical Education and Research Institute, Universitas Indonesia, Jakarta 10430, Indonesia
|
11
|
Wrapper-based deep feature optimization for activity recognition in the wearable sensor networks of healthcare systems. Sci Rep 2023; 13:965. [PMID: 36653370 PMCID: PMC9846703 DOI: 10.1038/s41598-022-27192-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Accepted: 12/28/2022] [Indexed: 01/19/2023] Open
Abstract
The Human Activity Recognition (HAR) problem leverages pattern recognition to classify physical human activities as they are captured by several sensor modalities. Remote monitoring of an individual's activities has gained importance due to the reduction in travel and physical activities during the pandemic. Research on HAR enables one person to remotely monitor or recognize another person's activity via a ubiquitous mobile device or sensor-based Internet of Things (IoT). Our proposed work focuses on the accurate classification of daily human activities from both accelerometer and gyroscope sensor data after converting them into spectrogram images. Feature extraction then leverages the pre-trained weights of two popular and efficient transfer-learning convolutional neural network models. Finally, a wrapper-based feature selection method is employed to select the optimal feature subset, which both reduces the training time and improves the final classification performance. The proposed HAR model has been tested on three benchmark datasets, namely HARTH, KU-HAR and HuGaDB, achieving accuracies of 88.89%, 97.97% and 93.82%, respectively. Notably, it improves the overall classification accuracies by about 21%, 20% and 6% while utilizing only 52%, 45% and 60% of the original feature sets for HuGaDB, KU-HAR and HARTH, respectively. This demonstrates the effectiveness of the proposed wrapper-based feature selection methodology.
|
12
|
A Multi-Stage Approach to Breast Cancer Classification Using Histopathology Images. Diagnostics (Basel) 2022; 13:126. [PMID: 36611418 PMCID: PMC9818545 DOI: 10.3390/diagnostics13010126] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Revised: 12/16/2022] [Accepted: 12/25/2022] [Indexed: 01/03/2023] Open
Abstract
Breast cancer is one of the deadliest diseases worldwide among women. Early diagnosis and proper treatment can save many lives. Breast image analysis is a popular method for detecting breast cancer, and computer-aided diagnosis of breast images helps radiologists do the task more efficiently and appropriately. Histopathological image analysis, which examines microscopic images of breast tissue, is an important diagnostic method for breast cancer. In this work, we developed a deep learning-based method to classify breast cancer using histopathological images. We propose a patch-classification model in which the images are divided into patches that are pre-processed with stain normalization, regularization, and augmentation methods. We use machine-learning-based classifiers and ensembling methods to classify the image patches into four categories: normal, benign, in situ, and invasive. Next, we use the patch information from this model to classify the images into two classes (cancerous and non-cancerous) as well as the four classes above. We also introduce a model that utilizes the 2-class classification probabilities to refine the 4-class prediction. The proposed method yields promising results, achieving a classification accuracy of 97.50% for 4-class image classification and 98.6% for 2-class image classification on the ICIAR BACH dataset.
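The patch-to-image aggregation step this abstract describes can be illustrated with a small sketch. The four class names and the 2-class grouping (in situ and invasive count as cancerous) follow the abstract; the per-patch probabilities and the mean-pooling rule are assumptions made for illustration, not the paper's actual aggregation model:

```python
# Hypothetical sketch: pool per-patch class probabilities into one
# image-level 4-class and 2-class decision.
CLASSES = ["normal", "benign", "in_situ", "invasive"]

def aggregate_patches(patch_probs):
    """Average per-patch probability vectors into an image-level prediction."""
    n = len(patch_probs)
    mean = [sum(p[k] for p in patch_probs) / n for k in range(len(CLASSES))]
    four_class = CLASSES[mean.index(max(mean))]
    # 2-class decision: 'in situ' and 'invasive' count as cancerous
    p_cancer = mean[2] + mean[3]
    two_class = "cancerous" if p_cancer >= 0.5 else "non-cancerous"
    return four_class, two_class, mean

# three patches from one slide (made-up probability vectors)
patches = [
    [0.05, 0.10, 0.25, 0.60],
    [0.10, 0.20, 0.30, 0.40],
    [0.02, 0.08, 0.20, 0.70],
]
four, two, mean = aggregate_patches(patches)
print(four, two)  # → invasive cancerous
```

Mean pooling is only one of several plausible aggregation rules; majority voting over patch labels, or a second-stage classifier fed the patch statistics (closer to what the abstract hints at), would slot into the same interface.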
|