1. Majanga V, Mnkandla E, Wang Z, Moulla DK. Automatic Blob Detection Method for Cancerous Lesions in Unsupervised Breast Histology Images. Bioengineering (Basel) 2025; 12:364. [PMID: 40281724] [PMCID: PMC12024787] [DOI: 10.3390/bioengineering12040364]
Abstract
The early detection of cancerous lesions is a challenging task given the biology of cancer and the variability in tissue characteristics, which make medical image analysis tedious and time-inefficient. Conventional computer-aided diagnosis (CAD) and detection methods have relied heavily on the visual inspection of medical images, which is ineffective even for large and visible cancerous lesions. Such methods also struggle to analyze objects in large images because of overlapping/intersecting objects and the inability to resolve their boundaries/edges. Nevertheless, the early detection of breast cancer lesions is a key determinant for diagnosis and treatment. In this study, we present a deep learning-based technique for breast cancer lesion detection, namely blob detection, which automatically detects hidden and inaccessible cancerous lesions in unsupervised human breast histology images. First, the approach prepares and pre-processes data through various augmentation methods to increase the dataset size. Second, a stain normalization technique is applied to the augmented images to separate nucleus features from tissue structures. Third, morphology operations (erosion, dilation, opening, and a distance transform) enhance the images by highlighting foreground and background pixels while removing overlapping regions from the highlighted nucleus objects. Image segmentation is then handled via the connected components method, which groups highlighted pixel components with similar intensity values and assigns them to their corresponding labeled components (binary masks). These binary masks are passed to the active contours method for further segmentation by highlighting the boundaries/edges of ROIs. Finally, a deep learning recurrent neural network (RNN) model automatically detects and extracts cancerous lesions and their edges from the histology images via blob detection. The approach combines the connected components method and the active contours method to overcome the limitations of blob detection. It is evaluated on 27,249 unsupervised, augmented human breast cancer histology images and achieves an F1 score of 98.82%.
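The morphology, connected-components, and blob-detection steps described in this abstract can be sketched with scikit-image and SciPy roughly as follows; this is only an illustrative pipeline, not the authors' implementation (their stain normalization, active contours, and RNN stages are omitted, and the file name is hypothetical).

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import io, color, filters, morphology, measure, feature

def detect_nucleus_blobs(image_path):
    """Toy pipeline: Otsu threshold -> morphology -> distance transform ->
    connected components -> Laplacian-of-Gaussian blob detection."""
    rgb = io.imread(image_path)
    gray = color.rgb2gray(rgb)

    # Foreground mask of nuclei (dark objects on a bright H&E background)
    mask = gray < filters.threshold_otsu(gray)

    # Morphological clean-up: opening removes specks, dilation closes small gaps
    mask = morphology.binary_opening(mask, morphology.disk(2))
    mask = morphology.binary_dilation(mask, morphology.disk(1))

    # Distance transform helps separate touching/overlapping nuclei
    distance = ndi.distance_transform_edt(mask)

    # Connected components -> labeled binary masks
    labels = measure.label(mask)

    # LoG blob detection highlights roughly circular nucleus-like objects
    blobs = feature.blob_log(distance, min_sigma=3, max_sigma=15, threshold=0.1)
    return labels, blobs

if __name__ == "__main__":
    labels, blobs = detect_nucleus_blobs("patch.png")   # hypothetical file
    print(f"{labels.max()} connected components, {len(blobs)} blobs")
```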
2. Bahrambanan F, Alizamir M, Moradveisi K, Heddam S, Kim S, Kim S, Soleimani M, Afshar S, Taherkhani A. The development of an efficient artificial intelligence-based classification approach for colorectal cancer response to radiochemotherapy: deep learning vs. machine learning. Sci Rep 2025; 15:62. [PMID: 39748016] [PMCID: PMC11696929] [DOI: 10.1038/s41598-024-84023-w]
Abstract
Colorectal cancer (CRC) is a form of cancer that affects the rectum and colon. It typically begins as a small abnormal growth known as a polyp, which may be non-cancerous or cancerous. Early detection of colorectal cancer, the second deadliest cancer after lung cancer, can therefore be highly beneficial. Moreover, the widely accepted standard treatment for locally advanced colorectal cancer is chemoradiotherapy. In this study, seven artificial intelligence models, including decision tree, K-nearest neighbors, AdaBoost, random forest, gradient boosting, multi-layer perceptron, and convolutional neural network, were implemented to identify patients as responders or non-responders to radiochemotherapy. To find the potential predictors (genes), three feature selection strategies were employed: mutual information, F-classif, and Chi-Square. Based on these feature selection models, four scenarios were developed in which five, ten, twenty, and thirty features were selected to design a more accurate classification paradigm. The results confirm that random forest, gradient boosting, decision tree, and K-nearest neighbors provided the most accurate results, with an accuracy of 93.8%. Among the feature selection methods, mutual information and F-classif showed the best results, while Chi-Square produced the worst. The suggested artificial intelligence models can therefore be applied as a robust approach for classifying colorectal cancer response to radiochemotherapy in medical studies.
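The feature-selection and classification workflow described here maps closely onto standard scikit-learn components. The sketch below uses synthetic data in place of the study's gene-expression cohort and is only an approximation of the reported protocol.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2, f_classif, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for a gene-expression matrix (chi2 needs non-negative values)
X, y = make_classification(n_samples=200, n_features=500, n_informative=20, random_state=0)
X = MinMaxScaler().fit_transform(X)

for name, score_fn in [("mutual_info", mutual_info_classif),
                       ("F-classif", f_classif),
                       ("chi2", chi2)]:
    for k in (5, 10, 20, 30):                       # the four feature-count scenarios
        X_sel = SelectKBest(score_fn, k=k).fit_transform(X, y)
        for clf in (RandomForestClassifier(random_state=0),
                    GradientBoostingClassifier(random_state=0)):
            acc = cross_val_score(clf, X_sel, y, cv=5).mean()
            print(f"{name:12s} k={k:2d} {clf.__class__.__name__:28s} acc={acc:.3f}")
```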
Affiliation(s)
- Fatemeh Bahrambanan: Research Center for Molecular Medicine, Hamadan University of Medical Sciences, Hamadan, Iran
- Meysam Alizamir: Institute of Research and Development, Duy Tan University, Da Nang, Vietnam; School of Engineering & Technology, Duy Tan University, Da Nang, Vietnam
- Kayhan Moradveisi: Civil Engineering Department, University of Kurdistan, Sanandaj, Iran
- Salim Heddam: Faculty of Science, Agronomy Department, Hydraulics Division, University 20 Août 1955, Route El Hadaik BP 26, 21000, Skikda, Algeria
- Sungwon Kim: Department of Railroad Construction and Safety Engineering, Dongyang University, Yeongju, 36040, Republic of Korea
- Seunghyun Kim: Department of Biology, University of California San Diego, San Diego, CA, 92093, USA
- Meysam Soleimani: Department of Pharmaceutical Biotechnology, School of Pharmacy, Hamadan University of Medical Sciences, Hamadan, Iran
- Saeid Afshar: Department of Molecular Medicine and Genetics, Medical School, Hamadan University of Medical Sciences, Hamadan, Iran
- Amir Taherkhani: Research Center for Molecular Medicine, Hamadan University of Medical Sciences, Hamadan, Iran
3. Mahbod A, Dorffner G, Ellinger I, Woitek R, Hatamikia S. Improving generalization capability of deep learning-based nuclei instance segmentation by non-deterministic train time and deterministic test time stain normalization. Comput Struct Biotechnol J 2024; 23:669-678. [PMID: 38292472] [PMCID: PMC10825317] [DOI: 10.1016/j.csbj.2023.12.042]
Abstract
With the advent of digital pathology and microscopic systems that can scan and save whole slide histological images automatically, there is a growing trend to use computerized methods to analyze acquired images. Among different histopathological image analysis tasks, nuclei instance segmentation plays a fundamental role in a wide range of clinical and research applications. While many semi- and fully-automatic computerized methods have been proposed for nuclei instance segmentation, deep learning (DL)-based approaches have been shown to deliver the best performances. However, the performance of such approaches usually degrades when tested on unseen datasets. In this work, we propose a novel method to improve the generalization capability of a DL-based automatic segmentation approach. Besides utilizing one of the state-of-the-art DL-based models as a baseline, our method incorporates non-deterministic train time and deterministic test time stain normalization, and ensembling to boost the segmentation performance. We trained the model with one single training set and evaluated its segmentation performance on seven test datasets. Our results show that the proposed method provides up to 4.9%, 5.4%, and 5.9% better average performance in segmenting nuclei based on Dice score, aggregated Jaccard index, and panoptic quality score, respectively, compared to the baseline segmentation model.
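A very simple stand-in for stain normalization is Reinhard-style color transfer in LAB space, sketched below with scikit-image; the paper's actual train-time/test-time normalization strategy is more elaborate, and the file paths here are hypothetical.

```python
import numpy as np
from skimage import io, color

def reinhard_normalize(source_rgb, target_rgb):
    """Match the color statistics of `source_rgb` to `target_rgb` in LAB space
    (Reinhard-style stain normalization; a simplified stand-in for the
    train/test-time normalization described in the paper)."""
    src = color.rgb2lab(source_rgb)
    tgt = color.rgb2lab(target_rgb)
    for c in range(3):
        src_mean, src_std = src[..., c].mean(), src[..., c].std()
        tgt_mean, tgt_std = tgt[..., c].mean(), tgt[..., c].std()
        # Shift and rescale each LAB channel to the target's mean and std
        src[..., c] = (src[..., c] - src_mean) / (src_std + 1e-8) * tgt_std + tgt_mean
    return np.clip(color.lab2rgb(src), 0, 1)

if __name__ == "__main__":
    moving = io.imread("tile_to_normalise.png")      # hypothetical paths
    reference = io.imread("reference_tile.png")
    normalised = reinhard_normalize(moving, reference)
```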
Affiliation(s)
- Amirreza Mahbod: Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
- Georg Dorffner: Institute of Artificial Intelligence, Medical University of Vienna, Vienna, Austria
- Isabella Ellinger: Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria
- Ramona Woitek: Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
- Sepideh Hatamikia: Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria; Austrian Center for Medical Innovation and Technology, Wiener Neustadt, Austria
4. Islam T, Hoque ME, Ullah M, Islam T, Nishu NA, Islam R. CNN-based deep learning approach for classification of invasive ductal and metastasis types of breast carcinoma. Cancer Med 2024; 13:e70069. [PMID: 39215495] [PMCID: PMC11364780] [DOI: 10.1002/cam4.70069]
Abstract
OBJECTIVE Breast cancer is one of the leading causes of cancer among women worldwide. It can be classified as invasive ductal carcinoma (IDC) or metastatic cancer. Early detection of breast cancer is challenging due to the lack of early warning signs. Generally, a mammogram is recommended by specialists for screening. Existing approaches are not accurate enough for real-time diagnostic applications, so better and smarter cancer diagnostic approaches are required. This study aims to develop a customized machine-learning framework that gives more accurate predictions for IDC and metastasis cancer classification. METHODS This work proposes a convolutional neural network (CNN) model for classifying IDC and metastatic breast cancer. The study utilized a large-scale dataset of microscopic histopathological images to automatically learn features in a hierarchical manner. RESULTS Using machine learning techniques significantly (15%-25%) boosts the effectiveness of determining cancer vulnerability, malignancy, and mortality. The results demonstrate excellent performance, with an average accuracy of 95% in classifying metastatic cells against benign ones and 89% in detecting IDC. CONCLUSIONS The results suggest that the proposed model improves classification accuracy. Therefore, it could be applied effectively in classifying IDC and metastatic cancer in comparison to other state-of-the-art models.
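The paper's exact architecture is not given in the abstract; the following is a generic small PyTorch CNN for histopathology patches, shown only to illustrate the kind of classifier being described.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Small CNN for 50x50 histopathology patches (e.g., benign vs. IDC/metastatic)."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = PatchCNN()
logits = model(torch.randn(8, 3, 50, 50))   # dummy batch of patches
print(logits.shape)                          # torch.Size([8, 2])
```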
Affiliation(s)
- Tobibul Islam: Department of Biomedical Engineering, Military Institute of Science and Technology, Dhaka, Bangladesh
- Md Enamul Hoque: Department of Biomedical Engineering, Military Institute of Science and Technology, Dhaka, Bangladesh
- Mohammad Ullah: Center for Advance Intelligent Materials, Universiti Malaysia Pahang, Kuantan, Malaysia
- Toufiqul Islam: Department of Surgery, M Abdur Rahim Medical College, Dinajpur, Bangladesh
- Rabiul Islam: Department of Electrical and Computer Engineering, Texas A&M University, College Station, Texas, USA
5. Bai D, Zhou N, Liu X, Liang Y, Lu X, Wang J, Liang L, Wang Z. The diagnostic value of multimodal imaging based on MR combined with ultrasound in benign and malignant breast diseases. Clin Exp Med 2024; 24:110. [PMID: 38780895] [PMCID: PMC11116236] [DOI: 10.1007/s10238-024-01377-1]
Abstract
We aimed to construct and validate a radiomics model based on multimodal MRI combined with ultrasound for the evaluation of benign and malignant breast diseases. The preoperative enhanced MRI and ultrasound images of 131 patients with breast diseases confirmed by pathology at Aerospace Center Hospital from January 2021 to August 2023 were retrospectively analyzed, including 73 benign and 58 malignant diseases. Ultrasound and 3.0 T multiparameter MRI scans were performed in all patients. All the data were then divided into a training set and a validation set in a 7:3 ratio. Regions of interest were drawn layer by layer on the ultrasound and MR enhanced sequences to extract radiomics features. The optimal radiomics features were selected by the best feature-screening method. A logistic regression classifier was used to establish models based on the selected features: an ultrasound model, an MRI model, and an ultrasound combined with MRI model. Model performance was evaluated by the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy. The ANOVA-based F-test was used to screen out the 20 best ultrasound features, the 11 best MR features, and the 14 best features for the combined model. Among them, texture features accounted for the largest proportion (79%). The ultrasound combined with MR image fusion model based on the logistic regression classifier had the best diagnostic performance: the AUCs of the training and validation groups were 0.92 and 0.91, the sensitivities were 0.80 and 0.67, the specificities were 0.90 and 0.94, and the accuracies were 0.84 and 0.79, respectively. It outperformed the ultrasound-only model (validation AUC 0.82) and the MR-only model (validation AUC 0.85). Compared with traditional ultrasound or magnetic resonance diagnosis of breast diseases, the multimodal radiomics model combining MRI and ultrasound can more accurately predict benign and malignant breast diseases, thus providing a better basis for clinical diagnosis and treatment.
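The radiomics workflow (feature-level fusion of ultrasound and MRI features, ANOVA F-test screening, logistic regression, AUC evaluation) can be sketched with scikit-learn as below; the data here are synthetic stand-ins, not the study's cohort.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 131                                   # cohort size mentioned in the abstract
X_us  = rng.normal(size=(n, 100))         # stand-in ultrasound radiomics features
X_mri = rng.normal(size=(n, 100))         # stand-in MRI radiomics features
y = rng.integers(0, 2, size=n)            # 0 = benign, 1 = malignant (synthetic)

X = np.hstack([X_us, X_mri])              # feature-level fusion of both modalities
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

selector = SelectKBest(f_classif, k=14).fit(X_tr, y_tr)   # ANOVA F-test screening
clf = LogisticRegression(max_iter=1000).fit(selector.transform(X_tr), y_tr)
auc = roc_auc_score(y_va, clf.predict_proba(selector.transform(X_va))[:, 1])
print(f"validation AUC: {auc:.2f}")
```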
Affiliation(s)
- Dong Bai: Department of Radiology, Aerospace Center Hospital, Beijing, China
- Nan Zhou: Department of Ultrasound, Aerospace Center Hospital, Beijing, China
- Xiaofei Liu: Department of Radiology, Liangxiang Hospital, Fangshan District, Beijing, China
- Yuanzi Liang: Department of Radiology, Aerospace Center Hospital, Beijing, China
- Xiaojun Lu: Department of Radiology, Aerospace Center Hospital, Beijing, China
- Jiajun Wang: Department of Ultrasound, Aerospace Center Hospital, Beijing, China
- Lei Liang: Department of Ultrasound, Aerospace Center Hospital, Beijing, China
- Zhiqun Wang: Department of Radiology, Aerospace Center Hospital, Beijing, China
6. Rai HM, Yoo J, Atif Moqurrab S, Dashkevych S. Advancements in traditional machine learning techniques for detection and diagnosis of fatal cancer types: Comprehensive review of biomedical imaging datasets. Measurement 2024; 225:114059. [DOI: 10.1016/j.measurement.2023.114059]
7. Zaki M, Elallam O, Jami O, EL Ghoubali D, Jhilal F, Alidrissi N, Ghazal H, Habib N, Abbad F, Benmoussa A, Bakkali F. Advancing Tumor Cell Classification and Segmentation in Ki-67 Images: A Systematic Review of Deep Learning Approaches. Lecture Notes in Networks and Systems 2024:94-112. [DOI: 10.1007/978-3-031-52385-4_9]
8. Khazaee Fadafen M, Rezaee K. Ensemble-based multi-tissue classification approach of colorectal cancer histology images using a novel hybrid deep learning framework. Sci Rep 2023; 13:8823. [PMID: 37258631] [DOI: 10.1038/s41598-023-35431-x]
Abstract
Colorectal cancer (CRC) is the second leading cause of cancer death in the world, so digital pathology is essential for assessing prognosis. Due to the increasing resolution and quantity of whole slide images (WSIs), as well as the lack of annotated information, previous methodologies cannot be generalized as effective decision-making systems. Since deep learning (DL) methods can handle large-scale applications, they provide a viable alternative for histopathology image (HI) analysis. DL architectures alone, however, may not be sufficient to classify CRC tissues based on anatomical histopathology data. A dilated ResNet (dResNet) structure and attention module are used to generate deep feature maps in order to classify multiple tissues in HIs. In addition, neighborhood component analysis (NCA) overcomes the constraint of computational complexity. After feature selection, the data are fed into a deep support vector machine (SVM) based on an ensemble learning algorithm called DeepSVM. The CRC-5000 and NCT-CRC-HE-100K datasets were analyzed to validate and test the hybrid procedure. We demonstrate that the hybrid model achieves 98.75% and 99.76% accuracy on the two CRC datasets. The results showed that unseen WSIs could be successfully classified using only pathologists' labels. Furthermore, the hybrid deep learning method outperforms state-of-the-art approaches in terms of computational efficiency and time. Using the proposed mechanism for tissue analysis, it will be possible to correctly predict CRC based on accurate pathology image classification.
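A rough approximation of the NCA-plus-ensemble-SVM stage is shown below with scikit-learn, using a toy dataset in place of dResNet feature maps; the dilated-ResNet backbone and attention module are not reproduced.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Stand-in for deep feature maps extracted by a dResNet backbone
X, y = load_digits(return_X_y=True)

pipe = make_pipeline(
    # NCA reduces dimensionality before classification
    NeighborhoodComponentsAnalysis(n_components=32, random_state=0),
    # "DeepSVM"-style ensemble: a bag of RBF SVMs
    BaggingClassifier(SVC(kernel="rbf"), n_estimators=10, random_state=0),
)
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```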
Affiliation(s)
- Masoud Khazaee Fadafen: Department of Electrical Engineering, Technical and Vocational University (TVU), Tehran, Iran
- Khosro Rezaee: Department of Biomedical Engineering, Meybod University, Meybod, Iran
9. Abhishek A, Jha RK, Sinha R, Jha K. Automated detection and classification of leukemia on a subject-independent test dataset using deep transfer learning supported by Grad-CAM visualization. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104722]
10. Steyaert S, Pizurica M, Nagaraj D, Khandelwal P, Hernandez-Boussard T, Gentles AJ, Gevaert O. Multimodal data fusion for cancer biomarker discovery with deep learning. Nat Mach Intell 2023; 5:351-362. [PMID: 37693852] [PMCID: PMC10484010] [DOI: 10.1038/s42256-023-00633-5]
Abstract
Technological advances now make it possible to study a patient from multiple angles with high-dimensional, high-throughput multi-scale biomedical data. In oncology, massive amounts of data are being generated ranging from molecular, histopathology, radiology to clinical records. The introduction of deep learning has significantly advanced the analysis of biomedical data. However, most approaches focus on single data modalities leading to slow progress in methods to integrate complementary data types. Development of effective multimodal fusion approaches is becoming increasingly important as a single modality might not be consistent and sufficient to capture the heterogeneity of complex diseases to tailor medical care and improve personalised medicine. Many initiatives now focus on integrating these disparate modalities to unravel the biological processes involved in multifactorial diseases such as cancer. However, many obstacles remain, including lack of usable data as well as methods for clinical validation and interpretation. Here, we cover these current challenges and reflect on opportunities through deep learning to tackle data sparsity and scarcity, multimodal interpretability, and standardisation of datasets.
Affiliation(s)
- Sandra Steyaert: Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University
- Marija Pizurica: Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University
- Tina Hernandez-Boussard: Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University; Department of Biomedical Data Science, Stanford University
- Andrew J Gentles: Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University; Department of Biomedical Data Science, Stanford University
- Olivier Gevaert: Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University; Department of Biomedical Data Science, Stanford University
11. The Systematic Review of Artificial Intelligence Applications in Breast Cancer Diagnosis. Diagnostics (Basel) 2022; 13:45. [PMID: 36611337] [PMCID: PMC9818874] [DOI: 10.3390/diagnostics13010045]
Abstract
Several studies have demonstrated the value of artificial intelligence (AI) applications in breast cancer diagnosis. The systematic review of AI applications in breast cancer diagnosis includes several studies that compare breast cancer diagnosis and AI. However, they lack systematization, and each study appears to be conducted uniquely. The purpose and contributions of this study are to offer elaborative knowledge on the applications of AI in the diagnosis of breast cancer through citation analysis in order to categorize the main area of specialization that attracts the attention of the academic community, as well as thematic issue analysis to identify the species being researched in each category. In this study, a total number of 17,900 studies addressing breast cancer and AI published between 2012 and 2022 were obtained from these databases: IEEE, Embase: Excerpta Medica Database Guide-Ovid, PubMed, Springer, Web of Science, and Google Scholar. We applied inclusion and exclusion criteria to the search; 36 studies were identified. The vast majority of AI applications used classification models for the prediction of breast cancer. Howbeit, accuracy (99%) has the highest number of performance metrics, followed by specificity (98%) and area under the curve (0.95). Additionally, the Convolutional Neural Network (CNN) was the best model of choice in several studies. This study shows that the quantity and caliber of studies that use AI applications in breast cancer diagnosis will continue to rise annually. As a result, AI-based applications are viewed as a supplement to doctors' clinical reasoning, with the ultimate goal of providing quality healthcare that is both affordable and accessible to everyone worldwide.
12. Zhao Y, Zhang J, Hu D, Qu H, Tian Y, Cui X. Application of Deep Learning in Histopathology Images of Breast Cancer: A Review. Micromachines (Basel) 2022; 13:2197. [PMID: 36557496] [PMCID: PMC9781697] [DOI: 10.3390/mi13122197]
Abstract
With the development of artificial intelligence technology and computer hardware functions, deep learning algorithms have become a powerful auxiliary tool for medical image analysis. This study was an attempt to use statistical methods to analyze studies related to the detection, segmentation, and classification of breast cancer in pathological images. After an analysis of 107 articles on the application of deep learning to pathological images of breast cancer, this study is divided into three directions based on the types of results they report: detection, segmentation, and classification. We introduced and analyzed models that performed well in these three directions and summarized the related work from recent years. Based on the results obtained, the significant ability of deep learning in the application of breast cancer pathological images can be recognized. Furthermore, in the classification and detection of pathological images of breast cancer, the accuracy of deep learning algorithms has surpassed that of pathologists in certain circumstances. Our study provides a comprehensive review of the development of breast cancer pathological imaging-related research and provides reliable recommendations for the structure of deep learning network models in different application scenarios.
Affiliation(s)
- Yue Zhao: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China; Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
- Jie Zhang: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Dayu Hu: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Hui Qu: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Ye Tian: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Xiaoyu Cui: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China; Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
13. Wisaeng K. Breast Cancer Detection in Mammogram Images Using K-Means++ Clustering Based on Cuckoo Search Optimization. Diagnostics (Basel) 2022; 12:3088. [PMID: 36553095] [PMCID: PMC9777540] [DOI: 10.3390/diagnostics12123088]
Abstract
Traditional breast cancer detection algorithms require manual extraction of features from mammogram images and professional medical knowledge. However, the quality of mammogram images hampers the extraction of high-quality features, which can result in very long processing times. Therefore, this paper proposes a new K-means++ clustering method based on Cuckoo Search Optimization (KM++CSO) for breast cancer detection. A pre-processing step is used to help the proposed KM++CSO method segment more efficiently. Furthermore, interpretability is enhanced using mathematical morphology and Otsu's threshold. To this end, we tested the effectiveness of the KM++CSO method on the Mini-Mammographic Image Analysis Society (Mini-MIAS), Digital Database for Screening Mammography (DDSM), and Breast Cancer Digital Repository (BCDR) datasets through cross-validation. We maximize the accuracy and the Jaccard index score, a measure of the similarity between detected cancer regions and their corresponding reference regions. The experimental results showed that the detection method obtained an accuracy of 96.42% (Mini-MIAS), 95.49% (DDSM), and 96.92% (BCDR). On average, the KM++CSO method obtained 96.27% accuracy across the three publicly available datasets. In addition, the detection results yielded a Jaccard index score of 91.05%.
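A toy version of the K-means++/Otsu/morphology segmentation step is sketched below with scikit-learn and scikit-image; the cuckoo search optimization of the clustering is not reproduced, and the input path is hypothetical.

```python
import numpy as np
from skimage import io, filters, morphology
from sklearn.cluster import KMeans

def segment_mass(image_path, n_clusters=3):
    """Cluster pixel intensities with k-means++ initialisation, keep the
    brightest cluster as the lesion candidate, then clean it up with Otsu
    thresholding and mathematical morphology."""
    gray = io.imread(image_path, as_gray=True)
    km = KMeans(n_clusters=n_clusters, init="k-means++", n_init=10, random_state=0)
    labels = km.fit_predict(gray.reshape(-1, 1)).reshape(gray.shape)

    brightest = np.argmax(km.cluster_centers_.ravel())    # candidate lesion cluster
    mask = labels == brightest
    mask &= gray > filters.threshold_otsu(gray)            # Otsu refinement
    mask = morphology.binary_opening(mask, morphology.disk(3))
    return mask

# mask = segment_mass("mammogram.png")   # hypothetical file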
Affiliation(s)
- Kittipol Wisaeng: Technology and Business Information System Unit, Mahasarakham Business School, Mahasarakham University, Mahasarakham 44150, Thailand
14. Park Y, Kim M, Ashraf M, Ko YS, Yi MY. MixPatch: A New Method for Training Histopathology Image Classifiers. Diagnostics (Basel) 2022; 12:1493. [PMID: 35741303] [PMCID: PMC9221905] [DOI: 10.3390/diagnostics12061493]
Abstract
CNN-based image processing has been actively applied to histopathological analysis to detect and classify cancerous tumors automatically. However, CNN-based classifiers generally predict a label with overconfidence, which becomes a serious problem in the medical domain. The objective of this study is to propose a new training method, called MixPatch, designed to improve a CNN-based classifier by specifically addressing the prediction uncertainty problem, and to examine its effectiveness in improving diagnosis performance in the context of histopathological image analysis. MixPatch generates and uses a new sub-training dataset, which consists of mixed-patches and their predefined ground-truth labels, for every single mini-batch. Mixed-patches are generated using small clean patches confirmed by pathologists, while their ground-truth labels are defined using a proportion-based soft labeling method. Our results obtained using a large histopathological image dataset show that the proposed method performs better and alleviates overconfidence more effectively than any other method examined in the study. More specifically, our model achieved 97.06% accuracy, an increase of 1.6% to 12.18%, and an expected calibration error of 0.76%, a decrease of 0.6% to 6.3%, compared with the other models. By specifically considering the mixed-region variation characteristics of histopathology images, MixPatch augments existing mixed-image methods for medical image analysis in which prediction uncertainty is a crucial issue. The proposed method provides a new way to systematically alleviate the overconfidence problem of CNN-based classifiers and improve their prediction accuracy, contributing toward more calibrated and reliable histopathology image analysis.
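The core MixPatch idea, tiling clean patches into a mixed-patch and assigning a proportion-based soft label, can be illustrated in a few lines of NumPy; this is a simplified sketch rather than the authors' training pipeline.

```python
import numpy as np

def make_mixed_patch(patches, labels, grid=2, n_classes=2, rng=None):
    """Tile grid x grid clean patches into one mixed-patch and give it a
    proportion-based soft label (fraction of sub-patches per class)."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(patches), grid * grid, replace=True)

    # Assemble the mixed-patch row by row
    rows = [np.concatenate(patches[idx[r * grid:(r + 1) * grid]], axis=1)
            for r in range(grid)]
    mixed = np.concatenate(rows, axis=0)

    # Soft label = class proportions among the selected sub-patches
    soft_label = np.bincount(labels[idx], minlength=n_classes) / (grid * grid)
    return mixed, soft_label

# Example with random 64x64 RGB "patches"
patches = np.random.rand(100, 64, 64, 3)
labels = np.random.randint(0, 2, size=100)
mixed, soft = make_mixed_patch(patches, labels)
print(mixed.shape, soft)    # (128, 128, 3) and e.g. [0.25 0.75]
```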
Affiliation(s)
- Youngjin Park: Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Mujin Kim: Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Murtaza Ashraf: Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Young Sin Ko: Pathology Center, Seegene Medical Foundation, Seoul 04805, Korea
- Mun Yong Yi: Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
15. Meirelles AL, Kurc T, Saltz J, Teodoro G. Effective active learning in digital pathology: A case study in tumor infiltrating lymphocytes. Comput Methods Programs Biomed 2022; 220:106828. [PMID: 35500506] [DOI: 10.1016/j.cmpb.2022.106828]
Abstract
BACKGROUND AND OBJECTIVE Deep learning methods have demonstrated remarkable performance in pathology image analysis, but they require a large amount of annotated training data from expert pathologists. The aim of this study is to minimize the data annotation need in these analyses. METHODS Active learning (AL) is an iterative approach to training deep learning models. It was used in our context with a Tumor Infiltrating Lymphocytes (TIL) classification task to minimize annotation. State-of-the-art AL methods were evaluated with the TIL application and we have proposed and evaluated a more efficient and effective AL acquisition method. The proposed method uses data grouping based on imaging features and model prediction uncertainty to select meaningful training samples (image patches). RESULTS An experimental evaluation with a collection of cancer tissue images shows that: (i) Our approach reduces the number of patches required to attain a given AUC as compared to other approaches, and (ii) our optimization (subpooling) leads to AL execution time improvement of about 2.12×. CONCLUSIONS This strategy enabled TIL based deep learning analyses using smaller annotation demand. We expect this approach may be used to build other analyses in digital pathology with fewer training samples.
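A simplified version of the acquisition idea (group the unlabeled pool by image features, then pick the most uncertain patch per group) might look like the sketch below; it is illustrative only and omits the paper's subpooling optimization.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_batch(features, probs, batch_size=8, n_clusters=8):
    """Pick one high-uncertainty patch per feature cluster, combining
    diversity (clustering) with model prediction uncertainty."""
    uncertainty = 1.0 - np.max(probs, axis=1)          # least-confidence score
    clusters = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=0).fit_predict(features)

    chosen = []
    for c in range(n_clusters):
        members = np.where(clusters == c)[0]
        if len(members):
            chosen.append(members[np.argmax(uncertainty[members])])
    return np.array(chosen[:batch_size])

# Toy pool of 500 unlabeled patches: 32-D image features + 2-class model probabilities
feats = np.random.rand(500, 32)
p = np.random.dirichlet([1, 1], size=500)
print(select_batch(feats, p))
```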
Affiliation(s)
- André LS Meirelles: Department of Computer Science, University of Brasília, Brasília, 70910-900, Brazil
- Tahsin Kurc: Biomedical Informatics Department, Stony Brook University, Stony Brook, 11794-8322, USA
- Joel Saltz: Biomedical Informatics Department, Stony Brook University, Stony Brook, 11794-8322, USA
- George Teodoro: Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, 31270-901, Brazil
16. Histopathological image recognition of breast cancer based on three-channel reconstructed color slice feature fusion. Biochem Biophys Res Commun 2022; 619:159-165. [DOI: 10.1016/j.bbrc.2022.06.004]
17. Meirelles ALS, Kurc T, Kong J, Ferreira R, Saltz JH, Teodoro G. Building Efficient CNN Architectures for Histopathology Images Analysis: A Case-Study in Tumor-Infiltrating Lymphocytes Classification. Front Med (Lausanne) 2022; 9:894430. [PMID: 35712087] [PMCID: PMC9197439] [DOI: 10.3389/fmed.2022.894430]
Abstract
Background Deep learning methods have demonstrated remarkable performance in pathology image analysis, but they are computationally very demanding. The aim of our study is to reduce their computational cost to enable their use with large tissue image datasets. Methods We propose a method called Network Auto-Reduction (NAR) that simplifies a Convolutional Neural Network (CNN) to minimize the computational cost of making a prediction. NAR performs a compound scaling in which the width, depth, and resolution dimensions of the network are reduced together to maintain a balance among them in the resulting simplified network. We compare our method with a state-of-the-art solution called ResRep. The evaluation is carried out with popular CNN architectures and a real-world application that identifies distributions of tumor-infiltrating lymphocytes in tissue images. Results The experimental results show that both ResRep and NAR are able to generate simplified, more efficient versions of ResNet50 V2. The simplified versions by ResRep and NAR require 1.32× and 3.26× fewer floating-point operations (FLOPs), respectively, than the original network without a loss in classification power as measured by the Area under the Curve (AUC) metric. When applied to a deeper and more computationally expensive network, Inception V4, NAR is able to generate a version that requires 4× fewer FLOPs than the original version with the same AUC performance. Conclusions NAR achieves substantial reductions in the execution cost of two popular CNN architectures while resulting in small or no loss in model accuracy. Such cost savings can significantly improve the use of deep learning methods in digital pathology. They can enable studies with larger tissue image datasets and facilitate the use of less expensive and more accessible graphics processing units (GPUs), thus reducing the computing costs of a study.
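A back-of-the-envelope version of compound reduction, assuming FLOPs scale roughly as depth × width² × resolution², could look like the sketch below; the actual NAR procedure is more involved.

```python
def reduce_network(width, depth, resolution, target_flops_ratio):
    """Shrink width, depth, and resolution together by one compound factor so the
    reduced network keeps its dimensions balanced. Under the rough assumption
    FLOPs ~ depth * width**2 * resolution**2, scaling all three by phi changes
    FLOPs by phi**5, so phi = target_ratio**(1/5)."""
    phi = target_flops_ratio ** (1.0 / 5.0)
    return (max(1, round(width * phi)),
            max(1, round(depth * phi)),
            max(8, round(resolution * phi)))

# e.g. keep roughly 1/3.26 of the FLOPs of a 64-wide, 50-deep, 224-px network
print(reduce_network(64, 50, 224, 1 / 3.26))
```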
Affiliation(s)
- Tahsin Kurc: Biomedical Informatics Department, Stony Brook University, Stony Brook, NY, United States
- Jun Kong: Department of Mathematics and Statistics and Computer Science, Georgia State University, Atlanta, GA, United States
- Renato Ferreira: Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
- Joel H. Saltz: Biomedical Informatics Department, Stony Brook University, Stony Brook, NY, United States
- George Teodoro: Department of Computer Science, Universidade de Brasília, Brasília, Brazil; Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
18. Ukwuoma CC, Hossain MA, Jackson JK, Nneji GU, Monday HN, Qin Z. Multi-Classification of Breast Cancer Lesions in Histopathological Images Using DEEP_Pachi: Multiple Self-Attention Head. Diagnostics (Basel) 2022; 12:1152. [PMID: 35626307] [PMCID: PMC9139754] [DOI: 10.3390/diagnostics12051152]
Abstract
INTRODUCTION AND BACKGROUND Despite fast developments in the medical field, histological diagnosis is still regarded as the benchmark in cancer diagnosis. However, the input image feature extraction used to determine the severity of cancer at various magnifications is harrowing, since manual procedures are biased, time-consuming, labor-intensive, and error-prone. Current state-of-the-art deep learning approaches for breast histopathology image classification take features from entire images (generic features). Thus, they are likely to overlook essential image features in favor of unnecessary ones, resulting in incorrect diagnoses of breast histopathology images and leading to mortality. METHODS This discrepancy prompted us to develop DEEP_Pachi for classifying breast histopathology images at various magnifications. The suggested DEEP_Pachi collects global and regional features that are essential for effective breast histopathology image classification. The proposed model backbone is an ensemble of the DenseNet201 and VGG16 architectures. The ensemble model extracts global features (generic image information), whereas DEEP_Pachi extracts spatial information (regions of interest). The proposed model was evaluated on publicly available datasets: the BreakHis and ICIAR 2018 Challenge datasets. RESULTS A detailed evaluation of the proposed model's accuracy, sensitivity, precision, specificity, and F1-score revealed the usefulness of the backbone model and the DEEP_Pachi model for image classification. The suggested technique outperformed state-of-the-art classifiers, achieving an accuracy of 1.0 for the benign class and 0.99 for the malignant class at all magnifications of the BreakHis dataset and an accuracy of 1.0 on the ICIAR 2018 Challenge dataset. CONCLUSIONS The acquired findings were significantly resilient and proved helpful for the suggested system to assist experts at large medical institutions, resulting in early breast cancer diagnosis and a reduction in the death rate.
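The backbone ensemble (global features from DenseNet201 and VGG16 concatenated before a classification head) can be sketched with torchvision (assuming version 0.13 or later for the `weights` argument) as below; pretrained weights and the multiple self-attention heads of DEEP_Pachi are omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

class EnsembleBackbone(nn.Module):
    """Concatenate pooled global features from DenseNet201 and VGG16 before a
    small classification head (the paper adds self-attention on top of this)."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.densenet = models.densenet201(weights=None).features   # 1920 channels
        self.vgg = models.vgg16(weights=None).features               # 512 channels
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(1920 + 512, n_classes)

    def forward(self, x):
        f1 = self.pool(self.densenet(x)).flatten(1)
        f2 = self.pool(self.vgg(x)).flatten(1)
        return self.head(torch.cat([f1, f2], dim=1))

model = EnsembleBackbone()
print(model(torch.randn(2, 3, 224, 224)).shape)   # torch.Size([2, 2])
```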
Affiliation(s)
- Chiagoziem C. Ukwuoma: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Md Altab Hossain: School of Management and Economics, University of Electronic Science and Technology of China, Chengdu 610054, China
- Jehoiada K. Jackson: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Grace U. Nneji: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Happy N. Monday: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Zhiguang Qin: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
19. Huang Z, Yang L, Chen J, Li S, Huang J, Chen Y, Liu J, Wang H, Yu H. CCDC134 as a Prognostic-Related Biomarker in Breast Cancer Correlating With Immune Infiltrates. Front Oncol 2022; 12:858487. [PMID: 35311121] [PMCID: PMC8927640] [DOI: 10.3389/fonc.2022.858487]
Abstract
Background The expression of Coiled-Coil Domain Containing 134 (CCDC134) is up-regulated across different cancer types. However, its prognostic value and correlation with immune infiltration in breast cancer are unclear. Therefore, we evaluated the prognostic role of CCDC134 in breast cancer and its correlation with immune invasion. Methods We downloaded the transcription profiles of CCDC134 in breast cancer and normal tissues from The Cancer Genome Atlas (TCGA). CCDC134 protein expression was assessed using the Clinical Proteomic Tumor Analysis Consortium (CPTAC) and the Human Protein Atlas. Gene set enrichment analysis (GSEA) was also used for pathway analysis. A receiver operating characteristic (ROC) curve was used to differentiate breast cancer from adjacent normal tissues. The Kaplan-Meier method was used to evaluate the effect of CCDC134 on survival. The protein-protein interaction (PPI) network was built from STRING, and functional enrichment analysis was performed using the clusterProfiler package. The Tumor Immune Estimation Resource (TIMER) and the Tumor-Immune System Interactions Database (TISIDB) were used to determine the relationship between CCDC134 expression level and immune infiltration. The CTD database was used to predict drugs that inhibit CCDC134, and the PubChem database was used to determine the molecular structures of the identified drugs. Results CCDC134 expression in breast cancer tissues was significantly higher than in adjacent normal tissues. ROC curve analysis showed that the AUC of CCDC134 was 0.663. Kaplan-Meier survival analysis showed that patients with high CCDC134 expression had a worse prognosis (57.27 months vs. 36.96 months, P = 2.0E-6). Correlation analysis showed that CCDC134 mRNA expression was associated with tumor purity and immune invasion. In addition, CTD database analysis identified abrine, benzo(a)pyrene, bisphenol A, soman, sunitinib, tetrachloroethylene, and valproic acid as seven drugs that may be effective at inhibiting CCDC134. Conclusion In breast cancer, up-regulated CCDC134 is significantly associated with lower survival and immune infiltration. Our study suggests that CCDC134 can serve as a biomarker of poor prognosis and a potential immunotherapy target in breast cancer. Seven drugs with significant potential to inhibit CCDC134 were identified.
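The survival comparison reported here is a standard Kaplan-Meier analysis; a minimal sketch with the lifelines package and synthetic data (not the TCGA cohort) is shown below.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Synthetic survival times (months) and event indicators for two expression groups
t_high, e_high = rng.exponential(37, 100), rng.integers(0, 2, 100)
t_low,  e_low  = rng.exponential(57, 100), rng.integers(0, 2, 100)

kmf = KaplanMeierFitter()
kmf.fit(t_high, e_high, label="CCDC134 high")
print("median survival (high):", kmf.median_survival_time_)

result = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print("log-rank p-value:", result.p_value)
```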
Affiliation(s)
- Zhijian Huang: Department of Breast Surgical Oncology, Fujian Medical University Cancer Hospital, Fujian Cancer Hospital, Fuzhou, China; The Graduate School of Fujian Medical University, Fuzhou, China
- Linhui Yang: Department of Breast Surgical Oncology, Fujian Medical University Cancer Hospital, Fujian Cancer Hospital, Fuzhou, China
- Jian Chen: Department of Breast Surgical Oncology, Fujian Medical University Cancer Hospital, Fujian Cancer Hospital, Fuzhou, China
- Shixiong Li: Department of Breast Surgical Oncology, Fujian Medical University Cancer Hospital, Fujian Cancer Hospital, Fuzhou, China
- Jing Huang: Department of Pharmacy, Fujian Medical University Cancer Hospital, Fujian Cancer Hospital, Fuzhou, China
- Yijie Chen: Department of Ultrasound, Fujian Medical University Cancer Hospital, Fujian Cancer Hospital, Fuzhou, China
- Jingbo Liu: Pathology Department, Daqing Longnan Hospital, The Fifth Affiliated Hospital of Qiqihar Medical College, Daqing, China
- Hongyan Wang: Department of Pathology, Daqing Oilfield General Hospital, Daqing, China
- Hui Yu: Department of Pharmacy, Fujian Medical University Cancer Hospital, Fujian Cancer Hospital, Fuzhou, China
20. Shah SM, Khan RA, Arif S, Sajid U. Artificial intelligence for breast cancer analysis: Trends & directions. Comput Biol Med 2022; 142:105221. [PMID: 35016100] [DOI: 10.1016/j.compbiomed.2022.105221]
Abstract
Breast cancer is one of the leading causes of death among women. Early detection of breast cancer can significantly improve the lives of millions of women across the globe. Given the importance of finding a solution/framework for early detection and diagnosis, many AI researchers have recently focused on automating this task. Other reasons for the surge in research activity in this direction are the advent of robust AI algorithms (deep learning), the availability of hardware that can run/train those robust and complex AI algorithms, and access to datasets large enough to train them. The imaging modalities that have been exploited by researchers to automate the task of breast cancer detection are mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. This article analyzes these imaging modalities and presents their strengths and limitations. It also lists resources from which their datasets can be accessed for research purposes. The article then summarizes AI and computer vision based state-of-the-art methods proposed in the last decade to detect breast cancer using various imaging modalities. Primarily, we have focused on reviewing frameworks that report results using mammograms, as this is the most widely used breast imaging modality and serves as the first test that medical practitioners usually prescribe for the detection of breast cancer. Another reason for focusing on mammograms is the availability of labelled datasets. Dataset availability is one of the most important aspects of developing AI-based frameworks, as such algorithms are data hungry and the quality of the dataset generally affects the performance of AI-based algorithms. In a nutshell, this research article will act as a primary resource for the research community working in the field of automated breast imaging analysis.
Affiliation(s)
- Shahid Munir Shah: Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
- Rizwan Ahmed Khan: Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
- Sheeraz Arif: Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
- Unaiza Sajid: Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
21. Rashmi R, Prasad K, Udupa CBK. Breast histopathological image analysis using image processing techniques for diagnostic purposes: A methodological review. J Med Syst 2021; 46:7. [PMID: 34860316] [PMCID: PMC8642363] [DOI: 10.1007/s10916-021-01786-9]
Abstract
Breast cancer in women is the second most common cancer worldwide. Early detection of breast cancer can reduce the risk to human life. Non-invasive techniques such as mammograms and ultrasound imaging are popularly used to detect the tumour. However, histopathological analysis is necessary to determine the malignancy of the tumour, as it analyzes the image at the cellular level. Manual analysis of these slides is time-consuming, tedious, subjective, and susceptible to human error. Also, at times the interpretation of these images is inconsistent between laboratories. Hence, a Computer-Aided Diagnostic system that can act as a decision support system is the need of the hour. Moreover, recent developments in computational power and memory capacity have led to the application of computer tools and medical image processing techniques to process and analyze breast cancer histopathological images (BCHI). This review paper summarizes various traditional and deep learning based methods developed to analyze BCHI. Initially, the characteristics of BCHI are discussed. A detailed discussion of the various potential regions of interest is presented, which is crucial for the development of Computer-Aided Diagnostic systems. We summarize the recent trends and choices made during the selection of medical image processing techniques. Finally, a detailed discussion of the various challenges involved in the analysis of BCHI is presented along with the future scope.
Affiliation(s)
- R Rashmi: Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
- Keerthana Prasad: Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
22. Zhang S, Yuan Z, Wang Y, Bai Y, Chen B, Wang H. REUR: A unified deep framework for signet ring cell detection in low-resolution pathological images. Comput Biol Med 2021; 136:104711. [PMID: 34388466] [DOI: 10.1016/j.compbiomed.2021.104711]
Abstract
Detecting signet ring cells (SRCs) in pathological images is essential for carcinoma diagnosis. However, it is time consuming for pathologists to detect SRCs manually from pathological images, and the accuracy of detecting them is also relatively low because of their small sizes. Recently, the exploration of deep learning methods in pathology analysis has been widely investigated by researchers. Nevertheless, the automatic detection of SRCs from real pathological images faces two problems. One is that labeled pathological images are insufficient and usually incomplete. The other is that the training data and the real clinical data have a large difference in resolution. Hence, adopting the transfer learning method affects the performance of deep learning methods. To address these two problems, we present a unified framework named REUR [RetinaNet combining USRNet (unfolding super-resolution network) with the RGHMC (revised gradient harmonizing mechanism classification) loss] that can accurately detect SRCs in low-resolution (LR) pathological images. First, the framework with the super-resolution (SR) module can address the difference in resolution between the training data and the real clinical data. Second, the framework with the label correction module can obtain the revised ground-truth labels from noisy examples, which are embedded into the gradient harmonizing mechanism to acquire the RGHMC loss. The results of the numerical experiments showed that the framework can perform better than other one-stage detectors based on the RetinaNet architecture in the high-resolution (HR) noisy dataset. It achieved a kappa value of 0.74 and an accuracy of 0.89 in the test with 27 randomly selected whole slide images (WSIs), and, thus, it can assist pathologists in better analyzing WSIs. The framework provides an essential method in computer-aided diagnosis for medical applications.
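A simplified, classification-only version of the gradient harmonizing idea behind the revised GHM loss (down-weighting examples whose gradients fall in densely populated bins) is sketched below in PyTorch; the paper's detection-specific loss, label correction module, and super-resolution network are not shown.

```python
import torch

def ghm_weights(pred_probs, targets, bins=10):
    """Simplified Gradient Harmonizing Mechanism: examples whose gradient norm
    falls into a densely populated bin are down-weighted, so very easy (or very
    noisy) examples do not dominate training."""
    g = (pred_probs - targets).abs()              # gradient norm of BCE w.r.t. logits
    weights = torch.zeros_like(g)
    n = g.numel()
    edges = torch.linspace(0, 1, bins + 1)
    edges[-1] += 1e-6                             # make the last bin inclusive
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (g >= lo) & (g < hi)
        count = in_bin.sum()
        if count > 0:
            weights[in_bin] = n / count           # inverse gradient-density weighting
    return weights / weights.mean()

p = torch.rand(16)
t = torch.randint(0, 2, (16,)).float()
print(ghm_weights(p, t))                          # per-example loss weights
```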
Affiliation(s)
- Shuchang Zhang: Department of Mathematics, National University of Defense Technology, Changsha, China
- Ziyang Yuan: Department of Mathematics, National University of Defense Technology, Changsha, China
- Yadong Wang: Department of Laboratory Pathology, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Yang Bai: Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Bo Chen: Suzhou Research Center, Institute of Automation, Chinese Academy of Sciences, Suzhou, China
- Hongxia Wang: Department of Mathematics, National University of Defense Technology, Changsha, China
23. Wang H, Cao J, Feng J, Xie Y, Yang D, Chen B. Mixed 2D and 3D convolutional network with multi-scale context for lesion segmentation in breast DCE-MRI. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102607]
24. Choudhary T, Mishra V, Goswami A, Sarangapani J. A transfer learning with structured filter pruning approach for improved breast cancer classification on point-of-care devices. Comput Biol Med 2021; 134:104432. [PMID: 33964737] [DOI: 10.1016/j.compbiomed.2021.104432]
Abstract
BACKGROUND AND OBJECTIVE Significant progress has been made in automated medical diagnosis with the advent of deep learning methods in recent years. However, deploying a deep learning model on mobile and small-scale, low-cost devices is a major bottleneck. Further, breast cancer is increasingly prevalent, with ductal carcinoma being its most common type. Although many machine/deep learning methods have already been investigated, there is still a need for further improvement. METHOD This paper proposes a novel deep convolutional neural network (CNN) based transfer learning approach complemented with structured filter pruning for histopathological image classification, aiming to bring down the run-time resource requirements of the trained deep learning models. In the proposed method, the less important filters are first pruned from the convolutional layers, and the pruned models are then trained on the histopathological image dataset. RESULTS We performed extensive experiments using three popular pre-trained CNNs: VGG19, ResNet34, and ResNet50. With the pruned VGG19 model, we achieved an accuracy of 91.25%, outperforming earlier methods on the same dataset and architecture while reducing FLOPs by 63.46%. With the pruned ResNet34 model, the accuracy increased to 91.80% with 40.63% fewer FLOPs. Moreover, with the ResNet50 model, we achieved an accuracy of 92.07% with 30.97% fewer FLOPs. CONCLUSION The experimental results reveal that the performance of the pre-trained models complemented with filter pruning exceeds that of the original pre-trained models. Another important outcome of the research is that the pruned model, with its reduced resource requirements, can easily be deployed in point-of-care devices for automated diagnosis applications.
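Structured filter pruning by L1 norm, the kind of operation described here, can be sketched in PyTorch as follows; this is a single-layer illustration rather than the authors' full pruning-plus-fine-tuning pipeline (the next layer's input channels would also need rebuilding).

```python
import torch
import torch.nn as nn

def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float) -> nn.Conv2d:
    """Keep the filters with the largest L1 norms (structured filter pruning)."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    importance = conv.weight.detach().abs().sum(dim=(1, 2, 3))   # L1 norm per filter
    keep = torch.topk(importance, n_keep).indices

    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(conv.weight[keep])      # copy surviving filters
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[keep])
    return pruned

layer = nn.Conv2d(64, 128, 3, padding=1)
print(prune_conv_filters(layer, keep_ratio=0.5))    # Conv2d(64, 64, ...)
```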
Affiliation(s)
- Vipul Mishra: Bennett University, Greater Noida, Uttar Pradesh, 201310, India
- Anurag Goswami: Bennett University, Greater Noida, Uttar Pradesh, 201310, India
25. Chouhan N, Khan A, Shah JZ, Hussnain M, Khan MW. Deep convolutional neural network and emotional learning based breast cancer detection using digital mammography. Comput Biol Med 2021; 132:104318. [PMID: 33744608] [DOI: 10.1016/j.compbiomed.2021.104318]
Abstract
Breast cancer is one of the deadliest diseases among women. However, the chances of death are greatly reduced if it is diagnosed and treated at an early stage. Mammography is one of the reliable methods used by radiologists to detect breast cancer at its initial stage. Therefore, an automatic and secure breast cancer detection system that accurately detects abnormalities not only increases the radiologist's diagnostic confidence but also provides more objective evidence. In this work, an automatic Diverse Features based Breast Cancer Detection (DFeBCD) system is proposed to classify a mammogram as normal or abnormal. Four sets of distinct feature types are used. Among them, features based on taxonomic indexes, statistical measures, and local binary patterns are static. The proposed DFeBCD dynamically extracts the fourth set of features from mammogram images using a highway-network-based deep convolutional neural network (CNN). Two classifiers, a Support Vector Machine (SVM) and an Emotional Learning inspired Ensemble Classifier (ELiEC), are trained on these distinct features using the standard IRMA mammogram dataset. The reliability of the system's performance is ensured by applying 5-fold cross-validation. Through experiments, we have observed that the performance of the DFeBCD system on features dynamically generated through the highway-network-based CNN is better than that on all three individual sets of ad-hoc features. Furthermore, hybridization of all four types of features improves the system's performance by nearly 2-3%. The performance of both classifiers is comparable on the individual sets of ad-hoc features. However, the ELiEC classifier's performance is better than that of the SVM on both hybrid and dynamic features.
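One of the static feature types mentioned, local binary patterns, combined with an SVM can be sketched with scikit-image and scikit-learn as below; the data are synthetic stand-ins, and the highway-network CNN and ELiEC classifier are not shown.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def lbp_histogram(gray, P=8, R=1.0, bins=10):
    """Uniform LBP histogram of one grayscale mammogram patch."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=bins, range=(0, P + 2), density=True)
    return hist

# Synthetic stand-in for mammogram patches labeled normal (0) vs. abnormal (1)
rng = np.random.default_rng(0)
patches = rng.random((60, 64, 64))
y = rng.integers(0, 2, size=60)

X = np.array([lbp_histogram(p) for p in patches])
print("CV accuracy (SVM on LBP features):",
      cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```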
Affiliation(s)
- Naveed Chouhan: Department of Computer & Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Islamabad, Pakistan
- Asifullah Khan: Department of Computer & Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Islamabad, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering and Applied Sciences, Islamabad, Pakistan; Deep Learning Lab, Center for Mathematical Sciences (CMS), Pakistan Institute of Engineering and Applied Sciences, Islamabad, Pakistan
- Jehan Zeb Shah: Instrumentation Control & Computer Complex (ICCC), P.O. Box 2191, Islamabad, Pakistan
- Mazhar Hussnain: Department of Computer & Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Islamabad, Pakistan
- Muhammad Waleed Khan: Department of Computer & Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Islamabad, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering and Applied Sciences, Islamabad, Pakistan
26. Ilan Y. Second-Generation Digital Health Platforms: Placing the Patient at the Center and Focusing on Clinical Outcomes. Front Digit Health 2020; 2:569178. [PMID: 34713042] [PMCID: PMC8521820] [DOI: 10.3389/fdgth.2020.569178]
Abstract
Artificial intelligence (AI) digital health systems have drawn much attention over the last decade. However, their implementation into medical practice occurs at a much slower pace than expected. This paper reviews some of the achievements of first-generation AI systems, and the barriers facing their implementation into medical practice. The development of second-generation AI systems is discussed with a focus on overcoming some of these obstacles. Second-generation systems are aimed at focusing on a single subject and on improving patients' clinical outcomes. A personalized closed-loop system designed to improve end-organ function and the patient's response to chronic therapies is presented. The system introduces a platform which implements a personalized therapeutic regimen and introduces quantifiable individualized-variability patterns into its algorithm. The platform is designed to achieve a clinically meaningful endpoint by ensuring that chronic therapies will have sustainable effect while overcoming compensatory mechanisms associated with disease progression and drug resistance. Second-generation systems are expected to assist patients and providers in adopting and implementing of these systems into everyday care.