1. Durán-Díaz I, Sarmiento A, Fondón I, Bodineau C, Tomé M, Durán RV. A Robust Method for the Unsupervised Scoring of Immunohistochemical Staining. Entropy (Basel) 2024; 26:165. PMID: 38392420; PMCID: PMC10888407; DOI: 10.3390/e26020165.
Abstract
Immunohistochemistry is a powerful technique that is widely used in biomedical research and in the clinic; it determines the expression levels of proteins of interest in tissue samples from the color intensity produced when specific antibodies bind their target biomarkers. As such, immunohistochemical images are complex and their features are difficult to quantify. Recently, we proposed a method, with a first separation stage based on non-negative matrix factorization (NMF), that achieved good results. However, that method depended strongly on the parameters controlling sparseness and non-negativity, as well as on algorithm initialization, and it required a reference image as a starting point for the NMF algorithm. In the present work, we propose a simpler and more robust method for the automated, unsupervised scoring of bright-field immunohistochemical images. Our work focuses on images of tumor tissue marked with blue (nuclei) and brown (protein of interest) stains. The new method avoids NMF in the separation stage and circumvents the need for a control image: it determines the subspace spanned by the two colors of interest using principal component analysis (PCA) with dimension reduction. Because this subspace is two-dimensional, the color vectors can be determined from the point-density peaks. A new scoring stage, which again avoids reference images, makes the procedure more robust and less dependent on parameters. Semi-quantitative scoring experiments with five categories show promising and consistent results compared with manual scoring by experts.
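The PCA-based separation stage described above can be sketched in a few lines. The snippet below is a hypothetical simplification, not the authors' implementation: it generates synthetic blue-ish and brown-ish pixels, applies a Beer-Lambert optical-density transform (a common choice for stain analysis, assumed here), and projects onto the first two principal components to obtain the stain subspace.

```python
import numpy as np
from sklearn.decomposition import PCA

def stain_subspace(pixels_rgb):
    """Project pixels onto the 2-D plane spanned by the first two
    principal components of their optical densities (a simplified
    sketch of a PCA-based stain-separation stage)."""
    # Beer-Lambert optical density: absorbance rather than raw intensity
    od = -np.log10(np.clip(pixels_rgb / 255.0, 1e-6, 1.0))
    pca = PCA(n_components=2)
    coords = pca.fit_transform(od.reshape(-1, 3))
    return coords, pca

# Synthetic mixture of two "stains" (blue-ish nuclei, brown-ish marker)
rng = np.random.default_rng(0)
blue = rng.normal([60, 70, 160], 10, size=(500, 3))
brown = rng.normal([140, 100, 60], 10, size=(500, 3))
pixels = np.clip(np.vstack([blue, brown]), 1, 255)
coords, pca = stain_subspace(pixels)
```

In the resulting 2-D coordinates, each stain forms a dense cluster, so the density peaks the abstract mentions correspond to the two stain directions.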
Affiliation(s)
- Iván Durán-Díaz
- Signal Theory and Communications Department, University of Seville, Avda. Descubrimientos S/N, 41092 Seville, Spain
- Auxiliadora Sarmiento
- Signal Theory and Communications Department, University of Seville, Avda. Descubrimientos S/N, 41092 Seville, Spain
- Irene Fondón
- Signal Theory and Communications Department, University of Seville, Avda. Descubrimientos S/N, 41092 Seville, Spain
- Clément Bodineau
- Department of Pathology, Brigham and Women's Hospital, Boston, MA 02115, USA
- Department of Genetics, Harvard Medical School, Boston, MA 02115, USA
- Mercedes Tomé
- Centro Andaluz de Biología Molecular y Medicina Regenerativa-CABIMER, Consejo Superior de Investigaciones Científicas, Universidad de Sevilla, Universidad Pablo de Olavide, 41092 Seville, Spain
- Raúl V Durán
- Centro Andaluz de Biología Molecular y Medicina Regenerativa-CABIMER, Consejo Superior de Investigaciones Científicas, Universidad de Sevilla, Universidad Pablo de Olavide, 41092 Seville, Spain
2. Albalawi E, Thakur A, Ramakrishna MT, Bhatia Khan S, SankaraNarayanan S, Almarri B, Hadi TH. Oral squamous cell carcinoma detection using EfficientNet on histopathological images. Front Med (Lausanne) 2024; 10:1349336. PMID: 38348235; PMCID: PMC10859441; DOI: 10.3389/fmed.2023.1349336.
Abstract
Introduction: Oral squamous cell carcinoma (OSCC) poses a significant challenge in oncology due to the absence of precise diagnostic tools, leading to delays in identifying the condition. Current diagnostic methods for OSCC have limitations in accuracy and efficiency, highlighting the need for more reliable approaches. This study explores the discriminative potential of histopathological images of oral epithelium and OSCC.
Methods: The research utilized a publicly available histopathological imaging database for oral cancer analysis comprising 1224 images from 230 patients, captured at varying magnifications. These images formed the basis for training a customized deep learning model built on the EfficientNetB3 architecture to distinguish between normal epithelium and OSCC tissue, employing data augmentation, regularization techniques, and optimization strategies.
Results: The customized model achieved 99% accuracy on the test set, underscoring its efficacy in discerning normal epithelium from OSCC tissue. It also exhibited strong precision, recall, and F1-score metrics, reinforcing its potential as a robust diagnostic tool for OSCC.
Discussion: This research demonstrates the promise of deep learning models for addressing the diagnostic challenges associated with OSCC. The 99% test-set accuracy represents a considerable step toward earlier and more accurate detection, and techniques such as data augmentation and optimization show promise for improving patient outcomes through timely and precise identification of OSCC.
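The data augmentation this study leans on can be illustrated with a minimal, framework-free sketch. This is a toy stand-in, not the authors' pipeline: real histopathology augmentation typically also jitters color and scale, and the exact transforms used here are an assumption.

```python
import numpy as np

def augment(image, rng):
    """Return a randomly flipped / rotated copy of an image patch.
    Flips and right-angle rotations are safe for histology because
    tissue has no canonical orientation."""
    if rng.random() < 0.5:
        image = np.fliplr(image)
    k = rng.integers(0, 4)  # rotate by 0, 90, 180, or 270 degrees
    return np.rot90(image, k)

rng = np.random.default_rng(42)
patch = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)
batch = [augment(patch, rng) for _ in range(8)]
```

Each augmented copy contains exactly the original pixel values in a new arrangement, which is why such transforms enlarge the training set without distorting stain intensities.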
Affiliation(s)
- Eid Albalawi
- Department of Computer Science, College of Computer Science and Information Technology, King Faisal University, Al-Ahsa, Saudi Arabia
- Arastu Thakur
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- Mahesh Thyluru Ramakrishna
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- Surbhi Bhatia Khan
- Department of Data Science, School of Science, Engineering and Environment, University of Salford, Salford, United Kingdom
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
- Suresh SankaraNarayanan
- Department of Computer Science, College of Computer Science and Information Technology, King Faisal University, Al-Ahsa, Saudi Arabia
- Badar Almarri
- Department of Computer Science, College of Computer Science and Information Technology, King Faisal University, Al-Ahsa, Saudi Arabia
- Theyazn Hassn Hadi
- Applied College in Abqaiq, King Faisal University, Al-Ahsa, Saudi Arabia
3. Li J, Wang D, Zhang C. Establishment of a pathomic-based machine learning model to predict CD276 (B7-H3) expression in colon cancer. Front Oncol 2024; 13:1232192. PMID: 38260829; PMCID: PMC10802857; DOI: 10.3389/fonc.2023.1232192.
Abstract
CD276 is a promising prognostic indicator and an attractive therapeutic target in various malignancies. However, current methods for CD276 detection are time-consuming and expensive, limiting extensive studies and applications of CD276. We aimed to develop a pathomic model to predict CD276 expression from H&E-stained pathological images and to explore the underlying mechanism of the pathomic features by associating the model with transcription profiles. A dataset of colon adenocarcinoma (COAD) patients was retrieved from The Cancer Genome Atlas (TCGA) database and divided into training and validation sets at a ratio of 8:2 by stratified sampling. Using the gradient boosting machine (GBM) algorithm, we established a pathomic model to predict CD276 expression in COAD. Univariate and multivariate Cox regression analyses were conducted to assess the predictive performance of the pathomic model for overall survival in COAD, and Gene Set Enrichment Analysis (GSEA) was performed to explore its underlying biological mechanisms. The pathomic model, built from three pathomic features, achieved an area under the curve (AUC) of 0.833 (95% CI: 0.784-0.882) in the training set and 0.758 (95% CI: 0.637-0.878) in the validation set. Calibration curves and the Hosmer-Lemeshow goodness-of-fit test showed that the predicted probability of high/low CD276 expression agreed well with the observed outcomes in both the training and validation sets (P=0.176 and 0.255, respectively). Decision curve analysis (DCA) suggested that the pathomic model yields high clinical benefit. All subjects were categorized into high pathomic score (PS-H) and low PS (PS-L) groups according to the cutoff value of PS, and univariate and multivariate Cox regression analyses indicated that PS was a risk factor for overall survival in COAD. Furthermore, GSEA identified several immune- and inflammation-related pathways and genes associated with the pathomic model. In summary, we constructed a pathomics-based machine learning model that predicts CD276 expression directly from H&E-stained images in COAD; integrated analysis of the pathomic model and transcriptomics makes the model interpretable and provides a theoretical basis for further hypotheses and experimental research.
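The modeling recipe described here (stratified 8:2 split, gradient boosting, AUC evaluation) can be reproduced generically with scikit-learn. Everything below is invented for illustration: the synthetic "pathomic features", the labels standing in for high/low CD276 expression, and the default hyperparameters are all assumptions, not the paper's data or settings.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the three pathomic features and a binary
# high/low expression label (illustrative only)
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

# Stratified 8:2 split, mirroring the study's sampling scheme
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# GBM classifier and validation-set AUC
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_va, gbm.predict_proba(X_va)[:, 1])
```

Stratification keeps the high/low class proportions identical in both splits, which matters when expression classes are imbalanced.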
Affiliation(s)
- Jia Li
- Department of Gastroenterology, The 983rd Hospital of Joint Logistic Support Force of PLA, Tianjin, China
- Dongxu Wang
- Department of Gastroenterology, The 983rd Hospital of Joint Logistic Support Force of PLA, Tianjin, China
- Chenxin Zhang
- Department of General Surgery, The 983rd Hospital of Joint Logistic Support Force of PLA, Tianjin, China
4. Jia Y, Liu J, Chen L, Zhao T, Wang Y. THItoGene: a deep learning method for predicting spatial transcriptomics from histological images. Brief Bioinform 2023; 25:bbad464. PMID: 38145948; PMCID: PMC10749789; DOI: 10.1093/bib/bbad464.
Abstract
Spatial transcriptomics unveils the complex dynamics of cell regulation and transcriptomes, but it is typically cost-prohibitive. Predicting spatial gene expression from histological images via artificial intelligence offers a more affordable option, yet existing methods fall short in extracting deep-level information from pathological images. In this paper, we present THItoGene, a hybrid neural network that utilizes dynamic convolutional and capsule networks to adaptively sense potential molecular signals in histological images for exploring the relationship between high-resolution pathology image phenotypes and regulation of gene expression. A comprehensive benchmark evaluation using datasets from human breast cancer and cutaneous squamous cell carcinoma has demonstrated the superior performance of THItoGene in spatial gene expression prediction. Moreover, THItoGene has demonstrated its capacity to decipher both the spatial context and enrichment signals within specific tissue regions. THItoGene can be freely accessed at https://github.com/yrjia1015/THItoGene.
Affiliation(s)
- Yuran Jia
- Institute for Bioinformatics, School of Computer Science and Technology, Harbin Institute of Technology, Harbin, 150040, China
- Junliang Liu
- Institute for Bioinformatics, School of Computer Science and Technology, Harbin Institute of Technology, Harbin, 150040, China
- Li Chen
- School of Life Sciences, Westlake University, Hangzhou, Zhejiang 310024, China
- Tianyi Zhao
- School of Medicine and Health, Harbin Institute of Technology, Harbin, 150040, China
- Yadong Wang
- School of Medicine and Health, Harbin Institute of Technology, Harbin, 150040, China
5. Albahli S, Nazir T. A Circular Box-Based Deep Learning Model for the Identification of Signet Ring Cells from Histopathological Images. Bioengineering (Basel) 2023; 10:1147. PMID: 37892876; PMCID: PMC10604551; DOI: 10.3390/bioengineering10101147.
Abstract
Signet ring cell (SRC) carcinoma is a particularly serious type of cancer and a leading cause of death worldwide. SRC carcinoma has a more deceptive onset than other carcinomas and is mostly encountered in its later stages. Recognizing SRCs at their initial stages is therefore a challenge because of their varying appearance and size and changes in illumination, and the recognition process is costly because it requires medical experts. A timely diagnosis is important because the stage of the disease determines its severity, curability, and the survival rate of patients. To tackle these challenges, this paper proposes a deep learning (DL)-based methodology: a custom CircleNet with ResNet-34 for SRC recognition and classification. This approach was chosen because of the circular shape of SRCs, which the CircleNet representation exploits, and it achieved better performance as a result. We utilized a challenging dataset for experimentation and performed augmentation to increase the number of samples. The experiments were conducted using 35,000 images and attained 96.40% accuracy. A comparative analysis confirmed that the method outperforms the other methods.
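Circle-based detectors such as CircleNet score overlap between detections on circles rather than bounding boxes, which suits round cells. The function below is a generic geometric sketch of circle IoU (intersection-over-union of two discs), offered as an assumption about how circular detections can be matched; it is not code from the paper.

```python
import math

def circle_iou(c1, c2):
    """IoU of two circular detections given as (x, y, r) tuples."""
    (x1, y1, r1), (x2, y2, r2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d >= r1 + r2:                      # disjoint circles
        inter = 0.0
    elif d <= abs(r1 - r2):               # one circle contains the other
        inter = math.pi * min(r1, r2) ** 2
    else:                                 # lens-shaped overlap region
        a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
        a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
        corner = 0.5 * math.sqrt(
            (-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2))
        inter = a1 + a2 - corner
    union = math.pi * (r1 * r1 + r2 * r2) - inter
    return inter / union
```

A circle needs only three parameters versus four for a box, and its IoU is rotation-invariant, which is one motivation for circular representations of cells.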
Affiliation(s)
- Saleh Albahli
- Department of Information Technology, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia
- Tahira Nazir
- Faculty of Computing, Riphah International University, Islamabad 44600, Pakistan
6. Tabatabaei Z, Wang Y, Colomer A, Oliver Moll J, Zhao Z, Naranjo V. WWFedCBMIR: World-Wide Federated Content-Based Medical Image Retrieval. Bioengineering (Basel) 2023; 10:1144. PMID: 37892874; PMCID: PMC10604333; DOI: 10.3390/bioengineering10101144.
Abstract
The paper proposes a federated content-based medical image retrieval (FedCBMIR) tool that uses federated learning (FL) to address the difficulty of acquiring a diverse medical data set for training CBMIR models. CBMIR is a tool for finding the most similar cases in a data set to assist pathologists. Training such a tool requires a pool of whole-slide images (WSIs) to train the feature extractor (FE) to produce an optimal embedding vector, but strict regulations surrounding data sharing in hospitals make it difficult to collect a rich data set. FedCBMIR distributes an unsupervised FE to collaborating centers for training without sharing the data, resulting in shorter training times and higher performance. FedCBMIR was evaluated in two simulated experiments: two clients with two different breast cancer data sets, BreaKHis and Camelyon17 (CAM17), and four clients sharing the BreaKHis data set at four different magnifications. FedCBMIR increases the F1 score (F1S) of each client from 96% to 98.1% on CAM17 and from 95% to 98.4% on BreaKHis, with 11.44 fewer hours of training time. In the four-magnification BreaKHis experiment, a single generalized model provides 98%, 96%, 94%, and 97% F1S and accomplishes this in 25.53 fewer hours of training.
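The core federated idea, training locally and sharing only model parameters, can be sketched with FedAvg-style weighted averaging. Whether FedCBMIR uses exactly this aggregation rule is an assumption; the client weights and data set sizes below are toy values chosen for illustration.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate per-client parameter lists into a global model by
    averaging each parameter, weighted by client data set size.
    Raw data never leaves the clients; only parameters are shared."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Two simulated clients holding data sets of different sizes
w_a = [np.ones((2, 2)), np.zeros(2)]        # client A's parameters
w_b = [3 * np.ones((2, 2)), np.ones(2)]     # client B's parameters
global_w = fedavg([w_a, w_b], client_sizes=[100, 300])
```

Weighting by data set size keeps a small client from dragging the global model away from what the bulk of the (unshared) data supports.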
Affiliation(s)
- Zahra Tabatabaei
- Department of Artificial Intelligence, Tyris Tech S.L., 46021 Valencia, Spain
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, HUMAN-Tech, Universitat Politècnica de València, 46021 Valencia, Spain
- Yuandou Wang
- Multiscale Networked Systems, Universiteit van Amsterdam, 1098XH Amsterdam, The Netherlands
- Adrián Colomer
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, HUMAN-Tech, Universitat Politècnica de València, 46021 Valencia, Spain
- ValgrAI—Valencian Graduate School and Research Network for Artificial Intelligence, 46022 Valencia, Spain
- Javier Oliver Moll
- Department of Artificial Intelligence, Tyris Tech S.L., 46021 Valencia, Spain
- Zhiming Zhao
- Multiscale Networked Systems, Universiteit van Amsterdam, 1098XH Amsterdam, The Netherlands
- Valery Naranjo
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, HUMAN-Tech, Universitat Politècnica de València, 46021 Valencia, Spain
7. Kumaraswamy E, Kumar S, Sharma M. An Invasive Ductal Carcinomas Breast Cancer Grade Classification Using an Ensemble of Convolutional Neural Networks. Diagnostics (Basel) 2023; 13:1977. PMID: 37296828; DOI: 10.3390/diagnostics13111977.
Abstract
Invasive ductal carcinoma breast cancer (IDC-BC) is the most common type of breast cancer, and its asymptomatic nature has led to an increased mortality rate globally. Advancements in artificial intelligence and machine learning have revolutionized the medical field through AI-enabled computer-aided diagnosis (CAD) systems, which help detect diseases at an early stage and assist pathologists in producing more reliable outcomes for patient treatment. In this work, the potential of pre-trained convolutional neural networks (CNNs) (EfficientNetV2L, ResNet152V2, and DenseNet201), used singly or as an ensemble, was thoroughly explored for IDC-BC grade classification on the DataBiox dataset. Data augmentation was used to avoid data scarcity and class imbalance. The performance of the best model was compared across three balanced versions of the DataBiox dataset (1200, 1400, and 1600 images) to determine the effect of this augmentation, and the number of training epochs was analysed to ensure the coherency of the most optimal model. The experimental results revealed that the proposed ensemble model outperformed existing state-of-the-art techniques for classifying IDC-BC grades: it achieved a classification accuracy of 94% and attained significant areas under the ROC curves for grades 1, 2, and 3 of 96%, 94%, and 96%, respectively.
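One common way to combine several CNNs into an ensemble, and a plausible reading of the scheme used here, is soft voting: averaging the models' softmax outputs and taking the argmax class. The probability tables below are invented; treat the whole snippet as a generic illustration rather than the paper's exact ensembling rule.

```python
import numpy as np

def soft_vote(prob_sets):
    """Average per-model class probabilities, then pick the most
    probable class for each sample (soft-voting ensemble)."""
    return np.mean(prob_sets, axis=0).argmax(axis=1)

# Three hypothetical models scoring two samples over grades 1-3
# (rows = samples, columns = class probabilities)
m1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
m2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.2, 0.6]])
m3 = np.array([[0.2, 0.5, 0.3], [0.1, 0.1, 0.8]])
grades = soft_vote([m1, m2, m3])  # class indices 0..2
```

Averaging probabilities (rather than hard votes) lets a confident model outweigh two uncertain ones, which is often why ensembles of diverse backbones beat any single member.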
Affiliation(s)
- Eelandula Kumaraswamy
- School of Electronics and Electrical Engineering, Lovely Professional University, Phagwara 144411, Punjab, India
- Sumit Kumar
- School of Electronics and Electrical Engineering, Lovely Professional University, Phagwara 144411, Punjab, India
- Division of Research & Development, Lovely Professional University, Phagwara 144411, Punjab, India
- Manoj Sharma
- Department of ECE, Giani Zail Singh Campus College of Engineering & Technology, MRSPTU, Bathinda 151001, Punjab, India
8. Khanagar SB, Alkadi L, Alghilan MA, Kalagi S, Awawdeh M, Bijai LK, Vishwanathaiah S, Aldhebaib A, Singh OG. Application and Performance of Artificial Intelligence (AI) in Oral Cancer Diagnosis and Prediction Using Histopathological Images: A Systematic Review. Biomedicines 2023; 11:1612. PMID: 37371706; DOI: 10.3390/biomedicines11061612.
Abstract
Oral cancer (OC) is one of the most common forms of head and neck cancer and continues to have the lowest survival rates worldwide, even with advancements in research and therapy. The prognosis of OC has not significantly improved in recent years, presenting a persistent challenge in the biomedical field. In oncology, artificial intelligence (AI) has developed rapidly, with notable successes reported in recent times. This systematic review aimed to critically appraise the available evidence regarding the utilization of AI in the diagnosis, classification, and prediction of OC using histopathological images. An electronic search of several databases, including PubMed, Scopus, Embase, the Cochrane Library, Web of Science, Google Scholar, and the Saudi Digital Library, was conducted for articles published between January 2000 and January 2023. Nineteen articles that met the inclusion criteria were subjected to critical analysis using QUADAS-2, and the certainty of the evidence was assessed using the GRADE approach. AI models have been widely applied in diagnosing oral cancer, differentiating normal and malignant regions, predicting the survival of OC patients, and grading OC. The AI models in these studies displayed accuracies ranging from 89.47% to 100%, sensitivities from 97.76% to 99.26%, and specificities from 92% to 99.42%. The models' abilities to diagnose, classify, and predict the occurrence of OC outperform existing clinical approaches. This demonstrates the potential for AI to deliver superior precision and accuracy, helping pathologists significantly improve their diagnostic outcomes and reduce the probability of errors. Considering these advantages, regulatory bodies and policymakers should expedite the approval and marketing of these products for application in clinical scenarios.
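The accuracy, sensitivity, and specificity ranges the review reports all reduce to simple confusion-matrix arithmetic. The counts below are hypothetical, not drawn from any of the reviewed studies; the snippet only shows how the three figures relate.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix
    counts (tp/fn on malignant cases, tn/fp on normal cases)."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on malignant cases
        "specificity": tn / (tn + fp),   # recall on normal cases
    }

# Hypothetical test set: 100 malignant and 100 normal samples
m = diagnostic_metrics(tp=95, fp=3, tn=97, fn=5)
```

Reporting sensitivity and specificity alongside accuracy matters because a model can reach high accuracy on an imbalanced test set while still missing most malignant cases.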
Affiliation(s)
- Sanjeev B Khanagar
- Preventive Dental Science Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Lubna Alkadi
- King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Restorative and Prosthetic Dental Sciences Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- Maryam A Alghilan
- King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Restorative and Prosthetic Dental Sciences Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- Sara Kalagi
- King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Restorative and Prosthetic Dental Sciences Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- Mohammed Awawdeh
- Preventive Dental Science Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Lalitytha Kumar Bijai
- King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Maxillofacial Surgery and Diagnostic Sciences Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- Satish Vishwanathaiah
- Department of Preventive Dental Sciences, Division of Pediatric Dentistry, College of Dentistry, Jazan University, Jazan 45142, Saudi Arabia
- Ali Aldhebaib
- King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Radiological Sciences Program, College of Applied Medical Sciences, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- Oinam Gokulchandra Singh
- King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Radiological Sciences Program, College of Applied Medical Sciences, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
9. Ziyambe B, Yahya A, Mushiri T, Tariq MU, Abbas Q, Babar M, Albathan M, Asim M, Hussain A, Jabbar S. A Deep Learning Framework for the Prediction and Diagnosis of Ovarian Cancer in Pre- and Post-Menopausal Women. Diagnostics (Basel) 2023; 13:1703. PMID: 37238188; DOI: 10.3390/diagnostics13101703.
Abstract
Ovarian cancer ranks as the fifth leading cause of cancer-related mortality in women. Late-stage diagnosis (stages III and IV) is a major challenge due to the often vague and inconsistent initial symptoms. Current diagnostic methods, such as biomarkers, biopsy, and imaging tests, face limitations including subjectivity, inter-observer variability, and extended testing times. This study proposes a novel convolutional neural network (CNN) algorithm for predicting and diagnosing ovarian cancer that addresses these limitations. The CNN was trained on a histopathological image dataset that was divided into training and validation subsets and augmented before training. The model achieved a remarkable accuracy of 94%, with 95.12% of cancerous cases correctly identified and 93.02% of healthy cells accurately classified. The significance of this study lies in overcoming the challenges associated with human expert examination, such as higher misclassification rates, inter-observer variability, and extended analysis times. The study therefore presents a more accurate, efficient, and reliable approach to predicting and diagnosing ovarian cancer; future research should explore recent advances in the field to further enhance the effectiveness of the proposed method.
Affiliation(s)
- Blessed Ziyambe
- Department of Electrical Engineering, Harare Polytechnic College, Causeway Harare P.O. Box CY407, Zimbabwe
- Abid Yahya
- Department of Electrical, Computer and Telecommunications Engineering, Botswana International University of Science and Technology, Palapye 10071, Botswana
- Tawanda Mushiri
- Department of Industrial and Mechatronics Engineering, Faculty of Engineering & the Built Environment, University of Zimbabwe, Mt. Pleasant, 630 Churchill Avenue, Harare, Zimbabwe
- Qaisar Abbas
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Muhammad Babar
- Robotics and Internet of Things Laboratory, Prince Sultan University, Riyadh 12435, Saudi Arabia
- Mubarak Albathan
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Muhammad Asim
- EIAS Data Science Laboratory, Prince Sultan University, Riyadh 12435, Saudi Arabia
- Ayyaz Hussain
- Department of Computer Science, Quaid-i-Azam University, Islamabad 44000, Pakistan
- Sohail Jabbar
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
10. Rafiq A, Chursin A, Awad Alrefaei W, Rashed Alsenani T, Aldehim G, Abdel Samee N, Menzli LJ. Detection and Classification of Histopathological Breast Images Using a Fusion of CNN Frameworks. Diagnostics (Basel) 2023; 13:1700. PMID: 37238186; DOI: 10.3390/diagnostics13101700.
Abstract
Breast cancer is responsible for the deaths of thousands of women each year. The diagnosis of breast cancer (BC) frequently makes use of several imaging techniques; on the other hand, incorrect identification can result in unnecessary therapy and diagnostic procedures. Accurate identification of breast cancer can therefore spare a significant number of patients from undergoing unnecessary surgery and biopsy. As a result of recent developments in the field, deep learning systems used for medical image processing have shown significant benefits. Deep learning (DL) models have found widespread use for extracting important features from histopathologic BC images, improving classification performance and helping to automate the process. In recent times, both convolutional neural networks (CNNs) and hybrid deep learning-based approaches have demonstrated impressive performance. In this research, three types of CNN models are proposed: a straightforward CNN model (1-CNN), a fusion CNN model (2-CNN), and a three-CNN model (3-CNN). The experiments demonstrate that the 3-CNN-based technique performed best in terms of accuracy (90.10%), recall (89.90%), precision (89.80%), and F1-score (89.90%). In conclusion, the developed CNN-based approaches are contrasted with more modern machine learning and deep learning models, and their application has resulted in a significant increase in the accuracy of BC classification.
Affiliation(s)
- Ahsan Rafiq
- School of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
| | - Alexander Chursin
- Higher School of Industrial Policy and Entrepreneurship, RUDN University, 6 Miklukho-Maklaya St, Moscow 117198, Russia
| | - Wejdan Awad Alrefaei
- Department of Programming and Computer Sciences, Applied College in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 16245, Saudi Arabia
| | - Tahani Rashed Alsenani
- Department of Biology, College of Sciences in Yanbu, Taibah University, Yanbu 46522, Saudi Arabia
| | - Ghadah Aldehim
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
| | - Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
| | - Leila Jamel Menzli
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
| |
11
Amin MS, Ahn H. FabNet: A Features Agglomeration-Based Convolutional Neural Network for Multiscale Breast Cancer Histopathology Images Classification. Cancers (Basel) 2023; 15:1013. [PMID: 36831359] [PMCID: PMC9954749] [DOI: 10.3390/cancers15041013]
Abstract
The definitive diagnosis of histology specimen images relies largely on the pathologist's comprehensive experience; however, owing to the fine-to-coarse visual appearance of such images, experts often disagree in their assessments. Sophisticated deep learning approaches can help to automate the diagnostic process and reduce the analysis duration. More efficient and accurate automated systems can also increase diagnostic impartiality by reducing inter-operator variability. We propose a FabNet model that can learn the fine-to-coarse structural and textural features of multi-scale histopathological images by using an accretive network architecture that agglomerates hierarchical feature maps to achieve significant classification accuracy. We expand on a contemporary design by incorporating deep and close integration to finely combine features across layers. Our deep-layer accretive model structure combines the feature hierarchy in an iterative and hierarchical manner, yielding higher accuracy with fewer parameters. FabNet can identify malignant tumors from whole images and patches of histopathology images. We assessed the efficiency of our model on standard cancer datasets, which included breast cancer as well as colon cancer histopathology images. The proposed model significantly outperforms existing state-of-the-art models with respect to accuracy, F1-score, precision, and sensitivity, with fewer parameters.
12
Obayya M, Maashi MS, Nemri N, Mohsen H, Motwakel A, Osman AE, Alneil AA, Alsaid MI. Hyperparameter Optimizer with Deep Learning-Based Decision-Support Systems for Histopathological Breast Cancer Diagnosis. Cancers (Basel) 2023; 15:885. [PMID: 36765839] [DOI: 10.3390/cancers15030885]
Abstract
Histopathological images are a commonly used imaging modality for breast cancer. As manual analysis of histopathological images is difficult, automated tools utilizing artificial intelligence (AI) and deep learning (DL) methods are needed. Recent advancements in DL approaches are helpful for achieving maximal image classification performance in numerous application areas. This study develops an arithmetic optimization algorithm with deep-learning-based histopathological breast cancer classification (AOADL-HBCC) technique for healthcare decision making. The AOADL-HBCC technique employs noise removal based on median filtering (MF) and a contrast enhancement process. In addition, the AOADL-HBCC technique applies an AOA with a SqueezeNet model to derive feature vectors. Finally, a deep belief network (DBN) classifier with an Adamax hyperparameter optimizer is applied for the breast cancer classification process. A comparative study shows that the AOADL-HBCC technique displays better performance than other recent methodologies, with a maximum accuracy of 96.77%.
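The median-filtering (MF) noise-removal step mentioned above is a standard operation; a generic sketch (not the authors' implementation, window size chosen for illustration):

```python
import numpy as np

def median_filter(img, k=3):
    """Apply a k x k median filter to a 2-D grayscale image (edge-padded)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # Median of the k x k neighborhood centered on (i, j)
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single salt-noise pixel in a flat patch is removed entirely:
patch = np.full((5, 5), 10.0)
patch[2, 2] = 255.0  # impulse noise
clean = median_filter(patch)
```

Unlike mean filtering, the median discards the outlier instead of smearing it into its neighbors, which is why it suits impulse-type stain artifacts.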
13
Das M, Dash R, Mishra SK. Automatic Detection of Oral Squamous Cell Carcinoma from Histopathological Images of Oral Mucosa Using Deep Convolutional Neural Network. Int J Environ Res Public Health 2023; 20:2131. [PMID: 36767498] [PMCID: PMC9915186] [DOI: 10.3390/ijerph20032131]
Abstract
Worldwide, oral cancer is the sixth most common type of cancer. India ranks second in the number of oral cancer patients, contributing almost one-third of the global total. Among the several types of oral cancer, the most common and dominant one is oral squamous cell carcinoma (OSCC). The major risk factors for oral cancer include tobacco consumption, excessive alcohol consumption, poor oral hygiene, betel quid chewing, and viral infection (namely, human papillomavirus). The early detection of OSCC, in its preliminary stage, gives more chances for better treatment and proper therapy. In this paper, the authors propose a convolutional neural network model for the automatic and early detection of OSCC; for experimental purposes, histopathological oral cancer images are considered. The proposed model is compared and analyzed against state-of-the-art deep learning models such as VGG16, VGG19, AlexNet, ResNet50, ResNet101, MobileNet, and Inception Net. The proposed model achieved a cross-validation accuracy of 97.82%, which indicates the suitability of the proposed approach for the automatic classification of oral cancer data.
Affiliation(s)
- Madhusmita Das: Department of Computer Application, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar 751030, India
- Rasmita Dash: Department of Computer Science and Engineering, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar 751030, India
- Sambit Kumar Mishra: Department of Computer Science and Engineering, SRM University-AP, Guntur 522240, India
14
Sethy PK, Geetha Devi A, Padhan B, Behera SK, Sreedhar S, Das K. Lung cancer histopathological image classification using wavelets and AlexNet. J Xray Sci Technol 2023; 31:211-221. [PMID: 36463485] [DOI: 10.3233/xst-221301]
Abstract
Among malignant tumors, lung cancer has the highest morbidity and fatality rates worldwide. Screening for lung cancer has been investigated for decades in order to reduce the mortality rates of lung cancer patients, and treatment options have improved dramatically in recent years. Pathologists utilize various techniques to determine the stage, type, and subtype of lung cancers, but one of the most common is visual assessment of histopathology slides. The most common subtypes of lung cancer are adenocarcinoma and squamous cell carcinoma, and distinguishing between them, and from benign tissue, requires visual inspection by a skilled pathologist. The purpose of this article was to develop a hybrid network for the categorization of lung histopathology images by combining AlexNet, wavelets, and support vector machines. In this study, we feed the integrated discrete wavelet transform (DWT) coefficients and AlexNet deep features into linear support vector machines (SVMs) for lung nodule sample classification. The lung subset of the LC25000 lung and colon histopathology image dataset, which contains 5,000 digital histopathology images in each of three categories (benign/normal cells, adenocarcinoma, and squamous carcinoma cells), is used in this study to train and test the SVM classifiers. Using a 10-fold cross-validation method, the study achieves an accuracy of 99.3% and an area under the curve (AUC) of 0.99 in classifying these digital histopathology images of lung nodule samples.
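The fusion step, concatenating DWT coefficients with deep features before a linear SVM, can be sketched roughly as follows. The one-level Haar transform and the feature sizes here are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) sub-bands.
    Image sides must be even."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def fused_features(img, deep_feat):
    """Concatenate flattened DWT sub-bands with a deep-feature vector;
    the result would be the input to a linear SVM."""
    bands = haar_dwt2(img)
    return np.concatenate([b.ravel() for b in bands] + [deep_feat])

img = np.arange(16, dtype=float).reshape(4, 4)
feat = fused_features(img, np.zeros(8))  # 8 stand-in "deep" features
```

In the paper the deep features come from AlexNet; the zero vector above is only a placeholder for the vector shape.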
Affiliation(s)
- A Geetha Devi: Department of Electronics and Communication Engineering, PVP Siddhartha Institute of Technology, Vijayawada, AP, India
- Bikash Padhan: Department of Electronics, Sambalpur University, Jyoti Vihar, Burla, India
- Kalyan Das: Department of Computer Science Engineering and Application, Sambalpur University Institute of Information Technology, Burla, India
15
Hamza MA, Mengash HA, Nour MK, Alasmari N, Aziz ASA, Mohammed GP, Zamani AS, Abdelmageed AA. Improved Bald Eagle Search Optimization with Synergic Deep Learning-Based Classification on Breast Cancer Imaging. Cancers (Basel) 2022; 14:6159. [PMID: 36551644] [DOI: 10.3390/cancers14246159]
Abstract
Medical imaging has attracted growing interest in healthcare with regard to breast cancer (BC). Globally, BC is a major cause of mortality among women. The examination of histopathology images is currently the medical gold standard for cancer diagnosis. However, manual microscopic inspection is a laborious task, and the results might be misleading as a result of human error. Thus, a computer-aided diagnosis (CAD) system can be utilized for accurately detecting cancer within essential time constraints, as earlier diagnosis is the key to curing cancer. The classification and diagnosis of BC utilizing deep learning algorithms has gained considerable attention. This article presents an improved bald eagle search optimization model with a synergic deep learning mechanism for breast cancer diagnosis using histopathological images (IBESSDL-BCHI). The proposed IBESSDL-BCHI model concentrates on the identification and classification of BC using HIs. To do so, the model first preprocesses images using a median filtering (MF) technique. In addition, feature extraction using a synergic deep learning (SDL) model is carried out, and the hyperparameters related to the SDL mechanism are tuned by use of the IBES model. Lastly, long short-term memory (LSTM) is utilized to precisely categorize the HIs into two major classes, benign and malignant. The performance of the IBESSDL-BCHI system was validated on a benchmark dataset, and the results demonstrate better general efficiency for BC classification than prior methods.
16
Gou F, Liu J, Zhu J, Wu J. A Multimodal Auxiliary Classification System for Osteosarcoma Histopathological Images Based on Deep Active Learning. Healthcare (Basel) 2022; 10:2189. [PMID: 36360530] [PMCID: PMC9690420] [DOI: 10.3390/healthcare10112189]
Abstract
Histopathological examination is an important criterion in the clinical diagnosis of osteosarcoma. With the improvement of hardware technology and computing power, pathological image analysis systems based on artificial intelligence have been widely used. However, classifying numerous intricate pathology images by hand is a tiresome task for pathologists, and the lack of labeled data makes such systems costly and difficult to build. This study constructs a classification assistance system (OHIcsA) based on active learning (AL) and a generative adversarial network (GAN). The system initially uses a small, labeled training set to train the classifier. Then, the most informative samples from the unlabeled images are selected for expert annotation, and the chosen images are added to the labeled dataset to retrain the network. Experiments on real datasets show that our proposed method achieves high classification performance, with an AUC of 0.995 and an accuracy of 0.989, using a small amount of labeled data, reducing the cost of building a medical system. The system's findings can aid clinical diagnosis and increase the effectiveness and verifiable accuracy of doctors.
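Selecting "the most informative samples" is the core active-learning loop; a common criterion (assumed here for illustration, the paper may use a different acquisition function) is predictive entropy:

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of each row of class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_for_annotation(probs, k):
    """Indices of the k most uncertain (highest-entropy) unlabeled samples,
    to be sent to the expert for labeling."""
    return np.argsort(predictive_entropy(probs))[::-1][:k]

probs = np.array([
    [0.98, 0.02],   # confident prediction
    [0.55, 0.45],   # uncertain
    [0.50, 0.50],   # most uncertain
])
picked = select_for_annotation(probs, 2)
```

The labels obtained for `picked` would then be appended to the training set before retraining, closing the loop described above.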
Affiliation(s)
- Fangfang Gou: School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Jun Liu: The Second People’s Hospital of Huaihua, Huaihua 418000, China
- Jun Zhu: The First People’s Hospital of Huaihua, Huaihua 418000, China; Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Huaihua 418000, China
- Jia Wu: School of Computer Science and Engineering, Central South University, Changsha 410083, China; Research Center for Artificial Intelligence, Monash University, Melbourne, VIC 3800, Australia
17
Alsaleh L, Li C, Couetil JL, Ye Z, Huang K, Zhang J, Chen C, Johnson TS. Spatial Transcriptomic Analysis Reveals Associations between Genes and Cellular Topology in Breast and Prostate Cancers. Cancers (Basel) 2022; 14:4856. [PMID: 36230778] [PMCID: PMC9562681] [DOI: 10.3390/cancers14194856]
Abstract
BACKGROUND: Cancer is the leading cause of death worldwide, with breast and prostate cancer the most common among women and men, respectively. Gene expression and image features are independently prognostic of patient survival; but until the advent of spatial transcriptomics (ST), it was not possible to determine how the gene expression of cells was tied to their spatial relationships (i.e., topology).
METHODS: We identify topology-associated genes (TAGs) that correlate with 700 image topological features (ITFs) in breast and prostate cancer ST samples. Genes and image topological features are independently clustered and correlated with each other. Themes among genes correlated with ITFs are investigated by functional enrichment analysis.
RESULTS: Overall, TAGs corresponding to extracellular matrix (ECM) and Collagen Type I Trimer gene ontology terms are common to both prostate and breast cancer. In breast cancer specifically, we identify the ZAG-PIP Complex as a TAG. In prostate cancer, we identify distinct TAGs that are enriched for GI dysmotility and the IgA immunoglobulin complex. We identified TAGs in every ST slide regardless of cancer type.
CONCLUSIONS: These TAGs are enriched for ontology terms, illustrating the biological relevance of our image topology features and their potential utility in diagnostic and prognostic models.
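The central computation, correlating each gene's expression with each image topological feature across spatial spots, reduces to a gene-by-feature Pearson correlation matrix. A toy sketch (data and function names are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

def tag_correlations(gene_expr, image_feats):
    """Pearson correlation of every gene (column of gene_expr) with every
    image topological feature (column of image_feats), across spots."""
    g = (gene_expr - gene_expr.mean(axis=0)) / gene_expr.std(axis=0)
    f = (image_feats - image_feats.mean(axis=0)) / image_feats.std(axis=0)
    return g.T @ f / len(g)   # mean of z-score products = Pearson r

# Toy data: 50 spots, one image feature, two genes.
feat = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
genes = np.hstack([2.0 * feat + 1.0,   # gene 0 tracks the feature exactly
                   -feat])             # gene 1 is perfectly anti-correlated
corr = tag_correlations(genes, feat)
```

Genes whose correlation magnitude clears a significance threshold would be the TAG candidates passed on to enrichment analysis.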
Affiliation(s)
- Lujain Alsaleh: Department of Biostatistics and Health Data Science, Indiana University, Indianapolis, IN 46202, USA
- Chen Li: Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY 11794, USA
- Justin L. Couetil: Department of Medical and Molecular Genetics, Indiana University, Indianapolis, IN 46202, USA
- Ze Ye: Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY 11794, USA
- Kun Huang: Department of Biostatistics and Health Data Science, Indiana University, Indianapolis, IN 46202, USA; Department of Medical and Molecular Genetics, Indiana University, Indianapolis, IN 46202, USA; Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA; Melvin and Bren Simon Comprehensive Cancer Center, Indiana University, Indianapolis, IN 46202, USA
- Jie Zhang: Department of Medical and Molecular Genetics, Indiana University, Indianapolis, IN 46202, USA; Melvin and Bren Simon Comprehensive Cancer Center, Indiana University, Indianapolis, IN 46202, USA
- Chao Chen: Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY 11794, USA
- Travis S. Johnson: Department of Biostatistics and Health Data Science, Indiana University, Indianapolis, IN 46202, USA; Melvin and Bren Simon Comprehensive Cancer Center, Indiana University, Indianapolis, IN 46202, USA; Indiana Biosciences Research Institute, Indianapolis, IN 46202, USA
18
Zhou W, Deng Z, Liu Y, Shen H, Deng H, Xiao H. Global Research Trends of Artificial Intelligence on Histopathological Images: A 20-Year Bibliometric Analysis. Int J Environ Res Public Health 2022; 19:11597. [PMID: 36141871] [PMCID: PMC9517580] [DOI: 10.3390/ijerph191811597]
Abstract
Cancer has become a major threat to global health care. With the development of computer science, artificial intelligence (AI) has been widely applied in histopathological image (HI) analysis. This study analyzed publications on AI in HI from 2001 to 2021 using bibliometrics, exploring the current research status and potential future directions. A total of 2844 publications from the Web of Science Core Collection were included in the bibliometric analysis. The country/region, institution, author, journal, keyword, and references were analyzed using VOSviewer and CiteSpace. The results showed that the number of publications has grown rapidly in the last five years. The USA is the most productive and influential country, with 937 publications and 23,010 citations, and most of the authors and institutions with higher numbers of publications and citations are from the USA. Keyword analysis showed that breast cancer, prostate cancer, colorectal cancer, and lung cancer are the tumor types of greatest concern. Co-citation analysis showed that classification and nucleus segmentation are the main research directions of AI-based HI studies, while transfer learning and self-supervised learning in HI are on the rise. This study performed the first bibliometric analysis of AI in HI across multiple indicators, providing insights for researchers to identify key cancer types and understand the research trends of AI application in HI.
Affiliation(s)
- Wentong Zhou: Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
- Ziheng Deng: Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
- Yong Liu: Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
- Hui Shen: Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, School of Medicine, Tulane University, New Orleans, LA 70112, USA
- Hongwen Deng: Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, School of Medicine, Tulane University, New Orleans, LA 70112, USA
- Hongmei Xiao: Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
19
Khalil MA, Lee YC, Lien HC, Jeng YM, Wang CW. Fast Segmentation of Metastatic Foci in H&E Whole-Slide Images for Breast Cancer Diagnosis. Diagnostics (Basel) 2022; 12:990. [PMID: 35454038] [DOI: 10.3390/diagnostics12040990]
Abstract
Breast cancer is the leading cause of cancer death for women globally. In clinical practice, pathologists visually scan enormous gigapixel microscopic tissue slide images, which is a tedious and challenging task. In breast cancer diagnosis, micro-metastases and especially isolated tumor cells are extremely difficult to detect and are easily neglected, because tiny metastatic foci might be missed in visual examinations by medical doctors. However, the literature poorly explores the detection of isolated tumor cells, which could be recognized as a viable marker to determine the prognosis for T1N0M0 breast cancer patients. To address these issues, we present a deep learning-based framework for efficient and robust lymph node metastasis segmentation in routinely used hematoxylin and eosin-stained (H&E) whole-slide images (WSIs) in minutes. A quantitative evaluation is conducted using 188 WSIs, containing 94 pairs of H&E-stained WSIs and immunohistochemical CK(AE1/AE3)-stained WSIs, which are used to produce a reliable and objective reference standard. The quantitative results demonstrate that the proposed method achieves 89.6% precision, 83.8% recall, 84.4% F1-score, and 74.9% mIoU, and that it performs significantly better (p < 0.001) than eight deep learning approaches, including two recently published models (v3_DCNN and Xception-65), three variants of DeepLabv3+ with different backbones, U-Net, SegNet, and FCN, in precision, recall, F1-score, and mIoU. Importantly, the proposed system is shown to be capable of identifying tiny metastatic foci in challenging cases, for which there are high probabilities of misdiagnosis in visual inspection, while the baseline approaches tend to fail to detect them.
For computational time, the proposed method takes 2.4 min to process a WSI using four NVIDIA GeForce GTX 1080 Ti GPU cards and 9.6 min using a single card, and is notably faster than the baseline methods (4 times faster than U-Net and SegNet, 5 times faster than FCN, 2 times faster than the three variants of DeepLabv3+, 1.4 times faster than v3_DCNN, and 41 times faster than Xception-65).
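For reference, the mask metrics reported above can be computed pixel-wise as below. This is a single-foreground-class sketch; the paper's mIoU would average IoU over classes:

```python
import numpy as np

def segmentation_scores(pred, ref):
    """Pixel-wise precision, recall, F1, and IoU for binary masks."""
    tp = np.logical_and(pred, ref).sum()      # predicted and truly metastatic
    fp = np.logical_and(pred, ~ref).sum()     # predicted but not in reference
    fn = np.logical_and(~pred, ref).sum()     # missed reference pixels
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)
    return precision, recall, f1, iou

pred = np.array([[1, 1], [0, 0]], dtype=bool)
ref  = np.array([[1, 0], [1, 0]], dtype=bool)
p, r, f1, iou = segmentation_scores(pred, ref)
```

Note that IoU is always the strictest of the four: its denominator counts both error types at once.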
20
Ali M, Ali R. Multi-Input Dual-Stream Capsule Network for Improved Lung and Colon Cancer Classification. Diagnostics (Basel) 2021; 11:1485. [PMID: 34441419] [PMCID: PMC8393706] [DOI: 10.3390/diagnostics11081485]
Abstract
Lung and colon cancers are two of the most common causes of death and morbidity in humans. One of the most important aspects of appropriate treatment is the histopathological diagnosis of such cancers. The main goal of this study is therefore to use a multi-input capsule network and digital histopathology images to build an enhanced computerized diagnosis system for detecting squamous cell carcinomas and adenocarcinomas of the lungs, as well as adenocarcinomas of the colon. Two convolutional layer blocks are used in the proposed multi-input capsule network. The CLB (Convolutional Layers Block) employs traditional convolutional layers, whereas the SCLB (Separable Convolutional Layers Block) employs separable convolutional layers. The CLB takes unprocessed histopathology images as input, whereas the SCLB takes uniquely pre-processed histopathological images. The pre-processing method uses color balancing, gamma correction, image sharpening, and multi-scale fusion as its major steps, because histopathology slide images are typically dominated by red and blue. All three channels (red, green, and blue) are adequately compensated during the color balancing phase. The dual-input technique helps the model learn features more effectively. On the benchmark LC25000 dataset, the empirical analysis indicates a significant improvement in classification results. The proposed model provides state-of-the-art performance in all classes, with 99.58% overall accuracy for lung and colon abnormalities based on histopathological images.
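Two of the pre-processing steps named above, gamma correction and color balancing, can be sketched as follows. The gray-world balancing rule is an assumption for illustration; the abstract does not specify which balancing rule the authors use:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Apply gamma correction to an image scaled to [0, 1]."""
    return np.power(img, gamma)

def gray_world_balance(img):
    """Gray-world color balancing: scale each channel so all channel
    means match the global mean, compensating a red/blue-dominated stain."""
    means = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (means.mean() / means), 0.0, 1.0)

# Toy 2x2 RGB patch with a red-heavy cast:
img = np.dstack([
    np.full((2, 2), 0.8),  # red
    np.full((2, 2), 0.4),  # green
    np.full((2, 2), 0.3),  # blue
])
balanced = gray_world_balance(img)
```

After balancing, all three channel means coincide, which is the "adequately compensated" condition the abstract describes.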
Affiliation(s)
- Mumtaz Ali: School of Computer Science, Huazhong University of Science and Technology, Wuhan 430074, China; Department of Computer Systems Engineering, Sukkur IBA University, Sukkur 65200, Pakistan
- Riaz Ali: Department of Computer Science, Sukkur IBA University, Sukkur 65200, Pakistan
21
Senousy Z, Abdelsamea MM, Mohamed MM, Gaber MM. 3E-Net: Entropy-Based Elastic Ensemble of Deep Convolutional Neural Networks for Grading of Invasive Breast Carcinoma Histopathological Microscopic Images. Entropy (Basel) 2021; 23:620. [PMID: 34065765] [PMCID: PMC8156865] [DOI: 10.3390/e23050620]
Abstract
Automated grading systems using deep convolutional neural networks (DCNNs) have proven their capability and potential to distinguish between different breast cancer grades using digitized histopathological images. In digital breast pathology, it is vital to measure how confident a DCNN is in grading using a machine-confidence metric, especially in the presence of major computer vision challenges such as the high visual variability of the images. Such a quantitative metric can be employed not only to improve the robustness of automated systems, but also to assist medical professionals in identifying complex cases. In this paper, we propose an Entropy-based Elastic Ensemble of DCNN models (3E-Net) for grading invasive breast carcinoma microscopy images, which provides an initial stage of explainability through an uncertainty-aware mechanism based on entropy. Our proposed model is designed to (1) exclude images to which the ensemble is insensitive or about which it is highly uncertain and (2) dynamically grade the non-excluded images using the certain models in the ensemble architecture. We evaluated two variations of 3E-Net on an invasive breast carcinoma dataset and achieved grading accuracies of 96.15% and 99.50%.
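The uncertainty-aware mechanism, keeping only ensemble members that are confident (low-entropy) and excluding images on which no member is confident, could look schematically like this. This is a simplified reading, with an arbitrary threshold, not the 3E-Net code:

```python
import numpy as np

def entropy(p):
    """Shannon entropy along the last (class) axis."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def elastic_ensemble(member_probs, threshold):
    """Average only the ensemble members whose prediction entropy is
    below `threshold`; return None if every member is too uncertain
    (i.e., the image is excluded from automatic grading)."""
    member_probs = np.asarray(member_probs)
    certain = entropy(member_probs) < threshold
    if not certain.any():
        return None  # image deferred, e.g. to a human expert
    return member_probs[certain].mean(axis=0)

members = [
    [0.9, 0.1],   # confident member -> kept
    [0.5, 0.5],   # maximally uncertain member -> dropped
]
fused = elastic_ensemble(members, threshold=0.5)
```

Excluded (None) images are exactly the "complex cases" the abstract proposes routing to medical professionals.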
Affiliation(s)
- Zakaria Senousy: School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK
- Mohammed M. Abdelsamea: School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK; Faculty of Computers and Information, Assiut University, Assiut 71515, Egypt
- Mona Mostafa Mohamed: Department of Zoology, Faculty of Science, Cairo University, Giza 12613, Egypt; Faculty of Basic Sciences, Galala University, Suez 435611, Egypt
- Mohamed Medhat Gaber: School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK; Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
22
Chen L, Zeng H, Zhang M, Luo Y, Ma X. Histopathological image and gene expression pattern analysis for predicting molecular features and prognosis of head and neck squamous cell carcinoma. Cancer Med 2021; 10:4615-4628. [PMID: 33987946] [PMCID: PMC8267162] [DOI: 10.1002/cam4.3965]
Abstract
BACKGROUND: Histopathological image features offer a quantitative measurement of cellular morphology and may help improve diagnosis and prognosis in head and neck squamous cell carcinoma (HNSCC).
METHODS: We first used histopathological image features and machine-learning algorithms to predict molecular features of 212 HNSCC patients from The Cancer Genome Atlas (TCGA). Next, we divided the TCGA-HNSCC cohort into a training set (n = 149) and a test set (n = 63), and obtained tissue microarrays as an external validation set (n = 126). We identified the gene expression profile correlated with image features by bioinformatics analysis.
RESULTS: Histopathological image features combined with random forest could predict five somatic mutations, transcriptional subtypes, and methylation subtypes, with areas under the curve (AUC) ranging from 0.828 to 0.968. The prediction model based on image features could predict overall survival, with 5-year AUCs of 0.831, 0.782, and 0.751 in the training, test, and validation sets. We next established an integrative prognostic model of image features and gene expression, which performed better in the training set (5-year AUC = 0.860) and test set (5-year AUC = 0.826). According to the histopathological transcriptomics risk score (HTRS) generated by the model, high-risk and low-risk patients had different survival in the training set (HR = 4.09, p < 0.001) and test set (HR = 3.08, p = 0.019). Multivariate analysis suggested that HTRS was an independent predictor in the training set (HR = 5.17, p < 0.001). The nomogram combining HTRS and clinical factors had a higher net benefit than conventional clinical evaluation.
CONCLUSIONS: Histopathological image features provide a promising approach to predicting mutations, molecular subtypes, and prognosis of HNSCC. The integration of image features and gene expression data has potential for improving prognosis prediction in HNSCC.
Affiliation(s)
- Linyan Chen: Department of Biotherapy, Cancer Center, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, China
- Hao Zeng: Department of Biotherapy, Cancer Center, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, China
- Mingxuan Zhang: West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Yuling Luo: West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Xuelei Ma: Department of Biotherapy, Cancer Center, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, China
23
Lwin TT, Yoneyama A, Maruyama H, Takeda T. Visualization Ability of Phase-Contrast Synchrotron-Based X-Ray Imaging Using an X-Ray Interferometer in Soft Tissue Tumors. Technol Cancer Res Treat 2021; 20:15330338211010121. [PMID: 33896273 PMCID: PMC8085371 DOI: 10.1177/15330338211010121] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
Phase-contrast synchrotron-based X-ray imaging using an X-ray interferometer provides high sensitivity and high spatial resolution, and it has the ability to depict the fine morphological structures of biological soft tissues, including tumors. In this study, we quantitatively compared phase-contrast synchrotron-based X-ray computed tomography images and images of histopathological hematoxylin-eosin-stained sections of spontaneously occurring rat testicular tumors that contained different types of cells. The absolute densities measured on the phase-contrast synchrotron-based X-ray computed tomography images correlated well with the densities of the nuclear chromatin in the histological images, thereby demonstrating the ability of phase-contrast synchrotron-based X-ray imaging using an X-ray interferometer to reliably identify the characteristics of cancer cells within solid soft tissue tumors. In addition, 3-dimensional synchrotron-based phase-contrast X-ray computed tomography enables screening for different structures within tumors, such as solid, cystic, and fibrous tissues, and blood clots, from any direction and with a spatial resolution down to 26 μm. Thus, phase-contrast synchrotron-based X-ray imaging using an X-ray interferometer shows potential for being useful in preclinical cancer research by providing the ability to depict the characteristics of tumor cells and by offering 3-dimensional information capabilities.
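A claim that two density measurements "correlated well" is typically backed by Pearson's correlation coefficient; a minimal numpy sketch on synthetic paired measurements (all values illustrative, not from the study):

```python
import numpy as np

# Hypothetical paired measurements: density from phase-contrast CT vs.
# nuclear chromatin density scored on matched H&E sections (synthetic).
rng = np.random.default_rng(1)
ct_density = rng.uniform(1.00, 1.10, size=30)            # illustrative units
chromatin = 5.0 * ct_density + rng.normal(scale=0.02, size=30)

# Pearson's r quantifies the linear correlation between the two readings.
r = np.corrcoef(ct_density, chromatin)[0, 1]
print(f"Pearson r = {r:.3f}")
```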
Affiliation(s)
- Thet-Thet Lwin: School of Allied Health Sciences, Kitasato University, Sagamihara, Kanagawa, Japan; Graduate School of Medical Sciences, Kitasato University, Sagamihara, Kanagawa, Japan
- Hiroko Maruyama: School of Allied Health Sciences, Kitasato University, Sagamihara, Kanagawa, Japan; Graduate School of Medical Sciences, Kitasato University, Sagamihara, Kanagawa, Japan
- Tohoru Takeda: School of Allied Health Sciences, Kitasato University, Sagamihara, Kanagawa, Japan; Graduate School of Medical Sciences, Kitasato University, Sagamihara, Kanagawa, Japan
24
Musulin J, Štifanić D, Zulijani A, Ćabov T, Dekanić A, Car Z. An Enhanced Histopathology Analysis: An AI-Based System for Multiclass Grading of Oral Squamous Cell Carcinoma and Segmenting of Epithelial and Stromal Tissue. Cancers (Basel) 2021; 13:1784. [PMID: 33917952 PMCID: PMC8068326 DOI: 10.3390/cancers13081784] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2021] [Revised: 04/02/2021] [Accepted: 04/07/2021] [Indexed: 12/12/2022] Open
Abstract
Oral squamous cell carcinoma is the most frequent histological neoplasm among head and neck cancers, and although it arises in a region that is easily accessible to visual examination and can be detected very early, early detection usually does not occur. The standard procedure for the diagnosis of oral cancer is histopathological examination; however, the main problem with this procedure is tumor heterogeneity, where the subjective component of the examination can directly impact patient-specific treatment intervention. For this reason, artificial intelligence (AI) algorithms are widely used as a computational aid in diagnosis, for the classification and segmentation of tumors, in order to reduce inter- and intra-observer variability. In this research, a two-stage AI-based system for automatic multiclass grading (the first stage) and segmentation of epithelial and stromal tissue (the second stage) from oral histopathological images is proposed to assist the clinician in oral squamous cell carcinoma (OSCC) diagnosis. The integration of Xception and SWT resulted in the highest classification performance, 0.963 (σ = 0.042) AUCmacro and 0.966 (σ = 0.027) AUCmicro, while semantic segmentation using DeepLabv3+ with Xception_65 as the backbone and data preprocessing achieved 0.878 (σ = 0.027) mIOU and a 0.955 (σ = 0.014) F1 score. The obtained results reveal that the proposed AI-based system has great potential in the diagnosis of OSCC.
Affiliation(s)
- Jelena Musulin: Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
- Daniel Štifanić: Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
- Ana Zulijani: Department of Oral Surgery, Clinical Hospital Center Rijeka, Krešimirova Ul. 40, 51000 Rijeka, Croatia
- Tomislav Ćabov: Faculty of Dental Medicine, University of Rijeka, Krešimirova Ul. 40, 51000 Rijeka, Croatia
- Andrea Dekanić: Department of Pathology and Cytology, Clinical Hospital Center Rijeka, Krešimirova Ul. 42, 51000 Rijeka, Croatia; Faculty of Medicine, University of Rijeka, Ul. Braće Branchetta 20/1, 51000 Rijeka, Croatia
- Zlatan Car: Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
25
Abstract
Breast cancer is a severe problem for women around the world, especially in developing countries, according to recent reports from the World Health Organization (WHO). High accuracy and early detection of breast cancer reduce the mortality rate; on the other hand, recognition of breast cancer is a complicated issue. Various studies and methods have been carried out to overcome this problem and to obtain accurate screening for breast cancer. One of the most recent high-performing methods is deep learning; it has been used to classify breast cancer using mammograms or histopathological images. This paper proposes a new method using the concept of a sliding window and an ensemble of four pre-trained convolutional neural networks (CNNs) to classify breast cancer into eight classes. In this study, each image produces 4 non-overlapping sliding windows, which are fed to the GoogleNet, AlexNet, ResNet50, and DenseNet-201 CNNs; an ensemble vote then determines the class of each window, and the ensemble is applied again to determine the class of the whole histopathological image. The Breast Cancer Histopathological Database (BreakHis) was employed in this paper with eight classes (Adenosis, Ductal Carcinoma, Fibroadenoma, Lobular Carcinoma, Mucinous Carcinoma, Papillary Carcinoma, Phyllodes Tumour, Tubular Adenoma). The proposed method is applied to four magnification cases: 40x, 100x, 200x, and 400x images. The proposed ensemble technique achieved an accuracy of 99.3325%. The results of the proposed system are comparable to those of recent studies.
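The two-level majority vote described above (four CNNs vote on each of four quadrant windows, then the window labels vote for the whole image) can be sketched as follows; the "models" here are trivial stubs standing in for GoogleNet, AlexNet, ResNet50, and DenseNet-201:

```python
import numpy as np
from collections import Counter

def split_into_windows(img):
    """Split an image into 4 non-overlapping quadrant windows."""
    h, w = img.shape[:2]
    return [img[:h//2, :w//2], img[:h//2, w//2:],
            img[h//2:, :w//2], img[h//2:, w//2:]]

def majority(labels):
    """Most common label (ties resolved by first occurrence)."""
    return Counter(labels).most_common(1)[0][0]

def ensemble_classify(img, models):
    """Two-level vote: per-window across models, then across windows."""
    window_labels = [majority([m(win) for m in models])
                     for win in split_into_windows(img)]
    return majority(window_labels)

# Trivial stubs standing in for the four pre-trained CNNs; a real system
# would return each network's predicted BreakHis class for the window.
models = [lambda win: "ductal carcinoma", lambda win: "ductal carcinoma",
          lambda win: "lobular carcinoma", lambda win: "ductal carcinoma"]

img = np.zeros((100, 100))
print(ensemble_classify(img, models))   # prints "ductal carcinoma" (3-vs-1 vote)
```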
Affiliation(s)
- Amin Alqudah: Department of Computer Engineering, Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid, Jordan
- Ali Mohammad Alqudah: Department of Biomedical Systems and Informatics Engineering, Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid, Jordan
26
Boumaraf S, Liu X, Wan Y, Zheng Z, Ferkous C, Ma X, Li Z, Bardou D. Conventional Machine Learning versus Deep Learning for Magnification Dependent Histopathological Breast Cancer Image Classification: A Comparative Study with Visual Explanation. Diagnostics (Basel) 2021; 11:528. [PMID: 33809611 DOI: 10.3390/diagnostics11030528] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 03/03/2021] [Accepted: 03/08/2021] [Indexed: 12/14/2022] Open
Abstract
Breast cancer is a serious threat to women. Many machine learning-based computer-aided diagnosis (CAD) methods have been proposed for the early diagnosis of breast cancer based on histopathological images. Even though many such classification methods achieve high accuracy, many of them lack an explanation of the classification process. In this paper, we compare the performance of conventional machine learning (CML) against deep learning (DL)-based methods. We also provide a visual interpretation for the task of classifying breast cancer in histopathological images. For the CML-based methods, we extract a set of handcrafted features using three feature extractors and fuse them to get an image representation that acts as the input for training five classical classifiers. For the DL-based methods, we adopt a transfer learning approach with the well-known VGG-19 deep learning architecture, where its version pre-trained on the large-scale ImageNet dataset is block-wise fine-tuned on histopathological images. The evaluation of the proposed methods is carried out on the publicly available BreaKHis dataset for the magnification-dependent classification of benign and malignant breast cancer and their eight sub-classes, and a further validation on KIMIA Path960, a magnification-free histopathological dataset with 20 image classes, is also performed. After providing the classification results of the CML and DL methods, and to better explain the difference in classification performance, we visualize the learned features. For the DL-based method, we visualize the areas of interest of the best fine-tuned deep neural networks using attention maps to explain the decision-making process and improve the clinical interpretability of the proposed models. This visual explanation can improve the pathologist's trust in automated DL methods as a credible and trustworthy support tool for breast cancer diagnosis.
The achieved results show that the DL methods outperform the CML approaches: with the fine-tuned VGG-19 we reached accuracies between 94.05% and 98.13% for the binary classification and between 76.77% and 88.95% for the eight-class classification, while the CML accuracies range from 85.65% to 89.32% for the binary classification and from 63.55% to 69.69% for the eight-class classification.
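The CML pipeline described above (extract several handcrafted descriptors, fuse them by concatenation, and train a classical classifier) can be sketched like this. The two toy descriptors below are crude stand-ins for HOG/LBP-style features, and the data is synthetic:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def hist_features(img, bins=16):
    """Intensity histogram: a crude stand-in for a texture descriptor."""
    h, _ = np.histogram(img, bins=bins, range=(0, 1), density=True)
    return h

def gradient_features(img):
    """Mean/std of gradient magnitude: a crude stand-in for HOG."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    return np.array([mag.mean(), mag.std()])

def fused_representation(img):
    """Fuse the descriptors by concatenation, as the CML pipeline does."""
    return np.concatenate([hist_features(img), gradient_features(img)])

# Synthetic two-class toy data: "texture" differs only by intensity scale.
rng = np.random.default_rng(0)
imgs = [rng.uniform(size=(32, 32)) * s for s in ([0.4] * 40 + [1.0] * 40)]
y = np.array([0] * 40 + [1] * 40)
X = np.stack([fused_representation(im) for im in imgs])

# Train a classical classifier on the fused representation.
clf = make_pipeline(StandardScaler(), SVC()).fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])
print(f"held-out accuracy = {acc:.2f}")
```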
27
Nasir IM, Rashid M, Shah JH, Sharif M, Awan MYH, Alkinani MH. An Optimized Approach for Breast Cancer Classification for Histopathological Images Based on Hybrid Feature Set. Curr Med Imaging 2021; 17:136-147. [PMID: 32324518 DOI: 10.2174/1573405616666200423085826] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2019] [Revised: 03/05/2020] [Accepted: 03/24/2020] [Indexed: 11/22/2022]
Abstract
BACKGROUND Breast cancer is considered one of the most perilous diseases among females worldwide, and the number of new cases is increasing yearly. Many researchers have proposed efficient algorithms to diagnose breast cancer at early stages, which have increased efficiency and performance by utilizing features learned from gold-standard histopathological images. OBJECTIVE Most of these systems use either traditional handcrafted or deep features, which carry considerable noise and redundancy and ultimately decrease the performance of the system. METHODS A hybrid approach is proposed that fuses and optimizes the properties of handcrafted and deep features to classify breast cancer images. HOG and LBP features are serially fused with features from the pre-trained VGG19 and InceptionV3 models. Patient-level and image-level classification rates (PCR and ICR) are used to evaluate the classification performance of the proposed method. RESULTS The method concentrates on histopathological images to classify breast cancer. The performance is compared with state-of-the-art techniques, with an overall patient-level accuracy of 97.2% and image-level accuracy of 96.7%. CONCLUSION The proposed hybrid method achieves the best performance compared to previous methods, and it can be used in intelligent healthcare systems for early breast cancer detection.
Affiliation(s)
| | - Muhammad Rashid
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
| | - Jamal Hussain Shah
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
| | - Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
| | | | - Monagi H Alkinani
- College of Computer Science and Engineering, Department of Computer Science and Artificial Intelligence, University of Jeddah, Saudi Arabia
| |
28
Wu W, Mehta S, Nofallah S, Knezevich S, May CJ, Chang OH, Elmore JG, Shapiro LG. Scale-Aware Transformers for Diagnosing Melanocytic Lesions. IEEE Access 2021; 9:163526-163541. [PMID: 35211363 PMCID: PMC8865389 DOI: 10.1109/access.2021.3132958] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/02/2023]
Abstract
Diagnosing melanocytic lesions is one of the most challenging areas of pathology with extensive intra- and inter-observer variability. The gold standard for a diagnosis of invasive melanoma is the examination of histopathological whole slide skin biopsy images by an experienced dermatopathologist. Digitized whole slide images offer novel opportunities for computer programs to improve the diagnostic performance of pathologists. In order to automatically classify such images, representations that reflect the content and context of the input images are needed. In this paper, we introduce a novel self-attention-based network to learn representations from digital whole slide images of melanocytic skin lesions at multiple scales. Our model softly weighs representations from multiple scales, allowing it to discriminate between diagnosis-relevant and -irrelevant information automatically. Our experiments show that our method outperforms five other state-of-the-art whole slide image classification methods by a significant margin. Our method also achieves comparable performance to 187 practicing U.S. pathologists who interpreted the same cases in an independent study. To facilitate relevant research, full training and inference code is made publicly available at https://github.com/meredith-wenjunwu/ScATNet.
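The soft weighting over scales can be reduced to a small attention computation: score each scale's representation, softmax the scores, and take the resulting convex combination. A numpy sketch, using a random scoring vector where the real model would use learned parameters:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(z - z.max())
    return e / e.sum()

def soft_scale_fusion(scale_feats, w):
    """
    Softly weight per-scale representations.
    scale_feats: (n_scales, d), one feature vector per input scale.
    w:           (d,), a scoring vector (learned in the real model).
    Returns the fused (d,) representation and the attention weights.
    """
    scores = scale_feats @ w            # one scalar score per scale
    alpha = softmax(scores)             # soft weights over scales
    fused = alpha @ scale_feats         # convex combination of scales
    return fused, alpha

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 8))         # e.g. 3 magnifications of a slide
w = rng.normal(size=8)
fused, alpha = soft_scale_fusion(feats, w)
print(alpha.round(3), alpha.sum())
```

Because the weights come from a softmax, the model can learn to emphasize diagnosis-relevant scales and suppress the others, which is the mechanism the abstract describes.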
Affiliation(s)
- Wenjun Wu: Department of Medical Education and Biomedical Informatics, University of Washington, Seattle, WA 98195, USA
- Sachin Mehta: Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
- Shima Nofallah: Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
- Oliver H Chang: Department of Pathology, University of Washington, Seattle, WA 98195, USA
- Joann G Elmore: David Geffen School of Medicine, UCLA, Los Angeles, CA 90024, USA
- Linda G Shapiro: Department of Medical Education and Biomedical Informatics, University of Washington, Seattle, WA 98195, USA; Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA; Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA 98195, USA
29
Liu W, Juhas M, Zhang Y. Fine-Grained Breast Cancer Classification With Bilinear Convolutional Neural Networks (BCNNs). Front Genet 2020; 11:547327. [PMID: 33101377 PMCID: PMC7500315 DOI: 10.3389/fgene.2020.547327] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Accepted: 08/17/2020] [Indexed: 12/24/2022] Open
Abstract
Classification of histopathological images of cancer is challenging even for well-trained professionals, due to the fine-grained variability of the disease. Deep Convolutional Neural Networks (CNNs) have shown great potential for the classification of highly variable, fine-grained objects. In this study, we introduce a Bilinear Convolutional Neural Networks (BCNNs) based deep learning method for the fine-grained classification of breast cancer histopathological images. We evaluated our model by comparison with several deep learning algorithms for fine-grained classification. We used bilinear pooling to aggregate a large number of orderless features without taking the disease location into consideration. The experimental results on BreaKHis, a publicly available breast cancer dataset, showed that our method is highly accurate, with 99.24% and 95.95% accuracy in binary and in fine-grained classification, respectively.
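Bilinear pooling itself is a compact operation: take outer products of two feature maps at each spatial location, sum over locations, then apply the customary signed square root and L2 normalization. A numpy sketch (the feature maps here are random stand-ins for CNN activations):

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """
    Orderless bilinear pooling of two feature maps.
    feat_a: (c1, n) and feat_b: (c2, n), where n = H*W spatial positions.
    Returns a (c1*c2,) descriptor: outer products pooled over locations,
    then signed square root and L2 normalization (the standard BCNN recipe).
    """
    n = feat_a.shape[1]
    bilinear = (feat_a @ feat_b.T) / n          # (c1, c2), location-pooled
    x = bilinear.ravel()
    x = np.sign(x) * np.sqrt(np.abs(x))         # signed square root
    return x / (np.linalg.norm(x) + 1e-12)      # L2 normalization

rng = np.random.default_rng(0)
fa = rng.normal(size=(16, 49))   # e.g. a 7x7 map with 16 channels
fb = rng.normal(size=(32, 49))
desc = bilinear_pool(fa, fb)
print(desc.shape)                # (512,)
```

Because the sum over locations discards spatial order, the descriptor is "orderless," which is why the method needs no disease-location information.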
Affiliation(s)
- Weihuang Liu: College of Science, Harbin Institute of Technology, Shenzhen, China
- Mario Juhas: Faculty of Science and Medicine, University of Fribourg, Fribourg, Switzerland
- Yang Zhang: College of Science, Harbin Institute of Technology, Shenzhen, China
30
Abstract
Breast cancer is associated with the highest morbidity rates among cancer diagnoses in the world and has become a major public health issue. Early diagnosis can increase the chance of successful treatment and survival; however, it is a very challenging and time-consuming task that relies on the experience of pathologists. The automatic diagnosis of breast cancer by analyzing histopathological images plays a significant role for patients and their prognosis. Traditional feature extraction methods can only extract low-level image features, and prior knowledge is necessary to select useful features, a process heavily affected by human judgment. Deep learning techniques, by contrast, can extract high-level abstract features from images automatically. Therefore, we introduce deep learning to analyze histopathological images of breast cancer via supervised and unsupervised deep convolutional neural networks. First, we adapted the Inception_V3 and Inception_ResNet_V2 architectures to the binary and multi-class breast cancer histopathological image classification problems by utilizing transfer learning techniques. Then, to overcome the influence of the imbalanced histopathological images across subclasses, we balanced the subclasses, with Ductal Carcinoma as the baseline, by flipping images vertically and horizontally and rotating them counterclockwise by 90 and 180 degrees. Our experimental results for the supervised histopathological image classification of breast cancer, and the comparison with results from other studies, demonstrate that Inception_V3- and Inception_ResNet_V2-based histopathological image classification of breast cancer is superior to existing methods. Furthermore, these findings show that the Inception_ResNet_V2 network is so far the best deep learning architecture for diagnosing breast cancer from histopathological images.
Therefore, we used Inception_ResNet_V2 to extract features from breast cancer histopathological images for unsupervised analysis. We also constructed a new autoencoder network to transform the features extracted by Inception_ResNet_V2 into a low-dimensional space for clustering analysis of the images. The experimental results demonstrate that our proposed autoencoder network yields better clustering results than features extracted by the Inception_ResNet_V2 network alone. All of our experimental results demonstrate that Inception_ResNet_V2-based deep transfer learning provides a new means of analyzing histopathological images of breast cancer.
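The encode-then-cluster step can be sketched with scikit-learn. Because a linear autoencoder trained on reconstruction MSE spans the same subspace as PCA, PCA serves below as an explicitly labeled stand-in for the paper's deeper, nonlinear autoencoder, applied to synthetic "deep features":

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic stand-in for Inception_ResNet_V2 features: two groups in 64-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=0.0, size=(50, 64)),
               rng.normal(loc=3.0, size=(50, 64))])
y_true = np.array([0] * 50 + [1] * 50)

# Stand-in for the autoencoder bottleneck: project to a low-dim space.
Z = PCA(n_components=2, random_state=0).fit_transform(X)

# Cluster the low-dimensional codes.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)

# Clustering agreement with the true grouping, up to label permutation.
acc = max((labels == y_true).mean(), (labels != y_true).mean())
print(f"clustering agreement = {acc:.2f}")
```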
Affiliation(s)
- Juanying Xie: School of Computer Science, Shaanxi Normal University, Xi'an, China
- Ran Liu: School of Computer Science, Shaanxi Normal University, Xi'an, China
- Joseph Luttrell: School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS, United States
- Chaoyang Zhang: School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS, United States