1. Majanga V, Mnkandla E, Wang Z, Moulla DK. Automatic Blob Detection Method for Cancerous Lesions in Unsupervised Breast Histology Images. Bioengineering (Basel) 2025;12:364. PMID: 40281724; PMCID: PMC12024787; DOI: 10.3390/bioengineering12040364.
Abstract
The early detection of cancerous lesions is a challenging task given the cancer biology and the variability in tissue characteristics, rendering medical image analysis tedious and time-inefficient. In the past, conventional computer-aided diagnosis (CAD) and detection methods relied heavily on the visual inspection of medical images, which is ineffective, particularly for large and visible cancerous lesions. Conventional methods also struggle to analyze objects in large images because of overlapping/intersecting objects and the inability to resolve their boundaries/edges. Nevertheless, the early detection of breast cancer lesions is a key determinant for diagnosis and treatment. In this study, we present a deep learning-based technique for breast cancer lesion detection, namely blob detection, which automatically detects hidden and inaccessible cancerous lesions in unsupervised human breast histology images. First, the approach prepares and pre-processes the data through various augmentation methods to increase the dataset size. Second, a stain normalization technique is applied to the augmented images to separate nucleus features from tissue structures. Third, morphology operations, namely erosion, dilation, opening, and a distance transform, enhance the images by highlighting foreground and background pixels while removing overlapping regions from the highlighted nucleus objects. Image segmentation is then handled via the connected components method, which groups highlighted pixel components with similar intensity values and assigns them to their relevant labeled components (binary masks). These binary masks are then used in the active contours method for further segmentation by highlighting the boundaries/edges of regions of interest (ROIs).
Finally, a deep learning recurrent neural network (RNN) model automatically detects and extracts cancerous lesions and their edges from the histology images via the blob detection method. The proposed approach combines the connected components method and the active contours method to resolve the limitations of blob detection. Evaluated on 27,249 unsupervised, augmented human breast cancer histology images, the method achieves an F1 score of 98.82%.
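The connected-components grouping step described in this abstract can be sketched in a few lines. The following is a minimal, illustrative NumPy flood-fill labeler for a binary nucleus mask, not the authors' implementation (which also applies morphology operations, a distance transform, and active contours before RNN-based blob detection):

```python
import numpy as np

def label_components(mask):
    """Label 4-connected foreground components ("blobs") in a binary mask.

    Returns (labels, n): labels holds 0 for background and 1..n for each
    connected component, mirroring the binary-mask grouping step.
    """
    labels = np.zeros(mask.shape, dtype=int)
    n = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                n += 1
                stack = [(i, j)]  # iterative flood fill from this seed
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                            and mask[y, x] and labels[y, x] == 0):
                        labels[y, x] = n
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return labels, n

# Two separated "nuclei" in a toy mask
mask = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 1],
                 [0, 0, 0, 1]], dtype=bool)
labels, n = label_components(mask)
```

In practice, libraries such as OpenCV (`cv2.connectedComponents`) or scikit-image (`skimage.measure.label`) provide optimized versions of this operation.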
2. Fiaz A, Raza B, Faheem M, Raza A. A deep fusion-based vision transformer for breast cancer classification. Healthc Technol Lett 2024;11:471-484. PMID: 39720758; PMCID: PMC11665795; DOI: 10.1049/htl2.12093.
Abstract
Breast cancer is one of the most common causes of death in women in the modern world. Cancerous tissue detection in histopathological images relies on complex features related to tissue structure and staining properties. Convolutional neural network (CNN) models like ResNet50, Inception-V1, and VGG-16, while useful in many applications, cannot capture the patterns of cell layers and staining properties. Most previous approaches, such as stain normalization and instance-based vision transformers, either miss important features or do not process the whole image effectively. Therefore, a deep fusion-based vision transformer model (DFViT) that combines CNNs and transformers for better feature extraction is proposed. DFViT captures local and global patterns more effectively by fusing RGB and stain-normalized images. Trained and tested on several datasets, including BreakHis, breast cancer histology (BACH), and UCSC cancer genomics (UC), the model demonstrates outstanding accuracy, F1 score, precision, and recall, setting a new milestone in histopathological image analysis for diagnosing breast cancer.
Affiliation(s)
- Ahsan Fiaz
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Basit Raza
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Muhammad Faheem
- School of Technology and Innovations, University of Vaasa, Vaasa, Finland
- Aadil Raza
- Department of Physics, COMSATS University Islamabad (CUI), Islamabad, Pakistan
3. Balasubramanian AA, Al-Heejawi SMA, Singh A, Breggia A, Ahmad B, Christman R, Ryan ST, Amal S. Ensemble Deep Learning-Based Image Classification for Breast Cancer Subtype and Invasiveness Diagnosis from Whole Slide Image Histopathology. Cancers (Basel) 2024;16:2222. PMID: 38927927; PMCID: PMC11201924; DOI: 10.3390/cancers16122222.
Abstract
Cancer diagnosis and classification are pivotal for effective patient management and treatment planning. In this study, a comprehensive approach is presented utilizing ensemble deep learning techniques to analyze breast cancer histopathology images. Our experiments were based on two widely employed datasets from different centers for two different tasks: BACH and BreakHis. Within the BACH dataset, a proposed ensemble strategy incorporating VGG16 and ResNet50 architectures achieved precise classification of breast cancer histopathology images. A novel image patching technique was introduced to preprocess the high-resolution images, facilitating a focused analysis of localized regions of interest. The annotated BACH dataset encompassed 400 whole slide images (WSIs) across four distinct classes: Normal, Benign, In Situ Carcinoma, and Invasive Carcinoma. In addition, the proposed ensemble was used on the BreakHis dataset, utilizing VGG16, ResNet34, and ResNet50 models to classify microscopic images into eight distinct categories (four benign and four malignant). For both datasets, a five-fold cross-validation approach was employed for rigorous training and testing. Preliminary experimental results indicated a patch classification accuracy of 95.31% (BACH) and a WSI classification accuracy of 98.43% (BreakHis). This research contributes to ongoing endeavors in harnessing artificial intelligence to advance breast cancer diagnosis, potentially fostering improved patient outcomes and alleviating healthcare burdens.
Affiliation(s)
- Akarsh Singh
- College of Engineering, Northeastern University, Boston, MA 02115, USA
- Anne Breggia
- MaineHealth Institute for Research, Scarborough, ME 04074, USA
- Bilal Ahmad
- Maine Medical Center, Portland, ME 04102, USA
- Robert Christman
- Maine Medical Center, Portland, ME 04102, USA
- Stephen T. Ryan
- Maine Medical Center, Portland, ME 04102, USA
- Saeed Amal
- The Roux Institute, Department of Bioengineering, College of Engineering, Northeastern University, Boston, MA 02115, USA
4. Zhang L, Xu R, Zhao J. Learning technology for detection and grading of cancer tissue using tumour ultrasound images. J Xray Sci Technol 2024;32:157-171. PMID: 37424493; DOI: 10.3233/xst-230085.
Abstract
BACKGROUND: Early diagnosis of breast cancer is crucial for effective therapy. Many medical imaging modalities, including MRI, CT, and ultrasound, are used to diagnose cancer. OBJECTIVE: This study investigates the feasibility of applying transfer learning techniques to train convolutional neural networks (CNNs) to automatically diagnose breast cancer from ultrasound images. METHODS: Transfer learning techniques were used to adapt pre-trained CNNs to recognize breast cancer in ultrasound images. The models were trained and tested on an ultrasound image dataset, and each model's training and validation accuracies were assessed. RESULTS: MobileNet had the greatest accuracy during training and DenseNet121 during validation. Transfer learning algorithms can detect breast cancer in ultrasound images. CONCLUSIONS: Based on the results, transfer learning models may be useful for automated breast cancer diagnosis in ultrasound images. However, only a trained medical professional should diagnose cancer, and computational approaches should only be used to help make quick decisions.
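The transfer-learning recipe assessed here, a frozen pre-trained backbone feeding a small trainable classification head, can be illustrated with a hedged stand-in. The random vectors below merely play the role of embeddings from a frozen CNN such as MobileNet or DenseNet121 (hypothetical data, not the study's); the NumPy logistic-regression head is the part that gets trained on the new task:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for embeddings produced by a frozen pre-trained backbone
# (random but linearly separable by construction).
n_samples, dim = 200, 16
X = rng.normal(size=(n_samples, dim))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # benign/malignant labels

# Trainable head: logistic regression fit by gradient descent.
w = np.zeros(dim)
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
    grad = p - y                            # gradient of the log-loss
    w -= lr * (X.T @ grad) / n_samples
    b -= lr * grad.mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
train_acc = float(((p > 0.5) == (y > 0.5)).mean())
```

In a real setting the embeddings would come from a Keras or PyTorch backbone with its weights frozen, and only this head (or the last few layers) would be updated on the ultrasound dataset.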
Affiliation(s)
- Liyan Zhang
- Department of Ultrasound, Sunshine Union Hospital, Weifang, China
- Ruiyan Xu
- College of Health, Binzhou Polytechnical College, Binzhou, China
- Jingde Zhao
- Department of Imaging, Qingdao Hospital of Traditional Chinese Medicine (Qingdao HaiCi Hospital), Qingdao, China
5. Harrison P, Hasan R, Park K. State-of-the-Art of Breast Cancer Diagnosis in Medical Images via Convolutional Neural Networks (CNNs). J Healthc Inform Res 2023;7:387-432. PMID: 37927373; PMCID: PMC10620373; DOI: 10.1007/s41666-023-00144-3.
Abstract
Early detection of breast cancer is crucial for a better prognosis. Various studies have been conducted in which tumor lesions are detected and localized on images. This is a narrative review covering studies across five different image modalities: histopathological, mammogram, magnetic resonance imaging (MRI), ultrasound, and computed tomography (CT) images, which distinguishes it from other reviews that cover fewer modalities. The goal is to have the necessary information, such as pre-processing techniques and CNN-based diagnosis techniques for the five modalities, readily available in one place for future studies. Each modality has pros and cons; for example, mammograms may give a high false positive rate for radiographically dense breasts, the low soft-tissue contrast of ultrasound can cause false detections at an early stage, and MRI provides a three-dimensional volumetric image but is expensive and cannot be used as a routine test. Studies were manually reviewed using particular inclusion and exclusion criteria; as a result, 91 recent studies (2017 to 2022) that classify and detect tumor lesions on breast cancer images across the five image modalities were included. For histopathological images, the maximum accuracy achieved was around 99% and the maximum sensitivity 97.29%, using DenseNet, ResNet34, and ResNet50 architectures. For mammogram images, the maximum accuracy achieved was 96.52% using a customized CNN architecture. For MRI, the maximum accuracy achieved was 98.33% using a customized CNN architecture. For ultrasound, the maximum accuracy achieved was around 99% using DarkNet-53, ResNet-50, G-CNN, and VGG. For CT, the maximum sensitivity achieved was 96% using the Xception architecture.
Histopathological and ultrasound images achieved higher accuracy, around 99%, using ResNet34, ResNet50, DarkNet-53, G-CNN, and VGG compared to other modalities, for one or more of the following reasons: use of pre-trained or modified architectures with pre-processing techniques, use of two-stage CNNs, and a higher number of studies available for Artificial Intelligence (AI)/machine learning (ML) researchers to reference. One gap we found is that only a single image modality is used for CNN-based diagnosis; in the future, a multi-modality approach could be used to design a CNN architecture with higher accuracy.
Affiliation(s)
- Pratibha Harrison
- Department of Computer and Information Science, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, MA 02747, USA
- Rakib Hasan
- Department of Mechanical Engineering, Khulna University of Engineering & Technology, PhulBari Gate, Khulna 9203, Bangladesh
- Kihan Park
- Department of Mechanical Engineering, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, MA 02747, USA
6. To T, Lu T, Jorns JM, Patton M, Schmidt TG, Yen T, Yu B, Ye DH. Deep learning classification of deep ultraviolet fluorescence images toward intra-operative margin assessment in breast cancer. Front Oncol 2023;13:1179025. PMID: 37397361; PMCID: PMC10313133; DOI: 10.3389/fonc.2023.1179025.
Abstract
Background: Breast-conserving surgery is aimed at removing all cancerous cells while minimizing the loss of healthy tissue. To ensure a balance between complete resection of cancer and preservation of healthy tissue, it is necessary to assess the margins of the removed specimen during the operation. Deep ultraviolet (DUV) fluorescence scanning microscopy provides rapid whole-surface imaging (WSI) of resected tissues with significant contrast between malignant and normal/benign tissue. Intra-operative margin assessment with DUV images would benefit from an automated breast cancer classification method. Methods: Deep learning has shown promising results in breast cancer classification, but the limited DUV image dataset presents the challenge of overfitting when training a robust network. To overcome this challenge, the DUV-WSI images are split into small patches and features are extracted using a pre-trained convolutional neural network; afterward, a gradient-boosting tree is trained on these features for patch-level classification. An ensemble learning approach merges patch-level classification results and regional importance to determine the margin status, with the regional importance values calculated by an explainable artificial intelligence method. Results: The proposed method determined the margin status of DUV WSIs with 95% accuracy, and its 100% sensitivity shows that it detects malignant cases efficiently. The method could also accurately localize areas containing malignant or normal/benign tissue. Conclusion: The proposed method outperforms standard deep learning classification methods on the DUV breast surgical samples. The results suggest that it can be used to improve classification performance and identify cancerous regions more effectively.
Affiliation(s)
- Tyrell To
- Department of Electrical and Computer Engineering, Marquette University, Opus College of Engineering, Milwaukee, WI, United States
- Tongtong Lu
- Joint Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, United States
- Julie M. Jorns
- Department of Pathology, Medical College of Wisconsin, Milwaukee, WI, United States
- Mollie Patton
- Department of Pathology, Medical College of Wisconsin, Milwaukee, WI, United States
- Taly Gilat Schmidt
- Joint Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, United States
- Tina Yen
- Department of Surgery, Medical College of Wisconsin, Milwaukee, WI, United States
- Bing Yu
- Joint Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, United States
- Dong Hye Ye
- Department of Computer Science, Georgia State University, Atlanta, GA, United States
7. Parvathi S, Vaishnavi P. An efficient breast cancer detection with secured cloud storage & reliability analysis using FMEA. J Intell Fuzzy Syst 2022. DOI: 10.3233/jifs-221973.
Abstract
Breast cancer is considered one of the most dangerous cancers found in women. Around 2.3 million women worldwide are affected, and the disease is often incurable if not treated at an early stage. Early diagnosis is therefore an important consideration to save the lives of millions of women. Many machine learning models have been developed in recent years for breast cancer detection. However, currently available works focus only on improving prediction accuracy and pay little attention to providing reliable services. This work presents an efficient breast cancer detection mechanism using deep learning strategies. Factors such as breast image shape, image intensity, image regions, illumination, and contrast all influence breast cancer identification. This study offers a robust image detection process for breast cancer mammography images by considering the whole slide image (WSI). In the preprocessing stage, noise in the input image is removed using a Gaussian filter (GF). The preprocessed image then undergoes Cauchy distribution-based segmentation followed by Shearlet-based feature extraction, and discriminative features are isolated using entropy-PCA-based feature selection. Finally, the breast cancer area is accurately classified as benign or malignant using unified probability with LSTM neural network classification (UP-LSTM) on the whole slide image. The detection results are stored in the cloud with a security mechanism for further monitoring: a Bio-inspired Iterative Honey Bee (BI-IHB) encryption scheme is employed, with decryption on user request.
The reliability of the stored data is then assessed using the FMEA (failure mode and effects analysis) approach. Experimental analysis shows that the UP-LSTM classifier offers an accuracy of 99.26%, a sensitivity of 100%, and a precision of 98.59%, which is better than other state-of-the-art techniques.
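The Gaussian-filter denoising step in the pipeline above is a standard operation; a generic separable implementation (an illustrative sketch, not the authors' code) looks like this:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalized to sum to 1."""
    radius = radius if radius is not None else int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_filter(img, sigma=1.0):
    """Separable Gaussian smoothing: filter each row, then each column.

    Separability makes the 2-D blur cost O(k) per pixel instead of O(k^2).
    """
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out
```

`scipy.ndimage.gaussian_filter` provides an optimized equivalent with configurable boundary handling.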
Affiliation(s)
- S. Parvathi
- Department of Computer Applications, UCE, Anna University, BIT Campus, Trichy, India
- P. Vaishnavi
- Department of Computer Applications, UCE, Anna University, BIT Campus, Trichy, India
8. Mukhlif AA, Al-Khateeb B, Mohammed MA. An extensive review of state-of-the-art transfer learning techniques used in medical imaging: Open issues and challenges. J Intell Syst 2022. DOI: 10.1515/jisys-2022-0198.
Abstract
Deep learning techniques, built on convolutional neural networks, have shown excellent results in a variety of areas, including image processing and interpretation. However, as the depth of these networks grows, so does the demand for the large amount of labeled data required to train them. The medical field in particular suffers from a lack of images, because obtaining labeled medical images is difficult and expensive and requires specialized expertise; moreover, the labeling process is prone to errors and time-consuming. Current research has revealed transfer learning as a viable solution to this problem: it allows knowledge gained from a previous task to be transferred to improve and tackle a new problem. This study conducts a comprehensive survey of recent studies that address this problem and the most important metrics used to evaluate these methods. In addition, it identifies problems in transfer learning techniques and highlights issues with medical datasets and potential problems that can be addressed in future research. According to our review, many researchers use models pre-trained on the ImageNet dataset (VGG16, ResNet, Inception v3) in applications such as skin cancer, breast cancer, and diabetic retinopathy classification. These models warrant further investigation because they were trained on natural, non-medical images. In addition, many researchers use data augmentation techniques to expand their datasets and avoid overfitting; however, few studies have shown the effect on performance with and without data augmentation. Accuracy, recall, precision, F1 score, receiver operating characteristic curve, and area under the curve (AUC) were the most widely used measures in these studies.
Furthermore, we identified problems in the datasets for melanoma and breast cancer and suggested corresponding solutions.
Affiliation(s)
- Abdulrahman Abbas Mukhlif
- Computer Science Department, College of Computer Science and Information Technology, University of Anbar, 31001 Ramadi, Anbar, Iraq
- Belal Al-Khateeb
- Computer Science Department, College of Computer Science and Information Technology, University of Anbar, 31001 Ramadi, Anbar, Iraq
- Mazin Abed Mohammed
- Computer Science Department, College of Computer Science and Information Technology, University of Anbar, 31001 Ramadi, Anbar, Iraq
9. Bhuiyan MR, Abdullah J. Detection on Cell Cancer Using the Deep Transfer Learning and Histogram Based Image Focus Quality Assessment. Sensors (Basel) 2022;22:7007. PMID: 36146356; PMCID: PMC9504738; DOI: 10.3390/s22187007.
Abstract
In recent years, the number of studies using whole-slide images (WSIs) of histopathology slides has expanded significantly. For the development and validation of artificial intelligence (AI) systems, glass slides from retrospective cohorts, including patient follow-up data, have been digitized. It has become crucial to ensure that the quality of such resources meets the minimum requirements for future AI development. The need for automated quality control is one of the obstacles preventing the clinical implementation of digital pathology workflows. Because scanners can fail to determine the focus of an image accurately, the resulting visual blur can render a scanned slide useless. Moreover, when scanned at a resolution of 20× or higher, the resulting image size of a scanned slide is often enormous. Therefore, for digital pathology to be clinically relevant, computational algorithms must rapidly and reliably measure an image's focus quality and decide whether it requires re-scanning. We propose a metric for evaluating the quality of digital pathology images that uses a sum of even-derivative filter bases to generate a human-visual-system-like kernel, described as the inverse of the lens's point spread function. This kernel is applied to a digital pathology image to weight the high-frequency image content degraded by the scanner's optics and assess patch-level focus quality. Through several studies, we demonstrate that our technique correlates with ground-truth z-level data better than previous methods and is computationally efficient. Using deep learning techniques, the proposed system is able to identify positive and negative cancer cells in images.
We further extend our technique to create a local slide-level focus-quality heatmap, which can be utilized for automated slide quality control, and we illustrate the method's value in clinical scan quality control by comparing it to subjective slide quality ratings. The proposed method, GoogleNet, VGGNet, and ResNet had accuracy values of 98.5%, 94.5%, 94.0%, and 95.0%, respectively.
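The focus measure above builds a kernel from even-derivative filter bases; as a simplified illustration of the same idea (high-frequency energy drops as focus degrades), the widely used variance-of-Laplacian proxy scores a sharp patch above a smoothed copy of itself. This sketch is an illustrative stand-in, not the authors' human-visual-system kernel:

```python
import numpy as np

def laplacian_sharpness(img):
    """Variance of the discrete Laplacian response over interior pixels.

    A simple patch-level focus score: in-focus patches retain more
    high-frequency energy, so their Laplacian response varies more.
    """
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

# A high-frequency test pattern and a box-blurred (defocused) copy.
x = np.linspace(0.0, 8.0 * np.pi, 64)
sharp = np.tile(np.sin(x), (64, 1))
blurred = (sharp[:, :-2] + sharp[:, 1:-1] + sharp[:, 2:]) / 3.0
```

Thresholding such a score per patch, as the paper does with its own metric, yields the slide-level focus-quality heatmap used for re-scan decisions.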
10. A Color-Texture-Based Deep Neural Network Technique to Detect Face Spoofing Attacks. Cybernetics and Information Technologies 2022. DOI: 10.2478/cait-2022-0032.
Abstract
Given the threat of face spoofing attacks, adequate protection of human identity through the face has become a significant challenge globally. Face spoofing is the act of presenting a recaptured frame to the verification device to gain illegal access on behalf of a legitimate person, with or without their consent. Several methods have been proposed to detect face spoofing attacks over the last decade. However, these methods only consider luminance information, resulting in poor discrimination between spoofed and genuine faces. This article proposes a practical approach combining Local Binary Patterns (LBP) and convolutional neural network-based transfer learning models to extract low-level and high-level features. It analyzes three color spaces (RGB, HSV, and YCrCb) to understand the impact of color distribution on real and spoofed faces using the NUAA benchmark dataset. In-depth analysis of the experimental results and comparison with existing approaches show the superiority and effectiveness of the proposed models.
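Local Binary Patterns, the hand-crafted half of the feature extractor described above, are straightforward to compute. Below is a minimal NumPy sketch of the basic 8-neighbour LBP on a single channel (illustrative only; the article applies LBP per channel across RGB, HSV, and YCrCb):

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour Local Binary Pattern for a 2-D grayscale array.

    Each interior pixel receives an 8-bit code: bit k is set when the
    k-th neighbour (clockwise from top-left) is >= the centre value.
    """
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def lbp_histogram(img):
    """Normalized 256-bin histogram of LBP codes: a texture descriptor
    that can be concatenated with CNN features for classification."""
    hist, _ = np.histogram(lbp_codes(img), bins=256, range=(0, 256))
    return hist / hist.sum()
```

`skimage.feature.local_binary_pattern` implements this (and rotation-invariant variants) efficiently.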
11. Ahmad S, Ullah T, Ahmad I, AL-Sharabi A, Ullah K, Khan RA, Rasheed S, Ullah I, Uddin MN, Ali MS. A Novel Hybrid Deep Learning Model for Metastatic Cancer Detection. Comput Intell Neurosci 2022;2022:8141530. PMID: 35785076; PMCID: PMC9249449; DOI: 10.1155/2022/8141530.
Abstract
Cancer is a heterogeneous disease with various subtypes that abruptly destroys the body's normal cells. It is therefore essential to detect each distinct type of cancer and establish its prognosis early, since this helps patients receive treatment at an early stage; cancer patients must also be divided into high- and low-risk groups. Efficient cancer detection is frequently a time-consuming and exhausting task with a high possibility of pathologist error. Previous studies employed data mining and machine learning (ML) techniques to identify cancer, but these strategies rely on handcrafted feature extraction, which can result in incorrect classification. In contrast, deep learning (DL) is robust in feature extraction and has recently been widely used for classification and detection. This research implemented a novel hybrid AlexNet-gated recurrent unit (AlexNet-GRU) model for lymph node (LN) breast cancer detection and classification, using the well-known Kaggle PCam dataset to classify LN cancer samples. Three models were tested and compared: convolutional neural network GRU (CNN-GRU), CNN long short-term memory (CNN-LSTM), and the proposed AlexNet-GRU. The experimental results indicated performance metrics (accuracy, precision, sensitivity, and specificity of 99.50%, 98.10%, 98.90%, and 97.50%) showing that the proposed model can reduce the pathologist errors that occur during diagnosis and performs significantly better than the CNN-GRU and CNN-LSTM models. The proposed model was also compared with other recent ML/DL algorithms, revealing that AlexNet-GRU is computationally efficient and superior to state-of-the-art methods for LN breast cancer detection and classification.
Affiliation(s)
- Shahab Ahmad
- School of Management Science and Engineering, Chongqing University of Post and Telecommunication, Chongqing 400065, China
- Tahir Ullah
- Department of Electronics and Information Engineering, Xi'an Jiaotong University, Xi'an, China
- Ijaz Ahmad
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Kalim Ullah
- Department of Zoology, Kohat University of Science and Technology, Kohat 26000, Pakistan
- Rehan Ali Khan
- Department of Electrical Engineering, University of Science and Technology, Bannu 28100, Pakistan
- Saim Rasheed
- Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Inam Ullah
- College of Internet of Things (IoT) Engineering, Hohai University (HHU), Changzhou Campus, Nanjing 213022, China
- Md. Nasir Uddin
- Communication Research Laboratory, Department of Information and Communication Technology, Islamic University, Kushtia 7003, Bangladesh
- Md. Sadek Ali
- Communication Research Laboratory, Department of Information and Communication Technology, Islamic University, Kushtia 7003, Bangladesh
12. BM-Net: CNN-Based MobileNet-V3 and Bilinear Structure for Breast Cancer Detection in Whole Slide Images. Bioengineering (Basel) 2022;9:261. PMID: 35735504; PMCID: PMC9220285; DOI: 10.3390/bioengineering9060261.
Abstract
Breast cancer is one of the most common types of cancer and a leading cause of cancer-related death. Diagnosis of breast cancer is based on the evaluation of pathology slides, which, in the era of digital pathology, can be converted into digital whole slide images (WSIs) for further analysis. However, due to their sheer size, diagnosis from digital WSIs is time consuming and challenging. In this study, we present a lightweight architecture consisting of a bilinear structure and a MobileNet-V3 network, bilinear MobileNet-V3 (BM-Net), to analyze breast cancer WSIs. We utilized the WSI dataset from the ICIAR2018 Grand Challenge on Breast Cancer Histology Images (BACH) competition, which contains four classes: normal, benign, in situ carcinoma, and invasive carcinoma. We adopted data augmentation techniques to increase diversity and utilized focal loss to mitigate class imbalance. We achieved high performance, with 0.88 accuracy in patch classification and an average score of 0.71, surpassing state-of-the-art models. BM-Net shows great potential for detecting cancer in WSIs and is a promising clinical tool.
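Focal loss, used above to counter class imbalance, down-weights well-classified examples so training concentrates on hard (often minority-class) patches. Here is a hedged NumPy sketch of the standard binary form; the gamma and alpha defaults below are the conventional ones, not necessarily the paper's settings:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).

    p: predicted probability of class 1; y: 0/1 labels.
    The (1 - p_t)^gamma factor shrinks the loss of easy examples.
    """
    p = np.clip(p, 1e-7, 1.0 - 1e-7)
    pt = np.where(y == 1, p, 1.0 - p)        # probability of the true class
    at = np.where(y == 1, alpha, 1.0 - alpha)
    return float((-at * (1.0 - pt) ** gamma * np.log(pt)).mean())
```

With gamma = 0 and alpha = 0.5 this reduces to half the ordinary cross-entropy; raising gamma progressively suppresses the contribution of confident, correct predictions.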
13. Mahmood T, Li J, Pei Y, Akhtar F, Rehman MU, Wasti SH. Breast lesions classifications of mammographic images using a deep convolutional neural network-based approach. PLoS One 2022;17:e0263126. PMID: 35085352; PMCID: PMC8794221; DOI: 10.1371/journal.pone.0263126.
Abstract
Breast cancer is one of the deadliest illnesses, with a high fatality rate among women globally. Its detection requires accurate mammography interpretation and analysis, which is challenging for radiologists owing to the intricate anatomy of the breast and low image quality. Advances in deep learning-based models have significantly improved the detection, localization, risk assessment, and categorization of breast lesions. This study proposes a novel deep learning-based convolutional neural network (ConvNet) that significantly reduces human error in diagnosing breast malignancy tissues. Our methodology is most effective in eliciting task-specific features, as feature learning is coupled with the classification task to achieve higher performance in automatically classifying suspicious regions in mammograms as benign or malignant. To evaluate the model’s validity, 322 raw mammogram images from the Mammographic Image Analysis Society (MIAS) dataset and 580 from a private dataset were obtained to extract in-depth features, the intensity of information, and the high likelihood of malignancy. Both datasets were substantially improved through preprocessing, synthetic data augmentation, and transfer learning techniques to capture the distinctive characteristics of breast tumors. The experimental findings indicate that the proposed approach achieved a remarkable training accuracy of 0.98, test accuracy of 0.97, high sensitivity of 0.99, and an AUC of 0.99 in classifying breast masses on mammograms. The developed model achieved promising performance that helps clinicians in the speedy computation of mammography, diagnosis of breast masses, treatment planning, and follow-up of disease progression. Moreover, it has immense potential over retrospective approaches in consistent feature extraction and precise lesion classification.
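The synthetic data augmentation step above enlarges a mammogram dataset with label-preserving transforms. A minimal pure-Python sketch of two common ones, horizontal flip and 90° rotation, on a patch stored as a nested list (illustrative only; the paper does not enumerate its exact transform set):

```python
def hflip(img):
    """Mirror a 2-D image left-to-right."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate a 2-D image 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

def augment(img):
    """Return the original patch plus flipped and rotated copies."""
    return [img, hflip(img), rot90(img), rot90(rot90(img))]
```

Each transform preserves the benign/malignant label, so one labeled patch yields several training samples.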
Affiliation(s)
- Tariq Mahmood
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Division of Science and Technology, Department of Information Sciences, University of Education, Lahore, Pakistan
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing, China
- Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu, Fukushima, Japan
- Faheem Akhtar
- Department of Computer Science, Sukkur IBA University, Sukkur, Pakistan
- Mujeeb Ur Rehman
- Radiology Department, Continental Medical College and Hayat Memorial Teaching Hospital, Lahore, Pakistan
- Shahbaz Hassan Wasti
- Division of Science and Technology, Department of Information Sciences, University of Education, Lahore, Pakistan
|
14
|
Zhong Y, Piao Y, Zhang G. Dilated and soft attention-guided convolutional neural network for breast cancer histology images classification. Microsc Res Tech 2021; 85:1248-1257. [PMID: 34859543 DOI: 10.1002/jemt.23991] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2021] [Revised: 10/03/2021] [Accepted: 10/18/2021] [Indexed: 01/22/2023]
Abstract
Breast cancer is one of the most common types of cancer in women, and histopathological imaging is considered the gold standard for its diagnosis. However, the great complexity of histopathological images and the considerable workload make this work extremely time-consuming, and the results may be affected by the subjectivity of the pathologist. Therefore, the development of an accurate, automated method for analysis of histopathological images is critical to this field. In this article, we propose a deep learning method guided by the attention mechanism for fast and effective classification of haematoxylin and eosin-stained breast biopsy images. First, this method takes advantage of DenseNet and uses the feature map's information. Second, we introduce dilated convolution to produce a larger receptive field. Finally, spatial attention and channel attention are used to guide the extraction of the most useful visual features. With the use of fivefold cross-validation, the best model obtained an accuracy of 96.47% on the BACH2018 dataset. We also evaluated our method on other datasets, and the experimental results demonstrated that our model has reliable performance. This study indicates that our histopathological image classifier with a soft attention-guided deep learning model for breast cancer shows significantly better results than the latest methods. It has great potential as an effective tool for automatic evaluation of digital histopathological microscopic images for computer-aided diagnosis.
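The dilated convolution introduced above enlarges the receptive field without extra parameters by inserting gaps between kernel taps. A 1-D pure-Python sketch of the indexing idea (the article applies 2-D dilated convolutions inside a DenseNet backbone; this shows only the core mechanism):

```python
def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1-D convolution (cross-correlation) with dilated kernel taps.

    A kernel of size k with dilation d covers a receptive field of
    (k - 1) * d + 1 input positions, so larger d sees more context
    with the same number of weights.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of one output
    return [
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ]
```

With dilation 1 this is an ordinary convolution; with dilation 2 the same 3-tap kernel spans 5 input positions.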
Affiliation(s)
- Yutong Zhong
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, China
- Yan Piao
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, China
- Guohui Zhang
- Pneumoconiosis Diagnosis and Treatment Center, Occupational Preventive and Treatment Hospital in Jilin Province, Changchun, China
|
15
|
Morilla I. Repairing the human with artificial intelligence in oncology. Artif Intell Cancer 2021; 2:60-68. [DOI: 10.35713/aic.v2.i5.60] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/15/2021] [Revised: 10/26/2021] [Accepted: 10/27/2021] [Indexed: 02/06/2023] Open
Affiliation(s)
- Ian Morilla
- Laboratoire Analyse, Géométrie et Applications - Institut Galilée, Sorbonne Paris Nord University, Paris 75006, France
|
16
|
Ahmad F, Ghani Khan MU, Javed K. Deep learning model for distinguishing novel coronavirus from other chest related infections in X-ray images. Comput Biol Med 2021; 134:104401. [PMID: 34010794 PMCID: PMC8058056 DOI: 10.1016/j.compbiomed.2021.104401] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2020] [Revised: 04/06/2021] [Accepted: 04/11/2021] [Indexed: 12/16/2022]
Abstract
The novel coronavirus is deadly to humans and animals. The ease of its dispersion, coupled with its tremendous capacity to cause illness and death in infected people, makes it a risk to society. The chest X-ray is a conventional but hard-to-interpret radiographic test for distinguishing coronavirus infection from other related infections at initial diagnosis. It bears a considerable amount of information on physiological and anatomical features, yet extracting the relevant information can be challenging even for a professional radiologist. In this regard, deep learning models can help deliver swift, accurate, and reliable outcomes. Existing datasets are small and suffer from class imbalance. In this paper, we prepare a relatively larger and better-balanced dataset than those available. Furthermore, we analyze deep learning models, namely AlexNet, SqueezeNet, DenseNet201, MobileNetV2, and InceptionV3, under numerous variations such as training the models from scratch, fine-tuning without pre-trained weights, fine-tuning along with updating pre-trained weights of all layers, and fine-tuning with pre-trained weights along with applying augmentation. Our results show that fine-tuning with augmentation generates the best results in pre-trained models. Finally, we made architectural adjustments in the MobileNetV2 and InceptionV3 models to learn more intricate features, which are then merged in our proposed ensemble model. The performance of our model is statistically analyzed against the other models using four performance metrics with a paired two-sided t-test on five different splits of the training and test sets of our dataset. We find that it is statistically better than its competing methods on all four metrics. Thus, computer-aided classification based on the proposed model can assist radiologists in identifying coronavirus from other related infections in chest X-rays with higher accuracy, enabling a reliable and speedy diagnosis, thereby saving valuable lives and mitigating the adverse socioeconomic impact on our community.
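The paired two-sided t-test mentioned above compares two models evaluated on the same splits. A self-contained sketch computing the t statistic from per-split metric differences (a p-value would additionally require the t distribution with n − 1 degrees of freedom, e.g. from scipy.stats; the scores below are made up for illustration):

```python
import math

def paired_t_statistic(a, b):
    """t statistic of a paired t-test between matched score lists a and b.

    t = mean(d) / (s_d / sqrt(n)), where d_i = a_i - b_i and s_d is the
    sample standard deviation of the differences (n - 1 in the denominator).
    """
    n = len(a)
    d = [x - y for x, y in zip(a, b)]
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)
```

Pairing on the same splits removes split-to-split variance, so even a small, consistent metric gap yields a large t value.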
Affiliation(s)
- Fareed Ahmad
- Department of Computer Science, University of Engineering and Technology, Lahore, Pakistan
- Quality Operations Laboratory, Institute of Microbiology, University of Veterinary and Animal Sciences, Lahore, Pakistan
- Kashif Javed
- Department of Electrical Engineering, University of Engineering and Technology, Lahore, Pakistan
|
17
|
Li J, Wang P, Zhou Y, Liang H, Luan K. Application of Deep Transfer Learning to the Classification of Colorectal Cancer Lymph Node Metastasis. J Imaging Sci Technol 2021. [DOI: 10.2352/j.imagingsci.technol.2021.65.3.030401] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/01/2022]
|
18
|
Munien C, Viriri S. Classification of Hematoxylin and Eosin-Stained Breast Cancer Histology Microscopy Images Using Transfer Learning with EfficientNets. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:5580914. [PMID: 33897774 PMCID: PMC8052174 DOI: 10.1155/2021/5580914] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/23/2021] [Revised: 03/15/2021] [Accepted: 03/29/2021] [Indexed: 12/19/2022]
Abstract
Breast cancer is a fatal disease and a leading cause of death in women worldwide. The process of diagnosis based on biopsy tissue is nontrivial, time-consuming, and prone to human error, and there may be conflict about the final diagnosis due to interobserver variability. Computer-aided diagnosis systems have been designed and implemented to combat these issues, contributing significantly to increased efficiency and accuracy and reduced cost of diagnosis; the better such systems perform, the more reliable their diagnoses become. This research investigates the application of the EfficientNet architecture to the classification of hematoxylin and eosin-stained breast cancer histology images provided by the ICIAR2018 dataset. Specifically, seven EfficientNets were fine-tuned and evaluated on their ability to classify images into four classes: normal, benign, in situ carcinoma, and invasive carcinoma. Moreover, two standard stain normalization techniques, Reinhard and Macenko, were compared to measure the impact of stain normalization on performance. The outcome of this approach reveals that the EfficientNet-B2 model yielded an accuracy and sensitivity of 98.33% with the Reinhard stain normalization method on the training images, and an accuracy and sensitivity of 96.67% with the Macenko stain normalization method. These results indicate that transferring generic features from natural images to medical images through fine-tuning EfficientNets can achieve satisfactory performance.
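Reinhard stain normalization, one of the two techniques compared above, matches the per-channel mean and standard deviation of a source image to those of a target image. The full method operates in the Lαβ colour space after conversion from RGB; the sketch below shows the core statistic-matching step on a single channel only:

```python
import math

def reinhard_match_channel(src, tgt_mean, tgt_std):
    """Shift and scale one colour channel so its mean/std match a target.

    src: list of pixel values for one channel; tgt_mean/tgt_std: target
    statistics. Reinhard-style normalization applies this per channel.
    """
    n = len(src)
    mean = sum(src) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in src) / n)
    if std == 0:                       # flat channel: only shift the mean
        return [tgt_mean for _ in src]
    return [(v - mean) / std * tgt_std + tgt_mean for v in src]
```

After normalization, slides scanned with different staining protocols share the same colour statistics, which stabilizes downstream classification.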
Affiliation(s)
- Chanaleä Munien
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 217013433, South Africa
- Serestina Viriri
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 217013433, South Africa
|
19
|
Jiang J, Li X, Wang J, Lim SJ. Breast Cancer Recognition Algorithm Based on Convolution Neural Network. JOURNAL OF MEDICAL IMAGING AND HEALTH INFORMATICS 2021. [DOI: 10.1166/jmihi.2021.3315] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Most preliminary attempts to apply deep learning to medical images focus on feeding medical images, in place of natural images, into convolutional neural networks. In doing so, however, the particularity of medical images and the fundamental differences between the two types of images are ignored. These differences make it impossible to directly use network architectures developed for natural images. This paper therefore uses medical datasets for transfer learning. Moreover, a reason deep learning is difficult to apply in medicine is that its lack of explainability can easily lead to medical disputes. In this paper, the deep learning model is explained and implemented using the theory of fuzzy logic. We test the accuracy and stability of the original model and the new model in classification prediction. Our results show that the model implemented with fuzzy logic improves accuracy and makes predictions more stable as well.
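Fuzzy logic, as used above, replaces hard class decisions with graded membership values, which is what makes the model's behaviour interpretable. Its basic building block is a membership function; a common triangular one is sketched below (the shape and parameters are illustrative, as the abstract does not specify them):

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a, rising to 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```

A crisp network score can then be mapped to overlapping linguistic categories ("low", "medium", "high" malignancy likelihood), each with a degree of membership rather than a single hard label.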
Affiliation(s)
- Jiafu Jiang
- Institute of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410076, China
- Xinpei Li
- Institute of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410076, China
- Jin Wang
- Institute of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410076, China
- Se-Jung Lim
- Liberal Arts & Convergence Studies, Honam University, Gwangju 62399, Republic of Korea
|
20
|
BFCNet: a CNN for diagnosis of ductal carcinoma in breast from cytology images. Pattern Anal Appl 2021. [DOI: 10.1007/s10044-021-00962-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
21
|
Li J, Wang P, Zhou Y, Liang H, Luan K. Different Machine Learning and Deep Learning Methods for the Classification of Colorectal Cancer Lymph Node Metastasis Images. Front Bioeng Biotechnol 2021; 8:620257. [PMID: 33520971 PMCID: PMC7841386 DOI: 10.3389/fbioe.2020.620257] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2020] [Accepted: 12/14/2020] [Indexed: 12/14/2022] Open
Abstract
The classification of colorectal cancer (CRC) lymph node metastasis (LNM) is a vital clinical issue related to recurrence and the design of treatment plans. However, it remains unclear which method is effective in automatically classifying CRC LNM. Hence, this study compared the performance of existing classification methods, i.e., machine learning, deep learning, and deep transfer learning, to identify the most effective one. A total of 3,364 samples (1,646 positive and 1,718 negative) from Harbin Medical University Cancer Hospital were collected. All patches were manually segmented by experienced radiologists, with the patch size determined by the lesion to be cropped. Two classes of global features and one class of local features were extracted from the patches. These features were used in eight machine learning algorithms, while the other models used the raw data. Experiment results showed that deep transfer learning was the most effective method, with an accuracy of 0.7583 and an area under the curve of 0.7941. Furthermore, to improve the interpretability of the results from the deep learning and deep transfer learning models, classification heat maps were used, which display the regions of feature extraction superposed on the raw data. The research findings are expected to promote the use of effective methods in CRC LNM detection and hence facilitate the design of proper treatment plans.
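The reported area under the curve (0.7941) can be computed directly from model scores without plotting the ROC, via the rank (Mann-Whitney) formulation. A self-contained sketch with made-up scores and labels:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    Equals the probability that a randomly chosen positive sample
    outscores a randomly chosen negative one, counting ties as one half.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))
```

Unlike accuracy, this measure is threshold-free, which is why it is reported alongside accuracy for the metastasis classifiers above.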
Affiliation(s)
- Jin Li
- College of Intelligent System Science and Engineering, Harbin Engineering University, Harbin, China
- Peng Wang
- College of Intelligent System Science and Engineering, Harbin Engineering University, Harbin, China
- Yang Zhou
- College of Intelligent System Science and Engineering, Harbin Engineering University, Harbin, China
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, China
- Hong Liang
- College of Intelligent System Science and Engineering, Harbin Engineering University, Harbin, China
- Kuan Luan
- College of Intelligent System Science and Engineering, Harbin Engineering University, Harbin, China
|
22
|
Ahmad F, Farooq A, Ghani MU. Deep Ensemble Model for Classification of Novel Coronavirus in Chest X-Ray Images. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:8890226. [PMID: 33488691 PMCID: PMC7805527 DOI: 10.1155/2021/8890226] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Revised: 11/22/2020] [Accepted: 12/04/2020] [Indexed: 12/15/2022]
Abstract
The novel coronavirus, SARS-CoV-2, can be deadly to people, causing COVID-19. The ease of its propagation, coupled with its high capacity for illness and death in infected individuals, makes it a hazard to the community. Chest X-rays are among the most common but most difficult to interpret radiographic examinations for early diagnosis of coronavirus-related infections. They carry a considerable amount of anatomical and physiological information, but it is sometimes difficult even for an expert radiologist to derive the related information they contain. Automatic classification using deep learning models can help in assessing these infections swiftly. Deep CNN models, namely MobileNet, ResNet50, and InceptionV3, were applied with different variations, including training the model from scratch, fine-tuning along with adjusting learned weights of all layers, and fine-tuning with learned weights along with augmentation. Fine-tuning with augmentation produced the best results in pretrained models. Of these, the two best-performing models (MobileNet and InceptionV3), selected for ensemble learning, produced accuracy and F-score of 95.18% and 90.34%, and 95.75% and 91.47%, respectively. The proposed hybrid ensemble model, generated by merging these deep models, produced a classification accuracy and F-score of 96.49% and 92.97%. For the separately held-out test dataset, the model generated an accuracy and F-score of 94.19% and 88.64%. Automatic classification using deep ensemble learning can help radiologists correctly identify coronavirus-related infections in chest X-rays. Consequently, this swift, computer-aided diagnosis can help save precious human lives and minimize the social and economic impact on society.
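One common way to merge two classifiers, as with the MobileNet and InceptionV3 pair above, is probability averaging (soft voting). The sketch below is a generic stand-in for that idea, not the paper's exact merging layer, which the abstract does not detail:

```python
def soft_vote(prob_lists):
    """Average class-probability vectors from several models, then argmax.

    prob_lists: one probability vector per model, all over the same classes.
    Returns (predicted_class_index, averaged_probabilities).
    """
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg
```

Averaging lets a confident model outvote an uncertain one, which is typically how an ensemble beats each member alone.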
Affiliation(s)
- Fareed Ahmad
- Department of Computer Science, University of Engineering and Technology, Lahore 54890, Pakistan
- Quality Operations Laboratory, Institute of Microbiology, University of Veterinary and Animal Sciences, Lahore, Pakistan
- Amjad Farooq
- Department of Computer Science, University of Engineering and Technology, Lahore 54890, Pakistan
- Muhammad Usman Ghani
- Department of Computer Science, University of Engineering and Technology, Lahore 54890, Pakistan
|
24
|
Chen S, Stromer D, Alabdalrahim HA, Schwab S, Weih M, Maier A. Automatic dementia screening and scoring by applying deep learning on clock-drawing tests. Sci Rep 2020; 10:20854. [PMID: 33257744 PMCID: PMC7704614 DOI: 10.1038/s41598-020-74710-9] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2020] [Accepted: 10/05/2020] [Indexed: 11/28/2022] Open
Abstract
Dementia is one of the most common neurological syndromes in the world. Usually, diagnoses are made based on paper-and-pencil tests and scored depending on the personal judgment of experts. This technique can introduce errors and has high inter-rater variability. To overcome these issues, we present an automatic assessment of the widely used paper-based clock-drawing test by means of deep neural networks. Our study includes a comparison of three modern architectures: VGG16, ResNet-152, and DenseNet-121. The dataset consisted of 1315 individuals. To deal with the limited amount of data, which also included several dementia types, we used optimization strategies for training the neural networks. The outcome of our work is a standardized, digital estimation of the dementia screening result and severity level for an individual. We achieved accuracies of 96.65% for screening and up to 98.54% for scoring, surpassing the reported state-of-the-art as well as human accuracies. Due to the digital format, the paper-based test can simply be scanned with a mobile device and then evaluated even in areas with staff shortages or no available clinical experts.
Affiliation(s)
- Shuqing Chen
- Pattern Recognition Lab, Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
- Daniel Stromer
- Pattern Recognition Lab, Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
- Harb Alnasser Alabdalrahim
- Pattern Recognition Lab, Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
- Stefan Schwab
- Department of Neurology, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Markus Weih
- Department of Neurology, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Andreas Maier
- Pattern Recognition Lab, Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
|
25
|
Kumar D, Batra U. An ensemble algorithm for breast cancer histopathology image classification. JOURNAL OF STATISTICS & MANAGEMENT SYSTEMS 2020. [DOI: 10.1080/09720510.2020.1818451] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Affiliation(s)
- Deepika Kumar
- School of Engineering, G. D. Goenka University, Gurugram 122103 Haryana, India
- Usha Batra
- School of Engineering, G. D. Goenka University, Gurugram 122103 Haryana, India
|
26
|
Vizcarra J, Place R, Tong L, Gutman D, Wang MD. Fusion in Breast Cancer Histology Classification. ACM-BCB: ACM Conference on Bioinformatics, Computational Biology and Biomedicine 2020; 2019:485-493. [PMID: 32637941 DOI: 10.1145/3307339.3342166] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Breast cancer is a deadly disease that affects millions of women worldwide. The International Conference on Image Analysis and Recognition in 2018 presented the BreAst Cancer Histology (ICIAR2018 BACH) image data challenge, which calls for computer tools to assist pathologists and doctors in the clinical diagnosis of breast cancer subtypes. Using the BACH dataset, we have developed an image classification pipeline that combines both a shallow learner (support vector machine) and a deep learner (convolutional neural network). The shallow and deep learners achieved moderate accuracies of 79% and 81% individually. When integrated by fusion algorithms, the system outperformed either individual learner, with a highest accuracy of 92%. Fusion thus shows great potential for improving clinical decision support.
Affiliation(s)
- Juan Vizcarra
- Dept. of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332
- Ryan Place
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA 30332
- Li Tong
- Dept. of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332
- David Gutman
- Department of Neurology, Emory University, Atlanta, Georgia, United States
- May D Wang
- Dept. of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332
|
27
|
Carvalho ED, Filho AOC, Silva RRV, Araújo FHD, Diniz JOB, Silva AC, Paiva AC, Gattass M. Breast cancer diagnosis from histopathological images using textural features and CBIR. Artif Intell Med 2020; 105:101845. [PMID: 32505426 DOI: 10.1016/j.artmed.2020.101845] [Citation(s) in RCA: 42] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2019] [Revised: 02/27/2020] [Accepted: 03/12/2020] [Indexed: 12/30/2022]
Abstract
Currently, breast cancer diagnosis is an extensively researched topic. An effective method to diagnose breast cancer is to use histopathological images. However, extracting features from these images is a challenging task. Thus, we propose a method that uses phylogenetic diversity indexes to characterize images for creating a model to classify histopathological breast images into four classes - invasive carcinoma, in situ carcinoma, normal tissue, and benign lesion. The classifiers used were the most robust ones according to the existing literature: XGBoost, random forest, multilayer perceptron, and support vector machine. Moreover, we performed content-based image retrieval to confirm the classification results and suggest a ranking for sets of images that were not labeled. The results obtained were considerably robust and proved to be effective for the composition of a CADx system to help specialists at large medical centers.
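The content-based image retrieval step above ranks unlabeled images by feature similarity to a query. A minimal sketch with cosine similarity over feature vectors (the paper uses phylogenetic-diversity texture features; the vectors and class names below are made up for illustration):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query, database, k=3):
    """Return the names of the k database entries most similar to the query.

    database: list of (name, feature_vector) pairs.
    """
    ranked = sorted(database, key=lambda item: cosine(query, item[1]), reverse=True)
    return [name for name, _ in ranked[:k]]
```

Returning the nearest labeled images lets a specialist sanity-check a classifier's decision and suggests a ranking for images that were never labeled.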
Affiliation(s)
- João O B Diniz
- Federal Institute of Education, Science and Technology of Maranhão - IFMA, Grajaú, MA, Brazil; Federal University of Maranhão - UFMA, São Luís, MA, Brazil
- Anselmo C Paiva
- Federal University of Maranhão - UFMA, São Luís, MA, Brazil
- Marcelo Gattass
- Pontifical Catholic University of Rio de Janeiro - PUC - Rio, Rio de Janeiro, RJ, Brazil
|
28
|
Efficient Classification of White Blood Cell Leukemia with Improved Swarm Optimization of Deep Features. Sci Rep 2020; 10:2536. [PMID: 32054876 PMCID: PMC7018965 DOI: 10.1038/s41598-020-59215-9] [Citation(s) in RCA: 65] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2019] [Accepted: 01/27/2020] [Indexed: 11/09/2022] Open
Abstract
White Blood Cell (WBC) Leukaemia is caused by excessive production of leukocytes in the bone marrow, and image-based detection of malignant WBCs is important for its detection. Convolutional Neural Networks (CNNs) present the current state-of-the-art for this type of image classification, but their computational cost for training and deployment can be high. We here present an improved hybrid approach for efficient classification of WBC Leukemia. We first extract features from WBC images using VGGNet, a powerful CNN architecture, pre-trained on ImageNet. The extracted features are then filtered using a statistically enhanced Salp Swarm Algorithm (SESSA). This bio-inspired optimization algorithm selects the most relevant features and removes highly correlated and noisy features. We applied the proposed approach to two public WBC Leukemia reference datasets and achieve both high accuracy and reduced computational complexity. The SESSA optimization selected only 1 K out of 25 K features extracted with VGGNet, while improving accuracy at the same time. The results are among the best achieved on these datasets and outperform several convolutional network models. We expect that the combination of CNN feature extraction and SESSA feature optimization could be useful for many other image classification tasks.
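SESSA itself is a swarm optimizer, but the "removes highly correlated and noisy features" effect it achieves can be illustrated with a simple greedy Pearson-correlation filter (a stand-in for illustration, not the SESSA algorithm):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def drop_correlated(features, threshold=0.95):
    """Greedily keep columns whose |r| with every kept column is < threshold.

    features: list of feature columns (one list of sample values per feature).
    Returns the indices of the kept columns.
    """
    kept = []
    for i, col in enumerate(features):
        if all(abs(pearson(col, features[j])) < threshold for j in kept):
            kept.append(i)
    return kept
```

Shrinking 25 K redundant CNN features to a compact subset, as the abstract reports, both speeds up the downstream classifier and can improve its accuracy.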
|
29
|
Breast cancer histopathological image classification using a hybrid deep neural network. Methods 2020; 173:52-60. [DOI: 10.1016/j.ymeth.2019.06.014] [Citation(s) in RCA: 102] [Impact Index Per Article: 20.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2019] [Revised: 06/11/2019] [Accepted: 06/13/2019] [Indexed: 12/21/2022] Open
|
30
|
Sari CT, Gunduz-Demir C. Unsupervised Feature Extraction via Deep Learning for Histopathological Classification of Colon Tissue Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:1139-1149. [PMID: 30403624 DOI: 10.1109/tmi.2018.2879369] [Citation(s) in RCA: 52] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
Histopathological examination is today's gold standard for cancer diagnosis. However, this task is time-consuming and prone to errors, as it requires a detailed visual inspection and interpretation by a pathologist. Digital pathology aims at alleviating these problems by providing computerized methods that quantitatively analyze digitized histopathological tissue images. The performance of these methods mainly relies on the features that they use, and thus their success strictly depends on the ability of these features to successfully quantify the histopathology domain. With this motivation, this paper presents a new unsupervised feature extractor for effective representation and classification of histopathological tissue images. This feature extractor makes three main contributions: First, it proposes to identify salient subregions in an image, based on domain-specific prior knowledge, and to quantify the image by employing only the characteristics of these subregions instead of considering all image locations. Second, it introduces a new deep learning-based technique that quantizes the salient subregions by extracting a set of features directly learned on image data and uses the distribution of these quantizations for image representation and classification. To this end, the proposed technique constructs a deep belief network of restricted Boltzmann machines (RBMs), defines the activation values of the hidden unit nodes in the final RBM as the features, and learns the quantizations by clustering these features in an unsupervised way. Third, this extractor is the first example of successfully using restricted Boltzmann machines in the domain of histopathological image analysis. Our experiments on microscopic colon tissue images reveal that the proposed feature extractor is effective in obtaining more accurate classification results than its counterparts.
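The features clustered above are the hidden-unit activation values of the final RBM. A minimal sketch of how those activations are computed from a visible vector (toy dimensions and weights, purely illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def rbm_hidden_activations(v, W, b):
    """P(h_j = 1 | v) = sigmoid(b_j + sum_i v_i * W[i][j]) per hidden unit j.

    v: visible vector, W: weight matrix indexed W[visible][hidden],
    b: hidden biases. These activation probabilities serve as the features.
    """
    return [
        sigmoid(b[j] + sum(v[i] * W[i][j] for i in range(len(v))))
        for j in range(len(b))
    ]
```

Clustering these activation vectors across salient subregions yields the unsupervised quantizations the paper uses for image representation.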
|