1
Lubbad MAH, Kurtulus IL, Karaboga D, Kilic K, Basturk A, Akay B, Nalbantoglu OU, Yilmaz OMD, Ayata M, Yilmaz S, Pacal I. A Comparative Analysis of Deep Learning-Based Approaches for Classifying Dental Implants Decision Support System. J Imaging Inform Med 2024. [PMID: 38565730] [DOI: 10.1007/s10278-024-01086-x]
Abstract
This study aims to provide an effective solution for the autonomous identification of dental implant brands through a deep learning-based computer diagnostic system, to assess the system's potential in clinical practice, and to offer a strategic framework for improving diagnosis and treatment processes in implantology. A total of 28 deep learning models were employed: 18 convolutional neural network (CNN) models (VGG, ResNet, DenseNet, EfficientNet, RegNet, ConvNeXt) and 10 vision transformer models (Swin and Vision Transformer). The dataset comprises 1258 panoramic radiographs from patients who received implant treatment at Erciyes University Faculty of Dentistry between 2012 and 2023; it was used to train and evaluate the models and covers six implant systems from six manufacturers. The deep learning-based system achieved high classification accuracy across the different implant brands. Among all the architectures evaluated, the small variant of the ConvNeXt architecture performed best, reaching an accuracy of 94.2%. This study emphasizes the effectiveness of deep learning-based systems in achieving high classification accuracy for dental implant types. These findings pave the way for integrating advanced deep learning tools into clinical practice, promising significant improvements in patient care and treatment outcomes.
Affiliation(s)
- Mohammed A H Lubbad
- Department of Computer Engineering, Engineering Faculty, Erciyes University, 38039, Kayseri, Turkey.
- Artificial Intelligence and Big Data Application and Research Center, Erciyes University, Kayseri, Turkey.
- Dervis Karaboga
- Department of Computer Engineering, Engineering Faculty, Erciyes University, 38039, Kayseri, Turkey
- Artificial Intelligence and Big Data Application and Research Center, Erciyes University, Kayseri, Turkey
- Kerem Kilic
- Department of Prosthodontics, Dentistry Faculty, Erciyes University, Kayseri, Turkey
- Alper Basturk
- Department of Computer Engineering, Engineering Faculty, Erciyes University, 38039, Kayseri, Turkey
- Artificial Intelligence and Big Data Application and Research Center, Erciyes University, Kayseri, Turkey
- Bahriye Akay
- Department of Computer Engineering, Engineering Faculty, Erciyes University, 38039, Kayseri, Turkey
- Artificial Intelligence and Big Data Application and Research Center, Erciyes University, Kayseri, Turkey
- Ozkan Ufuk Nalbantoglu
- Department of Computer Engineering, Engineering Faculty, Erciyes University, 38039, Kayseri, Turkey
- Artificial Intelligence and Big Data Application and Research Center, Erciyes University, Kayseri, Turkey
- Mustafa Ayata
- Department of Prosthodontics, Dentistry Faculty, Erciyes University, Kayseri, Turkey
- Serkan Yilmaz
- Department of Dentomaxillofacial Radiology, Dentistry Faculty, Erciyes University, Kayseri, Turkey
- Ishak Pacal
- Department of Computer Engineering, Engineering Faculty, Igdir University, Igdir, Turkey
- Artificial Intelligence and Big Data Application and Research Center, Erciyes University, Kayseri, Turkey
2
Safdar Ali Khan M, Husen A, Nisar S, Ahmed H, Shah Muhammad S, Aftab S. Offloading the computational complexity of transfer learning with generic features. PeerJ Comput Sci 2024; 10:e1938. [PMID: 38660182] [PMCID: PMC11041970] [DOI: 10.7717/peerj-cs.1938]
Abstract
Deep learning approaches are generally complex, requiring extensive computational resources and exhibiting high time complexity. Transfer learning is a state-of-the-art approach for reducing these requirements by using pre-trained models without compromising accuracy and performance. Conventionally, pre-trained models are trained on datasets from different but similar domains and carry many domain-specific features. The computational requirements of transfer learning depend directly on the number of features, which comprises both domain-specific and generic features. This article investigates the prospects of reducing the computational requirements of transfer learning models by discarding domain-specific features from a pre-trained model. The approach is applied to breast cancer detection using the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) and evaluated on precision, accuracy, recall, F1-score, and computational requirements. Discarding domain-specific features up to a certain limit yields significant performance improvements while minimizing computational requirements: training time is reduced by approximately 12%, processor utilization by approximately 25%, and memory usage by approximately 22%. The proposed transfer learning strategy also increases accuracy by approximately 7% while expeditiously offloading computational complexity.
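The feature-discarding idea above can be sketched as follows. The scoring criterion here, a simple between/within-class variance ratio, is chosen purely for illustration; the paper's exact selection rule may differ.

```python
import numpy as np

def prune_domain_specific(features, labels, discard_frac=0.2):
    """Drop the fraction of feature columns whose class-separation score is
    highest, keeping the more 'generic' features (illustrative criterion)."""
    classes = np.unique(labels)
    class_means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    between_var = class_means.var(axis=0)        # spread of class means per feature
    within_var = features.var(axis=0) + 1e-12    # overall spread per feature
    specificity = between_var / within_var       # higher => more class-specific
    n_drop = int(discard_frac * features.shape[1])
    order = np.argsort(specificity)              # most generic first
    keep = np.sort(order[: features.shape[1] - n_drop])
    return features[:, keep], keep

# Stand-in for pre-trained-model features: 100 samples x 50 features
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
y = rng.integers(0, 2, size=100)
X_small, kept = prune_domain_specific(X, y, discard_frac=0.2)
print(X_small.shape)  # (100, 40)
```

The downstream classifier then trains on the reduced matrix, which is where the compute savings come from.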
Affiliation(s)
- Muhammad Safdar Ali Khan
- Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Arif Husen
- Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Department of Computer Science, COMSATS Institute of Information Technology, Lahore, Punjab, Pakistan
- Shafaq Nisar
- Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Hasnain Ahmed
- Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Syed Shah Muhammad
- Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Shabib Aftab
- Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
3
Leblebicioglu Kurtulus I, Lubbad M, Yilmaz OMD, Kilic K, Karaboga D, Basturk A, Akay B, Nalbantoglu U, Yilmaz S, Ayata M, Pacal I. A robust deep learning model for the classification of dental implant brands. J Stomatol Oral Maxillofac Surg 2024:101818. [PMID: 38462066] [DOI: 10.1016/j.jormas.2024.101818]
Abstract
OBJECTIVE When the brand of an implant is unknown, treatment options for complications arising from implant procedures can be significantly limited. This research explores the application of deep learning techniques to the classification of dental implant systems from panoramic radiographs. The primary objective is to assess whether the proposed model achieves accurate and efficient dental implant classification. MATERIAL AND METHODS A comprehensive analysis was conducted using a diverse set of 25 convolutional neural network (CNN) models, including popular architectures such as VGG16, ResNet-50, EfficientNet, and ConvNeXt. A dataset of 1258 panoramic radiographs from patients who underwent implant treatment at the faculty of dentistry was used for training and evaluation, with six dental implant systems serving as the classification targets. Precision, recall, F1-score, and support for each class were included in the classification report to ensure accurate and reliable results. RESULTS The experimental results demonstrate that the proposed model consistently outperformed the other evaluated CNN architectures in accuracy, precision, recall, and F1-score. With an accuracy of 95.74% and high precision and recall, the ConvNeXt model showed clear superiority in classifying dental implant systems. Notably, this performance was achieved with a relatively small number of parameters, indicating efficiency and speed during inference. CONCLUSION The findings highlight the effectiveness of deep learning techniques, particularly the proposed model, in accurately classifying dental implant systems from panoramic radiographs.
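The per-class report described in the methods (precision, recall, F1-score, support) can be computed directly from label arrays. A minimal, framework-free sketch; the toy labels below are illustrative, not from the paper:

```python
import numpy as np

def classification_report(y_true, y_pred, n_classes):
    """Per-class precision, recall, F1, and support from label arrays."""
    report = {}
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))  # true positives for class c
        fp = np.sum((y_pred == c) & (y_true != c))  # false positives
        fn = np.sum((y_pred != c) & (y_true == c))  # false negatives
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        report[c] = {"precision": precision, "recall": recall,
                     "f1": f1, "support": int(np.sum(y_true == c))}
    return report

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
rep = classification_report(y_true, y_pred, n_classes=3)
print(rep[1])  # class 1: precision 2/3, recall 1.0, support 2
```

In practice a library routine (e.g. from scikit-learn) produces the same numbers; the sketch just shows what each column of the report means.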
Affiliation(s)
- Mohammed Lubbad
- Department of Computer Engineering, Faculty of Engineering, Erciyes University, Kayseri, Turkey
- Kerem Kilic
- Department of Prosthodontics, Faculty of Dentistry, Erciyes University, Kayseri, Turkey
- Dervis Karaboga
- Department of Computer Engineering, Faculty of Engineering, Erciyes University, Kayseri, Turkey
- Alper Basturk
- Department of Computer Engineering, Faculty of Engineering, Erciyes University, Kayseri, Turkey
- Bahriye Akay
- Department of Computer Engineering, Faculty of Engineering, Erciyes University, Kayseri, Turkey
- Ufuk Nalbantoglu
- Department of Computer Engineering, Faculty of Engineering, Erciyes University, Kayseri, Turkey
- Serkan Yilmaz
- Department of Dentomaxillofacial Radiology, Ministry of Health, Mersin Oral and Dental Health Hospital, Mersin, Turkey
- Mustafa Ayata
- Dentos Oral and Dental Health Polyclinic, Kayseri, Turkey
- Ishak Pacal
- Department of Computer Engineering, Faculty of Engineering, Igdir University, Igdir, Turkey
4
Abhisheka B, Biswas SK, Purkayastha B. HBMD-Net: Feature Fusion Based Breast Cancer Classification with Class Imbalance Resolution. J Imaging Inform Med 2024. [PMID: 38409609] [DOI: 10.1007/s10278-024-01046-5]
Abstract
Breast cancer, a widespread global disease, represents a significant threat to women's health and lives, ranking among the most dangerous malignant tumors they face. Many researchers have proposed computer-aided diagnosis systems for classifying breast cancer, the majority of which rely primarily on deep learning (DL) methods that are not entirely reliable. These approaches overlook the need to incorporate both local and global information for precise tumor detection, even though such subtle nuances are crucial for accurate breast cancer classification. In addition, few breast cancer datasets are publicly available, and those that are tend to be imbalanced. This paper therefore presents the hybrid breast mass detection network (HBMD-Net) to address two critical challenges: class imbalance, and the fact that relying solely on either global or local features falls short of precise tumor classification. To overcome class imbalance, HBMD-Net incorporates the borderline synthetic minority over-sampling technique (BSMOTE). Simultaneously, it employs feature fusion, combining deep features extracted with ResNet50, which provide global information, with handcrafted histogram of oriented gradients (HOG) features, which provide local information. ROI segmentation is also performed to avoid misclassification, and this integrated strategy substantially enhances breast cancer classification performance. Moreover, the proposed method applies the block matching and 3D (BM3D) denoising filter to effectively eliminate multiplicative noise, which further improved system performance. The proposed HBMD-Net was evaluated on two breast ultrasound (BUS) datasets, BUSI and UDIAT, achieving accuracies of 99.14% and 94.49%, respectively.
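The BSMOTE step can be sketched as below. This is an illustrative reimplementation of Borderline-SMOTE's core idea — only minority samples whose neighborhoods are majority-dominated spawn synthetic points — not the authors' code; in practice a library implementation (e.g. imbalanced-learn's `BorderlineSMOTE`) would typically be used.

```python
import numpy as np

def borderline_smote(X, y, minority=1, k=5, seed=0):
    """Simplified Borderline-SMOTE: 'danger' minority samples (k-neighborhood
    majority-dominated but not entirely majority) spawn synthetic points
    interpolated toward their minority-class neighbors."""
    rng = np.random.default_rng(seed)
    X_min = X[y == minority]
    synthetic = []
    for x in X_min:
        # k nearest neighbors among all samples, excluding x itself
        nn = np.argsort(np.linalg.norm(X - x, axis=1))[1 : k + 1]
        n_maj = np.sum(y[nn] != minority)
        if k / 2 <= n_maj < k:  # borderline ('danger') sample
            # nearest minority-class neighbors of x (excluding x)
            min_nn = X_min[np.argsort(np.linalg.norm(X_min - x, axis=1))[1 : k + 1]]
            if len(min_nn) == 0:
                continue
            j = rng.integers(len(min_nn))
            gap = rng.random()  # interpolation factor in [0, 1)
            synthetic.append(x + gap * (min_nn[j] - x))
    if not synthetic:
        return X, y
    X_new = np.vstack([X, synthetic])
    y_new = np.concatenate([y, np.full(len(synthetic), minority)])
    return X_new, y_new

# Toy imbalanced data: 50 majority vs 8 overlapping minority samples
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
               rng.normal(1.0, 1.0, (8, 2))])
y = np.array([0] * 50 + [1] * 8)
X_res, y_res = borderline_smote(X, y)
```

The fused ResNet50/HOG feature vectors would then be trained on the rebalanced set.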
Affiliation(s)
- Barsha Abhisheka
- Computer Science and Engineering, NIT Silchar, Silchar, 788010, Assam, India.
- Saroj Kr Biswas
- Computer Science and Engineering, NIT Silchar, Silchar, 788010, Assam, India
5
Ortiz S, Rojas-Valenzuela I, Rojas F, Valenzuela O, Herrera LJ, Rojas I. Novel methodology for detecting and localizing cancer area in histopathological images based on overlapping patches. Comput Biol Med 2024; 168:107713. [PMID: 38000243] [DOI: 10.1016/j.compbiomed.2023.107713]
Abstract
Cancer is one of the most important pathologies in the world, causing the death of millions of people, and a cure is out of reach in most cases. Because rapid spread is one of its defining features, many efforts focus on early-stage detection and localization. Medicine has made numerous advances in recent decades with the help of artificial intelligence (AI), reducing costs and saving time. In this paper, deep learning (DL) models are used to present a novel method for detecting and localizing cancerous zones in whole slide images (WSI), using overlapping tissue patches to improve performance. A novel overlapping methodology is proposed and discussed, together with different alternatives for aggregating the labels of patches that overlap the same zone; the goal is to strengthen the labeling of each image area through multiple overlapping patch predictions. The results show that the proposed method improves on the traditional framework and offers a different approach to cancer detection. The best variant, which applies 3x3, stride-2 average pooling filters to overlapping patch labels, corrects 12.9% of misclassified patches on the HUP dataset and 15.8% on the CINIJ dataset. In addition, a filter is implemented to correct isolated patches that were also misclassified. Finally, a study of the CNN decision threshold analyzes its impact on model accuracy. Altering the decision threshold, combined with the isolated-patch filter and the proposed overlapping-patch method, corrects about 20% of the patches mislabeled by the traditional method. Overall, the proposed method achieves an accuracy of 94.6%. The code is available at https://github.com/sergioortiz26/Cancer_overlapping_filter_WSI_images.
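The label-aggregation step, 3x3 stride-2 average pooling over the grid of overlapping patch labels, can be sketched as follows. The probability grid below is illustrative, and laying patch predictions out as a 2-D grid is an assumption about the pipeline.

```python
import numpy as np

def pool_patch_labels(prob_grid, k=3, stride=2):
    """Average-pool a 2-D grid of per-patch cancer probabilities so each
    output cell aggregates the k x k overlapping patches covering it."""
    h, w = prob_grid.shape
    out_h = (h - k) // stride + 1
    out_w = (w - k) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            win = prob_grid[i * stride : i * stride + k,
                            j * stride : j * stride + k]
            out[i, j] = win.mean()  # consensus of overlapping patches
    return out

# Illustrative 5x5 grid of patch-level cancer probabilities
grid = np.array([
    [0.9, 0.8, 0.1, 0.2, 0.1],
    [0.7, 0.9, 0.2, 0.1, 0.0],
    [0.8, 0.6, 0.3, 0.1, 0.2],
    [0.1, 0.2, 0.1, 0.9, 0.8],
    [0.0, 0.1, 0.2, 0.7, 0.9],
])
pooled = pool_patch_labels(grid)
print(pooled.shape)  # (2, 2)
```

A single mislabeled patch surrounded by confident neighbors is pulled toward the neighborhood consensus, which is the correction effect the paper measures.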
Affiliation(s)
- Sergio Ortiz
- Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N CP:18071 Granada, Spain.
- Ignacio Rojas-Valenzuela
- Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N CP:18071 Granada, Spain
- Fernando Rojas
- Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N CP:18071 Granada, Spain
- Olga Valenzuela
- Department of Applied Mathematics, University of Granada, Facultad de Ciencias, Avenida de la Fuente Nueva S/N CP:18071 Granada, Spain
- Luis Javier Herrera
- Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N CP:18071 Granada, Spain
- Ignacio Rojas
- Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N CP:18071 Granada, Spain.
6
Kaur A, Kaushal C, Sandhu JK, Damaševičius R, Thakur N. Histopathological Image Diagnosis for Breast Cancer Diagnosis Based on Deep Mutual Learning. Diagnostics (Basel) 2023; 14:95. [PMID: 38201406] [PMCID: PMC10795733] [DOI: 10.3390/diagnostics14010095]
Abstract
Every year, millions of women across the globe are diagnosed with breast cancer (BC), an illness that is both common and potentially fatal. To provide effective therapy and enhance patient outcomes, an accurate diagnosis must be made as soon as possible. In recent years, deep learning (DL) approaches have shown great effectiveness in a variety of medical imaging applications, including the processing of histopathological images. The objective of this study is to improve the detection of BC by merging qualitative and quantitative data using DL techniques, with an emphasis on deep mutual learning (DML). In addition, a wide variety of breast cancer imaging modalities were investigated to assess the distinction between aggressive and benign BC. On this basis, deep convolutional neural networks (DCNNs) were established to assess histopathological images of BC. On the BreakHis-200×, BACH, and PUIH datasets, the trials show that the DML model achieves accuracies of 98.97%, 96.78%, and 96.34%, respectively, outperforming the other methodologies. More specifically, it improves localization without compromising classification performance, an indication of its increased utility. We intend to proceed with the development of the diagnostic model to make it more applicable to clinical settings.
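Deep mutual learning trains peer networks that imitate each other: each network minimizes its supervised cross-entropy plus a KL term pulling its predictions toward its peer's. A minimal numpy sketch of the loss for one of the two networks; the logit values are illustrative, and in a real pipeline this loss would be backpropagated through each network.

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def dml_loss(logits_a, logits_b, labels):
    """Loss for network A in deep mutual learning:
    cross-entropy(A, labels) + KL(softmax(B) || softmax(A))."""
    p_a = softmax(logits_a)
    p_b = softmax(logits_b)
    n = len(labels)
    ce = -np.log(p_a[np.arange(n), labels] + 1e-12).mean()
    kl = np.sum(p_b * (np.log(p_b + 1e-12) - np.log(p_a + 1e-12)), axis=1).mean()
    return ce + kl

logits_a = np.array([[2.0, 0.5], [0.1, 1.5]])  # network A outputs
logits_b = np.array([[1.8, 0.4], [0.2, 1.6]])  # peer network B outputs
labels = np.array([0, 1])
print(dml_loss(logits_a, logits_b, labels))
```

Network B is trained symmetrically with the KL direction reversed, so the two networks converge toward each other's predictive distributions.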
Affiliation(s)
- Amandeep Kaur
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Chetna Kaushal
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Jasjeet Kaur Sandhu
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, 53361 Akademija, Lithuania
- Neetika Thakur
- Junior Laboratory Technician, Postgraduate Institute of Medical Education and Research, Chandigarh 160012, India
7
Harrison P, Hasan R, Park K. State-of-the-Art of Breast Cancer Diagnosis in Medical Images via Convolutional Neural Networks (CNNs). J Healthc Inform Res 2023; 7:387-432. [PMID: 37927373] [PMCID: PMC10620373] [DOI: 10.1007/s41666-023-00144-3]
Abstract
Early detection of breast cancer is crucial for a better prognosis. Various studies have been conducted in which tumor lesions are detected and localized on images. This narrative review covers studies across five image modalities: histopathological, mammogram, magnetic resonance imaging (MRI), ultrasound, and computed tomography (CT) images, distinguishing it from other reviews that cover fewer modalities. The goal is to gather in one place the necessary information, such as pre-processing and CNN-based diagnosis techniques, for all five modalities for future studies. Each modality has pros and cons: mammograms can give a high false-positive rate for radiographically dense breasts, ultrasound's low soft-tissue contrast can lead to false detections at early stages, and MRI provides a three-dimensional volumetric image but is expensive and cannot be used as a routine test. Studies were manually reviewed using particular inclusion and exclusion criteria; as a result, 91 recent studies from 2017 to 2022 that classify and detect tumor lesions on breast cancer images across the five modalities were included. For histopathological images, the maximum accuracy achieved was around 99% and the maximum sensitivity 97.29%, using DenseNet, ResNet34, and ResNet50 architectures. For mammograms, the maximum accuracy was 96.52% using a customized CNN architecture. For MRI, the maximum accuracy was 98.33% using a customized CNN architecture. For ultrasound, the maximum accuracy was around 99% using DarkNet-53, ResNet-50, G-CNN, and VGG. For CT, the maximum sensitivity was 96% using the Xception architecture.
Histopathological and ultrasound images achieved higher accuracy, around 99%, using ResNet34, ResNet50, DarkNet-53, G-CNN, and VGG compared with the other modalities, for one or more of the following reasons: use of pre-trained architectures with pre-processing techniques, use of modified architectures with pre-processing techniques, use of two-stage CNNs, and a greater number of studies available for Artificial Intelligence (AI)/machine learning (ML) researchers to reference. One gap we found is that only a single image modality is used for CNN-based diagnosis; in the future, a multi-modality approach could be used to design CNN architectures with higher accuracy.
Affiliation(s)
- Pratibha Harrison
- Department of Computer and Information Science, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, 02747 MA USA
- Rakib Hasan
- Department of Mechanical Engineering, Khulna University of Engineering & Technology, PhulBari Gate, Khulna, 9203 Bangladesh
- Kihan Park
- Department of Mechanical Engineering, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, 02747 MA USA
8
Morovati B, Lashgari R, Hajihasani M, Shabani H. Reduced Deep Convolutional Activation Features (R-DeCAF) in Histopathology Images to Improve the Classification Performance for Breast Cancer Diagnosis. J Digit Imaging 2023; 36:2602-2612. [PMID: 37532925] [PMCID: PMC10584742] [DOI: 10.1007/s10278-023-00887-w]
Abstract
Breast cancer is the second most common cancer among women worldwide, and diagnosis by pathologists is time-consuming and subjective. Computer-aided diagnosis frameworks relieve pathologist workload by classifying the data automatically, and deep convolutional neural networks (CNNs) are effective solutions for this task. The features extracted from the activation layer of pre-trained CNNs are called deep convolutional activation features (DeCAF). In this paper, we show that not all DeCAF features necessarily lead to higher accuracy in the classification task and that dimension reduction plays an important role. We propose reduced DeCAF (R-DeCAF) for this purpose, applying different dimension reduction methods to achieve an effective combination of features that captures the essence of DeCAF features. This framework uses pre-trained CNNs such as AlexNet, VGG-16, and VGG-19 as feature extractors in transfer learning mode. The DeCAF features are extracted from the first fully connected layer of these CNNs, and a support vector machine is used for classification. Among linear and nonlinear dimensionality reduction algorithms, linear approaches such as principal component analysis (PCA) yield a better combination of deep features and lead to higher classification accuracy using a small number of features, given a specific amount of cumulative explained variance (CEV). The proposed method is validated on the BreakHis and ICIAR datasets. Comprehensive results show an improvement in classification accuracy of up to 4.3% with a feature vector size (FVS) of 23 and a CEV of 0.15.
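The reduction step can be sketched as PCA that keeps just enough leading components to reach a target cumulative explained variance (CEV), 0.15 as in the paper, before the reduced vectors go to the SVM. The random "DeCAF" matrix below is a stand-in for real extracted features.

```python
import numpy as np

def reduce_by_cev(features, cev_target=0.15):
    """PCA via SVD, keeping the smallest number of leading components
    whose cumulative explained variance reaches cev_target."""
    X = features - features.mean(axis=0)          # center the features
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    var = S ** 2                                  # variance per component
    cev = np.cumsum(var) / var.sum()              # cumulative explained variance
    n_keep = int(np.searchsorted(cev, cev_target) + 1)
    return X @ Vt[:n_keep].T, n_keep              # projected features

# Stand-in for DeCAF vectors from a fully connected layer
rng = np.random.default_rng(0)
deep_features = rng.normal(size=(200, 512))
reduced, n_keep = reduce_by_cev(deep_features, cev_target=0.15)
print(reduced.shape[1] == n_keep)  # True
```

The reduced matrix would then be passed to an SVM classifier; on real DeCAF features, correlated dimensions make the leading components far more dominant than on this random stand-in, so few components suffice.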
Affiliation(s)
- Bahareh Morovati
- Institute of Medical Science and Technology, Shahid Beheshti University, Tehran, Iran
- Reza Lashgari
- Institute of Medical Science and Technology, Shahid Beheshti University, Tehran, Iran
- Mojtaba Hajihasani
- Institute of Medical Science and Technology, Shahid Beheshti University, Tehran, Iran
- Hasti Shabani
- Institute of Medical Science and Technology, Shahid Beheshti University, Tehran, Iran.
9
Hanna MG, Brogi E. Future Practices of Breast Pathology Using Digital and Computational Pathology. Adv Anat Pathol 2023; 30:421-433. [PMID: 37737690] [DOI: 10.1097/pap.0000000000000414]
Abstract
Pathology clinical practice has evolved by adopting technological advancements initially regarded as potentially disruptive, such as electron microscopy, immunohistochemistry, and genomic sequencing. Breast pathology is a critical medical domain in which the pathology diagnosis has significant implications for prognostication and treatment. The advent of digital and computational pathology has brought significant advancements to the field, offering new possibilities for enhancing diagnostic accuracy and improving patient care. Digital slide scanning enables the conversion of glass slides into high-fidelity digital images, supporting the review of cases in a digital workflow. Digitization offers the capability to render specimen diagnoses, digitally archive patient specimens, collaborate, and practice telepathology. Image analysis and machine learning-based systems layered atop the high-resolution digital images offer novel workflows to assist breast pathologists in their clinical, educational, and research endeavors. Decision support tools may improve the detection and classification of breast lesions and the quantification of immunohistochemical studies. Computational biomarkers may contribute to patient management or outcomes. Furthermore, digital and computational pathology may increase standardization and quality assurance, especially in areas with high interobserver variability. This review explores the current landscape and possible future applications of digital and computational techniques in the field of breast pathology.
Affiliation(s)
- Matthew G Hanna
- Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, NY
10
Rai HM, Yoo J. A comprehensive analysis of recent advancements in cancer detection using machine learning and deep learning models for improved diagnostics. J Cancer Res Clin Oncol 2023; 149:14365-14408. [PMID: 37540254] [DOI: 10.1007/s00432-023-05216-w]
Abstract
PURPOSE Millions of people lose their lives to fatal diseases each year. Cancer is among the most fatal of these; contributing factors include obesity, alcohol consumption, infections, ultraviolet radiation, smoking, and unhealthy lifestyles. Cancer is abnormal and uncontrolled tissue growth that may spread to body parts beyond where it originated. It is therefore essential to diagnose cancer at an early stage and provide correct, timely treatment. Because manual diagnosis and diagnostic errors can cost patients their lives, much research targets automatic and accurate early-stage cancer detection. METHODS In this paper, we present a comparative analysis of recent advancements in the detection of various cancer types using traditional machine learning (ML) and deep learning (DL) models. The study covers four cancer types (brain, lung, skin, and breast) and their detection using ML and DL techniques. The extensive review includes 130 studies, of which 56 use ML-based and 74 use DL-based cancer detection techniques. Only peer-reviewed research papers published in the last five years (2018-2023) were included, analyzed by publication year, features utilized, best model, dataset/images utilized, and best accuracy. We reviewed ML- and DL-based techniques separately and used accuracy as the performance evaluation metric to maintain homogeneity while verifying classifier efficiency. RESULTS Among all the reviewed literature, DL techniques achieved the highest accuracy, 100%, while ML techniques reached 99.89%. The lowest accuracies with DL and ML approaches were 70% and 75.48%, respectively; the difference between the highest- and lowest-performing models is about 28.8% for skin cancer detection.
In addition, the key findings and challenges for each cancer type using ML and DL techniques are presented, and the comparative analysis between the best- and worst-performing models, along with overall findings and challenges, is provided for future research. Although the analysis is based on accuracy and various other parameters, the results show significant scope for improving classification efficiency. CONCLUSION Both ML and DL techniques hold promise for the early detection of various cancer types. However, the study identifies specific challenges that must be addressed before these techniques can be widely implemented in clinical settings. The presented results offer valuable guidance for future research in cancer detection, emphasizing the need for continued advances in ML- and DL-based approaches to improve diagnostic accuracy and ultimately save more lives.
Affiliation(s)
- Hari Mohan Rai
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea.
- Joon Yoo
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea
11
|
Al-Thelaya K, Gilal NU, Alzubaidi M, Majeed F, Agus M, Schneider J, Househ M. Applications of discriminative and deep learning feature extraction methods for whole slide image analysis: A survey. J Pathol Inform 2023; 14:100335. [PMID: 37928897 PMCID: PMC10622844 DOI: 10.1016/j.jpi.2023.100335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2023] [Revised: 07/17/2023] [Accepted: 07/19/2023] [Indexed: 11/07/2023] Open
Abstract
Digital pathology technologies, including whole slide imaging (WSI), have significantly improved modern clinical practices by facilitating storing, viewing, processing, and sharing digital scans of tissue glass slides. Researchers have proposed various artificial intelligence (AI) solutions for digital pathology applications, such as automated image analysis, to extract diagnostic information from WSI for improving pathology productivity, accuracy, and reproducibility. Feature extraction methods play a crucial role in transforming raw image data into meaningful representations for analysis, facilitating the characterization of tissue structures, cellular properties, and pathological patterns. These features have diverse applications in several digital pathology applications, such as cancer prognosis and diagnosis. Deep learning-based feature extraction methods have emerged as a promising approach to accurately represent WSI contents and have demonstrated superior performance in histology-related tasks. In this survey, we provide a comprehensive overview of feature extraction methods, including both manual and deep learning-based techniques, for the analysis of WSIs. We review relevant literature, analyze the discriminative and geometric features of WSIs (i.e., features suited to support the diagnostic process and extracted by "engineered" methods as opposed to AI), and explore predictive modeling techniques using AI and deep learning. This survey examines the advances, challenges, and opportunities in this rapidly evolving field, emphasizing the potential for accurate diagnosis, prognosis, and decision-making in digital pathology.
Affiliation(s)
- Khaled Al-Thelaya
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Nauman Ullah Gilal
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mahmood Alzubaidi
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Fahad Majeed
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Marco Agus
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Jens Schneider
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mowafa Househ
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar

12
Rai HM. Cancer detection and segmentation using machine learning and deep learning techniques: a review. MULTIMEDIA TOOLS AND APPLICATIONS 2023. [DOI: 10.1007/s11042-023-16520-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/16/2022] [Revised: 05/12/2023] [Accepted: 08/13/2023] [Indexed: 09/16/2023]
13
TCNN: A Transformer Convolutional Neural Network for artifact classification in whole slide images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104812] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/14/2023]
14
Ali MD, Saleem A, Elahi H, Khan MA, Khan MI, Yaqoob MM, Farooq Khattak U, Al-Rasheed A. Breast Cancer Classification through Meta-Learning Ensemble Technique Using Convolution Neural Networks. Diagnostics (Basel) 2023; 13:2242. [PMID: 37443636 DOI: 10.3390/diagnostics13132242] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2023] [Revised: 06/22/2023] [Accepted: 06/23/2023] [Indexed: 07/15/2023] Open
Abstract
This study aims to develop an efficient and accurate breast cancer classification model using meta-learning approaches and multiple convolutional neural networks. The Breast Ultrasound Images (BUSI) dataset contains various types of breast lesions. The goal is to classify these lesions as benign or malignant, which is crucial for the early detection and treatment of breast cancer. Traditional machine learning and deep learning approaches often fail to classify such images accurately due to their complex and diverse nature. To address this problem, the proposed model uses several advanced techniques: a meta-learning ensemble technique, transfer learning, and data augmentation. Meta-learning optimizes the model's learning process, allowing it to adapt quickly to new and unseen datasets. Transfer learning leverages pre-trained models such as Inception, ResNet50, and DenseNet121 to enhance the model's feature extraction ability. Data augmentation artificially generates new training images, increasing the size and diversity of the dataset, and the meta-ensemble stage combines the outputs of multiple CNNs to improve classification accuracy. The proposed work first pre-processes the BUSI dataset, then trains and evaluates multiple CNNs using different architectures and pre-trained models; a meta-learning algorithm optimizes the learning process, and ensemble learning combines the outputs of the CNNs. The evaluation results indicate that the model is highly effective, achieving high accuracy. Finally, the proposed model's performance is compared with state-of-the-art approaches in terms of accuracy, precision, recall, and F1 score.
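The ensemble step this abstract describes, combining the class-probability outputs of several CNNs, can be illustrated with simple probability averaging (soft voting). This is a minimal stand-alone sketch, not the authors' implementation; the per-model probabilities below are hypothetical placeholders for the outputs of fine-tuned networks such as Inception, ResNet50, and DenseNet121.

```python
# Minimal soft-voting ensemble: average the class-probability outputs of
# several base models and pick the class with the highest mean probability.
# The probabilities below are hypothetical stand-ins for CNN outputs.

def soft_vote(prob_lists):
    """prob_lists: one [p_benign, p_malignant] list per base model."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    mean_probs = [sum(p[c] for p in prob_lists) / n_models
                  for c in range(n_classes)]
    return mean_probs.index(max(mean_probs)), mean_probs

# Hypothetical outputs of three fine-tuned CNNs for one ultrasound image:
preds = [
    [0.30, 0.70],  # e.g. an Inception-style model
    [0.45, 0.55],  # e.g. a ResNet50-style model
    [0.20, 0.80],  # e.g. a DenseNet121-style model
]
label, probs = soft_vote(preds)
print(label)  # class 1 (malignant) wins the averaged vote
```

In a real meta-learning ensemble, the averaging weights (or a small meta-classifier) would themselves be learned on a validation set rather than fixed as a plain mean.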
Affiliation(s)
- Muhammad Danish Ali
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Adnan Saleem
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Hubaib Elahi
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Muhammad Amir Khan
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, Shah Alam 40450, Malaysia
- Muhammad Ijaz Khan
- Institute of Computing and Information Technology, Gomal University, Dera Ismail Khan 29220, Pakistan
- Muhammad Mateen Yaqoob
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Umar Farooq Khattak
- School of Information Technology, UNITAR International University, Kelana Jaya, Petaling Jaya 47301, Malaysia
- Amal Al-Rasheed
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11671, Saudi Arabia

15
Kode H, Barkana BD. Deep Learning- and Expert Knowledge-Based Feature Extraction and Performance Evaluation in Breast Histopathology Images. Cancers (Basel) 2023; 15:3075. [PMID: 37370687 DOI: 10.3390/cancers15123075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Revised: 05/31/2023] [Accepted: 06/03/2023] [Indexed: 06/29/2023] Open
Abstract
Cancer develops when a single cell or a group of cells grows and spreads uncontrollably. Histopathology images are used in cancer diagnosis since they show tissue and cell structures under a microscope. Knowledge-based and deep learning-based computer-aided detection is an ongoing research field in cancer diagnosis using histopathology images. Feature extraction is vital in both approaches since the feature set is fed to a classifier and determines the performance. This paper evaluates three feature extraction methods and their performance in breast cancer diagnosis. Features are extracted by (1) a Convolutional Neural Network, (2) a transfer learning architecture, VGG16, and (3) a knowledge-based system. The feature sets are tested by seven classifiers, including Neural Network (64 units), Random Forest, Multilayer Perceptron, Decision Tree, Support Vector Machines, K-Nearest Neighbors, and Narrow Neural Network (10 units), on the BreakHis 400× image dataset. The CNN features achieved accuracies of up to 85% with the Neural Network and Random Forest, the VGG16 features up to 86% with the Neural Network, and the knowledge-based features up to 98% with the Neural Network, Random Forest, and Multilayer Perceptron classifiers.
Affiliation(s)
- Hepseeba Kode
- Computer Science and Engineering Department, University of Bridgeport, Bridgeport, CT 06604, USA
- Buket D Barkana
- Electrical Engineering Department, University of Bridgeport, Bridgeport, CT 06604, USA

16
Tuerhong A, Silamujiang M, Xianmuxiding Y, Wu L, Mojarad M. An ensemble classifier method based on teaching-learning-based optimization for breast cancer diagnosis. J Cancer Res Clin Oncol 2023:10.1007/s00432-023-04861-5. [PMID: 37202580 DOI: 10.1007/s00432-023-04861-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Accepted: 05/13/2023] [Indexed: 05/20/2023]
Abstract
INTRODUCTION Epidemiological studies show that breast cancer is the most common cancer among women worldwide. Breast cancer treatment can be very effective, especially when the disease is detected in the early stages, and this goal can be pursued by applying machine learning models to large-scale breast cancer data. METHODS This paper proposes a new intelligent approach using an optimized ensemble classifier for breast cancer diagnosis. Classification is performed by a new intelligent Group Method of Data Handling (GMDH) neural network-based ensemble classifier. The method improves the performance of the machine learning technique by using a Teaching-Learning-Based Optimization (TLBO) algorithm to optimize the hyperparameters of the classifier. TLBO is also used as an evolutionary method to address appropriate feature selection in breast cancer data. RESULTS The simulation results show that the proposed method improves accuracy by between 7 and 26% compared to the best results of existing equivalent algorithms. CONCLUSION Based on the obtained results, we suggest the proposed algorithm as an intelligent medical assistant system for breast cancer diagnosis.
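For readers unfamiliar with TLBO, the core of the algorithm (a teacher phase that pulls each candidate toward the current best and away from the population mean, and a learner phase where candidates learn from random peers) can be sketched in a few lines. This is a generic, minimal TLBO on a toy objective, standing in for the paper's actual setup of optimizing GMDH hyperparameters and feature subsets; the population size, bounds, and objective here are illustrative assumptions.

```python
import random

def sphere(x):
    # Toy objective standing in for a classifier's validation error.
    return sum(v * v for v in x)

def tlbo(f, dim=5, pop=20, iters=50, lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    clip = lambda v: max(lo, min(hi, v))
    for _ in range(iters):
        fit = [f(x) for x in X]
        teacher = X[fit.index(min(fit))]
        mean = [sum(x[d] for x in X) / pop for d in range(dim)]
        for i in range(pop):
            # Teacher phase: move toward the best solution, away from the mean.
            tf = rng.choice((1, 2))  # teaching factor
            cand = [clip(X[i][d] + rng.random() * (teacher[d] - tf * mean[d]))
                    for d in range(dim)]
            if f(cand) < f(X[i]):  # greedy acceptance
                X[i] = cand
            # Learner phase: learn from a randomly chosen peer.
            j = rng.randrange(pop)
            if j == i:
                continue
            if f(X[i]) < f(X[j]):
                cand = [clip(X[i][d] + rng.random() * (X[i][d] - X[j][d]))
                        for d in range(dim)]
            else:
                cand = [clip(X[i][d] + rng.random() * (X[j][d] - X[i][d]))
                        for d in range(dim)]
            if f(cand) < f(X[i]):
                X[i] = cand
    return min(X, key=f)

best = tlbo(sphere)
print(sphere(best))  # close to 0 after 50 iterations
```

A notable design point of TLBO is that, unlike genetic algorithms or particle swarm optimization, it has no algorithm-specific tuning parameters beyond population size and iteration count, which is part of its appeal for hyperparameter optimization.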
Affiliation(s)
- Adila Tuerhong
- Department of Cardio-Oncology, Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, 830011, Xinjiang, China
- Mutalipu Silamujiang
- Department of Traumatic Orthopedic, The Sixth Affiliated Hospital of Xinjiang Medical University, Urumqi, 830002, Xinjiang, China
- Yilixiati Xianmuxiding
- Department of Emergency, Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, 830011, Xinjiang, China
- Li Wu
- Department of Cardio-Oncology, Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, 830011, Xinjiang, China.
- Musa Mojarad
- Department of Computer Engineering, Firoozabad Branch, Islamic Azad University, Firoozabad, Iran.

17
Yong MP, Hum YC, Lai KW, Lee YL, Goh CH, Yap WS, Tee YK. Histopathological Gastric Cancer Detection on GasHisSDB Dataset Using Deep Ensemble Learning. Diagnostics (Basel) 2023; 13:diagnostics13101793. [PMID: 37238277 DOI: 10.3390/diagnostics13101793] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Revised: 05/08/2023] [Accepted: 05/14/2023] [Indexed: 05/28/2023] Open
Abstract
Gastric cancer is a leading cause of cancer-related deaths worldwide, underscoring the need for early detection to improve patient survival rates. The current clinical gold standard for detection is histopathological image analysis, but this process is manual, laborious, and time-consuming. As a result, there has been growing interest in developing computer-aided diagnosis to assist pathologists. Deep learning has shown promise in this regard, but each model can only extract a limited number of image features for classification. To overcome this limitation and improve classification performance, this study proposes ensemble models that combine the decisions of several deep learning models. To evaluate the effectiveness of the proposed models, we tested their performance on the publicly available gastric cancer dataset, Gastric Histopathology Sub-size Image Database. Our experimental results showed that the top 5 ensemble model achieved state-of-the-art detection accuracy in all sub-databases, with the highest detection accuracy of 99.20% in the 160 × 160 pixels sub-database. These results demonstrated that ensemble models could extract important features from smaller patch sizes and achieve promising performance. Overall, our proposed work could assist pathologists in detecting gastric cancer through histopathological image analysis and contribute to early gastric cancer detection to improve patient survival rates.
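The decision-combination step this abstract describes, merging the predictions of several deep models into one ensemble verdict, can be illustrated with hard majority voting. This is a minimal sketch and not the paper's implementation; the per-model labels below are hypothetical stand-ins for the top-5 deep models' outputs on one image patch.

```python
from collections import Counter

def majority_vote(labels):
    """Return the most common predicted label among the base models.
    On a tie, Counter.most_common returns the label first encountered."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical per-model predictions for one histopathology patch
# (0 = normal, 1 = abnormal), standing in for five deep models:
patch_preds = [1, 1, 0, 1, 0]
print(majority_vote(patch_preds))  # 1
```

Majority voting only needs each model's predicted label; when calibrated probabilities are available, averaging them (soft voting) usually exploits more of the information in the base models.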
Affiliation(s)
- Ming Ping Yong
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia
- Yan Chai Hum
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia
- Khin Wee Lai
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia
- Ying Loong Lee
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia
- Choon-Hian Goh
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia
- Wun-She Yap
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia
- Yee Kai Tee
- Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang 43000, Malaysia

18
Mokoatle M, Marivate V, Mapiye D, Bornman R, Hayes VM. A review and comparative study of cancer detection using machine learning: SBERT and SimCSE application. BMC Bioinformatics 2023; 24:112. [PMID: 36959534 PMCID: PMC10037872 DOI: 10.1186/s12859-023-05235-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2022] [Accepted: 03/17/2023] [Indexed: 03/25/2023] Open
Abstract
BACKGROUND Using visual, biological, and electronic health records data as the sole input source, pretrained convolutional neural networks and conventional machine learning methods have been heavily employed for the identification of various malignancies. Typically, a series of preprocessing and image segmentation steps is first performed to extract region-of-interest features from noisy data, and the extracted features are then fed to machine learning and deep learning methods for cancer detection. METHODS In this work, a review of methods applied to develop machine learning algorithms that detect cancer is provided. With more than 100 types of cancer, this study examines only research on the four most common and prevalent cancers worldwide: lung, breast, prostate, and colorectal cancer. Next, using state-of-the-art sentence transformers, namely SBERT (2019) and the unsupervised SimCSE (2021), this study proposes a new methodology for detecting cancer that requires raw DNA sequences of matched tumor/normal pairs as the only input. The learnt DNA representations retrieved from SBERT and SimCSE are then passed to machine learning algorithms (XGBoost, Random Forest, LightGBM, and CNNs) for classification. As far as we are aware, SBERT and SimCSE transformers have not previously been applied to represent DNA sequences in cancer detection settings. RESULTS The XGBoost model was the best-performing classifier, with the highest overall accuracy of 73 ± 0.13% using SBERT embeddings and 75 ± 0.12% using SimCSE embeddings. In light of these findings, it can be concluded that incorporating sentence representations from SimCSE's sentence transformer only marginally improved the performance of the machine learning models.
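Sentence transformers such as SBERT and SimCSE expect text, so a raw DNA string must first be rendered as token-like units. A common convention (an assumption here, not necessarily this paper's preprocessing) is to split the sequence into overlapping k-mers joined by spaces; the helper and the choice of k below are purely illustrative.

```python
def dna_to_kmer_sentence(seq, k=6):
    """Turn a raw DNA string into a whitespace-separated 'sentence' of
    overlapping k-mers, one common way to present DNA to text encoders
    such as SBERT/SimCSE. k=6 is an illustrative default, not a value
    taken from the paper."""
    seq = seq.upper()
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

print(dna_to_kmer_sentence("ACGTACGTAC", k=4))
# ACGT CGTA GTAC TACG ACGT CGTA GTAC
```

The resulting "sentence" can then be embedded by any sentence transformer, and the fixed-length embedding fed to a downstream classifier such as XGBoost.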
Affiliation(s)
- Mpho Mokoatle
- Department of Computer Science, University of Pretoria, Pretoria, South Africa.
- Vukosi Marivate
- Department of Computer Science, University of Pretoria, Pretoria, South Africa
- Riana Bornman
- School of Health Systems and Public Health, University of Pretoria, Pretoria, South Africa
- Vanessa M Hayes
- School of Medical Sciences, The University of Sydney, Sydney, Australia
- School of Health Systems and Public Health, University of Pretoria, Pretoria, South Africa

19
Garg S, Singh P. Transfer Learning Based Lightweight Ensemble Model for Imbalanced Breast Cancer Classification. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2023; 20:1529-1539. [PMID: 35536810 DOI: 10.1109/tcbb.2022.3174091] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Automated classification of breast cancer can often save lives, as manual detection is usually time-consuming and expensive. Over the last decade, deep learning techniques have been the most widely used for the automatic classification of breast cancer from histopathology images. This paper performs binary and multi-class classification of breast cancer using a transfer learning-based ensemble model. To analyze the correctness and reliability of the proposed model, we used an imbalanced IDC dataset and an imbalanced BreakHis dataset in the binary-class scenario, and a balanced BACH dataset for the multi-class classification. A lightweight shallow CNN model with batch normalization to accelerate convergence is aggregated with a lightweight MobileNetV2 to improve learning and adaptability. The aggregated output is fed into a multilayer perceptron to complete the final classification task. The experimental study on all three datasets was performed and compared with recent works. We fine-tuned three different pre-trained models (ResNet50, InceptionV4, and MobileNetV2) and compared them with the proposed lightweight ensemble model in terms of execution time, number of parameters, model size, etc. In both evaluation phases, our model outperforms on all three datasets.
20
Yusoff M, Haryanto T, Suhartanto H, Mustafa WA, Zain JM, Kusmardi K. Accuracy Analysis of Deep Learning Methods in Breast Cancer Classification: A Structured Review. Diagnostics (Basel) 2023; 13:diagnostics13040683. [PMID: 36832171 PMCID: PMC9955565 DOI: 10.3390/diagnostics13040683] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2023] [Revised: 02/06/2023] [Accepted: 02/07/2023] [Indexed: 02/17/2023] Open
Abstract
Breast cancer is diagnosed using histopathological imaging. This task is extremely time-consuming due to high image complexity and volume. However, it is important to facilitate the early detection of breast cancer for medical intervention. Deep learning (DL) has become popular in medical imaging solutions and has demonstrated various levels of performance in diagnosing cancerous images. Nonetheless, achieving high precision while minimizing overfitting remains a significant challenge for classification solutions. The handling of imbalanced data and incorrect labeling is a further concern. Additional methods, such as pre-processing, ensemble, and normalization techniques, have been established to enhance image characteristics. These methods could influence classification solutions and be used to overcome overfitting and data balancing issues. Hence, developing a more sophisticated DL variant could improve classification accuracy while reducing overfitting. Technological advancements in DL have fueled the growth of automated breast cancer diagnosis in recent years. This paper reviewed studies on the capability of DL to classify histopathological breast cancer images, with the objective of systematically reviewing and analyzing current research on the classification of such images. Additionally, literature from the Scopus and Web of Science (WOS) indexes was reviewed. This study assessed recent approaches for histopathological breast cancer image classification in DL applications for papers published up until November 2022. The findings of this study suggest that DL methods, especially convolutional neural networks and their hybrids, are the most cutting-edge approaches currently in use. To find a new technique, it is necessary first to survey the landscape of existing DL approaches and their hybrid methods in order to conduct comparisons and case studies.
Affiliation(s)
- Marina Yusoff
- Institute for Big Data Analytics and Artificial Intelligence (IBDAAI), Kompleks Al-Khawarizmi, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia
- College of Computing, Informatic and Media, Kompleks Al-Khawarizmi, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia
- Correspondence: (M.Y.); (W.A.M.)
- Toto Haryanto
- Department of Computer Science, IPB University, Bogor 16680, Indonesia
- Heru Suhartanto
- Faculty of Computer Science, Universitas Indonesia, Depok 16424, Indonesia
- Wan Azani Mustafa
- Faculty of Electrical Engineering Technology, Universiti Malaysia Perlis, UniCITI Alam Campus, Sungai Chuchuh, Padang Besar 02100, Perlis, Malaysia
- Correspondence: (M.Y.); (W.A.M.)
- Jasni Mohamad Zain
- Institute for Big Data Analytics and Artificial Intelligence (IBDAAI), Kompleks Al-Khawarizmi, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia
- College of Computing, Informatic and Media, Kompleks Al-Khawarizmi, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia
- Kusmardi Kusmardi
- Department of Anatomical Pathology, Faculty of Medicine, Universitas Indonesia/Cipto Mangunkusumo Hospital, Jakarta 10430, Indonesia
- Human Cancer Research Cluster, Indonesia Medical Education and Research Institute, Universitas Indonesia, Jakarta 10430, Indonesia

21
Grading of gliomas using transfer learning on MRI images. MAGMA (NEW YORK, N.Y.) 2023; 36:43-53. [PMID: 36326937 DOI: 10.1007/s10334-022-01046-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Revised: 09/04/2022] [Accepted: 10/17/2022] [Indexed: 11/06/2022]
Abstract
OBJECTIVE Despite the critical role of Magnetic Resonance Imaging (MRI) in the diagnosis of brain tumours, there are still many pitfalls in their exact grading, particularly for gliomas. In this regard, this study aimed to examine the potential of Transfer Learning (TL) and Machine Learning (ML) algorithms for the accurate grading of gliomas on MRI images. MATERIALS AND METHODS The dataset included four types of axial MRI images of glioma brain tumours with grades I-IV: T1-weighted, T2-weighted, FLAIR, and T1-weighted Contrast-Enhanced (T1-CE). Images were resized, normalized, and randomly split into training, validation, and test sets. ImageNet pre-trained Convolutional Neural Networks (CNNs) were utilized for feature extraction and classification, using Adam and SGD optimizers. Logistic Regression (LR) and Support Vector Machine (SVM) methods were also implemented for classification in place of Fully Connected (FC) layers, taking advantage of the features extracted by each CNN. RESULTS Evaluation metrics were computed to find the model with the best performance, and the highest overall accuracy of 99.38% was achieved by the model combining an SVM classifier with features extracted by pre-trained VGG-16. DISCUSSION It was demonstrated that developing Computer-aided Diagnosis (CAD) systems using pre-trained CNNs and classification algorithms is a functional approach to automatically determining the grade of glioma brain tumours in MRI images. Using these models is an excellent alternative to invasive methods and helps doctors diagnose more accurately before treatment.
22
Afriyie Y, Weyori BA, Opoku AA. A scaling up approach: a research agenda for medical imaging analysis with applications in deep learning. J EXP THEOR ARTIF IN 2023. [DOI: 10.1080/0952813x.2023.2165721] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Affiliation(s)
- Yaw Afriyie
- Department of Computer Science and Informatics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana
- Department of Computer Science, Faculty of Information and Communication Technology, SD Dombo University of Business and Integrated Development Studies, Wa, Ghana
- Benjamin A. Weyori
- Department of Computer Science and Informatics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana
- Alex A. Opoku
- Department of Mathematics & Statistics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana

23
Breast cancer classification by a new approach to assessing deep neural network-based uncertainty quantification methods. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
24
Chan RC, To CKC, Cheng KCT, Yoshikazu T, Yan LLA, Tse GM. Artificial intelligence in breast cancer histopathology. Histopathology 2023; 82:198-210. [PMID: 36482271 DOI: 10.1111/his.14820] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2022] [Revised: 09/22/2022] [Accepted: 09/28/2022] [Indexed: 12/13/2022]
Abstract
This is a review on the use of artificial intelligence for digital breast pathology. A systematic search on PubMed was conducted, identifying 17,324 research papers related to breast cancer pathology. Following a semimanual screening, 664 papers were retrieved and pursued. The papers are grouped into six major tasks performed by pathologists, namely molecular and hormonal analysis, grading, mitotic figure counting, Ki-67 indexing, tumour-infiltrating lymphocyte assessment, and lymph node metastasis identification. Under each task, open-source datasets for research to build artificial intelligence (AI) tools are also listed. Many AI tools showed promise and demonstrated feasibility in the automation of routine pathology investigations. We expect continued growth of AI in this field as new algorithms mature.
Affiliation(s)
- Ronald Ck Chan
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Chun Kit Curtis To
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Ka Chuen Tom Cheng
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Tada Yoshikazu
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Lai Ling Amy Yan
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Gary M Tse
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong

25
Duenweg SR, Brehler M, Bobholz SA, Lowman AK, Winiarz A, Kyereme F, Nencka A, Iczkowski KA, LaViolette PS. Comparison of a machine and deep learning model for automated tumor annotation on digitized whole slide prostate cancer histology. PLoS One 2023; 18:e0278084. [PMID: 36928230 PMCID: PMC10019669 DOI: 10.1371/journal.pone.0278084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Accepted: 03/04/2023] [Indexed: 03/18/2023] Open
Abstract
One in eight men will be affected by prostate cancer (PCa) in their lives. While the current clinical standard prognostic marker for PCa is the Gleason score, it is subject to inter-reviewer variability. This study compares two machine learning methods for discriminating cancerous regions on digitized histology from 47 PCa patients. Whole-slide images were annotated by a GU fellowship-trained pathologist for each Gleason pattern. High-resolution tiles were extracted from annotated and unlabeled tissue. Patients were separated into a training set of 31 patients (Cohort A, n = 9345 tiles) and a testing cohort of 16 patients (Cohort B, n = 4375 tiles). Tiles from Cohort A were used to train a ResNet model, and glands from these tiles were segmented to calculate pathomic features to train a bagged ensemble model to discriminate tumors as (1) cancer and noncancer, (2) high- and low-grade cancer from noncancer, and (3) all Gleason patterns. The outputs of these models were compared to ground-truth pathologist annotations. The ensemble and ResNet models had overall accuracies of 89% and 88%, respectively, at predicting cancer from noncancer. The ResNet model was additionally able to differentiate Gleason patterns on data from Cohort B, while the ensemble model was not. Our results suggest that quantitative pathomic features calculated from PCa histology can distinguish regions of cancer; however, texture features captured by deep learning frameworks better differentiate unique Gleason patterns.
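The tile-extraction step described above, cutting a large whole-slide image into fixed-size patches for model training, can be sketched as follows. The tile size and the toy "slide" are illustrative assumptions; real WSI pipelines additionally filter out background tiles and work at specific magnifications.

```python
def extract_tiles(image, tile=4):
    """Split a 2D pixel grid (list of rows) into non-overlapping
    tile x tile patches, discarding partial tiles at the edges,
    as is typical when preparing whole-slide images for a CNN."""
    h, w = len(image), len(image[0])
    tiles = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            tiles.append([row[x:x + tile] for row in image[y:y + tile]])
    return tiles

# A toy 8x10 "slide": tiling with 4x4 patches yields 2 rows x 2 cols = 4 tiles.
slide = [[(r * 10 + c) % 255 for c in range(10)] for r in range(8)]
print(len(extract_tiles(slide)))  # 4
```

Each extracted tile would then either feed a deep network directly (as with the ResNet branch) or be segmented further to compute hand-crafted pathomic features (as with the bagged ensemble branch).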
Collapse
Affiliation(s)
- Savannah R Duenweg
- Department of Biophysics, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Michael Brehler
- Department of Radiology, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Samuel A Bobholz
- Department of Biophysics, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Allison K Lowman
- Department of Radiology, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Aleksandra Winiarz
- Department of Biophysics, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Fitzgerald Kyereme
- Department of Radiology, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Andrew Nencka
- Department of Radiology, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Kenneth A Iczkowski
- Department of Pathology, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Peter S LaViolette
- Department of Radiology, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Department of Biomedical Engineering, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
|
26
|
Saini M, Susan S. VGGIN-Net: Deep Transfer Network for Imbalanced Breast Cancer Dataset. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2023; 20:752-762. [PMID: 35349449 DOI: 10.1109/tcbb.2022.3163277] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
In this paper, we present a novel deep neural network architecture based on a transfer learning approach, formed by freezing all layers of the pre-trained VGG16 model up to its block4 pooling layer (at the lower level) and concatenating them with the layers of a randomly initialized naïve Inception block module (at the higher level). We further add batch normalization, flatten, dropout, and dense layers to the proposed architecture. Our transfer network, called VGGIN-Net, facilitates the transfer of domain knowledge from the larger ImageNet object dataset to the smaller, imbalanced breast cancer dataset. To improve the performance of the proposed model, regularization was applied in the form of dropout and data augmentation. A detailed block-wise fine-tuning was conducted on the proposed deep transfer network for images of different magnification factors. The results of extensive experiments indicate a significant improvement in classification performance after fine-tuning. The proposed deep learning architecture with transfer learning and fine-tuning yields the highest accuracies in comparison to other state-of-the-art approaches for classification of the BreakHis breast cancer dataset. The architecture is designed so that it can be effectively transfer-learned on other breast cancer datasets.
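The core transfer-learning move here, keep the pre-trained lower layers frozen and train only the newly added upper layers, can be illustrated with a minimal numpy stand-in: a fixed random ReLU projection plays the role of VGG16's frozen blocks, and a logistic head plays the role of the new Inception-plus-dense layers. Everything in this sketch (data, sizes, weights) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Frozen backbone": a fixed random ReLU projection standing in for the
# frozen VGG16 blocks. Its weights are never updated below.
W_frozen = rng.normal(size=(4, 8))

def backbone(X):
    return np.maximum(X @ W_frozen, 0.0)

# Toy, linearly separable "images" (4 input features each).
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Trainable "head" (stands in for the new Inception block + dense layers).
w, b = np.zeros(8), 0.0
F = backbone(X)                      # computed once: the backbone is frozen
for _ in range(2000):                # gradient descent on the head only
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    g = p - y
    w -= 0.1 * F.T @ g / len(y)
    b -= 0.1 * g.mean()

train_acc = ((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5) == y).mean()
```

Because the backbone never changes, its features can be computed once and cached, which is also why freezing makes fine-tuning cheap on small datasets.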
|
27
|
The Systematic Review of Artificial Intelligence Applications in Breast Cancer Diagnosis. Diagnostics (Basel) 2022; 13:diagnostics13010045. [PMID: 36611337 PMCID: PMC9818874 DOI: 10.3390/diagnostics13010045] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Revised: 12/16/2022] [Accepted: 12/17/2022] [Indexed: 12/28/2022] Open
Abstract
Several studies have demonstrated the value of artificial intelligence (AI) applications in breast cancer diagnosis. Existing reviews of AI applications in breast cancer diagnosis include several comparative studies, but they lack systematization, and each study appears to be conducted uniquely. The purpose and contribution of this study are to offer elaborative knowledge on the applications of AI in the diagnosis of breast cancer through citation analysis, in order to categorize the main areas of specialization that attract the attention of the academic community, and through thematic analysis to identify the topics being researched in each category. In this study, a total of 17,900 studies addressing breast cancer and AI, published between 2012 and 2022, were obtained from these databases: IEEE, Embase: Excerpta Medica Database Guide-Ovid, PubMed, Springer, Web of Science, and Google Scholar. After applying inclusion and exclusion criteria to the search, 36 studies were identified. The vast majority of AI applications used classification models for the prediction of breast cancer. Accuracy (99%) was the most frequently reported performance metric, followed by specificity (98%) and area under the curve (0.95). Additionally, the Convolutional Neural Network (CNN) was the model of choice in several studies. This study shows that the quantity and caliber of studies that use AI applications in breast cancer diagnosis will continue to rise annually. As a result, AI-based applications are viewed as a supplement to doctors' clinical reasoning, with the ultimate goal of providing quality healthcare that is both affordable and accessible to everyone worldwide.
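For reference, the area under the curve reported in such studies equals the probability that the classifier scores a random positive case above a random negative one, which makes a pair-counting sketch easy (the toy scores and labels below are invented):

```python
def auc(scores, labels):
    """AUC as the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half a win (the Mann-Whitney formulation)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking gives 1.0; a ranking where half the positive-negative pairs are inverted gives 0.5, the same as chance.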
|
28
|
Zhao Y, Zhang J, Hu D, Qu H, Tian Y, Cui X. Application of Deep Learning in Histopathology Images of Breast Cancer: A Review. MICROMACHINES 2022; 13:2197. [PMID: 36557496 PMCID: PMC9781697 DOI: 10.3390/mi13122197] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 12/04/2022] [Accepted: 12/09/2022] [Indexed: 06/17/2023]
Abstract
With the development of artificial intelligence technology and computer hardware functions, deep learning algorithms have become a powerful auxiliary tool for medical image analysis. This study was an attempt to use statistical methods to analyze studies related to the detection, segmentation, and classification of breast cancer in pathological images. After an analysis of 107 articles on the application of deep learning to pathological images of breast cancer, this study is divided into three directions based on the types of results they report: detection, segmentation, and classification. We introduced and analyzed models that performed well in these three directions and summarized the related work from recent years. Based on the results obtained, the significant ability of deep learning in the application of breast cancer pathological images can be recognized. Furthermore, in the classification and detection of pathological images of breast cancer, the accuracy of deep learning algorithms has surpassed that of pathologists in certain circumstances. Our study provides a comprehensive review of the development of breast cancer pathological imaging-related research and provides reliable recommendations for the structure of deep learning network models in different application scenarios.
Affiliation(s)
- Yue Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
- Jie Zhang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Dayu Hu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Hui Qu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Ye Tian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Xiaoyu Cui
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
|
29
|
Brancati N, Anniciello AM, Pati P, Riccio D, Scognamiglio G, Jaume G, De Pietro G, Di Bonito M, Foncubierta A, Botti G, Gabrani M, Feroce F, Frucci M. BRACS: A Dataset for BReAst Carcinoma Subtyping in H&E Histology Images. Database (Oxford) 2022; 2022:6762252. [PMID: 36251776 PMCID: PMC9575967 DOI: 10.1093/database/baac093] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Revised: 09/16/2022] [Accepted: 10/01/2022] [Indexed: 11/11/2022]
Abstract
Breast cancer is the most commonly diagnosed cancer and accounts for the highest number of cancer deaths among women. Advances in diagnostic activities combined with large-scale screening policies have significantly lowered mortality rates for breast cancer patients. However, the manual inspection of tissue slides by pathologists is cumbersome, time-consuming, and subject to significant inter- and intra-observer variability. Recently, the advent of whole-slide scanning systems has empowered the rapid digitization of pathology slides and enabled the development of Artificial Intelligence (AI)-assisted digital workflows. However, AI techniques, especially Deep Learning, require a large amount of high-quality annotated data to learn from. Constructing such task-specific datasets poses several challenges, such as data-acquisition constraints, time-consuming and expensive annotation, and anonymization of patient information. In this paper, we introduce the BReAst Carcinoma Subtyping (BRACS) dataset, a large cohort of annotated Hematoxylin and Eosin (H&E)-stained images to advance AI development in the automatic characterization of breast lesions. BRACS contains 547 Whole-Slide Images (WSIs) and 4539 Regions Of Interest (ROIs) extracted from the WSIs. Each WSI and its ROIs are annotated by the consensus of three board-certified pathologists into different lesion categories. Specifically, BRACS includes three lesion types, i.e., benign, malignant, and atypical, which are further subtyped into seven categories. It is, to the best of our knowledge, the largest annotated dataset for breast cancer subtyping at both the WSI and ROI levels. Furthermore, by including the understudied atypical lesions, BRACS offers a unique opportunity for leveraging AI to better understand their characteristics. We encourage AI practitioners to develop and evaluate novel algorithms on the BRACS dataset to further breast cancer diagnosis and patient care.
Database URL: https://www.bracs.icar.cnr.it/
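The three-type/seven-subtype hierarchy described above lends itself to a simple coarse-label mapping; a sketch follows. The subtype spellings are my assumption of the BRACS naming, so treat them as placeholders and check the database URL for the canonical labels.

```python
# Hypothetical spellings of the seven BRACS subtypes, grouped into the
# three coarse lesion types named in the abstract.
SUBTYPE_TO_TYPE = {
    "Normal": "benign",
    "Pathological Benign": "benign",
    "Usual Ductal Hyperplasia": "benign",
    "Flat Epithelial Atypia": "atypical",
    "Atypical Ductal Hyperplasia": "atypical",
    "Ductal Carcinoma In Situ": "malignant",
    "Invasive Carcinoma": "malignant",
}

def coarse_label(subtype: str) -> str:
    """Collapse a fine-grained ROI annotation to its lesion type."""
    return SUBTYPE_TO_TYPE[subtype]
```

A mapping like this is how a 7-class model's predictions can be re-scored against the easier 3-class task without re-annotating anything.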
|
30
|
Mukhlif AA, Al-Khateeb B, Mohammed MA. An extensive review of state-of-the-art transfer learning techniques used in medical imaging: Open issues and challenges. JOURNAL OF INTELLIGENT SYSTEMS 2022. [DOI: 10.1515/jisys-2022-0198] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Deep learning techniques, which rely on convolutional neural networks, have shown excellent results in a variety of areas, including image processing and interpretation. However, as the depth of these networks grows, so does the demand for the large amount of labeled data required to train them. The medical field in particular suffers from a lack of images, because the procedure for obtaining labeled medical images in the healthcare field is difficult, expensive, and requires specialized expertise to add labels to images; moreover, the process may be error-prone and time-consuming. Current research has revealed transfer learning as a viable solution to this problem. Transfer learning allows us to transfer knowledge gained from a previous task to improve and tackle a new problem. This study aims to conduct a comprehensive survey of recent studies that dealt with solving this problem, and of the most important metrics used to evaluate these methods. In addition, this study identifies problems in transfer learning techniques and highlights problems of medical datasets and potential problems that can be addressed in future research. According to our review, many researchers use models pre-trained on the ImageNet dataset (VGG16, ResNet, Inception v3) in many applications, such as skin cancer, breast cancer, and diabetic retinopathy classification tasks. These techniques require further investigation because the models were trained on natural, non-medical images. In addition, many researchers use data augmentation techniques to expand their datasets and avoid overfitting; however, not enough studies have shown the effect on performance with and without data augmentation. Accuracy, recall, precision, F1 score, receiver operating characteristic curve, and area under the curve (AUC) were the most widely used measures in these studies.
Furthermore, we identified problems in the datasets for melanoma and breast cancer and suggested corresponding solutions.
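The data-augmentation point is worth making concrete: label-preserving geometric transforms expand a small medical dataset without any new annotation. A minimal numpy sketch follows (flips and right-angle rotations only; real pipelines add crops, elastic deformations, color jitter, and so on):

```python
import numpy as np

def augment(img, rng):
    """Random dihedral augmentation: optional horizontal flip plus a
    0/90/180/270 degree rotation. Pixel values are untouched, so the
    class label is preserved."""
    if rng.random() < 0.5:
        img = np.flip(img, axis=1)
    return np.rot90(img, k=int(rng.integers(0, 4)))

rng = np.random.default_rng(0)
img = np.arange(16).reshape(4, 4)    # stand-in for a grayscale patch
out = augment(img, rng)
```

Every output is a rearrangement of the same pixels, which is exactly why such transforms are safe for class labels (though not for tasks where orientation itself is diagnostic).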
Affiliation(s)
- Abdulrahman Abbas Mukhlif
- Computer Science Department, College of Computer Science and Information Technology, University of Anbar, 31001, Ramadi, Anbar, Iraq
- Belal Al-Khateeb
- Computer Science Department, College of Computer Science and Information Technology, University of Anbar, 31001, Ramadi, Anbar, Iraq
- Mazin Abed Mohammed
- Computer Science Department, College of Computer Science and Information Technology, University of Anbar, 31001, Ramadi, Anbar, Iraq
|
31
|
Deep and dense convolutional neural network for multi category classification of magnification specific and magnification independent breast cancer histopathological images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103935] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
32
|
Aljuaid H, Alturki N, Alsubaie N, Cavallaro L, Liotta A. Computer-aided diagnosis for breast cancer classification using deep neural networks and transfer learning. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 223:106951. [PMID: 35767911 DOI: 10.1016/j.cmpb.2022.106951] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 05/25/2022] [Accepted: 06/09/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Many developed and developing countries worldwide suffer from cancer-related fatal diseases. In particular, the rate of breast cancer in females increases daily, partly due to a lack of awareness and to cases going undiagnosed at early stages. Proper first-line breast cancer treatment can only be provided by adequately detecting and classifying cancer during the very early stages of its development. The use of medical image analysis techniques and computer-aided diagnosis may help accelerate and automate both cancer detection and classification, while also training and aiding less experienced physicians. For large datasets of medical images, convolutional neural networks play a significant role in detecting and classifying cancer effectively. METHODS This article presents a novel computer-aided diagnosis method for breast cancer classification (both binary and multi-class), using a combination of deep neural networks (ResNet-18, ShuffleNet, and Inception-V3) and transfer learning on the publicly available BreakHis dataset. RESULTS AND CONCLUSIONS Our proposed method achieves the best average accuracy for binary classification of benign versus malignant cases of 99.7%, 97.66%, and 96.94% for ResNet, Inception-V3, and ShuffleNet, respectively. Average accuracies for multi-class classification were 97.81%, 96.07%, and 95.79% for ResNet, Inception-V3, and ShuffleNet, respectively.
Affiliation(s)
- Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Nazik Alturki
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Najah Alsubaie
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Lucia Cavallaro
- Faculty of Computer Science, Free University of Bozen-Bolzano, Piazza Domenicani, 3, Bolzano 39100, Italy
- Antonio Liotta
- Faculty of Computer Science, Free University of Bozen-Bolzano, Piazza Domenicani, 3, Bolzano 39100, Italy
|
33
|
Guan X, Lu N, Zhang J. Evaluation of Epidermal Growth Factor Receptor 2 Status in Gastric Cancer by CT-Based Deep Learning Radiomics Nomogram. Front Oncol 2022; 12:905203. [PMID: 35898877 PMCID: PMC9309372 DOI: 10.3389/fonc.2022.905203] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2022] [Accepted: 06/21/2022] [Indexed: 11/24/2022] Open
Abstract
Purpose To explore the role of computed tomography (CT)-based deep learning and radiomics in preoperative evaluation of epidermal growth factor receptor 2 (HER2) status in gastric cancer. Materials and methods The clinical data on gastric cancer patients were evaluated retrospectively, and 357 patients were chosen for this study (training cohort: 249; test cohort: 108). The preprocessed enhanced CT arterial phase images were selected for lesion segmentation, radiomics and deep learning feature extraction. We integrated deep learning features and radiomic features (Inte). Four methods were used for feature selection. We constructed models with support vector machine (SVM) or random forest (RF), respectively. The area under the receiver operating characteristics curve (AUC) was used to assess the performance of these models. We also constructed a nomogram including Inte-feature scores and clinical factors. Results The radiomics-SVM model showed good classification performance (AUC, training cohort: 0.8069; test cohort: 0.7869). The AUC of the ResNet50-SVM model and the Inte-SVM model in the test cohort were 0.8955 and 0.9055. The nomogram also showed excellent discrimination achieving greater AUC (training cohort, 0.9207; test cohort, 0.9224). Conclusion CT-based deep learning radiomics nomogram can accurately and effectively assess the HER2 status in patients with gastric cancer before surgery and it is expected to assist physicians in clinical decision-making and facilitates individualized treatment planning.
Affiliation(s)
- Xiao Guan
- Department of General Surgery, The Second Affiliated Hospital of Nanjing Medical University, Nanjing Medical University, Nanjing, China
- Na Lu
- Department of General Surgery, The Second Affiliated Hospital of Nanjing Medical University, Nanjing Medical University, Nanjing, China
|
34
|
Lu SY, Wang SH, Zhang YD. SAFNet: A deep spatial attention network with classifier fusion for breast cancer detection. Comput Biol Med 2022; 148:105812. [PMID: 35834967 DOI: 10.1016/j.compbiomed.2022.105812] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2022] [Revised: 06/15/2022] [Accepted: 07/03/2022] [Indexed: 11/28/2022]
Abstract
Breast cancer is a top dangerous killer for women. An accurate early diagnosis of breast cancer is the primary step for treatment. A novel breast cancer detection model called SAFNet is proposed based on ultrasound images and deep learning. We employ a pre-trained ResNet-18 embedded with the spatial attention mechanism as the backbone model. Three randomized network models are trained for prediction in the SAFNet, which are fused by majority voting to produce more accurate results. A public ultrasound image dataset is utilized to evaluate the generalization ability of our SAFNet using 5-fold cross-validation. The simulation experiments reveal that the SAFNet can produce higher classification results compared with four existing breast cancer classification methods. Therefore, our SAFNet is an accurate tool to detect breast cancer that can be applied in clinical diagnosis.
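The fusion step described above, three randomized networks combined by majority voting, reduces to a few lines; the class names and per-model outputs below are made up for illustration:

```python
from collections import Counter

def majority_vote(votes):
    """Return the most common prediction among the models for one image."""
    return Counter(votes).most_common(1)[0][0]

model_outputs = [                      # one row per trained model
    ["benign",    "malignant", "malignant"],
    ["benign",    "benign",    "malignant"],
    ["malignant", "malignant", "malignant"],
]
fused = [majority_vote(col) for col in zip(*model_outputs)]
# fused == ["benign", "malignant", "malignant"]
```

With an odd number of binary classifiers there is never a tie, which is one practical reason papers like this train three models rather than two.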
Affiliation(s)
- Si-Yuan Lu
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK.
- Shui-Hua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
|
35
|
Fu X, Bates PA. Application of deep learning methods: From molecular modelling to patient classification. Exp Cell Res 2022; 418:113278. [PMID: 35810775 DOI: 10.1016/j.yexcr.2022.113278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Revised: 06/16/2022] [Accepted: 07/05/2022] [Indexed: 11/28/2022]
Abstract
We are now well into the information-driven age, with complex, heterogeneous datasets in the biological sciences continuing to grow at a rapid pace, and efforts to distill such datasets to find new governing principles are underway. Leading the surge are new and exciting algorithmic developments in computer simulation and machine learning, most notably, for the latter, those centred on deep learning. However, practical applications of cell-centric computations within the biological sciences, even when carefully benchmarked against existing experimental datasets, remain challenging. Here we discuss the application of deep learning methodologies to support our understanding of cell functionality and to aid patient classification. Although comprehensive end-to-end deep learning approaches that utilise knowledge of the cell and its molecular components to aid human disease classification, important for opening the door to more effective molecular and cell-based therapies, are yet to be implemented, we illustrate that many deep learning applications have been developed to tackle components of such an ambitious pipeline. We end our discussion on what the future may hold, especially how an integrated framework of computer simulations and deep learning, in conjunction with wet-bench experimentation, could reveal the governing principles underlying cell functionality within the tissue environments in which cells operate.
Affiliation(s)
- Xiao Fu
- Biomolecular Modelling Laboratory, The Francis Crick Institute, 1 Midland Rd, London, NW1 1AT, UK.
- Paul A Bates
- Biomolecular Modelling Laboratory, The Francis Crick Institute, 1 Midland Rd, London, NW1 1AT, UK
|
36
|
Liu X, Yuan P, Li R, Zhang D, An J, Ju J, Liu C, Ren F, Hou R, Li Y, Yang J. Predicting breast cancer recurrence and metastasis risk by integrating color and texture features of histopathological images and machine learning technologies. Comput Biol Med 2022; 146:105569. [DOI: 10.1016/j.compbiomed.2022.105569] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2022] [Revised: 04/24/2022] [Accepted: 04/25/2022] [Indexed: 12/11/2022]
|
37
|
Ai Z, Huang X, Feng J, Wang H, Tao Y, Zeng F, Lu Y. FN-OCT: Disease Detection Algorithm for Retinal Optical Coherence Tomography Based on a Fusion Network. Front Neuroinform 2022; 16:876927. [PMID: 35784186 PMCID: PMC9243322 DOI: 10.3389/fninf.2022.876927] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Accepted: 05/04/2022] [Indexed: 01/31/2023] Open
Abstract
Optical coherence tomography (OCT) is a new type of tomography that has experienced rapid development and potential in recent years. It is playing an increasingly important role in retinopathy diagnoses. At present, due to the uneven distributions of medical resources in various regions, the uneven proficiency levels of doctors in grassroots and remote areas, and the development needs of rare disease diagnosis and precision medicine, artificial intelligence technology based on deep learning can provide fast, accurate, and effective solutions for the recognition and diagnosis of retinal OCT images. To prevent vision damage and blindness caused by the delayed discovery of retinopathy, a fusion network (FN)-based retinal OCT classification algorithm (FN-OCT) is proposed in this paper to improve upon the adaptability and accuracy of traditional classification algorithms. The InceptionV3, Inception-ResNet, and Xception deep learning algorithms are used as base classifiers, a convolutional block attention mechanism (CBAM) is added after each base classifier, and three different fusion strategies are used to merge the prediction results of the base classifiers to output the final prediction results (choroidal neovascularization (CNV), diabetic macular oedema (DME), drusen, normal). The results show that in a classification problem involving the UCSD common retinal OCT dataset (108,312 OCT images from 4,686 patients), compared with that of the InceptionV3 network model, the prediction accuracy of FN-OCT is improved by 5.3% (accuracy = 98.7%, area under the curve (AUC) = 99.1%). The predictive accuracy and AUC achieved on an external dataset for the classification of retinal OCT diseases are 92 and 94.5%, respectively, and gradient-weighted class activation mapping (Grad-CAM) is used as a visualization tool to verify the effectiveness of the proposed FNs. 
This finding indicates that the developed fusion algorithm can significantly improve the performance of classifiers while providing a powerful tool and theoretical support for assisting with the diagnosis of retinal OCT.
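One of the fusion strategies such papers commonly use, averaging the base classifiers' class probabilities (soft voting), looks like this in outline. The probability vectors below are invented; the variable names reflect the paper's base classifiers but this is the generic mechanism, not the authors' exact configuration.

```python
import numpy as np

CLASSES = ["CNV", "DME", "drusen", "normal"]

# Hypothetical per-model class probabilities for a single OCT image.
p_inception = np.array([0.70, 0.10, 0.10, 0.10])
p_resnet    = np.array([0.40, 0.30, 0.20, 0.10])
p_xception  = np.array([0.20, 0.50, 0.20, 0.10])

fused = (p_inception + p_resnet + p_xception) / 3.0   # soft-voting fusion
prediction = CLASSES[int(np.argmax(fused))]
```

Unlike a hard majority vote, soft voting lets a confident model outweigh two uncertain ones, which is often why averaged-probability fusion edges out the individual base classifiers.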
Affiliation(s)
- Zhuang Ai
- Department of Research and Development, Sinopharm Genomics Technology Co., Ltd., Jiangsu, China
- Xuan Huang
- Department of Ophthalmology, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Medical Research Center, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Jing Feng
- Department of Ophthalmology, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Hui Wang
- Department of Ophthalmology, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Yong Tao
- Department of Ophthalmology, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Fanxin Zeng
- Department of Clinical Research Center, Dazhou Central Hospital, Sichuan, China
- Yaping Lu
- Department of Research and Development, Sinopharm Genomics Technology Co., Ltd., Jiangsu, China
|
38
|
Image Moment-Based Features for Mass Detection in Breast US Images via Machine Learning and Neural Network Classification Models. INVENTIONS 2022. [DOI: 10.3390/inventions7020042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Differentiating between malignant and benign masses using machine learning in the recognition of breast ultrasound (BUS) images is a technique with good accuracy and precision, which helps doctors make a correct diagnosis. The method proposed in this paper integrates Hu’s moments in the analysis of the breast tumor. The extracted features feed a k-nearest neighbor (k-NN) classifier and a radial basis function neural network (RBFNN) to classify breast tumors into benign and malignant. The raw images and the tumor masks provided as ground-truth images belong to the public digital BUS images database. Certain metrics such as accuracy, sensitivity, precision, and F1-score were used to evaluate the segmentation results and to select Hu’s moments showing the best capacity to discriminate between malignant and benign breast tissues in BUS images. Regarding the selection of Hu’s moments, the k-NN classifier reached 85% accuracy for moment M1 and 80% for moment M5 whilst RBFNN reached an accuracy of 76% for M1. The proposed method might be used to assist the clinical diagnosis of breast cancer identification by providing a good combination between segmentation and Hu’s moments.
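Hu's moments referenced above are computed directly from image moments: normalized central moments η_pq yield rotation-invariant combinations, of which the paper's M1 is η20 + η02. A compact numpy version of the first two invariants (shown as a sketch; production code would use an existing implementation such as OpenCV's):

```python
import numpy as np

def hu_m1_m2(img):
    """First two Hu invariant moments of a 2-D grayscale image."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def eta(p, q):  # normalized central moment
        mu = ((x - xc) ** p * (y - yc) ** q * img).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    m1 = eta(2, 0) + eta(0, 2)                            # Hu M1
    m2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2  # Hu M2
    return m1, m2
```

The invariance to rotation and scale is what makes these moments usable as shape descriptors for segmented tumor masks regardless of probe orientation.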
|
39
|
Hao Y, Zhang L, Qiao S, Bai Y, Cheng R, Xue H, Hou Y, Zhang W, Zhang G. Breast cancer histopathological images classification based on deep semantic features and gray level co-occurrence matrix. PLoS One 2022; 17:e0267955. [PMID: 35511877 PMCID: PMC9070886 DOI: 10.1371/journal.pone.0267955] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2021] [Accepted: 04/19/2022] [Indexed: 12/19/2022] Open
Abstract
Breast cancer is regarded as the leading killer of women today. The early diagnosis and treatment of breast cancer is the key to improving the survival rate of patients. A method of breast cancer histopathological images recognition based on deep semantic features and gray level co-occurrence matrix (GLCM) features is proposed in this paper. Taking the pre-trained DenseNet201 as the basic model, part of the convolutional layer features of the last dense block are extracted as the deep semantic features, which are then fused with the three-channel GLCM features, and the support vector machine (SVM) is used for classification. For the BreaKHis dataset, we explore the classification problems of magnification specific binary (MSB) classification and magnification independent binary (MIB) classification, and compared the performance with the seven baseline models of AlexNet, VGG16, ResNet50, GoogLeNet, DenseNet201, SqueezeNet and Inception-ResNet-V2. The experimental results show that the method proposed in this paper performs better than the pre-trained baseline models in MSB and MIB classification problems. The highest image-level recognition accuracy of 40×, 100×, 200×, 400× is 96.75%, 95.21%, 96.57%, and 93.15%, respectively. And the highest patient-level recognition accuracy of the four magnifications is 96.33%, 95.26%, 96.09%, and 92.99%, respectively. The image-level and patient-level recognition accuracy for MIB classification is 95.56% and 95.54%, respectively. In addition, the recognition accuracy of the method in this paper is comparable to some state-of-the-art methods.
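The GLCM features fused with the deep semantic features above are simple to construct: count how often pairs of gray levels co-occur at a fixed pixel offset, then derive statistics from the normalized counts. A minimal numpy sketch with the Haralick contrast statistic (offset and number of gray levels chosen arbitrarily here):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray level co-occurrence counts at non-negative offset (dy, dx)."""
    g = np.zeros((levels, levels), dtype=int)
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            g[img[i, j], img[i + dy, j + dx]] += 1
    return g

def contrast(g):
    """Haralick contrast: (i - j)^2 weighted by co-occurrence probability."""
    p = g / g.sum()
    i, j = np.indices(g.shape)
    return ((i - j) ** 2 * p).sum()

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [0, 1, 3]])
G = glcm(img)
```

Smooth textures concentrate mass near the GLCM diagonal (low contrast), while noisy or heterogeneous tissue spreads it out, which is the intuition behind fusing GLCM with learned features.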
Affiliation(s)
- Yan Hao
- Department of Mathematics, Taiyuan Normal University, Taiyuan, China
- School of Information and Communication Engineering, North University of China, Taiyuan, China
- Li Zhang
- Department of Mathematics, School of Science, North University of China, Taiyuan, China
- Shichang Qiao
- Department of Mathematics, School of Science, North University of China, Taiyuan, China
- Yanping Bai
- Department of Mathematics, School of Science, North University of China, Taiyuan, China
- Rong Cheng
- Department of Mathematics, School of Science, North University of China, Taiyuan, China
- Hongxin Xue
- Data Science and Technology, North University of China, Taiyuan, China
- Yuchao Hou
- School of Information and Communication Engineering, North University of China, Taiyuan, China
- Wendong Zhang
- School of Instrument and Electronics, Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
- Guojun Zhang
- School of Instrument and Electronics, Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
|
40
|
Multi-Classification of Breast Cancer Lesions in Histopathological Images Using DEEP_Pachi: Multiple Self-Attention Head. Diagnostics (Basel) 2022; 12:diagnostics12051152. [PMID: 35626307 PMCID: PMC9139754 DOI: 10.3390/diagnostics12051152] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 04/23/2022] [Accepted: 04/28/2022] [Indexed: 11/16/2022] Open
Abstract
Introduction and Background: Despite rapid developments in the medical field, histological diagnosis is still regarded as the benchmark in cancer diagnosis. However, extracting the image features used to determine the severity of cancer at various magnifications is challenging, since manual procedures are biased, time-consuming, labor-intensive, and error-prone. Current state-of-the-art deep learning approaches for breast histopathology image classification take features from entire images (generic features). They are therefore likely to overlook essential image features in favor of unnecessary ones, resulting in incorrect diagnoses of breast histopathology images and, ultimately, higher mortality. Methods: This discrepancy prompted us to develop DEEP_Pachi for classifying breast histopathology images at various magnifications. The proposed DEEP_Pachi collects the global and regional features that are essential for effective breast histopathology image classification. The model backbone is an ensemble of the DenseNet201 and VGG16 architectures. The ensemble extracts global features (generic image information), whereas DEEP_Pachi's multiple self-attention heads extract spatial information (regions of interest). The proposed model was evaluated on two publicly available datasets: BreakHis and the ICIAR 2018 Challenge dataset. Result: A detailed evaluation of the proposed model's accuracy, sensitivity, precision, specificity, and F1-score revealed the usefulness of both the backbone and the DEEP_Pachi model for image classification. The suggested technique outperformed state-of-the-art classifiers, achieving an accuracy of 1.0 for the benign class and 0.99 for the malignant class across all magnifications of the BreakHis dataset, and an accuracy of 1.0 on the ICIAR 2018 Challenge dataset.
Conclusion: The findings were significantly resilient and show that the suggested system could assist experts at large medical institutions, supporting early breast cancer diagnosis and a reduction in the death rate.
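The "multiple self-attention head" mechanism named in this entry's title — re-weighting region embeddings by their pairwise relevance before classification — can be illustrated with a bare numpy sketch. The dimensions are hypothetical and random matrices stand in for learned projection weights; this is not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, heads=4, rng=None):
    """X: (n_regions, d) region embeddings from a CNN backbone.
    Returns (n_regions, d) features re-weighted by region relevance."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = X.shape
    dh = d // heads                          # per-head dimension
    outputs = []
    for _ in range(heads):
        # Random projections stand in for learned Wq, Wk, Wv matrices.
        Wq, Wk, Wv = (rng.standard_normal((d, dh)) / np.sqrt(d)
                      for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        A = softmax(Q @ K.T / np.sqrt(dh))   # (n, n) attention weights
        outputs.append(A @ V)                # attend over all regions
    return np.hstack(outputs)

# Toy usage: 16 candidate regions of interest, 64-dimensional embeddings.
regions = np.random.default_rng(1).standard_normal((16, 64))
attended = multi_head_self_attention(regions, heads=4)
```

Each head lets a region aggregate information from every other region, which is how attention surfaces regions of interest that generic global features can miss.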
|
41
|
Kumar N, Sharma M, Singh VP, Madan C, Mehandia S. An empirical study of handcrafted and dense feature extraction techniques for lung and colon cancer classification from histopathological images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103596] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
|
42
|
Zhang X, Zhang Y, Zhang G, Qiu X, Tan W, Yin X, Liao L. Deep Learning With Radiomics for Disease Diagnosis and Treatment: Challenges and Potential. Front Oncol 2022; 12:773840. [PMID: 35251962 PMCID: PMC8891653 DOI: 10.3389/fonc.2022.773840] [Citation(s) in RCA: 34] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Accepted: 01/17/2022] [Indexed: 12/12/2022] Open
Abstract
The high-throughput extraction of quantitative imaging features from medical images for radiomic analysis, i.e., radiomics in a broad sense, is a rapidly developing and emerging research field that has been attracting increasing interest, particularly in multimodality and multi-omics studies. In this context, the quantitative analysis of multidimensional data plays an essential role in assessing the spatio-temporal characteristics of different tissues and organs and their microenvironment. Herein, recent developments in this method are reviewed, including manually defined features, data acquisition and preprocessing, lesion segmentation, feature extraction, feature selection and dimension reduction, statistical analysis, and model construction. In addition, deep learning-based techniques for automatic segmentation and radiomic analysis are analyzed to address limitations such as rigid workflows, manual or semi-automatic lesion annotation, inadequate feature criteria, and limited multicenter validation. Furthermore, a summary of current state-of-the-art applications of this technology in disease diagnosis, treatment response, and prognosis prediction is presented from the perspective of radiology images, multimodality images, histopathology images, and three-dimensional dose distribution data, particularly in oncology. The potential and value of radiomics in diagnostic and therapeutic strategies are further analyzed and, for the first time, the advances and challenges associated with dosiomics in radiotherapy are summarized, highlighting the latest progress in radiomics.
Finally, a robust framework for radiomic analysis is presented, and challenges and recommendations for future development are discussed, including but not limited to the factors that affect model stability (medical big data, multitype data, and expert knowledge in medicine), the limitations of data-driven processes (reproducibility and interpretability of studies, differing treatment alternatives across institutions, and the need for prospective research and clinical trials), and thoughts on future directions (the capability to achieve clinical application and an open platform for radiomics analysis).
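The radiomics workflow this review describes — preprocessing, feature selection, dimension reduction, then model construction — maps naturally onto a scikit-learn pipeline. A minimal sketch with synthetic stand-in radiomic features; the feature counts and estimator choices are illustrative assumptions, not recommendations from the review.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 100))    # 60 lesions x 100 extracted features
y = rng.integers(0, 2, 60)            # binary outcome (e.g., responder or not)

pipe = Pipeline([
    ("scale", StandardScaler()),                   # preprocessing
    ("select", SelectKBest(f_classif, k=30)),      # feature selection
    ("reduce", PCA(n_components=10)),              # dimension reduction
    ("model", LogisticRegression(max_iter=1000)),  # model construction
])
aucs = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
```

Fitting each stage inside cross-validation, as the `Pipeline` does here, avoids the feature-selection leakage that undermines the reproducibility this review flags as a core challenge.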
Affiliation(s)
- Xingping Zhang: Institute of Advanced Cyberspace Technology, Guangzhou University, Guangzhou, China; Department of New Networks, Peng Cheng Laboratory, Shenzhen, China
- Yanchun Zhang: Institute of Advanced Cyberspace Technology, Guangzhou University, Guangzhou, China; Department of New Networks, Peng Cheng Laboratory, Shenzhen, China
- Guijuan Zhang: Department of Respiratory Medicine, First Affiliated Hospital of Gannan Medical University, Ganzhou, China
- Xingting Qiu: Department of Radiology, First Affiliated Hospital of Gannan Medical University, Ganzhou, China
- Wenjun Tan: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- Xiaoxia Yin: Institute of Advanced Cyberspace Technology, Guangzhou University, Guangzhou, China
- Liefa Liao: School of Information Engineering, Jiangxi University of Science and Technology, Ganzhou, China
|
43
|
Subasree S, Sakthivel N, Tripathi K, Agarwal D, Tyagi AK. Combining the advantages of radiomic features based feature extraction and hyper parameters tuned RERNN using LOA for breast cancer classification. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103354] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
44
|
A deep learning model for breast ductal carcinoma in situ classification in whole slide images. Virchows Arch 2022; 480:1009-1022. [PMID: 35076741 DOI: 10.1007/s00428-021-03241-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2021] [Revised: 11/12/2021] [Accepted: 11/20/2021] [Indexed: 02/06/2023]
Abstract
The pathological differential diagnosis between breast ductal carcinoma in situ (DCIS) and invasive ductal carcinoma (IDC) is of pivotal importance for determining optimum cancer treatment(s) and clinical outcomes. Since conventional diagnosis by pathologists using microscopes is limited in terms of human resources, it is necessary to develop new techniques that can rapidly and accurately diagnose large numbers of histopathological specimens. Computational pathology tools that can assist pathologists in detecting and classifying DCIS and IDC from whole slide images (WSIs) would be of great benefit for routine pathological diagnosis. In this paper, we trained deep learning models capable of classifying biopsy and surgical histopathological WSIs into DCIS, IDC, and benign. We evaluated the models on two independent test sets (n = 1382, n = 548), achieving ROC areas under the curve (AUCs) of up to 0.960 and 0.977 for DCIS and IDC, respectively.
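Reporting per-class AUCs for a three-way DCIS/IDC/benign classifier, as this entry does, amounts to computing one-vs-rest ROC AUCs over slide-level class probabilities. A small sketch with synthetic predictions; the labels, sample size, and probability construction are illustrative, not the paper's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

classes = ["benign", "DCIS", "IDC"]
rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, 200)                 # slide-level ground truth
# Synthetic logits, biased toward the true class so AUCs are non-trivial.
logits = rng.standard_normal((200, 3)) + 2.0 * np.eye(3)[y_true]
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# One-vs-rest AUC per class: "is this slide DCIS?" etc.
aucs = {name: roc_auc_score((y_true == k).astype(int), probs[:, k])
        for k, name in enumerate(classes)}
```

Because each class is scored against the rest independently, the per-class AUCs remain meaningful even when the three classes are imbalanced, which is typical for WSI test sets.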
|
45
|
Shah SM, Khan RA, Arif S, Sajid U. Artificial intelligence for breast cancer analysis: Trends & directions. Comput Biol Med 2022; 142:105221. [PMID: 35016100 DOI: 10.1016/j.compbiomed.2022.105221] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2021] [Revised: 01/03/2022] [Accepted: 01/03/2022] [Indexed: 12/18/2022]
Abstract
Breast cancer is one of the leading causes of death among women. Early detection of breast cancer can significantly improve the lives of millions of women across the globe. Given the importance of finding a solution/framework for early detection and diagnosis, many AI researchers have recently focused on automating this task. Other reasons for the surge in research activity in this direction are the advent of robust AI algorithms (deep learning), the availability of hardware that can run/train those robust and complex algorithms, and the accessibility of datasets large enough to train them. The imaging modalities that researchers have exploited to automate breast cancer detection are mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. This article analyzes these imaging modalities, presents their strengths and limitations, and lists resources from which their datasets can be accessed for research purposes. It then summarizes AI and computer vision based state-of-the-art methods proposed in the last decade to detect breast cancer using various imaging modalities. Primarily, we have focused on reviewing frameworks that report results on mammograms, as this is the most widely used breast imaging modality and serves as the first test that medical practitioners usually prescribe for the detection of breast cancer. Another reason for this focus is the availability of labelled mammogram datasets. Dataset availability is one of the most important aspects of developing AI-based frameworks, as such algorithms are data hungry and the quality of a dataset generally affects the performance of AI-based algorithms. In a nutshell, this article will act as a primary resource for the research community working in the field of automated breast imaging analysis.
Affiliation(s)
- Shahid Munir Shah: Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
- Rizwan Ahmed Khan: Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
- Sheeraz Arif: Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
- Unaiza Sajid: Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
|
46
|
Luca AR, Ursuleanu TF, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Grigorovici A. Impact of quality, type and volume of data used by deep learning models in the analysis of medical images. INFORMATICS IN MEDICINE UNLOCKED 2022. [DOI: 10.1016/j.imu.2022.100911] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022] Open
|
47
|
Rashmi R, Prasad K, Udupa CBK. Breast histopathological image analysis using image processing techniques for diagnostic purposes: A methodological review. J Med Syst 2021; 46:7. [PMID: 34860316 PMCID: PMC8642363 DOI: 10.1007/s10916-021-01786-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Accepted: 10/21/2021] [Indexed: 12/24/2022]
Abstract
Breast cancer in women is the second most common cancer worldwide. Early detection of breast cancer can reduce the risk to human life. Non-invasive techniques such as mammograms and ultrasound imaging are popularly used to detect the tumour. However, histopathological analysis is necessary to determine the malignancy of the tumour, as it analyses the image at the cellular level. Manual analysis of these slides is time-consuming, tedious, subjective, and susceptible to human error. Moreover, the interpretation of these images is at times inconsistent between laboratories. Hence, a computer-aided diagnostic system that can act as a decision support system is the need of the hour. Recent developments in computational power and memory capacity have also led to the application of computer tools and medical image processing techniques to process and analyze breast cancer histopathological images (BCHI). This review paper summarizes various traditional and deep learning-based methods developed to analyze BCHI. Initially, the characteristics of BCHI are discussed. A detailed discussion of the various potential regions of interest is then presented, which is crucial for the development of computer-aided diagnostic systems. We summarize recent trends and the choices made during the selection of medical image processing techniques. Finally, a detailed discussion of the various challenges involved in the analysis of BCHI is presented, along with the future scope.
Affiliation(s)
- R Rashmi: Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
- Keerthana Prasad: Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
|
48
|
R R, Prasad K, Udupa CBK. BCHisto-Net: Breast histopathological image classification by global and local feature aggregation. Artif Intell Med 2021; 121:102191. [PMID: 34763806 DOI: 10.1016/j.artmed.2021.102191] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Revised: 09/15/2021] [Accepted: 10/05/2021] [Indexed: 02/06/2023]
Abstract
Breast cancer among women is the second most common cancer worldwide. Non-invasive techniques such as mammograms and ultrasound imaging are used to detect the tumor. However, breast histopathological image analysis is inevitable for the detection of malignancy of the tumor. Manual analysis of breast histopathological images is subjective, tedious, laborious, and prone to human error. Recent developments in computational power and memory have made automation a popular choice for the analysis of these images. One of the key challenges of breast histopathological image classification at 100× magnification is to extract the features of the potential regions of interest to decide on the malignancy of the tumor. Current state-of-the-art CNN-based methods for breast histopathological image classification extract features from the entire image (global features) and thus may overlook the features of the potential regions of interest. This can lead to inaccurate diagnosis of breast histopathological images. This research gap has motivated us to propose BCHisto-Net to classify breast histopathological images at 100× magnification. The proposed BCHisto-Net extracts both global and local features required for accurate classification of breast histopathological images. The global features capture abstract image information, while the local features focus on potential regions of interest. Furthermore, a feature aggregation branch is proposed to combine these features for the classification of 100× images. The proposed method is quantitatively evaluated on a private dataset (KMC) and the publicly available BreakHis dataset. An extensive evaluation of the proposed model showed the effectiveness of the local and global features for the classification of these images. The proposed method achieved accuracies of 95% and 89% on the KMC and BreakHis datasets, respectively, outperforming state-of-the-art classifiers.
Affiliation(s)
- Rashmi R: Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
- Keerthana Prasad: Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
- Chethana Babu K Udupa: Department of Pathology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, India
|
49
|
Deep Learning in Cancer Diagnosis and Prognosis Prediction: A Minireview on Challenges, Recent Trends, and Future Directions. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:9025470. [PMID: 34754327 PMCID: PMC8572604 DOI: 10.1155/2021/9025470] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Revised: 09/30/2021] [Accepted: 10/05/2021] [Indexed: 12/30/2022]
Abstract
Deep learning (DL) is a branch of machine learning and artificial intelligence that has been applied to many areas in different domains, such as health care and drug design. Cancer prognosis estimates the ultimate fate of a cancer subject and provides survival estimates for subjects. An accurate and timely diagnostic and prognostic decision will greatly benefit cancer subjects. DL has emerged as a technology of choice due to the availability of high computational resources. The main components of a standard computer-aided diagnosis (CAD) system are preprocessing; feature recognition, extraction, and selection; categorization; and performance assessment. The reduction of costs associated with sequencing systems offers a myriad of opportunities for building precise models for cancer diagnosis and prognosis prediction. In this survey, we provide a summary of current work in which DL has helped to determine the best models for cancer diagnosis and prognosis prediction tasks. DL is a generic approach requiring minimal data manipulation and achieves better results while working with enormous volumes of data. Our aims are to scrutinize the influence of DL systems using histopathology images, present a summary of state-of-the-art DL methods, and give directions to future researchers to refine the existing methods.
|
50
|
Kanavati F, Tsuneki M. Breast Invasive Ductal Carcinoma Classification on Whole Slide Images with Weakly-Supervised and Transfer Learning. Cancers (Basel) 2021; 13:cancers13215368. [PMID: 34771530 PMCID: PMC8582388 DOI: 10.3390/cancers13215368] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 10/22/2021] [Accepted: 10/23/2021] [Indexed: 12/12/2022] Open
Abstract
Simple Summary: In this study, we trained deep learning models using transfer learning and weakly-supervised learning for the classification of breast invasive ductal carcinoma (IDC) in whole slide images (WSIs). We evaluated the models on four test sets: one biopsy set (n = 522) and three surgical sets (n = 1129), achieving AUCs in the range 0.95 to 0.99. We also compared the trained models to existing models pre-trained on different organs for adenocarcinoma classification; these achieved lower AUCs, in the range 0.66 to 0.89, despite adenocarcinoma exhibiting some structural similarity to IDC. Fine-tuning on the breast IDC training set was therefore beneficial for improving performance. The results demonstrate the potential use of such models to aid pathologists in clinical practice.
Abstract: Invasive ductal carcinoma (IDC) is the most common form of breast cancer. For the non-operative diagnosis of breast carcinoma, core needle biopsy has been widely used in recent years for the evaluation of histopathological features, as it can provide a definitive diagnosis between IDC and benign lesions (e.g., fibroadenoma), and it is cost-effective. Due to its widespread use, it could potentially benefit from AI-based tools to aid pathologists in their diagnostic workflows. In this paper, we trained IDC whole slide image (WSI) classification models using transfer learning and weakly-supervised learning. We evaluated the models on a core needle biopsy test set (n = 522) as well as three surgical test sets (n = 1129), obtaining ROC AUCs in the range 0.95–0.98. The promising results demonstrate the potential of applying such models as diagnostic aids for pathologists in clinical practice.
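Weakly-supervised WSI classification of the kind described here typically scores many tiles per slide with a CNN and then pools the tile scores into one slide-level prediction using only slide-level labels for training. One common pooling rule, sketched below, averages the top-k most suspicious tiles; this is an illustrative aggregation, not necessarily the authors' exact method.

```python
import numpy as np

def slide_probability(tile_probs, top_k=3):
    """Slide-level IDC probability from per-tile probabilities:
    the mean of the top-k highest-scoring tiles. Only a slide-level
    label is needed to supervise this, hence 'weakly supervised'."""
    top = np.sort(np.asarray(tile_probs, dtype=float))[-top_k:]
    return float(top.mean())

# A slide with a few highly suspicious tiles scores high overall...
p_high = slide_probability([0.1, 0.05, 0.95, 0.9, 0.2, 0.92])
# ...while a uniformly bland slide stays low.
p_low = slide_probability([0.1, 0.05, 0.2, 0.15, 0.1])
```

Top-k pooling makes the slide score sensitive to small invasive foci, which a plain mean over thousands of mostly benign tiles would wash out.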
|